Coding Is Cardio. The 1:1 Is the Weight Room.
AI makes us move faster. But where do we build the muscle to think?
My OpenClaw agent — I call it UC’s Digital Double — asked me a question the other morning. Not a reminder, not a calendar alert — an actual question: “Companies are setting up token-usage leaderboards now — employees competing to burn the most AI tokens per week. Would you run one? Or does gamifying usage miss the point?”
I stared at it for a minute. Then I typed back: “I’d run it. And also do the opposite.”
Let me explain.
There’s a debate right now about whether companies are entering an era of “productivity theater” — employees burning tokens to look busy, automating things that didn’t need automating, racking up AI usage stats like it’s a video game leaderboard. Some people think we’re heading toward a world where the busiest-looking person is actually the least thoughtful one.
I don’t buy it. Not yet.
On my team, adoption is actually pretty good. But what I keep seeing is that we’re still on the steep part of the curve — more usage keeps bringing more productivity, and we haven’t hit diminishing returns yet. Every time someone discovers a new way to use AI in their workflow, it unlocks something real. Not fake output. Not “I generated 500 lines of boilerplate.” Actual features shipping in days that used to take weeks. The quality didn’t drop — if anything, it went up, because engineers spent the saved time on design and testing instead of typing.
So would I run a token leaderboard? Honestly? Yeah, probably. Not as a metric that matters on its own, but as a signal. Low usage might mean someone hasn’t found their workflow yet. High usage probably means they’re experimenting and finding value. Both are useful data points.
But here’s the thing. Encouraging more AI usage creates a new problem — one I didn’t anticipate until I was already living in it.
If your engineers are spending most of their coding time with an AI copilot — and they should be — then where does the thinking happen?
I wrote last week about cognitive offloading and the risk of letting AI do your reps. But saying “be careful” isn’t a strategy. You need a place where the thinking actually happens. A structure. A habit.
For me, that place turned out to be the most old-school, low-tech ritual there is: the 1:1.
I’ve started treating my 1:1s as the AI-free zone.
Not that I announce it or anything — nobody walks in and sees a “no robots allowed” sign on the door. But the conversations we’re having aren’t the kind you can autocomplete your way through. I lean into questions like:
“What do you think the fundamental problem is here — the business problem, not the technical one?”
“If we had to do this over, what would you do differently?”
“Between approach A and B, what’s your recommendation — and why?”
“Pitch me this idea. I’m going to poke holes in it.”
“Convince me we should prioritize X over Y.”
First-principles thinking. Product-first thinking. The kind of reasoning where there’s no right answer to look up — you have to actually work through it in your own head. It’s just two people reasoning out loud, poking holes, changing their minds.
It’s slow. It’s messy. It’s deeply inefficient.
That’s the point.
These conversations are where judgment gets built. Not the kind of judgment that says “use a hash map here” — AI handles that fine. The kind that says “this feature feels wrong for our users even though the data supports it” or “this technical debt is going to bite us in Q3 even though nobody’s complaining yet.” The messy, intuitive, human judgment that comes from wrestling with tradeoffs enough times that you start to develop instincts.
You can’t train that with a prompt. You build it through friction.
Here’s the mental model that’s been working for me: coding is cardio, and the 1:1 is the weight room.
AI-assisted coding makes you faster — great. Run more miles. Ship more features. Cover more ground. That’s the cardio. It’s good for you, and doing more of it is almost always better.
But the 1:1 is where you build the strength to handle the problems that don’t come with a prompt template. The ambiguous tradeoffs. The “should we even build this?” questions. The judgment calls that don’t have a Stack Overflow answer.
Every engineer needs both. Cardio without strength training makes you fast but fragile — you can ship code all day but you crumble when the problem gets genuinely hard. Strength without cardio makes you a philosopher who never ships anything. The combination is what makes someone actually effective.
I want to be honest — I’m not saying this from some position of wisdom. I’m a few months into leading a team, and I’m figuring this out alongside everyone else. The 1:1 isn’t me teaching anyone to think. It’s us practicing thinking together. I learn as much from these conversations as anyone. Sometimes more.
I think this extends beyond engineering, by the way.
If AI is making everyone’s output better — and it is — then the differentiator isn’t output anymore. It’s the quality of thinking behind the output. The person who can explain why they made a decision, who can anticipate second-order effects, who can say “actually, we should do something completely different” and be right — that person becomes more valuable, not less.
The token leaderboard measures the cardio. Nobody’s measuring the weight room.
Maybe they should be. I’m just not sure how yet.
For now, my approach is simple: encourage the team to use AI for everything they can — no guilt, no gatekeeping. And then protect the spaces where we practice real thinking together.
My agent asked a good question. I’m still figuring out the answer.
This is part of my ongoing exploration of AI’s impact on how we work and think. Previously: When AI Does the Work, Who Does the Learning? and Your Agent’s First Job Will Be Hiring. I’m Yusi — an engineering manager at LinkedIn, figuring out management in the age of AI. Say hi on Substack.