When AI Does the Work, Who Does the Learning?
The reps you skip are the skills you lose.
A few weeks ago, I had to put together a roadmap for my team. I’d just become an engineering manager, and this was one of those “prove you belong in this role” moments. So I did what any reasonable person in 2026 would do: I dumped every product doc, every engineering spec, every stakeholder email into Claude and said, “Give me a roadmap.”
It gave me a roadmap.
It was... fine. Organized. Had the right headers. Used all the right buzzwords. It looked like a roadmap the same way a wax apple looks like an apple — convincing until you try to bite into it. I couldn’t defend any of the priorities. I couldn’t explain the tradeoffs. When someone asked “why Q2 for this?” I had nothing. The roadmap existed, but I hadn’t built it. I’d outsourced the thinking and kept the PowerPoint.
So I tried again. This time, I used AI differently. I researched with it — “what are the risks of this approach?” I debated with it — “I think you’re wrong about the priority, here’s why.” I organized my thinking through it — using it as a sounding board, not a ghostwriter.
The second roadmap looked surprisingly similar. But I could defend every line. During execution, I kept recalling the reasoning naturally — because I’d actually processed it. The material was mine.
The roadmap was never the point. My understanding was.
There’s a concept in cognitive science called “cognitive offloading” — using external tools to handle mental tasks. We’ve always done this. You don’t memorize phone numbers anymore. You don’t do long division by hand. That’s fine. Nobody mourns the loss of mental arithmetic.
But here’s where it gets uncomfortable. Researchers are finding that offloading thinking to AI — not just calculation, but reasoning, judgment, analysis — creates what they call a “cognitively sedentary condition.” Like a sedentary lifestyle, but for your brain. You feel fine. You’re producing output. But the muscles are quietly atrophying.
The data is pretty stark: confidence in AI is inversely correlated with critical thinking effort. The more you trust the tool, the less you exercise your own judgment. And the less you exercise it, the worse it gets. It’s a flywheel — just not the kind you want.
I think about this like the gym. If you hired a personal trainer who did your reps for you, you’d fire them immediately. That’s absurd — the whole point is that you do the work. The struggle isn’t a bug. It’s the product.
But with AI, we do exactly this. We hand over the cognitive reps and celebrate the output. “Look what I made!” No — look what it made. You held the phone and watched.
Now, I’m not saying you should stop using AI. I use it constantly. I’m using it right now to help me write this post, and I’m not going to pretend otherwise. The question isn’t whether to use AI — that ship has sailed, and it’s a really nice ship.
The question is: which reps are you skipping?
Here’s how I think about it in three layers:
Offloading calculation → totally fine. Let AI summarize a 50-page doc, crunch data, format a spreadsheet. These are mechanical tasks. You’re not losing a skill you need.
Offloading judgment → proceed with caution. When AI picks your priorities, evaluates your options, or tells you what to focus on — that’s where it gets risky. Judgment is built through practice. Skip the practice, lose the judgment.
Offloading the process of learning to judge → this is where it breaks. If you never struggle with a problem because AI solved it before you could even frame it, you don’t just miss that answer — you miss the skill of figuring things out. That compounds. A year of skipped reps is a year of skill you didn’t build.
This hits different when you’re hiring people. As an EM, I’m starting to wonder: how do you evaluate someone who ships great code with AI assistance but can’t debug without it?
On paper, they’re productive. Their output is clean. Their PRs look good. But take away the copilot and what’s left? Are they a strong engineer, or are they a strong prompt writer? Those are two different skills, and most interviews can’t tell the difference.
I don’t have a clean answer for this. But I think the filter is shifting from “can you do the work?” to “do you understand the work?” — and those are increasingly different questions.
The irony connects back to something I wrote about last week: in the agent economy, the scarce resource isn’t labor — it’s judgment. But if AI is quietly letting our judgment atrophy while boosting our output, we’re in a weird spot. We’re building an economy that runs on a skill we’re simultaneously degrading.
So here’s my current operating principle, for whatever it’s worth: use AI for the output, but do the thinking yourself.
Let it draft, but you decide. Let it research, but you synthesize. Let it suggest, but you choose — and know why you’re choosing.
The roadmap taught me this the hard way. The version AI wrote for me was useless — not because it was wrong, but because I hadn’t earned it. The version I built with AI as a sparring partner? I could defend it in my sleep.
The difference wasn’t the tool. It was whether I did the reps.
This is part of my ongoing exploration of AI’s impact on how we work and think. Previously: Your Agent’s First Job Will Be Hiring and When Your Knowledge Can Be Cloned. I’m Yusi — an engineering manager at LinkedIn, trying to use AI without letting it use me. Say hi on Substack.