The Code Is Just a Shadow
How to lead when you can’t read every line anymore.
I’ve been feeling a specific kind of tension lately.
It’s the friction of watching a system grow faster than you can realistically track, while still being the person responsible for its balance.
When you step into a TLM (Tech Lead Manager) or Senior IC role, your relationship with code fractures. You go from being the one who wrote every line to the one overseeing multiple massive, AI-assisted codebases. You aren’t in the weeds anymore. You physically can’t be.
But when a sev-1 pager goes off at 2 AM, or when a foundational architectural decision needs to be made, you’re still expected to have a profound, intuitive grasp of the system.
The gap between what you can realistically read and what you are responsible for knowing is widening every day.
I recently watched a video by Hak that perfectly articulated how to survive this gap. He pulled a concept from a 1985 paper by computer scientist Peter Naur called Programming as Theory Building.
The core argument is simple but radical: The code isn’t the program. The program is the theory of the system that lives in the programmer’s head: how the pieces connect, why they connect that way, and what happens if you pull one out.
The code is just the shadow of that theory.
If AI is now generating the “shadow” on demand, how do we, as leaders, maintain the “theory”?
The Three-Question Scaffold
Hak argues that as code becomes easy to generate, our job shifts from being the star player to coaching the team. The AI can run the floor and shoot threes faster than you. But you are the one who knows how to read the defense, when to call the pick-and-roll, and why you need to slow the pace down in the fourth quarter.
To coach effectively across massive projects, you need a mental scaffold. Hak boils this down to three questions you must be able to answer without looking at the code:
Where does the state live? Who owns the absolute truth of the system? If two different components think they own the state, you don’t have a feature; you have a bug that just hasn’t been triggered yet (the sketch after this list shows what that looks like).
Where do the feedback loops live? How do you know this is actually working? Not just “it compiled,” but “is the intent being met?” If your logs are empty, your errors are swallowed, and your metrics are silent, you’re flying blind.
If I delete this, what breaks? This is the ultimate test of your theory. Can you trace the blast radius of a component in your head before you ever touch the deployment pipeline?
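To make the first question concrete, here’s a minimal sketch (hypothetical CartCache and CartService classes, invented for illustration) of two components that each believe they own the truth:

```typescript
// Hypothetical sketch: two components that each believe they own cart state.
// Nothing here crashes; the bug stays latent until the two copies diverge.

class CartCache {
  private items: string[] = []; // copy #1 of "the truth"

  add(item: string): void {
    this.items.push(item); // updates the local cache only
  }

  count(): number {
    return this.items.length;
  }
}

class CartService {
  private items: string[] = []; // copy #2 of "the truth"

  async add(item: string): Promise<void> {
    this.items.push(item); // updates the backend only
  }

  async count(): Promise<number> {
    return this.items.length;
  }
}

// A flow that writes to the cache but reads from the service compiles,
// passes a happy-path test, and ships a quiet divergence:
const cache = new CartCache();
const service = new CartService();

cache.add("book"); // the UI now shows 1 item...
service.count().then((n) => console.log(n)); // ...the system of record says 0
```

Neither class is wrong in isolation. The bug lives in the relationship between them, and no single line of code will show it to you.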
These aren’t coding questions. They are systems thinking questions.
The Determinism Trap
I wrote a while back about how the “reps” you skip are the skills you lose.
There’s a dangerous temptation right now to treat AI as just another abstraction layer, like moving from Assembly to Python. But there is a fundamental difference. When you write Python, the toolchain is deterministic. You don’t need to know how it translates your code into machine logic, because the same source produces the same translation every time.
An LLM is unpredictable (at least for now). It is a probabilistic collaborator, not a trustworthy compiler.
For the same input, it might give you a different architecture tomorrow. It can quietly introduce a race condition or violate a core business rule while writing perfectly formatted React.
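Here’s a hedged sketch of what that can look like (a made-up redeemCoupon handler, not from any real codebase): the code below is typed, clean, and reads well, but two concurrent calls slip past a check-then-act race and break a single-use business rule.

```typescript
// Hypothetical example of AI-generated code that "looks flawless".
// Business rule: a coupon may be redeemed at most once.

const redeemed = new Set<string>();

async function creditAccount(_code: string): Promise<void> {
  // Stand-in for a real payment or ledger call.
  await new Promise((resolve) => setTimeout(resolve, 50));
}

async function redeemCoupon(code: string): Promise<boolean> {
  if (redeemed.has(code)) {
    return false; // already used, reject
  }
  await creditAccount(code); // execution suspends here...
  redeemed.add(code); // ...and the coupon is only marked used afterwards
  return true;
}

// Both calls pass the has() check before either reaches add(),
// so the coupon is credited twice:
Promise.all([redeemCoupon("SAVE10"), redeemCoupon("SAVE10")]).then(
  (results) => console.log(results) // [ true, true ]
);
```

No linter flags this, and every individual line is defensible. Only someone holding the theory of who owns the redemption state catches it in review.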
As a Senior IC or TLM, your value is no longer in producing the output (the shadow). It’s in holding the judgment (the theory) to audit that output. You are looking for the places where the abstraction is jagged—where the code looks flawless but the system design is fundamentally incoherent.
Design Before You Prompt
So, how do you enforce this on a team moving at the speed of AI?
You force the friction upfront. Before the AI writes a single line of code, you draw the boxes.
Take a piece of paper or a whiteboard. Draw the components. Draw the arrows for data flows. Mark where the state lives. Mark where the failures surface. If you can’t draw the system, you don’t understand it. And the AI is just going to confidently build whatever you failed to specify.
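As a rough illustration (a made-up order system, not a prescription), the drawing doesn’t need to be fancier than this:

```
[ Client ] ---> [ API Gateway ] ---> [ Order Service ]   (state lives here:
                      |                      |            single owner of order truth)
                      v                      v
              [ Rate Limiter ]         [ Event Log ]     (feedback surfaces here:
                                                          metrics, alerts, replay)
```

If you can draw those boxes and defend every arrow, you have a theory to audit the AI against.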
When I sit down for a 1:1 now, I’m not nitpicking variable names or syntax. And in the next design review, I’m going to point at a box and ask: “Where is the state managed? Where do we get the feedback? And if this is deleted, will it recover?”
If the engineer (or their AI agent) can’t answer that, we haven’t built a program. We’ve just piled up some shadows.
The goal isn’t to ship the most code. It’s to build the most resilient theory.
I’m curious—how are you maintaining your “system intuition” as the codebases get bigger and the AI gets faster? Let’s practice thinking together.
This is part of my ongoing exploration of AI’s impact on how we work and think. Previously: When AI Does the Work, Who Does the Learning? and Coding Is Cardio. The 1:1 Is the Weight Room. I’m Yusi, an engineering manager at LinkedIn, figuring out leadership in the age of AI. Say hi on Substack.

