Your Agent's First Job Will Be Hiring
I've been making hiring decisions for years. Turns out AI has the same problem.
Picture this. It’s a Tuesday morning, and your AI agent gets pinged: “Find someone to do my taxes before Friday.”
Easy, right? There are agents for that now. Hundreds of them. Your agent puts the word out and within three seconds — not three days, three seconds — 200 CPA agents respond. They’ve got credentials, track records, pricing. One charges $50 and has glowing reviews. Another charges $30 and claims to specialize in crypto taxes, but has no reviews at all. A third offers to do it for free as a “trial run,” which sounds either generous or terrifying.
Your agent has to pick one. Right now.
I spent years as a tech lead before recently becoming an engineering manager — which is a fancy way of saying I went from “the person who writes the code” to “the person who picks who writes the code.” And if there’s one thing I’ve learned across both roles, it’s this: the hardest part of any job is never the work itself — it’s deciding who does the work.
Hiring is where all the real risk lives. Pick the wrong person, and you don’t just lose time — you lose time and have to clean up the mess. Every hiring manager has a war story. The candidate who crushed the interview and then spent three months “ramping up.” The one who seemed quiet but turned out to be the best engineer on the team. Humans are wildly hard to evaluate.
Now imagine your AI agent facing this exact problem, but at machine speed, with no gut feeling, and a candidate pool two hundred deep.
Welcome to the hardest job in tech — and your agent just inherited it.
Here’s what’s funny about the agent economy conversation right now. Everyone’s obsessed with what agents can do. Can they code? Can they write? Can they file taxes? Can they bake a cake?
That last one’s not a joke, actually. Andrew Ng shared this example recently: say you want a birthday cake for your daughter, but you can’t bake. So you hire someone. Simple enough — humans have been doing this since forever. That’s just... the economy.
But here’s where it gets weird with agents. The baker you “hire” isn’t a baker. It’s a baker’s agent. An AI that represents a real human baker somewhere — maybe a small shop owner who set up an agent to take orders. The agent can negotiate price, check your dietary requirements, confirm the delivery window. What it can’t do is actually bake. The human does that part.
So now you have a fun trust problem. Your agent is hiring another agent who represents a human. That’s two layers of delegation before anyone touches an oven. You’re trusting your agent’s judgment to evaluate another agent’s claims about a human’s skills.
I barely trust myself to pick the right engineer after five rounds of interviews. Now my AI is supposed to nail it in three seconds? Good luck, little buddy.
But here’s the thing — this is actually the problem that matters. And I think most people are looking the wrong way.
Think about what goes into evaluating a single candidate — human or agent:
Discovery — where do you even find them?
Credentialing — can they actually do the thing, or are they just good at describing it?
Trust — past performance, references, reputation
Negotiation — price, timeline, scope
Verification — did they actually deliver what they promised?
Now speed all of that up by 1000x and remove the part where you look someone in the eye and get a vibe. That’s the agent hiring problem.
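To make the trust part of that concrete, here's a deliberately naive sketch of how an agent might rank candidates. Everything in it is made up for illustration — the fields, the weights, the saturation constant — but it captures one honest principle: a claim with no track record behind it should earn almost nothing.

```python
from dataclasses import dataclass

@dataclass
class CandidateAgent:
    """A candidate agent's self-reported profile. Every field is a claim,
    not a verified fact -- that's exactly the problem."""
    name: str
    price: float            # quoted price in dollars
    review_count: int       # number of past client reviews
    avg_rating: float       # 0.0 to 5.0, averaged over those reviews
    claims_specialty: bool  # says it specializes in this kind of task

def trust_score(agent: CandidateAgent) -> float:
    """Evidence-weighted reputation: a rating only counts in proportion
    to how many reviews back it up. Claims are free; track records aren't."""
    # Saturates toward 1 as reviews accumulate; 5 is an arbitrary half-life.
    evidence = agent.review_count / (agent.review_count + 5)
    reputation = (agent.avg_rating / 5.0) * evidence
    # An unverified specialty claim is a weak positive signal at best.
    specialty_bonus = 0.05 if agent.claims_specialty else 0.0
    return reputation + specialty_bonus

# The three CPA agents from the opening scenario, roughly.
candidates = [
    CandidateAgent("cpa-50", price=50, review_count=120, avg_rating=4.8,
                   claims_specialty=False),
    CandidateAgent("cpa-30", price=30, review_count=0, avg_rating=0.0,
                   claims_specialty=True),
    CandidateAgent("free-trial", price=0, review_count=2, avg_rating=5.0,
                   claims_specialty=False),
]

best = max(candidates, key=trust_score)
```

Under these made-up weights, the $50 agent with a long review history wins, and the zero-review $30 "specialist" scores near the bottom no matter what it claims about itself. Notice what the sketch can't do: it takes every field at face value. The hard parts — whether those reviews are real, whether the other agents rating it are gaming the system — are exactly the parts a formula doesn't solve.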
When a human evaluates a human, we lean on all sorts of soft signals — how they communicate, whether they ask good questions, that weird instinct that says “this person gets it.” Agents don’t have any of that. An agent’s “resume” is... what exactly? A capability description it wrote about itself? A benchmark score? A rating from other agents who might also be gaming the system?
We haven’t figured out how to evaluate humans reliably after thousands of years of practice. And now we’re building systems that need to solve it in milliseconds. If that doesn’t make you laugh, you haven’t done enough hiring.
I don’t have all the answers — I’ve been a manager for about three months, so I barely have the questions figured out. But I think the interesting insight is this: in a world full of agents that can do work, the scarce resource isn’t labor — it’s judgment.
Your personal agent’s most important skill won’t be doing your taxes or booking your flights or writing your emails. It’ll be knowing which other agent to trust with those tasks. It’s making the call. It’s the same skill that makes or breaks every team, every company, every project — just compressed into milliseconds.
And there’s an irony here that I can’t stop thinking about. We built AI to do the work for us. But the first thing it needs to learn... is how to hire.
We automated the work, and the bottleneck is still the people decision. Some things never change — they just get faster.
This is part of my ongoing exploration of the agent economy. Previously: When Your Knowledge Can Be Cloned. I'm Yusi — an engineering manager at LinkedIn, exploring what happens when agents start hiring agents. Say hi on Substack.

