Nobody talks about how it actually feels.
The benchmarks get written. The tooling comparisons get published. The productivity numbers get cited in VC decks. But the human experience of sitting down at your keyboard, opening a context window with an AI that is faster than you, more patient than you, and never needs coffee — that conversation doesn't happen enough.
I've been writing code for over two decades. I remember the feeling of solving a hard bug at 2am — the particular satisfaction of it, the way your brain replays the insight for days. I didn't know that feeling had a price tag until AI started devaluing it.
This is me trying to be honest about what that's been like. Not optimistic. Not pessimistic. Just honest.

The Identity Question
Does AI coding make you feel less like a programmer?
Yes. And also no. And that contradiction is where everything interesting lives.
There are sessions where I hand the agent a gnarly migration task and it does in twelve minutes what would have taken me a focused afternoon. I watch it read the schema, understand the relationships, write clean SQL, handle the edge cases. And I feel something complicated — pride that I built the system well enough for an AI to understand it, and something quieter that I don't have a clean word for.
It's not quite loss. It's more like watching someone else dance to a song you used to own.
| Old Self-Concept | New Self-Concept |
|---|---|
| "I write code" | "I direct code" |
| "I solve problems" | "I frame problems" |
| "I know the syntax" | "I know what matters" |
| "I debug line by line" | "I read diffs and judge" |
| "I am the craft" | "I orchestrate the craft" |
The shift happened gradually, then all at once. One morning I realized I hadn't written a for-loop by hand in three weeks. I didn't miss it. That surprised me more than anything.
What I've landed on: the identity isn't gone. It evolved. The programmer identity used to be attached to the act of typing code. Now it's attached to something harder to articulate — taste, judgment, the ability to look at what an AI produced and know whether it's right. Not just "does it compile" right. Right right.
That's still a craft. It's just a different one.
What did you have to grieve?
The feeling of flow. That's the honest answer.
Deep work programming — the kind where you're so inside a problem that two hours pass and it feels like twenty minutes — that doesn't happen the same way anymore. The AI is always there, always faster, always ready with a suggestion before your thought is fully formed. You stop finishing your own sentences. After a while, you stop starting them.
I didn't realize how much I valued that solitude until it was crowded out.
I've had to deliberately structure time for unassisted thinking. Problems I work through on paper first, before I open any tool. Architecture decisions I sketch in a notebook before I ask anything. It's not productivity theater — it's protecting something that matters to me about how I think.
What did you gain that you didn't expect?
Ambition. Genuinely.
I used to scope projects based on what I could realistically build alone in the time I had. That constraint shaped everything — what I attempted, what I simplified, what I abandoned. AI didn't just make me faster at what I was already doing. It expanded the ceiling of what I'd even try.
| Year | The Reality |
|---|---|
| 2022 | "I want to build a real-time analytics dashboard for my store. That's a 3-month project. Pass." |
| 2025 | "I want to build a real-time analytics dashboard for my store." — 30 hours later — "It's live." |
The interesting part isn't the speed. It's that I tried. It's that I didn't talk myself out of it. Ambition that was rationed is now less rationed. I'm still figuring out what to do with that.
The first time I shipped something I wouldn't have attempted before, I felt something I hadn't expected: gratitude. Not toward the AI — toward the situation. Toward this weird transitional moment in history where someone with my skills and my constraints could suddenly build at a scale that wasn't available to me before.

The Emotional Reality
| Minutes | What Happens |
|---|---|
| 1–10 | Agent reads codebase. Seems great. |
| 10–20 | First implementation. Looks reasonable. |
| 20–30 | You notice something slightly off. You correct it. Agent agrees. Continues. |
| 30–45 | The slight offness is still there, different shape, same root cause. |
| 45–60 | You realize the agent's mental model of the system is wrong at the foundation. Everything it's building is on that wrong foundation. |
What does a bad session actually feel like?
Like arguing with a very confident person who is wrong.
The agent has read 40,000 tokens of your codebase and formed a model of it. That model is close to correct but not quite. It has a blind spot — an architectural decision you made six months ago that isn't documented anywhere, just carried in your head. And now the agent is making changes that are technically coherent within its model but wrong within the actual system.
And you can see it happening. In real time. And the corrections don't stick because the context window doesn't hold the nuance you're trying to inject into it. You correct. It drifts. You correct again. It drifts again. It's confident. It keeps going. The code it's writing will pass the linter and fail in production in a way that will take hours to trace.
That's a bad session. They happen more than the productivity literature admits.
The emotional experience of a bad session is specific: a slow dawning frustration that's different from the frustration of a hard bug. A bug is a puzzle. This is watching someone solve the wrong puzzle confidently while you try to redirect them.
And a good session?
Magic. I don't use that word casually.
A good session is when you and the agent are genuinely thinking together. You describe the problem, it asks the right clarifying questions, you give context, it reads the relevant files without being told which ones, it produces code that surprises you in the best way — code that solves the problem in a way you wouldn't have found yourself but immediately recognize as right.
Those sessions exist. They're not rare. And they produce a different feeling than solo flow — less like being inside a problem and more like being in a great conversation with someone who is trying to understand what you actually need.
I've started thinking of the craft differently. There's solo craft and collaborative craft. Flow state is solo. The good AI sessions are collaborative. Both have value. I used to only have access to one.
How do you handle the trust problem?
Carefully. Slowly. In layers.
I don't trust agent output the way I trust my own output. That sounds obvious, but the failure mode is subtle: the code looks right, passes tests, ships, and breaks two weeks later in a production edge case that no test covered. The agent was confident. The tests passed. The code review looked clean.
| Trust Level | Approach |
|---|---|
| Low stakes | Trust more freely |
| High stakes | Verify everything |
The skill I've developed — that I think is genuinely undervalued — is reading agent output the way a careful editor reads a manuscript. Not checking every word, but reading for coherence, for places where the logic shifts subtly, for assumptions that don't match the system's actual behavior. That's a human skill. It gets better with practice. It doesn't automate.
Danger Zone — high confidence + high complexity:
When the agent sounds most certain, be most careful. Confident wrong answers are harder to catch than uncertain ones. The agent doesn't know what it doesn't know. You have to know for it.

The Craft Question
Is programming still a craft?
More than ever, actually. Just differently.
The craft used to be mostly execution — knowing the syntax, remembering the patterns, typing correctly and quickly. Those skills still matter but they're table stakes now. The craft that matters more is upstream: understanding what you're building and why, making architectural decisions that will survive contact with the real world, knowing when simple is better than clever.
That judgment can't be automated because it depends on context that lives outside the codebase — business constraints, user behavior, team dynamics, the political history of why that module was built that way in 2019. The AI can read your code. It can't read your organization.
| Before | After |
|---|---|
| Remembering API signatures | Knowing what API to use |
| Typing implementations | Evaluating implementations |
| Debugging runtime errors | Preventing design errors |
| Writing tests | Deciding what to test |
| Following patterns | Recognizing when patterns fail |
| Individual skill | Specification skill |
I've become a better programmer since I started working with AI. Not because I learned new technical skills — because I had to get much better at knowing what I wanted. Vagueness used to hide in the gap between thinking and typing. That gap is much smaller now. You find out quickly when you didn't really know what you wanted.
What do you do that AI genuinely can't?
Three things, consistently.
Hold the why. I know why the system is the way it is. Not just what it does — why each decision was made, what constraints were active at the time, what was tried and failed, what was deferred and why. That institutional memory is not in the codebase. It's in me. An agent working from the code alone is always working from a simplified version of the truth.
Read the room. The right technical decision isn't always the best decision. Sometimes the right answer is the one your team can understand and maintain. Sometimes the right answer is the one that ships this week even if it's imperfect. Sometimes the right answer is "we shouldn't build this at all." Those calls require human context — about the team, the timeline, the business — that exists in no file the agent can read.
Care about the right things. An agent optimizes for what you measure. It doesn't independently value elegance, or maintainability, or the experience of the next programmer who reads this code at 11pm trying to trace a production bug. You have to bring those values. You have to specify them or they won't show up.
What advice would you give to a programmer just starting to use AI tools?
Stay in the driver's seat. That's the whole thing.
| The Programmer | The Tool (AI) |
|---|---|
| You are responsible | It is not |
| You understand the goal | It optimizes for the prompt |
| You know what you'll regret | It doesn't care |
| You judge correctness | It produces plausibility |
| You must never stop understanding | It keeps going either way |

The moment you stop understanding, stop. Read, question, then continue.
Keep writing code by hand. Not because it's more productive — it isn't — but because the act of typing a solution keeps you connected to the solution. Use AI to go faster and further. Use the saved time to go deeper on the parts that matter.
The programmers who will be most valuable in five years are the ones who maintained genuine technical understanding while learning to leverage AI effectively. Not one or the other. Both. The leverage without the understanding is a house of cards. The understanding without the leverage is just slower.
A precise, well-scoped spec produces good code.
A vague spec produces vague code, confidently.
The programmer's craft moved upstream.
Writing the right spec is harder than people admit.
It requires understanding the problem more deeply, not less.
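To make that concrete, here's a toy sketch in Python (the function names and parameters are invented for illustration, not taken from any real session): the same request, specified vaguely and then precisely. The vague version leaves the agent to guess; the precise version leaves it almost nothing to guess.

```python
import time

# Vague spec: "add retry logic to the fetch"
# The agent has to guess: how many retries? which errors? what backoff?

# Precise spec: "retry fetch_user up to 3 times on TimeoutError only,
# with exponential backoff starting at 200 ms; let other errors propagate"
def fetch_with_retry(fetch_user, retries=3, base_delay=0.2):
    """Retry fetch_user on TimeoutError with exponential backoff."""
    for attempt in range(retries):
        try:
            return fetch_user()
        except TimeoutError:
            if attempt == retries - 1:
                raise  # out of retries: surface the timeout
            time.sleep(base_delay * 2 ** attempt)  # 0.2s, 0.4s, ...
```

Every parameter in the precise version is a decision the vague version silently delegated.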

The Bigger Picture
Do you worry about where this is going?
Yes. Specifically.
I'm not worried about AI replacing programmers in the dramatic sense. I'm worried about a subtler erosion — a generation of developers who learned to code by accepting AI output rather than by struggling through problems. The struggle is where the real understanding forms. The 3am debugging session where you trace through ten layers of state before you find the off-by-one error — that builds a mental model of how systems actually behave that autocompleted code never does.
If AI removes all the struggle, it removes all the learning. And then you have an industry full of people who can operate AI tools but can't reason about the systems those tools are building. That's a fragile foundation.
| Generation | How They Learned | Dependency Level |
|---|---|---|
| 1990s developers | Read the source, no docs | Low — built mental models from scratch |
| 2000s developers | StackOverflow + blogs | Medium — external answers, own synthesis |
| 2010s developers | GitHub + tutorials + Google | High — copy, adapt, understand |
| 2020s developers | LLMs + completion + generation | Very high — accept output, skip struggle |
The Dependency Risk
Each generation relies more on external intelligence, and each generation builds less of its own mental model of the system. The question is where the floor is: at what point does the dependence compromise the ability to build anything the AI can't? We'll find out when something goes wrong that the AI can't diagnose, because none of the humans left can either.
I don't think this is inevitable. I think it's a choice, individually and collectively. But it requires being honest about the risk, which the productivity narrative doesn't encourage.
What gives you hope?
The human instinct to understand things.
I've watched junior developers who use AI constantly also stay up late reading source code because they were curious about how something worked. Not because they needed to — because they wanted to understand. That instinct doesn't seem to disappear just because the tool exists.
The programmers who thrive are the ones who use AI as a telescope — to see further than they could alone — not as a crutch. The telescope doesn't replace your eyes. It extends them. You still have to know what you're looking for. You still have to interpret what you see.
That interpretive capacity — the human ability to decide what matters, to hold values about software quality, to care about the experience of people who use what we build — I don't see that eroding. I see it becoming more central.
We were always more than our typing speed. We're just being asked to prove it now.
Leon Yeh is a GenX Computer Scientist writing about AI, blockchain, and the future of software. He has strong opinions about the Kelly Criterion and medium opinions about everything else.