How I Stopped Writing Code and Started Shipping Faster

About a month ago, I looked at a week's worth of commits and realized I hadn't written most of the code myself.
I had deliberately stopped.
That wasn't alarming. It meant the system was working.
The shift
I work on a production React Native app. Complex features, real constraints, shipping cadence that doesn't forgive wasted days. At some point I started asking a different question: where does my time actually create value?
Not in writing code. In deciding what to build, how to build it, and whether what shipped is actually right.
So I started treating Claude Code as a developer I'm responsible for directing and reviewing. Not a tool I reach for occasionally when I'm stuck.
That framing changed what the work looks like day to day.
Two workflows
The mistake I see most people make when talking about coding with AI is treating it as one thing. It's two different modes, and mixing them up breaks both.
The autonomous flow
Some work I don't want to be involved in at all. Routine tasks, well-scoped features where the expected output is verifiable. I write the spec, Claude Code implements, I check the result later.
I don't read every line. If the outcome is correct, the process doesn't matter to me.
The goal is for this to run without me for hours, sometimes a day, and produce something ready to review. Not something half-done that needs my attention to move forward.
This requires more discipline on the spec side than I expected. Vague input produces vague output. If I want to check back and find something shippable, I need to front-load the thinking: what does done actually look like, what are the edge cases that matter, what should it explicitly not do.
When that thinking is in place, it works. When it isn't, I get something that looks right until it runs.
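One concrete way to front-load that thinking is to write the edge cases down as assertions before handing the task off, so "done" is checkable rather than vibes. The sketch below is a hypothetical illustration, not from my actual codebase; `formatDuration` and its cases are invented for the example:

```typescript
// Hypothetical spec fragment for an autonomous task. Instead of prose like
// "format durations nicely", the spec pins down "done" as concrete cases,
// including what the function should explicitly NOT do.
function formatDuration(totalSeconds: number): string {
  if (totalSeconds < 0) throw new RangeError("negative duration");
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return `${minutes}:${String(seconds).padStart(2, "0")}`;
}

// The spec is the edge cases, written before any implementation exists:
console.assert(formatDuration(0) === "0:00");     // zero, not an empty string
console.assert(formatDuration(59) === "0:59");    // no rollover below a minute
console.assert(formatDuration(60) === "1:00");    // seconds stay zero-padded
console.assert(formatDuration(3600) === "60:00"); // explicitly NOT hours
```

When the checks exist up front, "check the result later" is a command I run, not a judgment I make from memory.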
The collaborative flow
The second mode is different. I stay involved, but not in the implementation.
I have a feature I want to build. I spec it. Then I work through the approach before handing it off: what's the right architecture, what tradeoffs am I making, is there a better way I haven't thought of. This is closer to designing with a collaborator than delegating to one.
I'll push back on suggestions, change direction based on something the model surfaces, iterate on the approach. Then I let it implement.
For the critical parts, I read the code. Not all of it. The parts where a wrong call has real consequences. The parts where I need to be confident.
What actually changed
Output is up. But the shape of the work shifted.
I spend more time thinking before anything gets written. My specs got better because bad specs now have visible consequences: when the autonomous flow produces something wrong and I trace it back, the problem is almost always ambiguity I introduced upstream.
I spend more time reviewing. Not more total hours; a larger share of them goes to reviewing rather than writing.
And I'm more deliberate about which work goes into which flow. Misjudging that costs more than it saves. Some tasks look like good candidates for full autonomy and aren't.
What doesn't work
Claude Code doesn't know your codebase the way you do after months inside it. The context you provide is the ceiling on what it can work with. Specs that assume knowledge it doesn't have come back as wrong implementations, not as questions.
It also doesn't catch its own subtle bugs reliably. Not the obvious ones. The edge cases in complex stateful code, rare interaction patterns, things that only fail under specific conditions. Review is not optional. Shipping AI-generated code you haven't read is like shipping a colleague's code without reading it. Nobody should do that.
The autonomous flow also requires trust you build incrementally. Start with lower-stakes tasks. Expand scope as you get a feel for where it holds and where it needs more guidance.
What this means
The senior engineer who's most valuable in this model isn't the one who writes the best code. It's the one who makes the right decisions before the code exists, and who can tell the difference between output that's correct and output that looks correct.
That judgment comes from having shipped things and watched them break. The model doesn't have it.
I think about this when I see engineers treating AI coding tools as a threat. The ones adapting aren't doing less skilled work. They're doing different skilled work. The identity shift is harder than the technical shift, and that's the part nobody talks about clearly.
I don't know exactly what this job looks like three years from now. Right now it looks like this: own the decisions, review what ships, let the implementation be someone else's problem.
That's a different job than the one I trained for. Turns out it suits me better.