Agentic Coding
Chapter 01: How It Actually Works

What Claude Code actually does

The model, the harness, and the tools. How an AI coding agent actually works under the hood.

20 min · Lesson 1 of 3

Last week I built a full Stripe integration — checkout flow, webhook handler, database updates, error handling — in one prompt. Fourteen files created or modified. Two build failures caught and fixed automatically. I did not write a single line of code.

This is not autocomplete. It is not a chatbot generating snippets you paste in. An autonomous agent read my codebase, figured out what to do, made the changes, and caught its own mistakes.

To understand why this works — and more importantly, how to make it work well — you need to understand the architecture. The mechanical reality, not the marketing version.

The model and the harness

There are two pieces to every agentic coding tool. Most people think of them as one thing. They are not.

The model runs on Anthropic's servers. Claude Opus, Claude Sonnet — these are language models. They receive text, they reason about code, they produce text. They are very good at deciding what to do. But on their own, they cannot read your files, run your build, or edit your project. They are brains without hands.

The harness runs on your machine. Claude Code is a harness. It is a program in your terminal that sits between you and the model. When you type a prompt, Claude Code does not just send your message to Anthropic. It adds a list of tools the model can use.

The model (remote)

Claude Opus or Sonnet running on Anthropic's servers. Receives your prompt plus a list of tool definitions. Returns text and tool call instructions. Cannot touch your files directly.

The harness (local)

Claude Code running in your terminal. Attaches tool definitions to every request. Executes tool calls locally on your machine. Sends results back to the model. Manages the entire conversation loop.

This separation matters. The model decides what to do. The harness does it. The model never touches your files — it sends structured instructions, and Claude Code executes them.

How a prompt becomes action

When you type a prompt in Claude Code, this is what actually happens:

You type a prompt

Claude Code takes your message and attaches a list of tools the model can use — Read, Edit, Write, Bash, Grep, Glob, WebSearch, and more.

The model reasons

Anthropic's API receives your prompt plus the tool definitions. The model decides what to do and returns tool calls — structured instructions like "Read this file" or "Run this command."

Claude Code executes

Claude Code runs the tool calls on your machine. Read opens a file. Bash runs a terminal command. Edit changes a specific line. The results are sent back to the model.

The loop continues

The model sees the results and decides the next step — read another file, make an edit, run the build. This loop repeats until the task is done.
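The loop above can be sketched in a few lines. This is a toy illustration, not Claude Code's actual implementation: fake_model stands in for the remote API call, the message format is invented, and only two tools are wired up.

```python
from pathlib import Path

# Toy harness: a dispatch table mapping tool names to local functions.
# Claude Code's real tool set is much larger; two tools suffice here.
TOOLS = {
    "Read": lambda path: Path(path).read_text(),
    "Write": lambda path, content: Path(path).write_text(content),
}

def run_agent(prompt, model):
    """Drive the loop: ask the model, execute its tool calls locally,
    append the results, and repeat until the model answers in text."""
    history = [{"role": "user", "content": prompt}]
    while True:
        reply = model(history)           # the remote reasoning step
        if reply["type"] == "text":      # no tool call: task is done
            return reply["text"]
        result = TOOLS[reply["tool"]](*reply["args"])  # local execution
        history.append({"role": "tool", "name": reply["tool"], "result": result})

# A scripted stand-in for the model: first asks to read a file,
# then produces a final answer from the tool result it was shown.
def fake_model(history):
    if history[-1]["role"] == "user":
        return {"type": "tool_call", "tool": "Read", "args": ["notes.txt"]}
    return {"type": "text", "text": "The file says: " + history[-1]["result"]}
```

The real loop also handles permissions, streaming, context management, and errors, but the shape is the same: the model decides, the harness executes, and results feed back in.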

What the model actually sees:

What gets sent to Anthropic's API
Your prompt: "Add a contact form with Zod validation"

Available tools:
  Read(file_path)             → returns file contents
  Edit(file_path, old, new)   → surgically replaces text in a file
  Write(file_path, content)   → creates or overwrites a file
  Bash(command)               → runs a terminal command
  Grep(pattern, path)         → searches file contents
  Glob(pattern)               → finds files by name pattern

Model response:
  → tool_call: Read("src/components/ui/index.ts")
  → tool_call: Read("packages/db/src/database.types.ts")
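On the wire, this exchange is JSON. Here is a sketch of the shapes the Anthropic Messages API uses for tool use, written as Python dicts; the id and file contents are made up, and Claude Code's real tool definitions are richer than this.

```python
# One tool definition: a name, a description the model reads to decide
# when to use the tool, and a JSON Schema describing its arguments.
read_tool = {
    "name": "Read",
    "description": "Read a file and return its contents.",
    "input_schema": {
        "type": "object",
        "properties": {"file_path": {"type": "string"}},
        "required": ["file_path"],
    },
}

# When the model decides to use it, its response contains a tool_use
# block like this instead of (or alongside) plain text.
tool_use = {
    "type": "tool_use",
    "id": "toolu_01",  # hypothetical id
    "name": "Read",
    "input": {"file_path": "src/components/ui/index.ts"},
}

# The harness runs the call locally, then replies with a tool_result
# block keyed to that id, so the model can see what happened.
tool_result = {
    "type": "tool_result",
    "tool_use_id": tool_use["id"],
    "content": "export { Button } from './button'",  # invented contents
}
```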

The tools

Claude has five categories of tools. Each one serves a different purpose in the loop.

Read

Read files, find files by pattern (Glob), search content by regex (Grep). This is how Claude explores your codebase.

Write

Create new files (Write) or surgically replace specific strings in existing files (Edit). Edit only sends the changed portion — far cheaper than rewriting the whole file.

Execute

Run any terminal command: build, test, lint, git, npm. The most powerful and most dangerous category.

Search

Look up documentation, check npm packages, read external resources. Claude is not limited to what is in your repo.

Orchestrate

Spawn subagents for isolated tasks, track multi-step progress. These let Claude manage complexity by breaking work into pieces.

One distinction worth knowing now: Edit finds a specific string and replaces it. Write specifies the entire file content. On a 500-line file where you are changing 2 lines, Edit uses roughly 1% of the context that Write would. This will matter a lot in Lesson 3.
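The rough arithmetic behind that figure, assuming an average line length of about 40 characters:

```python
# Back-of-envelope context cost of Edit vs Write on a 500-line file.
# The ~40-character average line length is an assumption; files vary.
LINE_LEN = 40
file_lines = 500
changed_lines = 2

# Write must send the entire new file content.
write_cost = file_lines * LINE_LEN        # 20,000 characters

# Edit sends only the old string and its replacement, each ~2 lines.
edit_cost = 2 * changed_lines * LINE_LEN  # 160 characters

print(f"Edit is ~{edit_cost / write_cost:.0%} of Write's cost")
```

In practice Edit also includes a little surrounding text so the match is unambiguous, but the ratio stays in the low single digits.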

Why Claude Code

There are many agentic coding tools. Cursor, Windsurf, Cline, Copilot Workspace. Why does this course focus on Claude Code?

Because Claude Code runs in your terminal. That matters more than it sounds.

You control the tool system. The model gets whatever tools Claude Code gives it. You can add new tools, giving Claude access to your database, your browser, your documentation. You can restrict tools so a review agent can read but not edit. You can automate around tools, running a script every time Claude edits a file. IDE-based agents give you their toolset. Claude Code lets you build yours.

It is composable. A CLI program can be scripted, looped, piped, and automated. claude -p "review this file" runs as a one-shot command. You can put it in a bash script. Chain three Claude calls into a pipeline. Run it in CI/CD. Build a PR review bot. None of this is possible when your agent lives inside an IDE.
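A minimal sketch of that scripting in Python, assuming the claude binary is on your PATH. The binary parameter exists only so the plumbing can be exercised without a real agent call, and both prompts are invented for illustration.

```python
import subprocess

def one_shot(prompt: str, binary: str = "claude") -> str:
    """Run one non-interactive agent call and return its stdout.

    `claude -p` prints the model's final response and exits, which is
    what makes it behave like any other pipeable Unix tool.
    """
    result = subprocess.run(
        [binary, "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# A two-stage pipeline: review a file, then condense the review.
def review_and_summarize(path: str) -> str:
    review = one_shot(f"Review {path} for bugs")
    return one_shot("Summarize this review in one line:\n" + review)
```

The same shape drops into a cron job, a CI step, or a webhook handler, which is exactly the composability an IDE-bound agent cannot offer.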

The default loop — what actually happens

Anthropic's own documentation describes the agentic loop as three phases that blend together: gather context, take action, and verify results.

Gather context

Read files, search code, check git history. Build a mental model of the relevant parts of the codebase.

Take action

Edit files, write new ones, run bash commands. The actual changes happen here.

Verify results

Run the build, run the linter, run tests. Check that the changes actually work. If something fails, loop back.

These phases are not rigid steps. They blend together. A question about your codebase might only need context gathering. A bug fix cycles through all three phases repeatedly. A refactor might involve extensive verification. Claude decides what each step requires based on what it learned from the previous step.

What this looks like on a real task. I gave Claude a detailed prompt with clear criteria:

Claude Code prompt

Add a contact form. Use the existing design system. Validate with Zod. Send submissions to the leads table in Supabase. Show a success toast after submission. Run the build after to make sure it compiles.

What Claude actually did
Read("packages/ui/src/index.ts")                 → finds Button, Input, Card
Read("packages/db/src/database.types.ts")        → finds leads table schema
Write("packages/shared/src/schemas/contact.ts")  → creates Zod schema
Write("apps/web/components/contact-form.tsx")    → builds the form
Edit("packages/api/src/routers/leads.ts")        → adds create mutation
Edit("packages/api/src/index.ts")                → registers new route
Edit("apps/web/app/contact/page.tsx")            → adds form to page
Bash("pnpm build")                               → type error: missing import
Edit("apps/web/components/contact-form.tsx")     → fixes the import
Bash("pnpm build")                               → success

Ten tool calls. Six files created or modified. One build failure caught and fixed. Notice two things: Claude verified its work because I gave it a concrete criterion ("run the build"), and it picked the right pieces because I told it which design system and validation library to use.

What the default loop does not do

Two things the default loop does not do automatically:

It does not plan explicitly. Claude has a feature called Plan Mode (Shift+Tab) that separates exploration from execution. In Plan Mode, Claude reads your codebase and creates a plan before making any changes. But Plan Mode is opt-in. Without it, Claude may jump straight to coding. We cover when and how to use Plan Mode in Chapter 3.

It does not always verify. Claude checks its work when you give it something to check against: "run the tests," "check that the build passes," "compare against this screenshot." Without criteria, it may skip verification entirely. Giving Claude verification criteria is the single highest-leverage habit you can build — we cover this in the next lesson.

The tradeoff you cannot ignore

More power means more risk. The agent can read, edit, and execute across your entire project. A misunderstanding is not a wrong line suggestion you can backspace. It is a wrong architectural decision executed across multiple files.

The rest of this course is about managing that tradeoff: structuring your codebase so the agent understands it, setting up guardrails that catch mistakes automatically, and building a system where more autonomy means better results, not bigger risks.

The power is real. The risk is real. The methodology is what makes the difference.

What comes next

You now understand the machine. The model reasons, the harness executes, the loop runs until the task is done. But understanding the machine and getting the most out of it are two different things.

In the next lesson, I am going to give you eight things you can do today that will immediately improve your results — and then show you what becomes possible when you go further.

Validate your understanding

See the separation

Install Claude Code and run a simple task. Your goal is not the output — it is to watch the model/harness split in action.
