
How We Deleted Linear and Let Claude Code Run Our Sprints

Jean Desauw
7 min read

A month ago we canceled our Linear subscription. Two developers and a designer on a React Native app. Roughly 30 issues per sprint. Zero project management tool on the stack. We never open a browser tab to manage work. Claude Code reads the sprint, files new issues when we spot bugs, and sets the labels and assignees the way a human would. Since the change, we ship more and our tooling bill is smaller.

Nothing against Linear; it's a solid product. But once a coding agent can file issues from conversation, you stop needing a PM UI at all. You just need a platform the agent can drive through the CLI.

Why we killed Linear

Two costs stacked up. The money one was small but real: per-seat, per-month, forever. The token cost was the one that actually bit.

Once you wire Linear into Claude Code through its MCP server, every session loads the MCP schema into the context window. Hundreds of messages a week multiplied by MCP overhead becomes a measurable line on your bill. The schema sits there even when the agent doesn't need it, which is most turns. You're paying an attention tax on every call.

The deeper problem was duplication. We were writing commit messages, PR descriptions, CLAUDE.md rules, and Linear tickets that all said more or less the same thing in slightly different language. GitHub was the real source of truth. Linear was a derivative UI we kept syncing by hand.

GitHub already has issues, projects, iterations, labels, milestones. The gh CLI exposes all of it with zero MCP overhead. If the agent can file a GitHub issue from natural language, there's nothing left for Linear to do in our flow.

The stack

Four pieces hold the system together.

GitHub Projects v2 for iterations, fields, and the sprint board. Native to the platform we already pay for.

gh CLI for every issue and PR operation. Preinstalled in our dev environment, so Claude Code invokes it directly. No MCP to load, no auth dance per session.

A rules file at .ai-rules/github-project.md. This is where the team becomes legible to the agent: the project ID, the iteration field ID, each developer's GitHub username, the label taxonomy, the size-to-estimate mapping. When Claude files an issue, it reads this file first and gets the field values right.
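The file itself is short. Here's an illustrative sketch of the shape; every ID, username, and label below is a placeholder, not our real config:

```markdown
# .ai-rules/github-project.md (illustrative — all values are placeholders)

- Project ID: PVT_kwDOxxxxxxx (project number 3, org `acme`)
- Iteration field ID: PVTIF_xxxxxxx; iterations are named "Sprint NN"

## Team
- @jdesauw — mobile, owns release flow
- @teammate — mobile, owns API integration
- @designer — Storybook components in /packages/ui

## Labels
- bug / feature / chore
- area:mobile / area:api
- needs-native-build / ota-safe (set by CI, see workflows)

## Sizing
- S = 1 point, M = 2, L = 5; default new bugs to S unless repro is unclear
```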

A CI layer that catches what the agent misses. One workflow validates branch naming. One validates that every PR carries the right labels. A third checks whether the PR touches native code and auto-tags it needs-native-build or ota-safe. That last one matters because we ship a React Native app and the release flow differs between the two.
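The checks themselves are small. Here's a sketch of the two shell predicates those workflows run; the branch pattern and native paths are assumptions, not our exact config:

```shell
#!/bin/sh
# Illustrative CI checks — the naming pattern and path prefixes are
# placeholders standing in for our real workflow config.

# Branch names must look like: feat/123-short-slug, fix/456-bug-name, chore/...
valid_branch() {
  echo "$1" | grep -Eq '^(feat|fix|chore)/[0-9]+-[a-z0-9-]+$'
}

# A PR that touches native code needs a full build; JS-only changes are
# OTA-safe. Takes a newline-separated list of changed files (from git diff
# in the real workflow).
native_label() {
  if echo "$1" | grep -Eq '^(ios/|android/)'; then
    echo "needs-native-build"
  else
    echo "ota-safe"
  fi
}

valid_branch "feat/123-profile-avatar" && echo "branch ok"
native_label "src/screens/Profile.tsx"   # JS-only change
native_label "ios/Podfile"               # touches native code
```

In the real workflows these run against the PR's head ref and changed-files list, and the second one applies the label via `gh pr edit --add-label`.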

The first three give Claude enough context to do the work. The CI layer is the safety net for when it gets something wrong.

A day in the sprint

Morning starts in the terminal. I ask Claude "what's on my plate this iteration?" and it runs a GraphQL query against GitHub Projects, filters by my username, prints a list sorted by priority.
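The query looks roughly like this; the org name and project number are placeholders, and in practice Claude reads them from the rules file:

```shell
#!/bin/sh
# Sketch of the sprint query. Org and project number are placeholders.
QUERY='
query($org: String!, $number: Int!) {
  organization(login: $org) {
    projectV2(number: $number) {
      items(first: 50) {
        nodes {
          content {
            ... on Issue {
              title
              url
              assignees(first: 5) { nodes { login } }
            }
          }
        }
      }
    }
  }
}'

# The agent would run something like:
#   gh api graphql -f query="$QUERY" -f org=acme -F number=3
# then filter the JSON by my login and sort by the priority field.
echo "$QUERY"
```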

I pick one. If it involves a UI component, our designer has usually already built it in Storybook inside /packages/ui, so I can wire it into the app directly. If the issue is underscoped, I ask Claude to break it down and update the body before I start coding. Then I branch, work, push, open the PR. The CI checks the labels and the branch name. Claude updates the issue if there's something worth flagging for reviewers.

When I want to zoom out on the sprint I run /sprint-status. It's a slash command wrapping a few gh queries and it prints burn-down, blockers, stale items. Three seconds, still in the terminal.
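Under the hood it's nothing fancy. A sketch of the summary step, with a hard-coded sample standing in for the `gh issue list` output so the logic is visible:

```shell
#!/bin/sh
# Sketch of what /sprint-status wraps. In the real command the input comes
# from gh queries; here each sample line is "state blocked-marker".
SAMPLE='open blocked
open -
closed -
open -
closed -'

sprint_status() {
  total=$(echo "$1" | grep -c '')              # one line per sprint item
  done_count=$(echo "$1" | grep -c '^closed')
  blocked=$(echo "$1" | grep -c 'blocked')
  echo "done: $done_count/$total, blocked: $blocked"
}

sprint_status "$SAMPLE"   # → done: 2/5, blocked: 1
```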

I can open github.com if I want to see the board visually. Most days I don't.

Capturing bugs from conversation

This is the part I didn't expect.

Before, when I'd spot a bug in the middle of something else, I had two bad options. Stop everything and file the ticket properly, which meant three minutes of context switch for a one-line observation. Or let it sit in my head and hope I'd remember later. Half the time I didn't.

Now the flow is natural language. "Hey, I just noticed the profile avatar stretches on iPad when you rotate to landscape. Can you track this?" I drop the screenshot. Then I get back to what I was doing.

Claude does three things in sequence.

It investigates. Greps the codebase for the component, looks at the style props, forms a hypothesis about the root cause. Nothing heavy, a minute of reconnaissance so the issue has actual context instead of just a symptom.

It writes the issue body. Description, repro steps, suspected files, screenshot attached. The kind of body a reviewer can pick up cold.

It files with gh issue create, adds the item to the GitHub Project, sets the iteration to the current sprint, attaches the right labels (bug, area:mobile, probably ota-safe), and assigns a developer. Default assignee is me as reporter unless I specify someone else.
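The command sequence for that last step looks roughly like this. Every ID, URL, and title below is a placeholder (the real values come from the rules file), and the dry-run wrapper prints the commands instead of executing them so the shape is visible without a live repo:

```shell
#!/bin/sh
# Sketch of the filing sequence — all IDs, labels, and URLs are placeholders.
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "+ $*" || "$@"; }

run gh issue create \
  --title "Profile avatar stretches on iPad landscape" \
  --body-file /tmp/issue-body.md \
  --label bug --label area:mobile --label ota-safe \
  --assignee jdesauw

# Add the new issue to the project, then pin it to the current iteration.
run gh project item-add 3 --owner acme \
  --url "https://github.com/acme/app/issues/123"
run gh project item-edit --project-id PVT_xxx --id PVTI_xxx \
  --field-id PVTIF_xxx --iteration-id IT_xxx
```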

No UI, no context switch, and the bug is recorded in a state a teammate can actually act on. The workflow cost of capture collapsed. The gap between noticing and filing is under ten seconds now, which means we file everything we notice instead of losing half of it to friction. Our bug hygiene is better than it was with Linear, for the least glamorous reason: the tool costs nothing to use.

Why not just install a Linear MCP

People ask us this every time. Three reasons.

Tokens. The MCP's schema lives in your context window whether you use it that turn or not. At two devs and hundreds of messages a week, it's measurable.

Dependency surface. gh is always there because GitHub is always there. A Linear MCP is one more moving part that can update, fail, or drift out of sync with the upstream API.

UI graveyard. The moment the agent can file from conversation, nobody opens Linear except out of habit. Tickets start getting half-updated there, the real state lives in GitHub, and you're paying for a source of truth that isn't the source of truth anymore.

Going native to the platform we already used killed all three problems at once.

Where this might break

We don't know exactly where it stops working.

For two developers and a designer sharing a project, we've hit no ceiling yet. Backlog stays current. Nothing falls through because clicking is no longer part of the flow.

At twenty developers with cross-team dependencies and dedicated product managers, I'd expect this setup to creak. Assignment logic gets political, review load needs routing, reporting demands are shaped by people who don't live in a terminal. This isn't a Jira replacement for a 200-person company. For the kind of small, focused team that ships most production software, it works.

The real fragility point is the rules file. If .ai-rules/github-project.md goes stale (a new label, a renamed field, a new iteration naming convention), Claude starts filing issues in the wrong slots. Keeping that file current is a weekly habit, the same habit you'd keep for any piece of infrastructure.
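A cheap way to keep that habit honest is a drift check. A sketch, with inlined sample lists standing in for the rules file and for `gh label list` output (labels here are placeholders):

```shell
#!/bin/sh
# Sketch of a weekly drift check for the rules file. In CI, the repo list
# would come from `gh label list`; both lists are inlined samples here.
RULES_LABELS='bug
area:mobile
ota-safe'
REPO_LABELS='bug
area:mobile
needs-native-build'

# Print every label the rules file mentions that the repo no longer has.
check_drift() {
  for label in $1; do
    echo "$2" | grep -qx "$label" || echo "$label"
  done
}

check_drift "$RULES_LABELS" "$REPO_LABELS"   # → ota-safe
```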

Everything else holds up.

The actual takeaway

Most teams that adopt AI coding tools do it by MCPing every tool they already use. Linear MCP, Jira MCP, Slack MCP, Notion MCP. Each layer costs tokens and adds failure modes. On top of that, the agent now has another UI to keep in sync.

We went the other way. Pick the minimum platform that already does the job. Write a rules file that makes your team legible to the agent. Let the agent drive that platform through the CLI the way a human would.

That's the whole system. A CLAUDE.md, a rules file for GitHub Projects, a few slash commands. Linear's off the stack. We ship more. And because filing a bug costs nothing anymore, we capture everything we notice.

The full walkthrough of the rules file, the gh workflows, and the slash commands we built for this is in the agentic coding course.

If you want to strip your team's PM stack and route work through the agent instead, let's talk about what that would look like for your codebase.
