Why I stopped using MCP for AI coding stuff

Something has shifted quietly in 2026.

The developers I know/respect—the ones actually shipping, not just posting about shipping—stopped talking about MCP. No dramatic announcement. No hot take thread. They just… moved on. Back to the terminal, back to tools they’d been using for years, and suddenly their agents started doing things that felt almost unfair to watch.

Turns out the future of agentic coding was already installed on my machine.

Here’s what nobody wants to admit

We got seduced by abstraction. MCP looked clean on a diagram—standardized tool servers, structured integrations, everything typed and documented. Very enterprise. Very serious.

But diagrams don’t ship code. And in practice? You’re burning context on schema definitions before your agent has done a single useful thing. You’re maintaining custom servers that duplicate CLIs which already exist and work fine. You’re losing the thing that makes Unix tools genuinely great—the ability to pipe five commands together in ways nobody anticipated, solving problems that weren’t supposed to be solvable.

I’ve watched developers spend a week building an MCP integration for something curl | jq would’ve handled in eleven seconds. That’s not a workflow. That’s a hobby.
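Here's the kind of thing I mean. The endpoint below is a placeholder, and the echo stands in for the curl so the pipeline runs as written:

```shell
# Hypothetical: list open issues from some JSON API. In real life the
# first line would be: curl -s https://api.example.com/issues
echo '[{"state":"open","title":"fix login"},{"state":"closed","title":"old bug"}]' \
  | jq -r '.[] | select(.state == "open") | .title'
# → fix login
```

One line, zero servers to maintain.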

What actually works

Shell access.
Strong model.
Real tools.

git for version control (obviously). 
rg when grep isn’t cutting it. 
docker for environments. 
tail when something’s on fire in production. 
gh for PRs without leaving the terminal.
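And they compose. The classic five-stage word-frequency pipeline is the whole argument in one line (words.txt here is a stand-in for whatever you're actually analysing):

```shell
# Split into words, lowercase, sort, count, rank: five tools, one pipe.
printf 'the cat and the dog and the bird\n' > words.txt
tr -cs '[:alpha:]' '\n' < words.txt \
  | tr '[:upper:]' '[:lower:]' \
  | sort | uniq -c | sort -rn | head -3
# top line counts "the" three times
```

Nobody designed these tools to work together on this problem. They just do.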

Tools with twenty years of documentation, Stack Overflow answers, and—crucially—training data.

Models that have seen millions of shell sessions. They understand pipes and flags and stderr in a way that feels almost native.

Drop your agent into the project directory, describe the goal, and the loop basically runs itself. It checks what’s broken, edits what needs changing, runs tests, reads the failure output, tries again. No handholding. No elaborate integration layer sitting between the model and the work.
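The loop itself is almost embarrassingly simple. Here's a toy version with stand-ins for the real steps: `run_tests` greps for the fix and `fix_code` applies it, whereas a real agent reads the failure output and decides the edit itself.

```shell
# Toy check-edit-test loop. Both functions are placeholders.
echo "return 1" > lib.sh                      # start with a "bug"
run_tests() { grep -q "return 0" lib.sh; }    # stand-in test suite
fix_code()  { echo "return 0" > lib.sh; }     # stand-in edit
until run_tests; do
  fix_code        # read the failure, change the code, try again
done
echo "tests green"
```

Everything in between is the model's judgment, not integration plumbing.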

The agents people are actually using

Claude Code

  • Strong at large refactors
  • Understands architectural tradeoffs
  • Safer with destructive changes
  • Slower but deliberate

Used when:

  • Touching many files
  • Migrating frameworks
  • Reworking design patterns

ChatGPT (Advanced / o-series)

Not CLI-first, but heavily used in:

  • System design
  • DevOps debugging
  • Infrastructure planning
  • Terraform/Bicep design
  • Incident analysis

People use it as:

  • A co-architect
  • A reasoning partner
  • A debugging analyst

Especially strong in structured reasoning tasks.

Fast Build / CLI-Driven Agents

Codex CLI

  • Lightweight
  • Direct
  • Good when intent is clear

Used for:

  • “Generate this feature”
  • “Write the tests”
  • “Build a quick API”

Gemini CLI

  • Fast
  • Handles big repos
  • Good multimodal support

Used when:

  • You want decent reasoning but not deep deliberation
  • Large project context is needed
  • You want speed

IDE-Native Agent Experiences

Cursor

Probably the most used dev agent right now.

Why:

  • Feels native
  • Whole-codebase context
  • Inline refactors
  • Agent mode that edits multiple files

Used heavily in:

  • Startup teams
  • Full-stack dev
  • Rapid iteration

GitHub Copilot (with Copilot Chat / Workspace)

Still massive adoption.

Used for:

  • Autocomplete
  • PR summarisation
  • Test generation
  • Code explanation

Not as autonomous as others, but very embedded in enterprise.

Infra / DevOps-Focused Agents

Warp AI

Terminal-native agent.
Good for:

  • Command generation
  • Explaining CLI output
  • Kubernetes debugging

Aider

Very popular with engineers who:

  • Like git-driven workflows
  • Want structured file edits
  • Want diffs instead of magic overwrites

Feels very DevOps-y.

Use Case → Tool
Hard architectural thinking → Claude or ChatGPT
Fast iteration in code → Cursor
Enterprise-safe autocomplete → Copilot
Power-user CLI agent → Aider
Multi-model experimentation → OpenCode
Infra debugging → Warp AI

What this looks like in practice

Say there’s a deprecated function scattered across a monorepo. The agent runs rg to find every instance, builds a plan, makes the changes, diffs them, runs the test suite, fixes what breaks, commits with a sensible message. Start to finish, maybe fifteen minutes. No GitHub integration required. No MCP server. Just the tools your CI pipeline already uses.
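Written out by hand, that run looks roughly like this. The function names and paths are made up, the demo file is there so it runs as-is, and `sed -i` is the GNU form (swap in `rg -l` for the grep if you have ripgrep installed):

```shell
# Hypothetical rename of a deprecated call across a tree.
mkdir -p src && echo 'old_fetch(user)' > src/a.js    # demo file
grep -rl 'old_fetch(' src/ \
  | xargs sed -i 's/old_fetch(/new_fetch(/g'         # mechanical rename
grep -r 'new_fetch(' src/                            # verify the change
# then: run the test suite, fix fallout, commit with a sensible message
```

The agent does the same thing, except it also reads the test failures and iterates.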

Or a production auth bug. Agent pulls latest, installs, starts the dev server, tails the logs, curls the failing endpoint, reads the error, adjusts, tests again. It’s methodical in a way junior developers often aren’t—and it doesn’t get frustrated at 11pm.

Why this actually makes sense

These models weren’t trained on MCP schemas. They were trained on decades of shell history, documentation, tutorials, Stack Overflow threads, GitHub repos. The terminal isn’t a workaround—it’s the environment they know best. You’re not constraining them by using bash. You’re putting them in their natural habitat.

The teams figuring this out are shipping faster, spending less on tokens, and—maybe most importantly—actually understanding what their agents are doing. Every command is visible. Every decision is traceable. The loop is transparent in a way that elaborate integration layers fundamentally aren’t.

Skip the architecture diagram. Skip the custom server.

Open a terminal. Point the agent at your codebase. Tell it what you need.

That’s it. That’s the whole thing.
