Last Tuesday I was setting up a new project and wanted to add three MCP servers: one for our Postgres database, one for semantic code search, and one for our internal documentation. I use Claude Code as my daily driver, Cursor for visual work, and VS Code with Copilot for certain debugging sessions.
That meant configuring the same three MCP servers in three different locations, with three different config file formats, using three different path conventions. By the time I finished, I’d spent over three hours and still had a typo in the Cursor config that took another 20 minutes to debug.
This is the worst developer experience problem in AI coding right now, and nobody’s talking about it.
The Fragmentation Problem
Here’s what MCP server configuration looks like across the tools I use daily:
Claude Code wants a .mcp.json in your project root:
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": { "DATABASE_URL": "..." }
    }
  }
}
Cursor stores it in .cursor/mcp.json with a slightly different structure.
VS Code puts it in .vscode/mcp.json with its own format.
Codex reads from .codex/mcp.json.
Same protocol, same servers, four different config files. And if you use Gemini CLI or Goose, that’s two more. Each one has its own path, its own JSON schema, and its own quirks about how environment variables get resolved.
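To make the divergence concrete, here's my best recollection of the same Postgres server in VS Code's .vscode/mcp.json. Note the top-level key is servers rather than mcpServers, and stdio servers declare a type. Treat this as a sketch: the schema details may have drifted, so verify against the VS Code docs before copying it.

```json
{
  "servers": {
    "postgres": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": { "DATABASE_URL": "..." }
    }
  }
}
```

Functionally identical to the Claude Code file above, structurally incompatible with it.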
It’s like the pre-USB era where every phone had a different charger. Except worse, because at least phone chargers didn’t have slightly different voltage requirements.
Why This Matters More Than You Think
MCP (Model Context Protocol) is becoming the standard way AI tools connect to external data. Anthropic launched it, and now practically every AI coding tool supports it. The protocol itself is well-designed — standardized, open, and tool-agnostic.
But “tool-agnostic protocol” means nothing if every tool configures it differently. The friction isn’t in MCP itself; it’s in the integration layer.
I’ve seen this kill adoption on my team. I recommended an MCP server for our internal APIs — it would let any AI tool query our service catalog, read deployment configs, and check monitoring dashboards. Great in theory. In practice, two of my engineers set it up in Claude Code and never bothered configuring it in their other tools. The third gave up entirely because they couldn’t get the Cursor config right.
That’s not an MCP problem. That’s a distribution problem.
Enter add-mcp
add-mcp is a CLI built by the Neon team (the serverless Postgres folks) that solves exactly this. One command, all your tools:
npx add-mcp https://mcp.example.com/mcp
It detects which AI coding tools you’re using in your project, then writes the correct config file for each one. Claude Code gets its .mcp.json, Cursor gets its .cursor/mcp.json, VS Code gets its .vscode/mcp.json. All from one command.
What Works Well
Auto-detection is smart. It checks your project directory for existing config files and only targets tools you actually use. If you don’t have a .cursor folder, it skips Cursor. No phantom configs cluttering your project.
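The detection approach is simple enough to sketch. This is not add-mcp's actual source, just a hypothetical illustration of marker-based detection: check for each tool's telltale file or folder in the project root, and only target the tools whose markers exist.

```python
from pathlib import Path

# Marker file/folder -> tool it indicates.
# Hypothetical mapping for illustration; not add-mcp's real source.
MARKERS = {
    ".mcp.json": "claude-code",
    ".cursor": "cursor",
    ".vscode": "vscode",
    ".codex": "codex",
}

def detect_tools(project: Path) -> list[str]:
    """Return the tools whose marker file or folder exists in the project root."""
    return [tool for marker, tool in MARKERS.items() if (project / marker).exists()]
```

A project with only a .cursor and a .vscode folder gets exactly two configs written, and nothing else.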
Global install option. For servers you want everywhere (like a database MCP), you can install globally:
npx add-mcp https://mcp.neon.tech/mcp -g -y -a cursor -a claude-code
This saved me from configuring the Postgres MCP server in every single project.
Non-interactive mode. The -y flag skips all prompts. Great for team onboarding scripts or CI setup:
npx add-mcp https://your-internal-api.com/mcp -y
What Doesn’t Work (Yet)
No removal command. You can add servers but there’s no remove-mcp or npx add-mcp --remove. If you want to uninstall, you’re back to editing config files manually. This feels like an oversight that’ll get fixed, but it’s annoying now.
Limited validation. It writes the config but doesn’t verify the MCP server is actually reachable. I had a misconfigured URL that add-mcp happily wrote to all four config files, and I didn’t discover the problem until I tried to use it.
New tool support lags. When a new AI coding tool adds MCP support (like Zed recently), add-mcp needs to be updated. This is open source and takes PRs, but there’s always a gap.
No diff/preview mode. I want a --dry-run flag that shows what config changes it would make before writing. Especially important when running it on a shared repo where config files are committed.
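The behavior I'm asking for is cheap to build, which is why its absence stings. A minimal sketch of what a --dry-run could do, using hypothetical names: merge the new server into a copy of the existing config in memory, then print a unified diff instead of writing.

```python
import difflib
import json
from pathlib import Path

def dry_run(config_path: Path, name: str, entry: dict) -> str:
    """Return the unified diff a merge would produce, without writing anything.

    Hypothetical sketch of the --dry-run flag I want; not add-mcp's API.
    """
    before = json.loads(config_path.read_text()) if config_path.exists() else {}
    after = json.loads(json.dumps(before))  # cheap deep copy
    after.setdefault("mcpServers", {})[name] = entry
    return "".join(difflib.unified_diff(
        json.dumps(before, indent=2).splitlines(keepends=True),
        json.dumps(after, indent=2).splitlines(keepends=True),
        fromfile=str(config_path),
        tofile=f"{config_path} (proposed)",
    ))
```

Run it before committing config changes to a shared repo, and the diff doubles as review material for the PR.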
The Bigger Ecosystem Shift
add-mcp is a symptom of a larger trend: the AI coding tool ecosystem is fragmenting on configuration while standardizing on protocols.
Augment Code’s Context Engine MCP is another signal. They’ve released their proprietary semantic code search as an MCP server, which means any tool that supports MCP can use Augment’s codebase understanding. That’s powerful — you’re no longer locked into one editor’s context engine.
Vercel’s npx skills tackles the same fragmentation problem but for agent skills (system prompts and tool definitions). More editors are converging on a .agents/ folder for skills, which is progress. But MCP config unification is further behind.
My prediction: within six months, we’ll see either a standard config location (probably .agents/mcp.json or similar) or one of these CLI tools becomes the de facto standard. add-mcp has a head start.
Practical Advice for Teams
If you’re managing MCP servers across a team, here’s what I’d recommend:
Commit your MCP configs to the repo. Don’t make each developer configure locally. Use add-mcp to generate them once, commit the result, and everyone gets the same setup.
Use environment variables for secrets. Don't hardcode database URLs in committed config files. All MCP config formats support env fields.
Start with 1-2 servers max. MCP server sprawl is real. I've seen projects with 8 configured servers where developers only actually use 2-3. Each server adds startup latency to your AI tools.
Document which servers your project uses and why. A one-line comment in your README saves the next person from wondering what @acme/internal-docs-mcp does.
Pin your MCP server versions. Running npx -y some-mcp@latest in production configs means your AI tools might break when the server updates. Specify versions.
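Putting the secrets and pinning advice together, a committed .mcp.json might look like the sketch below. The version number is illustrative, and the ${DATABASE_URL} placeholder assumes your tool expands environment variables in config values — Claude Code does, but the syntax and support vary by tool, so check yours.

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres@0.6.2"],
      "env": { "DATABASE_URL": "${DATABASE_URL}" }
    }
  }
}
```

Everyone on the team gets the same pinned server; the secret stays in each developer's shell environment.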
Is MCP the USB-C Moment for AI Tools?
Probably. The protocol is open, well-designed, and has critical mass. Anthropic, OpenAI, Google, Microsoft, and most independent tools all support it. That’s rare for a standard this young.
But USB-C also took years to kill the last Lightning cable. MCP config fragmentation is the Lightning cable of AI coding — technically unnecessary, annoying, and will eventually disappear. Tools like add-mcp are the adapters that make life bearable in the meantime.
The real win will be when I don’t need add-mcp at all because every tool reads from the same file. Until then, it’s the first thing I run on every new project.