21 March 2026
8 min read
Aiqbee Team
MCP Is Not Dead: Why Tool Discovery and Interoperability Still Matter
The "MCP is dead" narrative misses the point. Token costs are real, but so is the value of dynamic tool discovery, multi-tool interoperability, and standardised enterprise access control. Here is our take.
In late February 2026, a post titled "MCP is dead. Long live the CLI" hit the front page of Hacker News and kicked off a wave of hot takes. Perplexity's CTO said they were abandoning MCP internally. ScaleKit published benchmarks showing MCP costs 4-32x more tokens than CLI equivalents. The "MCP is dead" narrative took hold fast.
We build on MCP every day at Aiqbee. Our MCP server exposes 17+ tools for AI agents to search, read, and manage knowledge graphs. We have a stake in this debate, but we also have production data. The reality is messier than the headlines.
The Criticism Is Real — And Partially Right
The critics are right about token costs. MCP does consume context window tokens for tool schemas. ScaleKit's benchmark found that a simple "what language is this repo?" query needed 1,365 tokens via CLI but 44,026 via MCP, because 43 tool definitions got injected into the context before the agent processed a single message.
The GitHub MCP server ships with 93 tools at a context cost of around 55,000 tokens. That's over a quarter of Claude's 200K context window before you've asked a question. For a developer working solo on their own repo, using gh CLI with an 800-token skill file is genuinely more efficient.
Security is a fair concern too. Knostic identified 1,862 internet-exposed MCP servers. Of 119 they manually checked, every single one allowed access to internal tool listings with zero authentication. Bitsight found roughly 1,000 exposed servers with no authorisation. That needs fixing.
Where the "MCP Is Dead" Narrative Falls Apart
The critics are benchmarking one scenario (a solo developer using one tool on one repo) and generalising it to the whole protocol. That's like benchmarking HTTP against a Unix pipe for local file reads and concluding the web is dead.
1. Tool Discovery Changes the Game
CLI tools require the agent to know which tools exist, what flags they accept, and how to parse their output. That works when you have five tools. It breaks down when you have an ecosystem of services that an agent needs to discover at runtime.
MCP's tool discovery lets an AI agent connect to a server and learn what's available, with no prior knowledge of the API. The AAIF MCP Registry now lists over 10,000 public servers. Kong has built MCP Registry into Kong Konnect. Databricks provides MCP servers that let agents discover tools on the fly without hardcoded names or custom parsers.
CLI can't do this. Discovery isn't about running a known command. It's about finding the right capability in a system you've never seen before.
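To make the discovery point concrete, here is a minimal sketch of the JSON-RPC exchange an MCP client uses to learn a server's capabilities at runtime. The `tools/list` method and the shape of the response follow the MCP specification; the specific tool name and schema shown are hypothetical.

```python
import json

# An MCP client discovers capabilities with a single JSON-RPC call --
# no prior knowledge of the server's API is required.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A hypothetical server response: each advertised tool carries a name,
# a human-readable description, and a JSON Schema for its inputs.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_brain",
                "description": "Full-text search over a knowledge graph.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

def discover_tools(response: dict) -> list[str]:
    """Extract the advertised tool names from a tools/list response."""
    return [tool["name"] for tool in response["result"]["tools"]]

print(json.dumps(list_request))
print(discover_tools(list_response))  # ['search_brain']
```

An agent that has never seen this server before now knows what it can call and what arguments each call takes, which is exactly the step a CLI has no standard equivalent for.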
2. Cross-Tool Access Without Token Overhead
The token cost argument assumes every tool schema is dumped into the context window on every request. But that's an implementation problem, not a protocol problem. Well-designed MCP servers expose focused tool sets. Our Aiqbee MCP server exposes 17 tools for knowledge graph operations — not 93 generic GitHub operations. The context cost is proportional to scope.
MCP also handles things outside the context window entirely. Authentication, connection setup, and capability listing happen at the protocol level. A service account authenticates once via OAuth2 and the token gets reused. With CLI, the agent reasons about credentials on every call, and that reasoning costs tokens.
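The "proportional to scope" claim is back-of-envelope arithmetic: injected schema tokens scale roughly linearly with the number of tools a server advertises. The per-tool figure below is derived from the GitHub server numbers cited above (about 55,000 tokens across 93 tools) and is an illustrative average, not a measured constant; real schemas vary widely in size.

```python
# Rough average schema cost per tool, derived from ~55,000 tokens / 93 tools
# in the GitHub MCP server example above. Illustrative only.
TOKENS_PER_TOOL_SCHEMA = 590

def schema_context_cost(tool_count: int, per_tool: int = TOKENS_PER_TOOL_SCHEMA) -> int:
    """Estimate context-window tokens consumed by injected tool schemas."""
    return tool_count * per_tool

github_cost = schema_context_cost(93)   # ~55,000 tokens: a quarter of a 200K window
focused_cost = schema_context_cost(17)  # ~10,000 tokens: a twentieth of it
print(github_cost, focused_cost)
```

The lever is tool count, which server authors control. Nothing in the protocol forces a 93-tool kitchen sink into every context.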
3. Enterprise Access Control Needs a Protocol
ScaleKit's own conclusion makes this point: for products where agents act on behalf of customers, or for multi-tenant infrastructure, MCP's authorisation model is necessary. CLI tools authenticate as whoever runs them. MCP servers authenticate as the specific service account or user that was granted access to specific resources.
At Aiqbee, our MCP server enforces the same permission model as our web app. A service account with Read access to a Brain cannot write to it via MCP. An agent connected to one Brain cannot discover another Brain it hasn't been granted access to. You cannot build this with CLI flags.
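The enforcement described above can be sketched as a check the server runs before dispatching any tool call. Everything here is a hypothetical illustration, not Aiqbee's actual implementation: the type names, the tool names, and the grant structure are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from enum import Enum

class Permission(Enum):
    READ = "read"
    WRITE = "write"

@dataclass
class ServiceAccount:
    name: str
    # brain_id -> set of permissions granted to this account
    grants: dict[str, set[Permission]] = field(default_factory=dict)

WRITE_TOOLS = {"create_node", "update_node"}  # hypothetical tool names

def authorize(account: ServiceAccount, brain_id: str, tool: str) -> bool:
    """Gate every tool call: no grant means the Brain is not even
    discoverable; a read-only grant cannot invoke write tools."""
    perms = account.grants.get(brain_id)
    if perms is None:
        return False
    needed = Permission.WRITE if tool in WRITE_TOOLS else Permission.READ
    return needed in perms

bot = ServiceAccount("ci-bot", grants={"brain-a": {Permission.READ}})
print(authorize(bot, "brain-a", "search_brain"))  # True
print(authorize(bot, "brain-a", "update_node"))   # False
print(authorize(bot, "brain-b", "search_brain"))  # False
```

Because the check sits in the server, it applies identically to every connected client. A CLI flag, by contrast, runs with whatever permissions the invoking user already has.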
4. The Adoption Numbers Speak for Themselves
97 million monthly SDK downloads. OpenAI adopted it. Google adopted it. Microsoft supports it. The Linux Foundation runs the governance. In Zuplo's survey, 72% of developers expect their MCP usage to grow over the next year, and 54% believe it will become a lasting standard. By 2026, 75% of API gateway vendors are projected to ship MCP features.
Protocols with that kind of backing don't vanish. They grow up.
Skills and Direct API Calls Have Their Place
We are not arguing that MCP is always the right answer. It clearly is not.
If you are a solo developer running gh, git, and npm in a terminal, an 800-token skill file that teaches your agent the right CLI flags is more efficient than an MCP server; ScaleKit's benchmarks bear this out. Skills — small, curated instruction documents — cut both tool calls and latency by roughly a third compared to naive CLI use, making them the best ROI for individual developer workflows.
Direct API calls make sense when you control both sides, the interface is stable, and there's no multi-tenant access control to worry about. Building a Slack bot that hits one internal API? You don't need MCP for that.
The question is not "MCP or CLI?" — it's "what does this use case need?"
When MCP Is the Right Choice
- Multi-tenant platforms where agents act on behalf of different users with different permissions
- Enterprise environments requiring audit trails, access control, and governance
- Ecosystems where agents need to discover capabilities at runtime across multiple services
- Cross-tool workflows where the same knowledge needs to be accessible from Claude Code, Cursor, VS Code Copilot, and custom agents
- Non-CLI services like knowledge bases, design tools, CRMs, and internal platforms that have no command-line interface
- Team and organisational contexts where knowledge is shared and permissions matter
What Needs to Improve
MCP is not perfect. The 2026 roadmap from the MCP project acknowledges this. Key areas that need work:
- Token efficiency — servers should expose focused tool sets, not kitchen sinks. Lazy tool loading and schema compression are on the roadmap.
- Authentication standards — the OAuth2 story needs to be production-ready out of the box, not hand-rolled per server. This is actively being addressed.
- Security defaults — exposed servers with no auth should not be possible. The ecosystem needs secure-by-default tooling.
- Registry governance — with 10,000+ public servers, quality and trust signals matter.
Our Position
MCP has dropped from the hype peak into what Gartner calls the trough of disillusionment. That's not death. It's the path every significant standard follows on the way to being useful.
We use MCP at Aiqbee because our use case demands it. Multi-tenant knowledge access, permission enforcement across tools, and runtime discovery are not problems you solve with CLI flags. Our customers connect Claude Code, Cursor, Copilot, and custom agents to the same Brains through the same MCP server, with consistent access controls.
The right mental model is not "MCP vs CLI" — it is "MCP for shared, discoverable, governed services" and "CLI/skills for known, local, developer-scoped tools." Both have a place. Neither is dead.
Try it yourself. Connect an Aiqbee Brain to your AI coding assistant in under 60 seconds. Free trial at app.aiqbee.com.
