Ferentin vs OSS AI Gateways: LiteLLM, Portkey, Helicone
LiteLLM, Portkey and Helicone are open-source AI gateways. Each now includes some form of MCP support, routing and observability for engineering teams. Ferentin is a different category: a managed enterprise trust layer where every request carries a verified user identity, every tool call is policy-evaluated against fine-grained RBAC and ABAC, and every interaction lands in a cryptographically signed, tamper-evident audit log. The OSS gateways are proxies. Ferentin is the trust layer that subsumes them.
At a glance
How the four compare across the dimensions that matter for enterprise AI.
| Dimension | Ferentin | LiteLLM | Portkey | Helicone |
|---|---|---|---|---|
| Primary scope | Trust layer: identity, policy and audit across LLMs, MCP servers and AI agents | LLM proxy with bolt-on MCP gateway and A2A agent support | AI gateway with MCP gateway, routing and observability | LLM observability with proxy mode and gateway |
| Architecture | Multi-plane: service edge + control plane + observability plane, with public-edge or customer-private-edge deployment | Single proxy process; managed cloud option | Single gateway service; managed cloud or private deployment | Observability backend with optional proxy mode |
| Identity model | Enterprise SSO via OIDC (Okta, Entra ID, Google, Ping, Auth0); SCIM provisioning; verified user identity carried on every request | API keys + virtual keys; SSO on enterprise plan | Virtual keys; user metadata forwarded to MCP servers; SSO on enterprise plan | API keys and virtual keys |
| User mode vs agent mode | Native distinction: interactive user mode (OAuth2 with consent) vs tenant-bound agent mode, with mode-aware authorization | Not modeled | Not modeled | Not modeled |
| Authorization depth | Fine-grained RBAC and ABAC on every LLM call, tool invocation and agent action; MCP elicitations route sensitive data around the model | Per-key budgets, model whitelists, project scopes | RBAC at team/server/tool level; instant revocation | Project-level access; request scoring |
| Audit | Cryptographically signed, tamper-evident receipts per call; replayable; tenant-isolated immutable log | Standard request logs | Request traces and observability logs | Detailed observability logs and analytics |
| Secret handling | Per-tenant envelope encryption (AES-256-GCM DEKs rooted in AWS KMS) with edge-local decryption; no plaintext at rest | Provider keys typically stored as environment variables | Virtual keys; vault model not detailed | Configurable; not detailed |
| Compliance | SOC 2 Type II, independent VAPT, multi-tenant cryptographic isolation | Compliance is the operator's responsibility | SOC 2, ISO, HIPAA, GDPR, CCPA on enterprise | SOC 2, GDPR |
| Deployment model | Managed multi-tenant SaaS; optional customer-private edge for data residency | Self-hosted proxy or managed cloud | Self-hosted, managed cloud or private deployments (AWS, Azure, GCP, K8s) | Self-hosted (Docker/Helm) or managed cloud |
| Best fit | Security teams governing AI for the enterprise across the full agent surface | Engineering teams that want a unified LLM API and a basic MCP gateway in their app stack | Engineering teams adding routing, MCP and observability to LLM apps | Engineering teams that need deep LLM observability with a lightweight gateway |
Where each tool fits
Choose Ferentin over LiteLLM when
LiteLLM is a developer-stack LLM proxy with a recently added MCP gateway. Provider keys still live as environment variables, identity is API keys and virtual keys, and compliance is the operator's problem. There is no SOC 2 report, no tamper-evident audit, no enterprise identity model.
Ferentin is the trust layer for the same surface: verified enterprise identity on every call, envelope-encrypted secrets with edge-local decryption, signed audit receipts, SOC 2 Type II. See the LiteLLM supply chain analysis for why this matters.
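To make the contrast concrete, here is a minimal sketch of per-tenant envelope encryption with edge-local decryption, assuming AWS KMS via boto3 and AES-256-GCM from the `cryptography` package. The key alias, tenant identifiers and function names are illustrative assumptions, not Ferentin's published API; the point is the shape of the guarantee: no plaintext provider key at rest, decryption only at the edge, at call time.

```python
# Illustrative sketch: per-tenant envelope encryption with edge-local decryption.
# Key alias, tenant context and function names are assumptions for this example.
import os
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms")
KMS_KEY_ID = "alias/tenant-root-key"  # hypothetical per-tenant root key alias

def seal_provider_key(tenant_id: str, provider_key: bytes) -> dict:
    """Encrypt a provider secret under a fresh tenant-scoped DEK (AES-256-GCM)."""
    dek = kms.generate_data_key(KeyId=KMS_KEY_ID, KeySpec="AES_256",
                                EncryptionContext={"tenant": tenant_id})
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek["Plaintext"]).encrypt(nonce, provider_key, tenant_id.encode())
    # Only the wrapped DEK and ciphertext are stored; the plaintext DEK is discarded.
    return {"wrapped_dek": dek["CiphertextBlob"], "nonce": nonce, "ciphertext": ciphertext}

def open_provider_key(tenant_id: str, sealed: dict) -> bytes:
    """Decrypt at the edge, just-in-time for the outbound call; nothing plaintext at rest."""
    dek = kms.decrypt(CiphertextBlob=sealed["wrapped_dek"],
                      EncryptionContext={"tenant": tenant_id})["Plaintext"]
    return AESGCM(dek).decrypt(sealed["nonce"], sealed["ciphertext"], tenant_id.encode())
```

Compare that with a provider key sitting in `OPENAI_API_KEY` on the proxy host: any process or operator with access to the environment has the secret in the clear.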
Choose Ferentin over Portkey when
Portkey is the closest competitor on surface area: it has an MCP gateway, RBAC on tools, virtual keys and SOC 2 on enterprise. The gap is the identity primitive. Portkey forwards user metadata (email, team, role) into MCP calls. Ferentin terminates a verified enterprise identity at the edge and evaluates policy against it, with native user-mode versus agent-mode distinction and mode-aware authorization.
The other gap is the audit guarantee: cryptographically signed, replayable receipts versus standard observability traces. The trust layer subsumes the gateway. You do not run both.
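A sketch of what "signed and replayable" means in practice, using an Ed25519 key and a hash chain from the `cryptography` standard APIs. The field names and chaining scheme here are assumptions for illustration, not Ferentin's documented receipt format; the property that matters is that an edited, dropped or reordered record breaks verification.

```python
# Illustrative sketch of tamper-evident audit receipts: canonicalize, chain, sign.
# Field names and the hash-chain layout are assumptions, not a documented format.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # in practice, a per-tenant managed key

def issue_receipt(prev_hash: str, call: dict) -> dict:
    """Canonicalize one call record, chain it to the previous receipt, and sign it."""
    record = {"prev": prev_hash, "call": call}
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return {
        "record": record,
        "hash": hashlib.sha256(payload).hexdigest(),
        "signature": signing_key.sign(payload).hex(),
    }

def verify_chain(receipts: list[dict], public_key) -> bool:
    """Replay the log: any edited, dropped, or reordered receipt fails verification."""
    prev = "genesis"
    for r in receipts:
        payload = json.dumps(r["record"], sort_keys=True, separators=(",", ":")).encode()
        public_key.verify(bytes.fromhex(r["signature"]), payload)  # raises if tampered
        if r["record"]["prev"] != prev or hashlib.sha256(payload).hexdigest() != r["hash"]:
            return False
        prev = r["hash"]
    return True
```

An observability trace is a row in a database that an operator can edit; a chained, signed receipt is evidence an auditor can independently re-verify.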
Choose Ferentin over Helicone when
Helicone is observability-first: deep request logs, evals, analytics. Its MCP support is for querying Helicone's own data; it is not a governed gateway between agents and enterprise tools. Identity is API keys.
Ferentin solves the prevention half: identity-based policy enforcement on every LLM and MCP request before execution, with the same observability built in. Audit after the fact is not access control. The trust layer enforces policy on the way in, then records it on the way out.
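The difference is easiest to see as a sketch of request handling. The `Identity` shape, role names and deny rules below are illustrative assumptions, not Ferentin's actual policy language; what they show is enforcement before the call executes, rather than a log entry after it.

```python
# Illustrative sketch: authorize before execution, not log after it.
# Identity fields, role names and deny rules are assumptions for this example.
from dataclasses import dataclass

@dataclass
class Identity:
    subject: str        # verified user from the IdP, not an API key
    tenant: str
    roles: set[str]
    mode: str           # "user" or "agent"

def authorize(identity: Identity, action: str, resource: str) -> bool:
    """Evaluate RBAC/ABAC before the request reaches any LLM or MCP server."""
    if action.startswith("tool.") and "tools:invoke" not in identity.roles:
        return False
    return resource.startswith(f"{identity.tenant}/")   # tenant isolation

def handle_request(identity: Identity, action: str, resource: str, forward):
    if not authorize(identity, action, resource):
        raise PermissionError(f"{identity.subject} denied {action} on {resource}")
    response = forward(action, resource)   # only now does the call execute
    # ...the signed audit receipt would be recorded here, on the way out...
    return response
```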
Architectural differences
LiteLLM, Portkey and Helicone are HTTP proxies sitting in front of LLM provider APIs, with MCP gateways layered in more recently. They translate a unified API into provider-specific dialects, log requests and apply lightweight policy. Their primary buyer is the engineering team.
Ferentin is a different shape, built for a different buyer. It is a multi-plane platform: a service edge that terminates AI traffic, a control plane that enforces policy and an observability plane that records every interaction. The service edge speaks not only to LLM providers but also to MCP servers and internal AI tools through a unified policy model. The control plane integrates with enterprise identity (Okta, Entra ID, Google Workspace) so every request carries a verified user identity — not an API key, not a virtual key, not user metadata forwarded as a header. See the platform overview for the full architecture.
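For a sense of what "verified identity at the edge" means versus forwarded metadata, here is a minimal sketch that validates an OIDC access token against the IdP's JWKS using PyJWT. The issuer, audience and claim names are placeholders, not Ferentin's actual configuration.

```python
# Illustrative sketch: verify the caller cryptographically at the edge.
# Issuer URL, audience and claim names are placeholders for this example.
import jwt  # PyJWT

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"   # e.g. an Okta or Entra ID tenant
jwks_client = jwt.PyJWKClient(JWKS_URL)

def verify_identity(bearer_token: str) -> dict:
    """Validate the token signature and claims before any routing or policy decision.

    A forwarded header like `x-user-email: alice@corp.com` proves nothing;
    a signature validated against the IdP's published keys does.
    """
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    claims = jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience="ai-trust-layer",              # placeholder audience
        issuer="https://idp.example.com",
    )
    return {"subject": claims["sub"], "tenant": claims.get("org"), "claims": claims}
```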
Three primitives separate the trust layer from a proxy. Verified identity at the edge means policy is evaluated against an authenticated subject, with native user-mode versus agent-mode distinction. Mode-aware authorization with MCP elicitations means sensitive data (credential choices, destructive-action confirmations, OAuth scopes) routes around the model context, not through it — human-in-the-loop by protocol, not by convention. Cryptographically signed audit means every call produces a tamper-evident receipt that an auditor can replay weeks later and prove the exact sequence of policy decisions. None of the OSS gateways offer these.
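As a final illustration, here is a sketch of the second primitive, reusing the illustrative `Identity` shape from the earlier example. The sensitive-action list and the confirmation callback are assumptions; the point is that the confirmation travels to the user directly and never passes through the model context, so the model cannot see or fabricate the approval.

```python
# Illustrative sketch: mode-aware authorization with an out-of-band confirmation.
# SENSITIVE_ACTIONS and ask_user_directly are assumptions for this example.
SENSITIVE_ACTIONS = {"records.delete", "payments.transfer", "oauth.grant_scope"}

def authorize_tool_call(identity, tool: str, args: dict, ask_user_directly) -> bool:
    """User mode may confirm sensitive calls out-of-band; agent mode may not."""
    if tool not in SENSITIVE_ACTIONS:
        return True
    if identity.mode == "agent":
        return False            # agents cannot self-approve destructive actions
    # User mode: the prompt and answer bypass the model context entirely.
    return ask_user_directly(f"Allow '{tool}' with {args}?")
```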
The trust layer subsumes the LLM proxy. You do not run both. Once Ferentin is in place, the routing, observability and credential-handling responsibilities of an OSS gateway are delivered through one identity-aware boundary, with MCP, agent governance and tamper-evident audit added on top. For deeper context on how the threat model differs once an AI gateway holds enterprise credentials, see our analysis of the LiteLLM supply chain incident.
Move from LLM proxy to trust layer
The trust layer for AI agents. Zero trust from prompt to tool call, with MCP, identity and audit built in.