Security

Secure by design

Ferentin is built from the ground up with Zero-Trust principles, end-to-end encryption, tenant isolation and continuous compliance. Your data never leaves your trust boundary.

SOC 2 Type II

Independently audited

VAPT Assessed

Penetration tested

End-to-End Encryption

AES-256 + TLS 1.2+

Core architecture

Zero-Trust from the first line of code

Every layer of Ferentin enforces authentication, authorization and auditability. No shortcuts, no implicit trust.

Zero-Trust Architecture

Identity-first AI security

Every request is authenticated, every action is authorized against fine-grained policies and every interaction is logged. No implicit trust is granted to any user, application or AI agent.

SSO / OIDC · RBAC · SCIM · Audit trails

Governance & Policy

Configurable policies govern what users, teams and AI agents can access. Every action is evaluated in real time with full auditability.

No Model Training

Your prompts, responses and workspace data are never used for AI model training. Contractually enforced with all upstream providers.

Full Observability & Monitoring

Every AI interaction is logged with complete context including identity, policy decisions and content metadata. Continuous monitoring detects anomalies, abuse and policy violations with automated alerting.

Zero Trust for AI

Deny by default. Authorize explicitly.

AI clients cannot reach any LLM or MCP Server until policy explicitly permits it. Every layer of the AI stack enforces its own trust boundary.

LLM and MCP access is denied by default

AI clients, agents and applications have no access to any LLM or MCP Server until an administrator creates an explicit allow policy. Even sanctioned providers are locked down until policy grants access to specific models and servers within that provider.
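As a minimal sketch of what deny-by-default resolution means in practice: unless an explicit allow policy names the principal, the provider and the specific resource, access is refused. The names here (`AllowPolicy`, `is_allowed`, the team and model identifiers) are illustrative assumptions, not Ferentin's actual API or schema.

```python
# Hypothetical deny-by-default policy resolution; names and fields
# are illustrative, not Ferentin's real API.
from dataclasses import dataclass, field

@dataclass
class AllowPolicy:
    principal: str                                     # team, app or agent
    provider: str                                      # sanctioned LLM/MCP provider
    resources: set[str] = field(default_factory=set)   # specific models/servers

def is_allowed(policies: list[AllowPolicy], principal: str,
               provider: str, resource: str) -> bool:
    """Allow only if an explicit policy covers this principal, provider
    AND resource; with no matching policy, the default answer is deny."""
    return any(
        p.principal == principal
        and p.provider == provider
        and resource in p.resources
        for p in policies
    )

policies = [AllowPolicy("team-research", "openai", {"gpt-4o"})]

# Sanctioned provider, but only the explicitly granted model is reachable:
assert is_allowed(policies, "team-research", "openai", "gpt-4o")
assert not is_allowed(policies, "team-research", "openai", "o1")
assert not is_allowed(policies, "team-sales", "openai", "gpt-4o")
```

Note that even within a sanctioned provider, a resource not named in any policy stays locked down.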

Model and modality controls

Within a sanctioned LLM, policy controls which models are available and which capabilities are exposed. Restrict access to specific model versions, disable image generation, limit multi-modal inputs or block specific API surface areas per team or application.
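A per-team capability gate along those lines might look like the following sketch; the field names, model version string and capability labels are assumptions for illustration only.

```python
# Illustrative capability gating within a sanctioned provider;
# the policy schema shown here is hypothetical.
model_policy = {
    "allowed_models": {"gpt-4o-2024-08-06"},   # pin to a specific model version
    "capabilities": {"text"},                  # image generation not exposed
}

def check_request(model: str, capability: str) -> bool:
    """A request passes only if both the model and the capability
    are explicitly listed in the policy."""
    return (model in model_policy["allowed_models"]
            and capability in model_policy["capabilities"])

assert check_request("gpt-4o-2024-08-06", "text")
assert not check_request("gpt-4o-2024-08-06", "image_generation")
assert not check_request("gpt-4o-mini", "text")   # version not pinned by policy
```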

MCP Server and tool scoping

Each MCP Server connection is governed by policy that specifies which tools are permitted and which OAuth2 scopes define the authorization ceiling. An agent cannot invoke a tool or exceed a scope that policy has not explicitly granted.
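The two checks described above, a tool allow-list and an OAuth2 scope ceiling, can be sketched as follows. The server name, tool names and scope strings are hypothetical examples, not real Ferentin identifiers.

```python
# Sketch of tool and scope ceilings on an MCP Server connection;
# all identifiers below are illustrative.
mcp_policy = {
    "server": "github-mcp",
    "allowed_tools": {"search_issues", "get_file"},
    "scope_ceiling": {"repo:read"},   # OAuth2 scopes the agent may hold
}

def authorize_call(tool: str, requested_scopes: set[str]) -> bool:
    """Permit a tool call only if the tool is on the allow-list and
    every requested scope stays within the policy's ceiling."""
    return (tool in mcp_policy["allowed_tools"]
            and requested_scopes <= mcp_policy["scope_ceiling"])

assert authorize_call("get_file", {"repo:read"})
assert not authorize_call("delete_repo", {"repo:read"})   # tool never granted
assert not authorize_call("get_file", {"repo:write"})     # exceeds scope ceiling
```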

Toxic flow prevention

Policy evaluates not just individual resources but combinations. Certain pairings of MCP Servers, tools or data flows can create compound risk. Ferentin detects and blocks these toxic combinations before they execute, preventing lateral privilege escalation across the AI stack.
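One simple way to model compound risk is a deny-list of resource pairings that are each safe alone but dangerous together. The pairs below (an internal database server combined with a public web server, for instance, suggesting an exfiltration path) are invented examples of what such a rule set might contain.

```python
# Minimal sketch of toxic-flow detection: block known-risky
# combinations of resources before execution. Pairs are illustrative.
TOXIC_PAIRS = {
    frozenset({"internal-db-mcp", "public-web-mcp"}),        # exfiltration path
    frozenset({"email-send-tool", "credential-vault-tool"}),
}

def flow_is_safe(active_resources: set[str]) -> bool:
    """Reject any flow whose active resource set contains a toxic pairing,
    even though each resource may be individually permitted."""
    return not any(pair <= active_resources for pair in TOXIC_PAIRS)

assert flow_is_safe({"internal-db-mcp"})                     # safe in isolation
assert not flow_is_safe({"internal-db-mcp", "public-web-mcp"})
```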

Defense in depth

How we protect your data

Security is not an add-on. It is the foundation of every layer of the Ferentin platform.

Identity & Access Control

Enterprise SSO via OpenID Connect with support for Okta, Azure AD, Ping Identity and Google Workspace. SCIM-based provisioning and fine-grained RBAC enforce least-privilege access across all AI resources.

Encryption Everywhere

All data is encrypted at rest using AES-256 and in transit using TLS 1.2+. Secrets and API keys are stored in dedicated vaults with role-based access controls and are never exposed in plaintext.

Tenant Isolation

Every tenant operates in a logically isolated environment with dedicated encryption keys, dedicated signing keys, separate data stores and independent policy engines. Cross-tenant data access is architecturally impossible.

No Model Training on Your Data

Customer prompts, responses, files and workspace data are never used to train any AI model. Upstream LLM providers are contractually restricted from using customer data for training or retention.

Full Observability & Audit

Every AI interaction is logged with complete context including identity, policy decisions and content metadata. Immutable audit trails provide full visibility for compliance, forensics and governance.
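To illustrate what "complete context" and immutability might mean for such a record: each entry carries identity, the policy decision and content metadata, and one common way to make a trail tamper-evident is to hash-chain entries. The field names and chaining scheme here are assumptions, not Ferentin's actual log format.

```python
# Hypothetical shape of a hash-chained audit record; the schema is
# illustrative, not Ferentin's real log format.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(identity: str, action: str, decision: str,
                 metadata: dict, prev_hash: str) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "identity": identity,
        "action": action,
        "policy_decision": decision,
        "content_metadata": metadata,
        "prev_hash": prev_hash,   # chaining makes tampering detectable
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

r1 = audit_record("alice@example.com", "llm.chat", "allow",
                  {"model": "gpt-4o", "tokens": 512}, prev_hash="0" * 64)
r2 = audit_record("alice@example.com", "mcp.tool_call", "deny",
                  {"tool": "delete_repo"}, prev_hash=r1["hash"])
assert r2["prev_hash"] == r1["hash"]   # altering r1 would break the chain
```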

Continuous Monitoring

Platform activity is continuously monitored for anomalies, abuse and policy violations. Rate limiting, threat detection and automated alerting protect against misuse at every layer.

Hardened Infrastructure

WAF controls, network isolation, encrypted storage and adaptive rate limiting protect the platform at the infrastructure level. Regular penetration testing and vulnerability assessments ensure ongoing resilience.

Governance & Policy Engine

Configurable policies govern what users, teams and AI agents can access and do. Every action is evaluated against rules in real time, with full auditability of policy decisions and enforcement.

Responsible disclosure

We take security vulnerabilities seriously. If you believe you have found a security issue in Ferentin, please report it responsibly. We will acknowledge receipt within 24 hours.

FAQ

Security questions

Have another question? Contact us

Where is customer data stored?

Customer data is stored in AWS infrastructure in the United States. All data is encrypted at rest using AES-256 and in transit using TLS 1.2+. We support data residency requirements through our enterprise plans.

Get started today

Ready to add the trust layer?

Start for free or talk to our team about enterprise security requirements. Deploy with confidence in minutes, not months.