Security

Secure by design

Ferentin is built from the ground up with Zero-Trust principles, end-to-end encryption, tenant isolation and continuous compliance. Your data never leaves your trust boundary.

SOC 2 Type II

Independently audited

VAPT assessed

Penetration tested

End-to-End Encryption

AES-256 + TLS 1.2+

Core architecture

Zero-Trust from the first line of code

Every layer of Ferentin enforces authentication, authorization and auditability. No shortcuts, no implicit trust.

Zero-Trust Architecture

Identity-first AI security

Every request is authenticated, every action is authorized against fine-grained policies and every interaction is logged. No implicit trust is granted to any user, application or AI agent.

SSO / OIDC · RBAC · SCIM · Audit trails
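
The three guarantees above — authenticate every request, authorize every action, log every interaction — can be sketched as a minimal request pipeline. All names and stores here are illustrative, not Ferentin's actual API:

```python
import datetime
import json

# Illustrative in-memory stores; a real deployment would back these
# with an identity provider, a policy engine and an append-only log.
SESSIONS = {"token-abc": "alice"}
POLICIES = {("alice", "read", "workspace:demo"): True}
AUDIT_LOG = []

def handle_request(token: str, action: str, resource: str) -> bool:
    """Authenticate, authorize, and audit a single request.

    No implicit trust: an unknown token or a missing policy entry
    both result in denial, and every decision is logged either way.
    """
    identity = SESSIONS.get(token)                       # authentication
    allowed = identity is not None and POLICIES.get(
        (identity, action, resource), False)             # authorization
    AUDIT_LOG.append(json.dumps({                        # auditability
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "identity": identity, "action": action,
        "resource": resource, "allowed": allowed,
    }))
    return allowed

print(handle_request("token-abc", "read", "workspace:demo"))  # True
print(handle_request("bad-token", "read", "workspace:demo"))  # False
```

Note that the deny path is logged just as the allow path is — auditability applies to every decision, not only successful ones.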

Governance & Policy

Configurable policies govern what users, teams and AI agents can access. Every action is evaluated in real time with full auditability.

No Model Training

Your prompts, responses and workspace data are never used for AI model training. Contractually enforced with all upstream providers.

Full Observability & Monitoring

Every AI interaction is logged with complete context including identity, policy decisions and content metadata. Continuous monitoring detects anomalies, abuse and policy violations with automated alerting.

Defense in depth

How we protect your data

Security is not an add-on. It is the foundation of every layer of the Ferentin platform.

Identity & Access Control

Enterprise SSO via OpenID Connect with support for Okta, Azure AD, Ping Identity and Google Workspace. SCIM-based provisioning and fine-grained RBAC enforce least-privilege access across all AI resources.
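Least-privilege RBAC of the kind described reduces to checking that a permission is explicitly granted by one of the caller's roles, with everything else denied by default. A sketch with a hypothetical role map (not Ferentin's actual role model):

```python
# Hypothetical role -> permission mapping; real roles and permission
# names would come from the platform's RBAC configuration.
ROLE_PERMISSIONS = {
    "viewer": {"prompt:read"},
    "editor": {"prompt:read", "prompt:write"},
    "admin":  {"prompt:read", "prompt:write", "policy:manage"},
}

def has_permission(roles: list[str], permission: str) -> bool:
    """Grant only permissions explicitly attached to one of the
    user's roles; anything unlisted is denied by default."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(has_permission(["viewer"], "prompt:write"))            # False
print(has_permission(["viewer", "editor"], "prompt:write"))  # True
```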

Encryption Everywhere

All data is encrypted at rest using AES-256 and in transit using TLS 1.2+. Secrets and API keys are stored in dedicated vaults with role-based access controls and are never exposed in plaintext.
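The TLS 1.2+ floor mentioned above can be enforced on the client side with Python's standard `ssl` module — a minimal sketch of what "TLS 1.2+" means in practice:

```python
import ssl

# Create a client context that refuses any protocol older than TLS 1.2;
# handshakes offering TLS 1.0/1.1 will fail outright.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.minimum_version)  # TLSVersion.TLSv1_2
```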

Tenant Isolation

Every tenant operates in a logically isolated environment with dedicated encryption keys, dedicated signing keys, separate data stores and independent policy engines. Cross-tenant data access is prevented at the architectural level.
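Dedicated per-tenant keys are typically derived from (or wrapped by) a master key so that one tenant's key material can never unlock another's data. A minimal sketch of HMAC-based per-tenant derivation — the function name and the placeholder master key are illustrative, not Ferentin's actual scheme:

```python
import hashlib
import hmac

def derive_tenant_key(master_key: bytes, tenant_id: str) -> bytes:
    """Derive a dedicated 256-bit key per tenant (HKDF-extract style).

    Derivation is one-way and keyed by tenant ID, so tenant A's key
    reveals nothing about tenant B's.
    """
    return hmac.new(master_key, b"tenant:" + tenant_id.encode(),
                    hashlib.sha256).digest()

master = b"\x00" * 32  # placeholder; real master keys live in a KMS/vault
key_a = derive_tenant_key(master, "tenant-a")
key_b = derive_tenant_key(master, "tenant-b")
print(key_a != key_b)  # True
```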

No Model Training on Your Data

Customer prompts, responses, files and workspace data are never used to train any AI model. Upstream LLM providers are contractually restricted from using customer data for training or retention.

Full Observability & Audit

Every AI interaction is logged with complete context including identity, policy decisions and content metadata. Immutable audit trails provide full visibility for compliance, forensics and governance.
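The immutability of an audit trail can be illustrated with hash chaining, where each entry commits to the hash of the one before it, so altering any past entry breaks verification of the chain. This is an assumed design for illustration, not Ferentin's documented implementation:

```python
import hashlib
import json

def append_entry(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "event": event}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list[dict]) -> bool:
    """Recompute every hash in order; any tampering breaks the chain."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps({"prev": prev, "event": entry["event"]},
                             sort_keys=True)
        if (entry["prev"] != prev or
                entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"identity": "alice", "action": "prompt:read", "allowed": True})
append_entry(log, {"identity": "bob", "action": "policy:edit", "allowed": False})
print(verify(log))                   # True
log[0]["event"]["allowed"] = False   # tamper with history...
print(verify(log))                   # False: the chain no longer verifies
```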

Continuous Monitoring

Platform activity is continuously monitored for anomalies, abuse and policy violations. Rate limiting, threat detection and automated alerting protect against misuse at every layer.
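Rate limiting of the kind described is commonly implemented as a token bucket per identity: each caller gets a burst capacity that refills at a steady rate. A minimal sketch with illustrative parameters:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: `capacity` requests of burst,
    refilling at `rate` tokens per second."""

    def __init__(self, capacity: float, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)
print([bucket.allow() for _ in range(5)])  # burst of 3 allowed, then denials
```

In practice one bucket is kept per identity (user, API key or agent), so a single noisy caller is throttled without affecting anyone else.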

Hardened Infrastructure

WAF controls, network isolation, encrypted storage and adaptive rate limiting protect the platform at the infrastructure level. Regular penetration testing and vulnerability assessments ensure ongoing resilience.

Governance & Policy Engine

Configurable policies govern what users, teams and AI agents can access and do. Every action is evaluated against rules in real time, with full auditability of policy decisions and enforcement.
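A policy engine of this shape can be sketched as an ordered rule list with default-deny, returning the decision itself so it can be audited. The rule set and field names below are hypothetical:

```python
# Illustrative policy rules; real policies are configured in the
# product, not hard-coded like this. First matching rule wins.
POLICIES = [
    {"effect": "deny",  "agents": {"ai-agent"},         "actions": {"file:delete"}},
    {"effect": "allow", "agents": {"ai-agent", "user"}, "actions": {"file:read"}},
]

def evaluate(agent_type: str, action: str) -> dict:
    """Evaluate rules in order; with no match, fall back to default-deny.
    The full decision is returned so enforcement can be audited."""
    for rule in POLICIES:
        if agent_type in rule["agents"] and action in rule["actions"]:
            return {"agent": agent_type, "action": action,
                    "allowed": rule["effect"] == "allow",
                    "rule": rule["effect"]}
    return {"agent": agent_type, "action": action,
            "allowed": False, "rule": "default-deny"}

print(evaluate("ai-agent", "file:read")["allowed"])    # True
print(evaluate("ai-agent", "file:delete")["allowed"])  # False
print(evaluate("user", "file:delete")["rule"])         # default-deny
```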

Responsible disclosure

We take security vulnerabilities seriously. If you believe you have found a security issue in Ferentin, please report it responsibly. We will acknowledge receipt within 24 hours.

FAQ

Security questions

Have another question? Contact us

Where is my data stored?

Customer data is stored in AWS infrastructure in the United States. All data is encrypted at rest using AES-256 and in transit using TLS 1.2+. We support data residency requirements through our enterprise plans.

Get started today

Ready to secure your AI?

Start for free or talk to our team about enterprise security requirements. Deploy with confidence in minutes, not months.