ABAC (Attribute-Based Access Control)
An access control model that evaluates requests against attributes of the user, resource, action and environment rather than predefined roles. In AI security, ABAC enables context-aware policies. For example, allowing access to a model only during business hours, from a specific location or when the request meets certain content criteria. ABAC provides finer granularity than RBAC and is well-suited for dynamic, multi-tenant AI environments.
Agentic AI
AI systems that can autonomously plan, decide and take actions to achieve goals, going beyond simple question-answer interactions. Agentic AI systems may invoke tools, access data sources, make API calls and chain multiple steps together, making governance and access control critical.
AI Agent Governance
The set of policies, controls and monitoring practices that determine what AI agents are allowed to do within an organization. Governance includes defining which tools agents can access, what actions they can take, setting approval workflows and maintaining audit trails of all agent decisions.
AI Observability
The ability to monitor, trace and understand the behavior of AI systems in production. AI observability includes logging inference requests, tracking token usage, monitoring latency, auditing policy decisions and providing dashboards for real-time visibility into AI usage across an organization.
AI Service Edge
An infrastructure layer that sits at the boundary between an organization and external AI services. The AI service edge handles authentication, policy enforcement, routing and observability, providing a single control point for all AI traffic entering or leaving the organization.
Catalog
In AI infrastructure, a catalog is a centralized registry of available resources: AI clients (IDEs, copilots, chat interfaces), MCP servers (tool providers) and LLM providers with their available models. The catalog enables administrators to discover, organize and govern which resources are available to their organization, while giving users a self-service way to find and connect to approved AI services. A well-maintained catalog is foundational to AI governance, as it defines the boundary of what can be accessed and by whom.
GenAI (Generative AI)
A category of artificial intelligence that creates new content (text, code, images, audio or video) based on patterns learned from training data. Generative AI includes Large Language Models (LLMs) like GPT and Claude, as well as image generators and code assistants. In the enterprise, GenAI adoption introduces new security challenges around data leakage, prompt injection, unauthorized model access and uncontrolled AI agent behavior, making governance, identity-based access control and observability essential.
Identity-Centric AI Security
A security approach where access to AI resources is governed by verified identity rather than network location or API keys. Every user, team and AI agent has a distinct identity, and access policies are evaluated based on who is making the request, not just where the request originates.
Inference
The process of running input data through a trained AI model to produce an output (such as text generation, classification or embedding). In enterprise AI, each inference call represents a request that can be routed, governed, metered and logged.
Intent
In the context of AI security, an intent represents the purpose or goal behind an AI agent's action, specifically what it is trying to accomplish when it invokes a tool, makes an API call or accesses a resource. Intent-based governance evaluates not just what an agent is doing, but why, enabling policies that allow or deny actions based on their declared purpose. Ferentin uses intent as a core governance primitive alongside inference and interactions.
LLM Gateway
A specialized AI gateway focused on Large Language Model traffic. It routes inference requests across multiple LLM providers (such as OpenAI, Anthropic and Google) through a unified API, while enforcing identity-based access controls, token limits and content policies.
LLM Routing
The process of directing LLM inference requests to different model providers based on configurable rules. Routing decisions can be based on cost, latency, model capabilities, compliance requirements or organizational policies. Multi-provider LLM routing enables resilience and optimization without changing application code.
MCP (Model Context Protocol)
An open protocol that standardizes how AI models connect to external tools, data sources and services. MCP defines a client-server architecture where AI applications (clients) communicate with tool providers (servers) through a structured interface, enabling AI agents to take actions in the real world.
MCP Apps (MCP UI)
Web-based user interfaces built on top of MCP servers that allow non-technical users to interact with AI tools and agents through a visual application rather than a command-line or chat interface. MCP apps bridge the gap between powerful MCP server capabilities and everyday business users, providing forms, dashboards and workflows that invoke MCP tools behind the scenes while respecting the same governance and access control policies.
Policy Enforcement
The runtime evaluation and application of security rules to AI requests. Policy enforcement can include checking user identity, validating permissions, applying token or rate limits, filtering content and blocking unauthorized actions, all before a request reaches the AI service.
Prompt Injection
An attack technique where malicious input is crafted to override or manipulate the instructions given to a Large Language Model. In direct prompt injection, the attacker embeds adversarial text in a prompt to alter the model's behavior. In indirect prompt injection, the malicious payload is hidden in external data the model retrieves, such as web pages, documents or tool outputs. Prompt injection can lead to data exfiltration, unauthorized actions or bypassing safety guardrails, making input validation and output monitoring critical components of AI security.
RBAC (Role-Based Access Control)
An access control model where permissions are assigned to roles rather than individual users. In AI security, RBAC determines which models, tools and actions are available to different user roles and AI agents within an organization.
ReBAC (Relationship-Based Access Control)
An access control model where permissions are derived from the relationships between entities, such as users, teams, organizations and resources, rather than static role assignments. In AI security, ReBAC enables policies like "a user can access models owned by their team" or "an agent can invoke tools shared with its parent workspace." ReBAC is particularly powerful for modeling hierarchical and collaborative access patterns in enterprise AI platforms.
Tenant Isolation
A security architecture where each customer organization's data, configuration and AI traffic are completely separated from other customers. Tenant isolation ensures that one organization's AI usage, policies and audit logs cannot be accessed by another.
Tokens
In the context of LLMs, tokens are the fundamental units of text that a model processes, typically word fragments, whole words or punctuation. Every inference request consumes input tokens (the prompt) and generates output tokens (the response). Token usage directly determines cost and latency, making token metering, budgeting and rate limiting essential controls in enterprise AI governance. Organizations use token-based policies to set per-user or per-team consumption limits, track spending across providers and prevent runaway costs from agentic workflows.
Toxic Flow
A threat pattern unique to AI agents where the combination of tools and data in an agent's context becomes dangerous, even when each tool is safe on its own. Toxic flows arise when an agent chains actions together, such as reading sensitive data from one source and sending it to an external service. They enable prompt injection, data exfiltration and tool misuse. Detecting toxic flows requires context-aware policy enforcement that inspects not just the current request but the full sequence of tools invoked and the data in scope.
Trustworthy Agents
AI agent systems that meet five key properties: keeping humans in control, aligning with human values, securing agent interactions, maintaining transparency and protecting privacy. Trustworthiness cannot be achieved through model safety alone. It requires coordinated infrastructure across the model, the harness (instructions and guardrails), the tools (external services) and the environment (runtime context). A well-trained model can still be exploited through an overly permissive tool or an exposed environment. Enterprise trust layers enforce these properties through identity-based access control, policy enforcement, audit trails and context-aware security at the tool and environment layers.
Put it into practice
Ready to add the trust layer?
Zero trust policy enforcement from prompt to tool call. Get started in minutes.