HomeGlossary
23 terms

AI Security Glossary

Key terms and definitions for enterprise AI security, governance, and infrastructure.

A

ABAC (Attribute-Based Access Control)

An access control model that evaluates requests against attributes of the user, resource, action, and environment rather than predefined roles. In AI security, ABAC enables context-aware policies — for example, allowing access to a model only during business hours, from a specific location, or when the request meets certain content criteria. ABAC provides finer granularity than RBAC and is well-suited for dynamic, multi-tenant AI environments.

Agentic AI

AI systems that can autonomously plan, decide, and take actions to achieve goals — going beyond simple question-answer interactions. Agentic AI systems may invoke tools, access data sources, make API calls, and chain multiple steps together, making governance and access control critical.

AI Agent Governance

The set of policies, controls, and monitoring practices that determine what AI agents are allowed to do within an organization. Governance includes defining which tools agents can access, what actions they can take, setting approval workflows, and maintaining audit trails of all agent decisions.

AI Gateway

A centralized proxy layer that sits between users or applications and AI services (LLMs, MCP servers, AI tools). An AI gateway enforces security policies, manages authentication, routes requests across providers, and logs all interactions for audit and observability.

AI Observability

The ability to monitor, trace, and understand the behavior of AI systems in production. AI observability includes logging inference requests, tracking token usage, monitoring latency, auditing policy decisions, and providing dashboards for real-time visibility into AI usage across an organization.

AI Service Edge

An infrastructure layer that sits at the boundary between an organization and external AI services. The AI service edge handles authentication, policy enforcement, routing, and observability — providing a single control point for all AI traffic entering or leaving the organization.

C

Catalog

In AI infrastructure, a catalog is a centralized registry of available resources — AI clients (IDEs, copilots, chat interfaces), MCP servers (tool providers), and LLM providers with their available models. The catalog enables administrators to discover, organize, and govern which resources are available to their organization, while giving users a self-service way to find and connect to approved AI services. A well-maintained catalog is foundational to AI governance, as it defines the boundary of what can be accessed and by whom.

G

GenAI (Generative AI)

A category of artificial intelligence that creates new content — text, code, images, audio, or video — based on patterns learned from training data. Generative AI includes Large Language Models (LLMs) like GPT and Claude, as well as image generators and code assistants. In the enterprise, GenAI adoption introduces new security challenges around data leakage, prompt injection, unauthorized model access, and uncontrolled AI agent behavior — making governance, identity-based access control, and observability essential.

I

Identity-Centric AI Security

A security approach where access to AI resources is governed by verified identity rather than network location or API keys. Every user, team, and AI agent has a distinct identity, and access policies are evaluated based on who is making the request, not just where the request originates.

Inference

The process of running input data through a trained AI model to produce an output (such as text generation, classification, or embedding). In enterprise AI, each inference call represents a request that can be routed, governed, metered, and logged.

Intent

In the context of AI security, an intent represents the purpose or goal behind an AI agent's action — what it is trying to accomplish when it invokes a tool, makes an API call, or accesses a resource. Intent-based governance evaluates not just what an agent is doing, but why, enabling policies that allow or deny actions based on their declared purpose. Ferentin uses intent as a core governance primitive alongside inference and interactions.

L

LLM Gateway

A specialized AI gateway focused on Large Language Model traffic. It routes inference requests across multiple LLM providers (such as OpenAI, Anthropic, and Google) through a unified API, while enforcing identity-based access controls, token limits, and content policies.

LLM Routing

The process of directing LLM inference requests to different model providers based on configurable rules. Routing decisions can be based on cost, latency, model capabilities, compliance requirements, or organizational policies. Multi-provider LLM routing enables resilience and optimization without changing application code.

M

MCP (Model Context Protocol)

An open protocol that standardizes how AI models connect to external tools, data sources, and services. MCP defines a client-server architecture where AI applications (clients) communicate with tool providers (servers) through a structured interface, enabling AI agents to take actions in the real world.

MCP Apps (MCP UI)

Web-based user interfaces built on top of MCP servers that allow non-technical users to interact with AI tools and agents through a visual application rather than a command-line or chat interface. MCP apps bridge the gap between powerful MCP server capabilities and everyday business users, providing forms, dashboards, and workflows that invoke MCP tools behind the scenes while respecting the same governance and access control policies.

MCP Gateway

A centralized control plane for Model Context Protocol traffic. An MCP gateway secures, manages, and routes connections between AI agents and MCP servers, enforcing zero-trust access controls and providing visibility into every tool invocation.

P

Policy Enforcement

The runtime evaluation and application of security rules to AI requests. Policy enforcement can include checking user identity, validating permissions, applying token or rate limits, filtering content, and blocking unauthorized actions — all before a request reaches the AI service.

Prompt Injection

An attack technique where malicious input is crafted to override or manipulate the instructions given to a Large Language Model. In direct prompt injection, the attacker embeds adversarial text in a prompt to alter the model's behavior. In indirect prompt injection, the malicious payload is hidden in external data the model retrieves — such as web pages, documents, or tool outputs. Prompt injection can lead to data exfiltration, unauthorized actions, or bypassing safety guardrails, making input validation and output monitoring critical components of AI security.

R

RBAC (Role-Based Access Control)

An access control model where permissions are assigned to roles rather than individual users. In AI security, RBAC determines which models, tools, and actions are available to different user roles and AI agents within an organization.

ReBAC (Relationship-Based Access Control)

An access control model where permissions are derived from the relationships between entities — such as users, teams, organizations, and resources — rather than static role assignments. In AI security, ReBAC enables policies like "a user can access models owned by their team" or "an agent can invoke tools shared with its parent workspace." ReBAC is particularly powerful for modeling hierarchical and collaborative access patterns in enterprise AI platforms.

T

Tenant Isolation

A security architecture where each customer organization's data, configuration, and AI traffic are completely separated from other customers. Tenant isolation ensures that one organization's AI usage, policies, and audit logs cannot be accessed by another.

Tokens

In the context of LLMs, tokens are the fundamental units of text that a model processes — typically word fragments, whole words, or punctuation. Every inference request consumes input tokens (the prompt) and generates output tokens (the response). Token usage directly determines cost and latency, making token metering, budgeting, and rate limiting essential controls in enterprise AI governance. Organizations use token-based policies to set per-user or per-team consumption limits, track spending across providers, and prevent runaway costs from agentic workflows.

Z

Zero Trust AI

A security model for AI systems where no request is implicitly trusted. Every inference call, tool invocation, and agent action must be authenticated and authorized against fine-grained policies before execution. Zero Trust AI applies identity verification, least-privilege access, and continuous monitoring to all AI interactions.

Put it into practice

Ready to secure your AI infrastructure?

Ferentin provides enterprise-grade security for LLMs, MCP servers, and AI agents. Get started in minutes.