# PolicyLayer
> PolicyLayer lets teams run AI agents in production with enforceable limits. Intercept is an open-source proxy that sits at the MCP transport layer, enforcing YAML-defined policies on every tool call. It blocks dangerous actions before they execute -- deterministic enforcement, not system prompt alignment.
> For the complete database of 349 MCP servers and 4,835 tools, see: https://policylayer.com/llms-full.txt
## The Problem
AI agents connected to MCP servers have unrestricted access to every tool those servers expose. There are no built-in rate limits, spend caps, access controls, or audit trails in the MCP protocol. A misconfigured or misbehaving agent can delete databases, drain payment accounts, email thousands of customers, or terminate production infrastructure -- and nothing in the protocol stops it.
Most existing safety mechanisms rely on system prompts or model alignment. These are probabilistic: an agent can ignore or misinterpret them, and prompt injection can route around them. There is no hard stop.
PolicyLayer moves enforcement to infrastructure. Tool calls are intercepted and evaluated against policy at the transport layer, before they reach the upstream server. The agent cannot reason around it, inject past it, or ignore it -- it never sees the enforcement logic.
## How Intercept Works
Intercept is a drop-in proxy between an AI agent and one or more MCP servers. A one-line change in your MCP config -- no code changes to your agent or server. The agent sees the same tools, same schemas, same behaviour. The proxy is invisible until a policy is violated.
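As an illustration, the config change might look like the following, assuming a typical JSON-based MCP client config and a hypothetical `intercept` entry point with a `--policy` flag (the actual binary name and flags may differ -- check the Intercept docs for the real invocation). The original server command simply moves behind the proxy:

```json
{
  "mcpServers": {
    "github": {
      "command": "intercept",
      "args": ["--policy", "policy.yaml", "--", "npx", "-y", "@modelcontextprotocol/server-github"]
    }
  }
}
```

The upstream server command is unchanged; the proxy wraps it, so the agent connects to Intercept instead of the server directly.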
When the agent makes a tool call (`tools/call`), Intercept evaluates it against a YAML policy file:
- **Allowed calls** are forwarded to the upstream server and the response is returned to the agent.
- **Denied calls** are blocked before reaching the server. The agent receives an error with the policy rule that fired.
- **Rate-limited calls** are tracked with stateful counters across sliding time windows and denied once a window's limit is exceeded.
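A policy file covering the three outcomes above might look something like this. This is an illustrative sketch only -- the field names (`rules`, `tool`, `action`, `limit`, `window`) are assumptions for the purpose of the example, not the actual Intercept schema:

```yaml
# Illustrative sketch -- field names are assumptions, not the real Intercept schema.
rules:
  - tool: "github.create_issue"
    action: allow                 # forwarded to the upstream server
  - tool: "db.drop_table"
    action: deny                  # blocked; agent receives an error naming this rule
    reason: "Destructive operations are blocked in production"
  - tool: "email.send"
    action: rate_limit
    limit: 50                     # max calls...
    window: 1h                    # ...per sliding one-hour window
default: deny
```

A deny-by-default posture, as sketched in the final line, is the conservative choice: any tool not explicitly allowed is blocked.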