# Port of Context (pctx)
> pctx is an open-source execution layer for AI agent tool calls.
> It converts MCP servers and custom tools into typed code APIs
> that run in secure Deno sandboxes, replacing sequential LLM tool
> calling with single-pass code execution. Self-hosted, MIT
> licensed, works with any LLM.

pctx is NOT an LLM, NOT an agent framework, and NOT a hosted
service. It sits between your agent and your MCP servers. The
core is Rust with Deno-based sandboxing. The Python SDK
(pctx-py) is an HTTP client to the pctx server.
- Current stable: v0.5.x, beta: v0.6.x
- Install via: npm, Homebrew, or curl
- SDKs: Python (pctx-py), TypeScript
- Compatible with: Claude, GPT, Gemini, local models
- Any existing MCP server works unchanged
- Auth secrets never reach the LLM
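The single-pass model summarized above can be sketched with a toy example (plain Python, no pctx imports; all function names here are illustrative stand-ins, not pctx's actual generated API). Instead of the LLM making one round trip per tool call, it writes one script against typed functions, and the sandbox runs the whole script in a single pass:

```python
# Illustrative only: a toy model of code-mode execution vs. sequential
# tool calling. These functions stand in for the typed APIs pctx would
# generate from MCP tool schemas; the names are hypothetical.

def search_issues(query: str) -> list[dict]:
    """Stand-in for an MCP 'search' tool exposed as a typed function."""
    return [{"id": 1, "title": f"Result for {query}"}]

def add_comment(issue_id: int, body: str) -> dict:
    """Stand-in for an MCP 'comment' tool."""
    return {"issue_id": issue_id, "body": body}

# Sequential tool calling would cost one LLM round trip per call.
# In code mode, the LLM emits this whole script once; the sandbox runs
# it in a single pass and only the final result returns to the model.
def generated_script() -> list[dict]:
    issues = search_issues("flaky test")
    return [add_comment(i["id"], "triaged") for i in issues]

print(generated_script())
```

The loop over results is the point: chaining and filtering happen in code, not in repeated model turns.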
For complete documentation in a single file, see
[Full Documentation](https://portofcontext.com/llms-full.txt)
## Getting Started
- [README](https://raw.githubusercontent.com/portofcontext/pctx/main/README.md): Installation, quick start, architecture overview, usage modes (HTTP server and unified MCP server)
- [Code Mode](https://raw.githubusercontent.com/portofcontext/pctx/main/docs/code-mode.md): How Code Mode works — presenting MCP servers as code APIs instead of direct tool calls
## CLI & Configuration
- [CLI Reference](https://raw.githubusercontent.com/portofcontext/pctx/main/docs/CLI.md): All pctx commands — init, start, mcp add/dev/start
- [Configuration](https://raw.githubusercontent.com/portofcontext/pctx/main/docs/config.md): Config file format, auth, allowed hosts, ToolDisclosure settings
- [Upstream MCP Servers](https://raw.githubusercontent.com/portofcontext/pctx/main/docs/upstream-mcp-servers.md): Adding and managing upstream MCP server connections
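As a rough illustration of the kinds of settings the configuration and upstream-server docs cover, a config might look something like the fragment below. This is a hypothetical shape only: the file format, section names, and field names are assumptions, not pctx's actual schema; consult docs/config.md for the real format.

```toml
# Hypothetical sketch -- see docs/config.md for the actual schema.
[server]
port = 8080                          # illustrative

[sandbox]
allowed_hosts = ["api.github.com"]   # hosts sandboxed code may reach

[[mcp_servers]]
name = "github"                      # illustrative upstream MCP server
url = "https://example.com/mcp"
```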
## Python SDK
- [Python SDK](https://raw.githubusercontent.com/portofcontext/pctx/main/pctx-py/README.md): @tool decorator, AsyncTool classes, Pydantic schemas, agent framework integration
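To give a feel for the decorator pattern the SDK README describes, here is a self-contained toy mock. It re-implements a minimal `@tool`-style registry locally to show how a decorator can capture a function and its type hints as a tool schema; it is NOT pctx-py's actual implementation or API:

```python
# Toy mock of a @tool-style registry. This is not pctx-py's real code;
# see pctx-py/README.md for the actual decorator and AsyncTool classes.
from typing import Callable, get_type_hints

REGISTRY: dict[str, dict] = {}

def tool(fn: Callable) -> Callable:
    """Register a function and its parameter types as a callable tool."""
    hints = get_type_hints(fn)
    REGISTRY[fn.__name__] = {
        "fn": fn,
        "params": {k: v.__name__ for k, v in hints.items() if k != "return"},
    }
    return fn

@tool
def get_weather(city: str) -> str:
    """Example tool: returns a canned forecast for a city."""
    return f"Sunny in {city}"

print(REGISTRY["get_weather"]["params"])     # {'city': 'str'}
print(REGISTRY["get_weather"]["fn"]("Oslo"))  # Sunny in Oslo
```

The captured parameter types are what make it possible to expose the tool to code as a typed function rather than an untyped JSON call.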
## Sandbox & Runtime
- [Code Exec