# Zenable - AI-Native SDLC Governance & Guardrails
> Production-ready tooling for AI-accelerated development: correctness, conformance, and observability across the SDLC.
_Last-Updated: 2025-10-31. Canonical domains: https://zenable.io | https://www.zenable.app/docs | https://www.zenable.app_
## Overview - How It Works
Zenable runs where your developers already work: inside IDEs, through MCP servers that extend AI assistants, in cloud runtimes alongside Kubernetes or CI/CD pipelines, and connected to ticketing systems and pull request reviews. These integrations provide seamless guardrails without disrupting existing workflows.
1. Spec-Driven Development (Guardrails) - Ensure speed and quality by enforcing conformance and custom requirements directly in the SDLC. Provide just enough context, only when it matters, to keep AI-accelerated workflows efficient and precise.
2. Continuous Improvement - Automatically measure, learn, and refine prompts, context, and guardrails to reduce token waste, keep integrations healthy, and adopt new IDE features for throughput and reliability.
3. Evidence and Observability - Instrument the SDLC with metrics, telemetry, and audit-ready evidence. Turn usage data into actionable reports that feed SDD and the improvement loop, doubling as governance metrics, compliance artifacts/assurance, and internal performance KPIs.
## Getting Started (MCP)
To configure, make sure `uv` is installed, then run `uvx zenable-mcp install <foo>`, where `<foo>` is the IDE name, for instance `claude`, `codex`, or `vscode`, among others. Run `uvx zenable-mcp install --help` for the full list of options. The installer will:
- Configure streaming HTTP to **https://mcp.zenable.app**
- Leverage the **`conformance_check`** tool
- Use **OAuth** for auth, in line with the latest **MCP specification (2025-06-18)**
- **Learn more:** https://www.zenable.app/docs/integrations/mcp/getting-started?utm_source=llms-txt&utm_medium=ai
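For reference, the MCP client configuration the installer writes might look roughly like the following. This is a hedged sketch only: the endpoint URL comes from the steps above, but the file location, transport key, and field names vary by IDE and are assumptions here, not Zenable's documented output.

```json
{
  "mcpServers": {
    "zenable": {
      "type": "http",
      "url": "https://mcp.zenable.app"
    }
  }
}
```

OAuth credentials are not stored in this file; per the MCP specification (2025-06-18), the client completes the OAuth flow against the server at connect time.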
## Getting Started (GitHub)
- Go to **https://z