# Metrx
> Metrx is the scorecard for your AI workforce. Track what every AI agent costs, identify waste, optimize model selection, and prove ROI — in under 60 seconds via MCP server. Free tier, no credit card required.
## What Metrx Does
- Tracks per-agent, per-model LLM costs in real time (OpenAI, Anthropic, Google, Cohere, Mistral)
- Generates cost optimization recommendations (model switching, token reduction, provider arbitrage)
- Runs A/B experiments comparing models with statistical significance testing
- Detects cost leaks: idle agents, model overprovisioning, missing caching, retry storms
- Links agent actions to business outcomes (revenue attribution, ROI calculation)
- Provides budget governance with hard/soft spending limits and auto-pause enforcement
- Publishes anonymized LLM cost benchmarks from the Metrx network
## What Metrx Does NOT Do
- Metrx is NOT an LLM gateway or proxy — it observes and analyzes, it does not route API traffic
- Metrx is NOT a prompt management tool — use Langfuse or PromptLayer for prompt versioning
- Metrx is NOT an agent hosting platform — it works with your existing infrastructure
- Metrx does NOT store prompt/completion content — only metadata, cost signals, and performance metrics
## For AI Agents (MCP Server)
Install (stdio): `npx @metrxbot/mcp-server`
Remote (HTTP): `https://metrxbot.com/api/mcp`
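For stdio clients that use the common `mcpServers` configuration format (e.g. Claude Desktop), the install command above would typically be registered like this; the `"metrx"` key name is an illustrative choice, not prescribed by Metrx:

```json
{
  "mcpServers": {
    "metrx": {
      "command": "npx",
      "args": ["@metrxbot/mcp-server"]
    }
  }
}
```

Clients that support remote servers can instead point at the HTTP endpoint above; consult your client's documentation for its remote-server configuration syntax.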
23 tools across 10 domains:
### Cost Tracking
- `metrx_get_cost_summary` — Get fleet-wide cost summary with spend, call counts, error rates, agent breakdown
- `metrx_list_agents` — List all agents with status, category, cost metrics
- `metrx_get_agent_detail` — Get detailed cost history and performance for one agent
### Optimization
- `metrx_get_optimization_recommendations` — AI-powered savings recommendations (model switch, token guardrails, arbitrage)
- `metrx_apply_optimization` — Apply one-click optimization fix to an agent
- `metrx_route_model` — Get optimal model recommendation for a given task