# ToolRank
> ToolRank is the first ATO (Agent Tool Optimization) platform. It scores, optimizes, and monitors how AI agents discover and select MCP tools. ToolRank Score is a 0-100 metric measuring agent-readiness across four dimensions: Findability (25%), Clarity (35%), Precision (25%), and Efficiency (15%).
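The four weights above can be read as a simple weighted average. The sketch below is illustrative only, assuming each dimension is itself scored 0-100 and combined linearly with the stated weights; the function name and inputs are hypothetical, not ToolRank's actual API (see the open-source scoring engine for the real logic).

```python
# Illustrative weighted-average sketch of the ToolRank Score.
# Weights come from this document; everything else is assumed.
WEIGHTS = {
    "findability": 0.25,
    "clarity": 0.35,
    "precision": 0.25,
    "efficiency": 0.15,
}

def toolrank_score(dimensions: dict[str, float]) -> float:
    """Combine per-dimension 0-100 scores into a single 0-100 score."""
    return round(sum(WEIGHTS[name] * dimensions[name] for name in WEIGHTS), 1)

# Example: a tool strong on findability but weak on efficiency.
score = toolrank_score(
    {"findability": 90, "clarity": 80, "precision": 70, "efficiency": 60}
)
# 90*0.25 + 80*0.35 + 70*0.25 + 60*0.15 = 77.0
```

Because Clarity carries the largest weight (35%), improving tool descriptions moves this hypothetical score more than any other single dimension.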
## What is ATO?
ATO (Agent Tool Optimization) is the practice of optimizing tools, APIs, and services so AI agents can autonomously discover, select, and execute them. ATO differs from SEO (optimizing for search engines) and LLMO (optimizing for LLM citations) because it targets autonomous agent selection, not human-mediated discovery.
ATO progresses through three stages:
- Stage 1: Be recognized (LLMO)
- Stage 2: Be selected (the ATO core)
- Stage 3: Be used reliably (execution quality)
## Key Data
We scan two registries daily: the Smithery Registry and the Official MCP Registry. Among visible servers, the average ToolRank Score is 84.7/100. Research shows optimized tools achieve a 72% selection probability versus a 20% baseline (a 3.6x advantage), and our simulations show an r = 0.828 correlation between score and selection rate.
## Key Resources
- [Score Your Tools](https://toolrank.dev/score): Free MCP tool definition scoring
- [ATO Framework](https://toolrank.dev/framework): Complete methodology with before/after examples
- [Ecosystem Ranking](https://toolrank.dev/ranking): Live MCP server quality rankings
- [Badge](https://toolrank.dev/badge): Add ToolRank Score badge to your README
- [Pricing](https://toolrank.dev/pricing): Free, Pro ($29/mo), and Team ($99/mo) plans
- [Blog](https://toolrank.dev/blog): ATO insights and ecosystem analysis
## Open Source
- [GitHub Repository](https://github.com/imhiroki/toolrank): Open source scoring engine
- [Scoring Logic](https://github.com/imhiroki/toolrank/blob/main/packages/scoring/toolrank_score.py): How ToolRank Score is calculated
- [ATO Manifesto](https://toolrank.dev/blog/ato-manifesto): Why ATO matters
## Contact
Built by Hiroki Honda. GitHub: @i