Agentic Readiness Score: 65 (Partial)
Tags: developer, mcp, ai-friendly, api, llms-txt, ai-plugin, hosting, search, ai, devtools

Agentic Signals

📄 llms.txt: Found
🤖 ai-plugin.json: Found
📖 OpenAPI Spec: Not found
🔗 Structured API: Not found
🏷 Schema.org Markup: Found
MCP Server: Found

Embed this badge

Show off your agentic readiness — the badge auto-updates when your score changes.

Agentic Ready 65/100


llms.txt Content

# BrowseAI Dev

> Research infrastructure for AI agents. Real-time web search with evidence-backed citations and confidence scores.

BrowseAI Dev gives AI agents structured, verifiable web research. Unlike chat-based search engines, it returns JSON with extracted claims, cited sources, confidence scores, and contradiction detection — designed for programmatic evaluation by agents.

## Available As

- MCP Server: `npx browseai-dev` (13 tools)
- REST API: `https://browseai.dev/api/browse/*`
- Python SDK: `pip install browseaidev`
- LangChain: `pip install langchain-browseaidev`
- CrewAI: `pip install crewai-browseaidev`
- LlamaIndex: `pip install llamaindex-browseaidev`

Previously known as `browse-ai` (npm) and `browseai` (PyPI) — renamed to `browseai-dev` and `browseaidev`. The old names still work and redirect to the new packages.

## Key Capabilities

- **Search**: Web search returning ranked results
- **Answer**: Full pipeline — search, fetch, extract claims, verify, cite, score confidence
- **Extract**: Structured claim extraction from any URL
- **Compare**: Side-by-side raw LLM vs evidence-backed answer
- **Clarity**: Anti-hallucination answer engine with three modes: (1) prompt mode returns enhanced prompts only, for your own LLM; (2) answer mode gives a fast LLM-only answer with grounding techniques (no internet); (3) verified mode runs the web pipeline and fuses LLM + sources into one source-backed answer
- **Research Sessions**: Persistent multi-query sessions with knowledge accumulation
- **Feedback**: Submit result feedback to improve future accuracy

## Verification Pipeline

1. Web search (Tavily API)
2. Page fetch and parse
3. Claim extraction via LLM
4. Sentence-level verification
5. Cross-source consensus detection
6. Contradiction detection
7. Domain authority scoring (10,000+ domains)
8. Evidence-based confidence scoring

## Confidence Score

Not LLM self-assessed. Computed from: verification rate, domain authority, source count, consensus score, domain
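The llms.txt above says responses are JSON "designed for programmatic evaluation by agents". A minimal sketch of how an agent might gate on such a response; the field names (`answer`, `confidence`, `claims`, `contradictions`) and the `accept` helper are assumptions inferred from the description, not the documented schema:

```python
import json

# Hypothetical response shape, inferred from the llms.txt description
# (extracted claims, cited sources, confidence score, contradiction
# detection). Field names are assumptions, not BrowseAI Dev's schema.
sample = json.loads("""
{
  "answer": "The speed of light is 299792458 m/s.",
  "confidence": 0.93,
  "claims": [
    {"text": "c = 299792458 m/s", "verified": true,
     "sources": ["https://example.org/physics"]}
  ],
  "contradictions": []
}
""")

def accept(result, min_confidence=0.8):
    """Accept only high-confidence answers with no detected
    contradictions, where every claim is verified and cited."""
    if result["confidence"] < min_confidence:
        return False
    if result["contradictions"]:
        return False
    return all(c["verified"] and c["sources"] for c in result["claims"])

print(accept(sample))  # → True
```

The point of the structured shape is that rejection is mechanical: a low score, a flagged contradiction, or an uncited claim can each trip the gate without the agent re-reading the prose answer.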
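The Confidence Score section lists its inputs (verification rate, domain authority, source count, consensus) without the formula. A sketch of how such an evidence-based score could be combined; the weights and the source-count saturation curve are assumptions for illustration, not BrowseAI Dev's actual computation:

```python
def confidence(verified, total_claims, avg_authority, source_count, consensus):
    """Combine evidence signals into a 0..1 score.
    All inputs except the two counts are already in 0..1.
    Weights below are illustrative assumptions."""
    if total_claims == 0:
        return 0.0
    verification_rate = verified / total_claims
    # Diminishing returns on source count: 1 source -> 0.5, 3 -> 0.75, ...
    source_factor = source_count / (source_count + 1)
    score = (0.4 * verification_rate
             + 0.25 * avg_authority
             + 0.15 * source_factor
             + 0.2 * consensus)
    return round(min(score, 1.0), 3)

print(confidence(verified=8, total_claims=10, avg_authority=0.9,
                 source_count=4, consensus=0.85))  # → 0.835
```

Whatever the real weighting, the key property matches the text: the score is computed from observable evidence, so it cannot be inflated by the LLM's own self-assessment.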