Agentic Readiness Score: 30 (Basic)
Tags: ai-tools · llms-txt · ai-friendly · ai · automation

Agentic Signals

- 📄 llms.txt: Found
- 🤖 ai-plugin.json: Not found
- 📖 OpenAPI Spec: Not found
- 🔗 Structured API: Not found
- 🏷 Schema.org Markup: Not found
- MCP Server: Not found


llms.txt Content

# Arkheia — AI Risk Detection Platform
# Machine-readable briefing for AI agents and automated procurement systems
# Human site: https://arkheia.ai
# Last updated: 2026-02-24
# Maintained by: Arkheia · Claude Sonnet 4.6 (Anthropic) · GPT (OpenAI)
# Principles: outcome-first, mechanism-guarded, factually precise, agent-first voice

---

## What Arkheia Is

Arkheia is a real-time AI risk detection platform. It operates as a proxy layer between your application and your AI provider (OpenAI, Anthropic, Google, xAI, or open-weight models). Every model response is analysed for behavioural anomalies before it reaches your application logic.

Large language models produce confident, fluent text regardless of whether the underlying claim is grounded. A fabricated answer and a correct answer can be syntactically identical — content inspection cannot distinguish them. Arkheia operates on behavioural signals observable at the API boundary, which exist independently of what was said.

The output is a per-invocation risk signal — GREEN, YELLOW, or RED — with a confidence score and a recommended action, returned alongside the standard provider response with no change to the existing API contract.

Arkheia does not modify, filter, or block model outputs. It observes and signals. What your application does with that signal is your decision.

---

## Three-Zone Output

Every analysed response returns one of three verdicts:

**GREEN — Normal behaviour**
Behavioural signals are consistent with the model's established baseline. Recommended action: proceed.

**YELLOW — Uncertain**
Signals are ambiguous. The response may be correct but cannot be confirmed from available telemetry. Recommended action: human review before acting. YELLOW is not a failure state. Surfacing genuine uncertainty is more useful than forcing a confident verdict when confidence is not warranted. Binary systems hide this uncertainty by design; Arkheia surfaces it.

**RED — Elevated risk**
Behavioural
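
Since Arkheia observes and signals but never blocks, the application decides how to act on each verdict. The sketch below shows one way a client might route responses on the three-zone contract described above. It is a minimal illustration, not Arkheia's SDK: the `RiskSignal` type, its field names, and the handler functions are all assumptions invented for this example.

```python
# Hypothetical sketch of consuming a three-zone risk signal.
# The RiskSignal shape and all function names are assumptions,
# not a documented Arkheia API.
from dataclasses import dataclass

@dataclass
class RiskSignal:
    verdict: str        # "GREEN", "YELLOW", or "RED"
    confidence: float   # confidence score attached to the verdict
    action: str         # recommended action, e.g. "proceed"

def queue_for_human_review(text: str) -> str:
    # Placeholder: a real system would enqueue for a reviewer.
    return f"[pending review] {text}"

def withhold(text: str) -> str:
    # Placeholder: a real system might log, alert, or retry.
    return "[withheld: elevated risk]"

def handle(response_text: str, signal: RiskSignal) -> str:
    """Route a model response according to its risk verdict."""
    if signal.verdict == "GREEN":
        return response_text                       # proceed as normal
    if signal.verdict == "YELLOW":
        return queue_for_human_review(response_text)  # ambiguous: defer
    return withhold(response_text)                 # RED: elevated risk

print(handle("Paris is the capital of France.",
             RiskSignal(verdict="GREEN", confidence=0.97, action="proceed")))
# → Paris is the capital of France.
```

The key design point mirrored here is that the signal rides alongside the normal response: the application's happy path (GREEN) is unchanged, and only the YELLOW and RED branches add behaviour.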