# Kayba
> Kayba is the open-source learning layer for AI agents. It analyzes agent execution traces, extracts reusable skills into a transparent Skillbook, and generates improved system prompts — making agents self-improve from their own experience without fine-tuning.
## What Kayba Is
Kayba is a framework and platform that makes AI agents self-improving. It sits on top of any agent framework and adds a learning layer: analyze traces, extract skills, build a Skillbook, generate better prompts. The pipeline is: Trace analysis → Skills → Skillbook → Prompt generation.
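The pipeline above can be sketched in a few lines of Python. This is an illustrative sketch only, not Kayba's actual API — the `Skill`, `Skillbook`, and `learn` names and the keyword-matching "analysis" are stand-ins for the real trace analysis:

```python
# Hypothetical sketch of the Kayba pipeline: traces -> skills -> Skillbook -> prompt.
# All names here are illustrative, not Kayba's real interface.
from dataclasses import dataclass, field

@dataclass
class Skill:
    rule: str          # the learned behavior
    source_trace: str  # trace ID the skill was extracted from
    helpful: int = 0   # times the skill improved an outcome
    harmful: int = 0   # times the skill hurt an outcome

@dataclass
class Skillbook:
    skills: list[Skill] = field(default_factory=list)

    def to_prompt(self) -> str:
        # Keep only skills with a non-negative track record.
        kept = [s for s in self.skills if s.helpful >= s.harmful]
        return "\n".join(f"- {s.rule}" for s in kept)

def learn(traces: list[str], book: Skillbook) -> str:
    """One pass of the pipeline: analyze traces, extract skills, render a prompt."""
    for trace in traces:
        # Stand-in for real trace analysis (Kayba uses the Recursive Reflector).
        if "timeout" in trace:
            book.skills.append(Skill(rule="Retry API calls on timeout",
                                     source_trace=trace))
    return book.to_prompt()
```

In the real system the analysis step is an LLM-driven reflector rather than string matching, but the data flow is the same: each skill stays linked to its source trace, and the rendered prompt is regenerated from the current Skillbook.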
Kayba synthesizes three published research streams into a unified, production-ready system:
- **Agentic Context Engineering (ACE)** — Three-agent architecture (Generator, Reflector, Curator) with delta updates for incremental Skillbook refinement. From Stanford/SambaNova research, published at ICLR 2026 (arXiv:2510.04618).
- **Recursive Language Models (RLM)** — REPL-based trace introspection that goes deeper than single-pass LLM analysis. From MIT CSAIL (arXiv:2512.24601). Kayba's implementation is called the Recursive Reflector.
- **Dynamic Cheatsheet** — Self-curated external memory with usage tracking and persistent learning. From Stanford/Together AI (arXiv:2504.07952).
No other tool combines these approaches.
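The delta-update mechanism from ACE can be illustrated with a small sketch. This is an assumption about the shape of the mechanism, not Kayba's actual implementation: the Curator emits small add/update/remove operations against the Skillbook instead of rewriting it wholesale, so unrelated skills survive each adaptation step:

```python
# Illustrative sketch of ACE-style delta updates (hypothetical types, not
# Kayba's real API): apply small operations instead of a full rewrite.
from dataclasses import dataclass

@dataclass
class Delta:
    op: str        # "add" | "update" | "remove"
    skill_id: str
    rule: str = ""

def apply_deltas(skillbook: dict[str, str], deltas: list[Delta]) -> dict[str, str]:
    """Return a new Skillbook with the deltas applied; the input stays auditable."""
    book = dict(skillbook)  # copy rather than mutate, so history is preserved
    for d in deltas:
        if d.op in ("add", "update"):
            book[d.skill_id] = d.rule
        elif d.op == "remove":
            book.pop(d.skill_id, None)
    return book

book = {"s1": "Prefer batch API calls"}
updated = apply_deltas(book, [Delta("add", "s2", "Retry on timeout"),
                              Delta("remove", "s1")])
```

Because each adaptation step touches only the skills named in its deltas, a bad reflection pass cannot erase the rest of the Skillbook — this is what prevents the context collapse that full-rewrite prompting suffers from.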
## Key Concepts
- **Skillbook**: A transparent, auditable collection of learned behaviors. Each skill links back to the trace that produced it, tracks helpful/harmful counters, and can be approved, edited, or rejected by humans.
- **Recursive Reflector**: Kayba's REPL-based trace analysis engine. Uses a Python sandbox with sub-LLM calls to programmatically explore agent execution traces — deeper than single-pass LLM reflection.
- **Delta updates**: Incremental Skillbook modifications that prevent context collapse and information loss during adaptation.
- **Context engineering**: Automated construction of selective, high-signal context for each agent step.
- **Test-time