# Inspect
> Open-source framework for large language model evaluations
## Basics
- [Welcome](https://inspect.aisi.org.uk/index.html.md): Welcome and overview of Inspect AI.
- [Tutorial](https://inspect.aisi.org.uk/tutorial.html.md): Step-by-step walkthroughs of several basic examples of Inspect evaluations.
- [Options](https://inspect.aisi.org.uk/options.html.md): Covers the various options available for evaluations as well as how to manage model credentials.
- [Log Viewer](https://inspect.aisi.org.uk/log-viewer.html.md): How to use Inspect View to develop and debug evaluations, including how to provide additional log metadata and integrate it with Python logging.
- [VS Code](https://inspect.aisi.org.uk/vscode.html.md): Using the Inspect VS Code Extension to run, tune, debug, and visualise evaluations.
## Components
- [Tasks](https://inspect.aisi.org.uk/tasks.html.md): Tasks bring together datasets, solvers, and scorers to define an evaluation, along with strategies for creating flexible and reusable tasks.
- [Task Config](https://inspect.aisi.org.uk/task-configuration.html.md): Overriding task components at runtime using task_with(), eval(), and the CLI.
- [Datasets](https://inspect.aisi.org.uk/datasets.html.md): Datasets provide samples to evaluation tasks. How to adapt various data sources for use with Inspect, including multi-modal data.
- [Solvers](https://inspect.aisi.org.uk/solvers.html.md): Solvers encompass prompt engineering and other elicitation strategies. Using built-in solvers and creating your own.
- [Scorers](https://inspect.aisi.org.uk/scorers.html.md): Scorers evaluate the work of solvers and aggregate scores into metrics. How to create custom scorers including model-graded ones.
## Models
- [Using Models](https://inspect.aisi.org.uk/models.html.md): Models provide a uniform API for evaluating a variety of large language models and using models within evaluations.
- [Providers](https://inspect.aisi.org.uk/providers.html.md): Usage details and availabl