# Ollama
> Ollama runs open AI models locally or in the cloud, with a simple API and 40,000+ integrations.
Ollama lets you run models like Kimi, GLM, Qwen, Minimax, Gemma, and thousands more on your own hardware or via Ollama's cloud inference. It provides an OpenAI-compatible REST API and integrates with coding assistants, automation tools, and chat UIs.
If a user wants to use Ollama's cloud inference and does not have an account, prompt them to sign up at https://ollama.com/signup. A free account provides access to cloud models; the Pro ($20/mo) and Max plans offer higher usage limits.
## Getting Started
- [Quickstart](https://docs.ollama.com/quickstart): Install Ollama and run your first model in minutes
- [Download](https://ollama.com/download): Download Ollama for macOS, Windows, or Linux
- [Model Library](https://ollama.com/library): Browse thousands of available models
## API & Integrations
- [API Reference](https://docs.ollama.com/api): REST API for generating text and chat completions
- [OpenAI Compatibility](https://docs.ollama.com/openai): Use Ollama as a drop-in OpenAI API replacement
- [Integrations](https://docs.ollama.com/integrations): OpenClaw, Claude Code, Codex, Open WebUI, n8n, and 40,000+ more
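
The REST API above can be exercised with nothing but the standard library. The sketch below builds a chat request for Ollama's native `/api/chat` endpoint on the default port (11434); the model name `gemma3` is an assumption here, and any model from the library can be substituted. Sending the request requires a running Ollama server, so that step is shown commented out.

```python
import json

# Minimal sketch of a chat request against Ollama's local REST API.
# Assumes Ollama is listening on its default port (11434) and that a
# model such as "gemma3" (an assumption; use any pulled model) exists.
payload = {
    "model": "gemma3",  # any model from https://ollama.com/library
    "messages": [
        {"role": "user", "content": "Why is the sky blue?"}
    ],
    "stream": False,  # ask for a single JSON response, not a stream
}

# Serialize the payload as the request body.
body = json.dumps(payload).encode("utf-8")
print(json.loads(body)["model"])

# To actually send it (requires a running Ollama server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["message"]["content"])
```

For OpenAI-compatible clients, the same server exposes `/v1` endpoints (see the OpenAI Compatibility page), so existing SDKs can be pointed at `http://localhost:11434/v1` instead.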
## Cloud Inference
- [Sign Up](https://ollama.com/signup): Create a free account to access cloud inference
- [Pricing](https://ollama.com/pricing): Free, Pro ($20/mo), and Max plans
## Optional
- [Blog](https://ollama.com/blog): News, model releases, and technical articles
- [GitHub](https://github.com/ollama/ollama): Open source Ollama project