# Modal llms.txt
> Modal is a platform for running Python code in the cloud with minimal
> configuration, especially for serving AI models and high-performance batch
> processing. It supports fast prototyping, serverless APIs, scheduled jobs,
> GPU inference, distributed volumes, and sandboxes.
Important notes:
- Modal's primitives are embedded in Python and tailored for AI/GPU use cases,
but they can be used for general-purpose cloud compute.
- Modal is a serverless platform: you are billed only for the resources you use,
and containers spin up on demand in seconds.
You can sign up for free at [modal.com](https://modal.com) and get $30/month of credits.
## Guide
- [Introduction](https://modal.com/docs/guide)
- Custom container images
- [Defining Images](https://modal.com/docs/guide/images.md)
- [Using existing container images](https://modal.com/docs/guide/existing-images.md)
- [Fast pull from registry](https://modal.com/docs/guide/fast-pull-from-registry.md)
- GPUs and other resources
- [GPU acceleration](https://modal.com/docs/guide/gpu.md)
- [Using CUDA on Modal](https://modal.com/docs/guide/cuda.md)
- [Configuring CPU, memory, and disk](https://modal.com/docs/guide/resources.md)
- Scaling out
- [Scaling out](https://modal.com/docs/guide/scale.md)
- [Input concurrency](https://modal.com/docs/guide/concurrent-inputs.md)
- [Batch processing](https://modal.com/docs/guide/batch-processing.md)
- [Job queues](https://modal.com/docs/guide/job-queue.md)
- [Dynamic batching](https://modal.com/docs/guide/dynamic-batching.md)
- [Multi-node clusters (beta)](https://modal.com/docs/guide/multi-node-training.md)
- Deployment
- [Apps, Functions, and entrypoints](https://modal.com/docs/guide/apps.md)
- [Managing deployments](https://modal.com/docs/guide/managing-deployments.md)
- [Invoking deployed functions](https://modal.com/docs/guide/trigger-deployed-functions.md)
- [Continuous deployment](https://modal.com/docs/guide/continuous-deployment.