Agentic Readiness Score: 30 (Basic)
Tags: developer, llms-txt, storage, devtools

Agentic Signals

- 📄 Found
- 🤖 ai-plugin.json: Not found
- 📖 OpenAPI Spec: Not found
- 🔗 Structured API: Not found
- 🛡 Not specified
- 🏷 Schema.org Markup: Found
- MCP Server: Not found
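A checker like this typically probes a handful of well-known paths and records whether each responds. The sketch below is a minimal illustration, not the tool's actual implementation: the `llms.txt` and `ai-plugin.json` locations follow their published conventions, while the OpenAPI path is a common default and an assumption here.

```python
from urllib.request import Request, urlopen

# Well-known paths probed for each signal. The first two follow the
# llms.txt and OpenAI plugin-manifest conventions; /openapi.json is a
# common default but only one of several places a spec may live.
SIGNAL_PATHS = {
    "llms.txt": "/llms.txt",
    "ai-plugin.json": "/.well-known/ai-plugin.json",
    "OpenAPI Spec": "/openapi.json",
}


def check_signals(base_url, fetch=None):
    """Return {signal: 'Found' | 'Not found'} for each well-known path.

    `fetch` takes a URL and returns an HTTP status code; it can be
    injected for testing. Any error (4xx/5xx, timeout) counts as
    'Not found'.
    """
    def default_fetch(url):
        req = Request(url, method="HEAD")
        with urlopen(req, timeout=5) as resp:
            return resp.status

    fetch = fetch or default_fetch
    results = {}
    for name, path in SIGNAL_PATHS.items():
        try:
            status = fetch(base_url.rstrip("/") + path)
            results[name] = "Found" if status == 200 else "Not found"
        except Exception:
            results[name] = "Not found"
    return results
```

Injecting `fetch` keeps the probing logic testable without network access; the real checker would also parse HTML for Schema.org markup rather than just probing paths.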

Embed this badge

Show off your agentic readiness: the badge auto-updates when your score changes.

Agentic Ready 30/100
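A badge embed of this kind is usually a markdown image wrapped in a link, so the score renders inline and clicks through to the report. The URLs below are hypothetical placeholders; copy the real snippet from the embed widget.

```markdown
[![Agentic Ready 30/100](https://example.com/badge/thundercompute.svg)](https://example.com/report/thundercompute)
```

Because the image is served dynamically, the rendered score updates without editing your README.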


llms.txt Content

# Thunder Compute (https://www.thundercompute.com)

> Thunder Compute is a developer-first GPU cloud for LLMs and AI workloads. We offer on-demand RTX A6000, A100 80GB, and H100 GPUs (1–8 GPUs per instance) at some of the lowest on-demand prices on the market, with simple tooling and one-click editor integration.

## Website

- [Main site](https://www.thundercompute.com)
- [Pricing](https://www.thundercompute.com/pricing)
- [Console](https://console.thundercompute.com)
- [Blog](https://www.thundercompute.com/blog)
- [Documentation](https://www.thundercompute.com/docs)
- [LLM-focused docs index](https://www.thundercompute.com/docs/llms.txt)

## About Us

Thunder Compute is a budget-friendly GPU cloud built for AI developers. It was co-founded by Carl Peterson (CEO) and Brian Model (CTO) to make high-end GPUs accessible to startups, researchers, and indie builders without enterprise complexity or long-term contracts.

Thunder Compute uses a custom orchestration layer and GPU virtualization stack to drive extremely high utilization, which lets us offer A6000, A100 80GB, and H100 GPUs at significantly lower on-demand prices than most clouds. We are backed by Y Combinator and venture investors (including Matrix Partners and prominent angels) and are SOC 2 Type II and GDPR compliant.

Customers access GPUs through a persistent-instance model with features like snapshots, templates, and an editor extension that connects in one click from VS Code, Cursor, or Windsurf.

## What We Do (Features)

- [On-Demand GPU Instances](https://www.thundercompute.com/pricing): Launch RTX A6000 48GB, A100 80GB, and H100 GPUs in seconds. Prototyping instances are billed per minute with fully customizable vCPUs/RAM; production instances are fixed-spec machines with extra CPU/RAM per GPU and higher uptime.
- [Multi-GPU & NVLink Clusters](https://www.thundercompute.com/docs/prototyping-vs-production): Scale from 1 to 8 GPUs per instance. Production tiers offer A100 80GB and H100 PCIe with NVLink for mod