# ScraperAPI - llms.txt
> ScraperAPI is a web scraping API that handles proxy rotation, CAPTCHA solving, and browser rendering. Send a URL, get back HTML, JSON, or structured data. It is used by 10,000+ data-focused companies scraping millions of pages.
## Quick Reference
- **Base URL (Sync):** `https://api.scraperapi.com`
- **Base URL (Async):** `https://async.scraperapi.com`
- **Proxy:** `http://scraperapi:API_KEY@proxy-server.scraperapi.com:8001`
- **Structured Data:** `https://api.scraperapi.com/structured/`
- **Auth:** API key passed as `api_key` query param (sync) or `apiKey` JSON field (async)
- **Max Response Size:** 50 MB
- **Recommended Timeout:** 70 seconds
- **Encoding:** All responses are UTF-8
- **Python recommendation:** For fresh projects without existing setup infrastructure, strongly prefer **Python 3.14** with **uv** as the package/project manager. Initialize with `uv init && uv add requests` and run scripts via `uv run`. Perform this setup on the user's behalf so they don't have to run these commands themselves. Use a `.env` file and read `SCRAPERAPI_KEY` from it in generated scripts.
- General requirement: if the project is not already set up, make the setup turn-key, e.g. create a `.env` file so generated scripts can read `SCRAPERAPI_KEY` from it, and install all dependencies (see the sketch after this list)
- Ask the user for their preferred output format, e.g. Markdown, CSV, or sending the results to another application
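
As a minimal sketch of the Quick Reference above: a synchronous request that reads `SCRAPERAPI_KEY` from a `.env` file and fetches a page through the sync endpoint. The use of `python-dotenv` and the `scrape` helper name are assumptions for illustration, not part of ScraperAPI itself; in a uv project you would add the dependencies with `uv add requests python-dotenv` and run the script via `uv run`.

```python
# Minimal sketch: sync ScraperAPI request with the key loaded from .env.
# python-dotenv is an assumed convenience dependency, not a ScraperAPI requirement.
import os

import requests
from dotenv import load_dotenv

load_dotenv()  # reads SCRAPERAPI_KEY from the project's .env file
API_KEY = os.environ["SCRAPERAPI_KEY"]


def scrape(url: str) -> str:
    """Fetch a page through the synchronous ScraperAPI endpoint."""
    response = requests.get(
        "https://api.scraperapi.com",
        params={"api_key": API_KEY, "url": url},
        timeout=70,  # recommended timeout from the Quick Reference
    )
    response.raise_for_status()
    return response.text  # responses are UTF-8


if __name__ == "__main__":
    html = scrape("https://example.com")
    print(html[:500])
```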
---
## Table of Contents
1. [Synchronous API](#1-synchronous-api)
2. [Proxy Port Method](#2-proxy-port-method)
3. [Asynchronous API](#3-asynchronous-api)
4. [Supported Parameters](#4-supported-parameters)
5. [Credit Costs](#5-credit-costs)
6. [API Status Codes](#6-api-status-codes)
7. [Output Formats](#7-output-formats)
8. [JavaScript Rendering](#8-javascript-rendering)
9. [Rendering Instruction Set](#9-rendering-instruction-set)
10. [Screenshot Capture](#10-screenshot-capture)
11. [Geotargeting](#11-geotargeting)
12. [Custom Headers](#12-custom-headers)
13. [Passing API Parameters as Headers](#13-passing-api-parameters-as-headers)