- It manages complexities: proxies, caching, rate limits, JS-blocked content
- Handles dynamic content: dynamic websites, JS-rendered sites, PDFs, images
- Outputs clean markdown, structured data, screenshots, or HTML.
Scraping a URL with Firecrawl
/scrape endpoint
Used to scrape a URL and get its content.
Installation
Usage
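The original usage snippet is not reproduced here; the following is a minimal sketch in Python using the plain REST API with the requests library. It assumes the v1 endpoint at https://api.firecrawl.dev/v1/scrape and an API key in a FIRECRAWL_API_KEY environment variable; the SDKs wrap the same call.

```python
import os
import requests

# Minimal /scrape sketch; endpoint version and field names are assumptions based on the v1 REST API.
API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder: your Firecrawl API key

response = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "url": "https://firecrawl.dev",
        "formats": ["markdown", "html"],  # request one or more output formats
    },
    timeout=60,
)
response.raise_for_status()

payload = response.json()      # raw payload: {"success": ..., "data": {...}}
data = payload["data"]         # SDKs return this object directly
print(data["markdown"][:500])  # first part of the scraped markdown
```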
Response
SDKs will return the data object directly. cURL will return the payload exactly as shown below.
Scrape Formats
You can now choose what formats you want your output in. You can specify multiple output formats (see the request sketch after this list). Supported formats are:
- Markdown (markdown)
- Summary (summary)
- HTML (html)
- Raw HTML (rawHtml, with no modifications)
- Screenshot (screenshot, with options like fullPage, quality, viewport)
- Links (links)
- JSON (json): structured output
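As a rough illustration of combining formats, here is a hedged sketch (Python + requests against the assumed v1 /scrape endpoint). The object form used for screenshot options is an assumption based on the fullPage, quality, and viewport options listed above and may differ in your API version.

```python
import os
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder API key

# Sketch: several output formats in one request. The screenshot options object
# is an assumption based on the options listed above.
body = {
    "url": "https://firecrawl.dev",
    "formats": [
        "markdown",
        "summary",
        "links",
        {"type": "screenshot", "fullPage": True, "quality": 80},
    ],
}

resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=body,
    timeout=120,
)
resp.raise_for_status()
data = resp.json()["data"]
print(list(data.keys()))  # e.g. markdown, summary, links, screenshot
```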
Extract structured data
/scrape (with json) endpoint
Used to extract structured data from scraped pages.
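A hedged sketch of schema-based extraction, again in Python with requests. The format object carrying a schema and prompt follows the parameters described under "JSON format options" below; the exact wrapper shape is an assumption.

```python
import os
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder API key

# Sketch: ask /scrape for structured JSON output guided by a JSON Schema.
# The format-object shape ({"type": "json", "schema": ..., "prompt": ...}) is assumed.
schema = {
    "type": "object",
    "properties": {
        "company_mission": {"type": "string"},
        "supports_sso": {"type": "boolean"},
    },
    "required": ["company_mission"],
}

resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "url": "https://firecrawl.dev",
        "formats": [{"type": "json", "schema": schema, "prompt": "Extract the company mission."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["data"]["json"])  # structured data matching the schema
```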
Extracting without schema
You can now extract without a schema by just passing a prompt to the endpoint. The LLM chooses the structure of the data.
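A minimal prompt-only sketch under the same assumptions (Python + requests, assumed v1 /scrape endpoint and json format object):

```python
import os
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder API key

# Sketch: no schema, only a prompt; the model decides the output structure.
resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "url": "https://firecrawl.dev",
        "formats": [{"type": "json", "prompt": "Extract the company mission from the page."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["data"]["json"])
```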
JSON format options
When using the json format, pass an object inside formats with the following parameters:
- schema: JSON Schema for the structured output.
- prompt: Optional prompt to help guide extraction when a schema is present or when you prefer light guidance.
Interacting with the page with Actions
Firecrawl allows you to perform various actions on a web page before scraping its content. This is particularly useful for interacting with dynamic content, navigating through pages, or accessing content that requires user interaction. Here is an example of how to use actions to navigate to google.com, search for Firecrawl, click on the first result, and take a screenshot. It is important to almost always use the wait action before/after executing other actions to give enough time for the page to load.
Example
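The original example code is not reproduced here; the sketch below shows the scenario described above (navigate to google.com, search for Firecrawl, click the first result, take a screenshot) in Python with requests. The action type names (wait, write, press, click, screenshot) and their fields are assumptions and may differ in your API version.

```python
import os
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder API key

# Sketch: perform page actions before scraping. Action names and fields are assumed.
actions = [
    {"type": "wait", "milliseconds": 2000},   # give the page time to load
    {"type": "write", "text": "firecrawl"},    # type into the focused search box
    {"type": "press", "key": "ENTER"},          # submit the search
    {"type": "wait", "milliseconds": 3000},    # wait for the results page
    {"type": "click", "selector": "h3"},        # click the first result (assumed selector)
    {"type": "wait", "milliseconds": 3000},
    {"type": "screenshot"},                      # capture the final page
]

resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"url": "https://google.com", "formats": ["markdown"], "actions": actions},
    timeout=180,
)
resp.raise_for_status()
data = resp.json()["data"]
print(data.get("actions", {}))  # screenshots and other action outputs, if returned
```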
Output
Location and Language
Specify country and preferred languages to get relevant content based on your target location and language preferences.
How it works
When you specify the location settings, Firecrawl will use an appropriate proxy if available and emulate the corresponding language and timezone settings. By default, the location is set to ‘US’ if not specified.
Usage
To use the location and language settings, include the location object in your request body with the following properties (see the sketch after this list):
- country: ISO 3166-1 alpha-2 country code (e.g., ‘US’, ‘AU’, ‘DE’, ‘JP’). Defaults to ‘US’.
- languages: An array of preferred languages and locales for the request in order of priority. Defaults to the language of the specified location.
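A hedged request sketch with location settings, under the same Python + requests assumptions as the earlier examples:

```python
import os
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder API key

# Sketch: scrape as if from Germany, preferring German-language content.
resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "url": "https://firecrawl.dev",
        "formats": ["markdown"],
        "location": {"country": "DE", "languages": ["de"]},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["data"]["markdown"][:300])
```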
Caching and maxAge
To make requests faster, Firecrawl serves results from cache by default when a recent copy is available (a request sketch follows the list).
- Default freshness window: maxAge = 172800000 ms (2 days). If a cached page is newer than this, it’s returned instantly; otherwise, the page is scraped and then cached.
- Performance: This can speed up scrapes by up to 5x when data doesn’t need to be ultra-fresh.
- Always fetch fresh: Set maxAge to 0.
- Avoid storing: Set storeInCache to false if you don’t want Firecrawl to cache/store results for this request.
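A hedged sketch showing both knobs. The parameter names are taken from the list above; placing them at the top level of the request body is an assumption.

```python
import os
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder API key

# Sketch: force a fresh scrape and keep the result out of the cache.
resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "url": "https://firecrawl.dev",
        "formats": ["markdown"],
        "maxAge": 0,            # 0 = never serve from cache
        "storeInCache": False,  # don't store this result for later requests
    },
    timeout=60,
)
resp.raise_for_status()
print(len(resp.json()["data"]["markdown"]))  # freshly scraped markdown length
```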
Batch scraping multiple URLs
You can now batch scrape multiple URLs at the same time. It takes the starting URLs and optional parameters as arguments. The params argument allows you to specify additional options for the batch scrape job, such as the output formats.
How it works
It is very similar to how the /crawl endpoint works. It submits a batch scrape job and returns a job ID to check the status of the batch scrape.
The SDK provides two methods, synchronous and asynchronous. The synchronous method will return the results of the batch scrape job, while the asynchronous method will return a job ID that you can use to check the status of the batch scrape.
Usage
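The original usage snippet is not reproduced here; below is a hedged sketch that submits a batch scrape job over the REST API. The /v1/batch/scrape endpoint path and the placement of formats at the top level of the body are assumptions.

```python
import os
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]  # placeholder API key

# Sketch: submit a batch scrape job for several URLs at once.
resp = requests.post(
    "https://api.firecrawl.dev/v1/batch/scrape",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "urls": ["https://firecrawl.dev", "https://docs.firecrawl.dev"],
        "formats": ["markdown"],
    },
    timeout=60,
)
resp.raise_for_status()
job = resp.json()
print(job["id"])  # job ID used to poll /batch/scrape/{id}
```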
Response
If you’re using the sync methods from the SDKs, it will return the results of the batch scrape job. Otherwise, it will return a job ID that you can use to check the status of the batch scrape.
Synchronous
Completed
Asynchronous
You can then use the job ID to check the status of the batch scrape by calling the /batch/scrape/{id} endpoint. This endpoint is meant to be used while the job is still running or right after it has completed, as batch scrape jobs expire after 24 hours.
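A hedged polling sketch against the assumed /v1/batch/scrape/{id} endpoint; the status payload field names (status, completed, total, data) are assumptions.

```python
import os
import time
import requests

API_KEY = os.environ["FIRECRAWL_API_KEY"]    # placeholder API key
JOB_ID = "your-batch-scrape-job-id"          # placeholder: ID returned by the submit step

# Sketch: poll the batch scrape job until it finishes (jobs expire after 24 hours).
while True:
    resp = requests.get(
        f"https://api.firecrawl.dev/v1/batch/scrape/{JOB_ID}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    status = resp.json()
    print(status.get("status"), status.get("completed"), "/", status.get("total"))
    if status.get("status") == "completed":
        break
    time.sleep(5)

for page in status.get("data", []):  # one entry per scraped URL
    print(page.get("metadata", {}).get("sourceURL"))
```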