Firecrawl is a web scraping API that turns any website into clean, structured data. Cargo’s native integration with Firecrawl allows you to scrape individual pages or crawl entire websites directly from your workflows.
## How to set up Firecrawl

You can use Firecrawl in two ways:

- Cargo credits – Use Firecrawl through Cargo’s managed integration
- Your own API key – Connect your own Firecrawl account
### Connection details

| Field | Description |
|---|---|
| API Key | Your Firecrawl API key |
Get your API key from the Firecrawl dashboard.
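If you connect your own key, the same key also works against Firecrawl’s public REST API. A minimal sketch of the Bearer-token authentication header the API expects (the key value below is a placeholder):

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers a Firecrawl REST call needs: the API key is
    sent as a Bearer token alongside a JSON content type."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = auth_headers("fc-YOUR-API-KEY")  # placeholder key
```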
## Firecrawl actions

### Scrape

Extract content from a single URL.
#### Use cases

- Page content extraction – Get clean text from any webpage
- Product data – Scrape product information from e-commerce sites
- News monitoring – Extract articles and updates
- Research – Gather information from multiple sources
#### Configuration

| Field | Description |
|---|---|
| URL | URL of the page to scrape |

Credit cost: 0.05 credits per page
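Outside a workflow, the same scrape can be issued directly against Firecrawl’s REST API. A sketch, assuming Firecrawl’s v1 `/scrape` endpoint and its `url`/`formats` request fields; the helper name and API key are illustrative:

```python
import json
from urllib import request

SCRAPE_ENDPOINT = "https://api.firecrawl.dev/v1/scrape"

def build_scrape_request(api_key: str, url: str) -> request.Request:
    """Assemble a POST to the scrape endpoint asking for clean markdown
    back; nothing is sent until urlopen() is called on the result."""
    payload = {"url": url, "formats": ["markdown"]}
    return request.Request(
        SCRAPE_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_scrape_request("fc-YOUR-API-KEY", "https://example.com/pricing")
# request.urlopen(req) would perform the scrape (requires a valid key).
```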
### Crawl

Recursively crawl a website and extract content from multiple pages.
#### Use cases

- Site-wide extraction – Crawl documentation or knowledge bases
- Competitive analysis – Extract content from competitor sites
- Content aggregation – Gather content across multiple pages
- Data collection – Build datasets from websites
#### Configuration

| Field | Description |
|---|---|
| URL | Starting URL for the crawl |
| Maximum depth | How deep to crawl (0 = single page only) |
| Includes | Path patterns to include |
| Excludes | Path patterns to exclude |
| Limit | Maximum pages to crawl |
#### Options

| Option | Description |
|---|---|
| Ignore sitemap | Don’t use the site’s sitemap for URL discovery |
| Allow backward links | Allow crawling pages outside the starting URL’s path (e.g. parent or sibling pages) |
| Allow external links | Follow links to external websites |

Use the Includes and Excludes patterns to focus your crawl on specific sections of a website.

Credit cost: 0.05 credits per page crawled
## Firecrawl data models

Create data models from crawled websites to power your workflows: Firecrawl crawls the site and creates a record from each page.
### Configuration

| Field | Description |
|---|---|
| URL | Starting URL for the crawl |
| Maximum depth | How deep to crawl (0 = single page only) |
| Includes | Path patterns to include |
| Excludes | Path patterns to exclude |
| Limit | Maximum pages to crawl (default: 10,000) |
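To illustrate the record-per-page behavior, here is a sketch of turning crawl results into data-model records. The page shape (`url`, `markdown`, `metadata.title`) follows Firecrawl’s typical crawl output; the record structure itself is purely illustrative, not Cargo’s actual schema:

```python
def pages_to_records(pages: list) -> list:
    """One record per crawled page, keyed by URL so that re-crawling
    the same site updates existing records instead of duplicating them."""
    records = []
    for page in pages:
        records.append({
            "id": page["url"],  # URL as the unique key
            "title": page.get("metadata", {}).get("title", ""),
            "content": page.get("markdown", ""),
        })
    return records

sample = [{
    "url": "https://docs.example.com/intro",
    "markdown": "# Intro\nWelcome.",
    "metadata": {"title": "Intro"},
}]
records = pages_to_records(sample)
```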