How to set up BigQuery
Prerequisites
Before connecting BigQuery to Cargo, ensure you have:
- An active Google Cloud project with billing enabled
- The BigQuery API enabled
- A service account with appropriate permissions
Creating a service account
To connect Cargo to BigQuery, you need a service account with the necessary permissions. To create one in the Google Cloud console:
1. Go to the Google Cloud Console
2. Navigate to IAM & Admin → Service Accounts
3. Click Create service account
4. Give the service account a name (e.g., cargo-bigquery)
5. Grant the following roles:
   - BigQuery Data Editor – Read and write data in BigQuery tables
   - BigQuery Job User – Run queries and jobs
6. Click Done
7. Click on the newly created service account
8. Go to the Keys tab
9. Click Add Key → Create new key → JSON
10. Save the downloaded JSON file securely; you'll need its contents for the connection
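Before pasting the key into Cargo, you can sanity-check it with Google's Python client. This is a minimal sketch: it assumes the google-cloud-bigquery package is installed and that key.json is the file you just downloaded.

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# Load the downloaded key (hypothetical filename).
creds = service_account.Credentials.from_service_account_file("key.json")
client = bigquery.Client(credentials=creds, project=creds.project_id)

# A trivial query exercises the BigQuery Job User role end to end.
print(list(client.query("SELECT 1 AS ok").result()))
```

If this raises a 403, re-check the roles granted in step 5.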
Connection details
To set up the connection, provide the following details when creating the connector:

| Field | Description |
|---|---|
| Service Account | Paste the full JSON content of your service account key file |
BigQuery actions
Once connected, you can use BigQuery in your workflows with the following actions.

Insert
Insert new records into a BigQuery table.

Configuration
| Field | Description |
|---|---|
| Dataset | The BigQuery dataset containing the target table |
| Table | The table to insert data into |
| Mappings | Map columns to values using expressions |
Use cases
- Lead capture – Insert new leads from form submissions or enrichment workflows
- Event logging – Record workflow events and outcomes
- Data aggregation – Store computed results for reporting
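Cargo performs the insert for you; for intuition, the equivalent call with Google's Python client looks roughly like the sketch below. The project, dataset, table, and column names are hypothetical stand-ins for the fields above.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical target; in Cargo this comes from the Dataset and Table fields.
table_id = "my_project.leads.form_submissions"

# Each dict mirrors the action's column-to-value mappings.
rows = [{"email": "ada@example.com", "source": "webinar"}]

errors = client.insert_rows_json(table_id, rows)
if errors:
    raise RuntimeError(f"insert failed: {errors}")
```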
Update
Update existing records in a BigQuery table based on a matching column.

Configuration
| Field | Description |
|---|---|
| Dataset | The BigQuery dataset containing the target table |
| Table | The table to update |
| Matching Column | The column to match records against |
| Matching Value | The value to match (supports expressions) |
| Mappings | Map columns to new values using expressions |
Use cases
- Data enrichment – Update records with enriched data from external sources
- Status updates – Mark records as processed or update stages
- Sync external changes – Keep BigQuery in sync with CRM or other systems
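Conceptually this is a DML UPDATE, where Matching Column and Matching Value form the WHERE clause and the mappings form the SET clause. A sketch against the same hypothetical table:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Matching Column/Value -> WHERE; Mappings -> SET (hypothetical names).
sql = """
UPDATE `my_project.leads.form_submissions`
SET status = @status
WHERE email = @email
"""
client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("status", "STRING", "processed"),
            bigquery.ScalarQueryParameter("email", "STRING", "ada@example.com"),
        ]
    ),
).result()  # wait for the DML job to complete
```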
Upsert
Create new records or update existing ones based on a matching column.

Configuration
| Field | Description |
|---|---|
| Dataset | The BigQuery dataset containing the target table |
| Table | The table to upsert into |
| Matching Column | The column to match records against |
| Matching Value | The value to match (supports expressions) |
| Mappings | Map columns to values using expressions |
Use cases
- Data sync – Keep your warehouse updated regardless of whether records exist
- Idempotent operations – Safely retry operations without creating duplicates
- Master data management – Maintain a single source of truth
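In BigQuery, create-or-update maps naturally onto a MERGE statement: update when the matching column finds a row, insert when it does not. A sketch with the same hypothetical names:

```python
from google.cloud import bigquery

client = bigquery.Client()

# WHEN MATCHED updates the existing row; WHEN NOT MATCHED inserts a new one.
sql = """
MERGE `my_project.leads.form_submissions` t
USING (SELECT @email AS email, @status AS status) s
ON t.email = s.email
WHEN MATCHED THEN
  UPDATE SET t.status = s.status
WHEN NOT MATCHED THEN
  INSERT (email, status) VALUES (s.email, s.status)
"""
client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("email", "STRING", "ada@example.com"),
            bigquery.ScalarQueryParameter("status", "STRING", "processed"),
        ]
    ),
).result()
```

Because the match decides between update and insert, retrying the same payload cannot create duplicates, which is what makes the operation idempotent.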
Delete
Delete records from a BigQuery table based on a matching column.

Configuration
| Field | Description |
|---|---|
| Dataset | The BigQuery dataset containing the target table |
| Table | The table to delete from |
| Matching Column | The column to match records against |
| Matching Value | The value to match (supports expressions) |
Use cases
- Data cleanup – Remove outdated or invalid records
- GDPR compliance – Delete personal data on request
- Workflow automation – Remove processed records from staging tables
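This corresponds to a DML DELETE scoped by the matching column. A sketch with the same hypothetical names:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Matching Column/Value -> WHERE clause of the DELETE (hypothetical names).
client.query(
    "DELETE FROM `my_project.leads.form_submissions` WHERE email = @email",
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("email", "STRING", "ada@example.com")
        ]
    ),
).result()
```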
BigQuery data models
Cargo allows you to create data models on top of your BigQuery data that can be used to trigger Plays and power workflows.

Creating BigQuery data models
To create a BigQuery data model:
1. Navigate to Data Models in Cargo
2. Click Create data model
3. Select BigQuery as the source
4. Configure the following fields:
| Field | Description |
|---|---|
| Name | Choose a descriptive name for your model |
| Slug | Set a unique identifier that cannot be changed once created |
| Dataset | Select the BigQuery dataset containing your data |
| Table | Select the table or view to model |
| ID Column | The column containing unique record identifiers |
| Title Column | The column to display as the record title |
| Cursor Column | (Optional) Column for incremental syncing (date or number) |
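With a cursor column set, each sync only has to fetch rows whose cursor value is beyond the last value seen. Conceptually it behaves like the sketch below, where updated_at is a hypothetical cursor column and last_cursor is the value recorded after the previous sync.

```python
import datetime

from google.cloud import bigquery

client = bigquery.Client()

# Value recorded at the end of the previous sync (hypothetical).
last_cursor = datetime.datetime(2024, 1, 1, tzinfo=datetime.timezone.utc)

# Only rows past the cursor need to be read on this sync.
sql = """
SELECT id, company_name, updated_at
FROM `my_project.crm.accounts`
WHERE updated_at > @last_cursor
ORDER BY updated_at
"""
rows = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("last_cursor", "TIMESTAMP", last_cursor)
        ]
    ),
).result()
```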
Using BigQuery data models
Once created, your BigQuery data model can be used to:
- Trigger Plays – Start automated workflows when data changes
- Power enrichment – Use BigQuery data to enrich records in workflows
- Create segments – Filter and target specific records from your data
Required permissions
Ensure your service account has the following IAM roles:
- BigQuery Data Editor – Read and write data in BigQuery tables
- BigQuery Job User – Run queries and jobs
- BigQuery Data Viewer – (Optional) Read access to additional datasets outside your main dataset
Required permissions breakdown
| Permission | Purpose |
|---|---|
| bigquery.tables.create | Create new tables |
| bigquery.tables.getData | Read data from tables |
| bigquery.tables.updateData | Write data to tables |
| bigquery.jobs.create | Execute queries and jobs |
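A dry-run query is a cheap way to check that the credentials can create query jobs: BigQuery validates the query and permissions without executing a billable job. A minimal sketch, assuming GOOGLE_APPLICATION_CREDENTIALS points at the service account key:

```python
from google.cloud import bigquery

client = bigquery.Client()

# dry_run=True validates without running the job or billing any bytes.
job = client.query(
    "SELECT 1",
    job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False),
)
print(f"dry run ok, would process {job.total_bytes_processed} bytes")
```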
Network configuration
If you restrict access to your BigQuery instance, add these Cargo IP addresses to your VPC firewall rules:
- 3.251.34.134
- 54.220.135.99
- 79.125.105.52
Security
- All BigQuery connections authenticate through Google's standard OAuth 2.0 service account flow
- Service account keys are encrypted at rest
- Data in transit is encrypted using TLS
- Cargo never overwrites existing tables; it always creates its own

