Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze data using standard SQL. Cargo's native integration with Redshift lets you use it as your system of record, powering data models, Plays, and automated workflows.
How to set up Redshift
Prerequisites
Before connecting Redshift to Cargo, ensure you have:
- An active Amazon Redshift cluster
- Network connectivity between Cargo and your Redshift cluster
- A dedicated schema and user for Cargo
- Proper IAM permissions and database credentials
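If you have not yet provisioned the dedicated schema and user, a minimal setup in Redshift SQL looks roughly like the following. The schema name `cargo`, the user `cargo_user`, and the `analytics` source schema are placeholders; adapt the names, password, and grants to your own environment and the data Cargo should reach.

```sql
-- Run as an admin user. All names and the password are placeholders.
CREATE SCHEMA IF NOT EXISTS cargo;

-- Dedicated service user for Cargo
CREATE USER cargo_user PASSWORD 'REPLACE_WITH_A_STRONG_PASSWORD';

-- Let Cargo create and manage objects in its own schema
GRANT USAGE, CREATE ON SCHEMA cargo TO cargo_user;
GRANT ALL ON ALL TABLES IN SCHEMA cargo TO cargo_user;

-- Read-only access to an existing schema you want to model (example: analytics)
GRANT USAGE ON SCHEMA analytics TO cargo_user;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO cargo_user;
```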
Connection details
To set up the connection, provide the following details when creating the connector:
| Field | Description |
|---|---|
| Host | Your Redshift endpoint (e.g., cluster-id.region.redshift.amazonaws.com) |
| Port | Default is 5439 |
| Database | Your database name |
| Username | The Cargo service user (e.g., cargo_user) |
| Password | The user’s password |
Find your Redshift endpoint in the AWS Console under Redshift → Clusters → Your Cluster → General information.
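Before saving the connector, you can confirm the credentials from any SQL client. The checks below assume the example `cargo_user` and `cargo` schema from the prerequisites:

```sql
-- Quick sanity checks for the Cargo service user (example names)
SELECT current_user, current_database();
SELECT has_schema_privilege('cargo_user', 'cargo', 'create');
```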
Redshift actions
Once connected, you can use Redshift in your workflows with the following actions:
Insert
Insert new records into a Redshift table.
Configuration
| Field | Description |
|---|---|
| Schema | The Redshift schema containing the target table |
| Table | The table to insert data into |
| Mappings | Map columns to values using expressions |
Use cases
- Lead capture – Insert new leads from form submissions or enrichment workflows
- Event logging – Record workflow events and outcomes
- Data aggregation – Store computed results for reporting
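Conceptually, the Insert action corresponds to a plain SQL INSERT built from your mappings; the exact statement Cargo issues may differ. The `cargo.leads` table and its columns below are made-up examples:

```sql
-- Illustrative only: cargo.leads and its columns are example names
INSERT INTO cargo.leads (email, company, source, created_at)
VALUES ('jane@example.com', 'Acme Inc', 'webform', GETDATE());
```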
Update
Update existing records in a Redshift table based on a matching column.
Configuration
| Field | Description |
|---|---|
| Schema | The Redshift schema containing the target table |
| Table | The table to update |
| Matching Column | The column to match records against |
| Matching Value | The value to match (supports expressions) |
| Mappings | Map columns to new values using expressions |
Use cases
- Data enrichment – Update records with enriched data from external sources
- Status updates – Mark records as processed or update stages
- Sync external changes – Keep Redshift in sync with CRM or other systems
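In SQL terms, this behaves like an UPDATE filtered on the matching column; the statement Cargo actually runs may differ. A sketch using the same made-up `cargo.leads` table:

```sql
-- Illustrative only: Matching Column = email, Matching Value = 'jane@example.com'
UPDATE cargo.leads
SET status = 'enriched',
    company = 'Acme Inc'
WHERE email = 'jane@example.com';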
Upsert
Create new records or update existing ones based on a matching column.
Configuration
| Field | Description |
|---|---|
| Schema | The Redshift schema containing the target table |
| Table | The table to upsert into |
| Matching Column | The column to match records against |
| Matching Value | The value to match (supports expressions) |
| Mappings | Map columns to values using expressions |
Use cases
- Data sync – Keep your warehouse updated regardless of whether records exist
- Idempotent operations – Safely retry operations without creating duplicates
- Master data management – Maintain a single source of truth
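Cargo's internal implementation isn't documented here, but the behavior matches a standard Redshift merge pattern: stage the incoming row, update on a match, insert otherwise. A sketch with made-up names (`stage_leads`, `cargo.leads`):

```sql
-- Illustrative only: stage_leads and cargo.leads are example names
CREATE TEMP TABLE stage_leads (email VARCHAR(256), company VARCHAR(256));
INSERT INTO stage_leads VALUES ('jane@example.com', 'Acme Inc');

MERGE INTO cargo.leads
USING stage_leads
ON cargo.leads.email = stage_leads.email
WHEN MATCHED THEN UPDATE SET company = stage_leads.company
WHEN NOT MATCHED THEN INSERT (email, company)
    VALUES (stage_leads.email, stage_leads.company);
```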
Delete
Delete records from a Redshift table based on a matching column.
Configuration
| Field | Description |
|---|---|
| Schema | The Redshift schema containing the target table |
| Table | The table to delete from |
| Matching Column | The column to match records against |
| Matching Value | The value to match (supports expressions) |
Use cases
- Data cleanup – Remove outdated or invalid records
- GDPR compliance – Delete personal data on request
- Workflow automation – Remove processed records from staging tables
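As with the other actions, this corresponds to a simple statement filtered on the matching column; the table and values below are examples only:

```sql
-- Illustrative only: Matching Column = lead_id, Matching Value = 42
DELETE FROM cargo.staging_leads
WHERE lead_id = 42;
```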
Redshift data models
Cargo allows you to create data models on top of your Redshift data that can be used to trigger Plays and power workflows.
Creating Redshift data models
To create a Redshift data model:
1. Navigate to Data Models in Cargo
2. Click Create data model
3. Select Redshift as the source
4. Configure the following fields:
| Field | Description |
|---|---|
| Name | Choose a descriptive name for your model |
| Slug | Set a unique identifier that cannot be changed once created |
| Schema | Select the Redshift schema containing your data |
| Table | Select the table or view to model |
| ID Column | The column containing unique record identifiers |
| Title Column | The column to display as the record title |
| Cursor Column | (Optional) Column for incremental syncing (date or number) |
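As an example of a shape that works well here, the view below (all names are illustrative) exposes an ID column, a title column, and an `updated_at` timestamp that can serve as the cursor for incremental syncing:

```sql
-- Illustrative only: a view shaped for use as a Cargo data model
CREATE VIEW analytics.high_intent_accounts AS
SELECT
    account_id,    -- ID Column: unique identifier per record
    account_name,  -- Title Column: human-readable label
    intent_score,
    updated_at     -- Cursor Column: date/number used for incremental syncing
FROM analytics.accounts
WHERE intent_score >= 80;
```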
Using Redshift data models
Once created, your Redshift data model can be used to:
- Trigger Plays – Start automated workflows when data changes
- Power enrichment – Use Redshift data to enrich records in workflows
- Create segments – Filter and target specific records from your data
Network configuration
If you restrict access to your Redshift cluster, add these Cargo IP addresses to your security group or VPC whitelist:
- 3.251.34.134
- 54.220.135.99
- 79.125.105.52
Update via AWS CLI
```bash
# Authorize inbound access on the Redshift port for one Cargo IP.
# Run once per Cargo IP address (3.251.34.134, 54.220.135.99, 79.125.105.52).
aws ec2 authorize-security-group-ingress \
  --group-id sg-your-security-group-id \
  --protocol tcp \
  --port 5439 \
  --cidr 3.251.34.134/32
```
Security
- All Redshift connections are encrypted using SSL/TLS
- Credentials are securely stored and encrypted at rest
- Cargo uses dedicated user credentials with minimal required permissions
- Cargo never overwrites existing tables—it always creates its own