Each example shows the plain-English prompt, what the agent does step by step, and the exact CLI commands it runs.

Run a workflow on a single record

Prompt:
Run the lead enrichment tool on Acme Inc. (domain: acme.com)
What the agent does:
  1. cargo-ai orchestration tool list — finds workflowUuid for the enrichment tool
  2. cargo-ai orchestration run create with the record data, waits for completion
  3. Parses and presents the result
cargo-ai orchestration run create \
  --workflow-uuid <tool.workflowUuid> \
  --data '{"company":"Acme Inc.","domain":"acme.com"}' \
  --wait-until-finished
run create works with tool workflows only. For plays, use batch create instead.
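Step 3 (parsing the result) can be sketched with jq. The `.run.output` path below is an assumption for illustration only, not a documented field; inspect the actual `run create` response before relying on any path:

```shell
# Hypothetical run response; the `output` field name is an assumption
# for illustration -- check the real `run create` payload first.
run='{"run":{"status":"success","output":{"company":"Acme Inc.","employees":250}}}'

# Pull out just the enrichment result for presentation.
echo "$run" | jq '.run.output'
```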

Trigger a batch play across a segment

Prompt:
Trigger the MQL scoring play on all leads added this week.
What the agent does:
  1. cargo-ai orchestration play list — finds workflowUuid and segmentUuid
  2. cargo-ai orchestration batch create with the segment
  3. Polls batch get every 5 seconds until terminal status
  4. Reports runsCount, executedRunsCount, failedRunsCount
# Discover the play
cargo-ai orchestration play list

# Trigger with a segment
cargo-ai orchestration batch create \
  --workflow-uuid <play.workflowUuid> \
  --data '{"kind":"segment","segmentUuid":"<play.segmentUuid>"}' \
  --wait-until-finished

# Check result
cargo-ai orchestration batch get <batch-uuid>
# → {"batch":{"runsCount":450,"executedRunsCount":450,"failedRunsCount":3,...}}
A batch with status: "success" can still contain individual run failures. Always inspect failedRunsCount and re-queue as needed.
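As a quick guard, the count can be pulled out with jq (sample payload inline, with the illustrative values from above; in practice, pipe the real `batch get` output):

```shell
# Sample `batch get` payload; in practice, pipe the output of
# `cargo-ai orchestration batch get <batch-uuid>`.
batch='{"batch":{"status":"success","runsCount":450,"executedRunsCount":450,"failedRunsCount":3}}'

failed=$(echo "$batch" | jq '.batch.failedRunsCount')
if [ "$failed" -gt 0 ]; then
  echo "warning: $failed failed run(s) despite batch status success"
fi
```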

Query your data warehouse

Prompt:
How many companies in our model have more than 500 employees and are headquartered in the US?
What the agent does:
  1. cargo-ai storage model list — finds the Companies model UUID
  2. cargo-ai storage model get-ddl <uuid> — gets the exact table name
  3. Composes and executes a SQL query
# Step 1: Get the DDL (always required before querying)
cargo-ai storage model get-ddl <model-uuid>
# → {"ddl":"CREATE TABLE `datasets_default.models_companies` (name STRING, domain STRING, employee_count INT64, country STRING, ...)"}

# Step 2: Execute the query using the exact table name from DDL
cargo-ai system-of-record client query \
  "SELECT COUNT(*) as total FROM datasets_default.models_companies WHERE employee_count > 500 AND country = 'US'"
# → {"outcome":"queried","rows":[{"total":142}]}
Never guess the table name. Always run model get-ddl first — the format is datasets_default.models_<slug>.
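To make that concrete, the table name can be read straight out of the `get-ddl` response rather than typed by hand (sample payload inline, matching the response shape above):

```shell
# Sample `get-ddl` payload; in practice, pipe the output of
# `cargo-ai storage model get-ddl <model-uuid>`.
ddl='{"ddl":"CREATE TABLE `datasets_default.models_companies` (name STRING, domain STRING)"}'

# The exact table name sits between the first pair of backticks in the DDL.
table=$(echo "$ddl" | jq -r '.ddl' | sed -n 's/.*`\([^`]*\)`.*/\1/p')
echo "$table"
# → datasets_default.models_companies
```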

Build a workflow from scratch

Prompt:
Build a workflow that enriches company domains with Clearbit and writes the results back to the Companies model.
What the agent does:
  1. cargo-ai connection connector list --integration-slug clearbit — finds connectorUuid
  2. cargo-ai connection integration get clearbit — gets available actionSlug values and their config schemas
  3. cargo-ai storage model list — finds the Companies model UUID
  4. Assembles and validates a node graph, then executes
# Step 1: Find the Clearbit connector
cargo-ai connection connector list --integration-slug clearbit
# → [{"uuid":"clearbit-connector-uuid","name":"Clearbit - Prod","integrationSlug":"clearbit"}]

# Step 2: Discover available actions
cargo-ai connection integration get clearbit
# → includes actionSlug: "company_enrich", required inputs: ["domain"]

# Step 3: Validate the node graph
cargo-ai orchestration node validate --nodes '[
  {
    "uuid":"00000000-0001-0000-0000-000000000000",
    "slug":"start",
    "kind":"native",
    "actionSlug":"start",
    "config":{},
    "childrenUuids":["00000000-0002-0000-0000-000000000000"],
    "fallbackOnFailure":false,
    "position":{"x":0,"y":0}
  },
  {
    "uuid":"00000000-0002-0000-0000-000000000000",
    "slug":"enrich",
    "kind":"connector",
    "integrationSlug":"clearbit",
    "actionSlug":"company_enrich",
    "connectorUuid":"<clearbit-connector-uuid>",
    "config":{
      "domain":{"kind":"templateExpression","expression":"{{nodes.start.domain}}","instructTo":"none","fromRecipe":false}
    },
    "childrenUuids":["00000000-0003-0000-0000-000000000000"],
    "fallbackOnFailure":false,
    "position":{"x":166,"y":0}
  },
  {
    "uuid":"00000000-0003-0000-0000-000000000000",
    "slug":"end",
    "kind":"native",
    "actionSlug":"end",
    "config":{},
    "childrenUuids":[],
    "fallbackOnFailure":false,
    "position":{"x":332,"y":0}
  }
]'
# → {"outcome":"valid"}

# Step 4: Run it
cargo-ai orchestration run create \
  --workflow-uuid <tool.workflowUuid> \
  --data '{"domain":"acme.com"}' \
  --nodes '[...]' \
  --wait-until-finished

Create and deploy an AI agent

Prompt:
Create a lead scoring agent using GPT-4o-mini with a temperature of 0.0, focused on ICP fit.
What the agent does:
  1. cargo-ai ai agent create — creates the agent resource
  2. cargo-ai ai release get-draft — fetches the editable draft
  3. cargo-ai ai release update-draft — sets model, prompt, temperature
  4. cargo-ai ai release deploy-draft — publishes the configuration
# Step 1: Create the agent
cargo-ai ai agent create \
  --name "ICP Lead Scorer" \
  --icon-color green \
  --icon-face "📊" \
  --description "Scores leads based on ICP fit from 1 to 10"
# → {"agent":{"uuid":"agent-uuid",...}}

# Step 2: Get the draft
cargo-ai ai release get-draft --agent-uuid <agent-uuid>

# Step 3: Update draft config
cargo-ai ai release update-draft --agent-uuid <agent-uuid> \
  --system-prompt "You are a B2B lead scoring assistant. Given a company's profile, score its fit with our ICP from 1 to 10 and explain your reasoning in 2 sentences." \
  --language-model-slug gpt-4o-mini \
  --temperature 0.0 \
  --max-steps 5

# Step 4: Deploy
cargo-ai ai release deploy-draft --agent-uuid <agent-uuid> \
  --integration-slug openai \
  --language-model-slug gpt-4o-mini \
  --tools '[]' \
  --mcp-clients '[]' \
  --resources '[]' \
  --capabilities '[]' \
  --suggested-actions '[]' \
  --description "Initial deployment — ICP scorer v1"
Model selection guide:
| Task | Recommended model | Temperature |
| --- | --- | --- |
| Scoring, classification, extraction | gpt-4o-mini or claude-3-5-haiku | 0.0–0.2 |
| Research, summarization, analysis | gpt-4o or claude-3-5-sonnet | 0.2–0.5 |
| Personalized outreach, copywriting | gpt-4o or claude-3-5-sonnet | 0.5–0.8 |
| Creative brainstorming | gpt-4o or claude-opus | 0.7–1.0 |

Monitor workflow health

Prompt:
Show me the error rate for the CRM sync play over the last 7 days, broken down by node.
What the agent does:
  1. cargo-ai orchestration play list — finds workflowUuid
  2. cargo-ai orchestration run get-metrics with a date range
  3. Computes errorExecutionsCount / totalExecutionsCount per node
cargo-ai orchestration run get-metrics \
  --workflow-uuid <uuid> \
  --created-after 2025-01-12 \
  --created-before 2025-01-19
Response:
{
  "runMetrics": [
    {
      "nodeUuid": "enrich-node-uuid",
      "totalExecutionsCount": 1000,
      "successExecutionsCount": 950,
      "errorExecutionsCount": 48,
      "cancelledExecutionsCount": 2,
      "creditsUsedCount": 450
    }
  ]
}
Error rate = 48 / 1000 = 4.8%.

To re-queue the failed records:
# 1. Download failed runs
cargo-ai orchestration run download \
  --workflow-uuid <uuid> \
  --statuses error \
  --created-after 2025-01-12 > failed_runs.json

# 2. Re-run only the failed records
RECORD_IDS=$(jq '[.[].recordId]' failed_runs.json)
cargo-ai orchestration batch create \
  --workflow-uuid <uuid> \
  --data "{\"kind\":\"recordIds\",\"recordIds\":$RECORD_IDS}"
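The per-node arithmetic from step 3 can be scripted with jq (sample metrics inline, using the counts from the response above; multiplying before dividing keeps the percentage exact):

```shell
# Sample `get-metrics` payload; in practice, pipe the output of
# `cargo-ai orchestration run get-metrics ...`.
metrics='{"runMetrics":[{"nodeUuid":"enrich-node-uuid","totalExecutionsCount":1000,"errorExecutionsCount":48}]}'

# One line per node: error executions as a percentage of total.
echo "$metrics" | jq -r '.runMetrics[] | "\(.nodeUuid): \(.errorExecutionsCount * 100 / .totalExecutionsCount)% error rate"'
# → enrich-node-uuid: 4.8% error rate
```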

Export segment data

Prompt:
Export all US companies with fewer than 200 employees, sorted by creation date descending.
What the agent does:
  1. cargo-ai storage model list — finds the Companies model UUID
  2. cargo-ai segmentation segment download with filter and sort
cargo-ai segmentation segment download \
  --model-uuid <companies-model-uuid> \
  --filter '{
    "conjonction": "and",
    "groups": [{
      "conjonction": "and",
      "conditions": [
        {"kind":"string","columnSlug":"country","operator":"is","values":["US"]},
        {"kind":"number","columnSlug":"employee_count","operator":"lowerThan","value":200}
      ]
    }]
  }' \
  --sort '[{"columnSlug":"created_at","kind":"desc"}]'
Use --model-uuid, not --segment-uuid. The filter and sort flags use the same JSON schema as the Cargo UI segment builder.

Bootstrap a new workspace

Prompt:
Set up a fresh Cargo workspace: Companies and Contacts models, a HubSpot connector, and a GPT-4o scoring agent.
What the agent does (full sequence across 4 skills):
# 1. Workspace — create folders for organization
cargo-ai workspace folder create --name "Enrichment" --emoji-slug "sparkles" --kind tool
cargo-ai workspace folder create --name "Scoring" --emoji-slug "chart-bar" --kind play

# 2. Storage — create models and columns
cargo-ai storage dataset list   # → datasetUuid
cargo-ai storage model create --slug companies --name "Companies" --dataset-uuid <uuid> --extractor-slug csv --config '{}'
cargo-ai storage model create --slug contacts --name "Contacts" --dataset-uuid <uuid> --extractor-slug csv --config '{}'

cargo-ai storage column create --model-uuid <companies-uuid> \
  --column '{"slug":"domain","type":"string","label":"Domain","kind":"custom"}'
cargo-ai storage column create --model-uuid <companies-uuid> \
  --column '{"slug":"employee_count","type":"number","label":"Employees","kind":"custom"}'
cargo-ai storage column create --model-uuid <contacts-uuid> \
  --column '{"slug":"email","type":"string","label":"Email","kind":"custom"}'

# 3. Set relationship: Contacts → Companies
cargo-ai storage relationship set \
  --from-model-uuid <contacts-uuid> \
  --to-model-uuid <companies-uuid>

# 4. Connection — create HubSpot connector
cargo-ai connection connector create \
  --integration-slug hubspot \
  --slug hubspot_prod \
  --name "HubSpot - Production"

# 5. AI — create and deploy scoring agent
cargo-ai ai agent create --name "Lead Scorer" --icon-color green --icon-face "📊"
cargo-ai ai release update-draft --agent-uuid <uuid> \
  --system-prompt "Score leads from 1–10 based on company fit." \
  --language-model-slug gpt-4o-mini \
  --temperature 0.0
cargo-ai ai release deploy-draft --agent-uuid <uuid> \
  --integration-slug openai --language-model-slug gpt-4o-mini \
  --tools '[]' --resources '[]' --capabilities '[]' \
  --suggested-actions '[]' --mcp-clients '[]' \
  --description "Initial scoring agent"

Chat with an AI agent

Prompt:
Ask the lead researcher agent to find the VP of Sales at Acme Corp.
# 1. Find the agent
cargo-ai ai agent list
# → [{"uuid":"agent-uuid","name":"Lead Researcher"}]

# 2. Create a chat session
cargo-ai ai chat create \
  --trigger '{"type":"draft"}' \
  --agent-uuid <agent-uuid> \
  --name "Acme research"
# → {"chat":{"uuid":"chat-uuid",...}}

# 3. Send a message
cargo-ai ai message create \
  --chat-uuid <chat-uuid> \
  --parts '[{"type":"text","text":"Find the VP of Sales at Acme Corp and their LinkedIn URL."}]'
# → {"userMessage":{"uuid":"..."},"assistantMessage":{"uuid":"assistant-msg-uuid","status":"pending"}}

# 4. Poll for the response (every 2 seconds)
cargo-ai ai message get <assistant-msg-uuid>
# → Terminal when status is "success" — read .message.parts[].text

# 5. Continue the conversation (context is preserved in the chat)
cargo-ai ai message create \
  --chat-uuid <chat-uuid> \
  --parts '[{"type":"text","text":"Now find their email address."}]'
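The polling in step 4 follows the same pattern used throughout this page, so it can be factored into a small helper. This is a sketch: the wrapped command is a placeholder for something like `cargo-ai ai message get <uuid> | jq -r '.message.status'` (that jq path is an assumption), and the terminal status names are taken from the examples on this page:

```shell
# Poll a status-printing command until it reports a terminal status,
# then print that status. "$@" is the command to run each iteration,
# e.g. a wrapper around `cargo-ai ai message get <uuid>` piped through jq.
poll_until_done() {
  while :; do
    status=$("$@")
    case "$status" in
      success|error|cancelled) echo "$status"; return ;;
    esac
    sleep 2
  done
}
```

The same helper works for `batch get` polling; only the wrapped command (and, if you like, the sleep interval) changes.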

Track credit usage

Prompt:
How many credits did the enrichment play consume last month, broken down by day?
cargo-ai billing usage get-metrics \
  --from 2025-01-01 \
  --to 2025-01-31 \
  --workflow-uuid <enrichment-play.workflowUuid>
Group by connector to see which enrichment provider costs the most:
cargo-ai billing usage get-metrics \
  --from 2025-01-01 \
  --to 2025-01-31 \
  --group-by connector_uuid
Check remaining credits:
cargo-ai billing subscription get
# → {"subscription":{"plan":"self-serve","subscriptionAvailableCreditsCount":10000,"subscriptionCreditsUsedCount":3200,"resetAt":"2025-02-01T00:00:00Z"}}
# → Remaining: 10000 - 3200 = 6800 credits
Invoice amounts returned by subscription get-invoices are in cents. Divide by 100 for dollars.
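Both bits of arithmetic (remaining credits, cents to dollars) are one-liners with jq. Sample payloads are inline below; the `amount` field name in the invoice example is an assumption for illustration, not a documented field:

```shell
# Sample `subscription get` payload (values from the example above).
sub='{"subscription":{"subscriptionAvailableCreditsCount":10000,"subscriptionCreditsUsedCount":3200}}'
echo "$sub" | jq '.subscription | .subscriptionAvailableCreditsCount - .subscriptionCreditsUsedCount'
# → 6800

# Hypothetical invoice line: amounts are in cents, so divide by 100.
# The `amount` field name is an assumption for illustration.
echo '{"amount":12999}' | jq '.amount / 100'
# → 129.99
```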

Best practices

Run cargo-ai whoami and share the output with your agent if it seems confused about which workspace to target. If you work across multiple workspaces, make the active one explicit: cargo-ai login --token <token> --workspace-uuid <uuid>.
Start a session by telling your agent what exists: “I have Companies and Contacts models, a HubSpot connector, and a lead enrichment tool called ‘Enrich from Clearbit’.” This lets the agent confirm UUIDs rather than guessing, and saves multiple discovery round-trips.
The table name for a Cargo model is not guessable. Always run storage model get-ddl <uuid> before writing queries. The DDL includes the exact table name (e.g., datasets_default.models_companies) and every column’s type.
Start every new workflow with 1 record, then 50, then 500, before running on your full segment. This surfaces connector rate limit issues before they affect thousands of records. Only connector nodes (kind: "connector") have rate limits — native nodes do not.
A batch reporting status: "success" can still have individual run failures. Check failedRunsCount in batch get, download failures with run download --statuses error, and re-queue with batch create --data '{"kind":"recordIds","recordIds":[...]}'.
For frequently used workflows or segments, note their UUIDs in a .cargo-context.md in your project root. Your agent will use them directly instead of re-running discovery commands every session.
# Cargo Workspace Context

## Models
- Companies: `model-uuid-companies`
- Contacts: `model-uuid-contacts`

## Key workflows
- Lead Enrichment Tool: `workflow-uuid-enrichment`
- MQL Scoring Play: `workflow-uuid-scoring`

## Connectors
- HubSpot: `connector-uuid-hubspot`
- Clearbit: `connector-uuid-clearbit`
For quick runs (single records, small batches under 100 records), --wait-until-finished is the simplest pattern. For large batches (1000+ records), poll manually with batch get so your agent can report incremental progress without a timeout.

Next steps

CLI Overview

Command reference, UUID flows, filter syntax, async patterns, and gotchas.

Cargo Skills on GitHub

Browse skill source, report issues, or contribute improvements.