How many companies in our model have more than 500 employees and are headquartered in the US?
What the agent does:
- `cargo-ai storage model list` — finds the Companies model UUID
- `cargo-ai storage model get-ddl <uuid>` — gets the exact table name
- Composes and executes a SQL query
```shell
# Step 1: Get the DDL (always required before querying)
cargo-ai storage model get-ddl <model-uuid>
# → {"ddl":"CREATE TABLE `datasets_default.models_companies` (name STRING, domain STRING, employee_count INT64, country STRING, ...)"}

# Step 2: Execute the query using the exact table name from the DDL
cargo-ai system-of-record client query \
  "SELECT COUNT(*) as total FROM datasets_default.models_companies WHERE employee_count > 500 AND country = 'US'"
# → {"outcome":"queried","rows":[{"total":142}]}
```
Never guess the table name. Always run `model get-ddl` first — the format is `datasets_default.models_<slug>`.
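The step above can be scripted so the table name is never retyped. A minimal sketch, with the get-ddl response hard-coded to the shape shown above (in practice it comes from `cargo-ai storage model get-ddl <model-uuid>`):

```shell
# Stand-in for the get-ddl response (shape shown above); in practice:
#   ddl_json=$(cargo-ai storage model get-ddl <model-uuid>)
ddl_json='{"ddl":"CREATE TABLE `datasets_default.models_companies` (name STRING, country STRING)"}'

# Pull the table name out from between the backticks, in plain POSIX shell.
table=${ddl_json#*\`}   # drop everything up to the first backtick
table=${table%%\`*}     # drop everything after the table name
echo "$table"
# → datasets_default.models_companies
```

Interpolate `$table` into the `system-of-record client query` string rather than copying the name by hand.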
Create a lead scoring agent using GPT-4o-mini with a temperature of 0.0, focused on ICP fit.
What the agent does:
- `cargo-ai ai agent create` — creates the agent resource
- `cargo-ai ai release get-draft` — fetches the editable draft
- `cargo-ai ai release update-draft` — sets the model, prompt, and temperature
- `cargo-ai ai release deploy-draft` — publishes the configuration
```shell
# Step 1: Create the agent
cargo-ai ai agent create \
  --name "ICP Lead Scorer" \
  --icon-color green \
  --icon-face "📊" \
  --description "Scores leads based on ICP fit from 1 to 10"
# → {"agent":{"uuid":"agent-uuid",...}}

# Step 2: Get the draft
cargo-ai ai release get-draft --agent-uuid <agent-uuid>

# Step 3: Update the draft config
cargo-ai ai release update-draft --agent-uuid <agent-uuid> \
  --system-prompt "You are a B2B lead scoring assistant. Given a company's profile, score its fit with our ICP from 1 to 10 and explain your reasoning in 2 sentences." \
  --language-model-slug gpt-4o-mini \
  --temperature 0.0 \
  --max-steps 5

# Step 4: Deploy
cargo-ai ai release deploy-draft --agent-uuid <agent-uuid> \
  --integration-slug openai \
  --language-model-slug gpt-4o-mini \
  --tools '[]' \
  --mcp-clients '[]' \
  --resources '[]' \
  --capabilities '[]' \
  --suggested-actions '[]' \
  --description "Initial deployment — ICP scorer v1"
```
Ask the lead researcher agent to find the VP of Sales at Acme Corp.
```shell
# 1. Find the agent
cargo-ai ai agent list
# → [{"uuid":"agent-uuid","name":"Lead Researcher"}]

# 2. Create a chat session
cargo-ai ai chat create \
  --trigger '{"type":"draft"}' \
  --agent-uuid <agent-uuid> \
  --name "Acme research"
# → {"chat":{"uuid":"chat-uuid",...}}

# 3. Send a message
cargo-ai ai message create \
  --chat-uuid <chat-uuid> \
  --parts '[{"type":"text","text":"Find the VP of Sales at Acme Corp and their LinkedIn URL."}]'
# → {"userMessage":{"uuid":"..."},"assistantMessage":{"uuid":"assistant-msg-uuid","status":"pending"}}

# 4. Poll for the response (every 2 seconds)
cargo-ai ai message get <assistant-msg-uuid>
# → Terminal when status is "success" — read .message.parts[].text

# 5. Continue the conversation (context is preserved in the chat)
cargo-ai ai message create \
  --chat-uuid <chat-uuid> \
  --parts '[{"type":"text","text":"Now find their email address."}]'
```
Run `cargo-ai whoami` and share the output with your agent if it seems confused about which workspace to target. If you work across multiple workspaces, make the active one explicit: `cargo-ai login --token <token> --workspace-uuid <uuid>`.
Front-load context about your workspace
Start a session by telling your agent what exists: “I have Companies and Contacts models, a HubSpot connector, and a lead enrichment tool called ‘Enrich from Clearbit’.” This lets the agent confirm UUIDs rather than guessing, and saves multiple discovery round-trips.
Get the DDL before any SQL query
The table name for a Cargo model is not guessable. Always run `storage model get-ddl <uuid>` before writing queries. The DDL includes the exact table name (e.g., `datasets_default.models_companies`) and every column's type.
Ramp batch sizes gradually
Start every new workflow with 1 record, then 50, then 500, before running on your full segment. This surfaces connector rate-limit issues before they affect thousands of records. Only connector nodes (`kind: "connector"`) have rate limits — native nodes do not.
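One way to script the ramp, assuming the `{"kind":"recordIds","recordIds":[...]}` payload shape that `batch create` accepts (the ID list and the slicing loop here are illustrative):

```shell
# Hypothetical record IDs pulled from your segment.
record_ids=(rec-001 rec-002 rec-003 rec-004 rec-005)

# Ramp tiers: 1 record, then 50, then 500 (a tier is capped at the list length).
for size in 1 50 500; do
  slice=("${record_ids[@]:0:size}")
  json=$(printf '"%s",' "${slice[@]}")   # "rec-001","rec-002",...
  payload="{\"kind\":\"recordIds\",\"recordIds\":[${json%,}]}"
  echo "tier ${size} payload: $payload"
  # batch create --data "$payload"   # then check failedRunsCount before the next tier
done
```

Pause between tiers and only continue when the previous tier came back clean.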
Always inspect failedRunsCount after a batch
A batch reporting `status: "success"` can still have individual run failures. Check `failedRunsCount` in `batch get`, download failures with `run download --statuses error`, and re-queue with `batch create --data '{"kind":"recordIds","recordIds":[...]}'`.
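A sketch of that gate: the `failedRunsCount` field name comes from the docs above, but the surrounding JSON shape and the `sed` extraction are assumptions, with the `batch get` output hard-coded so the logic is visible:

```shell
# Stand-in for: batch get <batch-uuid>
batch_json='{"status":"success","failedRunsCount":3}'

# Extract the failure count without jq.
failed=$(echo "$batch_json" | sed -n 's/.*"failedRunsCount":\([0-9]*\).*/\1/p')

if [ "${failed:-0}" -gt 0 ]; then
  echo "batch done, but $failed runs failed: download and re-queue"
  # run download --statuses error
  # batch create --data '{"kind":"recordIds","recordIds":[...]}'
else
  echo "all runs succeeded"
fi
```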
Keep a .cargo-context.md for repeat tasks
For frequently used workflows or segments, note their UUIDs in a `.cargo-context.md` file in your project root. Your agent will use them directly instead of re-running discovery commands every session.
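For example, a starter file like the following (every UUID is a placeholder; substitute the ones from your own `list` output):

```shell
# Write a minimal .cargo-context.md; the entries are illustrative placeholders.
cat > .cargo-context.md <<'EOF'
# Cargo context for this project
- Companies model UUID: <companies-model-uuid>
- Lead scoring workflow UUID: <lead-scorer-workflow-uuid>
- "US enterprise" segment UUID: <us-enterprise-segment-uuid>
EOF
head -n 1 .cargo-context.md
```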
For quick runs (single records or batches under 100 records), `--wait-until-finished` is the simplest pattern. For large batches (1,000+ records), poll manually with `batch get` so your agent can report incremental progress without hitting a timeout.
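A manual polling loop might look like the following; the real `status` would come from `batch get`, but it is stubbed here so the control flow runs standalone, and the terminal statuses and interval are assumptions:

```shell
polls=0
status=running
while [ "$status" != "success" ] && [ "$status" != "error" ]; do
  polls=$((polls + 1))
  # Real call would read the batch status from: batch get <batch-uuid>
  # Stubbed here: pretend the batch completes on the third poll.
  if [ "$polls" -ge 3 ]; then status=success; fi
  echo "poll $polls: $status"
  sleep 0.1   # use a few seconds between polls for real batches
done
echo "finished after $polls polls with status: $status"
```

Reporting each poll's status back to the user is what makes this pattern preferable to a silent `--wait-until-finished` on long runs.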