The activities data table associated with your workspace compiles insights such as record enrolments and node success and failure outcomes. These logs give you real-time visibility into how your tools and plays are performing, helping you identify bottlenecks, errors, and optimization opportunities.

Notes

  1. You can run these queries from your **data warehouse’s query editor**, a Cargo data model, or the **Sandbox** query editor in the System of records section of your workspace’s settings view
  2. Adapt your query’s syntax for your database (BigQuery or Snowflake)
  3. If you’re using your own data warehouse as the system of record, use the dataset name set in your System of records configuration. If you’re using Cargo’s Snowflake under the hood, reach out to the Cargo team for these details.

Understanding the activities table

The activities data is stored in the compute__activities table with the following key structure:
  • _id: Unique identifier for each activity
  • _time: Timestamp when the activity occurred
  • kind: Type of activity (workflowNodeExecuted, workflowEntered, workflowLeft, segmentEntered, segmentLeft)
  • model_uuid: Reference to the data model (if applicable)
  • workflow_uuid: Reference to the workflow
  • play_uuid: Reference to the play (if applicable)
  • tool_uuid: Reference to the tool (if applicable)
  • segment_uuid: Reference to the segment (for segment activities)
  • context: JSON object containing detailed activity information
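For a quick first look at the table, you can count activities by kind. The following sketch uses BigQuery syntax with the [DATASET] placeholder; adapt the table reference for Snowflake as shown in the examples below.
SELECT
  kind,
  COUNT(*) AS activity_count
FROM [DATASET].compute__activities
GROUP BY kind
ORDER BY activity_count DESC;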

Example: Track workflow node executions

Here’s a simple query to track all node executions for a specific workflow, including their status and any error messages.

BigQuery syntax:
SELECT 
  _time,
  json_value(context, '$.nodeSlug') as node_name,
  json_value(context, '$.nodeKind') as node_type,
  json_value(context, '$.nodeStatus') as status,
  json_value(context, '$.nodeErrorMessage') as error_message
FROM [DATASET].compute__activities
WHERE kind = 'workflowNodeExecuted' 
  AND json_value(context, '$.workflowUuid') = 'your-workflow-uuid-here'
ORDER BY _time DESC;
Snowflake syntax:
SELECT 
  _time,
  -- ::string casts strip the JSON quoting that raw VARIANT values carry
  context:nodeSlug::string as node_name,
  context:nodeKind::string as node_type,
  context:nodeStatus::string as status,
  context:nodeErrorMessage::string as error_message
FROM [DATABASE]._compute.activities
WHERE kind = 'workflowNodeExecuted'
  AND context:workflowUuid::string = 'your-workflow-uuid-here'
ORDER BY _time DESC;
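
Beyond listing individual executions, you can aggregate errors to spot consistently failing nodes. The sketch below uses BigQuery syntax and assumes nodeStatus carries a value such as 'failed' for unsuccessful executions; verify the actual status values in your own data first.
-- BigQuery syntax; 'failed' is an assumed status value, check your data
SELECT
  json_value(context, '$.nodeSlug') AS node_name,
  COUNT(*) AS failure_count
FROM [DATASET].compute__activities
WHERE kind = 'workflowNodeExecuted'
  AND json_value(context, '$.nodeStatus') = 'failed'
GROUP BY node_name
ORDER BY failure_count DESC;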

Key differences between BigQuery and Snowflake

  • JSON field access: BigQuery uses json_value(context, '$.fieldName') while Snowflake uses context:fieldName (typically cast with ::string for clean text output)
  • Table naming: Depending on the scope selected for your system of record, you’ll need to adapt the table references as follows:
    • BigQuery project-level scope: Use [PROJECT_ID].[DATASET].compute__activities to fully qualify the table
    • BigQuery dataset-level scope: Use [DATASET].compute__activities when working within a specific dataset
    • Snowflake database-level scope: Use [DATABASE]._compute.activities to reference tables at the database level
    • Snowflake schema-level scope: Use [DATABASE].[SCHEMA]._compute__activities for schema-specific access
Note: Replace the bracketed values (e.g. [PROJECT_ID], [DATASET], etc.) with your actual configuration values.
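For example, with a hypothetical BigQuery project named acme-prod and a dataset named cargo_sor, a fully qualified project-level reference would look like this:
-- Hypothetical values: project acme-prod, dataset cargo_sor
SELECT COUNT(*) AS total_activities
FROM `acme-prod.cargo_sor.compute__activities`;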