OpenAI provides advanced language models that power intelligent automation. Cargo’s native integration with OpenAI allows you to use GPT models directly in your workflows for text generation, classification, summarization, and more.

How to set up OpenAI

You can use OpenAI in two ways:
  1. Cargo credits – Use OpenAI through Cargo’s managed integration (no API key required)
  2. Your own API key – Connect your OpenAI account for direct access

Using Cargo credits

Select OpenAI from the node catalog and start using it immediately. Costs are deducted from your Cargo credits based on token usage.

Using your own API key

To connect your OpenAI account, provide the following:
  • API Key – Your OpenAI API key
You can find your API key in the OpenAI platform under API keys.

OpenAI actions

Instruct

Send a prompt to an OpenAI model and receive a text response.

Use cases:
  • Content generation – Generate personalized emails, messages, or content
  • Classification – Categorize text into predefined categories
  • Summarization – Summarize long text into concise summaries
  • Data extraction – Extract structured data from unstructured text
  • Translation – Translate text between languages
  • Analysis – Analyze sentiment, intent, or other text properties
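As an illustration of the classification use case, an Instruct prompt might be assembled like this (the categories and template below are hypothetical, not part of Cargo's API):

```python
# Hypothetical prompt template for a classification task.
CATEGORIES = ["billing", "technical support", "sales", "other"]

def build_classification_prompt(text: str) -> str:
    """Build an Instruct prompt that asks the model to pick one category."""
    category_list = ", ".join(CATEGORIES)
    return (
        f"Classify the following message into exactly one of these "
        f"categories: {category_list}.\n"
        f"Reply with the category name only.\n\n"
        f"Message: {text}"
    )

prompt = build_classification_prompt("My invoice shows the wrong amount.")
print(prompt)
```

Pair a template like this with a low temperature and a constrained response format to keep outputs consistent.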
Configuration:
  • Model – Select the GPT model to use
  • Prompt – The prompt to send to the model
  • System prompt – Optional system instructions
  • Maximum tokens – Limit output length (includes reasoning tokens)
  • Temperature – Control randomness (0 = deterministic, 2 = very random)
  • With web search – Enable web search capabilities for the agent
  • Response format – Text, JSON object, or JSON schema

Available models

  • The most capable model for complex reasoning and coding tasks. Best for: complex multi-step problems, advanced analysis, coding tasks. Credit cost: ~0.2 credits per 1,000 tokens.
  • A faster, more cost-effective version of GPT-5 for well-defined tasks. Best for: balanced cost-performance, everyday tasks. Credit cost: ~0.03 credits per 1,000 tokens.
  • The fastest and cheapest GPT-5 variant for simple tasks. Best for: summarization, classification, simple extractions. Credit cost: ~0.006 credits per 1,000 tokens.
  • High-capability model with an extended context window (1M tokens). Best for: complex tasks requiring large context. Credit cost: ~0.3 credits per 1,000 tokens.
  • Balanced performance model with a large context window. Best for: cost-effective tasks with large inputs. Credit cost: ~0.05 credits per 1,000 tokens.
  • Fastest and cheapest model with large context support. Best for: high-volume, low-latency tasks. Credit cost: ~0.01 credits per 1,000 tokens.
  • General-purpose model supporting text and images. Best for: multimodal tasks, image analysis. Credit cost: ~0.5 credits per 1,000 tokens.
  • Compact multimodal model for efficient image and text tasks. Best for: cost-effective multimodal tasks. Credit cost: ~0.02 credits per 1,000 tokens.
Recommended model: GPT-5 Mini offers the best balance of cost and performance for most use cases.
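The credit costs above are quoted per 1,000 tokens, so a run's cost scales linearly with token usage. A quick sketch of the arithmetic (rates copied from the list above):

```python
def estimate_credits(tokens: int, credits_per_1k: float) -> float:
    """Estimate the credit cost of a run given a per-1,000-token rate."""
    return tokens / 1000 * credits_per_1k

# A 4,000-token run at the recommended GPT-5 Mini rate (~0.03 per 1k):
print(estimate_credits(4000, 0.03))  # → 0.12
```

Remember that the "Maximum tokens" setting includes reasoning tokens, so actual usage can exceed the visible output length.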

Response formats

Control how the model returns its response:
  • Text – Free-form text response (default)
  • JSON object – Response formatted as valid JSON
  • JSON schema – Response validated against a specific JSON schema
Use JSON schema when you need structured output with specific fields. Define your schema to ensure consistent, parseable responses.

Example JSON schema

{
  "type": "object",
  "properties": {
    "sentiment": { "type": "string", "enum": ["positive", "negative", "neutral"] },
    "confidence": { "type": "number" },
    "summary": { "type": "string" }
  },
  "additionalProperties": false,
  "required": ["sentiment", "confidence", "summary"]
}
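Once the model returns a JSON response, you can parse it and sanity-check it against the fields the schema requires. A minimal sketch using only the Python standard library (the response string below is fabricated for illustration):

```python
import json

ALLOWED_SENTIMENTS = {"positive", "negative", "neutral"}
REQUIRED_FIELDS = {"sentiment", "confidence", "summary"}

def parse_sentiment_response(raw: str) -> dict:
    """Parse a JSON response and check it matches the schema above."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    if data["sentiment"] not in ALLOWED_SENTIMENTS:
        raise ValueError(f"unexpected sentiment: {data['sentiment']}")
    return data

# Example response (made up for this sketch):
raw = '{"sentiment": "positive", "confidence": 0.92, "summary": "Happy customer."}'
result = parse_sentiment_response(raw)
print(result["sentiment"])  # → positive
```

With the JSON schema format and `"additionalProperties": false`, the model is constrained to exactly these fields, which makes downstream parsing like this reliable.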

Advanced settings

System prompt

Set instructions that guide the model’s behavior across all prompts. Use system prompts to:
  • Define the model’s role or persona
  • Set output format guidelines
  • Provide context about your use case
  • Establish constraints or rules
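Conceptually, the system prompt travels alongside your prompt as a separate message role; Cargo handles this wiring for you. A sketch of the kind of payload this produces (field names follow the OpenAI chat message convention; the prompts are invented examples):

```python
# Illustrative message payload; Cargo builds this from the
# "System prompt" and "Prompt" fields for you.
def build_messages(system_prompt: str, user_prompt: str) -> list[dict]:
    """Combine an optional system prompt and a user prompt into chat messages."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages

msgs = build_messages(
    "You are a support assistant. Reply in two sentences or fewer.",
    "Summarize this ticket: customer cannot log in after password reset.",
)
print([m["role"] for m in msgs])  # → ['system', 'user']
```

Because the system message is separate from the user prompt, the same role, format, and constraint instructions apply consistently across every prompt the workflow sends.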

Temperature

Control the randomness of outputs:
  • 0 – Most deterministic, consistent outputs
  • 0.5–1 – Balanced creativity and consistency
  • 1–2 – More creative, varied outputs
For tasks requiring consistency (classification, extraction), use low temperature (0-0.3). For creative tasks (content generation), use higher temperature (0.7-1).
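Under the hood, temperature rescales the model's token probabilities before sampling: dividing the logits by the temperature sharpens the distribution when it is below 1 and flattens it when it is above 1. A toy illustration (the logits are made up; this is not Cargo or OpenAI code):

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Softmax over logits scaled by 1/temperature (temperature > 0)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax_with_temperature(logits, 0.2)   # near-deterministic: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: more varied sampling
print(round(low[0], 3), round(high[0], 3))
```

This is why low temperatures give repeatable classifications while high temperatures produce more varied generations.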
With web search

Enable the model to search the web for current information. Useful for:
  • Real-time data lookups
  • Current events or news
  • Company or product research
  • Fact verification