How to set up Google Gemini

Authentication

Google Gemini uses API key authentication:
  1. Go to Google AI Studio
  2. Sign in with your Google account
  3. Create an API key
  4. Copy the API key and paste it into Cargo when connecting
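
To confirm the key works before pasting it into Cargo, you can call the Gemini API directly. Below is a minimal sketch using the google-generativeai Python SDK, assuming the key is exported as the GEMINI_API_KEY environment variable:

    import os
    import google.generativeai as genai

    # Assumes the key created in Google AI Studio is exported as GEMINI_API_KEY.
    genai.configure(api_key=os.environ["GEMINI_API_KEY"])

    # A one-off request: if this prints a reply, the key is valid and ready
    # to be pasted into Cargo.
    model = genai.GenerativeModel("gemini-1.5-flash")
    response = model.generate_content("Reply with the single word: ok")
    print(response.text)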

Google Gemini actions

Instruct

Generate text responses using Google’s Gemini models. Required fields:
  • Model: Select the Gemini model
  • Prompt: Your instruction or question
Available models:
Model                  Description                                  Context Window
gemini-3-pro-preview   Latest generation with advanced reasoning    2M tokens
gemini-2.5-flash       Fast, cost-efficient for diverse tasks       1M tokens
gemini-2.5-pro         Most capable for complex reasoning           2M tokens
gemini-2.0-flash       Fast multimodal from the 2.0 generation      1M tokens
gemini-1.5-flash       Efficient, optimized for speed               1M tokens
gemini-1.5-pro         Advanced with large context                  2M tokens
Advanced settings:
  • System prompt: Set context for the model’s behavior
  • Maximum output tokens: Limit response length
  • Temperature: Control randomness (0-2, default 1)
  • With Google Search: Enable real-time information retrieval
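
These settings correspond to Gemini's standard generation parameters. As a rough sketch of an equivalent direct API call with the google-generativeai Python SDK (the model, prompt, and values are placeholders, not Cargo defaults; Google Search grounding is enabled through the API's tool configuration and is omitted here):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio

    # The system prompt maps to system_instruction; the other advanced
    # settings map to fields on GenerationConfig.
    model = genai.GenerativeModel(
        "gemini-2.5-flash",
        system_instruction="You are a concise assistant for a sales team.",
    )
    response = model.generate_content(
        "Summarize this lead's last three interactions in two sentences.",
        generation_config=genai.GenerationConfig(
            temperature=0.3,        # lower = more deterministic, factual output
            max_output_tokens=512,  # caps response length
        ),
    )
    print(response.text)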
Output options:
  • Text: Plain text response
  • JSON object: Unstructured JSON output
  • JSON schema: Structured output matching your schema
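
For the JSON schema option, the SDK's structured-output mode is the closest direct equivalent. A sketch with a hypothetical lead-scoring schema (the LeadScore shape is an illustration, not something Cargo defines):

    import typing
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    # Hypothetical response shape, for illustration only.
    class LeadScore(typing.TypedDict):
        name: str
        score: int
        reasoning: str

    model = genai.GenerativeModel("gemini-2.5-pro")
    response = model.generate_content(
        "Score each of these leads from 1 to 10: Ada (VP Sales, 500-person "
        "SaaS company), Bob (student, no budget).",
        generation_config=genai.GenerationConfig(
            response_mime_type="application/json",
            response_schema=list[LeadScore],  # output must match this schema
        ),
    )
    print(response.text)  # JSON array of LeadScore objects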

Use cases

  • Lead qualification: Analyze lead data and provide scoring insights
  • Email generation: Create personalized outreach at scale
  • Data parsing: Extract structured data from unstructured sources
  • Translation: Translate content for international campaigns
  • Summarization: Condense long documents or conversations

Credits and pricing

Costs vary by model (per 1,000 tokens):
Model                  Standard       With Google Search
gemini-3-pro-preview   0.2 credits    0.2 credits + 1 fixed credit
gemini-2.5-pro         0.15 credits   0.15 credits + 1 fixed credit
gemini-2.5-flash       0.03 credits   0.03 credits + 1 fixed credit
gemini-2.0-flash       0.01 credits   0.01 credits + 1 fixed credit
gemini-1.5-pro         0.1 credits    0.1 credits + 1 fixed credit
gemini-1.5-flash       0.01 credits   0.01 credits + 1 fixed credit
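For example, an Instruct run on gemini-2.5-flash that consumes roughly 4,000 tokens costs about 4 × 0.03 = 0.12 credits, or 1.12 credits with Google Search enabled (0.12 + 1 fixed credit).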

Rate limits

Model                  Requests per minute
gemini-3-pro-preview   1,000
gemini-2.5-flash       15,000
gemini-2.5-pro         1,000
gemini-2.0-flash       15,000
gemini-1.5-flash       15,000
gemini-1.5-pro         2,000

Best practices

  • Use Flash models for simple, high-volume tasks
  • Use Pro models for complex reasoning and analysis
  • Enable Google Search for questions requiring current information
  • Define JSON schemas for consistent, structured outputs
  • Adjust temperature based on task (lower for factual, higher for creative)
  • Leverage the large context windows for processing lengthy documents