How to set up Google Gemini
Authentication
Google Gemini uses API key authentication:
- Go to Google AI Studio
- Sign in with your Google account
- Create an API key
- Copy the API key and paste it into Cargo when connecting (a quick way to verify the key outside Cargo is sketched below)
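Before pasting the key into Cargo, you may want to confirm it works. A minimal sketch using Google's `google-genai` Python SDK (the SDK and the test prompt are assumptions here; Cargo itself only needs the raw key):

```python
# pip install google-genai
from google import genai

# Paste the key created in Google AI Studio.
client = genai.Client(api_key="YOUR_API_KEY")

# A trivial request: if this prints text, the key is valid.
response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Reply with the single word: ok",
)
print(response.text)
```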
Google Gemini actions
Instruct
Generate text responses using Google’s Gemini models.

Required fields:
- Model: Select the Gemini model
- Prompt: Your instruction or question
| Model | Description | Context Window |
|---|---|---|
| gemini-3-pro-preview | Latest generation with advanced reasoning | 2M tokens |
| gemini-2.5-flash | Fast, cost-efficient for diverse tasks | 1M tokens |
| gemini-2.5-pro | Most capable for complex reasoning | 2M tokens |
| gemini-2.0-flash | Fast multimodal from 2.0 generation | 1M tokens |
| gemini-1.5-flash | Efficient, optimized for speed | 1M tokens |
| gemini-1.5-pro | Advanced with large context | 2M tokens |
Optional fields:
- System prompt: Set context for the model’s behavior
- Maximum output tokens: Limit response length
- Temperature: Control randomness (0-2, default 1)
- With Google Search: Enable real-time information retrieval

Response format:
- Text: Plain text response
- JSON object: Unstructured JSON output
- JSON schema: Structured output matching your schema
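For reference, the Instruct fields correspond roughly to Gemini's generation config. A sketch using the `google-genai` Python SDK (the field names below are Gemini's, not Cargo's UI labels, and the prompt values are illustrative):

```python
# pip install google-genai
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash",                       # Model
    contents="Summarize our Q3 pipeline changes.",  # Prompt
    config=types.GenerateContentConfig(
        system_instruction="You are a concise revenue analyst.",  # System prompt
        max_output_tokens=512,                                    # Maximum output tokens
        temperature=0.3,                                          # Temperature (0-2)
        tools=[types.Tool(google_search=types.GoogleSearch())],   # With Google Search
    ),
)
print(response.text)  # Text response format
```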
Use cases
- Lead qualification: Analyze lead data and provide scoring insights
- Email generation: Create personalized outreach at scale
- Data parsing: Extract structured data from unstructured sources
- Translation: Translate content for international campaigns
- Summarization: Condense long documents or conversations
Credits and pricing
Costs vary by model (credits per 1,000 tokens); enabling Google Search adds a fixed 1 credit per request. A worked example follows the table.

| Model | Standard | With Google Search |
|---|---|---|
| gemini-3-pro-preview | 0.2 credits | 0.2 + 1 fixed |
| gemini-2.5-pro | 0.15 credits | 0.15 + 1 fixed |
| gemini-2.5-flash | 0.03 credits | 0.03 + 1 fixed |
| gemini-2.0-flash | 0.01 credits | 0.01 + 1 fixed |
| gemini-1.5-pro | 0.1 credits | 0.1 + 1 fixed |
| gemini-1.5-flash | 0.01 credits | 0.01 + 1 fixed |
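As a worked example of how the table is applied, here is a hypothetical credit estimator. It assumes the per-1,000-token rate applies to the total tokens billed for the request and that the Google Search surcharge is a flat 1 credit per call; confirm both against your workspace's billing before relying on it:

```python
# Hypothetical helper for estimating Cargo credits per Instruct call.
RATE_PER_1K = {
    "gemini-3-pro-preview": 0.2,
    "gemini-2.5-pro": 0.15,
    "gemini-2.5-flash": 0.03,
    "gemini-2.0-flash": 0.01,
    "gemini-1.5-pro": 0.1,
    "gemini-1.5-flash": 0.01,
}
SEARCH_SURCHARGE = 1.0  # fixed credits when Google Search is enabled

def estimate_credits(model: str, total_tokens: int, with_search: bool = False) -> float:
    credits = RATE_PER_1K[model] * (total_tokens / 1000)
    if with_search:
        credits += SEARCH_SURCHARGE
    return credits

# A 2,000-token gemini-2.5-flash call with Google Search: 2 x 0.03 + 1 = 1.06 credits
print(estimate_credits("gemini-2.5-flash", 2_000, with_search=True))
```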
Rate limits
| Model | Requests per minute |
|---|---|
| gemini-3-pro-preview | 1,000 |
| gemini-2.5-flash | 15,000 |
| gemini-2.5-pro | 1,000 |
| gemini-2.0-flash | 15,000 |
| gemini-1.5-flash | 15,000 |
| gemini-1.5-pro | 2,000 |
Best practices
- Use Flash models for simple, high-volume tasks
- Use Pro models for complex reasoning and analysis
- Enable Google Search for questions requiring current information
- Define JSON schemas for consistent, structured outputs (see the sketch after this list)
- Adjust temperature based on task (lower for factual, higher for creative)
- Leverage the large context windows for processing lengthy documents
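The structured-output recommendation is easiest to see with a concrete schema. A sketch of the JSON schema response format using the `google-genai` SDK and a Pydantic model (the `LeadScore` fields are illustrative, not part of Cargo):

```python
# pip install google-genai pydantic
from google import genai
from google.genai import types
from pydantic import BaseModel

class LeadScore(BaseModel):
    company: str
    score: int        # e.g. 0-100
    rationale: str

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Score this lead: ACME Corp, 250 employees, booked a demo last week.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=LeadScore,   # JSON schema response format
    ),
)
print(response.parsed)  # parsed into a LeadScore instance
```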

