How to build an AI Agent with Gemini 2.5 Pro

In this guide, you'll build a production-ready AI agent using Gemini 2.5 Pro via the Langbase SDK. Gemini 2.5 Pro excels at long context windows, multimodal understanding, and advanced reasoning tasks.

Let's build your first AI agent with Gemini 2.5 Pro!

Step #0: Create a Langbase account

Before you begin, create a free account on Langbase.

Step #1: Create a project directory

Create a new directory for your AI agent project:

mkdir gemini-agent && cd gemini-agent

Step #2: Initialize the project

Initialize a Node.js project and create a TypeScript file:

npm init -y && touch agent.ts

Step #3: Install dependencies

Install the Langbase SDK to interact with Gemini 2.5 Pro, along with dotenv to load environment variables:

npm install langbase dotenv

Step #4: Add your Langbase API key

Every request to Langbase requires an API key. Generate your API key by following these steps, then create a .env file in your project root. (You'll add your Google AI API key, needed for Gemini 2.5 Pro, in the next step.)

# Replace with your actual Langbase API key
LANGBASE_API_KEY=your_api_key_here
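
To confirm the key loads before wiring up the agent, you can run a quick sanity check (a minimal sketch; the env-check.ts file name is just an example):

// env-check.ts: fail fast if the Langbase API key is missing
import 'dotenv/config';

if (!process.env.LANGBASE_API_KEY) {
  throw new Error('LANGBASE_API_KEY is not set. Add it to your .env file.');
}

console.log('LANGBASE_API_KEY loaded.');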

Step #5: Add your Google AI API key

Navigate to the LLM API keys page and add your Google AI API key. This allows Langbase to call Gemini 2.5 Pro on your behalf.

Step #6: Create the agent

Add the following code to your agent.ts file:

import 'dotenv/config';
import { Langbase } from 'langbase';

// Initialize Langbase SDK
const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function createGeminiAgent() {
  // Create an AI agent with Gemini 2.5 Pro
  const geminiAgent = await langbase.pipes.create({
    name: 'gemini-agent',
    messages: [
      {
        role: 'system',
        content:
          'You are a helpful AI assistant powered by Gemini 2.5 Pro. You excel at coding and technical problem-solving.',
      },
    ],
    model: 'google:gemini-2.5-pro',
  });
}

Step #7: Run the agent

Add the following code to your agent.ts file to run your agent:

async function runGeminiAgent() {
  const pipeAgents = await langbase.pipes.list();
  const pipeAgentExists = pipeAgents.find(pipe => pipe.name === 'gemini-agent');

  // Create the agent pipe on the first run
  if (!pipeAgentExists) {
    await createGeminiAgent();
  }

  const response = await langbase.pipes.run({
    name: 'gemini-agent',
    messages: [
      {
        role: 'user',
        content: 'Explain how to implement a binary search algorithm in Python.',
      },
    ],
    model: 'google:gemini-2.5-pro',
  });

  console.log(response.completion);
}

runGeminiAgent();

Then execute the file:
npx tsx agent.ts

You should see Gemini's response explaining the binary search algorithm!
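
If you'd rather stream the response token by token, pipes.run also accepts stream: true. Here's a minimal sketch using the getRunner helper exported by the langbase package (verify the helper against the SDK version you install):

import { getRunner } from 'langbase';

// Stream the agent's response as it is generated.
// Assumes `langbase` is the client initialized earlier in agent.ts.
async function streamGeminiAgent() {
  const { stream } = await langbase.pipes.run({
    stream: true,
    name: 'gemini-agent',
    messages: [
      { role: 'user', content: 'Summarize binary search in two sentences.' },
    ],
    model: 'google:gemini-2.5-pro',
  });

  const runner = getRunner(stream);
  runner.on('content', chunk => process.stdout.write(chunk));
}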

Step #8: Add memory

Give your agent access to documents for context-aware responses:

// Create a memory and upload a document to it
const memory = await langbase.memories.create({
  name: 'support-memory-agent',
  description: 'Support chatbot memory agent',
});

const content =
  'Langbase is a platform for building AI agents. It provides a set of tools and APIs to build AI agents.';

const documentBlob = new Blob([content], { type: 'text/plain' });
const document = new File([documentBlob], 'langbase-faqs.txt', { type: 'text/plain' });

const response = await langbase.memories.documents.upload({
  document,
  memoryName: 'support-memory-agent',
  contentType: 'text/plain',
  documentName: 'langbase-faqs.txt',
});

To run the Pipe agent with memory, just add the memory name to the run parameters:

// Query with memory
const response = await langbase.pipes.run({
  name: 'gemini-agent',
  messages: [
    { role: 'user', content: 'What is Langbase?' },
  ],
  memory: [{ name: 'support-memory-agent' }],
  model: 'google:gemini-2.5-pro',
});

console.log(response.completion);

Step #9: Add tool calling

Enable your agent to call external functions:

// Additional imports for tool calling (add these at the top of agent.ts)
import { getToolsFromRun, Message, Tools } from 'langbase';

// Mock implementation of the weather function
async function getCurrentWeather(args: { location: string }) {
  return 'Sunny, 75°F';
}

// Weather tool schema
const weatherToolSchema: Tools = {
  type: 'function',
  function: {
    name: 'getCurrentWeather',
    description: 'Get the current weather of a given location',
    parameters: {
      type: 'object',
      required: ['location'],
      properties: {
        unit: {
          enum: ['celsius', 'fahrenheit'],
          type: 'string',
        },
        location: {
          type: 'string',
          description: 'The city and state, e.g. San Francisco, CA',
        },
      },
    },
  },
};

// Object to hold all tools
const tools = { getCurrentWeather };

// Ask a question that should trigger the weather tool
const response = await langbase.pipes.run({
  stream: false,
  name: 'gemini-agent',
  messages: [
    {
      role: 'user',
      content: "What's the weather in SF?",
    },
  ],
  tools: [weatherToolSchema],
  model: 'google:gemini-2.5-pro',
});

const toolCalls = await getToolsFromRun(response);
const hasToolCalls = toolCalls.length > 0;
const threadId = response.threadId;

if (hasToolCalls) {
  // Process each tool call
  const toolResultPromises = toolCalls.map(async (toolCall): Promise<Message> => {
    const toolName = toolCall.function.name;
    const toolParameters = JSON.parse(toolCall.function.arguments);
    const toolFunction = tools[toolName as keyof typeof tools];

    // Call the tool function with the parameters
    const toolResponse = await toolFunction(toolParameters);

    // Return the tool result
    return {
      role: 'tool',
      name: toolName,
      content: toolResponse,
      tool_call_id: toolCall.id,
    };
  });

  // Wait for all tool calls to complete
  const toolResults = await Promise.all(toolResultPromises);

  // Call the agent pipe again with the tool results
  const finalResponse = await langbase.pipes.run({
    threadId,
    stream: false,
    name: 'gemini-agent',
    messages: toolResults,
    tools: [weatherToolSchema],
    model: 'google:gemini-2.5-pro',
  });

  console.log(JSON.stringify(finalResponse, null, 2));
} else {
  console.log('Direct response (no tools called):');
  console.log(JSON.stringify(response, null, 2));
}
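
This is the standard two-phase tool-calling loop: the first pipes.run returns tool calls instead of text, your code executes them locally, and a second pipes.run with the tool results (reusing the same threadId to preserve conversation context) lets the model compose its final answer.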

Step #10: Go to production

Your AI agent is production-ready from the start with Langbase:

  • Serverless Infrastructure - Scales automatically from 1 to 1M requests
  • Multi-Model Support - Switch between 600+ models without code changes (see the sketch after this list)
  • Real-time Analytics - Track performance, usage, and costs
  • Built-in Tracing - Debug and monitor every request
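
For example, pointing the same agent at a different provider is a one-line change to the model string. A minimal sketch (the model identifier below is illustrative; check the Langbase model catalog for exact names):

// Same pipe, different model: only the `model` string changes
const altResponse = await langbase.pipes.run({
  name: 'gemini-agent',
  messages: [{ role: 'user', content: 'What is Langbase?' }],
  model: 'openai:gpt-4o',
});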

Explore more advanced features in the Langbase documentation.

Build powerful AI agents with Gemini 2.5 Pro and Langbase!