How to build an AI Agent with Claude 4.5 Sonnet

In this guide, you'll build a production-ready AI agent using Claude 4.5 Sonnet via Langbase SDK. Claude 4.5 Sonnet excels at coding tasks, technical problem-solving, and complex reasoning.
Let's build your first AI agent with Claude 4.5 Sonnet!
Step #0
Before you begin, create a free account on Langbase.
Step #1
Create a new directory for your AI agent project:
mkdir claude-agent && cd claude-agent
Step #2
Initialize a Node.js project and create a TypeScript file:
npm init -y && touch agent.ts
Step #3
Install the Langbase SDK to interact with Claude 4.5 Sonnet:
npm install langbase dotenv
Step #4
Every request to Langbase requires an API key. Generate your API key by following these steps.
You'll also add your Anthropic API key to your account to use Claude 4.5 Sonnet (see Step #5).
Create a .env file in your project root:
# Replace with your actual Langbase API key
LANGBASE_API_KEY=your_api_key_here
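Since every request depends on this key, it can help to fail fast when it is missing instead of getting an opaque authentication error later. Here is a minimal sketch; `requireEnv` is a hypothetical helper, not part of the Langbase SDK:

```typescript
// Hypothetical helper: read an environment variable or fail immediately
// with a clear error message if it is not set.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('LANGBASE_API_KEY');
```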
Step #5
Navigate to the LLM API keys page and add your Anthropic API key. This allows Langbase to use Claude 4.5 Sonnet on your behalf.
Step #6
Add the following code to your agent.ts file:
import 'dotenv/config';
import { Langbase } from 'langbase';

// Initialize Langbase SDK
const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!
});

async function createClaudeAgent() {
  // Create an AI agent with Claude 4.5 Sonnet
  const claudeAgent = await langbase.pipes.create({
    name: 'claude-agent',
    messages: [
      {
        role: 'system',
        content: 'You are a helpful AI assistant powered by Claude 4.5 Sonnet. You excel at coding and technical problem-solving.'
      }
    ],
    model: 'anthropic:claude-sonnet-4-20250514',
  });
  return claudeAgent;
}
Step #7
Add the following code to your agent.ts file to run your agent:
async function runClaudeAgent() {
  // Create the pipe agent on first run if it doesn't exist yet
  const pipeAgents = await langbase.pipes.list();
  const pipeAgentExists = pipeAgents.find(pipe => pipe.name === 'claude-agent');
  if (!pipeAgentExists) {
    await createClaudeAgent();
  }
  const response = await langbase.pipes.run({
    name: 'claude-agent',
    messages: [
      {
        role: 'user',
        content: 'Explain how to implement a binary search algorithm in Python.'
      }
    ],
    model: 'anthropic:claude-sonnet-4-20250514'
  });
  console.log(response.completion);
}

runClaudeAgent();
Run your agent:
npx tsx agent.ts
You should see Claude's response explaining the binary search algorithm!
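For reference, the algorithm the agent is asked to explain looks like this; a sketch of a standard iterative binary search in TypeScript, not Claude's actual output:

```typescript
// Standard iterative binary search over a sorted array.
// Returns the index of `target`, or -1 if it is not present.
function binarySearch(sorted: number[], target: number): number {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}
```

A good agent response should cover the same ideas: the array must be sorted, the search space halves each iteration, and the runtime is O(log n).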
Step #8
Give your agent access to documents for context-aware responses:
// Upload a document to memory
const memory = await langbase.memories.create({
  name: 'support-memory-agent',
  description: 'Support chatbot memory agent',
});

const content = 'Langbase is a platform for building AI agents. It provides a set of tools and APIs to build AI agents.';
const documentBlob = new Blob([content], { type: 'text/plain' });
const document = new File([documentBlob], 'langbase-faqs.txt', { type: 'text/plain' });

const response = await langbase.memories.documents.upload({
  document,
  memoryName: 'support-memory-agent',
  contentType: 'text/plain',
  documentName: 'langbase-faqs.txt',
});
To run the Pipe agent with memory, just add the memory name to the parameters:
// Query with memory
const response = await langbase.pipes.run({
  name: 'claude-agent',
  messages: [
    { role: 'user', content: 'What is Langbase?' }
  ],
  memory: [{ name: 'support-memory-agent' }],
  model: 'anthropic:claude-sonnet-4-20250514'
});

console.log(response.completion);
Step #9
Enable your agent to call external functions:
// Update the import at the top of agent.ts:
import { getToolsFromRun, type Message, type Tools } from 'langbase';

// Mock implementation of the weather function
async function getCurrentWeather(args: { location: string; unit?: string }) {
  return 'Sunny, 75°F';
}

// Object to hold all tools
const tools = {
  getCurrentWeather
};

// Weather tool schema
const weatherToolSchema: Tools = {
  type: 'function',
  function: {
    name: 'getCurrentWeather',
    description: 'Get the current weather of a given location',
    parameters: {
      type: 'object',
      required: ['location'],
      properties: {
        unit: {
          enum: ['celsius', 'fahrenheit'],
          type: 'string',
        },
        location: {
          type: 'string',
          description: 'The city and state, e.g. San Francisco, CA',
        },
      },
    },
  },
};

async function runClaudeAgentWithTools() {
  const response = await langbase.pipes.run({
    stream: false,
    name: 'claude-agent',
    messages: [
      {
        role: 'user',
        content: "What's the weather in SF?",
      },
    ],
    tools: [weatherToolSchema],
    model: 'anthropic:claude-sonnet-4-20250514'
  });

  const toolCalls = await getToolsFromRun(response);
  const hasToolCalls = toolCalls.length > 0;
  const threadId = response.threadId;

  if (hasToolCalls) {
    // Process each tool call
    const toolResultPromises = toolCalls.map(async (toolCall): Promise<Message> => {
      const toolName = toolCall.function.name;
      const toolParameters = JSON.parse(toolCall.function.arguments);
      const toolFunction = tools[toolName as keyof typeof tools];
      // Call the tool function with the parameters
      const toolResponse = await toolFunction(toolParameters);
      // Return the tool result
      return {
        role: 'tool',
        name: toolName,
        content: toolResponse,
        tool_call_id: toolCall.id,
      };
    });
    // Wait for all tool calls to complete
    const toolResults = await Promise.all(toolResultPromises);
    // Call the agent pipe again with the tool results
    const finalResponse = await langbase.pipes.run({
      threadId,
      stream: false,
      name: 'claude-agent',
      messages: toolResults,
      tools: [weatherToolSchema],
      model: 'anthropic:claude-sonnet-4-20250514'
    });
    console.log(JSON.stringify(finalResponse, null, 2));
  } else {
    console.log('Direct response (no tools called):');
    console.log(JSON.stringify(response, null, 2));
  }
}

runClaudeAgentWithTools();
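The tool-execution step, mapping each requested tool call to a `role: 'tool'` message, can also be factored into a small pure helper that is easy to test in isolation. `executeToolCalls` and the `ToolCall` interface below are hypothetical, sketched to match the shape of the tool calls shown above, not part of the Langbase SDK:

```typescript
// Hypothetical shape of a tool call, matching the fields used above.
interface ToolCall {
  id: string;
  function: { name: string; arguments: string };
}
type ToolFn = (args: any) => Promise<string>;

// Hypothetical helper: run every requested tool and build the
// `role: 'tool'` result messages to send back to the pipe.
async function executeToolCalls(
  toolCalls: ToolCall[],
  tools: Record<string, ToolFn>
) {
  return Promise.all(
    toolCalls.map(async toolCall => {
      const toolName = toolCall.function.name;
      const toolFn = tools[toolName];
      if (!toolFn) throw new Error(`Unknown tool: ${toolName}`);
      // Tool arguments arrive as a JSON string
      const args = JSON.parse(toolCall.function.arguments);
      return {
        role: 'tool' as const,
        name: toolName,
        content: await toolFn(args),
        tool_call_id: toolCall.id,
      };
    })
  );
}
```

Keeping this logic separate from the `pipes.run` calls lets you unit-test the JSON parsing and tool dispatch without touching the network.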
Step #10
Your AI agent is production-ready from the start with Langbase:
- Serverless Infrastructure - Scales automatically from 1 to 1M requests
- Multi-Model Support - Switch between 600+ models without code changes
- Real-time Analytics - Track performance, usage, and costs
- Built-in Tracing - Debug and monitor every request
Explore more advanced features in the Langbase documentation.
Build powerful AI agents with Claude 4.5 Sonnet and Langbase!