Call tools with Pipe agent locally using OpenAI
In this guide, you’ll learn what tool calls are and how to use them in an AI agent pipe powered by GPT-4o-mini to build an agent that can call functions from your code.
Tool calling lets an LLM (like GPT) use functions from your codebase to handle tasks it can't manage on its own. Instead of just producing text, the model can generate a tool call (a function name with parameters) that triggers a specific function in your code.
With tool calling, you can have the model do things like fetch real-time info, run complex calculations, retrieve data from a database, or interact with other systems.
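For example, when the model decides it needs weather data, it emits a tool call instead of plain text. With OpenAI-style chat completions, that call looks roughly like this (illustrative values; getCurrentWeather is the tool we will build below):
{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "getCurrentWeather",
    "arguments": "{\"location\": \"San Francisco, CA\", \"unit\": \"fahrenheit\"}"
  }
}
Your code runs the named function with those arguments and sends the result back to the model, which uses it to produce the final answer.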
Key Benefits
- More functionality: It extends the model’s abilities beyond text generation, allowing it to perform practical tasks.
- Improved accuracy: For tasks like calculations or fetching real-time data, delegating to your code gives accurate and reliable results.
- Broader use cases: It makes the model more useful in real-world applications, easily connecting it with other systems.
Let's build a tool locally using BaseAI that will return the current weather for a given location.
Step #1: Create a weather tool
Create a directory on your local machine and navigate into it. Then run the following commands in the terminal:
# create a project directory
mkdir my-ai-project
# initialize npm
npm init -y
# install dotenv package
npm install dotenv
These commands create a package.json file in your project directory with default values and install the dotenv package, which reads environment variables from the .env file.
Let’s initialize BaseAI in our project. To do this, run the following command in your project terminal:
npx baseai@latest init
Now let’s create a tool. Run the following command in your project terminal:
npx baseai@latest tool
The CLI will ask you to provide the name and description of the tool. Let's call it getCurrentWeather and provide a description like Get the current weather for a given location.
Your tool will be created at /baseai/tools/get-current-weather.ts.
Step #2: Update the Tool
Navigate to your project directory and open the tool you created. You can find it at /baseai/tools/get-current-weather.ts. This is what it looks like:
import { ToolI } from '@baseai/core';
export async function getCurrentWeather() {
// Your tool logic here
}
const getCurrentWeatherTool = (): ToolI => ({
run: getCurrentWeather,
type: 'function' as const,
function: {
name: 'getCurrentWeather',
description: 'Get the current weather for a given location',
parameters: {},
},
});
export default getCurrentWeatherTool;
Let's add parameters to the getCurrentWeather function; the LLM will supply values for these parameters when it calls the tool. We'll also add a static return value, which you can later replace with real logic to fetch the weather (see the sketch after the code below).
import { ToolI } from '@baseai/core';
export async function getCurrentWeather(location: string, unit: string) {
return `Weather in ${location} is 72 degrees ${unit === 'celsius' ? 'Celsius' : 'Fahrenheit'}`;
}
const getCurrentWeatherTool = (): ToolI => ({
run: getCurrentWeather,
type: 'function' as const,
function: {
name: 'getCurrentWeather',
description: 'Get the current weather for a given location',
parameters: {
type: 'object',
properties: {
location: {
type: 'string',
description: 'The city and state, e.g. San Francisco, CA',
},
unit: {
type: 'string',
enum: ['celsius', 'fahrenheit'],
},
},
required: ['location'],
},
},
});
export default getCurrentWeatherTool;
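In a real tool, you would replace the static string with a live lookup. Below is a minimal sketch using Node's global fetch (Node 18+); the api.example-weather.com endpoint and its response shape are hypothetical placeholders, not a real service:
export async function getCurrentWeather(location: string, unit: string = 'fahrenheit') {
  // Hypothetical weather API; swap in a real provider and its response shape.
  const url = `https://api.example-weather.com/current?location=${encodeURIComponent(location)}&unit=${unit}`;
  const res = await fetch(url);
  if (!res.ok) {
    // Returning an error message lets the LLM explain the failure to the user.
    return `Could not fetch the weather for ${location}.`;
  }
  const data = (await res.json()) as { temperature: number };
  return `Weather in ${location} is ${data.temperature} degrees ${unit === 'celsius' ? 'Celsius' : 'Fahrenheit'}`;
}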
Step #3: Integrate the Tool in an AI Agent Pipe
To integrate the tool in an AI agent pipe, let’s create a basic summarizer pipe using this command:
npx baseai@latest pipe
It will ask you for the name, description, and other details of the pipe step-by-step.
Once you are done, your pipe will be created at /baseai/pipes/summarizer.ts.
Let's update the pipe. We will do the following:
- Add a system prompt to the pipe
- Rename the pipe function from pipeName to pipeSummarizer
- Import the getCurrentWeather tool we created in step 2 inside the pipe
- Call getCurrentWeatherTool() inside the tools array
This is what it will look like after adding these details:
import { PipeI } from '@baseai/core';
import getCurrentWeatherTool from '../tools/get-current-weather';
const pipeSummarizer = (): PipeI => ({
apiKey: process.env.LANGBASE_API_KEY!, // Replace with your API key https://langbase.com/docs/api-reference/api-keys
name: 'summarizer',
description: 'A pipe that summarizes content and makes it less wordy',
status: 'public',
model: 'openai:gpt-4o-mini',
stream: true,
json: false,
store: true,
moderate: true,
top_p: 1,
max_tokens: 1000,
temperature: 0.7,
presence_penalty: 1,
frequency_penalty: 1,
stop: [],
tool_choice: 'auto',
parallel_tool_calls: false,
messages: [
{ role: 'system', content: `You are a content summarizer. You will summarize content, without losing context, into a less wordy, to-the-point version.` },
],
variables: [],
memory: [],
tools: [getCurrentWeatherTool()]
});
export default pipeSummarizer;
To integrate the summarizer pipe with your Node.js project, create an index.ts file in your project by running this command:
touch index.ts
In this index.ts file, import the summarizer pipe you created (the tool is already wired into it). We will use the Pipe primitive from @baseai/core to run the pipe. Let's set the user message to the following, and instead of streaming, generate the complete text from the LLM:
What's the weather in San Francisco?
This is what the index.ts file looks like:
import 'dotenv/config';
import {Pipe} from '@baseai/core';
import pipeSummarizer from './baseai/pipes/summarizer';
const pipe = new Pipe(pipeSummarizer());
const userMsg = `What's the weather in San Francisco?`;
async function main() {
const response = await pipe.run({
messages: [{role: 'user', content: userMsg}],
stream: false,
});
console.log(response.completion);
}
main();
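If you'd rather stream the response token by token, pipe.run also accepts stream: true. Here is a sketch, assuming @baseai/core exports the getRunner helper as shown in the BaseAI docs:
import 'dotenv/config';
import { Pipe, getRunner } from '@baseai/core';
import pipeSummarizer from './baseai/pipes/summarizer';

const pipe = new Pipe(pipeSummarizer());

async function main() {
  const { stream } = await pipe.run({
    messages: [{ role: 'user', content: `What's the weather in San Francisco?` }],
    stream: true,
  });
  // getRunner turns the raw stream into an event emitter of content chunks.
  const runner = getRunner(stream);
  runner.on('content', content => process.stdout.write(content));
  runner.on('end', () => process.stdout.write('\n'));
}

main();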
Since we are using OpenAI's GPT-4o-mini model, your OpenAI API key is required. Create a .env file in the root directory and add the key:
OPENAI_API_KEY="<REPLACE-OPENAI-KEY>" # Add your OpenAI API key in .env file
Step #4: Start the BaseAI server
To run the pipe locally, you need to start the BaseAI server. Run the following command in your terminal:
npx baseai@latest dev
Step #5: Run the Pipe
Let’s call our pipe now. To do this, run the following command in your terminal:
npx tsx index.ts
It will prompt the LLM to answer your weather query:
The current weather in San Francisco is 72 degrees Fahrenheit.
When we configured the weather tool, we added 72 degrees Fahrenheit as a static return of the getCurrentWeather function, which is why we get this response. This all happens locally on your machine, and since we set stream: false, the complete response is printed in your terminal.
In the same way, you can create more tools and ship AI features. Since a tool is just a function in your code, it can even run another pipe, as shown in the sketch below.
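Here is a minimal sketch of such a tool: its run function runs the summarizer pipe from step 3 and returns the completion. The summarizeText name and its parameter schema are illustrative, not part of BaseAI:
import { Pipe, ToolI } from '@baseai/core';
import pipeSummarizer from '../pipes/summarizer';

// Illustrative tool: summarizes text by running the summarizer pipe.
export async function summarizeText(text: string) {
  const pipe = new Pipe(pipeSummarizer());
  const response = await pipe.run({
    messages: [{ role: 'user', content: text }],
    stream: false,
  });
  return response.completion;
}

const summarizeTextTool = (): ToolI => ({
  run: summarizeText,
  type: 'function' as const,
  function: {
    name: 'summarizeText',
    description: 'Summarize a given piece of text',
    parameters: {
      type: 'object',
      properties: {
        text: {
          type: 'string',
          description: 'The text to summarize',
        },
      },
      required: ['text'],
    },
  },
});

export default summarizeTextTool;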