Agent Run langbase.agent.run()
Agent, an AI Primitive by Langbase, works as a runtime LLM agent: with the agent.run() function you specify all parameters at runtime and get the response from the agent.
This makes agent.run() ideal for scenarios where you want maximum flexibility — you can dynamically set the input, tools, and configuration without predefining them.
Generate a User/Org API key
You will need to generate an API key to authenticate your requests. For more information, visit the User/Org API key documentation.
API reference
langbase.agent.run()
Request the Agent by running the langbase.agent.run() function.
Function Signature
langbase.agent.run(options);
// with types.
langbase.agent.run(options: AgentRunOptions);
options
- Name
options
- Type
- AgentRunOptions
- Description
AgentRunOptions Object
model: string;
apiKey: string;
input: string | Array<InputMessage>;
instructions?: string;
stream?: boolean;
tools?: Tool[];
tool_choice?: 'auto' | 'required' | ToolChoice;
parallel_tool_calls?: boolean;
mcp_servers?: McpServerSchema[];
top_p?: number;
max_tokens?: number;
temperature?: number;
presence_penalty?: number;
frequency_penalty?: number;
stop?: string[];
customModelParams?: Record<string, any>;
Following are the properties of the options object.
model
- Name
model
- Type
- string
- Required
- Description
LLM model. Combination of model provider and model id, like
openai:gpt-4o-mini
Format:
provider:model_id
You can copy the ID of a model from the list of supported LLM models at Langbase.
apiKey
- Name
apiKey
- Type
- string
- Required
- Description
LLM API key for the selected model.
input
- Name
input
- Type
- string | Array<InputMessage>
- Required
- Description
A string (for simple text queries) or an array of input messages.
When using a string, it will be treated as a single user message. Use it for simple queries. For example:
String Input Example
langbase.agent.run({ input: 'What is an AI Agent?', ... });
When using an array of input messages (InputMessage[]), each input message should include the following properties:
Input Message Object
role: 'user' | 'assistant' | 'system' | 'tool';
content: string | ContentType[] | null;
name?: string;
tool_call_id?: string;
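For multi-turn context, pass an array of messages instead of a single string. A minimal sketch (the conversation content is illustrative, not part of the API):

```typescript
// A multi-turn conversation expressed as InputMessage[].
// Roles follow the schema above; the content strings are made up for illustration.
const input = [
	{ role: 'system' as const, content: 'Answer concisely.' },
	{ role: 'user' as const, content: 'What is an AI Agent?' },
	{ role: 'assistant' as const, content: 'An LLM that can reason and call tools.' },
	{ role: 'user' as const, content: 'Give one example.' },
];

// Then pass it in the run options, e.g.:
// await langbase.agent.run({ model: 'openai:gpt-4o-mini', apiKey, input, stream: false });
```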
instructions
- Name
instructions
- Type
- string
- Description
Used to give high-level instructions to the model about the task it should perform, including tone, goals, and examples of correct responses.
This is equivalent to a system/developer role message at the top of the LLM's context.
stream
- Name
stream
- Type
- boolean
- Description
Whether to stream the response or not. If true, the response will be streamed.
tools
- Name
tools
- Type
- Tool[]
- Description
A list of tools the model may call.
Tools Object
type: 'function';
function: FunctionOptions;
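As a sketch, here is a tools array with a single hypothetical get_weather function; the function name and its JSON-schema parameters are assumptions for illustration, not part of the Langbase API:

```typescript
// One tool entry following the { type: 'function', function: FunctionOptions } shape.
// get_weather is a hypothetical function used only for illustration.
const tools = [
	{
		type: 'function' as const,
		function: {
			name: 'get_weather',
			description: 'Get the current weather for a given city.',
			parameters: {
				type: 'object',
				properties: {
					city: { type: 'string', description: 'Name of the city.' },
				},
				required: ['city'],
			},
		},
	},
];
```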
tool_choice
- Name
tool_choice
- Type
- 'auto' | 'required' | ToolChoice
- Description
Tool usage configuration.
- Name
'auto'
- Type
- string
- Description
Model decides when to use tools.
- Name
'required'
- Type
- string
- Description
Model must use specified tools.
- Name
ToolChoice
- Type
- object
- Description
Forces use of a specific function.
ToolChoice Object
type: 'function';
function: { name: string; };
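For example, to force the model to call one specific function (get_weather here is a hypothetical name that would have to match an entry in your tools array):

```typescript
// Forces the model to call the get_weather function (hypothetical name).
const tool_choice = {
	type: 'function' as const,
	function: { name: 'get_weather' },
};

// Passed alongside the matching tools array, e.g.:
// await langbase.agent.run({ model: 'openai:gpt-4o-mini', apiKey, input, tools, tool_choice });
```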
parallel_tool_calls
- Name
parallel_tool_calls
- Type
- boolean
- Description
Whether to call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.
mcp_servers
- Name
mcp_servers
- Type
- McpServerSchema[]
- Description
An array of SSE-type MCP servers.
McpServerSchema Object
name: string;
type: 'url';
url: string;
authorization_token?: string;
tool_configuration?: {
  allowed_tools?: string[];
  enabled?: boolean;
};
custom_headers?: Record<string, string>;
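A sketch of an mcp_servers entry; the server name, URL, and tool name below are placeholders, not real endpoints:

```typescript
// One SSE MCP server entry matching the McpServerSchema above.
const mcp_servers = [
	{
		name: 'docs-server',                  // placeholder server name
		type: 'url' as const,
		url: 'https://example.com/mcp/sse',   // placeholder SSE endpoint
		tool_configuration: {
			enabled: true,
			allowed_tools: ['search_docs'],   // placeholder tool name
		},
	},
];
```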
temperature
- Name
temperature
- Type
- number
- Description
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.
Default:
0.7
top_p
- Name
top_p
- Type
- number
- Description
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Default:
1
max_tokens
- Name
max_tokens
- Type
- number
- Description
Maximum number of tokens in the response message returned.
Default:
1000
presence_penalty
- Name
presence_penalty
- Type
- number
- Description
Penalizes new tokens based on whether they have already appeared in the text so far, encouraging the model to move on to new topics.
Default:
0
frequency_penalty
- Name
frequency_penalty
- Type
- number
- Description
Penalizes new tokens based on how frequently they have appeared in the text so far, decreasing the likelihood of verbatim repetition.
Default:
0
stop
- Name
stop
- Type
- string[]
- Description
Up to 4 sequences where the API will stop generating further tokens.
customModelParams
- Name
customModelParams
- Type
- Record<string, any>
- Description
Additional parameters to pass to the model as key-value pairs. These parameters are passed on to the model as-is.
CustomModelParams Object
[key: string]: any;
Example:
{ "logprobs": true, "service_tier": "auto" }
Usage example
Install the SDK
npm i langbase
Environment variables
Environment variables
LANGBASE_API_KEY="<USER/ORG-API-KEY>"
LLM_API_KEY="<YOUR-LLM-API-KEY>"
langbase.agent.run()
examples
langbase.agent.run()
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const {output} = await langbase.agent.run({
		model: 'openai:gpt-4o-mini',
		instructions: 'You are a helpful AI Agent.',
		input: 'Who is an AI Engineer?',
		apiKey: process.env.LLM_API_KEY!,
		stream: false,
	});

	console.log('Agent response:', output);
}

main();
Response
Response of langbase.agent.run()
is a Promise<AgentRunResponse | AgentRunResponseStream>
object.
AgentRunResponse Object
output: string | null;
id: string;
object: string;
created: number;
model: string;
choices: ChoiceGenerate[];
usage: Usage;
system_fingerprint: string | null;
rawResponse?: {
headers: Record<string, string>;
};
- Name
output
- Type
- string | null
- Description
The generated text response (also called completion) from the agent. It can be a string or null if the model called a tool.
- Name
id
- Type
- string
- Description
The ID of the raw response.
- Name
object
- Type
- string
- Description
The object type name of the response.
- Name
created
- Type
- number
- Description
The timestamp of the response creation.
- Name
model
- Type
- string
- Description
The model used to generate the response.
- Name
choices
- Type
- ChoiceGenerate[]
- Description
A list of chat completion choices. Can contain more than one element if n is greater than 1.
Choice Object for langbase.agent.run() with stream off
index: number;
message: Message;
logprobs: boolean | null;
finish_reason: string;
- Name
usage
- Type
- Usage
- Description
The usage object including the following properties.
Usage Object
prompt_tokens: number;
completion_tokens: number;
total_tokens: number;
- Name
system_fingerprint
- Type
- string
- Description
This fingerprint represents the backend configuration that the model runs with.
- Name
rawResponse
- Type
- Object
- Description
The headers of the response.
RunResponseStream Object
Response of langbase.agent.run()
with stream: true
is a Promise<AgentRunResponseStream>
.
AgentRunResponseStream Object
stream: ReadableStream<any>;
rawResponse?: {
headers: Record<string, string>;
};
- Name
rawResponse
- Type
- Object
- Description
The headers of the response.
- Name
stream
- Type
- ReadableStream
- Description
Stream is a readable stream delivering a sequence of StreamChunk objects.
StreamResponse Object
type StreamResponse = ReadableStream<StreamChunk>;
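To consume the stream, read StreamChunk objects off the ReadableStream. The sketch below builds a local stream of two chunks so it runs standalone; in a real run, the stream comes from langbase.agent.run({ stream: true, ... }), and the StreamChunk type here is a simplified stand-in for the shape documented below:

```typescript
// Collect the delta.content pieces from a stream of StreamChunk-shaped objects.
type StreamChunk = {
	id: string;
	object: string;
	created: number;
	model: string;
	choices: {
		index: number;
		delta: { content?: string };
		logprobs: null;
		finish_reason: string | null;
	}[];
};

async function collectText(stream: ReadableStream<StreamChunk>): Promise<string> {
	const reader = stream.getReader();
	let text = '';
	while (true) {
		const {done, value} = await reader.read();
		if (done) break;
		text += value.choices[0]?.delta.content ?? '';
	}
	return text;
}

// Local stand-in stream with two chunks, mirroring the chunk examples below.
const demo = new ReadableStream<StreamChunk>({
	start(controller) {
		const base = {id: 'chatcmpl-123', object: 'chat.completion.chunk', created: 0, model: 'gpt-4o-mini'};
		controller.enqueue({...base, choices: [{index: 0, delta: {content: 'Hi '}, logprobs: null, finish_reason: null}]});
		controller.enqueue({...base, choices: [{index: 0, delta: {content: 'there'}, logprobs: null, finish_reason: 'stop'}]});
		controller.close();
	},
});

collectText(demo).then(text => console.log(text)); // logs "Hi there"
```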
StreamChunk
- Name
StreamChunk
- Type
- StreamChunk
- Description
Represents a streamed chunk of a completion response returned by the model, based on the provided input.
StreamChunk Object
id: string;
object: string;
created: number;
model: string;
choices: ChoiceStream[];
A StreamChunk object has the following properties.
- Name
id
- Type
- string
- Description
The ID of the response.
- Name
object
- Type
- string
- Description
The object type name of the response.
- Name
created
- Type
- number
- Description
The timestamp of the response creation.
- Name
model
- Type
- string
- Description
The model used to generate the response.
- Name
choices
- Type
- ChoiceStream[]
- Description
A list of chat completion choices. Can contain more than one element if n is greater than 1.
Choice Object for langbase.agent.run() with stream true
index: number;
delta: Delta;
logprobs: boolean | null;
finish_reason: string;
RunResponse type of langbase.agent.run()
{
"output": "AI Engineer is a person who designs, builds, and maintains AI systems.",
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1720131129,
"model": "gpt-4o-mini",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "AI Engineer is a person who designs, builds, and maintains AI systems."
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 28,
"completion_tokens": 36,
"total_tokens": 64
},
"system_fingerprint": "fp_123"
}
RunResponseStream of langbase.agent.run() with stream true
{
"stream": StreamResponse // example of streamed chunks below.
}
StreamResponse has stream chunks
// A stream chunk looks like this …
{
"id": "chatcmpl-123",
"object": "chat.completion.chunk",
"created": 1719848588,
"model": "gpt-4o-mini",
"system_fingerprint": "fp_44709d6fcb",
"choices": [{
"index": 0,
"delta": { "content": "Hi" },
"logprobs": null,
"finish_reason": null
}]
}
// More chunks as they come in...
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]}
…
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}