Run Pipe langbase.pipe.run()
You can use the `langbase.pipe.run()` function to run a pipe. It can both generate and stream text for a user prompt like "Who is an AI Engineer?", or you can give it an entire doc and ask it to summarize it.
Generate a User/Org API key
You will need to generate an API key to authenticate your requests. For more information, visit the User/Org API key documentation.
API reference
langbase.pipe.run(options)
Request an LLM response by running a pipe with the `langbase.pipe.run()` function.
Function Signature
```ts
langbase.pipe.run(options);

// With types.
langbase.pipe.run(options: RunOptions);
```
options
- Name
options
- Type
- RunOptions
- Description
RunOptions Object
```ts
interface RunOptions {
  name: string;
  messages: Message[];
  variables?: Variable[];
  threadId?: string;
  rawResponse?: boolean;
  tools: Tool[];
}
```
The following are the properties of the `options` object.
name
- Name
name
- Type
- string
- Required
- Required
- Description
The name of the pipe to run, e.g. `ai-agent`. The pipe name and the User/Org API key are used to run the pipe.
apiKey
- Name
apiKey
- Type
- string
- Description
The API key of the pipe you want to run. If provided, `pipe.run()` will use this key instead of the user/org key to identify and run the pipe.
messages
- Name
messages
- Type
- Array<Message>
- Description
A messages array including the following properties. Optional if variables are provided.
Message Object
```ts
interface Message {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string | null;
  name?: string;
  tool_call_id?: string;
  tool_calls?: ToolCall[];
}
```
- Name
role
- Type
- 'user' | 'assistant' | 'system' | 'tool'
- Description
The role of the author of this message.
- Name
content
- Type
- string
- Description
The contents of the message.
- Name
name
- Type
- string
- Description
The name of the tool called by the LLM.
- Name
tool_call_id
- Type
- string
- Description
The ID of the tool call made by the LLM.
- Name
tool_calls
- Type
- Array<ToolCall>
- Description
The array of tool calls made by the LLM.
ToolCall Object
```ts
interface ToolCall {
  id: string;
  type: 'function';
  function: Function;
}
```
- Name
function
- Type
- Function
- Description
Function definition sent to LLM.
Function Object
```ts
export interface Function {
  name: string;
  arguments: string;
}
```
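Taken together, these types describe the tool-call round trip inside the messages array. The following is a minimal sketch of that flow; the `get_weather` function, the `call_abc123` ID, and the weather payload are illustrative, not part of the SDK.

```typescript
// Sketch of a tool-call round trip in the messages array:
// 1. the assistant message carries tool_calls generated by the LLM,
// 2. a follow-up 'tool' message returns the result, matched by tool_call_id.
const messages = [
  {role: 'user' as const, content: 'What is the weather in Lahore?'},
  {
    role: 'assistant' as const,
    content: null,
    tool_calls: [
      {
        id: 'call_abc123', // illustrative ID
        type: 'function' as const,
        function: {name: 'get_weather', arguments: '{"city":"Lahore"}'},
      },
    ],
  },
  {
    role: 'tool' as const,
    name: 'get_weather',
    tool_call_id: 'call_abc123', // must match the id in tool_calls above
    content: '{"tempC": 31}', // the tool's result, returned to the LLM
  },
];
```

Note that the `tool` message links back to the assistant's `tool_calls` entry via `tool_call_id`; without a matching ID the LLM cannot associate the result with its request.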
variables
- Name
variables
- Type
- Array<Variable>
- Description
A variables array including the `name` and `value` params. Optional if messages are provided.
Variable Object
```ts
interface Variable {
  name: string;
  value: string;
}
```
- Name
name
- Type
- string
- Description
The name of the variable.
- Name
value
- Type
- string
- Description
The value of the variable.
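Variables fill template placeholders in the pipe's prompt instead of sending a messages array. A minimal sketch, assuming a hypothetical `ai-agent` pipe whose prompt references a `{{city}}` placeholder:

```typescript
// Hypothetical: the 'ai-agent' pipe prompt contains a {{city}} placeholder,
// e.g. "What is the weather like in {{city}}?".
const options = {
  name: 'ai-agent',
  variables: [{name: 'city', value: 'Lahore'}],
};

// Would be passed as: await langbase.pipe.run(options);
console.log(options.variables[0].name); // → "city"
```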
threadId
- Name
threadId
- Type
- string | undefined | null
- Description
The ID of the thread. Enable this if you want to continue the conversation in the same thread from the second message onwards. Works only with deployed pipes.
- If `threadId` is not provided, a new thread will be created. E.g. the first message of a new chat will not have a `threadId`.
- After the first message, a new `threadId` will be returned.
- Use this `threadId` to continue the conversation in the same thread from the second message onwards.
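The thread flow above can be sketched as follows; the pipe name and thread ID are illustrative, and the actual run calls are shown as comments:

```typescript
// Sketch: continuing a conversation in the same thread.
// First turn: no threadId, so a new thread is created server-side.
//   const first = await langbase.pipe.run({
//     name: 'ai-agent',
//     messages: [{role: 'user', content: 'Who is an AI Engineer?'}],
//   });
// Second turn: reuse the threadId returned by the first response.
function followUp(threadId: string, content: string) {
  return {
    name: 'ai-agent', // hypothetical pipe name
    threadId, // ties this message to the existing thread
    messages: [{role: 'user' as const, content}],
  };
}

const second = followUp('thread_123', 'And what do they build?');
console.log(second.threadId); // → "thread_123"
```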
rawResponse
- Name
rawResponse
- Type
- boolean | undefined
- Description
Enable if you want to get the complete raw LLM response.
Default: `false`
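A minimal sketch of enabling raw headers, assuming a hypothetical `ai-agent` pipe:

```typescript
// Sketch: requesting the raw LLM response headers.
const options = {
  name: 'ai-agent', // hypothetical pipe name
  rawResponse: true, // ask for provider headers on the response
  messages: [{role: 'user' as const, content: 'Who is an AI Engineer?'}],
};

// After `const response = await langbase.pipe.run(options);`,
// response.rawResponse?.headers is a Record<string, string>.
console.log(options.rawResponse); // → true
```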
tools
- Name
tools
- Type
- Array<ToolsOptions>
- Description
A list of tools the model may call.
Tools Object
```ts
interface ToolsOptions {
  type: 'function';
  function: FunctionOptions;
}
```
- Name
type
- Type
- 'function'
- Description
The type of the tool. Currently, only `function` is supported.
- Name
function
- Type
- FunctionOptions
- Description
The function that the model may call.
FunctionOptions Object
```ts
export interface FunctionOptions {
  name: string;
  description?: string;
  parameters?: Record<string, unknown>;
}
```
- Name
name
- Type
- string
- Description
The name of the function to call.
- Name
description
- Type
- string
- Description
The description of the function.
- Name
parameters
- Type
- Record<string, unknown>
- Description
The parameters of the function.
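A minimal sketch of a tool definition the model may call. The `get_weather` name and its JSON-schema parameters are illustrative, not part of the SDK:

```typescript
// Sketch: defining a tool for the model to call.
const tools = [
  {
    type: 'function' as const,
    function: {
      name: 'get_weather', // hypothetical function name
      description: 'Get the current weather for a city.',
      parameters: {
        // JSON Schema describing the function's arguments
        type: 'object',
        properties: {city: {type: 'string'}},
        required: ['city'],
      },
    },
  },
];

// Passed with the other run options, e.g.:
//   await langbase.pipe.run({name: 'ai-agent', messages, tools});
console.log(tools[0].function.name); // → "get_weather"
```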
options
- Name
options
- Type
- RunOptionsStream
- Description
To stream the response, pass `stream: true`. The options then follow the `RunOptionsStream` interface.
RunOptionsStream Object
```ts
interface RunOptionsStream extends RunOptions {
  stream: boolean;
}
```
Usage example
Install the SDK
```shell
npm i langbase
```
Environment variables
.env file
```shell
LANGBASE_API_KEY="<USER/ORG-API-KEY>"
```
langbase.pipe.run()
examples
langbase.pipe.run()
```ts
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!, // User/Org API key
});

const response = await langbase.pipe.run({
  name: 'ai-agent',
  messages: [
    {
      role: 'user',
      content: 'Who is an AI Engineer?',
    },
  ],
});
```
Response
The response of `langbase.pipe.run()` is a `Promise<RunResponse | RunResponseStream>` object.
RunResponse Object
```ts
interface RunResponse {
  completion: string;
  threadId?: string;
  id: string;
  object: string;
  created: number;
  model: string;
  choices: ChoiceGenerate[];
  usage: Usage;
  system_fingerprint: string | null;
  rawResponse?: {
    headers: Record<string, string>;
  };
}
```
- Name
completion
- Type
- string
- Description
The generated text completion.
- Name
threadId
- Type
- string
- Description
The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.
- Name
id
- Type
- string
- Description
The ID of the raw response.
- Name
object
- Type
- string
- Description
The object type name of the response.
- Name
created
- Type
- number
- Description
The timestamp of the response creation.
- Name
model
- Type
- string
- Description
The model used to generate the response.
- Name
choices
- Type
- ChoiceGenerate[]
- Description
A list of chat completion choices. Can contain more than one element if `n` is greater than 1.
Choice Object for langbase.pipe.run() with stream off
```ts
interface ChoiceGenerate {
  index: number;
  message: Message;
  logprobs: boolean | null;
  finish_reason: string;
}
```
- Name
usage
- Type
- Usage
- Description
The usage object including the following properties.
Usage Object
```ts
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```
- Name
system_fingerprint
- Type
- string
- Description
This fingerprint represents the backend configuration that the model runs with.
- Name
rawResponse
- Type
- Object
- Description
The headers of the raw LLM response. Present when `rawResponse` is enabled.
RunResponseStream Object
The response of `langbase.pipe.run()` with `stream: true` is a `Promise<RunResponseStream>`.
```ts
interface RunResponseStream {
  stream: ReadableStream<any>;
  threadId: string | null;
  rawResponse?: {
    headers: Record<string, string>;
  };
}
```
- Name
threadId
- Type
- string
- Description
The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.
- Name
rawResponse
- Type
- Object
- Description
The headers of the raw LLM response. Present when `rawResponse` is enabled.
- Name
stream
- Type
- ReadableStream
- Description
A readable stream carrying a streamed sequence of StreamChunk objects.
StreamResponse Object
```ts
type StreamResponse = ReadableStream<StreamChunk>;
```
StreamChunk
- Name
StreamChunk
- Type
- StreamChunk
- Description
Represents a streamed chunk of a completion response returned by the model, based on the provided input.
StreamChunk Object
```ts
interface StreamChunk {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: ChoiceStream[];
}
```
A `StreamChunk` object has the following properties.
- Name
id
- Type
- string
- Description
The ID of the response.
- Name
object
- Type
- string
- Description
The object type name of the response.
- Name
created
- Type
- number
- Description
The timestamp of the response creation.
- Name
model
- Type
- string
- Description
The model used to generate the response.
- Name
choices
- Type
- ChoiceStream[]
- Description
A list of chat completion choices. Can contain more than one element if `n` is greater than 1.
Choice Object for langbase.pipe.run() with stream true
```ts
interface ChoiceStream {
  index: number;
  delta: Delta;
  logprobs: boolean | null;
  finish_reason: string;
}
```
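A minimal sketch of consuming the stream, assuming chunks arrive as already-parsed StreamChunk-like objects on a Web `ReadableStream` (Node 18+). `makeFake()` below is a local stand-in for `response.stream` returned by `langbase.pipe.run({..., stream: true})`, so the sketch is self-contained.

```typescript
interface Delta {
  content?: string;
}
interface Chunk {
  choices: {delta: Delta}[];
}

// Read every chunk and concatenate the delta content.
async function collect(stream: ReadableStream<Chunk>): Promise<string> {
  const reader = stream.getReader();
  let text = '';
  for (;;) {
    const {done, value} = await reader.read();
    if (done) break;
    text += value.choices[0]?.delta.content ?? '';
  }
  return text;
}

// Local stand-in for the real response.stream.
function makeFake(): ReadableStream<Chunk> {
  return new ReadableStream<Chunk>({
    start(controller) {
      controller.enqueue({choices: [{delta: {content: 'Hi '}}]});
      controller.enqueue({choices: [{delta: {content: 'there'}}]});
      controller.enqueue({choices: [{delta: {}}]}); // final chunk has empty delta
      controller.close();
    },
  });
}

collect(makeFake()).then(text => console.log(text)); // → "Hi there"
```

With a real response, you would call `collect(response.stream)` instead of using the fake stream.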
RunResponse type of langbase.pipe.run()
```json
{
  "completion": "AI Engineer is a person who designs, builds, and maintains AI systems.",
  "threadId": "thread_123",
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1720131129,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "AI Engineer is a person who designs, builds, and maintains AI systems."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 36,
    "total_tokens": 64
  },
  "system_fingerprint": "fp_123"
}
```
RunResponseStream of langbase.pipe.run() with stream true
```js
{
  "threadId": "string-uuid-123",
  "stream": StreamResponse // example of streamed chunks below.
}
```
StreamResponse has stream chunks
```js
// A stream chunk looks like this …
{
  "id": "chatcmpl-123",
  "object": "chat.completion.chunk",
  "created": 1719848588,
  "model": "gpt-4o-mini",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "delta": { "content": "Hi" },
    "logprobs": null,
    "finish_reason": null
  }]
}

// More chunks as they come in...
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]}
…
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
```