Generate Text: pipe.generateText()

You can use a pipe to get any LLM to generate text based on a user prompt. For example, you can ask a pipe to generate a text completion for a prompt like "Who is an AI Engineer?" or give it an entire doc and ask it to summarize it.

The Langbase AI SDK provides a generateText() function to generate text using pipes with any LLM.

Deprecation Notice

This SDK method has been deprecated. Please use the new run SDK method with stream set to false.
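As a rough migration sketch, assuming the new run method accepts the same options object plus a stream flag (the exact run signature may differ; check the current SDK reference), the deprecated call maps over like this:

```typescript
// Hypothetical shape of the `run` options, mirroring GenerateOptions.
// Treat this as a sketch, not the authoritative signature.
interface RunOptions {
  messages?: {role: 'user' | 'assistant' | 'system' | 'tool'; content: string}[];
  variables?: {name: string; value: string}[];
  stream?: boolean;
  threadId?: string;
}

// The deprecated call:
//   const {completion} = await myPipe.generateText({messages});
// becomes, under this assumption:
//   const {completion} = await myPipe.run({messages, stream: false});
const options: RunOptions = {
  messages: [{role: 'user', content: 'Who is an AI Engineer?'}],
  stream: false,
};

console.log(options.stream); // false
```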



Generate a text completion using the generateText() function.

Function Signature

generateText(options)

// With types.
generateText(options: GenerateOptions)

  • Name
    options
    Type
    GenerateOptions
    Description

    GenerateOptions Object

    interface GenerateOptions {
      messages?: Message[];
      variables?: Variable[];
      chat?: boolean;
      threadId?: string;
    }

    Following are the properties of the options object.


messages

  • Name
    messages
    Type
    Array<Message>
    Description

    A messages array including the following properties. Optional if variables are provided.

    Message Object

    interface Message {
      role: 'user' | 'assistant' | 'system' | 'tool';
      content: string;
      name?: string;
      tool_call_id?: string;
    }

    • Name
      role
      Type
      'user' | 'assistant' | 'system' | 'tool'
      Description

      The role of the author of this message.

    • Name
      content
      Type
      string
      Description

      The contents of the message.

    • Name
      name
      Type
      string
      Description

      The name of the tool called by the LLM.

    • Name
      tool_call_id
      Type
      string
      Description

      The ID of the tool called by the LLM.


variables

  • Name
    variables
    Type
    Array<Variable>
    Description

    A variables array including the name and value params. Optional if messages are provided.

    Variable Object

    interface Variable {
      name: string;
      value: string;
    }

    • Name
      name
      Type
      string
      Description

      The name of the variable.

    • Name
      value
      Type
      string
      Description

      The value of the variable.


chat

  • Name
    chat
    Type
    boolean
    Description

    For a chat pipe, set chat to true.

    This is useful when you want to use a chat pipe to generate text as it returns a threadId. Defaults to false.


threadId

  • Name
    threadId
    Type
    string | null
    Description

    The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.

    • If threadId is not provided, a new thread will be created. E.g., the first message of a new chat will not have a threadId.
    • After the first message, a new threadId will be returned.
    • Use this threadId to continue the conversation in the same thread from the second message onwards.
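The thread lifecycle above can be sketched as follows (nextThreadId is a hypothetical helper for illustration, not an SDK API):

```typescript
// Hypothetical helper: keep the threadId returned by the first response,
// then reuse it for every later message in the same conversation.
function nextThreadId(
  current: string | null,
  returned?: string,
): string | null {
  return returned ?? current;
}

let threadId: string | null = null;              // first message: no thread yet
threadId = nextThreadId(threadId, 'thread_123'); // first response returns a new threadId
threadId = nextThreadId(threadId);               // later turns keep the same thread

console.log(threadId); // "thread_123"
```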

Install the SDK

npm i langbase

pnpm i langbase

yarn add langbase

.env file

# Add your Pipe API key here.
LANGBASE_MY_PIPE_API_KEY="pipe_12345"
# … add more keys if you have more pipes.

Generate Pipe: Use generateText()

import {Pipe} from 'langbase';

// 1. Initiate the Pipe.
const myPipe = new Pipe({
  apiKey: process.env.LANGBASE_PIPE_API_KEY!,
});

// 2. Generate the text by asking a question.
const {completion} = await myPipe.generateText({
  messages: [{role: 'user', content: 'Who is an AI Engineer?'}],
});

console.log(completion);

Variables with generateText()

// 1. Initiate the Pipe.
// … same as above

// 2. Generate text by asking a question.
const {completion} = await myPipe.generateText({
  messages: [{role: 'user', content: 'Who is {{question}}?'}],
  variables: [{name: 'question', value: 'AI Engineer'}],
});

Chat Pipe: Use generateText()

import {Pipe} from 'langbase';

// Initiate the Pipe.
const myPipe = new Pipe({
  apiKey: process.env.LANGBASE_PIPE_API_KEY!,
});

// Message 1: Tell something to the LLM.
const {completion, threadId} = await myPipe.generateText({
  messages: [{role: 'user', content: 'My company is called Langbase'}],
  chat: true,
});

console.log(completion);

// Message 2: Ask something about the first message.
// Continue the conversation in the same thread by sending
// `threadId` from the second message onwards.
const {completion: completion2} = await myPipe.generateText({
  messages: [{role: 'user', content: 'Tell me the name of my company?'}],
  chat: true,
  threadId,
});

console.log(completion2);

// You'll see any LLM will know the company is `Langbase`
// since it's the same chat thread. This is how you can
// continue a conversation in the same thread.

Response of generateText() is a Promise<GenerateResponse>.

GenerateResponse Object

interface GenerateResponse {
  completion: string;
  threadId?: string;
  id: string;
  object: string;
  created: number;
  model: string;
  system_fingerprint: string | null;
  choices: ChoiceGenerate[];
  usage: Usage;
}

  • Name
    completion
    Type
    string
    Description

    The generated text completion.

  • Name
    threadId
    Type
    string
    Description

    The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.

  • Name
    id
    Type
    string
    Description

    The ID of the raw response.

  • Name
    object
    Type
    string
    Description

    The object type name of the response.

  • Name
    created
    Type
    number
    Description

    The timestamp of the response creation.

  • Name
    model
    Type
    string
    Description

    The model used to generate the response.

  • Name
    system_fingerprint
    Type
    string
    Description

    This fingerprint represents the backend configuration that the model runs with.

  • Name
    choices
    Type
    ChoiceGenerate[]
    Description

    A list of chat completion choices. Can contain more than one element if n is greater than 1.

    Choice Object for generateText()

    interface ChoiceGenerate {
      index: number;
      message: Message;
      logprobs: boolean | null;
      finish_reason: string;
    }

  • Name
    usage
    Type
    Usage
    Description

    The usage object including the following properties.

    Usage Object

    interface Usage {
      prompt_tokens: number;
      completion_tokens: number;
      total_tokens: number;
    }
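As a quick sanity check, total_tokens should equal the sum of prompt and completion tokens; a small (hypothetical) helper illustrates reading the usage object:

```typescript
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

// Hypothetical helper: verify that token counts in a response add up.
function usageIsConsistent(usage: Usage): boolean {
  return usage.prompt_tokens + usage.completion_tokens === usage.total_tokens;
}

// Values taken from the example response below.
const usage: Usage = {prompt_tokens: 28, completion_tokens: 36, total_tokens: 64};
console.log(usageIsConsistent(usage)); // true
```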

Response of generateText()

{
  "completion": "AI Engineer is a person who designs, builds, and maintains AI systems.",
  "raw": {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1720131129,
    "model": "gpt-4o-mini",
    "choices": [
      {
        "index": 0,
        "message": {
          "role": "assistant",
          "content": "AI Engineer is a person who designs, builds, and maintains AI systems."
        },
        "logprobs": null,
        "finish_reason": "stop"
      }
    ],
    "usage": {
      "prompt_tokens": 28,
      "completion_tokens": 36,
      "total_tokens": 64
    },
    "system_fingerprint": "fp_123"
  }
}

Response with function call

// Completion is null when an LLM responds
// with a tool function call.
{
  "completion": null,
  "raw": {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1720131129,
    "model": "gpt-3.5-turbo-0125",
    "choices": [
      {
        "index": 0,
        "message": {
          "role": "assistant",
          "content": null,
          "tool_calls": [
            {
              "id": "call_abc123",
              "type": "function",
              "function": {
                "name": "get_current_weather",
                "arguments": "{\n\"location\": \"Boston, MA\"\n}"
              }
            }
          ]
        },
        "logprobs": null,
        "finish_reason": "stop"
      }
    ],
    "usage": {
      "prompt_tokens": 28,
      "completion_tokens": 36,
      "total_tokens": 64
    },
    "system_fingerprint": null
  }
}