Pipe API: Update beta

The update pipe API endpoint allows you to update a pipe on Langbase dynamically via the API, with any custom configuration. This endpoint requires a User or Org API key. To generate one, visit your profile or organization settings page on Langbase.
Generate a User/Org API key
You will need to generate an API key to authenticate your requests. For more information, visit the User/Org API key documentation.
Update a pipe
Update a pipe by sending the pipe configuration inside the request body.
Required headers
- `Content-Type` (string): Request content type. Must be `application/json`.
- `Authorization` (string): Bearer token. Replace `YOUR_API_KEY` with your User/Org API key.
Required path parameters
- `owner` (string): Your organization name or username. Replace `{owner}` with your organization name or username.
- `pipe` (string): Name of the pipe. Replace `{pipe}` with the name of the pipe.
Optional attributes
- `name` (string): Name of the pipe.
- `description` (string): Short description of the pipe.
- `status` (string): Status of the pipe. Can be one of: `public`, `private`.
- `config` (object): Configuration object of the pipe.
- `config.meta.stream` (boolean): If enabled, the output is streamed in real time, like ChatGPT. This is helpful when a user is directly reading the text.
- `config.meta.json` (boolean): Enforce the output to be in JSON format.
- `config.meta.store` (boolean): If enabled, both prompts and completions are stored in the database. Otherwise, only the system prompt and few-shot messages are saved.
- `config.meta.moderate` (boolean): If enabled, Langbase blocks flagged requests automatically.
- `config.model.name` (string): ID of the LLM model. You can copy the ID of a model from the list of supported LLM models at Langbase.
- `config.model.provider` (string): Name of the LLM model provider. Can be one of the supported providers: `OpenAI`, `Together`, `Anthropic`, `Groq`, `Google`, `Cohere`. Check out the list of all supported LLM providers at Langbase.
- `config.model.tool_choice` (`'auto' | 'required' | 'object'`): Controls which (if any) tool is called by the model. `auto`: the model can pick between generating a message or calling one or more tools. `required`: the model must call one or more tools. `object`: specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. Default: `auto`.
- `config.model.parallel_tool_calls` (boolean): Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.
- `config.model.params.top_p` (number): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
- `config.model.params.max_tokens` (number): Maximum number of tokens in the response message returned.
- `config.model.params.temperature` (number): What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random; lower values like 0.2 make it more focused and deterministic.
- `config.model.params.presence_penalty` (number): Penalizes a word based on its occurrence in the input text.
- `config.model.params.frequency_penalty` (number): Penalizes a word based on how frequently it appears in the training data.
- `config.model.params.stop` (array): Up to 4 sequences where the API will stop generating further tokens.
- `config.prompt.system` (string): System prompt. Variables can be inserted into the prompt.
- `config.prompt.opening` (string): Chat opening prompt.
- `config.prompt.safety` (string): AI safety prompt.
- `config.prompt.messages` (array): An array containing message objects.
- `config.prompt.variables` (array): An array containing variable objects.
- `config.prompt.json` (string): Use this prompt to define the JSON output format, schema, and more. It is appended to the system prompt.
- `config.prompt.rag` (string): Use this prompt to make the LLM answer questions from Memoryset documents.
- `config.tools` (array): An array of objects with valid tool definitions. Read more about valid tool definitions. Default: `[]`.
- `config.memorysets` (array): An array of memoryset names.
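For reference, the object form of `config.model.tool_choice` described above can be sketched as the following config fragment (the function name `my_function` is the placeholder used in the description, not a real tool):

```json
{
  "config": {
    "model": {
      "tool_choice": {
        "type": "function",
        "function": { "name": "my_function" }
      }
    }
  }
}
```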
Basic Pipe

```shell
curl https://api.langbase.com/beta/pipes/{owner}/{pipe} \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -d '{
    "name": "Test Pipe",
    "description": "This is a test pipe",
    "status": "public"
  }'
```
Response

```json
{
  "name": "test-pipe",
  "type": "chat",
  "description": "This is a create Pipe test from API",
  "status": "private",
  "api_key": "pipe_4FVBn2DgrzfJf...",
  "owner_login": "langbase",
  "url": "https://langbase.com/langbase/test-pipe"
}
```
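Beyond the top-level attributes, the same endpoint accepts the nested `config` object described above. A sketch of a fuller update request follows; the model ID, provider, and parameter values here are illustrative assumptions, not prescriptive, so copy a real model ID from the list of supported LLM models at Langbase before using it:

```shell
# Update a pipe's nested config: streaming, model params, and system prompt.
# Model name and parameter values below are illustrative only.
curl https://api.langbase.com/beta/pipes/{owner}/{pipe} \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer <YOUR_API_KEY>" \
  -d '{
    "description": "Pipe with custom model config",
    "config": {
      "meta": {
        "stream": true,
        "store": true
      },
      "model": {
        "name": "gpt-4o-mini",
        "provider": "OpenAI",
        "tool_choice": "auto",
        "params": {
          "temperature": 0.2,
          "max_tokens": 512,
          "top_p": 0.9
        }
      },
      "prompt": {
        "system": "You are a helpful assistant."
      }
    }
  }'
```

Only the attributes present in the request body are updated; as in the basic example, `{owner}`, `{pipe}`, and `<YOUR_API_KEY>` must be replaced with your own values.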