Langbase Docs
https://langbase.com/docs/

# Langbase Docs

### Langbase is the most powerful serverless AI cloud for building and deploying AI agents.

Build, deploy, and scale serverless AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience, no bloated frameworks. [CHAI.new](https://chai.new) can even build agents for you: literally vibe code any AI agent.

---

## Why Langbase?

Langbase is the best way to build and deploy AI agents. Our mission: AI for all. Not just ML wizards. Every. Single. Developer.

**Build AI agents without any bloated frameworks**. You write the logic, we handle the logistics. Compared to complex AI frameworks, Langbase is serverless and [the first composable AI platform][composable].

1. Start by building simple [AI agents (pipes)](/pipe)
2. Then train serverless semantic [Memory agents (RAG)](/memory) to get accurate and trusted results

Get started for free:

- [CHAI.new](https://chai.new): Vibe code any AI agent. Chai turns prompts into prod-ready agents.
- [AI Studio][studio]: Build, collaborate, and deploy AI agents with tools and Memory (RAG).
- [Langbase SDK][sdk]: Easiest way to build AI agents with TypeScript. (recommended)
- [HTTP API][api]: Build AI agents with any language (Python, Go, PHP, etc.).

---

**Langbase is free for anyone to [get started][signup]**. We process billions of AI message tokens daily and are used by thousands of developers. [Tweet][x] us — what will you ship with Langbase?

It all [started][start] with a developer thinking: GPT is amazing, I want it everywhere. That's what ⌘ Langbase does for me.

---

[composable]: /composable-ai
[studio]: https://studio.langbase.com
[sdk]: /sdk
[api]: /api-reference
[signup]: https://langbase.fyi/awesome
[start]: https://langbase.fyi/starting-langbase
[x]: https://twitter.com/LangbaseInc
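As a quick taste of the SDK route, here is a minimal sketch that makes a single LLM call through the Agent primitive, mirroring the quickstarts later in these docs. The model and prompt are placeholders; swap in any [supported model](/supported-models-and-providers/).

```ts
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Run a single LLM call through the Agent primitive.
	const response = await langbase.agent.run({
		model: 'openai:gpt-4.1-mini', // any supported provider:model-id
		apiKey: process.env.OPENAI_API_KEY!, // your LLM provider's key
		instructions: 'You are a helpful assistant.',
		input: [{role: 'user', content: 'What is Langbase?'}],
		stream: false,
	});

	console.log(response.output);
}

main();
```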
Workflow
https://langbase.com/docs/workflow/

# Workflow

Workflow, an AI Primitive by Langbase, helps you build multi-step agentic applications by breaking them into simple steps, with built-in durability features like timeouts and retries.

Building AI applications often requires orchestrating multiple operations that depend on each other. Workflow enables you to:

- Orchestrate operations in both sequential and parallel execution patterns
- Create conditional execution paths based on previous step results
- Implement resilient processes with configurable retry strategies
- Set time boundaries to prevent operations from running indefinitely
- Track execution flow with detailed step-by-step logging

---

## Quickstart

This example shows how to build a simple email processing workflow that summarizes an email, analyzes its sentiment, and determines whether a response is needed.

## Step #1: Generate Langbase API key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If not, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## Step #2: Setup your project

Create a new directory for your project and navigate to it.

```bash {{ title: 'bash' }}
mkdir langbase-workflow
cd langbase-workflow
```

### Initialize the project

Create a new Node.js project.

```bash {{ title: 'npm' }}
npm init -y
```

```bash {{ title: 'pnpm' }}
pnpm init
```

```bash {{ title: 'yarn' }}
yarn init -y
```

### Install Langbase SDK

Install the Langbase SDK:

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

Install `dotenv` for environment variables and the `@types/node` package:

```bash {{ title: 'npm' }}
npm i -D dotenv @types/node
```

```bash {{ title: 'pnpm' }}
pnpm i -D dotenv @types/node
```

```bash {{ title: 'yarn' }}
yarn add -D dotenv @types/node
```

### Create an env file

Create a new file called `.env` and paste the following code:

- `LANGBASE_API_KEY`: Your Langbase API key
- `OPENAI_API_KEY`: Your OpenAI API key (the quickstart code below uses OpenAI as the LLM provider)

```bash
LANGBASE_API_KEY=
OPENAI_API_KEY=
```

## Step #3: Create a new workflow

Create a new file called `workflow.ts` and paste the following code. In this example, we create an email processing workflow that summarizes the email, analyzes the sentiment, and determines if a response is needed.

Each step in the workflow requires at least:

1. An `id` to identify the step
2. A `run` function that contains the logic for the step

The `run` function can be an async function that returns a value or a promise. The returned value will be available to the next step in the workflow.
```ts
import 'dotenv/config';
import {Langbase, Workflow} from 'langbase';

async function processEmail({emailContent}: {emailContent: string}) {
	// Initialize Langbase
	const langbase = new Langbase({
		apiKey: process.env.LANGBASE_API_KEY!,
	});

	// Create a new workflow
	const workflow = new Workflow({
		debug: true,
	});

	try {
		// Steps 1 & 2: Run summary and sentiment analysis in parallel
		const [summary, sentiment] = await Promise.all([
			workflow.step({
				id: 'summarize_email', // The id for the step
				run: async () => {
					const response = await langbase.agent.run({
						model: 'openai:gpt-4.1-mini',
						instructions: `Create a concise summary of this email. Focus on the main points, requests, and any action items mentioned.`,
						apiKey: process.env.OPENAI_API_KEY!,
						input: [{role: 'user', content: emailContent}],
						stream: false,
					});
					return response.output;
				},
			}),
			workflow.step({
				id: 'analyze_sentiment',
				run: async () => {
					const response = await langbase.agent.run({
						model: 'openai:gpt-4.1-mini',
						instructions: `Analyze the sentiment of this email. Provide a brief analysis that includes the overall tone (positive, neutral, or negative) and any notable emotional elements.`,
						apiKey: process.env.OPENAI_API_KEY!,
						input: [{role: 'user', content: emailContent}],
						stream: false,
					});
					return response.output;
				},
			}),
		]);

		// Step 3: Determine if response is needed (using the results from previous steps)
		const responseNeeded = await workflow.step({
			id: 'determine_response_needed',
			run: async () => {
				const response = await langbase.agent.run({
					model: 'openai:gpt-4.1-mini',
					instructions: `Based on the email summary and sentiment analysis, determine if a response is needed. Answer with 'yes' if a response is required, or 'no' if no response is needed. Consider factors like: Does the email contain a question? Is there an explicit request? Is it urgent?`,
					apiKey: process.env.OPENAI_API_KEY!,
					input: [
						{
							role: 'user',
							content: `Email: ${emailContent}\n\nSummary: ${summary}\n\nSentiment: ${sentiment}\n\nDoes this email require a response?`,
						},
					],
					stream: false,
				});
				return response.output.toLowerCase().includes('yes');
			},
		});

		// Step 4: Generate response if needed
		let response: string | null = null;
		if (responseNeeded) {
			response = await workflow.step({
				id: 'generate_response',
				run: async () => {
					const response = await langbase.agent.run({
						model: 'openai:gpt-4.1-mini',
						instructions: `Generate a professional email response. Address all questions and requests from the original email. Be helpful, clear, and maintain a professional tone that matches the original email sentiment.`,
						apiKey: process.env.OPENAI_API_KEY!,
						input: [
							{
								role: 'user',
								content: `Original Email: ${emailContent}\n\nSummary: ${summary}\n\nSentiment Analysis: ${sentiment}\n\nPlease draft a response email.`,
							},
						],
						stream: false,
					});
					return response.output;
				},
			});
		}

		// Return the results
		return {
			summary,
			sentiment,
			responseNeeded,
			response,
		};
	} catch (error) {
		console.error('Email processing workflow failed:', error);
		throw error;
	}
}

async function main() {
	const sampleEmail = `
Subject: Pricing Information and Demo Request

Hello,

I came across your platform and I'm interested in learning more about your product for our growing company. Could you please send me some information on your pricing tiers? We're particularly interested in the enterprise tier as we now have a team of about 50 people who would need access.

Would it be possible to schedule a demo sometime next week?

Thanks in advance for your help!
Best regards,
Jamie
`;

	const results = await processEmail({emailContent: sampleEmail});
	console.log(JSON.stringify(results, null, 2));
}

main();
```

## Step #4: Run the workflow

```bash
npx tsx workflow.ts
```

This workflow processes an email through multiple steps:

1. Email summarization and sentiment analysis run in parallel.
2. After the summary and sentiment analysis are complete, the response determination step runs.
3. If a response is needed, the response generation step runs.

By running summarization and sentiment analysis in parallel, we optimize the workflow's execution time without sacrificing functionality. Each step still has access to the data it needs from previous steps.

```json
{
	"summary": "The sender is interested in learning more about the product for their company of about 50 people. They are requesting pricing information, particularly for the enterprise tier, and would like to schedule a demo next week.",
	"sentiment": "positive",
	"responseNeeded": true,
	"response": "Subject: RE: Pricing Information and Demo Request Hello Jamie, Thank you for your interest in our platform! I'd be happy to provide you with information about our pricing tiers and arrange a demo for your team. For an enterprise-level organization with 50 users, our Enterprise tier would indeed be the most suitable option. This package includes: - Unlimited access for all team members - Advanced security features - Dedicated account manager - Custom integrations - Priority support I've attached our complete pricing brochure with detailed information about all our plans. For your team size, we offer custom pricing with volume discounts. Regarding the demo, I'd be delighted to schedule one for next week. Could you please let me know what days and times work best for you and your team? Our demos typically take about 45 minutes and include time for questions. If you have any specific features or use cases you'd like us to focus on during the demo, please let me know so I can tailor the presentation to your needs. Looking forward to hearing from you and showing you how our platform can benefit your growing company. Best regards, [Your Name] [Your Position] [Contact Information]"
}
```
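The quickstart leaves the durability features at their defaults. A step can also take a timeout and a retry policy; the sketch below is a minimal illustration extending the quickstart's `workflow`, and the exact option names and shapes should be checked against the [Workflow SDK reference](/sdk/workflow).

```ts
// A sketch of a durable step, assuming the timeout/retries options
// documented in the Workflow SDK reference.
const enriched = await workflow.step({
	id: 'enrich_email',
	timeout: 10000, // abort this step after 10 seconds
	retries: {
		limit: 3, // retry up to 3 times on failure
		delay: 1000, // wait 1s before the first retry
		backoff: 'exponential', // grow the delay on each retry
	},
	run: async () => {
		// Any fallible async work, e.g. an LLM call or a fetch.
		const res = await fetch('https://example.com/enrich'); // placeholder URL
		return res.ok;
	},
});
```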
---

## Next Steps

- Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk).
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

Tools
https://langbase.com/docs/tools/

# Tools

Tools, an AI Primitive by Langbase, allow you to extend the capabilities of your AI applications. They enable you to integrate functionality such as web search, crawling, and other specialized tasks into your AI workflows. By using tools, you can enhance the performance and versatility of your AI agents, making them more capable of handling complex tasks and providing valuable insights.

---

## Quickstart: Using Langbase Tools

---

## Let's get started

In this guide, we'll use the Langbase SDK to interact with the Tools API, specifically focusing on web crawling and web search capabilities:

---

## Step #1: Generate Langbase API key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If not, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

Some tools require external API keys. For this guide, you'll need:

- [Spider.cloud](https://spider.cloud) API key for web crawling
- [Exa](https://dashboard.exa.ai/api-keys) API key for web search

---

## Step #2: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir langbase-tools && cd langbase-tools
```

### Initialize the project

Create a new Node.js project.

```bash {{ title: 'npm' }}
npm init -y
```

```bash {{ title: 'pnpm' }}
pnpm init
```

```bash {{ title: 'yarn' }}
yarn init -y
```

### Install dependencies

You will use the [Langbase SDK](/sdk) and `dotenv` to manage environment variables.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

### Create an env file

Create a `.env` file in the root of your project and add your API keys:

```bash {{ title: '.env' }}
LANGBASE_API_KEY=your_langbase_api_key_here
SPIDER_CLOUD_API_KEY=your_spider_cloud_api_key_here
EXA_API_KEY=your_exa_api_key_here
```

---

## Step #3: Web Crawling with Langbase Tools

Let's create a file named `web-crawler.ts` that demonstrates how to use the web crawling tool:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Use the crawl tool to extract content from these URLs
	const crawlResults = await langbase.tools.crawl({
		url: ['https://langbase.com', 'https://langbase.com/about'],
		apiKey: process.env.SPIDER_CLOUD_API_KEY!,
		maxPages: 1, // Limit the crawl to 1 page
	});

	// Display the results
	console.log(crawlResults);
}

main();
```

---

## Step #4: Run the web crawler

Run the script to crawl the specified websites:

```bash {{ title: 'npm' }}
npx tsx web-crawler.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx web-crawler.ts
```

You should see output showing the crawled URLs and extracted content:

```js
[
	{
		"url": "https://langbase.com/about",
		"content": "⌘Langbase –Serverless AI Agents platform# # ⌘Langbase –Serverless AI Agents platformThe most powerful serverless platform for building AI agents. Build. Deploy. Scale."
	},
	{
		"url": "https://langbase.com",
		"content": "⌘Langbase –Serverless AI Agents platform# # ⌘Langbase –Serverless AI Agents platformThe most powerful serverless platform for building AI agents. Build. Deploy. Scale."
	}
]
```

---
## Step #5: Web Search with Langbase Tools

Now, let's create a file named `web-search.ts` that demonstrates how to use the web search tool:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Perform a web search query
	const results = await langbase.tools.webSearch({
		service: 'exa',
		totalResults: 2,
		query: 'What is Langbase?',
		domains: ['https://langbase.com'],
		apiKey: process.env.EXA_API_KEY!, // Find your Exa key: https://dashboard.exa.ai/api-keys
	});

	console.log(results);
}

main();
```

---

## Step #6: Run the web search

Run the script to perform web searches:

```bash {{ title: 'npm' }}
npx tsx web-search.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx web-search.ts
```

You should see output showing the search results:

```js
[
	{
		"url": "https://langbase.com/",
		"content": "The most powerful serverless platform for building AI products. BaseAI: The first Web AI Framework for developers Build agentic ( pipes memory tools )"
	},
	{
		"url": "https://langbase.com/about",
		"content": "The most powerful serverless platform for building AI products. BaseAI: The first Web AI Framework for developers Build agentic ( pipes memory tools )"
	}
]
```

---

## Next Steps

- Combine tools with other Langbase primitives like Embed and Chunk to build more powerful AI apps
- Use these tools to enhance your RAG (Retrieval-Augmented Generation) systems with real-time web data
- Build something cool with Langbase [SDK](/sdk) and [APIs](/api-reference)
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support

---

Threads
https://langbase.com/docs/threads/

# Threads

Threads, an AI Primitive by Langbase, allow you to manage conversation history and context. They are essential for building conversational applications that require context management and organization of conversation threads. Threads help you maintain a coherent conversation flow, making it easier to build applications that require context-aware interactions.

---

## Quickstart: Managing Conversations with Threads

---

## Let's get started

In this guide, we'll use the Langbase SDK to interact with the Threads API:

---

## Step #1: Generate Langbase API key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If not, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## Step #2: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir conversation-app && cd conversation-app
```

### Initialize the project

Create a new Node.js project.

```bash {{ title: 'npm' }}
npm init -y
```

```bash {{ title: 'pnpm' }}
pnpm init
```

```bash {{ title: 'yarn' }}
yarn init -y
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to work with threads and `dotenv` to manage environment variables.
```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

### Create an env file

Create a `.env` file in the root of your project and add your Langbase API key:

```bash {{ title: '.env' }}
LANGBASE_API_KEY=your_api_key_here
```

---

## Step #3: Create a new thread

Let's create a file named `create-thread.ts` to demonstrate how to create a new thread:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Create a new thread with initial messages and metadata
	const thread = await langbase.threads.create({
		// Metadata for organization and filtering
		metadata: {
			userId: "user-456",
			topic: "billing-question",
			channel: "website"
		},
		// Initial messages to start the conversation
		messages: [
			{
				role: "system",
				content: "You are a helpful customer support agent. Be concise and friendly in your responses."
			},
			{
				role: "user",
				content: "Hi, I have a question about my recent bill."
			}
		]
	});

	console.log('Thread created:', thread);
}

main();
```

Run the script to create your first thread:

```bash {{ title: 'npm' }}
npx tsx create-thread.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx create-thread.ts
```

You should see output similar to this:

```json
{
	"id": "06d1be7e-94fb-4219-b983-931089680ebb",
	"object": "thread",
	"created_at": 1714322048,
	"metadata": {
		"userId": "user-456",
		"topic": "billing-question",
		"channel": "website"
	}
}
```

---

## Step #4: Add messages to a thread

Now let's create a file named `append-messages.ts` to add more messages to our thread:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const threadId = "06d1be7e-94fb-4219-b983-931089680ebb";

	// Add an assistant response and a follow-up user message
	const messages = await langbase.threads.append({
		threadId,
		messages: [
			{
				role: "assistant",
				content: "I'd be happy to help with your billing question. Could you please provide your account number or the specific charge you're inquiring about?"
			},
			{
				role: "user",
				content: "My account number is AC-9876. I was charged twice for the same service on April 15.",
				metadata: {
					timestamp: new Date().toISOString(),
					device: "mobile"
				}
			}
		]
	});

	console.log(`Added ${messages.length} messages to thread ${threadId}`);
	console.log('Messages:', messages);
}

main();
```

Run the script to append messages to your thread:

```bash {{ title: 'npm' }}
npx tsx append-messages.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx append-messages.ts
```

---

## Step #5: Retrieve thread messages

Let's create a file named `list-messages.ts` to retrieve all messages in our thread:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const threadId = "06d1be7e-94fb-4219-b983-931089680ebb";

	// List all messages in the thread
	const messages = await langbase.threads.messages.list({
		threadId
	});

	console.log(`Retrieved ${messages.length} messages from thread ${threadId}`);

	// Display the conversation
	messages.forEach(msg => {
		console.log(`[${msg.role}]: ${msg.content}`);
	});
}

main();
```

Run the script to see all messages in your thread:

```bash {{ title: 'npm' }}
npx tsx list-messages.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx list-messages.ts
```

The output should show the entire conversation history in chronological order:

```
Retrieved 4 messages from thread 06d1be7e-94fb-4219-b983-931089680ebb
[system]: You are a helpful customer support agent. Be concise and friendly in your responses.
[user]: Hi, I have a question about my recent bill.
[assistant]: I'd be happy to help with your billing question. Could you please provide your account number or the specific charge you're inquiring about?
[user]: My account number is AC-9876. I was charged twice for the same service on April 15.
```

---

## Next Steps

- Build something cool with Langbase [SDK](/sdk) and [APIs](/api-reference).
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

Supported Models and Providers
https://langbase.com/docs/supported-models-and-providers/

# Supported Models and Providers
Langbase supports a wide range of the latest Large Language Models (LLMs) and providers, and we are continuously adding support for new models as they are released. Here are some of the models and providers supported by Langbase.
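When you run an agent or a pipe, you reference a model as a single `provider:model-id` string together with that provider's API key, as in the Workflow quickstart earlier in these docs. A minimal sketch with an assumed OpenAI model:

```ts
// Models are referenced as 'provider:model-id' strings.
// Assumes an initialized `langbase` client and an OpenAI key.
const response = await langbase.agent.run({
	model: 'openai:gpt-4.1-mini',
	apiKey: process.env.OPENAI_API_KEY!, // the provider's own key
	input: [{role: 'user', content: 'Hello!'}],
	stream: false,
});

console.log(response.output);
```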
## Supported LLM Providers

We currently support the following LLM providers.

- [OpenAI](#open-ai)
- [Anthropic](#anthropic)
- [Google AI](#google-ai)
- [Together](#together)
- [Fireworks AI](#fireworks-ai)
- [Groq](#groq)
- [Deepseek](#deepseek)
- [Perplexity](#perplexity)
- [Mistral AI](#mistral-ai)
- [Cohere](#cohere)
- [xAI](#x-ai)
- [Azure OpenAI](#azure-open-ai)
- [OpenRouter](#open-router)

You can use any of these providers to build your Pipe by adding your provider's key. Please feel free to request any specific provider you would like to use.

---

## Supported LLM Models

We support the following LLM models from the above providers. Please feel free to request any specific model you would like to use.

## OpenAI

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| o3 | OpenAI | OpenAI | 200,000 | $2 prompt / $8 completion |
| o4-mini | OpenAI | OpenAI | 200,000 | $1.1 prompt / $4.4 completion |
| o3-mini | OpenAI | OpenAI | 200,000 | $1.1 prompt / $4.4 completion |
| o1 | OpenAI | OpenAI | 200,000 | $15.0 prompt / $60.0 completion |
| o1-preview | OpenAI | OpenAI | 128,000 | $15.0 prompt / $60.0 completion |
| o1-mini | OpenAI | OpenAI | 128,000 | $3.0 prompt / $12.0 completion |
| gpt-4.1 | OpenAI | OpenAI | 1M | $2 prompt / $8.0 completion |
| gpt-4.1-mini | OpenAI | OpenAI | 1M | $0.4 prompt / $1.6 completion |
| gpt-4.1-nano | OpenAI | OpenAI | 1M | $0.1 prompt / $0.4 completion |
| gpt-4o | OpenAI | OpenAI | 128,000 | $2.5 prompt / $10.0 completion |
| chatgpt-4o-latest | OpenAI | OpenAI | 128,000 | $5 prompt / $15.0 completion |
| gpt-4o-2024-08-06 | OpenAI | OpenAI | 128,000 | $2.5 prompt / $10.0 completion |
| gpt-4o-mini | OpenAI | OpenAI | 128,000 | $0.15 prompt / $0.6 completion |
| gpt-4-turbo | OpenAI | OpenAI | 128,000 | $10.0 prompt / $30.0 completion |
| gpt-4-turbo-preview | OpenAI | OpenAI | 128,000 | $10.0 prompt / $30.0 completion |
| gpt-4-0125-preview | OpenAI | OpenAI | 128,000 | $10.0 prompt / $30.0 completion |
| gpt-4-1106-preview | OpenAI | OpenAI | 128,000 | $10.0 prompt / $30.0 completion |
| gpt-4 | OpenAI | OpenAI | 8,192 | $30.0 prompt / $60.0 completion |
| gpt-4-0613 | OpenAI | OpenAI | 8,192 | $30.0 prompt / $60.0 completion |
| gpt-4-32k | OpenAI | OpenAI | 32,768 | $60.0 prompt / $120.0 completion |
| gpt-3.5-turbo-0125 | OpenAI | OpenAI | 16,385 | $0.5 prompt / $1.5 completion |
| gpt-3.5-turbo-1106 | OpenAI | OpenAI | 16,385 | $1.0 prompt / $2.0 completion |
| gpt-3.5-turbo | OpenAI | OpenAI | 4,096 | $1.5 prompt / $2.0 completion |
| gpt-3.5-turbo-16k | OpenAI | OpenAI | 16,385 | $3.0 prompt / $4.0 completion |

\* USD per Million tokens

## Anthropic

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| claude-opus-4-20250514 | Anthropic | Anthropic | 200K | $15 prompt / $75 completion |
| claude-sonnet-4-20250514 | Anthropic | Anthropic | 200K | $3 prompt / $15 completion |
| claude-3.7-sonnet-latest | Anthropic | Anthropic | 200K | $3 prompt / $15 completion |
| claude-3.7-sonnet-20250219 | Anthropic | Anthropic | 200K | $3 prompt / $15 completion |
| claude-3.5-sonnet-latest | Anthropic | Anthropic | 200K | $3 prompt / $15 completion |
| claude-3-5-haiku-20241022 | Anthropic | Anthropic | 200K | $1 prompt / $5 completion |
| claude-3.5-sonnet-20240620 | Anthropic | Anthropic | 200K | $3 prompt / $15 completion |
| claude-3-opus | Anthropic | Anthropic | 200K | $15 prompt / $75 completion |
| claude-3-sonnet | Anthropic | Anthropic | 200K | $3 prompt / $15 completion |
| claude-3-haiku | Anthropic | Anthropic | 200K | $0.25 prompt / $1.25 completion |

\* USD per Million tokens

## Google AI

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| gemini-2.5-pro | Google | Google | 1M | $1.25 prompt / $10 completion |
| gemini-2.5-flash | Google | Google | 1M | $0.3 prompt / $2.5 completion |
| gemini-2.5-flash-lite-preview-06-17 | Google | Google | 1M | $0.1 prompt / $0.4 completion |
| gemini-2.5-pro-preview-06-05 | Google | Google | 1M | $1.25 prompt / $10 completion |
| gemini-2.5-pro-preview-05-06 | Google | Google | 1M | $1.25 prompt / $10 completion |
| gemini-2.5-flash-preview-05-20 | Google | Google | 1M | $0.15 prompt / $3.50 completion |
| gemini-2.5-flash-preview-04-17 | Google | Google | 1M | $0.15 prompt / $3.50 completion |
| gemini-2.5-pro-preview-03-25 | Google | Google | 1M | $1.25 prompt / $10 completion |
| gemini-2.0-flash | Google | Google | 1M | $0.1 prompt / $0.4 completion |
| gemini-2.0-flash-lite | Google | Google | 1M | $0.075 prompt / $0.3 completion |
| gemini-1.5-pro | Google | Google | up to 1M | $7 prompt / $21 completion |
| gemini-1.5-flash | Google | Google | up to 1M | $0.075 prompt / $0.3 completion |
| gemini-1.5-flash-8b | Google | Google | up to 1M | $0.0375 prompt / $0.15 completion |
| gemini-1.0-pro | Google | Google | 30,720 | $0.5 prompt / $1.5 completion |

\* USD per Million tokens

## OpenRouter

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| anthropic/claude-3.7-sonnet | OpenRouter | Anthropic | 200,000 | $3 prompt / $5 completion |
| anthropic/claude-3.7-sonnet:thinking | OpenRouter | Anthropic | 200,000 | $3 prompt / $15 completion |
| mistralai/magistral-medium-2506:thinking | OpenRouter | Mistral | 40K | $2 prompt / $5 completion |
| openai/o1-pro | OpenRouter | OpenAI | 200,000 | $150 prompt / $600 completion |
| anthropic/claude-3.5-sonnet | OpenRouter | Anthropic | 200,000 | $3 prompt / $15 completion |
| xai/grok-3-beta | xAI | xAI | 131,072 | $3 prompt / $15 completion |
| xai/grok-3-mini-beta | xAI | xAI | 131,072 | $0.3 prompt / $0.5 completion |
| cohere/command-a | OpenRouter | Cohere | 256,000 | $2.5 prompt / $10 completion |
| perplexity/sonar-deep-research | OpenRouter | Perplexity | 200,000 | $2 prompt / $8 completion |
| deepseek/deepseek-r1:free | OpenRouter | DeepSeek | 164,000 | $0 prompt / $0 completion |
| deepseek-chat-v3-0324:free | OpenRouter | DeepSeek | 131,000 | $0 prompt / $0 completion |
| deepseek-chat-v3-0324 | OpenRouter | DeepSeek | 131,000 | $0.27 prompt / $1.1 completion |
| google/gemini-2.0-flash-001 | OpenRouter | Google | 1M | $0.1 prompt / $0.4 completion |
| google/gemma-3-27b-it:free | OpenRouter | Google | 96,000 | $0 prompt / $0 completion |
| mistralai/mistral-nemo | OpenRouter | Mistral | 131K | $0.035 prompt / $0.08 completion |

\* USD per Million tokens

## Together

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| Llama-4-Maverick-17B-128E-Instruct-FP8 | Together | Meta | 500,000 | $0.27 prompt / $0.85 completion |
| Llama-4-Scout-17B-16E-Instruct | Together | Meta | 300,000 | $0.18 prompt / $0.59 completion |
| Llama-3.3-70B-Instruct-Turbo | Together | Meta | 131,072 | $0.88 prompt / $0.88 completion |
| deepseek-v3 | Together | Deepseek | 131,072 | $1.25 prompt / $1.25 completion |
| Llama-3.1-405B-Instruct-Turbo | Together | Meta | 4,096 | $5 prompt / $5 completion |
| Qwen2.5-72b | Together | Qwen | 32,768 | $0.9 prompt / $0.9 completion |
| Llama-3.1-70B-Instruct-Turbo | Together | Meta | 8,192 | $0.88 prompt / $0.88 completion |
| Llama-3.1-8B-Instruct-Turbo | Together | Meta | 8,192 | $0.18 prompt / $0.18 completion |
| Llama-3-70b-chat-hf | Together | Meta | 8,192 | $0.9 prompt / $0.9 completion |
| Llama-3-8b-chat-hf | Together | Meta | 8,192 | $0.2 prompt / $0.2 completion |
| Llama-2-13b-chat-hf | Together | Meta | 4,096 | $0.225 prompt / $0.225 completion |
| gemma-2b-it | Together | Google | 8,192 | $0.1 prompt / $0.1 completion |
| 7B-Instruct-v0.1 | Together | Mistral | 4,096 | $0.2 prompt / $0.2 completion |
| 7B-Instruct-v0.2 | Together | Mistral | 32,768 | $0.2 prompt / $0.2 completion |
| Mixtral-8x7B-Instruct-v0.1 | Together | Mistral | 32,768 | $0.6 prompt / $0.6 completion |
| Mixtral-8x22B-Instruct-v0.1 | Together | Mistral | 64,000 | $1.2 prompt / $1.2 completion |
| DBRX-instruct | Together | Databricks | 32,768 | $1.2 prompt / $1.2 completion |

\* USD per Million tokens

## Fireworks AI

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| Llama 4 Maverick Instruct (Basic) | Fireworks AI | Meta | 1M | $0.22 prompt / $0.88 completion |
| Llama 4 Scout Instruct (Basic) | Fireworks AI | Meta | 128,000 | $0.15 prompt / $0.60 completion |
| Llama 3.3 70B Instruct | Fireworks AI | Meta | 131,072 | $0.9 prompt / $0.9 completion |
| deepseek-v3 | Fireworks AI | Deepseek | 131,072 | $0.9 prompt / $0.9 completion |
| Llama-3.2-3b | Fireworks AI | Meta | 131,072 | $0.1 prompt / $0.1 completion |
| Llama-3.2-1b | Fireworks AI | Meta | 131,072 | $0.1 prompt / $0.1 completion |
| Llama-3.1-405b | Fireworks AI | Meta | 131,072 | $3 prompt / $3 completion |
| Qwen2.5-72b | Fireworks AI | Qwen | 32,768 | $0.9 prompt / $0.9 completion |
| Llama-3.1-70b | Fireworks AI | Meta | 131,072 | $0.9 prompt / $0.9 completion |
| Llama-3.1-8b | Fireworks AI | Meta | 131,072 | $0.2 prompt / $0.2 completion |
| yi-large | Fireworks AI | 01.AI | 32,768 | $3 prompt / $3 completion |
| Llama-3-70b | Fireworks AI | Meta | 8,192 | $0.9 prompt / $0.9 completion |

\* USD per Million tokens

## Groq

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| Llama-4-Maverick-17B-128E-Instruct | Groq | Meta | 131,000 | $0.11 prompt / $0.34 completion |
| Llama-4-Scout-17B-16E-Instruct | Groq | Meta | 131,000 | $0.50 prompt / $0.77 completion |
| Llama-3.3-70b-versatile | Groq | Meta | 128,000 | $0.59 prompt / $0.79 completion |
| deepseek-r1-distill-llama-70b | Groq | DeepSeek | 128,000 | $0.75 prompt / $0.99 completion |
| Llama-3.1-70b-versatile | Groq | Meta | 131,072 | $0.59 prompt / $0.79 completion |
| Llama-3.1-8b-instant | Groq | Meta | 131,072 | $0.59 prompt / $0.79 completion |
| Llama-3-70b | Groq | Meta | 8,192 | $0.59 prompt / $0.79 completion |
| Llama-3-8b | Groq | Meta | 8,192 | $0.05 prompt / $0.1 completion |
| Mixtral-8x7B | Groq | Mistral | 32,768 | $0.27 prompt / $0.27 completion |
| gemma2-9b-it | Groq | Google | 8,192 | $0.2 prompt / $0.2 completion |

\* USD per Million tokens

## Perplexity

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| sonar-pro | Perplexity | Perplexity | 200,000 | $3 prompt / $15 completion |
| sonar | Perplexity | Perplexity | 127,072 | $1 prompt / $1 completion |

\* USD per Million tokens. Perplexity charges an additional $5 per request on its online models.

## Mistral AI

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| magistral-medium-2506 | OpenRouter | Mistral | 40K | $2 prompt / $5 completion |
| Mistral Large 2 | Mistral AI | Mistral AI | 128K | $3 prompt / $9 completion |
| Mistral Nemo | Mistral AI | Mistral AI | 128K | $0.3 prompt / $0.3 completion |
| Codestral | Mistral AI | Mistral AI | 32,768 | $1 prompt / $3 completion |

\* USD per Million tokens

## Deepseek

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| deepseek-reasoner | Deepseek | Deepseek | 64K | $0.55 prompt / $2.19 completion |
| deepseek-chat | Deepseek | Deepseek | 64K | $0.14 prompt / $0.28 completion |

\* USD per Million tokens

## Cohere

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| command-r | Cohere | Cohere | 128K | $0.5 prompt / $1.5 completion |
| command-r-plus | Cohere | Cohere | 128K | $3 prompt / $15 completion |

\* USD per Million tokens

## xAI

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| grok-3-beta | xAI | xAI | 131,072 | $3 prompt / $15 completion |
| grok-3-fast-beta | xAI | xAI | 131,072 | $5 prompt / $25 completion |
| grok-3-mini-beta | xAI | xAI | 131,072 | $0.3 prompt / $0.5 completion |
| grok-3-mini-fast-beta | xAI | xAI | 131,072 | $0.6 prompt / $4 completion |
| grok-2-1212 | xAI | xAI | 131,072 | $2 prompt / $10 completion |
| grok-2-vision-1212 | xAI | xAI | 32K | $2 prompt / $10 completion |
| grok-beta | xAI | xAI | 131,072 | $5 prompt / $15 completion |

\* USD per Million tokens

## Azure OpenAI

| Model | Provider | Owner | Context | Cost* |
|---|---|---|---|---|
| o1 | OpenAI | OpenAI | 200,000 | $16.5 prompt / $66.0 completion |
| o3-mini | OpenAI | OpenAI | 200,000 | $1.21 prompt / $4.84 completion |
| o1-preview | OpenAI | OpenAI | 128,000 | $16.5 prompt / $66.0 completion |
| o1-mini | OpenAI | OpenAI | 128,000 | $3.3 prompt / $13.2 completion |
| gpt-4.5-preview | OpenAI | OpenAI | 128,000 | $75 prompt / $150 completion |
| gpt-4o | OpenAI | OpenAI | 128,000 | $2.75 prompt / $11.0 completion |
| gpt-4o-mini | OpenAI | OpenAI | 128,000 | $0.165 prompt / $0.66 completion |

[Learn how to use Azure OpenAI models in Langbase](/integrations/azure-openai)

\* USD per Million tokens

---

## JSON Mode Support

See the [list of models that support JSON mode](/features/json-mode) and how to use it in your Pipe.

Completion and prompt costs are based on the provider's pricing. Langbase does not charge on top of the provider's costs.

---

## Tool Support

The following models support tool calls on Langbase.

### OpenAI

| Model | Parallel Tool Call Support | Tool Choice Support |
|---|---|---|
| o3 | `true` | `true` |
| o4-mini | `true` | `true` |
| gpt-4.1 | `true` | `true` |
| gpt-4.1-mini | `true` | `true` |
| gpt-4.1-nano | `false` | `true` |
| gpt-4.5-preview | `true` | `true` |
| o1 | `true` | `true` |
| o3-mini | `true` | `true` |
| o1-preview | `true` | `true` |
| o1-mini | `true` | `true` |
| gpt-4o | `true` | `true` |
| gpt-4o-2024-08-06 | `true` | `true` |
| gpt-4o-mini | `true` | `true` |
| gpt-4-turbo | `true` | `true` |
| gpt-4-turbo-preview | `true` | `true` |
| gpt-4-0125-preview | `true` | `true` |
| gpt-4-1106-preview | `true` | `true` |
| gpt-4 | `true` | `true` |
| gpt-4-0613 | `true` | `true` |
| gpt-4-32k | `true` | `true` |
| gpt-3.5-turbo-0125 | `true` | `true` |
| gpt-3.5-turbo-1106 | `true` | `true` |
| gpt-3.5-turbo | `true` | `true` |
| gpt-3.5-turbo-16k | `true` | `true` |

### Google

| Model | Parallel Tool Call Support | Tool Choice Support |
|---|---|---|
| gemini-2.5-flash-preview-04-17 | `true` | `true` |
| gemini-2.5-pro-exp-03-25 | `true` | `true` |
| gemini-2.5-pro-preview-03-25 | `true` | `true` |
| gemini-1.5-pro | `true` | `true` |
| gemini-2.0-flash-exp | `true` | `true` |
| gemini-1.5-flash | `true` | `true` |
| gemini-1.5-flash-8b | `true` | `true` |
| gemini-1.0-pro | `false` | `false` |

### Anthropic

| Model | Parallel Tool Call Support | Tool Choice Support |
|---|---|---|
| claude-3.7-sonnet-latest | `true` | `true` |
| claude-3.7-sonnet-20250219 | `true` | `true` |
| claude-3.5-sonnet-latest | `true` | `true` |
| claude-3.5-sonnet-20240620 | `true` | `true` |
| claude-3-opus | `true` | `true` |
| claude-3-sonnet | `true` | `true` |
| claude-3-haiku | `true` | `true` |

### Together AI

| Model | Parallel Tool Call Support | Tool Choice Support |
|---|---|---|
| Llama-3.1-405B-Instruct-Turbo | `false` | `true` |
| Llama-3.1-70B-Instruct-Turbo | `false` | `true` |
| Llama-3.1-8B-Instruct-Turbo | `false` | `true` |
| 7B-Instruct-v0.1 | `false` | `true` |
| Mixtral-8x7B-Instruct-v0.1 | `false` | `true` |

### Deepseek

| Model | Parallel Tool Call Support | Tool Choice Support |
|---|---|---|
| deepseek-chat | `false` | `true` |

---

## Deprecated Models

The following models are deprecated and no longer available for use in pipes. We recommend switching to a supported model.

| Model | Provider | Owner | Deprecated on | Reason |
|---|---|---|---|---|
| gemini-2.5-pro-exp-03-25 | Google | Google | 15-06-2025 | Discontinued by Google |
| llama-3.1-sonar-huge-128k-online | Perplexity | Meta | 21-02-2025 | Discontinued by Perplexity |
| llama-3.1-sonar-large-128k-online | Perplexity | Meta | 21-02-2025 | Discontinued by Perplexity |
| llama-3.1-sonar-small-128k-online | Perplexity | Meta | 21-02-2025 | Discontinued by Perplexity |
| llama-3.1-sonar-large-128k-chat | Perplexity | Meta | 23-01-2025 | Discontinued by Perplexity |
| llama-3.1-sonar-small-128k-chat | Perplexity | Meta | 23-01-2025 | Discontinued by Perplexity |
| gemma-7b-it | Groq | Google | 01-01-2025 | Discontinued by Groq |
| qwen2-72b | Fireworks AI | QwenLM | 13-08-2024 | Discontinued by Fireworks AI |
| Llama-3-70b-chat-hf | Together AI | Meta | 15-09-2024 | Discontinued by Together AI |
| Llama-2-7B-32K-Instruct | Together AI | Meta | 15-09-2024 | Discontinued by Together AI |
| gemma-7b-it | Together AI | Google | 15-09-2024 | Discontinued by Together AI |

---
AI Solutions by ⌘ Langbase
https://langbase.com/docs/solutions/

# AI Solutions by ⌘ Langbase

⌘ Langbase is the composable infrastructure and developer experience for building, collaborating on, and deploying any AI apps or features. Our mission is to make AI accessible to every developer, not just AI/ML experts. We are the only [composable AI infrastructure][composable]. That's all we do.

In software engineering, composition is a powerful concept. It allows for building complex systems from simple, interchangeable parts. Think Legos, Docker containers, React components. Langbase extends this concept to AI infrastructure with our **Composable AI** stack using [Pipes][pipe] and [Memory][memory].

Here are some carefully crafted **Langbase** powered **AI solutions**:

- [Legal](/solutions/legal/)
- [Finance](/solutions/finance/)
- [Education](/solutions/education/)
- [Marketing](/solutions/marketing/)
- [Technology](/solutions/technology/)
- [Healthcare](/solutions/healthcare/)
- [Administration](/solutions/administration/)
- [News and Media](/solutions/news-media/)
- [Customer Support](/solutions/customer-support/)

---

[composable]: /composable-ai
[pipe]: /pipe
[memory]: /memory

Langbase SDK
https://langbase.com/docs/sdk/

# Langbase SDK

Langbase provides a TypeScript AI SDK with a phenomenal developer experience to help developers write less code and move fast. Use any LLM, build complex memory agents, and compose pipe agents into a pipeline.

**The SDK is designed to work with JavaScript, TypeScript, Node.js, Next.js, React, and the like.**

Langbase is an API-first platform delivering exceptional developer experience. Our APIs are simple, intuitive, and designed for seamless integration. With clear documentation, practical code examples, and responsive [community support](https://langbase.com/discord), we help you build quickly and efficiently.

---

### Table of contents

- [Authentication](#authentication)
- [Core functionality](#core-functionality)
- [Next steps](#next-steps)

---

## Authentication

The Langbase SDK uses API keys for authentication. You can create API keys at a user or org account level. Some SDK methods, like running a pipe, also let you specify a pipe-specific API key.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

All API requests should include your API key in an `Authorization` HTTP header as follows:

```bash
Authorization: Bearer LANGBASE_API_KEY
```

With the Langbase SDK, you can set your API key as follows:

```js
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY
});
```

Treat your API keys like passwords: keep them secret and use them only on the server side. Never share your Langbase API key or expose it in client-side code like browsers or apps. For production requests, route them through your own backend server, where you can securely load your API key from an environment variable or key management service.
---

## Core functionality

The Langbase SDK provides the following core functionality:

### Pipe

Use the SDK to manage the pipes in your Langbase account: create, update, list, and run AI pipes.

- [Run pipe](/sdk/pipe/run)
- [Create pipe](/sdk/pipe/create)
- [Update pipe](/sdk/pipe/update)
- [List pipes](/sdk/pipe/list)
- [usePipe()](/sdk/pipe/use-pipe)

### Memory

Use the SDK to programmatically manage memories in your Langbase account. Since documents are stored in memories, you can also manage documents using the SDK.

- [List memory](/sdk/memory/list)
- [Create memory](/sdk/memory/create)
- [Delete memory](/sdk/memory/delete)
- [Retrieve memory](/sdk/memory/retrieve)
- [List documents](/sdk/memory/document-list)
- [Delete document](/sdk/memory/document-delete)
- [Upload document](/sdk/memory/document-upload)
- [Embeddings Retry](/sdk/memory/document-embeddings-retry)

### Workflow

Use the SDK to programmatically manage workflows in your Langbase account.

- [Workflow](/sdk/workflow)

### Agent

Use the SDK to programmatically manage agents in your Langbase account.

- [Agent](/sdk/agent/run)

### Threads

Use the SDK to programmatically manage threads in your Langbase account.

- [Create](/sdk/threads/create)
- [Update](/sdk/threads/update)
- [Get](/sdk/threads/get)
- [Delete](/sdk/threads/delete)
- [Append Messages](/sdk/threads/append-messages)
- [List Messages](/sdk/threads/list-messages)

### Parser

Use the SDK to parse text into structured data.

- [Parser](/sdk/parser)

### Chunker

Use the SDK to chunk text into smaller pieces.

- [Chunker](/sdk/chunker)

### Embed

Use the SDK to embed text into a vector space.

- [Embed](/sdk/embed)

### Tools

Use the SDK to manage tools in your Langbase account.

- [Web Search](/sdk/tools/web-search)
- [Crawl](/sdk/tools/crawl)

Streaming text works differently in Node.js vs the browser. Please check out the different [examples](/sdk/examples), like the Next.js and React examples.

---

## Next steps

Time to build. Check out the quickstart examples or explore the API reference.

---

What is an AI Agent? (Pipe)
https://langbase.com/docs/pipe/

# What is an AI Agent? (Pipe)

AI Agents can understand context and take meaningful actions. They can be used to automate tasks, research and analyze information, or help users with their queries.

Pipe is your custom-built AI agent as an API. It's the easiest way to build, deploy, and scale AI agents without having to manage or update any infrastructure.

---

Pipe lets you build AI agents and RAG apps without thinking about servers, GPUs, RAG, and infra. It is a high-level layer on top of Large Language Models (LLMs) that creates a personalized AI assistant for your queries and prompts. A pipe can leverage any LLM model, tools, and knowledge base with your datasets to assist with your queries. A pipe can connect [any LLM](/supported-models-and-providers/) to any data to build any developer API workflow.

---

### What is a Pipe?

### P → **`Prompt`**

Prompt engineering and orchestration.

### I → **`Instructions`**

Instruction training: few-shot, persona, character, etc.

### P → **`Personalization`**

Knowledge base, variables, and a safety hallucination engine.

### E → **`Engine`**

Experiments, API engine, evals, and enterprise governance.

---

### Next steps

Time to build. Check out the quickstart overview example or explore the API reference.
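To make the "agent as an API" idea concrete, here is a minimal sketch of calling a deployed pipe with the SDK. The pipe name `email-summarizer` is hypothetical; see [Run pipe](/sdk/pipe/run) for the exact method shape.

```ts
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Run a deployed pipe agent by name ('email-summarizer' is a
	// hypothetical pipe; replace it with one from your account).
	const {completion} = await langbase.pipes.run({
		name: 'email-summarizer',
		messages: [{role: 'user', content: 'Summarize this email: ...'}],
		stream: false,
	});

	console.log(completion);
}

main();
```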
---

Parser
https://langbase.com/docs/parser/

# Parser

Parser, an AI Primitive by Langbase, allows you to extract text content from various document formats. This is particularly useful when you need to process documents before using them in your AI applications. Parser can handle a variety of formats, including PDFs, CSVs, and more. By converting these documents into plain text, you can easily analyze, search, or manipulate the content as needed.

---

## Quickstart: Extracting Text from Documents

---

## Let's get started

In this guide, we'll use the Langbase SDK to interact with the Parser API:

---

## Step #1: Generate Langbase API key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If not, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## Step #2: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir document-parser && cd document-parser
```

### Initialize the project

Create a new Node.js project.

```bash {{ title: 'npm' }}
npm init -y
```

```bash {{ title: 'pnpm' }}
pnpm init
```

```bash {{ title: 'yarn' }}
yarn init -y
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to work with Parser and `dotenv` to manage environment variables.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

### Create an env file

Create a `.env` file in the root of your project and add your Langbase API key:

```bash {{ title: '.env' }}
LANGBASE_API_KEY=your_api_key_here
```

---

## Step #3: Parse a PDF document

Now let's create a file named `parse-pdf.ts` to demonstrate how to parse the document. You can download a sample PDF document below and move it to your project directory.

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';
import { readFile } from 'fs/promises';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	try {
		// Read the PDF document
		const buffer = await readFile('composable-ai.pdf');

		// Parse the PDF document
		const result = await langbase.parser({
			document: buffer,
			documentName: 'composable-ai.pdf',
			contentType: 'application/pdf',
		});

		console.log('Parsed document name:', result.documentName);
		console.log('Parsed document content:', result.content);
	} catch (error) {
		console.error('Error parsing PDF:', error);
	}
}

main();
```

Run the script to parse your document:

```bash {{ title: 'npm' }}
npx tsx parse-pdf.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx parse-pdf.ts
```

You should see output similar to this:

```
Parsed document name: composable-ai.pdf
Parsed document content: Composable AI In software engineering, composition is a powerful concept. It allows for building complex systems from simple, interchangeable parts. Think Legos, Docker containers, React components.
Langbase extends this concept to AI infrastructure with our Composable AI stack using Pipes and Memory. Composable and personalized AI : With Langbase, you can compose multiple models together into pipelines. It's easier to think about, easier to develop for, and each pipe lets you choose which model to use for each task. You can see cost of every step. And allow your customers to hyper-personalize. Effortlessly zero-config AI infra : Maybe you want to use a smaller, domain-specific model for one task, and a larger general-purpose model for another task. Langbase makes it easy to use the right primitives and tools for each part of the job and provides developers with a zero-config composable AI infrastructure. That's a nice way of saying, you get a unicorn-scale API in minutes, not months . The most common problem I hear about in Gen AI space is that my AI agents are too complex and I can't scale them, too much AI talking to AI. I don't have control, I don't understand the cost, and the impact of this change vs that. Time from new model to prod is too long. Feels static, my customers can't personalize The Developer Friendly Future of AI Infrastructure Why Composable AI? Chai.new Launching soon in limited beta Join the waitlist Langbase it. Langbase fixes all this. AA I have built an AI email agent that can read my emails, understand the sentiment, summarize, and respond to them. Let's break it down to how it works, hint several pipes working together to make smart personalized decisions. 1. I created a pipe: email-sentiment this one reads my emails to understand the sentiment 2. email-summarizer pipe it summarizes my emails so I can quickly understand Example: Composable AI Email Agent Langbase Email Agent reference architecture them 3. email-decision-maker pipe should I respond? is it urgent? is it a newsletter? 4. If email-decision-maker pipe says yes , then I need to respond. This invokes the final pipe 5. email-writer pipe writes a draft response to my emails with one of the eight formats I have Ah, the power of composition. I can swap out any of these pipes with a new one. Flexibility : Swap components without rewriting everything Reusability : Build complex systems from simple, tested parts Scalability : Optimize at the component level for better performance Observability : Monitor and debug each step of your AI pipeline Control flow Maybe I want to use a different sentiment analysis model Or maybe I want to use a different summarizer when I'm on vacation I can chose a different LLM (small or large) based on the task BTW I definitely use a different decision-maker pipe on a busy day. Extensibility Add more when needed : I can also add more pipes to this pipeline. Maybe I want to add a pipe that checks my calendar or the weather before I respond to an email. You get the idea. Always bet on composition. Eight Formats to write emails : And I have several formats. Because Pipes are composable, I have eight different versions of email-writer pipe. I have a pipe email-pick-writer that picks the correct pipe to draft a response with. Why? I talk to my friends differently than my investors, reports, managers, vendors you name it. Why Composable AI is powerful? Long-term memory and context awareness By the way, I have all my emails in an emails-store memory, which any of these pipes can refer to if needed. That's managed semantic RAG over all the emails I have ever received. And yes, my emails-smart-spam memory knows all the pesky smart spam emails that I don't want to see in my inbox. 
Cost & Observability Because each intent and action is mapped out Pipe which is an excellent primitive for using LLMs, I can see everything related to cost, usage, and effectiveness of each pipe. I can see how many emails were processed, how many were responded to, how many were marked as spam, etc. I can switch LLMs for any of these actions, fork a pipe, and see how it performs. I can version my pipes and see how the new version performs against the old one. And we're just getting started Why Developers Love It Modular : Build, test, and deploy pipes x memorysets independently Extensible : API-first no dependency on a single language Version Control Friendly : Track changes at the pipe level Cost-Effective : Optimize resource usage for each AI task Stakeholder Friendly : Collaborate with your team on each pipe and memory. All your R&D team, engineering, product, GTM (marketing, sales), and even stakeholders can collaborate on the same pipe. It's like a Google Doc x GitHub for AI. That's what makes it so powerful. Each pipe and memory are like a docker container. You can have any number of pipes and memorysets. Can't wait to share more exciting examples of composable AI. We're cookin!! We'll share more on this soon. Follow us on Twitter and LinkedIn for updates.
```

---

## Next Steps

- Try parsing different file formats using the Langbase Parse API
- Integrate the parsed content with other Langbase features
- Build something cool with Langbase [SDK](/sdk) and [APIs](/api-reference)
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support

---

MCP Servers
https://langbase.com/docs/mcp-servers/

# MCP Servers

Model Context Protocol (MCP) servers provide a powerful way to extend AI capabilities through standardized interfaces. These servers enable seamless integration of external tools and data sources with AI models, creating more intelligent and context-aware applications. The MCP architecture is designed to work with various AI platforms, development environments, and programming languages.

---

### Table of contents

- [What is MCP?](#what-is-mcp)
- [Langbase Remote MCP Server](#langbase-remote-mcp-server)
- [Langbase Docs MCP Server](#langbase-docs-mcp-server)
- [Next steps](#next-steps)

---

## What is MCP?

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. It provides a standardized way to connect AI models to different data sources and tools. You can deploy servers locally or remotely depending on your use case.

Here are some useful resources to get started with MCP development and implementation:

- [Official MCP Documentation](https://modelcontextprotocol.io/introduction) - The official documentation of the Model Context Protocol by Anthropic.
- [MCP GitHub Repository](https://github.com/modelcontextprotocol) - The official GitHub repository with SDKs, server implementations, and more.
- [Cloudflare Remote MCP Server Guide](https://developers.cloudflare.com/agents/guides/remote-mcp-server/) - Step-by-step guide for deploying a remote MCP server.

---

## Langbase Remote MCP Server

Langbase provides a fully compliant remote MCP server that enables direct interaction with your Langbase pipes, memories, and agents.
Create new pipe agents, upload documents to memory, run existing pipes, and manage your entire Langbase workspace seamlessly from your IDE. This server transforms your development environment into a powerful AI workspace where you can build, test, and deploy AI agents without leaving your code editor. ## Langbase Docs MCP Server The Langbase Docs MCP server allows IDEs like Cursor, Windsurf, etc., to access Langbase documentation. It provides LLMs with up-to-date context about Langbase APIs, SDK, and other resources present in the docs, enabling LLMs to deliver accurate and relevant answers to your Langbase-related queries. Perfect for getting instant help with the Langbase SDK, API, and troubleshooting directly in your IDE. --- ## Next Steps - Build something cool with [Langbase Remote MCP Server](/mcp-servers/remote-mcp-server) from your IDEs. - Integrate [Langbase Docs MCP Server](/mcp-servers/docs-mcp-server) inside your IDEs and build with our [SDK](/sdk). - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support. --- Memory Agents https://langbase.com/docs/memory/ import { generateMetadata } from '@/lib/generate-metadata'; # Memory Agents Memory agents are AI agents that have human-like long-term memory. You can train AI agents with your data and knowledge base without having to manage vector storage, servers, or infrastructure. Langbase memory agents represent the next frontier in semantic retrieval-augmented generation (RAG) as a serverless and infinitely scalable API designed for developers. 30-50x less expensive than the competition, with industry-leading accuracy in advanced agentic routing and intelligent reranking. **Large Language Models (LLMs) have a universal constraint.** They don't know anything about your private data. They are trained on public data, and they hallucinate when you ask them questions they don't have the answers to. **Memory agents solve this problem** by dynamically attaching private data to any LLM at scale, with industry-leading accuracy in advanced agentic routing and intelligent reranking. **Every Langbase org/user can have millions of personalized RAG knowledge bases** tailored for individual users or specific use cases. Traditional vector storage architecture makes this impossible. So, memory agents are a managed context search API for developers: a long-term memory solution that can acquire, process, retain, and later retrieve information, combining vector storage, RAG (Retrieval-Augmented Generation), and internet access to help you build powerful AI features and products. --- ### Core functionality - **Upload**: Upload documents, files, and web content to context - **Process**: Automatically extract, embed, and create a semantic index - **Query**: Recall and retrieve relevant context using natural language queries - **Accuracy**: Near-zero hallucinations with accurate context-aware information --- ### Key features - **Semantic understanding**: Go beyond keyword matching with context-aware search - **Vector storage**: Efficient hybrid similarity search for large-scale data - **Semantic RAG**: Enhance LLM outputs with retrieved information from Memory - **Internet access**: Augment your private data with up-to-date web content All Large Language Models (LLMs) share one limitation: **hallucination**. LLMs don't know anything about your private data.
They are trained on public data, and they hallucinate when you ask them questions they don't know the answers to. This limitation makes it difficult for LLMs to provide accurate responses to your queries. Langbase long-term Memory solves this problem by allowing you to attach your private data to any LLM. --- In a Retrieval Augmented Generation (RAG) system, Memory is used with [Pipe agents](/pipe) to retrieve relevant data for queries. The process involves: - Creating query embeddings. - Retrieving matching data from Memory. - Augmenting the query with this data (typically 3-20 chunks). - Using it to generate accurate, context-aware responses. This integration ensures precise answers and enables use cases like document summarization, question-answering, and more. ---
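To make the retrieve, augment, generate loop concrete, here is a minimal sketch using the Langbase SDK. It assumes a memory named `support-docs` already exists with documents in it, and that `LANGBASE_API_KEY` and `LLM_API_KEY` are set in your `.env`; treat the exact `memories.retrieve` options and response shape as illustrative rather than exhaustive.

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function main() {
	const query = 'How do I rotate my API keys?';

	// 1. Retrieve query-relevant chunks from Memory via similarity search.
	const chunks = await langbase.memories.retrieve({
		query,
		memory: [{ name: 'support-docs' }], // hypothetical memory name
		topK: 5, // number of chunks to retrieve
	});

	// 2. Augment the query with the retrieved context.
	const context = chunks.map(chunk => chunk.text).join('\n\n');

	// 3. Generate a context-aware response with any LLM.
	const response = await langbase.agent.run({
		model: 'openai:gpt-4.1',
		apiKey: process.env.LLM_API_KEY!,
		instructions: `Answer using only the provided context.\n\nContext:\n${context}`,
		input: query,
		stream: false,
	});

	console.log('Answer:', response.output);
}

main();
```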

## Semantic Retrieval Augmented Generation (sRAG)

In a semantic RAG system, when an LLM is queried, it is provided with additional information relevant to the query from the Memory. This extra information helps the LLM to provide more accurate and relevant responses. Below is the list of steps performed in a RAG system: 1. **Query**: User queries the LLM through Pipe. Embeddings are generated for the query. 2. **Retrieval**: Pipe retrieves query-relevant information from the Memory through similarity search. 3. **Augmentation**: Retrieved information is augmented with the query. 4. **Generation**: The augmented information is fed to the LLM to generate a response. --- ### Next steps Time to build. Check out the [quickstart](/memory/quickstart) example or explore the [API reference](/api-reference/memory). ---
Embed https://langbase.com/docs/embed/ import { generateMetadata } from '@/lib/generate-metadata'; # Embed Embed, an AI Primitive by Langbase, allows you to convert text into vector embeddings. This is particularly useful for semantic search, text similarity comparisons, and other NLP tasks. Embedding text into vectors enables you to perform complex queries and analyses that go beyond simple keyword matching. It allows you to capture the semantic meaning of the text, making it easier to find relevant information based on context rather than just keywords. --- ## Quickstart: Converting Text to Vector Embeddings --- ## Let's get started In this guide, we'll use the Langbase SDK to interact with the Embed API: --- ## Step #1: Generate Langbase API key Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If not, please check the instructions below. You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. Please add the [LLM API keys](/features/keysets) for the embedding models you want to use in your API key settings. --- ## Step #2: Setup your project Create a new directory for your project and navigate to it. ```bash mkdir text-embedder && cd text-embedder ``` ### Initialize the project Create a new Node.js project. ```bash {{ title: 'npm' }} npm init -y ``` ```bash {{ title: 'pnpm' }} pnpm init ``` ```bash {{ title: 'yarn' }} yarn init -y ``` ### Install dependencies You will use the [Langbase SDK](/sdk) to work with Embed and `dotenv` to manage environment variables. 
```bash {{ title: 'npm' }} npm i langbase dotenv ``` ```bash {{ title: 'pnpm' }} pnpm add langbase dotenv ``` ```bash {{ title: 'yarn' }} yarn add langbase dotenv ``` ### Create an env file Create a `.env` file in the root of your project and add your Langbase API key: ```bash {{ title: '.env' }} LANGBASE_API_KEY=your_api_key_here ``` ## Step #3: Create an embedding generator Let's create a file named `generate-embeddings.ts` in your project directory that will demonstrate how to generate embeddings for text chunks: ```ts {{ title: 'generate-embeddings.ts' }} import 'dotenv/config'; import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { // Define some text chunks to embed const textChunks = [ "Artificial intelligence is transforming how we interact with technology", "Machine learning algorithms can identify patterns in large datasets", "Natural language processing helps computers understand human language", "Vector embeddings represent text as points in a high-dimensional space" ]; try { // Generate embeddings const embeddings = await langbase.embed({ chunks: textChunks, // Optional: specify the embedding model // embeddingModel: "openai:text-embedding-3-large" }); console.log("Number of embeddings generated:", embeddings.length); console.log("First embedding (showing first 5 dimensions):", embeddings[0].slice(0, 5)); console.log("Embedding dimensions:", embeddings[0].length); // Log the full first embedding vector console.log("\nComplete first embedding vector:"); console.log(embeddings[0]); } catch (error) { console.error("Error generating embeddings:", error); } } main(); ``` ## Step #4: Run the script Run the script to generate embeddings for your text chunks: ```bash {{ title: 'npm' }} npx tsx generate-embeddings.ts ``` ```bash {{ title: 'pnpm' }} pnpm dlx tsx generate-embeddings.ts ``` You should see output showing the number of embeddings generated, a sample of the first embedding vector, and the full details of the first embedding vector: ``` Number of embeddings generated: 4 First embedding (showing first 5 dimensions): [-0.023, 0.128, -0.194, 0.067, -0.022] Embedding dimensions: 1536 Complete first embedding vector: [-0.023, 0.128, -0.194, 0.067, -0.022, ... ] ``` ## Next Steps - Build a semantic search system using your embeddings - Combine with other Langbase primitives like Chunk to process documents before embedding - Create a RAG (Retrieval-Augmented Generation) system using your embedded documents - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support --- Composable AI https://langbase.com/docs/composable-ai/ import { generateMetadata } from '@/lib/generate-metadata'; # Composable AI ## The Developer Friendly Future of AI Infrastructure In software engineering, composition is a powerful concept. It allows for building complex systems from simple, interchangeable parts. Think Legos, Docker containers, React components. Langbase extends this concept to AI infrastructure with our **Composable AI** stack using [Pipes][pipe] and [Memory][memory]. --- ## Why Composable AI? **Composable and personalized AI**: With Langbase, you can compose multiple models together into pipelines. It's easier to think about, easier to develop for, and each pipe lets you choose which model to use for each task. You can see the cost of every step. And allow your customers to hyper-personalize.
**Effortlessly zero-config AI infra**: Maybe you want to use a smaller, domain-specific model for one task, and a larger general-purpose model for another task. Langbase makes it easy to use the right primitives and tools for each part of the job and provides developers with a zero-config composable AI infrastructure. That's a nice way of saying, *you get a unicorn-scale API in minutes, not months*. > **The most common problem** I hear about in Gen AI space is that my AI agents are too complex and I can't scale them, too much AI talking to AI. I don't have control, I don't understand the cost, and the impact of this change vs that. Time from new model to prod is too long. Feels static, my customers can't personalize it. ⌘ Langbase fixes all this. — [AA](https://www.linkedin.com/in/MrAhmadAwais/) --- ## Interactive Example: Composable AI Email Agent But how does Composable AI work? Here's an interactive example of a composable AI Email Agent: Classifies, summarizes, responds. Click to send a spam or valid email and check how composable it is: Swap any pipes, any LLM, hyper-personalize (you or your users), observe costs. Everything is composable. --- ## Example: Composable AI Email Agent I have built an AI email agent that can read my emails, understand the sentiment, summarize, and respond to them. Let's break down how it works (hint: several pipes working together to make smart, personalized decisions). 1. I created a pipe: `email-sentiment` — this one reads my emails to understand the sentiment 2. `email-summarizer` pipe — it summarizes my emails so I can quickly understand them 3. `email-decision-maker` pipe — should I respond? is it urgent? is it a newsletter? 4. If `email-decision-maker` pipe says *yes*, then I need to respond. This invokes the final pipe 5. `email-writer` pipe — writes a draft response to my emails with one of the eight formats I have ## Why is Composable AI powerful? Ah, the power of composition. I can swap out any of these pipes with a new one. - **Flexibility**: Swap components without rewriting everything - **Reusability**: Build complex systems from simple, tested parts - **Scalability**: Optimize at the component level for better performance - **Observability**: Monitor and debug each step of your AI pipeline ### Control flow - Maybe I want to use a different sentiment analysis model - Or maybe I want to use a different summarizer when I'm on vacation - I can choose a different LLM (small or large) based on the task - BTW I definitely use a different `decision-maker` pipe on a busy day. ### Extensibility - **Add more when needed**: I can also add more pipes to this pipeline. Maybe I want to add a pipe that checks my calendar or the weather before I respond to an email. You get the idea. Always bet on composition. - **Eight Formats to write emails**: And I have several formats. Because Pipes are composable, I have eight different versions of `email-writer` pipe. I have a pipe `email-pick-writer` that picks the correct pipe to draft a response with. Why? I talk to my friends differently than my investors, reports, managers, vendors — you name it. ### Long-term memory and context awareness - By the way, I have all my emails in an `emails-store` memory, which any of these pipes can refer to if needed. That's managed [semantic RAG][memory] over all the emails I have ever received. - And yes, my `emails-smart-spam` memory knows all the pesky smart spam emails that I don't want to see in my inbox.
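To make the composition concrete, here is a rough sketch of how these pipes could be chained with the Langbase SDK. The pipe names come from the example above; the `pipes.run` response shape (a `completion` string when `stream` is off) follows the SDK, but treat the wiring itself as illustrative rather than the exact production agent.

```ts {{ title: 'TypeScript' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

// Helper: run one pipe with a single user message.
async function runPipe(name: string, content: string) {
	const { completion } = await langbase.pipes.run({
		name,
		stream: false,
		messages: [{ role: 'user', content }],
	});
	return completion;
}

// Each step is its own pipe, so any of them can be swapped independently.
export async function handleEmail(email: string) {
	const sentiment = await runPipe('email-sentiment', email);
	const summary = await runPipe('email-summarizer', email);
	const decision = await runPipe(
		'email-decision-maker',
		`Sentiment: ${sentiment}\nSummary: ${summary}`
	);

	// Only draft a reply when the decision-maker pipe says so.
	if (decision.toLowerCase().includes('yes')) {
		return runPipe('email-writer', `Email: ${email}\nSummary: ${summary}`);
	}
	return 'No response needed.';
}
```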
### Cost & Observability - Because each intent and action is mapped to a Pipe, which is an excellent primitive for using LLMs, I can see everything related to cost, usage, and effectiveness of each pipe. I can see how many emails were processed, how many were responded to, how many were marked as spam, etc. - I can switch LLMs for any of these actions, [fork a pipe][fork], and see how it performs. I can version my pipes and see how the new version performs against the old one. - And we're just getting started … ### Why Developers Love It - **Modular**: Build, test, and deploy pipes x memorysets independently - **Extensible**: API-first, no dependency on a single language - **Version Control Friendly**: Track changes at the pipe level - **Cost-Effective**: Optimize resource usage for each AI task - **Stakeholder Friendly**: Collaborate with your team on each pipe and memory. All your R&D team, engineering, product, GTM (marketing, sales), and even stakeholders can collaborate on the same pipe. It's like a Google Doc x GitHub for AI. That's what makes it so powerful. --- Each pipe and memory is like a Docker container. You can have any number of pipes and memorysets. Can't wait to share more exciting examples of composable AI. We're cookin!! We'll share more on this soon. Follow us on [Twitter][x] and [LinkedIn][li] for updates. [pipe]: /pipe/ [memory]: /memory [signup]: https://langbase.fyi/awesome [x]: https://twitter.com/LangbaseInc [li]: https://www.linkedin.com/company/langbase/ [email]: mailto:support@langbase.com?subject=Pipe-Quickstart&body=Ref:%20https://langbase.com/docs/pipe/quickstart [fork]: https://langbase.com/docs/features/fork --- Chai – Vibe Code AI Agents https://langbase.com/docs/chai/ import { generateMetadata } from '@/lib/generate-metadata'; # Chai – Vibe Code AI Agents **Chai – Computer Human AI** by Langbase lets you vibe code AI agents. Like an on-demand AI engineer, it turns prompts into **production-ready** agents. Prompt your AI agent idea, and Chai builds a fully functional agent — complete with **API** and the **Agent App** that are deployed on Langbase, the most powerful AI serverless platform. Key features of Chai include: - **Agent IDE**: A powerful code editor for editing, debugging, and observing the agent - **Agent App**: Every agent has a production-ready, shareable app - **Agent API**: Ready-to-use API for your agent, with code snippets - **Deployments**: Scalable, production-ready deployments - **Prompt Modes**: Agent and App modes for specific updates - **Version Control**: Track changes and revert to previous versions - **Fork Agents**: Copy other agents and make them your own - **Share**: Live deployed URLs to share your agents with the world - **Flow**: Visualized flows for understanding complex agent logic - **Memory Agents**: Ready-to-use RAG pipeline - **Agent Readme**: Automatically generated documentation for your agent --- ## Quickstart: Vibe code and deploy AI agents with Chai In this guide, we will use [Chai.new](https://Chai.new) to build an AI Support Agent that will answer user queries related to company documentation. --- ## Prerequisites To follow this guide, you will need a **Chai/Langbase Account**. Sign up at [Chai.new](https://chai.new) if you haven't already. --- ## Step 1: Create Your First Agent with Chai You can prompt Chai for vibe coding AI agents. Just describe what you want to create in the prompt box. The more specific you are, the better the results.
Enter an initial prompt for your agent idea, and Chai will continue from there. You can keep refining and adjusting your agent as you go. Let's use the following prompt: > Build an AI Support agent that uses my docs as memory for autonomous AutoRAG ## Step 2: Vibe code and refine your agent When you enter a prompt, Chai begins the agent creation process. It lays out the foundational structure of your agent and starts generating the necessary code to bring it to life. This includes: - `agent.ts` – The main logic of your agent and its workflow - `app` – Agent app directory, which contains the app and frontend code (React components) for your agent Chai generates the agent code in real time in the Agent IDE, where all code generation and editing takes place. You can toggle between files, edit them manually, or prompt Chai to make changes. ## Step 3: Add files to agent memory Chai intelligently detects when an agent requires access to private or extended data (RAG). In such cases, it automatically creates [memory agents](/memory) — specialized agents equipped with human-like long-term memory. In this case, Chai has created a **Support Docs** memory for us. It will store the company documentation and provide it to the Support Agent when needed. ### Step 3.1: Download sample file For this guide, you can download and use this example documentation file. Next, open the memories tab and click on the **Support Docs** memory. ### Step 3.2: Upload document to memory Upload the downloaded documentation file to the memory agent. You can add any documents that your agent needs to reference when answering user queries. Once uploaded, the documents are parsed, chunked, and embedded, making them searchable and retrievable by the agent. ## Step 4: Deploy the agent When you're ready, click **Deploy** in the top-right corner. If your agent uses specific LLMs or tools, you may need to add API keys in the **Environment Variables** section. If you are a Langbase user and have LLM keys saved in your [profile keyset](https://langbase.com/settings/llm-keys), they'll be automatically imported here. You can set environment variables per agent to securely store sensitive info like API keys, keeping them out of your code. You can see **Logs** for each deployment to help you track progress and troubleshoot any issues. Once deployment is complete, your agent will be ready to use. Once deployed, you will have access to: - **Agent App** – A prod-ready app to interact with and share the agent - **Agent API** – Ready-to-use scalable serverless endpoint for your agent - **Agent Flow** – A visual, diagrammatic representation of the agent's logic, to understand how it works You can also edit the agent's code, or download it if you prefer to self-host it. ## Step 5: Use the agent through the Agent App Alongside the agent, Chai automatically generates a full-fledged application for the agent. We call them Agent Apps, and they are: - Production ready - Auto-update when the agent changes - Fully hosted, instantly shareable - Mobile & desktop ready - Customizable using the **App Prompt Mode** Navigate to the **App** tab, where you can test and use your agent through the UI. You can test the agent with a prompt like ***"What is prompt chaining architecture?"*** and it should respond with an answer based on the documentation you uploaded.
For more observability, the agent app has a **Console** for debugging and observing the agent's behavior. It logs every API call and response made by the agent. ## Step 6: Use the agent through the Agent API Now that you have deployed your AI support agent on Chai, you can use it in your apps, websites, or literally anywhere you want. 1. Go to the **API tab**. 2. Retrieve your **API base URL** and **API key**. 3. Make sure to never use your API key in client-side code. Always use it in server-side code. Your API key is like a password, keep it safe. If you think it's compromised, you can always regenerate a new one. You will also find code snippets there for various languages to help you get started quickly. Here's an example snippet of an agent API call in Node.js: ```js {{title: 'Calling the Agent API in Node.js'}} async function main() { const api = `https://api.langbase.com/devlangbase/doc-rag-support-agent-2dfd`; const response = await fetch(api, { method: 'POST', headers: { 'Authorization': `Bearer YOUR_LANGBASE_API_KEY`, 'Content-Type': 'application/json' }, body: JSON.stringify({"input":""}) }); if (!response.ok) { throw new Error(`Error: ${response.statusText}`); } const agentResponse = await response.json(); console.log('Agent response:', agentResponse); } main(); ``` ## Step 7: Bonus – Visualize Your Agent's Logic with Agent Flow But wait, there's more. Chai automatically generates a visual **Agent Flow** to help you understand how your agent works. Keep an eye on your agent's Flow to ensure it behaves as intended. Agents can quickly become complex, with multiple decision paths, tools, and branching conditions. The Agent Flow provides a clear view of the agent's logic, including its decision paths, tools used, and branching conditions. --- ## What Will You Build? Chai gives you the tools to spin up agents that actually do things — no complex setup, no boilerplate. Here are a few ideas to get your wheels turning: * **AI Onboarding Agent** – Help new hires ramp up without bugging your team. * **Research Assistant** – A lightweight Perplexity-style bot tuned to your workflow. * **Meeting Summary Agent** – Drop in a transcript, get back action items and TL;DRs. * **Social Media Agent** – Generate posts that sound like you (on your best day). * **AI Email Agent** – Triage and respond to emails so you don't have to. * **Recruitment Agent** – Parse resumes, write job descriptions, compare candidates. ## Next Steps You're all set up. Here's what to try next: * Build an agent that solves a real problem—and share it with the world. * Explore what you can do with [Langbase APIs](/api-reference) and [SDK](/sdk). * Drop by our [Discord](https://langbase.com/discord) to get help, give feedback, or show off what you've made. --- Examples https://langbase.com/docs/examples/ import { ExampleAccordion } from '@/components/mdx/example-card'; import { generateMetadata } from '@/lib/generate-metadata'; # Examples Here’s a list of examples to help you create pipe agents, memory agents, manage threads, and more. --- Chunker https://langbase.com/docs/chunker/ import { generateMetadata } from '@/lib/generate-metadata'; # Chunker Chunker, an AI Primitive by Langbase, allows you to split text into smaller, manageable pieces. This is especially useful for RAG pipelines or when you need to work with only specific sections of a document. It is particularly handy when building RAG for AI agents.
Chunking text can help improve the performance of your AI applications by allowing you to focus on relevant sections of text, making it easier to analyze and process information. This is especially beneficial for large documents where you may not need to use the entire content. --- ## Quickstart: Splitting Text into Chunks --- ## Let's get started In this guide, we'll use the Langbase SDK to interact with the Chunk API: --- ## Step #1: Generate Langbase API key Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If not, please check the instructions below. You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Step #2: Setup your project Create a new directory for your project and navigate to it. ```bash mkdir document-chunker && cd document-chunker ``` ### Initialize the project Create a new Node.js project. ```bash {{ title: 'npm' }} npm init -y ``` ```bash {{ title: 'pnpm' }} pnpm init ``` ```bash {{ title: 'yarn' }} yarn init -y ``` ### Install dependencies You will use the [Langbase SDK](/sdk) to work with Chunk and `dotenv` to manage environment variables. ```bash {{ title: 'npm' }} npm i langbase dotenv ``` ```bash {{ title: 'pnpm' }} pnpm add langbase dotenv ``` ```bash {{ title: 'yarn' }} yarn add langbase dotenv ``` ### Create an env file Create a `.env` file in the root of your project and add your Langbase API key: ```bash {{ title: '.env' }} LANGBASE_API_KEY=your_api_key_here ``` --- ## Step #3: Chunk the text Now let's create a simple script in a file named `chunk-text.ts` to demonstrate how to split text content into chunks: ```ts {{ title: 'TypeScript' }} import 'dotenv/config'; import { Langbase } from 'langbase'; // Initialize the Langbase client const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main(content: string) { // Chunk the document const chunks = await langbase.chunker({ content, chunkMaxLength: 1024, chunkOverlap: 256 }); console.log(`Total chunks created: ${chunks.length}`); console.log('Chunks:', chunks); } // Document content to chunk const content = `Composable AI The Developer Friendly Future of AI Infrastructure In software engineering, composition is a powerful concept. It allows for building complex systems from simple, interchangeable parts. Think Legos, Docker containers, React components. Langbase extends this concept to AI infrastructure with our Composable AI stack using Pipes and Memory. Why Composable AI? Composable and personalized AI: With Langbase, you can compose multiple models together into pipelines. It's easier to think about, easier to develop for, and each pipe lets you choose which model to use for each task. You can see cost of every step. And allow your customers to hyper-personalize. Effortlessly zero-config AI infra: Maybe you want to use a smaller, domain-specific model for one task, and a larger general-purpose model for another task. Langbase makes it easy to use the right primitives and tools for each part of the job and provides developers with a zero-config composable AI infrastructure. That's a nice way of saying, you get a unicorn-scale API in minutes, not months.
The most common problem I hear about in Gen AI space is that my AI agents are too complex and I can't scale them, too much AI talking to AI. I don't have control, I don't understand the cost, and the impact of this change vs that. Time from new model to prod is too long. Feels static, my customers can't personalize it. ⌘ Langbase fixes all this. — AA Interactive Example: Composable AI Email Agent But how does Composable AI work? Here's an interactive example of a composable AI Email Agent: Classifies, summarizes, responds. Click to send a spam or valid email and check how composable it is: Swap any pipes, any LLM, hyper-personalize (you or your users), observe costs. Everything is composable. I'm stuck and frustrated because the billing API isn't working and the API documentation is outdated. Congratulations! You have been selected as the winner of a $100 million lottery! Send a demo request to understand composable AIarrow Example: Composable AI Email Agent ⌘ Langbase Email Agent ⌘ Langbase Email Agent reference architecture I have built an AI email agent that can read my emails, understand the sentiment, summarize, and respond to them. Let's break it down to how it works, hint several pipes working together to make smart personalized decisions. I created a pipe: email-sentiment — this one reads my emails to understand the sentiment email-summarizer pipe — it summarizes my emails so I can quickly understand them email-decision-maker pipe — should I respond? is it urgent? is it a newsletter? If email-decision-maker pipe says yes, then I need to respond. This invokes the final pipe email-writer pipe — writes a draft response to my emails with one of the eight formats I have Why Composable AI is powerful? Ah, the power of composition. I can swap out any of these pipes with a new one. Flexibility: Swap components without rewriting everything Reusability: Build complex systems from simple, tested parts Scalability: Optimize at the component level for better performance Observability: Monitor and debug each step of your AI pipeline Control flow Maybe I want to use a different sentiment analysis model Or maybe I want to use a different summarizer when I'm on vacation I can chose a different LLM (small or large) based on the task BTW I definitely use a different decision-maker pipe on a busy day. Extensibility Add more when needed: I can also add more pipes to this pipeline. Maybe I want to add a pipe that checks my calendar or the weather before I respond to an email. You get the idea. Always bet on composition. Eight Formats to write emails: And I have several formats. Because Pipes are composable, I have eight different versions of email-writer pipe. I have a pipe email-pick-writer that picks the correct pipe to draft a response with. Why? I talk to my friends differently than my investors, reports, managers, vendors — you name it. Long-term memory and context awareness By the way, I have all my emails in an emails-store memory, which any of these pipes can refer to if needed. That's managed semantic RAG over all the emails I have ever received. And yes, my emails-smart-spam memory knows all the pesky smart spam emails that I don't want to see in my inbox. Cost & Observability Because each intent and action is mapped out Pipe — which is an excellent primitive for using LLMs, I can see everything related to cost, usage, and effectiveness of each pipe. I can see how many emails were processed, how many were responded to, how many were marked as spam, etc. 
I can switch LLMs for any of these actions, fork a pipe, and see how it performs. I can version my pipes and see how the new version performs against the old one. And we're just getting started … Why Developers Love It Modular: Build, test, and deploy pipes x memorysets independently Extensible: API-first no dependency on a single language Version Control Friendly: Track changes at the pipe level Cost-Effective: Optimize resource usage for each AI task Stakeholder Friendly: Collaborate with your team on each pipe and memory. All your R&D team, engineering, product, GTM (marketing, sales), and even stakeholders can collaborate on the same pipe. It's like a Google Doc x GitHub for AI. That's what makes it so powerful. Each pipe and memory are like a docker container. You can have any number of pipes and memorysets. Can't wait to share more exciting examples of composable AI. We're cookin!! We'll share more on this soon. Follow us on Twitter and LinkedIn for updates. `; // Chunk the content. main(content); ``` Run the script to chunk your document: ```bash {{ title: 'npm' }} npx tsx chunk-text.ts ``` ```bash {{ title: 'pnpm' }} pnpm dlx tsx chunk-text.ts ``` You should get output similar to this, with 9 chunks created from the document: ```shell Total chunks created: 9 Chunks: [ 'Composable AI\n' + 'The Developer Friendly Future of AI Infrastructure\n' + 'In software engineering, composition is a powerful concept. It allows for building complex systems from simple, interchangeable parts. Think Legos, Docker containers, React compon ... ... ... ] ``` --- ## Next Steps - Use [Langbase SDK to integrate chunker primitive](/sdk/chunker) into your AI agents and apps - Check out the [Chunk API reference](/api-reference/chunker) for more details on the parameters and options available - Experiment with different chunk sizes and overlaps to find the optimal settings for your use case - Integrate document chunking into your RAG (Retrieval-Augmented Generation) pipeline - Combine with other Langbase primitives like Parse to process various document formats - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support --- What is Langbase Agent? https://langbase.com/docs/agent/ import { generateMetadata } from '@/lib/generate-metadata'; # What is Langbase Agent? Agent, an AI Primitive by Langbase, works as a runtime LLM agent. You can specify all parameters at runtime and get the response from the agent. Agent uses our unified LLM API to provide a consistent interface for interacting with 100+ LLMs across all the top LLM providers. See the list of [supported models and providers here](/supported-models-and-providers). All cutting-edge LLM features are supported, including streaming, JSON mode, tool calling, structured outputs, vision, and more. It is designed to be used in a variety of applications, including agentic workflows, chatbots, virtual assistants, and other AI-powered applications. --- ## Quickstart: Create a Runtime AI Agent --- ## Let's get started In this guide, we'll use the Langbase SDK to create an AI agent that can summarize user support queries. --- ## Step #1: Generate Langbase API key Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If not, please check the instructions below. You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3.
In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Step #2: Setup your project Create a new directory for your project and navigate to it. ```bash mkdir agent && cd agent ``` ### Initialize the project Create a new Node.js project. ```bash {{ title: 'npm' }} npm init -y ``` ```bash {{ title: 'pnpm' }} pnpm init ``` ```bash {{ title: 'yarn' }} yarn init -y ``` ### Install dependencies You will use the [Langbase SDK](/sdk) to run the agent and `dotenv` to manage environment variables. ```bash {{ title: 'npm' }} npm i langbase dotenv ``` ```bash {{ title: 'pnpm' }} pnpm add langbase dotenv ``` ```bash {{ title: 'yarn' }} yarn add langbase dotenv ``` ### Create an env file Create a `.env` file in the root of your project. You will need two environment variables: 1. `LANGBASE_API_KEY`: Your Langbase API key. 2. `LLM_API_KEY`: Your LLM provider API key. ```bash {{ title: '.env' }} LANGBASE_API_KEY=your_api_key_here LLM_API_KEY=your_llm_api_key_here ``` --- ## Step #3: Configure and run the agent Now let's create a new file called `agent.ts` in the root of your project. This file will contain the code and configuration of the agent. We will use the OpenAI GPT-4.1 model, but you can use any other supported model [listed here](/supported-models-and-providers). In the instructions, which act like a system prompt, we will specify that the agent is a support agent and should summarize user support queries. Finally, we will provide the user query as input to the agent. ```ts {{ title: 'TypeScript' }} import { Langbase } from 'langbase'; import dotenv from 'dotenv'; dotenv.config(); // Initialize the Langbase client const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); async function main() { const response = await langbase.agent.run({ model: 'openai:gpt-4.1', stream: false, apiKey: process.env.LLM_API_KEY!, instructions: 'You are an AI agent that summarizes user support queries for a support agent.', input: 'I am having trouble logging into my account. I keep getting an error message that says "Invalid credentials." I have tried resetting my password, but it still does not work. Can you help me?', }); console.log('Agent Response:', response.output); } main(); ``` Run the agent by executing the script `agent.ts`. ```bash {{ title: 'npm' }} npx tsx agent.ts ``` ```bash {{ title: 'pnpm' }} pnpm dlx tsx agent.ts ``` ```bash {{ title: 'yarn' }} yarn dlx tsx agent.ts ``` You should see output similar to: ```txt Agent Response: User can't log in. Gets "Invalid credentials" error even after password reset. Needs help. ``` --- ## Next Steps Now that you have a basic understanding of how to create and run an agent, you can explore more advanced features and configurations. Here are some suggestions: - Set `stream: true` for [streaming responses](/examples/agent/run-stream) (great for chat-related applications) - Use [structured outputs](/examples/agent/run-agent-structured-output) to get structured data from the agent - Explore more code examples of agents [here](/examples/agent) - Create complex AI workflows with multiple agents in [Langbase Workflow](/workflow) Langbase API https://langbase.com/docs/api-reference/ import { generateMetadata } from '@/lib/generate-metadata'; # Langbase API Langbase is an API-first platform delivering exceptional developer experience.
Our APIs are simple, intuitive, and designed for seamless integration. With clear documentation, practical code examples, and responsive [community support](https://langbase.com/discord), we help you build quickly and efficiently. --- ### Table of contents - [Introduction](#introduction) - [Authentication](#authentication) - [Langbase API Reference](#langbase-api-reference) --- ## Introduction You can interact with Langbase APIs using: 1. HTTP requests from any language, 2. our official [Langbase SDK](/sdk) (Node.js), 3. or [BaseAI.dev](https://BaseAI.dev), our open-source web AI framework. To install the official Node.js Langbase SDK, run the following command in your Node.js project directory: ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` From here on you can use the [Langbase SDK](/sdk) to interact with Langbase APIs. --- ## Authentication The Langbase API uses API keys for authentication. You can create API keys at a user or org account level. Some API endpoints, such as running a pipe, also accept a pipe-specific API key. You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. All API requests should include your API key in an Authorization HTTP header as follows: ```bash Authorization: Bearer LANGBASE_API_KEY ``` With the Langbase SDK, you can set your API key as follows: ```js import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY }); ``` Treat your API keys like passwords. Keep them secret and use them only on the server side. Your Langbase API key is server-side only; never share it or expose it in client-side code like browsers or apps. For production requests, route them through your own backend server where you can securely load your API key from an environment variable or key management service. --- ## Langbase API Reference Explore the Langbase API reference to build AI agents with memory. - [Pipe](/api-reference/pipe) AI agents for any LLM - [Memory](/api-reference/memory) agents and documents for RAG - [Threads](/api-reference/threads) for conversation and context - [Parse](/api-reference/parser) document - [Chunk](/api-reference/chunker) document - [Embed](/api-reference/embed) text - [Agents](/api-reference/agent) Runtime LLM agents - [Tools](/api-reference/tools) to perform crawls and web search - [Errors](/api-reference/errors) Common API errors and how to handle them - [Limits](/api-reference/limits) API rate limits and usage - [Deprecated](/api-reference/deprecated) Deprecated APIs - [Migration Guides](/api-reference/migrate-to-api-v1) from `beta` to `v1` --- ### Next Steps - Build something cool with Langbase APIs. - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.
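For a quick end-to-end check, here is roughly what an authenticated request looks like with plain `fetch`, without the SDK. The `/v1/pipes/run` path follows the Pipe API reference linked above; the pipe name below is a placeholder for one in your own account.

```ts {{ title: 'TypeScript' }}
// Run a pipe over raw HTTP. The Authorization header carries your
// Langbase API key; `name` identifies the pipe to run (placeholder here).
const response = await fetch('https://api.langbase.com/v1/pipes/run', {
	method: 'POST',
	headers: {
		Authorization: `Bearer ${process.env.LANGBASE_API_KEY}`,
		'Content-Type': 'application/json',
	},
	body: JSON.stringify({
		name: 'ai-agent', // hypothetical pipe name
		messages: [{ role: 'user', content: 'Hello, Langbase!' }],
	}),
});

if (!response.ok) throw new Error(`Error: ${response.status} ${response.statusText}`);
console.log(await response.json());
```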
--- Building Products with Generative AI https://langbase.com/docs/workshops/building-products-with-ai/ import { generateMetadata } from '@/lib/generate-metadata'; # Building Products with Generative AI In this workshop, you will learn how to build a simple AI Assistant using Langbase Pipes and Memory tools. We'll then convert this AI Assistant into a product — an AI chatbot that can be shipped to users. --- ## Workshop Outline 1. **Introduction to Generative AI** - What is Generative AI? - Use cases and applications - Langbase Pipes and Memory sets 2. **Building an AI Assistant with Pipes** - Creating and deploying a simple AI Assistant - Few-shot training the AI Assistant 3. **Converting the AI Assistant into a Product** - Designing and building an open-source AI chatbot using [LangUI] - Integrating the AI chatbot with Pipes and Memory - Shipping the AI chatbot to users --- ## Prerequisites Basic understanding of Web technologies (HTML, CSS, JavaScript, React, Next.js, APIs). ## Build an AI Chatbot with Pipes — ⌘ Langbase An AI Chatbot example to help you create chatbots for any use case. This chatbot is built using an AI Pipe on Langbase; it works with 30+ LLMs (OpenAI, Gemini, Mistral, Llama, Gemma, etc), any Data (10M+ context with Memory sets), and any Framework (standard web API you can use with any software). Check out the live demo [here][demo]. ## Features - 💬 [AI Chatbot][demo] — Built with an [AI Pipe on ⌘ Langbase][pipe] - ⚡️ Streaming — Real-time chat experience with streamed responses - 🗣️ Q/A — Ask questions and get pre-defined answers with your preferred AI model and tone - 🔋 Responsive and open source — Works on all devices and platforms ## Learn more 1. Check the [AI Chatbot Pipe on ⌘ Langbase][pipe] 2. Read the [source code on GitHub][gh] for this example 3. Go through Documentation: [Pipe Quick Start][qs] 4. Learn more about [Pipes & Memory features on ⌘ Langbase][docs] ## Get started Let's get started with the project: To get started with Langbase, you'll need to [create a free personal account on Langbase.com][signup] and verify your email address. _Done? Cool, cool!_ 1. Fork the [AI Chatbot Pipe][pipe] on ⌘ Langbase. 2. Go to the API tab to copy the Pipe's API key (to be used on server-side only). 3. Download the example project folder from [here][download] or clone the repository. 4. `cd` into the project directory and open it in your code editor. 5. Duplicate the `.env.example` file in this project and rename it to `.env.local`. 6. Add the following environment variables: ```shell # Replace `PIPE_API_KEY` with the copied API key. NEXT_LB_PIPE_API_KEY="PIPE_API_KEY" # Install the dependencies using the following command: npm install # Run the project using the following command: npm run dev ``` Your app template should now be running on [localhost:3000][local]. > NOTE: > This is a Next.js project, so you can build and deploy it to any platform of your choice, like Vercel, Netlify, Cloudflare, etc.
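Since the pipe API key must stay server-side, the example app proxies chat requests through a Next.js route. Here is a minimal sketch of that pattern; the route path, request body, and streaming headers are illustrative rather than the exact example code, so check the GitHub source above for the real implementation.

```ts {{ title: 'app/api/chat/route.ts (sketch)' }}
// Server-side proxy: the browser calls this route, and only this route
// ever sees NEXT_LB_PIPE_API_KEY.
export async function POST(req: Request) {
	const { messages } = await req.json();

	const res = await fetch('https://api.langbase.com/v1/pipes/run', {
		method: 'POST',
		headers: {
			Authorization: `Bearer ${process.env.NEXT_LB_PIPE_API_KEY}`,
			'Content-Type': 'application/json',
		},
		body: JSON.stringify({ messages, stream: true }),
	});

	// Forward the streamed completion straight to the client.
	return new Response(res.body, {
		headers: { 'Content-Type': 'text/event-stream' },
	});
}
```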
## Authors This project is created by [Langbase][lb] ([@LangbaseInc][lbx]) team members, with contributions from: - Ahmad Awais ([@MrAhmadAwais][xaa]) - Founder & CEO, [Langbase][lb] - Ahmad Bilal ([@AhmadBilalDev][xab]) - Founding Engineer, [Langbase][lb] **_Built by ⌘ [Langbase.com][lb] — Ship composable AI with hyper-personalized memory!_** --- [LangUI]: https://langui.dev [demo]: https://ai-chatbot.langbase.dev [lb]: https://langbase.com [lbx]: https://x.com/LangbaseInc [pipe]: https://langbase.com/examples/ai-chatbot [gh]: https://github.com/LangbaseInc/langbase-examples/tree/main/examples/ai-chatbot [cover]: https://raw.githubusercontent.com/LangbaseInc/docs-images/main/examples/ai-chatbot/ai-chatbot-langbase.png [download]: https://download-directory.github.io/?url=https://github.com/LangbaseInc/langbase-examples/tree/main/examples/ai-chatbot [signup]: https://langbase.fyi/io [qs]: https://langbase.com/docs/pipe/quickstart [docs]: https://langbase.com/docs [xaa]: https://x.com/MrAhmadAwais [xab]: https://x.com/AhmadBilalDev [local]: http://localhost:3000 [mit]: https://img.shields.io/badge/license-MIT-blue.svg?style=for-the-badge&color=%23000000 [fork]: https://img.shields.io/badge/FORK%20ON-%E2%8C%98%20Langbase-000000.svg?style=for-the-badge&logo=%E2%8C%98%20Langbase&logoColor=000000 Translation https://langbase.com/docs/use-cases/translation/ import { generateMetadata } from '@/lib/generate-metadata'; # Translation Enhancing communication across different languages can greatly improve services and experiences in various sectors. With AI-powered translation, you can streamline multilingual customer support, develop multilingual chatbots, simplify localized knowledge bases, and more. Langbase empowers developers to build these translation solutions using [Pipes](/pipe/quickstart), allowing for easy customization and integration. Take a look at our [AI Translator](https://ai-translator.langbase.dev/) example powered by a simple AI Generate [Pipe](https://langbase.com/examples/ai-translator) on Langbase.
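As a minimal sketch of what such a translation step looks like in code, here is the Agent primitive from the Agent quickstart repurposed as a translator. The model, instructions, and input are illustrative; a production translation Pipe would add things like tone control and glossary handling.

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function main() {
	const response = await langbase.agent.run({
		model: 'openai:gpt-4.1',
		stream: false,
		apiKey: process.env.LLM_API_KEY!,
		// Target language and style constraints are illustrative.
		instructions:
			'Translate the user message into French. Preserve tone, names, and formatting.',
		input: 'Your order has shipped and should arrive within 3 business days.',
	});

	console.log('Translation:', response.output);
}

main();
```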
Here are some examples of how translation can be applied across various sectors: --- ## Customer Support - **Multilingual Chatbots:** AI translators can enable chatbots to understand and respond to customer queries in multiple languages, improving customer experience and expanding reach. - **Real-time Translation:** Customer service representatives can use AI translators to communicate with customers in real-time, regardless of language barriers. - **Knowledge Base Localization:** Automatically translate FAQs, help articles, and other support documents into different languages to cater to a global audience. - **Voice Support:** Translate spoken language during support calls to facilitate communication between customers and support agents who speak different languages. _Check out this [**Pipe**](https://langbase.com/langbase/cs-ai-translator) to translate **customer support messages** to understand and respond to customer queries in multiple languages._ --- ## Technology - **Software Localization:** Translate software interfaces, user manuals, and documentation into various languages to make technology products accessible to a global market. - **Technical Support:** Provide multilingual technical support through AI-powered translators, enabling support teams to assist users worldwide effectively. - **Developer Collaboration:** Translate comments, documentation, and code reviews in multilingual development teams to enhance collaboration and productivity. - **API Integration:** Integrate AI translation services into apps and platforms, offering users the ability to use them in their preferred language. _Check out this [**Pipe**](https://langbase.com/langbase/software-docs-translator) to translate **software documentation** to make it accessible to a global market._ --- ## Marketing - **Campaign Localization:** Translate marketing campaigns, advertisements, and social media posts to target diverse linguistic groups. - **Content Marketing:** Automatically translate blog posts, articles, and other content marketing materials to reach a broader audience. - **Customer Feedback:** Translate customer reviews and feedback from different languages to gain insights from a global customer base. - **SEO Optimization:** Translate keywords and optimize multilingual SEO strategies to improve search engine rankings in different languages. _Check out this [**Pipe**](https://langbase.com/langbase/marketing-content-translator) to translate **marketing content** to reach a broader audience._ --- ## News & Media - **Content Translation:** Translate news articles, reports, and multimedia content to provide information to a global audience. - **Subtitle Generation:** Automatically generate subtitles for videos and broadcasts in multiple languages to increase accessibility. - **Real-time Reporting:** Translate live news reports and updates in real-time to cater to international viewers. - **Audience Engagement:** Engage with a global audience on social media by translating posts and comments. _Check out this [**Pipe**](https://langbase.com/langbase/social-media-content-translator) to translate **social media content** to provide information to a global audience._ --- ## Healthcare - **Patient Communication:** Translate medical instructions, consent forms, and communication between healthcare providers and patients who speak different languages. - **Telemedicine:** Facilitate multilingual consultations between doctors and patients via telemedicine platforms using real-time translation. 
- **Medical Research:** Translate medical research papers, clinical trial documentation, and pharmaceutical information to share knowledge globally. - **Training & Education:** Provide translated medical training materials and courses to healthcare professionals around the world. _Check out this [**Pipe**](https://langbase.com/langbase/patient-and-doctor-notes-translator) to translate **records and notes** for communication between healthcare providers and patients._ --- ## Legal - **Document Translation:** Translate legal documents, contracts, and agreements for international clients and cases. - **Court Interpretation:** Use AI translators for real-time translation during court proceedings involving non-native speakers. - **Client Communication:** Translate communications between lawyers and clients who speak different languages. - **Regulatory Compliance:** Ensure legal documents are accurately translated to comply with regulatory requirements in different jurisdictions. _Check out this [**Pipe**](https://langbase.com/langbase/contract-translator) to translate **legal contracts** for international clients and cases._ --- ## Finance - **Multilingual Banking Services:** Offer banking services and support in multiple languages to cater to an international clientele. - **Document Processing:** Translate financial statements, reports, and other documents for global stakeholders. - **Market Analysis:** Translate market research reports and financial news to inform global investment decisions. - **Customer Communication:** Facilitate multilingual communication between financial advisors and clients. _Check out this [**Pipe**](https://langbase.com/langbase/financial-report-translator) to translate **financial reports** for global stakeholders._ --- ## Education & E-Learning - **Course Localization:** Translate online courses, e-learning modules, and educational materials to reach a wider audience. - **Student Support:** Provide multilingual support for international students through AI-powered translation in chats and emails. - **Collaborative Learning:** Enable students and educators from different linguistic backgrounds to collaborate and communicate effectively. - **Content Creation:** Translate educational content, including textbooks, research papers, and instructional videos, to make learning accessible to non-native speakers. _Check out this [**Pipe**](https://langbase.com/langbase/lecture-notes-translator) to translate **lecture notes** to make learning accessible to non-native speakers._ ---
Text Summarization https://langbase.com/docs/use-cases/text-summarization/ import { generateMetadata } from '@/lib/generate-metadata'; # Text Summarization Regardless of the industry, text summarization can significantly enhance decision-making and provide quick insights from large volumes of text data. Using [Pipes](/pipe/quickstart) on Langbase, anyone can build any kind of text summarization solution, each tailored to different industries and use cases. Here are some examples of how text summarization can be applied across various sectors: --- ## Customer Support - **Ticket Summarization:** Summarize customer support tickets to help support agents understand issues quickly and provide faster resolutions. - **AI-Powered Query Response:** Enable customer support to use AI to generate accurate answers to user queries based on the provided docs. This improves accuracy and speeds up the support process. - **FAQ Generation:** Automatically generate FAQ entries from customer inquiries and support interactions. _Check out this [**Pipe**](https://langbase.com/examples/cs-tickets-to-faq-summarizer) to summarize **customer support tickets** into **FAQ entries**._ --- ## Technology - **Software Documentation Summarization:** Summarize extensive software documentation and technical manuals to provide developers and users with quick references and key points. - **Bug Report Summarization:** Summarize bug reports to highlight critical issues and solutions, helping development teams prioritize and address problems more efficiently. - **API Documentation Summarization:** Summarize API documentation to make it easier for developers to understand how to integrate and use different APIs, enhancing productivity and reducing learning time. _Check out this [**Pipe**](https://langbase.com/langbase/bug-report-summarizer) to summarize a **bug report** into a **concise summary**._ --- ## Marketing - **Content Summarization:** Summarize marketing content such as blogs, whitepapers, and reports to create short snippets for promotional purposes, improving content reach and engagement. - **Campaign Analysis:** Summarize customer feedback and campaign performance data to derive actionable insights, helping marketers refine their strategies. _Check out this [**Pipe**](https://langbase.com/examples/campaign-analysis-summarizer) that summarizes the **customer feedback** and **campaign performance data** to determine the success of your campaign._ --- ## News and Media - **Automatic News Summarization:** Media companies can use summarization to provide brief summaries of news articles, helping readers quickly grasp the main points. - **Social Media Monitoring:** Summarize trending topics and user comments to keep track of public sentiment and emerging issues. _Get a quick overview of **public sentiment** and **emerging issues** with this [**Pipe**](https://langbase.com/langbase/social-media-sentiment-analysis-summarizer) that summarizes trending topics and user comments._ --- ## Healthcare - **Medical Records Summarization:** Summarize patient records and doctors’ notes to provide a quick review for medical practitioners, improving the efficiency and quality of care. - **Research Summaries:** Condense large volumes of medical research papers into key findings, making it easier for healthcare professionals to stay updated with the latest developments.
_Check out this [**Pipe**](https://langbase.com/langbase/patient-and-doctor-notes-summarizer) that summarizes **patient records** and **doctors' notes** to provide a quick overview of the patient._ --- ## Legal - **Contract Summaries:** Extract and summarize the key points from legal contracts, facilitating faster reviews and negotiations. - **Case Law Summarization:** Summarize legal documents and case law to provide lawyers and judges with concise overviews of lengthy texts, aiding in quicker and more informed decision-making. _Check out this [**Pipe**](https://langbase.com/langbase/contract-summarizer) that summarizes **contracts** to provide a quick **overview** of the key points._ --- ## Finance - **Earnings Reports Summarization:** Summarize quarterly and annual earnings reports to provide quick insights into a company's performance, helping investors and analysts make informed decisions. - **Market Analysis:** Condense financial news, market trends, and analysis reports to keep traders and analysts updated with essential information. _Check out this [**Pipe**](https://langbase.com/langbase/earning-report-summarizer) that summarizes **earnings reports** to provide **insights** into a company's performance._ --- ## Education and E-Learning - **Lecture Summaries:** Summarize lecture notes and academic articles to provide students with quick reviews and study aids, enhancing learning efficiency. - **Content Curation:** Summarize e-learning content to help learners quickly grasp essential points, making the learning process more effective. _Check out this [**Pipe**](https://langbase.com/langbase/lecture-notes-summarizer) that summarizes **lecture notes** to provide a quick review for students._ --- AI Chatbot https://langbase.com/docs/use-cases/ai-chatbot/ import { generateMetadata } from '@/lib/generate-metadata'; # AI Chatbot Chatbots are everywhere nowadays, helping out in different industries to answer questions and provide support. But most of them work in a fixed way: you choose from preset options, and that's it. The problem? They can't handle questions that aren't on their list. That's where AI-powered chatbots step in. They act more like real assistants, ready to help customers with anything related to your company. With Langbase, you can easily create custom AI chatbots for your business using just one Chat Pipe API. Take a look at our [AI Chatbot](https://ai-chatbot.langbase.dev/) example powered by a simple AI Chat [Pipe](https://langbase.com/examples/ai-chatbot) on Langbase.
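To show how little code a chat turn takes, here is a minimal sketch against a hypothetical chat pipe using the Langbase SDK. The pipe name is a placeholder, and the response shape (a `completion` string when streaming is off) is assumed from the SDK; append each reply and the next user message to `messages` on follow-up turns so the conversation stays grounded.

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function main() {
	// One chat turn against a (hypothetical) chat pipe.
	const { completion } = await langbase.pipes.run({
		name: 'ai-support-chatbot', // hypothetical pipe name
		stream: false,
		messages: [{ role: 'user', content: 'What are your support hours?' }],
	});

	console.log('Bot:', completion);
}

main();
```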
Here are some examples of how AI Chatbots can be adopted in different industries. ## Customer Support - **24/7 Assistance:** AI Chatbots can offer round-the-clock customer service, answering common queries and handling simple tasks without human intervention. - **Support Ticket System Integration:** You can create a chatbot using a Langbase pipe to automatically create and update support tickets based on user interactions. - **Complaint Resolution:** Route complex issues to the appropriate department, ensuring efficient resolution. - **Technical Support:** Offer troubleshooting assistance for common technical product problems, guiding users through step-by-step solutions. ## Marketing - **Lead Generation:** AI-powered Chatbots can engage with website visitors, qualify leads, and capture contact information for the sales team. - **Personalized Campaigns:** Deliver tailored marketing messages based on user behavior and preferences to increase engagement and conversions. - **Feedback Collection:** Conduct surveys and gather customer feedback to improve products and services. ## News and Media - **Interactive Storytelling:** AI Chatbots can provide interactive experiences, allowing users to explore news stories in depth through conversation. - **Subscription Management:** Users can manage their subscriptions, update preferences, and renew services through AI-powered chatbots. ## Healthcare - **Appointment Scheduling:** AI-powered chatbots can help patients book, reschedule, or cancel appointments with healthcare providers. - **Symptom Checker:** Provide preliminary health assessments and suggest possible conditions based on reported symptoms. ## Legal - **Document Automation:** AI chatbots can assist in drafting legal documents and contracts by asking users relevant questions. - **Legal Advice:** Provide clients with general legal information and guidance on common legal issues. - **Appointment Scheduling:** Manage the scheduling of appointments between clients and legal professionals. ## Finance - **Account Information:** Chatbots help users check account balances and view transaction history. - **Savings Tips:** Provide users with simple tips on how to save money. - **Loan Info:** Chatbots offer information about loan products and eligibility. ## Education & E-Learning - **Student Support:** AI chatbots can answer student queries about course material, deadlines, and academic policies. - **Enrollment Assistance:** Prospective students can get information about programs, application processes, and deadlines. - **Interactive Learning:** Chatbots can facilitate interactive learning sessions, quizzes, and practice tests to enhance student engagement.
Page https://langbase.com/docs/threads/platform/ import { generateMetadata } from '@/lib/generate-metadata'; ## Platform Limits and pricing for the Threads primitive on the Langbase Platform are as follows: 1. **[Limits](/threads/platform/limits)**: Rate and usage limits. 2. **[Pricing](/threads/platform/pricing)**: Pricing details for the Threads primitive. Technology https://langbase.com/docs/solutions/technology/ import { generateMetadata } from '@/lib/generate-metadata'; # Technology _Here are some carefully crafted **Langbase** powered **AI solutions** for the tech industry:_ --- ## Text Summarization - **Software Documentation Summarization:** Summarize extensive software documentation and technical manuals to provide developers and users with quick references and key points. - **Bug Report Summarization:** Summarize bug reports to highlight critical issues and solutions, helping development teams prioritize and address problems more efficiently. - **API Documentation Summarization:** Summarize API documentation to make it easier for developers to understand how to integrate and use different APIs, enhancing productivity and reducing learning time. _Check out this [**Pipe**](https://langbase.com/langbase/bug-report-summarizer) to summarize a **bug report** into a **concise summary**._ --- ## Translation - **Software Localization:** Translate software interfaces, user manuals, and documentation into various languages to make technology products accessible to a global market. - **Technical Support:** Provide multilingual technical support through AI-powered translators, enabling support teams to assist users worldwide effectively. - **Developer Collaboration:** Translate comments, documentation, and code reviews in multilingual development teams to enhance collaboration and productivity. - **API Integration:** Integrate AI translation services into apps and platforms, offering users the ability to use them in their preferred language. _Check out this [**Pipe**](https://langbase.com/langbase/software-docs-translator) to translate **software documentation** to make it accessible to a global market._ --- ## Resume Preparation - **Skill Identification**: Highlight relevant skills for the job. - **Experience Tailoring**: Customize work experience to match job requirements. - **Keyword Optimization**: Add industry-specific keywords for ATS. - **Achievement Highlighting**: Emphasize notable achievements and results. - **Format Enhancement**: Improve layout and design for readability. - **Cover Letter Integration**: Create a cover letter that aligns with your resume. _Check out this RAG-based [**Pipe**](https://langbase.com/langbase/resume-analyst) to analyze your **resume** and tailor it for a given job description._ Forked Pipe does not contain any memory. Please create a memory, add a document, and attach it to the Pipe to use the solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- ## Pseudocode Generator - **Clarity and Readability**: Simplifies complex logic into an easy-to-understand format. - **Language Agnostic**: Can be applied across different programming languages. - **Problem Solving**: Helps in visualizing the solution before coding. - **Collaboration**: Facilitates communication and understanding among team members. - **Debugging Aid**: Identifies logical errors early in the development process. - **Planning**: Serves as a blueprint for the actual code implementation. - **Efficiency**: Streamlines the coding process by providing a clear plan.
_Check out this [**Pipe**](https://langbase.com/examples/pseudocode-generation) to integrate pseudocode alongside code in your preferred language, enhancing the learning experience for your users._ --- ## CSV Juggler - **ETL Pipelines**: Convert CSV vendor data to JSON for ETL processing. - **System Interoperability**: Transform data formats for seamless data exchange between systems. - **API Development**: Generate sample JSON data from CSV inputs for API testing. _Check out this [**Pipe**](https://langbase.com/examples/csv-juggler) to streamline data transformation by converting CSV data to JSON or XML and vice versa, enhancing system interoperability, and aiding API development for your users._ --- ## IT Linux Support Assistant - **System Administration**: Provide step-by-step guidance on installing, configuring, and maintaining Ubuntu systems. - **Troubleshooting**: Offer expert solutions for resolving Ubuntu-specific and general Linux issues. - **Advanced Topics**: Assist with advanced networking, security, virtualization, and enterprise-level system design on Ubuntu. _Check out this [**Pipe**](https://langbase.com/examples/it-linux-support-bot) to enhance Ubuntu system administration by providing guidance on installation, troubleshooting, and advanced topics like networking and security for your users._ --- ## API Security Consultant (OWASP 2023) ChatBot - **Comprehensive API Security Assessments**: Evaluate APIs for vulnerabilities based on OWASP 2023 Top 10 API Security Risks. - **Developer Training and Education**: Guide developers in implementing best security practices for APIs. - **Continuous Compliance Monitoring**: Ensure APIs maintain compliance with security standards and regulations. _Check out this [**Pipe**](https://langbase.com/examples/api-sec-consultant) and go through a comprehensive checklist to get insights on web API security best practices and estimate information security risks in your API implementation._ --- ## Midjourney Expert - **Visual Concept Refinement**: Assist creatives in transforming vague ideas into detailed, Midjourney-optimized prompts for precise image generation. _Check out this [**Pipe**](https://langbase.com/examples/midjourney-expert) to guide your creative vision through a comprehensive questionnaire and craft detailed, optimized prompts for stunning AI-generated images._ --- ## PII (Personally Identifiable Information) Anonymizer - **Document Sanitization**: Automatically remove sensitive personal information from various document types to ensure compliance with privacy regulations and protect individual identities. - **Data Breach Prevention**: Scan outgoing communications and files in real-time to detect and redact personally identifiable information, minimizing the risk of accidental data exposure. _Check out this [**Pipe**](https://langbase.com/examples/pi-anonymizer) that instantly removes personal information from any text, keeping your sensitive data safe and secure._ --- ## Web Security (OWASP) Consultant - **Comprehensive Security Assessments**: Evaluate web applications for vulnerabilities based on OWASP 2021 Top 10. - **Developer Training and Education**: Guide developers in secure coding practices in real-time. - **Continuous Compliance Monitoring**: Ensure ongoing compliance with security standards and regulations.
_Check out this [**Pipe**](https://langbase.com/examples/web-sec-consultant) to navigate a comprehensive OWASP Top 10 checklist to gain invaluable insights on web security best practices and estimate potential vulnerabilities in your web application implementation._ --- ## ASCII Software Architect - **Code Comprehension**: Quickly visualize complex codebases as ASCII UML diagrams to aid in understanding and analysis. - **Design Documentation**: Generate clear, text-based UML representations for inclusion in code comments, README files, or technical specifications. - **Collaborative Planning**: Facilitate remote pair programming and architecture discussions by instantly sharing ASCII UML diagrams of proposed designs. - **Legacy System Analysis**: Rapidly create visual representations of older codebases to assist in modernization and refactoring efforts. _Check out this [**Pipe**](https://langbase.com/examples/ascii-software-architect) that instantly transforms your code into ASCII UML diagrams, improves software design workflows, accelerates code comprehension, and enhances team communication for developers and architects alike._ --- Legal https://langbase.com/docs/solutions/legal/ import { generateMetadata } from '@/lib/generate-metadata'; # Legal _Here are some carefully crafted **Langbase** powered **AI solutions** for the legal industry:_ --- ## Text Summarization - **Contract Summaries:** Extract and summarize the key points from legal contracts, facilitating faster reviews and negotiations. - **Case Law Summarization:** Summarize legal documents and case law to provide lawyers and judges with concise overviews of lengthy texts, aiding in quicker and more informed decision-making. _Check out this [**Pipe**](https://langbase.com/langbase/contract-summarizer) that summarizes **contracts** to provide a quick **overview** of the key points._ --- ## Translation - **Document Translation:** Translate legal documents, contracts, and agreements for international clients and cases. - **Court Interpretation:** Use AI translators for real-time translation during court proceedings involving non-native speakers. - **Client Communication:** Translate communications between lawyers and clients who speak different languages. - **Regulatory Compliance:** Ensure legal documents are accurately translated to comply with regulatory requirements in different jurisdictions. _Check out this [**Pipe**](https://langbase.com/langbase/contract-translator) to translate **legal contracts** for international clients and cases._ --- ## Understanding Legal Contracts - **Contract Analysis:** Extract key information, clauses, and obligations from contracts to provide a summary. - **Contract Comparison:** Compare multiple contracts to identify similarities, differences, and potential risks. - **Contract Risk Assessment:** Analyze contracts to identify potential risks, liabilities, and compliance issues. _Check out this RAG-based [**Pipe**](https://langbase.com/langbase/contract-analyzer) that analyzes PDF **contracts** to answer queries._ Forked Pipe does not contain any memory. Please create a memory, add a document, and attach it to the Pipe to use the solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- ## AI Chatbot - **Document Automation:** AI chatbots can assist in drafting legal documents and contracts by asking users relevant questions. - **Legal Advice:** Provide clients with general legal information and guidance on common legal issues.
- **Appointment Scheduling:** Manage the scheduling of appointments between clients and legal professionals. --- News and Media https://langbase.com/docs/solutions/news-media/ import { generateMetadata } from '@/lib/generate-metadata'; # News and Media _Here are some carefully crafted **Langbase** powered **AI solutions** for the news and media industry:_ --- ## Text Summarization - **Automatic News Summarization:** Media companies can use summarization to provide brief summaries of news articles, helping readers quickly grasp the main points. - **Social Media Monitoring:** Summarize trending topics and user comments to keep track of public sentiment and emerging issues. _Get a quick overview of **public sentiment** and **emerging issues** with this [**Pipe**](https://langbase.com/langbase/social-media-sentiment-analysis-summarizer) that summarizes trending topics and user comments._ --- ## Translation - **Content Translation:** Translate news articles, reports, and multimedia content to provide information to a global audience. - **Subtitle Generation:** Automatically generate subtitles for videos and broadcasts in multiple languages to increase accessibility. - **Real-time Reporting:** Translate live news reports and updates in real-time to cater to international viewers. - **Audience Engagement:** Engage with a global audience on social media by translating posts and comments. _Check out this [**Pipe**](https://langbase.com/langbase/social-media-content-translator) to translate **social media content** to provide information to a global audience._ --- ## AI Chatbot - **Interactive Storytelling:** AI Chatbots can provide interactive experiences, allowing users to explore news stories in depth through conversation. - **Subscription Management:** Users can manage their subscriptions, update preferences, and renew services through AI-powered chatbots. --- ## Flick Finder Gen AI Assistant - **Personalized Entertainment**: Provide tailored movie and TV series recommendations based on user preferences, enhancing viewing experiences. - **Comprehensive Reviews**: Offer detailed ratings and reviews from multiple platforms, assisting users in making informed decisions about what to watch. _Check out this [**Pipe**](https://langbase.com/examples/flick-finder) to discover your next favorite movie or TV series._ --- ## StyleScribe AI Assistant - **Content Adaptation**: Transform user-written articles or blog posts into the style of famous authors, enhancing writing quality and engagement. - **Style Learning**: Assist writers in understanding and emulating the writing techniques of renowned authors, improving their own writing skills and versatility. _Explore this [**Pipe**](https://langbase.com/examples/style-scribe-bot) to effortlessly transform your text into the **style of renowned authors** for any platform._ --- Marketing https://langbase.com/docs/solutions/marketing/ import { generateMetadata } from '@/lib/generate-metadata'; # Marketing _Here are some carefully crafted **Langbase** powered **AI solutions** for marketing:_ --- ## Text Summarization - **Content Summarization:** Summarize marketing content such as blogs, whitepapers, and reports to create short snippets for promotional purposes, improving content reach and engagement. - **Campaign Analysis:** Summarize customer feedback and campaign performance data to derive actionable insights, helping marketers refine their strategies.
_Check out this [**Pipe**](https://langbase.com/examples/campaign-analysis-summarizer) that summarizes **customer feedback** and **campaign performance data** to determine the success of your campaign._ --- ## Onboarding Emails In the competitive landscape of marketing, whether for a SaaS, finance, or services product, acquiring and retaining new users is difficult. Effective onboarding emails are one way to turn new signups into engaged users. - **Data-Driven Personalization:** A purpose-built Langbase pipe can generate context-aware emails highly relevant to the user's needs and pain points, making them more likely to resonate. This helps marketing teams create highly effective onboarding emails. - **Efficiency and Consistency:** Writing onboarding emails manually can be time-consuming, especially when you need to create personalized content for different segments of your user base. You can monitor, manage, and customize Langbase pipes to ensure that your emails are consistent and effective. _Check out this [**Pipe**](https://langbase.com/langbase/marketing-onboarding-email) that generates emails which leverage the PAS strategy to address user pain points, amplify the urgency to solve these issues, and present the new feature or solution in a persuasive manner._ --- ## Translation - **Campaign Localization:** Translate marketing campaigns, advertisements, and social media posts to target diverse linguistic groups. - **Content Marketing:** Automatically translate blog posts, articles, and other content marketing materials to reach a broader audience. - **Customer Feedback:** Translate customer reviews and feedback from different languages to gain insights from a global customer base. - **SEO Optimization:** Translate keywords and optimize multilingual SEO strategies to improve search engine rankings in different languages. _Check out this [**Pipe**](https://langbase.com/langbase/marketing-content-translator) to translate **marketing content** to reach a broader audience._ --- ## AI Chatbot - **Lead Generation:** AI-powered Chatbots can engage with website visitors, qualify leads, and capture contact information for the sales team. - **Personalized Campaigns:** Deliver tailored marketing messages based on user behavior and preferences to increase engagement and conversions. - **Feedback Collection:** Conduct surveys and gather customer feedback to improve products and services. --- ## Shoes expert - **Personalized Footwear Recommendations**: Suggest ideal Nike and Adidas shoes based on user preferences for price, style, and intended use (e.g., running, casual wear, sports). - **Price-Performance Analysis**: Compare shoes within a specified budget, highlighting the best value options considering factors like discounts, ratings, and features. - **Multi-Channel Shopping Assistance**: Provide consistent, knowledgeable support across various platforms including e-commerce websites, mobile apps, and in-store kiosks, enhancing the omnichannel shopping experience. _Check out this [**Pipe**](https://langbase.com/examples/shoes-expert) to elevate your footwear shopping experience with **personalized Nike and Adidas recommendations, smart price-performance comparisons, and seamless multi-channel support** for your customers._ --- ## Electronics expert - **Informed Consumer Choices**: Enable shoppers to quickly compare complex electronic products based on their specific needs and preferences.
- **Technical Specification Analysis**: Assist professionals in evaluating and selecting electronic components for projects by comparing detailed datasheets. _Check out this [**Pipe**](https://langbase.com/examples/electronics-expert) to enhance product selection with personalized recommendations, detailed specification analysis, and **interactive decision support** for your customers._ --- ## Product Review Generator - **Consumer Experience Synthesis**: Transform individual user experiences into concise, informative reviews that highlight key product features, performance metrics, and potential drawbacks for prospective buyers. - **Balanced Feedback Generation**: Create nuanced product assessments that objectively present both strengths and areas for improvement, tailored to specific satisfaction levels and product categories to aid consumer decision-making. _Check out this [**Pipe**](https://langbase.com/examples/product-review-generator) to transform user experiences into concise, insightful reviews, offering potential buyers authentic perspectives and interactive guidance for smarter purchasing decisions._ --- ## Book Buddy Chatbot - **Personalized Book Discovery**: Help users find popular books based on their preferred categories, enhancing their reading experience. - **Best Seller Verification**: Verify if a book is a best seller and provide brief summaries, aiding informed reading choices. _Check out this [**Pipe**](https://langbase.com/examples/book-buddy) to **integrate bestseller lists**, personalized suggestions, and book summaries into your application, enhancing your users' engagement and literary discovery._ --- Education https://langbase.com/docs/solutions/education/ import { generateMetadata } from '@/lib/generate-metadata'; # Education _Here are some carefully crafted **Langbase** powered **AI solutions** for the education industry:_ --- ## Text Summarization - **Lecture Summaries:** Summarize lecture notes and academic articles to provide students with quick reviews and study aids, enhancing learning efficiency. - **Content Curation:** Summarize e-learning content to help learners quickly grasp essential points, making the learning process more effective. _Check out this [**Pipe**](https://langbase.com/langbase/lecture-notes-summarizer) that summarizes **lecture notes** to provide a quick review for students._ --- ## Translation - **Course Localization:** Translate online courses, e-learning modules, and educational materials to reach a wider audience. - **Student Support:** Provide multilingual support for international students through AI-powered translation in chats and emails. - **Collaborative Learning:** Enable students and educators from different linguistic backgrounds to collaborate and communicate effectively. - **Content Creation:** Translate educational content, including textbooks, research papers, and instructional videos, to make learning accessible to non-native speakers. _Check out this [**Pipe**](https://langbase.com/langbase/lecture-notes-translator) to translate **lecture notes** to make learning accessible to non-native speakers._ --- ## AI Chatbot - **Student Support:** AI chatbots can answer student queries about course material, deadlines, and academic policies. - **Enrollment Assistance:** Prospective students can get information about programs, application processes, and deadlines. - **Interactive Learning:** Chatbots can facilitate interactive learning sessions, quizzes, and practice tests to enhance student engagement.
--- ## JavaScript Tutor - **Personalized Learning**: Adaptive educational tutors that adjust content and pace based on student responses. - **Customer Service**: Complex problem resolution across multiple interactions in retail or tech support. - **E-commerce Solutions**: Guided shopping experiences with product recommendations and purchase assistance over multiple sessions. - **Travel Planning**: Interactive travel assistants helping users plan trips over multiple sessions. - **Professional Skill Development**: Continuous learning platforms for various professional skills, from coding to management. _Check out this [**Pipe**](https://langbase.com/examples/js-tutor) to enhance JavaScript learning with personalized lessons, complex problem resolution, and **interactive professional skill development** for your users._ --- ## Rust Tutor - **Personalized Learning**: Adaptive educational tutors that adjust content and pace based on student responses. - **Customer Service**: Complex problem resolution across multiple interactions in retail or tech support. - **E-commerce Solutions**: Guided shopping experiences with product recommendations and purchase assistance over multiple sessions. - **Travel Planning**: Interactive travel assistants helping users plan trips over multiple sessions. - **Professional Skill Development**: Continuous learning platforms for various professional skills, from coding to management. _Check out this [**Pipe**](https://langbase.com/examples/rust-tutor) to enhance Rust learning with personalized lessons, a **guided learning experience**, and interactive professional skill development for your users._ --- ## Expert proofreader - **Language Refinement**: Improve academic and professional language usage. - **Style Consistency**: Ensure adherence to specified style guides (APA, MLA, Chicago, etc.). - **Grammar Correction**: Identify and fix grammatical errors and awkward phrasing. - **Clarity Enhancement**: Refine complex ideas into clear, concise language. - **Technical Accuracy**: Verify correct usage of field-specific terminology and jargon. - **Citation Verification**: Check and correct citations and references for accuracy. - **Tone Adjustment**: Adapt writing tone to suit the intended audience and purpose. - **Formatting Uniformity**: Ensure consistent formatting throughout the document. - **ESL Support**: Assist non-native English writers in producing fluent text. - **Readability Improvement**: Enhance overall document structure and flow for better comprehension. _Check out this [**Pipe**](https://langbase.com/examples/expert-proofreader) to **improve document** quality by refining language, ensuring style consistency, correcting grammar, and enhancing readability for your users._ --- ## LaTeX Expert - **Code Generation**: Create valid LaTeX code for document structures, equations, tables, and figures. - **Syntax Explanation**: Clarify LaTeX commands and provide educational comments within the code. - **Error Resolution**: Identify and troubleshoot common LaTeX compilation issues. - **Best Practices Guidance**: Offer tips on document organization, package usage, and LaTeX conventions. - **Complete Document Provision**: Supply fully compilable LaTeX documents for user testing and learning.
_Check out this [**Pipe**](https://langbase.com/examples/latex-expert) to **streamline LaTeX document creation** by generating code, explaining syntax, resolving errors, and providing best practices for your users._ --- ## English CEFR level examiner - **Candidate Seniority Assessment**: Evaluate job applicants' seniority level and experience through a series of structured questions to better match them with appropriate roles. - **Employee Training**: Assess current skill levels and provide personalized learning paths to help employees upskill effectively. - **Customer Support**: Assist users in troubleshooting coding issues or understanding software features through interactive problem-solving. - **E-Commerce Customer Understanding**: Use a simple questionnaire to assess customer requirements, summarize their sentiment, and inform better product design. - **Team Skill Assessment**: Evaluate team members' skills to assign tasks more effectively and identify training needs for skill development. _Check out this [**Pipe**](https://langbase.com/examples/cefr-level-assessment-bot) to **streamline skill assessment** by evaluating seniority, guiding employee training, supporting customer queries, understanding e-commerce needs, and assessing team skills for your users._ --- ## AI Master Chef - **Personalized Meal Planning**: Generate customized weekly meal plans based on dietary preferences and available ingredients. - **Interactive Cooking Lessons**: Guide users through real-time, step-by-step cooking instructions for complex dishes. _Check out this [**Pipe**](https://langbase.com/examples/ai-master-chef) to transform your culinary skills, explore global cuisines, create personalized meal plans, and **turn any ingredients into gourmet dishes tailored to your tastes and dietary needs**._ --- ## ParaphraseParrot ChatBot - **Essay Enhancement**: Improve academic papers by offering varied sentence structures and vocabulary to enrich writing style. - **Content Variation**: Generate diverse phrasings for headlines, social media posts, and marketing copy to A/B test engagement. - **Writer's Block Buster**: Provide alternative expressions for key points in articles, blogs, or scripts to spark creativity and overcome writing stagnation. - **Communication Training**: Help non-native speakers practice expressing ideas in multiple ways to enhance language fluency. _Check out this [**Pipe**](https://langbase.com/examples/paraphrase-parrrot) to transform mundane sentences into captivating expressions, **effortlessly rephrasing your ideas with flair, injecting idioms**, and offering multiple versions to perfectly capture your intended message._ --- ## Career Prep Coach - **Job Seeker Preparation**: Simulate realistic interview scenarios for various industries, helping candidates refine their responses and boost confidence. - **HR Training**: Assist HR professionals in honing their interviewing skills, ensuring fair and effective candidate evaluations across different roles. _Check out this [**Pipe**](https://langbase.com/examples/career-prep-coach) to ace your job interviews by **simulating real scenarios**, providing expert feedback, crafting STAR responses, and boosting your confidence across various industries and roles._ --- ## PolyExplainer ChatBot - **Adaptive Science Communication**: Seamlessly translate complex scientific concepts across five cognitive levels, from child to expert, enhancing understanding for diverse audiences.
_Check out this [**Pipe**](https://langbase.com/examples/poly-explainer-bot) that decodes complex scientific and engineering concepts across five cognitive levels, effortlessly adapting explanations from child-friendly analogies to expert-level discourse, ensuring crystal-clear understanding for any audience._ --- ## Proverb Pro ChatBot - **Language Learning**: Aid non-native speakers in mastering idiomatic expressions, boosting language proficiency and cultural understanding. - **Content Enhancement**: Help writers and creators use idioms and proverbs accurately, enriching content across various mediums. _Check out this [**Pipe**](https://langbase.com/examples/proverb-pro), your go-to guide for decoding idioms and proverbs, offering clear explanations, usage examples, and origins to enrich your language skills and writing with colorful expressions._ --- ## SQL Tutor ChatBot - **Rapid Skill Acquisition**: Accelerate learning of essential SQL commands and concepts for immediate application in data tasks. - **Efficient Onboarding**: Enable quick upskilling for new team members, streamlining their integration into database management roles. _Check out this [**Pipe**](https://langbase.com/examples/sql-tutor) to enhance SQL learning with personalized lessons, a **guided learning experience**, and interactive professional skill development for your users._ --- ## MatPyConverter ChatBot - **Cross-Platform Code Migration**: Assist researchers and engineers in transitioning projects between MATLAB and Python environments, facilitating collaboration and leveraging platform-specific advantages. _Check out this [**Pipe**](https://langbase.com/examples/mat-py-converter) to seamlessly integrate **MATLAB-Python code translation**, enabling cross-platform compatibility and expanding your user base across scientific computing environments._ --- ## Python Tutor ChatBot - **Skill Development**: Assist beginners and advanced users in mastering Python programming through interactive and personalized lessons. - **Practical Application**: Help users implement Python in real-world projects, enhancing their coding skills with hands-on examples and exercises. _Check out this [**Pipe**](https://langbase.com/examples/python-tutor) to enhance Python learning with personalized lessons, **guided learning experiences, and interactive skill development** for your users._ --- ## Question and Answer Expert - **Automated Contextual Questions**: Automated content generation for study guides or FAQ sections, allowing users to quickly create relevant question-and-answer sets on specific topics from a larger body of content. _Check out this [**Pipe**](https://langbase.com/examples/question-and-answer-expert) to empower your app with **intelligent Q&A generation**, turning any content into interactive learning experiences._ --- ## Bash Tutor ChatBot - **Interactive Learning**: Guide users through hands-on Bash exercises, providing real-time feedback and explanations to reinforce concepts. - **Skill Assessment**: Evaluate users' Bash proficiency through targeted questions and practical challenges, identifying areas for improvement. - **Customized Curriculum**: Adapt the learning path based on users' goals and prior experience, focusing on relevant Bash topics for their specific needs.
_Check out this [**Pipe**](https://langbase.com/examples/bash-tutor) to empower your users with an intelligent Bash learning companion that offers personalized, **interactive tutorials and real-time feedback**, transforming command-line novices into confident shell scripters._ --- ## ASCII-artie - **Visual Communication Enhancement**: Transform plain text ideas into engaging ASCII art for vivid expression in text-only environments. - **Educational Illustration**: Generate ASCII versions of concepts, figures, or scenes for accessible, text-based educational materials. _Check out this [**Pipe**](https://langbase.com/examples/ascii-artie) to enable your users to create simple, personalized ASCII art effortlessly, adding a touch of creativity and fun to their digital interactions._ --- Administration https://langbase.com/docs/solutions/administration/ import { generateMetadata } from '@/lib/generate-metadata'; # Administration _Here are some carefully crafted **Langbase** powered **AI solutions** for administration:_ --- ## Recruitment - **Resume Analysis**: Extract key skills, experience, and qualifications from resumes to provide a summary. - **Resume Comparison**: Compare multiple resumes to identify expertise and filter potential candidates. - **Resume Screening**: Analyze resumes to identify potential red flags, gaps, and compliance issues. _Check out this RAG-based [**Pipe**](https://langbase.com/langbase/software-engineer-hiring) that analyzes the resumes of multiple candidates and shortlists them for a given expertise._ Forked Pipe does not contain any memory. Please create a memory, add a document, and attach it to the Pipe to use the solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- ## AI Chatbot - **Meeting Coordination**: AI-powered chatbots can help employees schedule, reschedule, or cancel meetings with colleagues and clients. - **Task Prioritization**: Provide preliminary task assessments and suggest possible priorities based on reported urgency and importance.
- **Employee Handbook Access**: AI-powered chatbots provide instant access to the company's handbook, offering quick answers to policy questions and guidance on company protocols. _Check out this chatbot [**Pipe**](https://langbase.com/langbase/employee-handbook-chatbot) that can answer queries related to company policies and guidelines for employees._ Forked Pipe does not contain any memory. Please create a memory, add a document, and attach it to the Pipe to use the solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- ## Onboarding AI Assistant - **Streamlined Onboarding Process**: Automate and standardize the equipment and access setup procedures, reducing human error and ensuring consistent onboarding experiences across the organization. - **Real-time Issue Resolution**: Instantly generate support tickets for missing equipment or access issues, minimizing delays in the onboarding process and improving new employee productivity. - **24/7 Availability**: Provide round-the-clock assistance for new employees, allowing them to complete onboarding tasks at their own pace and accommodating different time zones or work schedules. _Check out this [**Pipe**](https://langbase.com/examples/onboarding-ai-assistant) to streamline new hire integration, **automate equipment setup verification, and provide instant support** for a smooth and efficient onboarding experience across your organization._ --- ## Stock AI Assistant - **Predictive Restocking**: Forecast demand and automate reorder processes to maintain optimal inventory levels across all product lines. - **Real-time Inventory Alerts**: Monitor stock levels continuously and instantly notify managers of low-stock items or unusual inventory fluctuations. _Check out this [**Pipe**](https://langbase.com/examples/stock-assistant) to streamline stock management, automate reordering processes, and **provide instant inventory insights** for efficient and cost-effective retail operations across your store network._ --- ## AI-Assistant DevScreener (HR support ChatBot for initial screening) - **Candidate Experience Enhancement**: Deliver a personalized, responsive interview experience that adapts to each applicant's profile, providing a professional first impression while efficiently gathering crucial information. - **Talent Pool Optimization**: Systematically evaluate and categorize applicants based on their responses, enabling HR teams to quickly identify top candidates for different positions and experience levels. _Check out this [**Pipe**](https://langbase.com/examples/dev-screener) to streamline candidate evaluation, **automate initial interviews**, and gain instant insights into applicant qualifications for effective tech talent acquisition across your organization._ --- ## MemoAssistant (Transform video conference transcripts into memos) - **Meeting Efficiency**: Instantly convert lengthy, jargon-filled team discussions into clear, actionable summaries, saving hours of post-meeting documentation time. - **Cross-Team Communication**: Bridge knowledge gaps by translating technical meetings into accessible reports for non-technical stakeholders, ensuring alignment across departments.
_Explore this [**Pipe**](https://langbase.com/examples/memo-assistant) to transform video transcripts into clear, concise meeting summaries, capturing key points, decisions, and action items for seamless follow-up and enhanced productivity._ --- Healthcare https://langbase.com/docs/solutions/healthcare/ import { generateMetadata } from '@/lib/generate-metadata'; # Healthcare _Here are some carefully crafted **Langbase** powered **AI solutions** for the healthcare industry:_ --- ## Text Summarization - **Medical Records Summarization:** Summarize patient records and doctors’ notes to provide a quick review for medical practitioners, improving the efficiency and quality of care. - **Research Summaries:** Condense large volumes of medical research papers into key findings, making it easier for healthcare professionals to stay updated with the latest developments. _Check out this [**Pipe**](https://langbase.com/langbase/patient-and-doctor-notes-summarizer) that summarizes **patient records** and **doctors' notes** to provide a quick overview of the patient._ --- ## Translation - **Patient Communication:** Translate medical instructions, consent forms, and communication between healthcare providers and patients who speak different languages. - **Telemedicine:** Facilitate multilingual consultations between doctors and patients via telemedicine platforms using real-time translation. - **Medical Research:** Translate medical research papers, clinical trial documentation, and pharmaceutical information to share knowledge globally. - **Training & Education:** Provide translated medical training materials and courses to healthcare professionals around the world. _Check out this [**Pipe**](https://langbase.com/langbase/patient-and-doctor-notes-translator) to translate **records and notes** for communication between healthcare providers and patients._ --- ## AI Chatbot - **Appointment Scheduling:** AI-powered chatbots can help patients book, reschedule, or cancel appointments with healthcare providers. - **Symptom Checker:** Provide preliminary health assessments and suggest possible conditions based on reported symptoms. - **Diet and Medication Management**: AI chatbots can help patients manage prescriptions and diet plans, allowing them to upload and discuss their prescriptions for better medication adherence. _Check out this RAG-based [**Pipe**](https://langbase.com/langbase/diabetes-management-plan) that helps in managing diabetes by answering questions based on your doctor's notes._ Forked Pipe does not contain any memory. Please create a memory, add a document, and attach it to the Pipe to use the solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- ## AI Drug Assistant _Check out this [**Pipe**](https://langbase.com/examples/ai-drug-assistant) to access comprehensive drug information, personalized medication management guidance, and **interactive pharmaceutical knowledge support** for informed healthcare decisions._ --- Finance https://langbase.com/docs/solutions/finance/ import { generateMetadata } from '@/lib/generate-metadata'; # Finance _Here are some carefully crafted **Langbase** powered **AI solutions** for the finance industry:_ --- ## Text Summarization - **Earnings Reports Summarization:** Summarize quarterly and annual earnings reports to provide quick insights into a company's performance, helping investors and analysts make informed decisions. - **Market Analysis:** Condense financial news, market trends, and analysis reports to keep traders and analysts updated with essential information.
_Check out this [**Pipe**](https://langbase.com/langbase/earning-report-summarizer) that summarizes **earnings reports** to provide **insights** into a company's performance._ --- ## Translation - **Multilingual Banking Services:** Offer banking services and support in multiple languages to cater to an international clientele. - **Document Processing:** Translate financial statements, reports, and other documents for global stakeholders. - **Market Analysis:** Translate market research reports and financial news to inform global investment decisions. - **Customer Communication:** Facilitate multilingual communication between financial advisors and clients. _Check out this [**Pipe**](https://langbase.com/langbase/financial-report-translator) to translate **financial reports** for global stakeholders._ --- ## AI Chatbot - **Account Information:** Chatbots help users check account balances and view transaction history. - **Savings Tips:** Provide users with simple tips on how to save money. - **Loan Info:** Chatbots offer information about loan products and eligibility. _Check out this RAG-based [**Pipe**](https://langbase.com/langbase/financial-report-analyst) to analyze **financial reports** and provide insights._ Forked Pipe does not contain any memory. Please create a memory, add a document, and attach it to the Pipe to use the solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- ## Excel Master - **Quick Calculations**: Facilitates the quick generation of complex calculations in an easy and understandable way. - **Performance and Sales Tracking**: Assists managers in calculating and interpreting performance and sales metrics across departments and time periods. - **Excel Mastery**: Supports anyone looking to handle and develop complex Excel formulas, providing advanced calculation tools and detailed explanations. _Check out this [**Pipe**](https://langbase.com/examples/excel-master) to master Excel skills by facilitating quick calculations and clear explanations for your users._ --- Customer Support https://langbase.com/docs/solutions/customer-support/ import { generateMetadata } from '@/lib/generate-metadata'; # Customer Support _Here are some carefully crafted **Langbase** powered **AI solutions** for customer support:_ --- ## Ticket Summarization - **Ticket Summarization:** Summarize customer support tickets to help support agents understand issues quickly and provide faster resolutions. - **AI-Powered Query Response:** Enable customer support teams to use AI to generate accurate answers to user queries based on the provided docs. This improves accuracy and speeds up the support process. - **FAQ Generation:** Automatically generate FAQ entries from customer inquiries and support interactions. _Check out this [**Pipe**](https://langbase.com/examples/cs-tickets-to-faq-summarizer) to summarize **customer support tickets** into **FAQ entries**._ --- ## Translation - **Multilingual Chatbots:** AI translators can enable chatbots to understand and respond to customer queries in multiple languages, improving customer experience and expanding reach. - **Real-time Translation:** Customer service representatives can use AI translators to communicate with customers in real-time, regardless of language barriers. - **Knowledge Base Localization:** Automatically translate FAQs, help articles, and other support documents into different languages to cater to a global audience.
- **Voice Support:** Translate spoken language during support calls to facilitate communication between customers and support agents who speak different languages. _Check out this [**Pipe**](https://langbase.com/langbase/cs-ai-translator) to translate **customer support messages** to understand and respond to customer queries in multiple languages._ --- ## Documentation Bot - **Instant Document Access:** AI chatbots provide immediate access to relevant documents, reducing the time spent searching through databases. - **Document Search Assistance:** You can create a chatbot using a Langbase pipe to help users find specific documents based on keywords and queries. - **Policy and Procedure Guidance:** Chatbots guide users through company policies and procedures, offering step-by-step assistance for better understanding and compliance. _Check out this RAG-based [**Pipe**](https://langbase.com/langbase/docs-bot) that uses Langbase Pipe FAQs to answer your questions._ Forked Pipe does not contain any memory. Please create a memory, add a document, and attach it to the Pipe to use the solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- ## AI Chatbot - **24/7 Automated Assistance:** AI Chatbots can handle common customer queries around the clock, reducing the need for human agents. - **Support Ticket System Integration:** You can create a chatbot using a Langbase pipe to automatically create and update support tickets based on user interactions. - **Product Troubleshooting:** Chatbots can guide customers through troubleshooting steps for common issues with products or services. _Check out this [**Pipe**](https://langbase.com/langbase/support-ticket-chatbot) that can converse with the user, extract information, and create an appropriate **customer support ticket**._ --- ## Customer Support Chatbot with Memory (RAG) Documentation can be saved as memory and attached to the Pipe to allow LLMs to use it for generating responses, for example a SaaS customer support AI agent pipe with a RAG memory pipeline. - **Memory (RAG) based Chatbot:** You can create a chatbot using a Langbase pipe that uses RAG to generate responses based on the provided documentation. - **Accurate answers:** AI chatbots can provide accurate answers to customer queries based on the provided documentation. It improves the quality of support and can be trained on new docs in minutes. _Check out this RAG-based [**Pipe**](https://langbase.com/langbase/customer-support) that uses your documentation to generate responses._ Your forked AI pipe agent will not contain any memory. Please create a memory, add your documents, and attach it to the Pipe to use this solution. [Learn more about memory](/memory/quickstart#step-1-create-a-memory). --- Workflow <span className="text-sm font-mono text-muted-foreground/70 ml-2">Workflow</span> https://langbase.com/docs/sdk/workflow/ import { generateMetadata } from '@/lib/generate-metadata'; # Workflow Workflow The workflow primitive allows you to create and manage a sequence of steps with advanced features like timeouts, retries, and error handling. It's designed to facilitate complex task orchestration in your AI applications. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). --- ## API reference ## `new Workflow(config)` Create a new Workflow instance by instantiating the `Workflow` class.
```ts {{ title: 'index.ts' }} const workflow = new Workflow(config); // with types const workflow = new Workflow(config?: WorkflowConfig); ``` ## config ```ts {{title: 'WorkflowConfig Object'}} interface WorkflowConfig { debug?: boolean; } ``` *Following are the properties of the config object.* --- ### debug When set to `true`, detailed execution logs for each step will be printed to the console. This includes step start/end times, timeouts, retries, and any errors encountered. Default: `false` --- ## workflow.step(config) Define and execute a step in the workflow. ```ts {{title: 'step Function Signature'}} step<T>(config: StepConfig<T>): Promise<T> ``` The `step` function accepts a configuration object and returns a Promise that resolves to the result of the step execution. --- ### StepConfig ```ts {{title: 'StepConfig Object'}} interface StepConfig<T = unknown> { id: string; timeout?: number; retries?: RetryConfig; run: () => Promise<T>; } ``` *Following are the properties of the step config object.* --- #### id A unique identifier for the step. This ID is used in logs and for storing the step's output in the workflow context. --- #### timeout Maximum time in milliseconds that the step is allowed to run before timing out. If not specified, the step will run until completion or until it fails. --- #### retries Configuration for retry behavior if the step fails. ```ts {{title: 'RetryConfig Object'}} interface RetryConfig { limit: number; delay: number; backoff: 'exponential' | 'linear' | 'fixed'; } ``` *Following are the properties of the retries config object.* - `limit`: Maximum number of retry attempts after the initial try. - `delay`: Base delay in milliseconds between retry attempts. - `backoff`: Strategy for increasing the delay between retries: - `exponential`: Delay doubles with each retry attempt (delay * 2^attempt) - `linear`: Delay increases linearly with each attempt (delay * attempt) - `fixed`: Delay remains constant for all retry attempts For example, with `delay: 1000` and `limit: 3`, an exponential backoff waits 1s, 2s, and 4s before successive retries, a linear backoff waits 1s, 2s, and 3s, and a fixed backoff waits 1s each time. --- #### run The function to execute for this step. Must return a Promise that resolves to the step result.
```ts {{title: 'run Function Signature'}} () => Promise ``` ## Usage examples ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" LLM_API_KEY="" ``` ### `Workflow` examples ```ts {{ title: 'Basic Workflow' }} import { Langbase, Workflow } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const workflow = new Workflow({ debug: true, }); try { // Step 1: Fetch data const data = await workflow.step({ id: 'fetch-data', run: async () => { // Simulate API call return { topic: 'climate change' }; }, }); // Step 2: Generate content using the Agent const content = await workflow.step({ id: 'generate-content', timeout: 8000, // 8 second timeout run: async () => { const { output } = await langbase.agent.run({ model: 'openai:gpt-4o-mini', instructions: 'You are a helpful AI Agent.', input: `Write a short paragraph about ${data.topic}`, llmKey: process.env.LLM_API_KEY!, stream: false, }); return output; }, }); console.log('Final result:', content); } catch (error) { console.error('Workflow failed:', error); } } main(); ``` ```ts {{ title: 'With Retries' }} import { Langbase, Workflow } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const workflow = new Workflow({ debug: true, }); try { // Step 1: Fetch external data with retries const weatherData = await workflow.step({ id: 'fetch-weather', timeout: 5000, // 5 second timeout retries: { limit: 3, delay: 1000, backoff: 'exponential', }, run: async () => { const response = await fetch('https://api.weather.example/forecast'); return response.json(); }, }); // Step 2: Process data const processedData = await workflow.step({ id: 'process-data', run: async () => { // Process the weather data return { location: weatherData.location, temperature: weatherData.current.temperature, forecast: weatherData.forecast.summary, }; }, }); // Step 3: Generate report with Agent const report = await workflow.step({ id: 'generate-report', timeout: 10000, // 10 second timeout run: async () => { const { output } = await langbase.agent.run({ model: 'openai:gpt-4o-mini', instructions: 'You are a weather reporter.', input: `Create a weather report for ${processedData.location} where the temperature is ${processedData.temperature}°C with a forecast of "${processedData.forecast}"`, llmKey: process.env.LLM_API_KEY!, stream: false, }); return output; }, }); console.log('Weather Report:', report); } catch (error) { console.error('Workflow failed:', error); } } main(); ``` ```ts {{ title: 'Parallel Steps' }} import { Langbase, Workflow } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const workflow = new Workflow({ debug: true, }); try { // Run multiple steps in parallel const [summaryResult, translationResult] = await Promise.all([ workflow.step({ id: 'generate-summary', run: async () => { const { output } = await langbase.agent.run({ model: 'openai:gpt-4o-mini', input: 'Summarize the benefits of AI in healthcare', llmKey: process.env.LLM_API_KEY!, stream: false, }); return output; }, }), workflow.step({ id: 'translate-text', run: async () => { const { output } = await langbase.agent.run({ model: 'openai:gpt-4o-mini', input: `Translate "Artificial Intelligence is transforming our world" to French`, llmKey: 
process.env.LLM_API_KEY!, stream: false, }); return output; }, }), ]); console.log('Summary:', summaryResult); console.log('Translation:', translationResult); } catch (error) { console.error('Workflow failed:', error); } } main(); ``` ## Example outputs ```json {{ title: 'Basic Workflow' }} 🔄 Starting step: fetch-data ⏱️ Step fetch-data: 5.214ms 📤 Output: { "topic": "climate change" } ✅ Completed step: fetch-data 🔄 Starting step: generate-content ⏳ Timeout: 8000ms ⏱️ Step generate-content: 2352.871ms 📤 Output: "Climate change represents one of the most urgent global challenges of our time. Rising temperatures, shifting weather patterns, and increasingly frequent extreme weather events are disrupting ecosystems and threatening communities worldwide. Caused primarily by human activities like burning fossil fuels and deforestation, climate change requires immediate collective action through policy reforms, technological innovation, and individual lifestyle changes to mitigate its worst effects and build a sustainable future for coming generations." ✅ Completed step: generate-content Final result: "Climate change represents one of the most urgent global challenges of our time. Rising temperatures, shifting weather patterns, and increasingly frequent extreme weather events are disrupting ecosystems and threatening communities worldwide. Caused primarily by human activities like burning fossil fuels and deforestation, climate change requires immediate collective action through policy reforms, technological innovation, and individual lifestyle changes to mitigate its worst effects and build a sustainable future for coming generations." ``` ```json {{ title: 'With Retries' }} 🔄 Starting step: fetch-weather ⏳ Timeout: 5000ms 🔄 Retries: {"limit":3,"delay":1000,"backoff":"exponential"} ⚠️ Attempt 1 failed, retrying in 1000ms... Error: fetch failed ⚠️ Attempt 2 failed, retrying in 2000ms... Error: fetch failed ⏱️ Step fetch-weather: 8217.631ms 📤 Output: { "location": "San Francisco, CA", "current": { "temperature": 18, "conditions": "Partly Cloudy" }, "forecast": { "summary": "Partly cloudy with a slight chance of evening fog", "high": 21, "low": 14 } } ✅ Completed step: fetch-weather 🔄 Starting step: process-data ⏱️ Step process-data: 0.981ms 📤 Output: { "location": "San Francisco, CA", "temperature": 18, "forecast": "Partly cloudy with a slight chance of evening fog" } ✅ Completed step: process-data 🔄 Starting step: generate-report ⏳ Timeout: 10000ms ⏱️ Step generate-report: 2854.326ms 📤 Output: "Good afternoon, San Francisco! Today we're looking at mild conditions across the Bay Area. Current temperature in San Francisco is sitting at a comfortable 18°C, making it a pleasant day to be outdoors. Looking ahead, expect it to remain partly cloudy with a slight chance of evening fog rolling in from the Pacific. If you're planning evening activities, you might want to bring an extra layer as that fog could bring a slight drop in temperature. Back to you in the studio!" ✅ Completed step: generate-report Weather Report: "Good afternoon, San Francisco! Today we're looking at mild conditions across the Bay Area. Current temperature in San Francisco is sitting at a comfortable 18°C, making it a pleasant day to be outdoors. Looking ahead, expect it to remain partly cloudy with a slight chance of evening fog rolling in from the Pacific. If you're planning evening activities, you might want to bring an extra layer as that fog could bring a slight drop in temperature. Back to you in the studio!" 
```json {{ title: 'Parallel Steps' }}
🔄 Starting step: generate-summary
🔄 Starting step: translate-text
⏱️ Step generate-summary: 1523.123ms
📤 Output: "AI in healthcare offers numerous benefits including: improved diagnosis accuracy through image recognition, personalized treatment plans based on patient data, predictive analytics for early disease detection, streamlined administrative tasks reducing staff burden, enhanced drug discovery and development processes, remote patient monitoring through IoT devices, and improved accessibility to quality healthcare for underserved populations."
✅ Completed step: generate-summary
⏱️ Step translate-text: 785.451ms
📤 Output: "L'Intelligence Artificielle transforme notre monde."
✅ Completed step: translate-text
Summary: "AI in healthcare offers numerous benefits including: improved diagnosis accuracy through image recognition, personalized treatment plans based on patient data, predictive analytics for early disease detection, streamlined administrative tasks reducing staff burden, enhanced drug discovery and development processes, remote patient monitoring through IoT devices, and improved accessibility to quality healthcare for underserved populations."
Translation: "L'Intelligence Artificielle transforme notre monde."
```

---

Pipe <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.pipe</span>
https://langbase.com/docs/sdk/pipe/
import { generateMetadata } from '@/lib/generate-metadata';

# Pipe langbase.pipe

Use the SDK to manage the pipes in your Langbase account. Create, update, list, and run AI Pipes.

- [Run pipe](/sdk/pipe/run)
- [Create pipe](/sdk/pipe/create)
- [Update pipe](/sdk/pipe/update)
- [List pipe](/sdk/pipe/list)
- [usePipe()](/sdk/pipe/use-pipe)

---

Threads <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.threads()</span>
https://langbase.com/docs/sdk/threads/
import { generateMetadata } from '@/lib/generate-metadata';

# Threads langbase.threads()

You can use the `threads()` function to manage conversation threads. Threads help you organize and maintain conversation history, making it easier to build conversational applications.

- [Create Thread](/sdk/threads/create)
- [Update Thread](/sdk/threads/update)
- [Get Thread](/sdk/threads/get)
- [Delete Thread](/sdk/threads/delete)
- [Append Messages](/sdk/threads/append-messages)
- [List Messages](/sdk/threads/list-messages)

---

Tools
API reference of `langbase.tools` function in Langbase AI SDK.
https://langbase.com/docs/sdk/tools

# Tools langbase.tools

The Langbase SDK provides a set of powerful tools to enhance your AI applications. These tools can be used to extend the capabilities of your AI agents and provide additional functionality.

---

## Available Tools

---

### Web Search

The Web Search tool allows your AI agents to perform real-time web searches and retrieve information from the internet. This is particularly useful for tasks that require up-to-date information or fact-checking.

[Learn more about Web Search](/sdk/tools/web-search)

### Crawler

The Crawler tool enables your AI agents to extract and process information from web pages. It can be used to gather data from websites, parse content, and integrate it into your AI workflows.

[Learn more about Crawler](/sdk/tools/crawler)
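To get a quick feel for the tools API, here is a minimal sketch. The method name (`langbase.tools.webSearch()`) is inferred from the Web Search page linked above, and the option names (`query`, `service`, `totalResults`) and the `EXA_API_KEY` search-service key are assumptions; follow the linked pages for the exact signatures.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // Assumed call shape; see /sdk/tools/web-search for the exact options.
  const results = await langbase.tools.webSearch({
    query: 'What is Langbase?',
    service: 'exa', // assumed: name of the configured search service
    totalResults: 2, // assumed option
    apiKey: process.env.EXA_API_KEY!, // assumed: key for the search service
  });

  console.log('Search results:', results);
}

main();
```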
---

Memory <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memory</span>
https://langbase.com/docs/sdk/memory/
import { generateMetadata } from '@/lib/generate-metadata';

# Memory langbase.memory

Use the SDK to programmatically manage memories in your Langbase account. Since documents are stored in memories, you can also manage documents using the SDK.

- [List memory](/sdk/memory/list)
- [Create memory](/sdk/memory/create)
- [Delete memory](/sdk/memory/delete)
- [Retrieve memory](/sdk/memory/retrieve)
- [List documents](/sdk/memory/document-list)
- [Delete document](/sdk/memory/document-delete)
- [Upload document](/sdk/memory/document-upload)
- [Embeddings Retry](/sdk/memory/document-embeddings-retry)
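Before diving into the individual pages, here is a minimal sketch of creating a memory and uploading a document to it. The method names (`memories.create`, `memories.documents.upload`) and option shapes are assumptions inferred from the pages listed above; follow the links for the exact signatures.

```ts
import { Langbase } from 'langbase';
import { readFile } from 'fs/promises';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // Assumed shape; see /sdk/memory/create for the exact options.
  const memory = await langbase.memories.create({
    name: 'product-docs',
    description: 'Documentation for semantic retrieval',
  });

  // Assumed shape; see /sdk/memory/document-upload for the exact options.
  await langbase.memories.documents.upload({
    memoryName: 'product-docs',
    documentName: 'readme.md',
    contentType: 'text/markdown',
    document: await readFile('README.md'),
  });

  console.log('Memory ready:', memory);
}

main();
```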
---

Parser `langbase.parser()`
API reference of `langbase.parser()` function in Langbase AI SDK.
https://langbase.com/docs/sdk/parser

# Parser langbase.parser()

You can use the `parser()` function to extract text content from various document formats. This is particularly useful when you need to process documents before using them in your AI applications.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## Limitations

- Maximum file size: **10 MB**
- Supported file formats:
  - Text files (`.txt`)
  - Markdown (`.md`)
  - PDF documents (`.pdf`)
  - CSV files (`.csv`)
  - Excel spreadsheets (`.xlsx`, `.xls`)
  - Common programming language files (`.js`, `.py`, `.java`, etc.)

---

## API reference

## `langbase.parser(options)`

Parse documents by running the `langbase.parser()` function.

```ts {{ title: 'index.ts' }}
langbase.parser(options);

// with types
langbase.parser(options: ParserOptions);
```

## options

```ts {{title: 'ParserOptions Object'}}
interface ParserOptions {
  document: Buffer | File | FormData | ReadableStream;
  documentName: string;
  contentType: ContentType;
}
```

*Following are the properties of the options object.*

---

### document

The input document to be parsed. Must be one of the supported file formats and under **10 MB** in size.

---

### documentName

The name of the document including its extension (e.g., `document.pdf`).

---

### contentType

The MIME type of the document. Supported MIME types based on file format:

- Text files (`.txt`): `text/plain`
- Markdown (`.md`): `text/markdown`
- PDF documents (`.pdf`): `application/pdf`
- CSV files (`.csv`): `text/csv`
- Excel spreadsheets:
  - `.xlsx`: `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`
  - `.xls`: `application/vnd.ms-excel`
- Programming language files (all use `text/plain`):
  - `.js`: `text/plain`
  - `.py`: `text/plain`
  - `.java`: `text/plain`
  - `.cpp`: `text/plain`
  - `.cs`: `text/plain`
  - Other code files: `text/plain`

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.parser()` examples

```ts {{ title: 'Basic' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const document = new File(['Your document content'], 'document.txt', {
    type: 'text/plain'
  });

  const result = await langbase.parser({
    document: document,
    documentName: 'document.txt',
    contentType: 'text/plain'
  });

  console.log('Parsed content:', result);
}

main();
```

```ts {{ title: 'PDF Document' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // Assuming you're in a browser environment
  const fileInput = document.querySelector('input[type="file"]');
  const pdfFile = fileInput.files[0];

  const result = await langbase.parser({
    document: pdfFile,
    documentName: 'document.pdf',
    contentType: 'application/pdf'
  });

  console.log('Parsed content:', result);
}

main();
```

```ts {{ title: 'Node.js' }}
import { Langbase } from 'langbase';
import { readFile } from 'fs/promises';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const buffer = await readFile('path/to/document.pdf');
  const document = new File([buffer], 'document.pdf', {
    type: 'application/pdf'
  });

  const result = await langbase.parser({
    document: document,
    documentName: 'document.pdf',
    contentType: 'application/pdf'
  });

  console.log('Parsed content:', result);
}

main();
```

---

## Response

Response of `langbase.parser()` is a `Promise<ParserResponse>`.

```ts {{title: 'ParserResponse Type'}}
interface ParserResponse {
  documentName: string;
  content: string;
}
```

- `documentName`: The name of the parsed document.
- `content`: The extracted text content from the document.

```json {{ title: 'ParserResponse Example' }}
{
  "documentName": "document.pdf",
  "content": "Extracted text content from the document..."
}
```

Embed `langbase.embed()`
API reference of `langbase.embed()` function in Langbase AI SDK.
https://langbase.com/docs/sdk/embed

# Embed langbase.embed()

You can use the `embed()` function to generate vector embeddings for text chunks. This is particularly useful for semantic search, text similarity comparisons, and other NLP tasks.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.
For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. Please add the [LLM API keys](/features/keysets) for the embedding models you want to use in your keysets settings.

---

## API reference

## `langbase.embed(options)`

Generate embeddings by running the `langbase.embed()` function.

```ts {{ title: 'index.ts' }}
langbase.embed(options);

// with types
langbase.embed(options: EmbedOptions);
```

## options

```ts {{title: 'EmbedOptions Object'}}
interface EmbedOptions {
  chunks: string[];
  embeddingModel?: EmbeddingModels;
}

type EmbeddingModels =
  | 'openai:text-embedding-3-large'
  | 'cohere:embed-v4.0'
  | 'cohere:embed-multilingual-v3.0'
  | 'cohere:embed-multilingual-light-v3.0'
  | 'google:text-embedding-004';
```

*Following are the properties of the options object.*

---

### chunks

An array of text chunks to generate embeddings for.

---

### embeddingModel

The embedding model to use. Available options:

- `openai:text-embedding-3-large`
- `cohere:embed-v4.0`
- `cohere:embed-multilingual-v3.0`
- `cohere:embed-multilingual-light-v3.0`
- `google:text-embedding-004`

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.embed()` examples

```ts {{ title: 'Basic' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const embeddings = await langbase.embed({
    chunks: [
      "The quick brown fox",
      "jumps over the lazy dog"
    ]
  });

  console.log('Embeddings:', embeddings);
}

main();
```

```ts {{ title: 'Custom Model' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const embeddings = await langbase.embed({
    chunks: [
      "Hello, world!",
      "Bonjour, monde!",
      "¡Hola, mundo!"
    ],
    embeddingModel: "cohere:embed-multilingual-v3.0"
  });

  console.log('Multilingual embeddings:', embeddings);
}

main();
```

---

## Response

Response of `langbase.embed()` is a `Promise<EmbedResponse>`.

```ts {{title: 'EmbedResponse Type'}}
type EmbedResponse = number[][];
```

A 2D array where each inner array represents the embedding vector for the corresponding input chunk.

```json {{ title: 'EmbedResponse Example' }}
[
  [-0.023, 0.128, -0.194, ...],
  [0.067, -0.022, 0.289, ...],
]
```
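Since each chunk maps to exactly one vector, you can compare chunks directly. A minimal sketch of cosine similarity over the returned `number[][]` (plain math, no additional API assumptions):

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

async function main() {
  const [first, second] = await langbase.embed({
    chunks: ['The quick brown fox', 'A fast auburn fox'],
  });

  // Values closer to 1 mean the chunks are semantically similar.
  console.log('Similarity:', cosineSimilarity(first, second));
}

main();
```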
Primitive: LLM <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.llm.run()</span>
https://langbase.com/docs/sdk/llm/
import { generateMetadata } from '@/lib/generate-metadata';

# Primitive: LLM langbase.llm.run()

You can use the `llm.run()` function as a runtime LLM, meaning you specify all parameters at runtime. `llm.run()` is useful when you want fine control over the LLM model and its parameters.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.llm.run(options)`

Request an LLM by running the `langbase.llm.run()` function.

```ts {{ title: 'index.ts' }}
langbase.llm.run(options);

// with types
langbase.llm.run(options: LlmOptions);
```

## options

```ts {{title: 'LlmOptions Object'}}
interface LlmOptions {
  model: string;
  llmKey: string;
  messages: Message[];
  stream?: boolean;
  tools?: Tool[];
  tool_choice?: 'auto' | 'required' | ToolChoice;
  parallel_tool_calls?: boolean;
  top_p?: number;
  max_tokens?: number;
  temperature?: number;
  presence_penalty?: number;
  frequency_penalty?: number;
  stop?: string[];
  customModelParams?: Record<string, any>;
}
```

*Following are the properties of the options object.*

---

### model

LLM model. Combination of model provider and model id, like `openai:gpt-4o-mini`

Format: `provider:model_id`

You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase.

---

### llmKey

LLM API key for the selected model.

---

### messages

A messages array including the following properties.

```ts {{title: 'Message Object'}}
interface Message {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string | ContentType[] | null;
  name?: string;
  tool_call_id?: string;
}
```

---

- `role`: The role of the author of this message.
- `content`: The content of the message.
  1. `String`: for text generation, it's a plain string.
  2. `Null` or `undefined`: tool call messages can have no content.
  3. `ContentType[]`: array used in vision and audio models, where content consists of structured parts (e.g., text, image URLs).

```ts {{ title: 'ContentType Object' }}
interface ContentType {
  type: string;
  text?: string | undefined;
  image_url?:
    | {
        url: string;
        detail?: string | undefined;
      }
    | undefined;
}
```

- `name`: The name of the tool called by the LLM.
- `tool_call_id`: The ID of the tool called by the LLM.

---

### stream

Whether to stream the response or not. If `true`, the response will be streamed.

---

### tools

A list of tools the model may call.

```ts {{title: 'Tools Object'}}
interface ToolsOptions {
  type: 'function';
  function: FunctionOptions;
}
```

- `type`: The type of the tool. Currently, only `function` is supported.
- `function`: The function that the model may call.

```ts {{title: 'FunctionOptions Object'}}
export interface FunctionOptions {
  name: string;
  description?: string;
  parameters?: Record<string, any>;
}
```

- `name`: The name of the function to call.
- `description`: The description of the function.
- `parameters`: The parameters of the function.

---

### tool_choice

Tool usage configuration:

- `auto`: Model decides when to use tools.
- `required`: Model must use the specified tools.
- `ToolChoice` object: Forces use of a specific function.

```ts {{title: 'ToolChoice Object'}}
interface ToolChoice {
  type: 'function';
  function: {
    name: string;
  };
}
```

---

### parallel_tool_calls

Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.

---

### temperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.

Default: `0.7`

---

### top_p

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Default: `1`

---

### max_tokens

Maximum number of tokens in the response message returned.

Default: `1000`

---

### presence_penalty

Penalizes a word based on its occurrence in the input text.
Default: `0`

---

### frequency_penalty

Penalizes a word based on how frequently it appears in the training data.

Default: `0`

---

### stop

Up to 4 sequences where the API will stop generating further tokens.

---

### customModelParams

Additional parameters to pass to the model as key-value pairs. These parameters are passed on to the model as-is.

```ts {{title: 'CustomModelParams Object'}}
interface CustomModelParams {
  [key: string]: any;
}
```

Example:

```ts
{
  "logprobs": true,
  "service_tier": "auto",
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
LLM_API_KEY=""
```

### `langbase.llm.run()` examples

```ts {{ title: 'Non-stream' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const {completion} = await langbase.llm.run({
    model: 'openai:gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: 'You are a helpful assistant.',
      },
      {
        role: 'user',
        content: 'Who is an AI Engineer?',
      },
    ],
    llmKey: process.env.LLM_API_KEY!,
    stream: false,
  });

  console.log('Completion:', completion);
}

main();
```

```ts {{ title: 'Stream' }}
import {getRunner, Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const {stream, rawResponse} = await langbase.llm.run({
    model: 'openai:gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content: 'You are a helpful assistant.',
      },
      {
        role: 'user',
        content: 'Who is an AI Engineer?',
      },
    ],
    llmKey: process.env.LLM_API_KEY!,
    stream: true,
  });

  // Convert the stream to a stream runner.
  const runner = getRunner(stream);

  runner.on('connect', () => {
    console.log('Stream started.\n');
  });

  runner.on('content', content => {
    process.stdout.write(content);
  });

  runner.on('end', () => {
    console.log('\nStream ended.');
  });

  runner.on('error', error => {
    console.error('Error:', error);
  });
}

main();
```

```ts {{ title: 'Tool Calling' }}
// Tool Calling Example
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const tools = [
    {
      type: 'function',
      function: {
        name: 'get_current_weather',
        description: 'Get the current weather in a given location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
            unit: {
              type: 'string',
              enum: ['celsius', 'fahrenheit']
            },
          },
          required: ['location'],
        },
      },
    },
  ];

  const response = await langbase.llm.run({
    model: 'openai:gpt-4o-mini',
    messages: [
      {
        role: 'user',
        content: 'What is the weather like in SF today?',
      },
    ],
    tools: tools,
    tool_choice: 'auto',
    llmKey: process.env.LLM_API_KEY!,
    stream: false,
  });

  console.log(response);
}

main();
```

---

## Response

Response of `langbase.llm.run()` is a `Promise<RunResponse>` object when `stream` is off.

### RunResponse Object

```ts {{title: 'RunResponse Object'}}
interface RunResponse {
  completion: string;
  id: string;
  object: string;
  created: number;
  model: string;
  choices: ChoiceGenerate[];
  usage: Usage;
  system_fingerprint: string | null;
  rawResponse?: {
    headers: Record<string, string>;
  };
}
```

- `completion`: The generated text completion.
- `id`: The ID of the raw response.
- `object`: The object type name of the response.
- `created`: The timestamp of the response creation.
- `model`: The model used to generate the response.
- `choices`: A list of chat completion choices. Can contain more than one element if `n` is greater than 1.
```ts {{title: 'Choice Object for langbase.llm() with stream off'}}
interface ChoiceGenerate {
  index: number;
  message: Message;
  logprobs: boolean | null;
  finish_reason: string;
}
```

- `index`: The index of the choice in the list of choices.
- `message`: A message object including `role` and `content` params.

```ts {{title: 'Message Object'}}
interface Message {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string | null;
  tool_calls?: ToolCall[];
}
```

- `role`: The role of the author of this message.
- `content`: The contents of the message. Null if a tool is called.
- `tool_calls`: The array of the tools called by the LLM.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
  id: string;
  type: 'function';
  function: Function;
}
```

- `id`: The ID of the tool call.
- `type`: The type of the tool. Currently, only `function` is supported.
- `function`: The function that the model called.

```ts {{title: 'Function Object'}}
export interface Function {
  name: string;
  arguments: string;
}
```

- `name`: The name of the function to call.
- `arguments`: The arguments to call the function with, as generated by the model in JSON format.
- `logprobs`: Log probability information for the choice. If true, returns the log probabilities of each output token returned in the `content` of `message`.
- `finish_reason`: The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It can also be `eos` (end of sequence), depending on the type of LLM; check the provider's docs.
- `usage`: The usage object including the following properties.

```ts {{title: 'Usage Object'}}
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```

- `prompt_tokens`: The number of tokens in the prompt (input).
- `completion_tokens`: The number of tokens in the completion (output).
- `total_tokens`: The total number of tokens.
- `system_fingerprint`: This fingerprint represents the backend configuration that the model runs with.
- `rawResponse`: The different headers of the response.

---

### RunResponseStream Object

Response of `langbase.llm.run()` with `stream: true` is a `Promise<RunResponseStream>`.

```ts {{title: 'RunResponseStream Object'}}
interface RunResponseStream {
  stream: ReadableStream<StreamChunk>;
  rawResponse?: {
    headers: Record<string, string>;
  };
}
```

- `rawResponse`: The different headers of the response.
- `stream`: An object with a streamed sequence of StreamChunk objects.

```ts {{title: 'StreamResponse Object'}}
type StreamResponse = ReadableStream<StreamChunk>;
```

### StreamChunk

Represents a streamed chunk of a completion response returned by the model, based on the provided input.

```ts {{title: 'StreamChunk Object'}}
interface StreamChunk {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: ChoiceStream[];
}
```

A `StreamChunk` object has the following properties.

- `id`: The ID of the response.
- `object`: The object type name of the response.
- `created`: The timestamp of the response creation.
- `model`: The model used to generate the response.
- `choices`: A list of chat completion choices. Can contain more than one element if `n` is greater than 1.

```ts {{title: 'Choice Object for langbase.llm() with stream true'}}
interface ChoiceStream {
  index: number;
  delta: Delta;
  logprobs: boolean | null;
  finish_reason: string;
}
```

- `index`: The index of the choice in the list of choices.
- `delta`: A chat completion delta generated by streamed model responses.
```ts {{title: 'Delta Object'}}
interface Delta {
  role?: Role;
  content?: string | null;
  tool_calls?: ToolCall[];
}
```

- `role`: The role of the author of this message.
- `content`: The contents of the chunk message. Null if a tool is called.
- `tool_calls`: The array of the tools called by the LLM.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
  id: string;
  type: 'function';
  function: Function;
}
```

- `id`: The ID of the tool call.
- `type`: The type of the tool. Currently, only `function` is supported.
- `function`: The function that the model called.

```ts {{title: 'Function Object'}}
export interface Function {
  name: string;
  arguments: string;
}
```

- `name`: The name of the function to call.
- `arguments`: The arguments to call the function with, as generated by the model in JSON format.
- `logprobs`: Log probability information for the choice. If true, returns the log probabilities of each output token returned in the `content` of `message`.
- `finish_reason`: The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It can also be `eos` (end of sequence), depending on the type of LLM; check the provider's docs.

```json {{ title: 'RunResponse type of langbase.llm.run()' }}
{
  "completion": "AI Engineer is a person who designs, builds, and maintains AI systems.",
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1720131129,
  "model": "gpt-4o-mini",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "AI Engineer is a person who designs, builds, and maintains AI systems."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 36,
    "total_tokens": 64
  },
  "system_fingerprint": "fp_123"
}
```

```js {{ title: 'RunResponseStream of langbase.llm() with stream true' }}
{
  "stream": StreamResponse // example of streamed chunks below.
}
```

```json {{ title: 'StreamResponse has stream chunks' }}
// A stream chunk looks like this …
{
  "id": "chatcmpl-123",
  "object": "chat.completion.chunk",
  "created": 1719848588,
  "model": "gpt-4o-mini",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "delta": { "content": "Hi" },
    "logprobs": null,
    "finish_reason": null
  }]
}

// More chunks as they come in...
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]}
…
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
```

---

Deprecated SDK methods
https://langbase.com/docs/sdk/deprecated/
import { generateMetadata } from '@/lib/generate-metadata';

# Deprecated SDK methods

Deprecated SDK methods are no longer supported and should not be used. Below is a list of deprecated SDK methods. Click on a method to view detailed information about it.

---

### Pipe

Below is the list of deprecated Pipe SDK methods.

- [Generate text](/sdk/deprecated/generate-text)
- [Stream text](/sdk/deprecated/stream-text)

Please refer to the [Pipe SDK](/sdk/pipe) for the latest methods.
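If you are migrating away from the deprecated text-generation methods, `langbase.pipes.run()` is the current way to run a pipe. A minimal sketch, assuming an existing pipe named `ai-support-agent` and that the non-stream response exposes `completion` the way `llm.run()` does (see /sdk/pipe/run for the exact response shape):

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // Replaces the deprecated generate/stream text calls with pipes.run().
  const { completion } = await langbase.pipes.run({
    name: 'ai-support-agent', // assumed: the name of an existing pipe
    stream: false,
    messages: [{ role: 'user', content: 'Hello!' }],
  });

  console.log(completion);
}

main();
```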
---

Reset Your Password
https://langbase.com/docs/platform-docs/reset-your-password/
import { generateMetadata } from '@/lib/generate-metadata';

# Reset Your Password

---

## Step #1 Go to profile settings

Login to your account on [Langbase][lb].

1. Navigate to your profile `Settings` page.

---

## Step #2 Change your password

Scroll down to the `Reset Password` section.

1. Enter a new password in the box under `New Password`.
2. Click on `Set Password` to confirm your new password.

-----

[lb]: https://langbase.com

⌘ Langbase SDK AI Examples
https://langbase.com/docs/sdk/examples/
import { generateMetadata } from '@/lib/generate-metadata';

# ⌘ Langbase SDK AI Examples

Langbase SDK is built to be used by web developers using JavaScript or TypeScript code. The SDK is designed to be simple to use and easy to integrate with your existing codebase. Here are some examples to get you started with the Langbase SDK.

## Next.js & React Example

- [Next.js Example with pipe.run() stream and non-stream](https://github.com/LangbaseInc/langbase-sdk/tree/main/examples/nextjs)
- [React components](https://github.com/LangbaseInc/langbase-sdk/tree/main/examples/nextjs/components/langbase) to display the response
- [API Route handlers](https://github.com/LangbaseInc/langbase-sdk/tree/main/examples/nextjs/app/langbase/pipe) to send requests to ⌘ Langbase

### `pipe.run()` non-stream

- [Pipe run non-stream API](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nextjs/app/langbase/pipe/run/route.ts)
- [Pipe run React Component](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nextjs/components/langbase/run.tsx)

### `pipe.run()` stream

- [Pipe run stream API](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nextjs/app/langbase/pipe/run-stream/route.ts)
- [Pipe run stream React Component](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nextjs/components/langbase/run-stream.tsx)
- [Pipe run stream Web browser code](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nextjs/components/langbase/run-stream.tsx#L28)

---

## Node.js

### `pipe.run()` non-stream

- [Simple: pipe.run()](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nodejs/pipes/pipe.run.ts)
- [Chat with pipe.run()](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nodejs/pipes/pipe.run.chat.ts)

### `pipe.run()` stream

- [Simple pipe.run() stream](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nodejs/pipes/pipe.run.stream.ts)
- [Chat with ChatGPT like text streaming pipe.run()](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nodejs/pipes/pipe.run.stream.chat.ts)

---

Let us know about any other features and examples you'd like us to build in the SDK by [submitting a request](https://github.com/LangbaseInc/langbase-sdk/issues/new/choose). Excited to see what you ship.

Billing
https://langbase.com/docs/platform-docs/billing/
import { generateMetadata } from '@/lib/generate-metadata';

# Billing

You can upgrade to the `Pro` or `Enterprise` plan on the `Billing` page of Langbase.

---

## Upgrade organization

---

## Step #1 Go to organization billing page

Login to your account on [Langbase][lb].

1. Navigate to your organization profile page.
2. Click on the `Billing` button at the top right corner.

---

## Step #2 Review current billing status

1. `Your Subscription` section displays your current subscription plan, term and status details.
2. `Your Usage` section shows the summary of your usage on Langbase.
This includes the total number of pipes, total requests and memory usage.

---

## Step #3 Upgrade to Pro or Enterprise

Under the `Plans` section, compare and explore different subscription plans.

1. Click on the `Subscribe` or `Let’s Talk` button and follow the prompts to upgrade your organization.

-----

## Upgrade user profile

---

## Step #1 Go to your billing page

Login to your account on [Langbase][lb].

1. Navigate to your profile `Billing` page.

---

## Step #2 Review current billing status

1. `Your Subscription` section displays your current subscription plan, term and status details.
2. `Your Usage` section shows the summary of your usage on Langbase. This includes the total number of pipes, total requests and memory usage.

---

## Step #3 Upgrade to Pro or Enterprise

Under the `Plans` section, compare and explore different subscription plans.

1. Click on the `Subscribe` or `Let’s Talk` button and follow the prompts to upgrade your profile.

-----

[lb]: https://langbase.com

What is an AI Agent? (Pipe)
https://langbase.com/docs/pipe/quickstart/
import { generateMetadata } from '@/lib/generate-metadata';

# What is an AI Agent? (Pipe)

AI Agents can understand context and take meaningful actions. They can be used to automate tasks, research and analyze information, or help users with their queries.

Pipe is an AI agent available as a serverless API. You write the logic, Langbase handles the logistics. It's the easiest way to build, deploy, and scale AI agents without having to manage or update any infrastructure.

---

## What is a Pipe?

Langbase Augmented LLM (Pipe Agent) is the fundamental component of an agentic system. It is a Large Language Model (LLM) enhanced with augmentations such as retrieval, tools, and memory. Our current models can actively utilize these capabilities: generating their own search queries, selecting appropriate tools, and determining what information to retain using memory.

> Ever found yourself amazed by what ChatGPT can do and wished you could integrate similar AI features into your own apps? That's exactly what Pipe is designed for. It’s like ChatGPT, but simple (simplest API), powerful (works with 250+ LLM models), and developer-ready (comes with a suite of dev-friendly features available as an API).

---

## Quickstart: Build an AI Agent to Generate Titles

### Let's build your first AI pipe in a minute.

---

In this quickstart guide, you will:

- **Create** and use a Pipe agent on Langbase.
- **Use** an LLM model like GPT, Llama, Mistral, etc.
- **Build** your pipe agent with configuration and meta settings.
- **Design** a prompt with system, safety, and few-shot messages.
- **Experiment** with your AI pipe in playground (Langbase AI Studio).
- **Observe** real-time performance, usage, and per million request predictions.
- **Deploy** your AI features seamlessly using the Pipe API (global, highly available, and scalable).

---

## Let's get started

There are two ways to follow this guide:

- [Langbase SDK](/sdk) - TypeScript SDK to interact with Langbase APIs. (code)
- [Langbase AI Studio](https://langbase.com/studio) - AI Studio to build, deploy, and collaborate on AI agents. (build)

Click on one of the buttons below to choose your preferred method.

---

## Step #1: Generate Langbase API key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not have an API key, please check the instructions below.
You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## Step #2: Add LLM API keys

If you have set up LLM API keys in your profile, the Pipe will automatically use them. If not, navigate to the [LLM API keys](https://langbase.com/settings/llm-keys) page and add keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

You can add LLM API keys in your account using [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `LLM API keys` link.
4. From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

---

## Step #3: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir ai-support-agent && cd ai-support-agent
```

### Initialize the project

Create a new Node.js project.

```bash {{ title: 'npm' }}
npm init -y
```

```bash {{ title: 'pnpm' }}
pnpm init
```

```bash {{ title: 'yarn' }}
yarn init -y
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to connect to the AI agent pipes and `dotenv` to manage environment variables. So, let's install these dependencies.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

### Create an env file

Create a `.env` file in the root of your project and add the following environment variables:

```bash {{ title: '.env' }}
LANGBASE_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your Langbase API key.

---

## Step #4: Create a new pipe agent

Create a new file named `create-pipe.ts` and add the following code to create a new pipe agent using the Langbase [`Pipe create`](/sdk/pipe/create) API:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const supportAgent = await langbase.pipes.create({
    name: `ai-support-agent`,
    description: `An AI agent to support users with their queries.`,
    messages: [
      {
        role: `system`,
        content: `You're a helpful AI assistant.`,
      },
    ],
  });

  console.log('Support agent:', supportAgent);
}

main();
```

Let's create the pipe agent by running the above file:

```bash {{ title: 'npm' }}
npx tsx create-pipe.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx create-pipe.ts
```

This will create a new pipe agent named `ai-support-agent`.

---

## Step #5: Run AI support agent

Now that you have created the pipe agent, it's time to run it and generate completions.
Create a new file named `run-pipe.ts` and add the following code to run the AI support agent:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase, getRunner } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const { stream } = await langbase.pipes.run({
    name: `ai-support-agent`,
    stream: true,
    messages: [],
  });

  const runner = getRunner(stream);

  runner.on('content', content => {
    process.stdout.write(content);
  });
}

main();
```

You are using the `stream` option to get real-time completions from the AI model. Now let's run the above file to see the AI generate a completion:

```bash
npx tsx run-pipe.ts
```

You should see a sample AI response:

``` {{ title: 'LLM generation' }}
How can I assist you today?
```

---

## Step #6: Design a prompt

Now that you have written the basic code, it's time to design your prompt.

---

### What is a Prompt?

Prompt is the input you provide to the AI model to generate the output. Typically, a prompt starts a chat thread with a system message, then alternates between user and assistant messages.

**Prompt design is important.** At Langbase, we have a few key components to help you design a prompt:

---

### Prompt: System Instructions

A system `message` in a prompt acts as the set of instructions for the AI model.

1. It sets the initial context and helps the model understand your intent.
2. Now let's add a system instruction message. You can add this: `You're a helpful AI assistant. You will assist users with their queries about {{company}}. Always ensure that you provide accurate and to the point information.`

Let's create a new file named `update-pipe.ts` and add the following code to update the pipe agent with the system instruction message:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const supportAgent = await langbase.pipes.update({
    name: `ai-support-agent`,
    description: `An AI agent to support users with their queries.`,
    messages: [
      {
        role: `system`,
        content: `You're a helpful AI assistant. You will assist users with their queries about {{company}}. Always ensure that you provide accurate and to the point information.`,
      },
    ],
  });

  console.log('Support agent:', supportAgent);
}

main();
```

### Prompt: User Message

1. Now let's add a user message. You can create a new object in the `messages` array with `role` as `user`.
2. You can add this: `How to request payment API?`

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const supportAgent = await langbase.pipes.update({
    name: `ai-support-agent`,
    description: `An AI agent to support users with their queries.`,
    messages: [
      {
        role: `system`,
        content: `You're a helpful AI assistant. You will assist users with their queries about {{company}}. Always ensure that you provide accurate and to the point information.`,
      },
      {
        role: `user`,
        content: `How to request payment API?`,
      },
    ],
  });

  console.log('Support agent:', supportAgent);
}

main();
```

### Prompt: Variables

1. Any text written between double curly brackets `{{}}` becomes a variable.
2. Variable values are passed separately. Since you added a `{{company}}` variable, you can pass its value as `ACME`.
✨ Variables allow you to use the same pipe to generate completions based on different values.

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const supportAgent = await langbase.pipes.update({
    name: `ai-support-agent`,
    description: `An AI agent to support users with their queries.`,
    messages: [
      {
        role: `system`,
        content: `You're a helpful AI assistant. You will assist users with their queries about {{company}}. Always ensure that you provide accurate and to the point information.`,
      }
    ],
    variables: {
      company: `ACME`,
    },
  });

  console.log('Support agent:', supportAgent);
}

main();
```

Now run the above file to update `ai-support-agent` using the following command:

```bash
npx tsx update-pipe.ts
```

---

## Step #7: Run AI support agent

Let's run the AI support agent again with a user message. Go ahead and update the `run-pipe.ts` file with the following code:

```ts {{ title: 'TypeScript' }}
import 'dotenv/config';
import { Langbase, getRunner } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const { stream } = await langbase.pipes.run({
    name: `ai-support-agent`,
    stream: true,
    messages: [
      {
        role: `user`,
        content: `How to request payment API?`,
      },
    ],
  });

  const runner = getRunner(stream);

  runner.on('content', content => {
    process.stdout.write(content);
  });
}

main();
```

```bash
npx tsx run-pipe.ts
```

You should see a sample AI response:

```md {{ title: 'LLM generation' }}
To request a payment API from ACME, you typically need to follow these steps:

1. **Create an Account**: Sign up for an account on the ACME platform if you haven't already.
2. **Access Developer Documentation**: Visit the ACME developer portal or documentation section. This is where you'll find detailed information about the payment API, including endpoints, authentication, and usage examples.
3. **API Key Generation**: Look for a section in the developer portal where you can generate an API key. This key is essential for authenticating your requests.
4. **Review API Specifications**: Familiarize yourself with the API specifications, including the required parameters, request methods (GET, POST, etc.), and response formats.
5. **Make a Request**: Use your preferred programming language or tool (like Postman, cURL, etc.) to make API requests. Ensure you include your API key in the request headers.
6. **Test in Sandbox**: If available, use the sandbox environment to test your integration before going live.
7. **Contact Support**: If you have any specific questions or need assistance, contact ACME's support team through their support channels.

Make sure to follow any specific guidelines or requirements outlined in the ACME API documentation.
```

---

## Step #1: Create a Pipe

To get started with Langbase, you'll need to [create a free personal account on Langbase.com][signup] and verify your email address.

_Done? Cool, cool!_

0. When logged in, you can always go to [`pipe.new`][pn] to create a new Pipe.
1. Give your Pipe a name. Let’s call it `AI support agent`.
2. Click on the `[Create Pipe]` button.

And just like that, you have created your first Pipe. You can also fork the [`AI support agent`][opp] pipe we'll be creating in this guide by clicking on the `Fork` button. Forking a pipe is a great way to start experimenting with it.
---

## Step #2: Using an LLM model

If you have set up LLM API keys in your profile, the Pipe will automatically use them. If not, just hit the LLM API Keys button or head over to Settings to add Pipe-level LLM API keys. Let's add an LLM provider API key now.

1. Click on the LLM keys button. It will open a side panel.
2. Select Pipe level keys. Choose any LLM. For example, you can use `OpenAI` (for GPT) or `Together` (for Llama, Mistral, etc.) or any other [supported model](https://langbase.com/docs/supported-models-and-providers) on ⌘ Langbase.
3. Click on the OpenAI `[ADD KEY]` button and add your LLM API key. Inside each key modal, you'll find a link `Get a new key from here`; click it to create a new API key on any API provider's website.

OpenAI expects you to add credits to your account to use their API. And sometimes it can take up to an hour or so for your OpenAI keys to work. It's an OpenAI issue which they're working to fix.

---

## Step #3: Build your Pipe: Configure LLM model

Let's start building our pipe. Go back to the `Pipe` tab.

1. Click on the `gpt-4o-mini` button to select and configure the LLM model for your Pipe.
2. By default, OpenAI `gpt-4o-mini` is selected. You can also pick any `Llama` or `Mistral` model.
3. Choose one of the pre-configured [presets](/features/model-presets) for your model.
4. You can also modify any of the model params. Learn more via the info icon next to each param name.

---

## Step #4: Build your Pipe: Configure the Pipe's Meta

Use the Meta section to configure how your `AI support agent` Pipe should work.

1. You can set the output format of the Pipe to JSON.
2. Moderation mode can be turned on to filter out inappropriate content, as required by OpenAI.
3. You can turn the streaming mode on and off.
4. Turn off storing messages (input prompt and generated completion) for sensitive data like emails.

---

## Step #5: Design a Prompt

Now that you have your LLM model and Pipe meta configured, it's time to design your prompt.

---

### What is a Prompt?

Prompt is the input you provide to the AI model to generate the output. Typically, a prompt starts a chat thread with a system message, then alternates between user and assistant messages.

**Prompt design is important.** At Langbase, we have a few key components to help you design a prompt:

---

### Prompt: System Instructions

A system `message` in a prompt acts as the set of instructions for the AI model.

1. It sets the initial context and helps the model understand your intent.
2. Now let's add a system instruction message. You can add this: `You're a helpful AI assistant. You will assist users with their queries about {{company}}. Always ensure that you provide accurate and to the point information.`

### Prompt: User Message

1. Now let's add a user message. Click on the `USER` button to add a new message.
2. You can add this: `How to request payment API?`

---

### Prompt: Variables

1. Any text written between double curly brackets `{{}}` becomes a variable.
2. The Variables section will display all your variable keys and values.
3. Since you added a variable `{{company}}`, notice it appear in variables.
4. Now set the company variable value to `ACME`. This pipe will now replace `{{company}}` with its value in all messages.

✨ Variables allow you to use the same pipe with different data.

👏 Congrats, you've created your first AI support agent.

---

### Prompt as Code

We're not writing code here, but if you were to write this prompt as code, it would look like this:
1. **Prompt** is a `messages` array. Inside it are `message` objects.
2. **Each `message` object** typically consists of two properties:
   1. `role`: either "system", "user", or "assistant".
   2. `content`: what you're sending or expecting to be generated from the AI LLM.

```js
// Prompt example:
{
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'How to request payment API?' },
    { role: 'assistant', content: 'Sure, here you go … …' }
  ];
}
```

---

## Step #6: AI Studio: Playground & Experimentation

Now that you have your Pipe ready, it's time to run and experiment with it in the Langbase AI Studio.

Langbase provides the developer experience and infrastructure to build, collaborate, and deploy secure, composable AI apps. Our mission is to make AI accessible to everyone, not just AI/ML experts. **Langbase is free for anyone to [get started][signup]**.

Langbase is your AI Studio: Our dashboard is your AI playground to build, collaborate, and deploy AI. It allows you to experiment with your pipes in real-time, with real data, store messages, version your prompts, and truly helps you take your idea from prototype to production (with predictions on usage, cost, and effectiveness). Langbase is a complete developer platform for AI.

- ⌘ **Collaborate**: Invite all team members to collaborate on the pipe. Build AI together.
- ⌘ **Developers & Stakeholders**: All your R&D team, engineering, product, GTM (marketing, sales), and even stakeholders can collaborate on the same pipe. It's like a Google Doc x GitHub for AI. That's what makes it so powerful.

---

## Step #7: Save and Deploy

Pipes can be saved as a sandbox or deployed to production.

1. ⌘ **Deploy to Production**: Make your changes available on the API (global, highly available, and scalable).
2. ⌘ **Sandbox versions**: You can save your changes without deploying them to production.
3. ⌘ **Preview versions**: Running your pipe with unsaved changes will create a new preview version.
4. ⌘ **Version History**: You can use the version selector on the top left to go back to any deployed or sandbox version.

---

### Pipe: Sandbox version

1. When you make changes, a `Draft fork of v1` current version is created but not saved.
2. Press the `Save as Sandbox` button to save your changes as a sandbox version.

---

### Pipe: Deploy to production

1. You can deploy any sandbox version or draft fork to production.
2. Once you're ready, press the `Deploy to Production` button to make your changes available on the API.

✨ Woohoo! You've created and deployed your first AI pipe in production.

---

## Step #8: Pipe API

Now that you have deployed your AI support agent Pipe, you can use it in your apps, websites, or literally anywhere you want.

1. Go to the `API` tab.
2. Retrieve your API base URL and API key.
3. Make sure to never use your API key in client-side code. Always use it in server-side code. Your API key is like a password; keep it safe. If you think it's compromised, you can always regenerate a new one.

---

Using the API key and base URL, you can now make requests to your Pipe.

```shell
curl https://api.langbase.com/v1/pipes/run \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer ' \
  -d '{
    "messages": [{ "role": "user", "content": "How to request payment API?" }],
    "stream": true
  }'
```

---

You can also send new values to the variables in your prompt.
```shell
curl 'https://api.langbase.com/v1/pipes/run' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer ' \
  -d '{
    "messages": [{ "role": "user", "content": "How to request payment API?" }],
    "variables": [{ "name": "company", "value": "XYZ" }],
    "stream": true
  }'
```

---

> On the `API` tab, you can also find interactive API components to test your pipe in real-time.

---

## Step #9: Usage

1. In the Pipe tab, scroll down and expand the `Runs` section.
2. Click on any row of the runs to see detailed logs.
3. Here you can see detailed logs of all your pipe runs, including each request's cost, tokens, latency, etc.

For overall Pipe stats, navigate to the `Usage` tab.

1. Here you can see the total number of requests and their cost.
2. You can also check our AI prediction engine, predicting cost per million requests to your pipe.
3. Finally, you can see the real-time runs of your pipe in the `Runs` section again.

---

✨ **Congrats, you have created your first AI pipe**. We're excited to see what you build with it.

---

## Next Steps

Feel free to experiment with different LLM models, prompts, and configurations.

- Next up, use the [API Reference](/api-reference/pipe) to learn more about the pipe's API.
- You can also check out 20+ pipe features from the left sidebar.
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

Share your feedback and suggestions with us. Post on [𝕏 (Twitter)][x], [LinkedIn][li], or [email us][email]. We're here to help you turn your ideas into AI. Let's go!

---

[signup]: https://langbase.fyi/awesome
[pn]: https://pipe.new/
[x]: https://twitter.com/LangbaseInc
[li]: https://www.linkedin.com/company/langbase/
[email]: mailto:support@langbase.com?subject=Pipe-Quickstart&body=Ref:%20https://langbase.com/docs/pipe/quickstart
[opp]: https://langbase.com/langbase/ai-support-agent

Chunker `langbase.chunker()`
API reference of `langbase.chunker()` function in Langbase AI SDK.
https://langbase.com/docs/sdk/chunker

# Chunker langbase.chunker()

You can use the `chunker()` function to split your content into smaller chunks. This is especially useful for RAG pipelines or when you need to work with only specific sections of a document.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.chunker(options)`

Split content into chunks by running the `langbase.chunker()` function.

```ts {{ title: 'index.ts' }}
langbase.chunker(options);

// with types
langbase.chunker(options: ChunkerOptions);
```

## options

```ts {{title: 'ChunkerOptions Object'}}
interface ChunkerOptions {
  content: string;
  chunkMaxLength?: number;
  chunkOverlap?: number;
}
```

*Following are the properties of the options object.*

---

### content

The content of the document to be chunked.

---

### chunkMaxLength

The maximum length for each document chunk. Must be between `1024` and `30000` characters.
Default: `1024`

---

### chunkOverlap

The number of characters to overlap between chunks. Must be greater than or equal to `256` and less than `chunkMaxLength`.

Default: `256`

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.chunker()` examples

```ts {{ title: 'Basic' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const content = `Langbase is the most powerful serverless AI platform for building AI agents with memory. Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience without using any frameworks.`;

  const chunks = await langbase.chunker({
    content,
    chunkMaxLength: 1024,
    chunkOverlap: 256
  });

  console.log('Chunks:', chunks);
}

main();
```

---

## Response

The response of the `langbase.chunker()` function is a `Promise<ChunkerResponse>` that resolves to an array of strings.

```ts {{title: 'ChunkerResponse Type'}}
type ChunkerResponse = string[];
```

Array of text chunks created from the input document.

```json {{ title: 'ChunkerResponse Example' }}
[
  "Langbase is the most powerful serverless AI platform for building AI agents with memory. Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience without using any frameworks."
]
```

Limits
https://langbase.com/docs/pipe/limits/
import { generateMetadata } from '@/lib/generate-metadata';

# Limits

Beta phase limits on Langbase Pipes.

- Free tier: 1,000 Pipe runs (requests) per month
- Pro tier: $20/mo, 50M tokens, 20K Pipe runs (requests), then $2 per 1,000 requests (max 1K tokens per request)

## Rate Limits

- Free: 1 request per second
- Pro: 10 requests per second (custom enterprise packages available up to 1K requests per second)

Langbase is in the public beta stage right now. These limits and pricing are subject to change without notice. Please bear with us as we improve and get ready for a stable release and massive scale. Already processing tens of billions of AI tokens every month, you're in good hands. Feedback welcome.

---

Edit User Profile
https://langbase.com/docs/platform-docs/edit-user-profile/
import { generateMetadata } from '@/lib/generate-metadata';

# Edit User Profile

---

## Step #1 Go to profile settings

Login to your account on [Langbase][lb].

1. Navigate to your profile `Settings` page.

---

## Step #2 Edit your profile

1. Click on `Change Profile Picture` to browse a profile picture from your PC.
2. Type a name for your profile in the box under `Name`.
3. Type a `-` delimited username for your profile in the box under `Username`.
4. Type a short bio for your profile in the box under `Bio`.

The image max size should be 1 MB and in PNG or JPG format.

---

## Step #3 Save your changes

1. Click on `Save Changes` to save your edits or click on `Reset` to reset them and start over.

-----

[lb]: https://langbase.com

FAQ
https://langbase.com/docs/pipe/faqs/
import { generateMetadata } from '@/lib/generate-metadata';

# FAQ

Let's take a look at some frequently asked questions about Pipe.

---

## What is a Pipe?

Pipe is a high-level layer over Large Language Models (LLMs) that creates a personalized AI assistant for your queries. It can leverage any LLM models, tools, and knowledge with your datasets to assist with your queries.
--- ## What is a System Prompt Instruction? The initial setup or instruction that configures the LLM and tells it how to behave. --- ## What is a User Prompt? A text input that a user provides to an LLM, to which the model responds. --- ## What is an AI Prompt? The LLM's generated output in response to a user prompt. --- ## How to run Playground in Pipe? Assuming the Pipe's API keys are configured: 1. Select any LLM model. By default, OpenAI's `gpt-4o-mini` is selected. 2. If the Pipe is of type `generate`, simply run it. 3. If it is a `chat` pipe, write a message like "hello" in the Playground and run the Pipe. --- ## Can I add a readme to a pipe? Yes, you can add a readme to any Pipe. When you create a Pipe, it already contains a readme. Scroll all the way down on the Pipe's page and you will find the readme there. Simply edit it. --- ## Can I run experiments on a chat Pipe? No, only `generate` type Pipes can run experiments. --- ## Where can I find the Pipe API key? Navigate to the API tab in the Pipe navbar. Here you will find the Pipe's API key. --- ## Does each Pipe have its own API key? Yes, every Pipe you create on Langbase has its own unique API key. --- ## Pipe Playground is not running. How can I fix it? - Check if you have configured the LLM API key for your selected model. - Try providing a user prompt if you are not already providing one. --- Concepts https://langbase.com/docs/pipe/concepts/ import { generateMetadata } from '@/lib/generate-metadata'; # Concepts Pipe is the fastest way to turn ideas into AI. Pipe is like an AI feature. It is a high-level layer on top of Large Language Models (LLMs) that creates a personalized AI assistant for your queries. Let's understand the key concepts of Pipe: --- ## Meta The Pipe meta defines its configuration. It contains the following information: ### Type The type of Pipe, i.e., `generate` or `chat`. The type determines the Pipe's behavior. For example: - Generate Pipe is designed to [generate](/features/generate) LLM completions. - Chat Pipe is designed to create a [conversational](/features/chat) AI agent. ### Stream mode Controls whether the Pipe should [stream](/features/stream) the response or not. If enabled, the Pipe will stream the response in real-time. ### Store messages A Pipe can store both prompts and their completions if [Store messages](/features/store-messages) is enabled in the Pipe meta. Otherwise, only [system prompts](/features/prompt) and [few-shot messages](/features/few-shot-training) will be saved; no completions, final prompts, or variables will be retained, to ensure privacy. ### Moderate Available only for OpenAI models. OpenAI's [moderation](/features/moderation) endpoint identifies harmful content. If enabled, Langbase automatically blocks flagged requests. ### JSON Enforces the completion to be in [JSON format](/features/json-mode). If enabled, the completion will be returned as JSON. --- ## Variables Any text written between `{{}}` in your prompt instructions acts as a [variable](/features/variables) to which you can assign different values using the variables section. Variables will appear once you add them using `{{variableName}}`. At runtime, these [variables](/features/variables) are dynamically populated with their assigned values; see the sketch after the Safety section below. --- ## Safety Define an AI [safety](/features/safety) prompt for any LLM inside a Pipe, for instance: do not answer questions outside of the given context. One of its use cases is to ensure the LLM does not leak sensitive information from the provided context in its response.
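As a rough sketch of how variables and a safety instruction come together at runtime (the pipe name `ai-support-agent` is hypothetical; passing `variables` as an object follows the SDK examples elsewhere in these docs):

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Assume the pipe's system prompt contains a variable and a safety line:
	//   "You are a support agent for {{company}}.
	//    Do not answer questions outside of the given context."
	const { completion } = await langbase.pipes.run({
		name: 'ai-support-agent', // hypothetical pipe name
		stream: false,
		variables: { company: 'XYZ' }, // fills {{company}} at runtime
		messages: [{ role: 'user', content: 'How do I request the payment API?' }],
	});

	console.log(completion);
}

main();
```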
--- ## Experiments They help you learn how your latest Pipe config will affect the LLM response by running it against your previous `generate` requests. One example of [Experiments](/features/experiments) can be changing the Pipe's LLM model to `gemma-7b-it` from `gpt-4-turbo-preview` to see what the response will look like. --- ## Few-shot training Few-shot training helps an LLM pick up and apply knowledge from just a handful of examples. Pipe lets you define multiple pairs of user prompts and AI assistant completions that can be used to [few-shot train](/features/few-shot-training) any LLM. --- ## Pipe level keysets A Pipe-level LLM keyset is specific to each individual pipe. When selected, the Pipe doesn't use the user/org LLM API keys but instead uses the Pipe-level [keyset](/features/keysets) added in its settings. --- How to use Langbase Remote MCP server https://langbase.com/docs/mcp-servers/remote-mcp-server/ import { generateMetadata } from '@/lib/generate-metadata'; # How to use Langbase Remote MCP server ### Interact with Pipes and Memory Agents seamlessly with the Langbase MCP Server --- ## What is Model Context Protocol (MCP)? Model Context Protocol (MCP) is an open protocol that standardizes how applications connect and share data, tools, and other resources with Large Language Models (LLMs). ## Langbase Remote MCP Server Langbase provides a fully compliant Remote MCP server at [mcp.langbase.com/sse](https://mcp.langbase.com/sse), supporting seamless authentication via the official Remote MCP [specification](https://modelcontextprotocol.io/specification/2025-03-26/basic/authorization). You can do the following with the Langbase Remote MCP server: - Create new pipe agents - Update existing pipe agents - Run different pipe agents - List all existing pipe agents - Create a new memory - Upload documents to memory - Retrieve similar chunks from memory - List documents from memory - List all existing memories - Delete memories --- ## Set up the Langbase Remote MCP Server We will set up the Langbase Remote MCP Server on any of the IDEs below, as well as on Claude Desktop. 1. Open Cursor settings 2. Navigate to Tools and Integrations 3. Click on the `+` button to add a New MCP Server 4. Paste the following configuration in the `mcp.json` file: ```json { "mcpServers": { "Langbase MCP Server": { "command": "npx", "args": [ "-y", "mcp-remote", "https://mcp.langbase.com/sse" ] } } } ``` 1. Navigate to Windsurf - Settings > Advanced Settings 2. You will find the option to Add Server 3. Click “Add custom server +” 4. Paste the following configuration in the `mcp_config.json` file: ```json { "mcpServers": { "Langbase MCP Server": { "command": "npx", "args": [ "-y", "mcp-remote", "https://mcp.langbase.com/sse" ] } } } ``` 1. Open Claude Desktop File Menu 2. Navigate to the settings 3. Go to the Developer Tab 4. Click on the Edit Config button 5. Paste the following configuration: ```json { "mcpServers": { "Langbase MCP Server": { "command": "npx", "args": [ "-y", "mcp-remote", "https://mcp.langbase.com/sse" ] } } } ``` You may need to restart the IDE or Claude Desktop for the MCP server to take effect.
## Examples Now that you have set up the MCP server, here are some example queries you can try: ```md Give me an overview of all the Pipes and Memory Agents ``` ```md Create an AI Support agent pipe that uses my docs as memory for autonomous AutoRAG ``` ```md Upload this document to memory named 'langbase-info' ``` --- ## Next Steps - Check out [examples](/sdk/examples) of what you can build using the Langbase MCP server. - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support. --- How to set up the Langbase Docs MCP Server? https://langbase.com/docs/mcp-servers/docs-mcp-server/ import { generateMetadata } from '@/lib/generate-metadata'; # How to set up the Langbase Docs MCP Server? ### A step-by-step guide on how you can set up the Langbase Docs MCP Server --- ## What is Model Context Protocol (MCP)? Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. It provides a standardized way to connect AI models to different data sources and tools. ## Langbase Docs MCP Server The Langbase Docs MCP server allows IDEs like Cursor, Windsurf, etc., to access Langbase documentation. It provides LLMs with up-to-date context about Langbase APIs, SDK, and other resources present in the docs, enabling LLMs to deliver **accurate** and **relevant** answers to your Langbase-related queries. --- ## Set up the Langbase MCP Server We will set up the MCP Server on any of the IDEs below, as well as on Claude Desktop. 1. Open Cursor settings 2. Navigate to the MCP settings 3. Click on the `+` button to add a new global MCP server 4. Paste the following configuration in the `mcp.json` file: ```json { "mcpServers": { "Langbase Docs Server": { "command": "npx", "args": ["@langbase/cli@latest","docs-mcp-server"] } } } ``` 1. Navigate to Windsurf - Settings > Advanced Settings 2. You will find the option to Add Server 3. Click “Add custom server +” 4. Paste the following configuration in the `mcp_config.json` file: ```json { "mcpServers": { "Langbase Docs Server": { "command": "npx", "args": ["@langbase/cli@latest", "docs-mcp-server"] } } } ``` 1. Open Claude Desktop File Menu 2. Navigate to the settings 3. Go to the Developer Tab 4. Click on the Edit Config button 5. Paste the following configuration: ```json { "mcpServers": { "Langbase Docs Server": { "command": "npx", "args": ["@langbase/cli@latest", "docs-mcp-server"] } } } ``` You may need to restart the IDE or Claude Desktop for the MCP server to take effect. Now that you have set up the MCP server, here are some example queries you can try: ```md How to create a pipe agent named 'summary-agent'? How to add memory to my summary-agent? ``` You can also ask about: - How to update existing pipe agents - How to use new AI primitives - Different agent architectures --- ## Next Steps - Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk). - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support. --- Pipe Features https://langbase.com/docs/pipe/features/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe Features Explore the comprehensive list of features offered by Langbase Pipes. Dive into more details on their pages.
### API and Integration - [Generations](/features/generate) - [Chat Generations](/features/chat) - [API](/features/api) ### Prompt Engineering and Response Control - [Prompts](/features/prompt) - [Variables](/features/variables) - [Few Shot Training](/features/few-shot-training) - [Safety](/features/safety) - [Stream](/features/stream) ### Advanced Features - [Keysets](/features/keysets) - [Experiments](/features/experiments) - [Pipe Versions](/features/versions) - [Fork Pipes](/features/fork) - [JSON Mode](/features/json-mode) - [Messages Storage](/features/store-messages) - [Moderation](/features/moderation) - [Pipe Examples](/features/examples) - [Model Presets](/features/model-presets) - [Readme](/features/readme) ### Analytics and Monitoring - [Usage](/features/usage) - [Logs](/features/logs) Pipe Examples https://langbase.com/docs/pipe/examples/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe Examples Examples of apps built using [Pipe](/pipe) and its features like generate, chat, stream, moderate, and JSON. --- ## AI Tech Guide Writer This example uses a simple [tech guide writer](https://langbase.com/langbase/tech-guide-writer) Pipe. It uses the Pipe `generate` API to generate guides on the provided topic. You can also define max words and sentences per paragraph. Since the app uses a [Pipe](/pipe), we can easily **switch** LLM models without changing the code. Right now, it is using the `gemma-7b-it` model from Groq. [Try out](https://ai-tech-guide-writer.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/tech-guide-writer) to see how easy it is to build an app using Pipe. --- ## AI Chatbot This example uses a [chatbot](https://langbase.com/examples/ai-chatbot) Pipe on Langbase to create an efficient, streaming-enabled chatbot for any use case. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://ai-chatbot.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/ai-chatbot) to see how easy it is to build an app using Pipe. --- ## ASCII Software Architect This example uses a [chatbot](https://langbase.com/examples/ascii-software-architect) Pipe on Langbase to create ASCII Software Architect, which generates ASCII UML Class diagrams for code comprehension, design documentation, collaborative planning, and legacy system analysis. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://ascii-software-architect.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/ascii-software-architect) to see how easy it is to build an app using Pipe and Chat UI. To use this chatbot, you can select one of the suggestions presented in the menu. See conversation tips to get the best results out of this chatbot.
--- ## Expert Proofreader This example uses a [chatbot](https://langbase.com/examples/expert-proofreader) Pipe on Langbase to create Expert Proofreader, refining language, ensuring style consistency, correcting grammar, and enhancing clarity while preserving accuracy. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://expert-proofreader.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/expert-proofreader) to see how easy it is to build an app using Pipe and Chat UI. To use the Expert Proofreader chatbot, you can select one of the suggestions presented in the menu. See conversation tips to get the best results. --- ## JavaScript Tutor This example uses a [chatbot](https://langbase.com/examples/js-tutor) Pipe on Langbase to create JavaScript Tutor, offering interactive lessons, progress tracking, quizzes, and the ability to skip levels for targeted learning. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://js-tutor.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/js-tutor) to see how easy it is to build an app using Pipe. To use this chatbot, select the suggestion presented in the menu. See conversation tips for the best results. --- ## English CEFR Level Assessment Bot This example uses a [chatbot](https://langbase.com/examples/cefr-level-assessment-bot) Pipe on Langbase to create English CEFR Level Assessment Bot, an AI assistant that assesses your English language skills through an interactive skill assessment test (comprehension and writing). It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://cefr-level-assessment-bot.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/cefr-level-assessment-bot) to see how easy it is to build an app using Pipe and Chat UI. To use English CEFR Level Assessment Bot, interact with the chatbot by answering questions. At the end of the interactive conversation/test, you can receive a rough assessment of your English proficiency from the English CEFR Level Assessment chatbot. --- ## AI Master Chef This example uses a [chatbot](https://langbase.com/examples/ai-master-chef) Pipe on Langbase to create AI MasterChef, your ultimate culinary assistant, designed to inspire home cooks, aspiring chefs, and food enthusiasts alike. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly.
[Try out](https://ai-master-chef.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/ai-master-chef) to see how easy it is to build an app using Pipe and Chat UI. To use AI Master Chef, you can use the following text as an example: ``` You: Hello AI Master Chef: ... You: I have rice and chicken help me cook something delicious today ``` --- ## AI Drug Assistant This example uses a [chatbot](https://langbase.com/examples/ai-drug-assistant) Pipe on Langbase to create AI Drug Assistant, which provides you with comprehensive details on medications, including main ingredients, pharmacological principles, efficacy, indications, dosage, and administration. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://ai-drug-assistant.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/ai-drug-assistant) to see how easy it is to build an app using Pipe and Chat UI. To use AI Drug Assistant, you can use the following text as an example: ``` You: Hello AI Drug Assistant: ... You: Explain how to properly store and administer insulin, including potential interactions with other medications ``` --- ## Excel Master Chatbot This example uses a [chatbot](https://langbase.com/examples/excel-master) Pipe on Langbase to create Excel Master, providing assistance with Excel tasks including requirement analysis, formula generation, component explanation, implementation guidance, and troubleshooting. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://excel-master.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/excel-master) to see how easy it is to build an app using Pipe and Chat UI. To use the Excel Master chatbot, you can select one of the suggestions presented in the menu. --- ## Pseudocode Generator Chatbot This example uses a [chatbot](https://langbase.com/examples/pseudocode-generator) Pipe on Langbase to create Pseudocode Generator chatbot, offering features like requirement analysis, structured pseudocode generation, data structure explanation, step-by-step comments, time complexity analysis, and reasoning behind the algorithm. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://pseudocode-generator.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/pseudocode-generator) to see how easy it is to build an app using Pipe and Chat UI. To use the Pseudocode Generator, you can select one of the suggestions presented in the menu.
--- ## Product Review Generator Chatbot This example uses a [chatbot](https://langbase.com/examples/product-review-generator) Pipe on Langbase to create Product Review Generator, featuring review crafting, user satisfaction assessment, targeted inquiry, balanced overviews, and consumer insight to generate concise and helpful product reviews based on user feedback. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://product-review-generator.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/product-review-generator) to see how easy it is to build an app using Pipe and Chat UI. To use this chatbot, you can select one of the suggestions presented in the menu. See conversation tips to get the best results. --- ## Dev Screener Chatbot This example uses a [chatbot](https://langbase.com/examples/dev-screener) Pipe on Langbase to create Dev Screener, which enhances the candidate experience through personalized interviews and optimizes the talent pool by systematically evaluating and categorizing applicants for efficient HR decision-making. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://dev-screener.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/dev-screener) to see how easy it is to build an app using Pipe and Chat UI. To use this chatbot, select a suggestion from the menu to start a guided conversation. See the conversation tips to get the best results. --- ## API Security Consultant Chatbot based on OWASP 2023 This example uses a [chatbot](https://langbase.com/examples/api-sec-consultant) Pipe on Langbase to create API Security Consultant, which guides users through a comprehensive OWASP 2023-based API security assessment via a structured MCQ process that evaluates vulnerabilities, educates developers, and ensures compliance. It uses the Pipe `chat` API. Since the app uses a [Pipe](/pipe), we can easily **switch** to any LLM model from the extensive list of [providers](/supported-models-and-providers) on Langbase. You can customize the prompt of the pipe, and the chatbot will respond accordingly. [Try out](https://api-sec-consultant.langbase.dev/) the example and take a look at the [source code](https://github.com/LangbaseInc/langbase/tree/main/examples/api-sec-consultant) to see how easy it is to build an app using Pipe and Chat UI. To use this chatbot, select a suggestion from the menu to start a guided conversation. See conversation tips to get the best results. --- Integrations: Langbase with n8n Workflows https://langbase.com/docs/integrations/n8n/ import { generateMetadata } from '@/lib/generate-metadata'; # Integrations: Langbase with n8n Workflows ### Let's take a look at how to integrate Langbase Pipe into your n8n workflows. --- In this integration guide, you will: - Learn how to set up **Langbase Pipe** in your n8n workflow. - Understand how to **configure** the HTTP Request node to communicate with Langbase Pipe.
- **Extract and manipulate** the response from Langbase Pipe using n8n's data transformation features. --- ## Step 1: Get the Pipe run cURL request To get started with integrating Langbase Pipe into your n8n workflow, follow these steps: 1. Navigate to [Langbase Studio](https://studio.langbase.com). 2. Create a new Pipe agent or select an existing one. You can also fork this [AI Support agent](https://langbase.com/examples/ai-support-agent) pipe. Langbase Pipe 3. Navigate to the API tab in your Langbase Pipe to access its API endpoint and API key. Langbase Pipe API Section 4. Scroll to the **Run the Pipe (simple)** section. 5. Click on the **STREAM-OFF** tab. 6. Copy the Pipe run **cURL** request. Langbase Pipe cURL Request ## Step 2: Configure HTTP Request Node in n8n 1. Open your n8n workspace and navigate to your workflow. 2. Add a new node by clicking the **+** icon. 3. Select the HTTP Request node from the Core section. 4. This node will handle the API communication with Langbase. n8n workflow ## Step 3: Import Pipe run cURL request 1. Click on the `Import cURL` button. A **popup** will appear on your screen. 2. Paste the Pipe run cURL request you copied earlier into the popup. 3. Click on the `Execute Step` button. 4. You will see the response in the `OUTPUT` section. n8n workflow HTTP Request ## Step 4: Create a new data transformation node 1. Add another new node to your workflow. 2. Navigate to the **Data transformation** section. 3. Select `Edit Fields (Set)` to extract specific fields from the response. n8n workflow Edit Fields (Set) ## Step 5: Extract and manipulate the response You can consume the response from the Langbase Pipe in your n8n workflow by extracting the `completion` field from the response. 1. Drag the `completion` field into the **Field to Set** input field. 2. Execute the transformation by clicking `Execute Step`. 3. Now you can see the response in the **OUTPUT** section. n8n workflow response n8n workflow Output See our [API Reference](/api-reference) for more API endpoints and cURL requests. --- ## Next Steps - Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk). - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support. --- Page https://langbase.com/docs/parser/platform/ import { generateMetadata } from '@/lib/generate-metadata'; ## Platform Limits and pricing for the Parser primitive on the Langbase Platform are as follows: 1. **[Limits](/parse/platform/limits)**: Rate and usage limits. 2. **[Pricing](/parse/platform/pricing)**: Pricing details for the Parser primitive. Explained: Retrieval Augmented Generation (RAG) https://langbase.com/docs/explained/rag/ import { generateMetadata } from '@/lib/generate-metadata'; # Explained: Retrieval Augmented Generation (RAG) ### A step-by-step explanation to understand how RAG works --- In this guide, we will learn: 1. What is Retrieval Augmented Generation (RAG)? 2. How does RAG work? --- ## What is Retrieval Augmented Generation (RAG)? Large language models (LLMs) are trained on vast amounts of data and have impressive generation capabilities. However, they lack context and cannot provide answers based on specific knowledge or facts. RAG is a way to provide additional context to an LLM so it generates a more relevant response. Let us try to understand this with an example: ```js ** System Prompt ** You are a helpful AI assistant. ** User ** Where was I born? ** LLM ** I am sorry, I do not have that information. ``` Now, let's see how RAG can help in this situation.
```js ** System Prompt ** You are a helpful AI assistant. Use the context provided below to generate a response. Context: User was born in New York, USA. ** User ** Where was I born? ** LLM ** You were born in New York, USA. ``` This is a simple example of how, in RAG, we can provide additional context to LLMs to generate a more relevant response. It can be extended to provide context from a large database of personal data, facts, or any other information. Let's dive deep into how RAG works. --- ## Step 0: Vector store We cannot feed all of the information to the LLMs as is; they have a limited context window and can easily hallucinate. We need to provide the LLMs with a way to access only the relevant information when needed. This is where the vector store comes in. --- ### Vectors and Embeddings Vectors represent data as arrays of numbers and are widely used in machine learning as a powerful way to represent and manipulate data. In the context of RAG, we take the data and generate embeddings (vectors) for each piece of data. The core idea of embeddings is that semantically closer data points are closer in the vector space. A typical vector embedding for a piece of data might look like this: ```py [-0.2, 0.5, 0.1, -0.3, 0.7, 0.2, -0.1, 0.4, 0.3, ...] ``` The length of a vector array is called its dimension. Higher dimensions allow the vector to store more information. Think of each number in the vector as representing a different feature of the data. For example, to describe a fruit, features like color, taste, and shape can each be represented as a number in the vector. The choice and representation of these features form the art of creating embeddings. Effective embeddings capture the essence of the data compactly, ensuring that semantically similar data points have similar embeddings. For instance, "apple" and "orange" are closer in the vector space than "apple" and "car" because they are semantically similar. Look at the image below to visualize this concept. Embeddings Vector Space Embeddings can represent any kind of data, such as text, images, and audio. We embed this data into vectors and store them in a vector database. The goal is to store relevant information in the vector database and provide LLMs with a way to access it, known as retrieval. We will explore how to do this in the next steps. ⌘ Langbase Memory store You get the idea, right? Don't worry if you don't; we will see how to do this in the next steps. Langbase takes care of all the heavy lifting for you with its [memory sets](/memory). --- ### Chunking Files can be of any size. We cannot possibly capture the essence of an entire file in a single vector. We need to break down the file into smaller chunks and generate embeddings for each chunk. This process is called chunking. For instance, consider a book: we can break it down into chapters, paragraphs, and topics, generating embeddings for each chunk. Each piece of information is represented by a vector, and similar pieces of information will have similar embeddings, thus being close in vector space. This way, when we need to access information, we can retrieve only the relevant pieces instead of the entire file. Each piece has text and its associated embeddings. Together, these form a memory set, as shown below. ⌘ Langbase Memory workflow --- ## Step 1: Retrieval Now that we have the embeddings stored in the vector store, we can retrieve the relevant information when needed. This is called retrieval.
When a user inputs a query, we do the following: 1. Generate embeddings for the query. 2. Retrieve the relevant embeddings from the vector store. We generate embeddings for the query and compare them with the data embeddings in the vector store. By retrieving the closest, semantically similar embeddings, we can then access the associated text. This provides the relevant information to the LLMs. --- ## Step 2: Augmentation Now that we have the relevant information, we can augment the system prompt with this information and instruct the LLMs to generate a response based on it. This is called augmentation. The augmented prompt is then passed to the LLMs to generate text. --- ## Step 3: Generation Generation is the final step where we provide the LLMs with the user input and the augmented information. The LLMs can now generate text based on this information. The generated text is more relevant and meaningful as it has the context of the relevant information. Putting it all together, we have the following workflow: 1. User inputs a query. 2. Retrieve the relevant information from the vector store. 3. Augment the system prompt with this information. 4. Generate text based on the augmented prompt. ⌘ Langbase RAG workflow --- Langbase offers powerful primitives like [Pipe](/pipe) and [Memory](/memory) to help you ship AI features and RAG applications in minutes. Check out our detailed guide on [RAG](/guides/rag) to ship your first RAG application today! --- Integrations: Configure Langbase with Next.js https://langbase.com/docs/integrations/nextjs/ import { generateMetadata } from '@/lib/generate-metadata'; # Integrations: Configure Langbase with Next.js ### Let's integrate an AI title generator pipe agent into a Next.js app. --- In this integration guide, you will: - **Set up** a Next.js app. - **Integrate** the [Langbase SDK](/sdk) into your app via an API route and Server Actions. - **Generate** title ideas for your next blog using Pipe. --- ## Step 0: Create a Next.js Application To build the agent, we need to have a Next.js starter app. If you don't have one, you can create a new app using the following command: ```bash {{ title: 'npm' }} npx create-next-app@latest generate-titles ``` ```bash {{ title: 'pnpm' }} pnpm create-next-app@latest generate-titles ``` ```bash {{ title: 'yarn' }} yarn create-next-app@latest generate-titles ``` This will create a new Next.js application in the `generate-titles` directory. Navigate to the directory and start the development server: ```bash {{ title: 'npm' }} cd generate-titles && npm run dev ``` ```bash {{ title: 'pnpm' }} cd generate-titles && pnpm run dev ``` ```bash {{ title: 'yarn' }} cd generate-titles && yarn run dev ``` ## Step 1: Install Langbase SDK Install the Langbase SDK in this project using npm or pnpm. ```bash {{ title: 'npm' }} npm install langbase ``` ```bash {{ title: 'pnpm' }} pnpm add langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ## Step 2: Get Langbase API Key Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not have an API key, please check the instructions below. You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones.
For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. Create a `.env` file in the root of your project and add your Langbase API key. ```bash {{ title: '.env' }} LANGBASE_API_KEY=xxxxxxxxx ``` Replace xxxxxxxxx with your Langbase API key. ## Step 3: Add LLM API keys If you have set up LLM API keys in your profile, the Pipe will automatically use them. If not, navigate to the [LLM API keys](https://langbase.com/settings/llm-keys) page and add keys for different providers like OpenAI, TogetherAI, Anthropic, etc. You can add LLM API keys in your account using [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `LLM API keys` link. 4. From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, etc. ## Step 4: Fork the AI title generator agent pipe Go ahead and fork the AI title generator [agent](https://langbase.com/langbase/ai-title-generator) pipe in Langbase Studio. This agent generates 5 different titles for a given topic. ## Step 5: Run the AI title generator agent Let's learn how to use the Langbase SDK both in a serverless Next.js API route and in a Server Action. Click on one of the buttons below to choose your preferred method. Let's create an API route to generate titles using the Langbase SDK. ```ts import { Langbase } from 'langbase'; export async function POST(req: Request) { try { // Get request body from the client. const body = await req.json(); const variables = body.variables; const messages = body.messages; const shouldStream = body.stream; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); // STREAM if (shouldStream) { const { stream } = await langbase.pipes.run({ stream: true, messages, variables, name: 'ai-title-generator' }); return new Response(stream, { status: 200 }); } // NOT STREAM const { completion } = await langbase.pipes.run({ stream: false, messages, variables, name: 'ai-title-generator' }); return Response.json(completion); } catch (error: any) { console.error('Uncaught API Error:', error); return new Response(JSON.stringify(error), { status: 500 }); } } ``` Let's take a look at the code above: 1. Create a new file `api/run-agent/route.ts` in the `app` directory of your Next.js app. 2. Write a function called `POST` that uses the Langbase SDK to run the pipe and generate titles. 3. Get the variables, messages, and stream from the request body. 4. Based on the stream value, call the `langbase.pipes.run` method with `stream: true` or `stream: false`. 5. Return the response from the API route. ## Step 6: Call the API Route Go ahead and add the following code to the `app/page.tsx` file to call the API route.
```tsx
'use client';

import { useState } from 'react';

export default function Home() {
	const [topic, setTopic] = useState('');
	const [completion, setCompletion] = useState('');
	const [loading, setLoading] = useState(false);

	const handleGenerateCompletion = async () => {
		setLoading(true);
		try {
			const response = await fetch('/api/run-agent', {
				method: 'POST',
				headers: { 'Content-Type': 'application/json' },
				body: JSON.stringify({
					stream: false,
					messages: [{ role: 'user', content: topic }],
					variables: { topic: topic },
				}),
			});

			// The API route returns the completion text as JSON.
			const completionData = await response.json();
			setCompletion(completionData);
		} catch (error) {
			console.error('Error generating completion:', error);
		} finally {
			setLoading(false);
		}
	};

	return (
		<div>
			<h1>Generate Text Completions</h1>
			<p>Enter a topic and click the button to generate titles using LLM</p>
			<input
				type="text"
				value={topic}
				onChange={e => setTopic(e.target.value)}
			/>
			<button onClick={handleGenerateCompletion}>Generate titles</button>
			{loading && <div>Loading...</div>}
			{completion && (
				<div>
					<h2>Generated Titles:</h2>
					<pre>{completion}</pre>
				</div>
			)}
		</div>
	);
}
```
```tsx {{ title: 'app/page.tsx' }}
'use client';

import { useState } from 'react';
import { getRunner } from 'langbase';

export default function Home() {
	const [topic, setTopic] = useState('');
	const [completion, setCompletion] = useState('');
	const [loading, setLoading] = useState(false);

	const handleGenerateCompletion = async () => {
		setLoading(true);
		try {
			const response = await fetch('/api/run-agent', {
				method: 'POST',
				headers: { 'Content-Type': 'application/json' },
				body: JSON.stringify({
					stream: true,
					messages: [{ role: 'user', content: topic }],
					variables: { topic: topic },
				}),
			});

			if (!response.ok) {
				const error = await response.json();
				console.error('Error generating completion:', error);
				return;
			}

			if (response.body) {
				// Stream the response chunks and append their content to the completion state.
				const runner = getRunner(response.body);
				for await (const chunk of runner) {
					const content = chunk?.choices[0]?.delta?.content || '';
					content && setCompletion(prev => prev + content);
				}
			}
		} catch (error) {
			console.error('Error generating completion:', error);
		} finally {
			setLoading(false);
		}
	};

	return (
		<div>
			<h1>Generate Text Completions</h1>
			<p>Enter a topic and click the button to generate titles using LLM</p>
			<input
				type="text"
				value={topic}
				onChange={e => setTopic(e.target.value)}
			/>
			<button onClick={handleGenerateCompletion}>Generate titles</button>
			{loading && <div>Loading...</div>}
			{completion && (
				<div>
					<h2>Generated Titles:</h2>
					<pre>{completion}</pre>
				</div>
			)}
		</div>
	);
}
```
Let's break down the code above: 1. Create states for `topic`, `completion`, and `loading`. 2. Create a function called `handleGenerateCompletion` that calls the API route. 3. Use the `fetch` API to call the API route and pass the `topic` variable in the request body. 4. If `stream` is `true`, use the `getRunner` method to get the stream and update the `completion` state with the generated titles. 5. If `stream` is `false`, read the JSON response and update the `completion` state with the generated titles. 6. Display the generated titles in a list format.
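Before moving on, you can also sanity-check the API route directly from the terminal (a minimal sketch, assuming your dev server is running on `http://localhost:3000` as in Step 7):

```bash
curl http://localhost:3000/api/run-agent \
  -H 'Content-Type: application/json' \
  -d '{
    "stream": false,
    "messages": [{ "role": "user", "content": "Large Language Models" }],
    "variables": { "topic": "Large Language Models" }
  }'
```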
Let's create a Server Action to generate titles using the Langbase SDK. Create a new file `actions.ts` in the `app` directory and add the following code:

```tsx
'use server';

import {Langbase} from 'langbase';

export const runAiTitleGeneratorAgent = async ({topic}: {topic: string}) => {
	try {
		const langbase = new Langbase({
			apiKey: process.env.LANGBASE_API_KEY!,
		});

		const response = await langbase.pipes.run({
			stream: false,
			name: 'ai-title-generator',
			variables: {topic: topic},
			messages: [{role: 'user', content: topic}],
		});

		return response.completion;
	} catch (error: any) {
		console.error('Uncaught API Error:', error);
		throw new Error('Failed to generate title.');
	}
};
```

```tsx
'use server';

import {Langbase} from 'langbase';

export const runAiTitleGeneratorAgent = async ({topic}: {topic: string}) => {
	try {
		const langbase = new Langbase({
			apiKey: process.env.LANGBASE_API_KEY!,
		});

		const streamResponse = await langbase.pipes.run({
			stream: true,
			name: 'ai-title-generator',
			variables: {topic: topic},
			messages: [{role: 'user', content: topic}],
		});

		return streamResponse;
	} catch (error: any) {
		console.error('Uncaught API Error:', error);
		throw new Error('Failed to generate title.');
	}
};
```

Let's break down the code above: 1. Import the `Langbase` class from the `langbase` package. 2. Create a function called `runAiTitleGeneratorAgent` that takes a `topic` as an argument. 3. Inside the function, create a new instance of `Langbase` with the API key. 4. Call the `langbase.pipes.run` method with the `stream` option set to `true` or `false`, depending on whether you want to use streaming or not. 5. Return the response from the `langbase.pipes.run` method. 6. If an error occurs, log the error and throw a new error. ## Step 6: Call the Server Action Now that we have created the Server Action, we can call it from the client side. Let's add the following code to the `app/page.tsx` file to call the Server Action.

```tsx {{ title: 'app/page.tsx' }}
'use client';

import { useState } from 'react';
import { runAiTitleGeneratorAgent } from './actions';

export default function Home() {
	const [topic, setTopic] = useState('');
	const [completion, setCompletion] = useState('');
	const [loading, setLoading] = useState(false);

	const handleRunAiTitleGeneratorAgent = async () => {
		setLoading(true);
		const completion = await runAiTitleGeneratorAgent({topic: topic});
		setCompletion(completion);
		setLoading(false);
	};

	return (
		<div>
			<h1>Generate Text Completions</h1>
			<p>Enter a topic and click the button to generate titles using LLM</p>
			<input
				type="text"
				value={topic}
				onChange={e => setTopic(e.target.value)}
			/>
			<button onClick={handleRunAiTitleGeneratorAgent}>Generate titles</button>
			{loading && <div>Loading...</div>}
			{completion && (
				<div>
					<p>Completion: {completion}</p>
				</div>
			)}
		</div>
	);
}
```
```tsx {{ title: 'app/page.tsx' }}
'use client';

import { useState } from 'react';
import { runAiTitleGeneratorAgent } from './actions';
import { getRunner } from 'langbase';

export default function Home() {
	const [topic, setTopic] = useState('');
	const [completion, setCompletion] = useState('');
	const [loading, setLoading] = useState(false);

	const handleRunAiTitleGeneratorAgent = async () => {
		setLoading(true);
		try {
			const response = await runAiTitleGeneratorAgent({
				topic: topic,
			});

			if (response.stream) {
				// Stream chunks and append their content to the completion state.
				const stream = getRunner(response.stream);
				for await (const chunk of stream) {
					const content = chunk?.choices[0]?.delta?.content;
					content && setCompletion(prev => prev + content);
				}
			}
		} finally {
			// Reset the loading state once streaming finishes.
			setLoading(false);
		}
	};

	return (
		<div>
			<h1>Generate Text Completions</h1>
			<p>Enter a topic and click the button to generate titles using LLM</p>
			<input
				type="text"
				value={topic}
				onChange={e => setTopic(e.target.value)}
			/>
			<button onClick={handleRunAiTitleGeneratorAgent}>Generate titles</button>
			{loading && <div>Loading...</div>}
			{completion && (
				<div>
					<p>Completion: {completion}</p>
				</div>
			)}
		</div>
	);
}
```
Let's break down the code above: 1. Import the `runAiTitleGeneratorAgent` function from the `actions.ts` file. 2. Create states for `topic`, `completion`, and `loading`. 3. Create a function called `handleRunAiTitleGeneratorAgent` that calls the Server Action. 4. Call the `runAiTitleGeneratorAgent` function and pass the `topic` variable. 5. If `stream` is `true`, use the `getRunner` method to get the stream and update the `completion` state with the generated titles. 6. If `stream` is `false`, set the `completion` state directly with the returned completion text. 7. Display the generated titles in a list format.
## Step 7: Run the Next.js App To run the Next.js app, execute the following command: ```bash {{ title: 'npm' }} npm run dev ``` ```bash {{ title: 'pnpm' }} pnpm run dev ``` ```bash {{ title: 'yarn' }} yarn run dev ``` Open your browser and navigate to `http://localhost:3000`. You should see the app running with an input field to enter a topic, a button to generate titles, and a paragraph to display the generated titles. Give it a try by entering a topic and clicking the `Generate titles` button. Here are example AI-generated titles for the topic `Large Language Models`: ```md 1. "Unlocking the Power of Large Language Models" 2. "The Future of AI: Large Language Models Explained" 3. "Large Language Models: Transforming Communication" 4. "Understanding Large Language Models in AI" 5. "The Impact of Large Language Models on Society" ``` --- ## Next Steps - Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk). - Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support. ---
Integrations: Azure OpenAI https://langbase.com/docs/integrations/azure-openai/ import { generateMetadata } from '@/lib/generate-metadata'; # Integrations: Azure OpenAI ### Learn how to use Azure OpenAI Models in Langbase --- Azure OpenAI provides custom deployments of OpenAI models for enhanced security, regional compliance, and control. Langbase fully supports Azure OpenAI models, making integration seamless. Follow this step-by-step guide to use Azure OpenAI models in Langbase. --- ## Step 1: Create an Azure OpenAI Resource If you have already set up and deployed an Azure OpenAI model, you can skip to Step 3. Log in and create an [Azure OpenAI resource here](https://portal.azure.com/?microsoft%5Fazure%5Fmarketplace%5FItemHideKey=microsoft%5Fopenai%5Ftip#create/Microsoft.CognitiveServicesOpenAI). ## Step 2: Deploy a Model in your Azure OpenAI Resource Once you have created an Azure OpenAI resource, deploy a model within it. Go to your [Azure OpenAI resource](https://ai.azure.com/), navigate to **Deployments** and click on **Deploy model**. ![Deploy Model](https://raw.githubusercontent.com/LangbaseInc/docs-images/refs/heads/main/docs/azure/azure-deploy-model.jpg) Select the model you want to deploy, follow the deployment instructions, and complete the process. ![Selecting Model](https://raw.githubusercontent.com/LangbaseInc/docs-images/refs/heads/main/docs/azure/azure-select-model.jpg) ## Step 3: Get your Azure OpenAI credentials Go to [ai.azure.com](https://ai.azure.com/), navigate to your **Deployments**. Click on the deployed model you want to use in Langbase. ![Deployed Models](https://raw.githubusercontent.com/LangbaseInc/docs-images/refs/heads/main/docs/azure/azure-deployments.jpg) Inside the deployed model page, copy your **Key**, then click **Open in playground**. ![Azure Key](https://raw.githubusercontent.com/LangbaseInc/docs-images/refs/heads/main/docs/azure/azure-key.jpg) In the playground, click **View Code**, select **Key Authentication**, and copy the following three values as shown in the screenshot below: 1. API version 2. Resource name 3. Deployment name ![Azure OpenAI Model Credentials](https://raw.githubusercontent.com/LangbaseInc/docs-images/refs/heads/main/docs/azure/azure-credentials.jpg) ## Step 4: Configure Azure OpenAI in Langbase In [Langbase studio](https://langbase.com/studio), navigate to **Settings** > **LLM API Keys** from the sidebar. Select **Azure OpenAI**, then enter the Azure OpenAI credentials you copied in Step 3: - **API Key**: Your Azure OpenAI Key - **Resource Name**: Your Azure OpenAI Resource Name - **API Version**: Your Azure OpenAI API Version - **Deployment Name**: Your Azure OpenAI Deployment Name - **Model Name**: The OpenAI Model Name (often the same as Deployment Name) ![Azure OpenAI Configuration](https://raw.githubusercontent.com/LangbaseInc/docs-images/refs/heads/main/docs/azure/azure-keyset.jpg) Click **Add** to save the configuration. Your credentials will be encrypted and stored securely. You are all set to use Azure OpenAI models in Langbase. Go to **[Pipes](https://langbase.com/studio)**, create a new pipe, select an Azure OpenAI model, and start using it in Langbase. Guide: How to use Vision https://langbase.com/docs/guides/vision/ import { generateMetadata } from '@/lib/generate-metadata'; # Guide: How to use Vision ### A step-by-step guide to using the Vision capabilities of a Vision model in Langbase Pipes. --- In this guide, we will learn how to send an image to a **Vision** model in a Langbase Pipe and get it to answer questions about it.
### What is Vision? LLM models with vision capabilities can take images as input, understand them, and generate text-based answers about them. Vision models can be used to answer questions about images, generate captions, or provide descriptions of visual content. Vision models are also used for tasks like OCR, image classification, and object detection.

| **LLM Type** | **Input** | **Output** |
|-------------------------------|--------------|------------|
| Unimodal without Vision | Text | Text |
| Multimodal with Vision | Text + Image | Text |

Let's say we send the following image to a Vision model and ask it to describe the image. The Vision model will process the image and generate a text-based response like this. ``` The image depicts an iridescent green sweat bee, likely of the genus Agapostemon or Augochlorini. In the image, the bee is perched on a flower, likely foraging for nectar or pollen, which is a common behavior for these pollinators. ``` --- ## How to use Vision in Langbase Pipes? Vision is supported in Langbase Pipes across different LLM providers, including OpenAI, Anthropic, Google, and more. Using Vision in Langbase Pipes is simple. You can send images in the API request and get text answers about them. ## Sending Images to Pipe for Vision First, select a Vision model that supports image input in your [Langbase Pipe](/pipe). You can choose from a variety of Vision models from different LLM providers. For example, OpenAI's `gpt-4o` or Anthropic's `claude-3.5-sonnet`. The Pipe Run API matches the [OpenAI spec](https://platform.openai.com/docs/guides/vision) for Vision requests. When running the pipe, provide the image in a message inside the `messages` array. Here is what your messages will look like for vision requests: ```ts // Pipe Run API "messages": [ { "role": "user", "content": [ { "type": "text", "text": "What is in this image?" }, { "type": "image_url", "image_url": { "url": "data:image/png;base64,iVBOR...xyz" // base64 encoded image } } ] } ] ``` In the above example, we are sending an image URL (base64 encoded image) as input to the vision model pipe, which will process the image and give a text response. Follow the [Run Pipe API spec](/api-reference/pipe/run) for detailed request types. ### Image Input Guidelines for Vision Here are some considerations when using vision in Langbase Pipes: 1. **Message Format** - Images can be passed in `user` role messages. - Message `content` must be an array of content parts (for text and images) in vision requests. In text-only requests, the message `content` is a string. 2. **Image URL** - The `image_url` field is used to pass the image URL, which can be: 1. **Base64 encoded images**: Supported by all providers. 2. **Public URLs**: Supported only by OpenAI. 3. **Provider-specific limits** - Different LLM providers may impose varying restrictions on image size, format, and the number of images per request. - Refer to the specific provider’s documentation for precise limits. - Langbase imposes no additional restrictions. 4. **Image Quality Settings** (OpenAI only) - OpenAI models support an optional `detail` field in the `image_url` object for controlling image quality. - The `detail` field can be set to `low`, `high`, or `auto` to control the quality of the image sent to the model. ## Examples Here are some example Pipe Run requests utilizing Vision models in Langbase Pipes. ### Example 1: Sending a Base64 Image Here is an example of sending a base64 image in a Pipe Run API request.
```bash curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "stream": false, "messages": [ { "role": "user", "content": [ { "type": "text", "text": "Describe this image." }, { "type": "image_url", "image_url": { {/* An example image of colorful squares */} "url": "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAgAAAAIACAIAAAB7GkOtAAANG0lEQVR4nOzXf6/fdX3G8XPaLziBQmu7TSitVWGtdC5S6IEKnvFjskUZcgQqwqwiMEm2cIaigktBWDvYJtDqiuKZM2v5Fea6QIHCRiKCoxsQB7TUShyQTAu2heCQdutq6624EpLr8bgB1+vzxyd55j04b/03h5JOPuAd0f2j/uSc6P7Qsuz8se88O7q/e/vq6P49V8+J7q+e9p3o/r2TpkX3X9i7Nrq/9OvZ/39i9RHR/bEzt0T3l+3ZF91/cseq6P6CHz8e3Z8UXQfgTUsAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQaXvu2j2QPXLIxur95cF92//Fro/tzjn80ur/kU9ui+2MPjUf3z967K7q/9dMrovsXXbQ3uv/i8Mei+7/2t4dH9+8e/Ul0/43BZdH98z53XXT/1I98I7rvBQBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBqemDwSPXD9utXR/RWH/DK6v+x9z0T3L9z6oej+f3zoiOj+ziXD0f25v39ndH/lv22K7j87eXF0/5OvL43uv+34+6P72z6xI7p/4Jefju6P7fjj6P4V586N7nsBAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBrcd8InogSOefm90/5rTT43u773zsej+Ly7eL7q/9qL50f3H/vKV6P6D9/4iuv+BY/eP7m9e+HJ0f/mRo9H9Jyc/EN0/c3BzdH/R1uei+zd9bFV0//nlT0X3vQAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFKDZy59LnpgYsui6P4P79wY3V86463R/bkzDozuzzvp0uj+9GPPi+5vP3lDdP/fP3NNdH/8/qui+/9/6rej+5fP/PPo/iGbzorun/7JNdH9+TMuiO4fPWUkuu8FAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUGj708yujB+Ycd090/5/uOy66/7unzI/uH/ntf4nu/9acQXT//ZdPju5/5a5Z0f3D5o1E93cfuSq6v3PnW6P7n7lv/+j+4iXTovvD886P7t/wV4dE96dOvTK67wUAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQarH9pYfTAB6c8G92/+JBF0f0nvrczuv+FxS9E92+4Ynt0f8X4hdH9524aRPdnzJmI7r8x+vbo/o5FH47un7FkbXT/R4euie5vmbwsun/70B9E9ydtuz67H10H4E1LAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUGpx7zsLogZ/+64bo/lf/Ojo/NLpnT3R/xWEHRfePmPn16P7Eoj+M7n93+i3R/ZOPOTy6Pzb+8+j+jQ//OLo/Pn1WdP+R7+2K7q9Y/Zbs/pKp0f0bN74W3fcCACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKDTZtvyN6YO633hXdf37TzOj+I9ftju5PPDAa3X/Ha7Oi+zOnnR/dv/X+W6P7f3f1yuj+lSufj+7ffPj06P69+90W3V+zeVV0//CRhdH9i6ceFN1fe/lL0X0vAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACg1GD5yNXRAyNjs6L7//w/S6P7t7zySHR/zcJV0f3bLv7t6P5n3/mz6P7Qhoei86dvOz66v3j9tOj+TesfiO7Pu/vA6P66G++I7j946Ibo/p+9uCC6P+dP90X3vQAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFKDr95+T/TAN67fFd1fevtT0f3//uZvRvfXPfq/0f2Dfv0r0f1pn50a3R97eF90/0s7Ph3d/+6e46P7z7zwcHT/zC9NRPefmHRLdP/7+16I7s9e8Gp0/4t7/z667wUAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQanHLc16IHTrvooej+/ifNi+5veerh6P61G0+L7
q/88rTo/uMHL43uXzr9H6L7P3jwhOj+y0edFd1/9ZIPRPe/OP890f0Z678V3R9M2h3dX7fp/6L7S8d3Rfe9AABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUoNXZ/1X9MB3DtgW3X/kmjOj+0M3r43Oz7/ijOj+lC0nRfff+L0l0f35Z9wZ3Z+7LPv9nz9qanT/lKHXo/uPHvxcdH/0xEuj+2M/+350/7DPrYvuz96yX3TfCwCglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKDV86D9+PHrgRzP/Jrr/tR8uiO5fcv6J0f1Xzr4ruv/Mihej+wfOWB3d3zCYEd0/98ILovu/3Jr9f04fGonuz3vfydH98dm3R/eP+fAPovsbX3tXdP/oyzZE970AAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSw49NnBs98PpdT0f3T7j1/uj+HVedEt0/+P1vj+5fufiy6P67H/1odP+A0dHo/uyjj4zunzjp59H9hdfeGN2fe9q7o/tzNr83uv87//l4dH/BDddF9zdOfym67wUAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQaPuaEj0cP/MaUs6L7L1/9luj+R1+6N7r/RxufiO5/6tmfRPff89Nd0f3xzcuj+3c/eU50/4NXzY7uj21dGd3fdOJIdP8LV6yJ7u/Y/4Ho/vIpx0T3/+LaC6L7XgAApQQAoJQAAJQSAIBSAgBQSgAASgkAQCkBACglAAClBACglAAAlBIAgFICAFBKAABKCQBAKQEAKCUAAKUEAKCUAACUEgCAUgIAUEoAAEoJAEApAQAoJQAApQQAoJQAAJQSAIBSAgBQSgAASgkAQKlfBQAA//+DUW1hSVkpbAAAAABJRU5ErkJggg==" } } ] } ] }' ``` ### Example 2: Sending Image as a Public Image URL (supported by OpenAI only) Public image URLs are only supported by OpenAI, so make sure you are using an OpenAI model. ```bash curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "stream": false, "messages": [ { "role": "user", "content": [ { "type": "text", "text": "Describe this image." }, { "type": "image_url", "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/b/b5/Iridescent.green.sweat.bee1.jpg" } } ] } ] }' ``` ### Example 3: Sending multiple images You can also send multiple images attached to the same message. ```bash curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "stream": false, "messages": [ { "role": "user", "content": [ { "type": "text", "text": "How are these images different?" }, { "type": "image_url", "image_url": { "url": "" } }, { "type": "image_url", "image_url": { "url": "" } } ] } ] }' ``` Replace `` and `` with the base64 encoded images you want to send. ### Example 4: Sending multiple images in conversation turns (chat) You can also send multiple images in different messages across conversation turns. Let's say you start the conversation with the first image: ```bash curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "stream": false, "messages": [ { "role": "user", "content": [ { "type": "text", "text": "Describe this image." }, { "type": "image_url", "image_url": { "url": "" } } ] } ] }' ``` Then, in the next turn, you can send the second image: ```bash curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "stream": false, "thread_id": "", "messages": [ { "role": "user", "content": [ { "type": "text", "text": "Is this image different from the previous one?" 
}, { "type": "image_url", "image_url": { "url": "" } } ] } ] }' ``` By including the `thread_id` returned from the first request, in the second request, your Langbase Pipe automatically continues the conversation from the previous turn. ## FAQs - Make sure to use the correct Vision model that supports image input. - Langbase currently supports Vision models from OpenAI, Anthropic and Google. More providers will be supported soon. - Vision support is live on Langbase API. Vision in Studio playground is coming soon. - Langbase currently does not store images sent to Vision models. --- Guide: How to use Structured Outputs in Langbase https://langbase.com/docs/guides/structured-outputs/ import { generateMetadata } from '@/lib/generate-metadata'; # Guide: How to use Structured Outputs in Langbase ### A step-by-step guide to use Structured Outputs in Langbase. --- ## What are Structured Outputs? Structured Outputs is a feature that guarantees responses from language models adhere strictly to your supplied JSON schema. It eliminates the need for ad-hoc validation and reduces the risk of missing or malformed fields by enforcing a schema. This is particularly useful for applications—like data extraction, generative UI and chain of thoughts where you expect the model's output to be parsed according to a specific structure. Some benefits include: - **Reliable Type-Safety**: The model's output is automatically validated against your schema. It ensures that the output is always in the expected format, reducing the need for manual validation. - **Explicit Refusals**: If the model refuses to perform a request, the error or refusal is returned in a standardized format. - **Simpler Prompting**: You don't need to include extensive instructions to enforce a particular output format. --- ## Using Structured Outputs in Langbase Pipes In this guide, we will use Langbase SDK to create a Langbase pipe that uses Structured Outputs. ## Step 1: Define the JSON Schema First, you need to define the JSON schema that describes the expected output format. In TypeScript, you can use the `zod` library to define the schema. Here is an example of a simple schema that enforces the LLM to do Chain of Thought reasoning for a given math query and return the final answer: ```ts import { z } from 'zod'; import { zodToJsonSchema } from 'zod-to-json-schema'; const MathReasoningSchema = z.object({ steps: z.array( z.object({ explanation: z.string(), output: z.string(), }), ), final_answer: z.string(), }); // Convert the Zod schema to JSON Schema format const jsonSchema = zodToJsonSchema(MathReasoningSchema, { target: 'openAi' }); ``` ## Step 2: Create the Pipe Now, we can create a Langbase pipe using the `createPipe` method from the Langbase SDK. We will pass the JSON schema to this pipe, and the pipe will enforce the model's output to match this schema. ```ts import 'dotenv/config'; import { Langbase } from 'langbase'; import { z } from 'zod'; import { zodToJsonSchema } from 'zod-to-json-schema'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); const MathReasoningSchema = z.object({ steps: z.array( z.object({ explanation: z.string(), output: z.string(), }), ), final_answer: z.string(), }); // Convert the Zod schema to JSON Schema format const jsonSchema = zodToJsonSchema(MathReasoningSchema, { target: 'openAi' }); async function createMathTutorPipe() { const pipe = await langbase.pipes.create({ name: 'math-tutor', model: 'openai:gpt-4o', messages: [ { role: 'system', content: 'You are a helpful math tutor. 
		json: true,
		response_format: {
			type: 'json_schema',
			json_schema: {
				name: 'math_reasoning',
				schema: jsonSchema,
			},
		},
	});

	console.log('✅ Math Tutor pipe created:', pipe);
}

createMathTutorPipe();
```

## Step 3: Run the Pipe and Validate Output

Once you have created the pipe, you can run it with a math query. The model will respond according to the defined JSON schema. You can validate it with your original Zod schema. This confirms that the output adheres to your structured format.

```ts
async function runMathTutorPipe(question: string) {
	const { completion } = await langbase.pipes.run({
		name: 'math-tutor',
		messages: [{ role: 'user', content: question }],
		stream: false,
	});

	// Parse and validate the response using the original Zod schema
	const solution = MathReasoningSchema.parse(JSON.parse(completion));

	console.log('✅ Structured Output Response:', solution);
}

runMathTutorPipe('How can I solve 8x + 22 = -23?');
```

### What happens here:

- The pipe is executed with a math problem from the user.
- The output is parsed from JSON and validated using MathReasoningSchema.
- Any deviation from the expected format will be caught during the Zod validation process, giving you reliable, type-safe structured outputs.

Here is the JSON response we get from our Structured Outputs pipe:

```json
{
	"steps": [
		{
			"explanation": "We start with the original equation.",
			"output": "8x + 22 = -23"
		},
		{
			"explanation": "Subtract 22 from both sides to isolate the term with x.",
			"output": "8x + 22 - 22 = -23 - 22"
		},
		{
			"explanation": "Simplify both sides of the equation after subtraction.",
			"output": "8x = -45"
		},
		{
			"explanation": "Divide both sides by 8 to solve for x.",
			"output": "8x/8 = -45/8"
		},
		{
			"explanation": "Simplify the right side to find the value of x.",
			"output": "x = -45 / 8"
		}
	],
	"final_answer": "x = -45 / 8"
}
```

You can find the full code for this example [here](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nodejs/pipes/pipe.structured.outputs.ts).

By following this guide, you've leveraged Langbase to build an AI agent that responds with reliable, structured output according to your use case.

Join our [Discord community](https://langbase.com/discord) to share your builds and get support while innovating with Langbase.

Build an AI Video Wisdom Extraction Tool https://langbase.com/docs/guides/video-wisdom-extractor/ import { generateMetadata } from '@/lib/generate-metadata';

# Build an AI Video Wisdom Extraction Tool

### A step-by-step guide to build an AI Video Wisdom Extraction Tool using Langbase SDK.

---

In this guide, we will build an AI Video Wisdom Extraction Tool using the Langbase SDK. This tool will:

- Extract wisdom from a YouTube video
- Answer questions related to the video content
- Generate a summary of the video
- List main ideas and key points
- Extract quotes and key phrases
- Provide a list of references and resources
- Highlight the wow moments in the video
- Write Tweets from the video content

![VideoWisdom][cover]

---

We will create a basic Next.js application that will use the [Langbase SDK](/sdk/overview) to connect to the AI Pipes and stream the final response back to the user. Let's get started!

## Step 0: Create a Next.js Application

To build the agent, we need to have a Next.js starter application.
If you don't have one, you can create a new Next.js application using the following command:

```bash
npx create-next-app@latest video-wisdom

# or with pnpm
pnpm dlx create-next-app@latest video-wisdom
```

This will create a new Next.js application in the `video-wisdom` directory. Navigate to the directory and start the development server:

```bash
cd video-wisdom
npm run dev

# or with pnpm
pnpm run dev
```

## Step 1: Install Langbase SDK

Install the Langbase SDK in this project using npm or pnpm.

```bash
npm install langbase

# or with pnpm
pnpm add langbase
```

## Step 2: Fork the AI pipes

Fork the following AI Pipes in the ⌘ Langbase dashboard. These Pipes will power the Video Wisdom Extraction Tool:

- [YouTube Videos Q/A Pipe][chatPipe] - Answers questions related to the video content.
- [Summarize YouTube Video Pipe][summarize] - Generates a summary of the video.
- [Main Ideas Extractor Pipe][getMainIdeas] - Lists main ideas and key points.
- [List Interesting Facts Pipe][listInterestingFacts] - Extracts interesting facts from the video.
- [Wow Moments Extractor Pipe][wowMomentsExtractor] - Highlights the wow moments in the video.
- [Video Tweets Extractor Pipe][videoTweetsExtractor] - Writes Tweets from the video content.
- [Video Recommendations Extractor Pipe][videoRecommendations] - Provides a list of references and resources from the video.
- [List Quotes from Video Pipe][listQuotes] - Extracts quotes and key phrases from the video.

When you fork a Pipe, navigate to the [API](/features/api) tab located in the Pipe's navbar. There, you'll find API keys specific to each Pipe, which are essential for making calls to the Pipes using the Langbase SDK.

Create a `.env.local` file in the root directory of your project and add the following environment variables:

```bash
# Add the API key from your forked Summarize YouTube Video Pipe
LB_SUMMARIZE_PIPE_KEY=""

# Add the API key from your forked YouTube Videos Q/A Pipe
LB_GENERATE_PIPE_KEY=""

# Add the API key from your forked Main Ideas Extractor Pipe
LB_MAIN_IDEAS_PIPE_KEY=""

# Add the API key from your forked List Interesting Facts Pipe
LB_FACTS_PIPE_KEY=""

# Add the API key from your forked Wow Moments Extractor Pipe
LB_WOW_PIPE_KEY=""

# Add the API key from your forked Video Tweets Extractor Pipe
LB_TWEETS_PIPE_KEY=""

# Add the API key from your forked Video Recommendations Extractor Pipe
LB_RECOMMENDATION_PIPE_KEY=""

# Add the API key from your forked List Quotes from Video Pipe
LB_QUOTES_PIPE_KEY=""
```

## Step 3: Create Wisdom Extraction API Route

Create a new file `app/api/langbase/wisdom/route.ts`. This API route will call the Langbase AI Pipes to extract wisdom from the YouTube video.

First, we define the `GenerationType` enum and the `getEnvVar` function, which returns the Pipe API key environment variable for the type of Pipe we want to call. We will specify the type of the Pipe in the request body from the UI.

```ts
// Enum for type.
enum GenerationType {
	Generate = 'generate',
	Summarize = 'summarize',
	Quotes = 'quotes',
	Recommendation = 'recommendation',
	MainIdeas = 'mainIdeas',
	Facts = 'facts',
	Wow = 'wow',
	Tweets = 'tweets'
}

// Get the environment variable based on type.
const getEnvVar = (type: GenerationType) => {
	switch (type) {
		case GenerationType.Generate:
			return process.env.LB_GENERATE_PIPE_KEY;
		case GenerationType.Summarize:
			return process.env.LB_SUMMARIZE_PIPE_KEY;
		case GenerationType.Quotes:
			return process.env.LB_QUOTES_PIPE_KEY;
		case GenerationType.Recommendation:
			return process.env.LB_RECOMMENDATION_PIPE_KEY;
		case GenerationType.MainIdeas:
			return process.env.LB_MAIN_IDEAS_PIPE_KEY;
		case GenerationType.Facts:
			return process.env.LB_FACTS_PIPE_KEY;
		case GenerationType.Wow:
			return process.env.LB_WOW_PIPE_KEY;
		case GenerationType.Tweets:
			return process.env.LB_TWEETS_PIPE_KEY;
		default:
			return null;
	}
};
```

We also define the `RequestBody` type and the `requestBodySchema` schema for the request body of the API route.

```ts
import { z } from 'zod'; // For schema validation

// Schema for request body
const requestBodySchema = z.object({
	prompt: z.string(),
	transcript: z.string().trim().min(1),
	type: z.enum([
		GenerationType.Generate,
		GenerationType.Summarize,
		GenerationType.Quotes,
		GenerationType.Recommendation,
		GenerationType.MainIdeas,
		GenerationType.Facts,
		GenerationType.Wow,
		GenerationType.Tweets
	])
});

// Type for request body
type RequestBody = z.infer<typeof requestBodySchema>;
```

Add the route code to the `app/api/langbase/wisdom/route.ts` file:

```ts
import { NextRequest } from 'next/server';
import { Pipe, type StreamOptions } from 'langbase';

/**
 * This API route calls the Langbase AI Pipes to extract wisdom from the YouTube video.
 *
 * @param {NextRequest} req - The request object.
 * @returns {Response} The response object streaming the final response back to the frontend.
 */
export async function POST(req: NextRequest) {
	try {
		// Extract the prompt from the request body
		const reqBody: RequestBody = await req.json();
		const parsedReqBody = requestBodySchema.safeParse(reqBody);

		// If the request body is not valid
		if (!parsedReqBody.success || !parsedReqBody.data) {
			throw new Error(parsedReqBody.error.message);
		}

		// Extract the prompt from the request body
		const { prompt, transcript, type } = parsedReqBody.data;

		// Get the environment variable based on type.
		const pipeKey = getEnvVar(type);

		// If the Pipe API key is not found, throw an error.
		if (!pipeKey) {
			throw new Error('Pipe API key not found');
		}

		// Generate the response and stream from Langbase Pipe.
		return await generateResponse({ prompt, transcript, pipeKey });
	} catch (error: any) {
		return new Response(error.message, { status: 500 });
	}
}

/**
 * Generates a response by initiating a Pipe, constructing the input for the stream,
 * generating a stream by asking a question, and returning the stream in a readable stream format.
 *
 * @param {Object} options - The options for generating the response.
 * @param {string} options.transcript - The transcript to be used as user input or variable value.
 * @param {string} options.prompt - The prompt to be used as user input or variable value.
 * @param {string} options.pipeKey - The API key for the Pipe.
 * @returns {Response} The response stream in a readable stream format.
 */
async function generateResponse({
	transcript,
	prompt,
	pipeKey
}: {
	transcript: string;
	prompt: string;
	pipeKey: string;
}) {
	// 1. Initiate the Pipe.
	const pipe = new Pipe({ apiKey: pipeKey });

	// 2. Construct the input for the stream
	// 2a. If we have prompt, we pass 'transcript' as a variable.
	// This is useful when we want to use the transcript as a variable in the prompt.
	// Used with the question answers Pipe.
	// 2b. Otherwise we pass 'transcript' as user input.
	let streamInput: StreamOptions;
	if (!prompt) {
		streamInput = {
			messages: [{ role: 'user', content: transcript }]
		};
	} else {
		streamInput = {
			messages: [{ role: 'user', content: prompt }],
			variables: {
				transcript: transcript
			}
		};
	}

	// 3. Generate a stream by asking a question
	const { stream } = await pipe.streamText(streamInput);

	// 4. Done, return the stream in a readable stream format.
	return new Response(stream.toReadableStream());
}
```

Here is a quick explanation of what's happening in the code above:

- We extract the prompt, transcript, and type from the request body.
- We get the environment variable based on the type of the Pipe we want to call.
- We initiate the Pipe with the API key using the Langbase SDK.
- We construct the input for the stream. If we have a prompt, we pass the transcript as a variable. Otherwise, we pass the transcript as user input.
- We generate a stream by asking a question using the Langbase SDK.
- We return the stream in a readable stream format.

---

That's it! You have successfully created an AI Video Wisdom Extraction Tool using the Langbase SDK. You can connect the API routes to the frontend and start extracting wisdom from YouTube videos.

You can find the complete code for the VideoWisdom app in the [GitHub repository][gh].

---

## Live demo

You can try out the live demo of the VideoWisdom [here][demo].

![VideoWisdom Tool][cover]

---

[demo]: https://videowisdom.langbase.dev/
[lb]: https://langbase.com
[summarize]: https://langbase.com/examples/youtube-video-summarizer
[chatPipe]: https://langbase.com/examples/you-tube-videos-qn-a
[getMainIdeas]: https://langbase.com/examples/youtube-video-main-ideas-extractor
[videoTweetsExtractor]: https://langbase.com/examples/youtube-video-tweets-extractor
[wowMomentsExtractor]: https://langbase.com/examples/youtube-video-wow-moments
[listInterestingFacts]: https://langbase.com/examples/youtube-video-interesting-facts-extractor
[videoRecommendations]: https://langbase.com/examples/youtube-video-recommendations-extractor
[listQuotes]: https://langbase.com/examples/youtube-video-quotes-extractor
[gh]: https://github.com/LangbaseInc/langbase-examples/tree/main/examples/video-wisdom
[cover]: https://github.com/user-attachments/assets/e78e53bf-e61b-4ea5-a6b9-4c4c2c81a56a
[download]: https://download-directory.github.io/?url=https://github.com/LangbaseInc/langbase-examples/tree/main/examples/video-wisdom
[signup]: https://langbase.fyi/io
[qs]: https://langbase.com/docs/pipe/quickstart
[docs]: https://langbase.com/docs
[xsa]: https://x.com/SaqibAmeen
[local]: http://localhost:3000
[mit]: https://img.shields.io/badge/license-MIT-blue.svg?style=for-the-badge&color=%23000000

Guide: Run a Pipe using user/org API keys https://langbase.com/docs/guides/run-pipe-user-org-api-keys/ import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Run a Pipe using user/org API keys

### A step-by-step guide to run a pipe using user/org API keys.

---

In this guide, we will learn how to run a pipe using user/org keys. We will pass a user/org API key as the auth token to the `/run` endpoint and pass the name of the pipe we want to run. Let's get started!

---

## Step 1: Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

## Step 2: Run a pipe with user/org API keys

Lastly, instead of a pipe API key, we will pass the user/org API key as the auth token to the `/run` endpoint.
We will also pass the name of the pipe we want to run.

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/v1/pipes/run \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer ' \
-d '{
	"messages": [
		{
			"role": "user",
			"content": "Hello!"
		}
	],
	"name": ""
}'
```

```js {{ title: 'Node.js' }}
async function generateCompletion() {
	const url = 'https://api.langbase.com/v1/pipes/run';
	const apiKey = '';

	const data = {
		messages: [{ role: 'user', content: 'Hello!' }],
		name: ''
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(data),
	});

	const res = await response.json();
	const completion = res.completion;

	return completion;
}
```

```python {{ title: 'Python' }}
import requests
import json

def generate_completion():
    url = 'https://api.langbase.com/v1/pipes/run'
    api_key = ''

    body_data = {
        "messages": [
            {"role": "user", "content": "Hello!"}
        ],
        "name": ""
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}'
    }

    response = requests.post(url, headers=headers, data=json.dumps(body_data))
    res = response.json()
    completion = res['completion']

    return completion
```

---

Guide: Retrieval Augmented Generation (RAG) https://langbase.com/docs/guides/rag/ import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Retrieval Augmented Generation (RAG)

### A step-by-step guide to implement RAG using Langbase Pipes and Memory.

---

In this guide, we will build a RAG application that allows users to ask questions from their documents. We call it Documents QnA RAG App. You can use it to ask questions from documents you uploaded to Langbase Memory.

---

First, we will create a Langbase memory, upload data to it, and then connect it to a pipe. We will then create a Next.js application that uses the Langbase SDK to run the pipe and generate responses. Let's get started!

## Step 1: Create a Memory

In the Langbase dashboard, navigate to the Memory section, create a new memory and name it `rag-wikipedia`. You can also add a description to the memory.

⌘ Langbase create memory

## Step 2: Upload RAG Data

Upload the data to the memory you created. You can upload any data for your RAG. For this example, we uploaded a PDF file of the Wikipedia page of [Claude](https://en.wikipedia.org/wiki/Claude_(language_model)). You can either drag and drop the file or click on the upload button to select the file.

Once uploaded, wait a few minutes to let Langbase process the data. Langbase takes care of chunking, embedding, and indexing the data for you. Click on the Refresh button to see the latest status. Once you see the status as `Ready`, you can move to the next step.

Upload data to Langbase memory

### Create Memory and Upload Data using Langbase Memory API

Alternatively, we can use the [Langbase Memory API](/api-reference/memory) to create and upload the data as well.
Create memory and upload data using Langbase Memory API

### Create a Memory

```js
async function createNewMemory() {
	const url = 'https://api.langbase.com/v1/memory';
	const apiKey = '';

	const memory = {
		name: "rag-wikipedia",
		description: "This memory contains the Wikipedia page of Claude",
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(memory),
	});

	const newMemory = await response.json();
	return newMemory;
}
```

You will get a response with the newly created memory object:

```js
{
	"name": "rag-wikipedia",
	"description": "This memory contains the Wikipedia page of Claude",
	"owner_login": "langbase",
	"url": "https://langbase.com/memorysets/langbase/rag-wikipedia"
}
```

---

### Upload Data

Uploading data is a two-step process. First, we get a signed URL to upload the data. Then, we upload the data to the signed URL.

Let's see how to get a signed URL:

```js
async function getSignedUploadUrl() {
	const url = 'https://api.langbase.com/v1/memory/documents';
	const apiKey = '';

	const newDoc = {
		memoryName: 'rag-wikipedia',
		ownerLogin: 'langbase',
		fileName: 'claude-wikipedia.pdf',
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(newDoc),
	});

	const res = await response.json();
	return res;
}
```

The response will contain the signed URL; something like this:

```js
{
	"signedUrl": "https://b.langbase.com/..."
}
```

Now that we have the signed URL, we can upload the data to it:

```js
const fs = require("fs");

async function uploadDocument(signedUrl, filePath) {
	const file = fs.readFileSync(filePath);

	const response = await fetch(signedUrl, {
		method: 'PUT',
		headers: {
			'Content-Type': 'application/pdf',
		},
		body: file,
	});

	return response;
}
```

You should get a response with status `200` if the upload is successful. You can upload many other types of files as well. Check out the [Memory API](/api-reference/memory) for more details.
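To see how these pieces fit together, here is a minimal end-to-end sketch. It assumes the three helper functions defined above and a local `claude-wikipedia.pdf` file (a hypothetical path; use your own document and match it to the `fileName` in `getSignedUploadUrl`):

```js
// Minimal end-to-end sketch: create the memory, request a signed upload URL,
// then PUT the local PDF to that URL. Assumes the createNewMemory,
// getSignedUploadUrl, and uploadDocument helpers defined above, and a local
// file at ./claude-wikipedia.pdf (hypothetical path).
async function main() {
	// 1. Create the memory (skip this if it already exists).
	await createNewMemory();

	// 2. Get a signed URL for the document named in getSignedUploadUrl.
	const { signedUrl } = await getSignedUploadUrl();

	// 3. Upload the file contents to the signed URL.
	const response = await uploadDocument(signedUrl, './claude-wikipedia.pdf');
	console.log('Upload status:', response.status); // 200 on success
}

main();
```

Once the upload returns `200`, Langbase processes the document (chunking, embedding, and indexing) just as it does for dashboard uploads.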
---

## Step 3: Create a Pipe

In your Langbase dashboard, create a new pipe and name it `rag-wikipedia`. You can also add a description to the pipe.

⌘ Langbase create pipe

## Step 4: Connect Memory to Pipe

Open the newly created pipe and click on the `Memory` button. From the dropdown, select the memory you created in the previous step and that's it.

⌘ Langbase connect memory to pipe

---

Now that we have created a memory, uploaded data to it, and connected it to a pipe, we can create a Next.js application that uses the Langbase SDK to generate responses.

## Step 5: Clone the Starter Project

Clone the [RAG Starter Project](https://github.com/LangbaseInc/langbase-examples/tree/main/starters/documents-qna-rag) to get started. The app contains a single page with a form to ask questions about your documents. This project uses:

1. [Langbase SDK](/sdk)
2. [Langbase Pipe](/pipe)
3. [Langbase Memory](/memory)
4. Next.js
5. Tailwind CSS

---

## Step 6: Install Dependencies and Langbase SDK

Install the dependencies using the following command:

```bash
npm install
```

Install the Langbase SDK using the following command:

```bash
npm install langbase
```

---

## Step 7: Create a route

`app/api/generate/route.ts` and add the following code:

```ts
import { Pipe } from 'langbase';
import { NextRequest } from 'next/server';

/**
 * Generate response and stream from Langbase Pipe.
 *
 * @param req
 * @returns
 */
export async function POST(req: NextRequest) {
	try {
		if (!process.env.LANGBASE_PIPE_API_KEY) {
			throw new Error(
				'Please set LANGBASE_PIPE_API_KEY in your environment variables.'
			);
		}

		const { prompt } = await req.json();

		// 1. Initiate the Pipe.
		const pipe = new Pipe({ apiKey: process.env.LANGBASE_PIPE_API_KEY });

		// 2. Generate a stream by asking a question
		const { stream } = await pipe.streamText({
			messages: [{ role: 'user', content: prompt }]
		});

		// 3. Done, return the stream in a readable stream format.
		return new Response(stream.toReadableStream());
	} catch (error: any) {
		return new Response(error.message, { status: 500 });
	}
}
```

## Step 8: Go to `starters/rag-ask-docs/components/langbase/docs-qna.tsx` and add the following import:

```tsx
import { fromReadableStream } from 'langbase';
```

Add the following code in the `DocsQnA` component after the states declaration:

```tsx
const handleSubmit = async (e: React.FormEvent) => {
	// Prevent form submission
	e.preventDefault();

	// Prevent empty prompt or loading state
	if (!prompt.trim() || loading) return;

	// Change loading state
	setLoading(true);
	setCompletion('');
	setError('');

	try {
		// Fetch response from the server
		const response = await fetch('/api/generate', {
			method: 'POST',
			body: JSON.stringify({ prompt }),
			headers: { 'Content-Type': 'application/json' },
		});

		// If response is not successful, throw an error
		if (response.status !== 200) {
			const errorData = await response.text();
			throw new Error(errorData);
		}

		// Parse response stream
		if (response.body) {
			// Stream the response body
			const stream = fromReadableStream(response.body);

			// Iterate over the stream
			for await (const chunk of stream) {
				const content = chunk?.choices[0]?.delta?.content;
				content && setCompletion(prev => prev + content);
			}
		}
	} catch (error: any) {
		setError(error.message);
	} finally {
		setLoading(false);
	}
};
```

Replace the following piece of code in the `DocsQnA` component:

```tsx
onSubmit={(e) => {
	e.preventDefault();
}}
```

With the following code:

```tsx
onSubmit={handleSubmit}
```

## Step 9: Create a copy of `.env.local.example` and rename it to `.env.local`.
Add the [API key of the pipe](/features/api) that we created in step #3 to the `.env.local` file:

```bash
# !! SERVER SIDE ONLY !!
# Pipes.
LANGBASE_PIPE_API_KEY="YOUR_PIPE_API_KEY"
```

## Step 10: Run the project using the following command:

```bash
npm run dev
```

Your app should be running on [http://localhost:3000](http://localhost:3000). You can now ask questions about the documents you uploaded to the memory.

🎉 That's it! You have successfully implemented a RAG application. It is a Next.js application, so you can deploy it to any platform of your choice, like Vercel, Netlify, or Cloudflare.

## Live Demo

You can see the live demo of this project [here](https://documents-qna-rag.langbase.dev/).

Documents QnA RAG

---

Further resources:

- Complete code on [GitHub](https://github.com/LangbaseInc/langbase-examples/tree/main/examples/documents-qna-rag).
- Pipe used in this example on [Langbase Pipes](https://langbase.com/examples/rag-wikipedia).

---
Guide: How to insert text into Memory https://langbase.com/docs/guides/memory-upload/ import { generateMetadata } from '@/lib/generate-metadata';

# Guide: How to insert text into Memory

### A step-by-step guide to insert raw text into Memory using the Langbase SDK.

---

In this guide, we will learn how to insert raw text into Memory using the Langbase SDK. This way you can turn your text into a file and store it in Memory for RAG.

---

## Step 0: Get API Key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

## Step 1: Install Langbase SDK

Run the following command to install the Langbase SDK:

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm add langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

## Step 2: Add Langbase API Key

Add your Langbase API key to the `.env` file:

```bash
LANGBASE_API_KEY=YOUR_API_KEY
```

## Step 3: Create a Memory

We will use the [`langbase.memories.create()`](/sdk/memory/create) function to create a new memory. This memory will store the text file.

```js
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

await langbase.memories.create({
	name: 'text-based-memory',
	description: 'This memory contains documents created from text'
});
```

The response will contain the newly created memory object; for example:

```js
{
	name: 'text-based-memory',
	description: 'This memory contains documents created from text',
	owner_login: 'langbase',
	url: 'https://langbase.com/memorysets/langbase/text-based-memory'
}
```

---

## Step 4: Upload the text

Now that we have the memory, we can upload text to it. We will use the [`langbase.memories.documents.upload()`](/sdk/memory/document-upload) function to upload the text as a document to the memory.

```js
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

const text = "Hello, World!";
const document = new Blob([text], { type: "text/plain" });

await langbase.memories.documents.upload({
	memoryName: 'text-based-memory',
	contentType: 'text/plain',
	documentName: 'file-from-text.txt',
	document: document,
});
```

That's it! You have successfully created a document from text and uploaded it to Memory using the Langbase SDK.

You can see the text as a document using the UI or using the [`langbase.memories.documents.list()`](/sdk/memory/document-list) function from the Langbase SDK. Let's list the documents in the memory to see the uploaded document.

```js
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

const memoryDocumentsList = await langbase.memories.documents.list({
	memoryName: 'text-based-memory'
});

console.log(memoryDocumentsList);
```

We will get a response with the list of documents in the memory.

```js
[
	{
		"name": "file-from-text.txt",
		"status": "completed",
		"status_message": null,
		"metadata": {
			"size": 13,
			"type": "text/plain"
		},
		"enabled": true,
		"chunk_size": 10000,
		"chunk_overlap": 2048,
		"owner_login": "saqib"
	}
]
```

We can see the uploaded file `file-from-text.txt` in the memory. You can now use this file in your RAG chatbot or any other application that uses Langbase Memory.

---

Guide: Add AI in your docs https://langbase.com/docs/guides/setup-docs-agent/ import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Add AI in your docs

### A step-by-step guide to add AI in your docs using Langbase.
---

In this series of guides, we will learn how to add AI in your docs using Langbase. We will create an AI memory, an AI agent, and set up a chatbot to interact with the AI agent.

1. [Create an AI Memory](/guides/setup-docs-agent/create-memory)
2. [Create an AI Agent](/guides/setup-docs-agent/create-agent)
3. [Setup a Chatbot](/guides/setup-docs-agent/setup-chatbot)

---

## Prerequisites: Generate Langbase API Key

We will use BaseAI and the Langbase SDK in this guide. To work with both, you need to generate an API key. Visit the [User/Org API key documentation](/api-reference/api-keys) page to learn more.

---

## Next steps

Time to build. Check out the first guide to create an AI memory using BaseAI.

---

Guide: How to replace a document in Memory https://langbase.com/docs/guides/memory-document-replace/ import { generateMetadata } from '@/lib/generate-metadata';

# Guide: How to replace a document in Memory

### A step-by-step guide to replace an existing document in Memory using the Langbase API.

---

In this guide, we will learn how to replace an existing document in Memory using the Langbase API. This is useful when you need to update the content of a file that is already stored in a memory.

---

## Step 0: Get API Key

You will need to generate an API key to authenticate your requests. Visit the [User/Org API key documentation](https://langbase.com/docs/api-reference/api-keys) page to learn more.

## Step 1: Create a Signed URL for Replacement

We'll use the `upload` endpoint of the Memory API to create a signed URL for replacing the document. This process is similar to uploading a new document, but we'll use the same filename as the existing document.

```js
async function getSignedReplaceUrl() {
	const url = 'https://api.langbase.com/v1/memory/documents';
	const apiKey = '';

	const replaceDoc = {
		memoryName: 'your-memory-name',
		ownerLogin: 'your-username',
		fileName: 'existing-document-name.pdf', // Use the name of the existing document
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(replaceDoc),
	});

	const res = await response.json();
	return res;
}
```

The response will contain the signed URL, which looks like this:

```js
{
	"signedUrl": "https://b.langbase.com/..."
}
```

## Step 2: Upload the Replacement Document

Now that we have the signed URL, we can use it to upload the replacement document. We'll use the `PUT` method to upload the new file, overwriting the existing document.

```js
const fs = require('fs');

async function replaceDocument(signedUrl, filePath) {
	const file = fs.readFileSync(filePath);

	const response = await fetch(signedUrl, {
		method: 'PUT',
		headers: {
			'Content-Type': 'application/pdf', // Adjust based on your file type
		},
		body: file,
	});

	return response;
}
```

## Step 3: Verify the Replacement

After replacing the document, you can verify that the update was successful by listing the documents in the memory again.
```js async function listMemoryDocuments() { const url = 'https://api.langbase.com/v1/memory/{memoryName}/documents'; const apiKey = ''; const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` }, }); const memoryDocumentsList = await response.json(); return memoryDocumentsList; } ``` The response will show the updated document with the same name but potentially different metadata: ```js [ { "name": "existing-document-name.pdf", "status": "completed", "status_message": null, "metadata": { "size": 1048576, // This might be different "type": "application/pdf" }, "enabled": true, "chunk_size": 10000, "chunk_overlap": 2048, "owner_login": "your-username" } ] ``` --- That's it! You have successfully replaced an existing document in Memory using the Langbase API. The document retains its original name, but its content has been updated. This process allows you to keep your Memory up-to-date without creating duplicate entries. Remember that replacing a document will trigger a re-processing of the content, including re-chunking and re-embedding. This ensures that your RAG system uses the most current information when responding to queries. Guide: Build a Composable RAG Chatbot on your Docs https://langbase.com/docs/guides/rag-on-docs/ import { generateMetadata } from '@/lib/generate-metadata'; # Guide: Build a Composable RAG Chatbot on your Docs ### A step-by-step guide to creating a RAG chatbot on your documentation files using Langbase Pipes and Memory. --- In this guide, we will create a **RAG Chatbot** on your **documentation files** and synchronize your documentation files with Langbase. This will enable you to: - Build an AI question-answer chatbot on your documentation. - Effortlessly upload and manage your documentation files as embeddings in Langbase Memory. - Ensure very low hallucination in the responses due to the refined Pipe + Memory RAG system. - Keep the RAG chatbot updated with the latest changes in your documentation. Ask My Docs RAG Live Demo – Built on React Query Docs --- We'll use [BaseAI](https://baseai.dev/) for creating the RAG pipe and memory. BaseAI will handle the memory and synchronize your GitHub documentation with Langbase to build a powerful chatbot on your docs. ## What we will build To build a RAG agent on documentation files, we'll cover the following: 1. **Embed your documentation:** Convert your documentation files into embeddings using Langbase Memory, which will serve as the agent's core knowledge base. 2. **Create a RAG pipe:** Create a Langbase Pipe that taps into these memory embeddings to answer user queries accurately. 3. **Sync documentation updates:** Set up sync for your memory embeddings with BaseAI so your agent always reflects the latest documentation. 4. **Build a chatbot web app:** Connect the agent to a chatbot web app to interact with your documentation. The RAG agent will help users answer questions by leveraging memory from embedded documentation files. Let's dive in! ## Prerequisites * **Node.js and npm:** Ensure you have Node.js and npm installed on your system. * **Langbase Account:** Sign up at [https://langbase.com](https://langbase.com). * **Repository with Documentation:** A GitHub repository with your documentation files is required. BaseAI uses git commits to synchronize with your documentation changes. --- ## Step 0: Create a Memory in your Documentation Repository Navigate to the root of your documentation repository and create a new memory using BaseAI. 
Run the following command:

```bash
npx baseai@latest memory
```

This command will set up BaseAI and start the memory creation process by asking for the name and description of the memory. Let's call it `docs`. Follow the steps below to complete the memory creation from your documentation repository.

## Step 1: Allow Memory to Track Git Repository

It will ask whether you want to create a memory from the current project git repository. Select `yes`.

```
Do you want to create memory from current project git repository? (yes/no) yes
```

## Step 2: Provide Path to your Documentation Files

Next, it will ask you which directory or subdirectory you want to use for the memory. You can select the current directory or any subdirectory.

```
Enter the path to the directory to track (relative to current directory):
```

Provide the path relative to the root of the project directory that you want to use for the memory. E.g., `src/content/docs`, to use the docs directory in the `src/content` directory. If you want to use the current directory, just press enter.

## Step 3: Specify File Extensions

Next, it will ask you which file extensions you want to track. Since we are creating a memory for docs, you can provide a comma-separated list of file extensions like `.mdx,.md` to track markdown files. Alternatively, you can provide `*` to track all files.

```
Enter file extensions to track (use * for all, or comma-separated list, e.g., .md,.mdx): .md,.mdx
```

That's it, the memory setup is complete. It will create a setup file at `baseai/memory/docs` in your current directory that tracks the git repository directory and file extensions you provided.

## Step 4: Deploy the Memory

The memory setup is ready and we can now create embeddings of the memory. With BaseAI, we can create embeddings by:

1. Deploying the memory to the Langbase cloud
2. Locally creating embeddings through Ollama

For best performance and results, we will deploy the memory to Langbase cloud. This will also enable us to use it in a chatbot web app.

Commit all the changes in your repository to git and run the following command to deploy the `docs` memory. Replace `docs` with the name of your memory if you used a different name.

```bash
npx baseai@latest deploy -m docs
```

When deploying the first time, it will ask you to login to your Langbase account and authenticate. Follow [these instructions to authenticate quickly](https://baseai.dev/docs/deployment/authentication).

It will then deploy all the required files of the memory to Langbase cloud, and create their embeddings. Your memory is ready. Next time, whenever your documentation changes, you can run the above command again and BaseAI will update the memory with the latest changes.

## Step 5: Create a Pipe and Connect Memory

Now, we will create a pipe that will use this memory to create a RAG agent. We will call this pipe `chat-with-docs`. Run the command below to create the pipe. In addition to the pipe name and description, it will ask you to select the memory to use. Select the `docs` memory you created in the previous steps.

```bash
npx baseai@latest pipe
```

It will create the pipe in your current directory under `baseai/pipes/chat-with-docs.ts`. It prints the path in the terminal. You can open the file and see the details.
Here is how it looks:

```ts
// baseai/pipes/chat-with-docs.ts
import {PipeI} from '@baseai/core';
import docsMemory from '../memory/docs';

const buildPipe = (): PipeI => ({
	apiKey: process.env.LANGBASE_API_KEY!, // Replace with your API key https://langbase.com/docs/api-reference/api-keys
	name: 'chat-with-docs',
	description: '',
	status: 'private',
	model: 'openai:gpt-4o-mini',
	stream: true,
	json: false,
	store: true,
	moderate: true,
	top_p: 1,
	max_tokens: 1000,
	temperature: 0.7,
	presence_penalty: 1,
	frequency_penalty: 1,
	stop: [],
	tool_choice: 'auto',
	parallel_tool_calls: false,
	messages: [
		{role: 'system', content: `You are an Ask Docs agent. Provide precise and accurate information or guidance based on document content, ensuring clarity and relevance in your responses. You are tasked with providing precise and accurate information or guidance based on document content, ensuring clarity and relevance in your responses.

# Output Format

- Provide a concise paragraph or series of bullet points summarizing the relevant information
- Ensure that each response is directly related to the query while maintaining clarity
- Use plain language and avoid jargon unless it is specified within the query or document.

# Notes

- Prioritize accurate citations from the document when necessary.
- If the document or section is not available, acknowledge the limitation and offer guidance on potential next steps to acquire the needed information.`},
		{
			role: 'system',
			name: 'rag',
			content: "Below is some CONTEXT for you to answer the questions. ONLY answer from the CONTEXT. CONTEXT consists of multiple information chunks. Each chunk has a source mentioned at the end.\n\nFor each piece of response you provide, cite the source in brackets like so: [1].\n\nAt the end of the answer, always list each source with its corresponding number and provide the document name. like so [1] Filename.doc.\n\nIf you don't know the answer, just say that you don't know. Ask for more context and better questions if needed.",
		},
	],
	variables: [],
	memory: [docsMemory()], // Connected docs memory
	tools: [],
});

export default buildPipe;
```

Feel free to use the prompts and settings above or customize them to your needs. Deploy the pipe to Langbase when you are ready to use it:

```bash
npx baseai@latest deploy
```

Your chat with docs RAG Agent is ready!

## Step 6: Use Your RAG Agent

You can try out your RAG pipe in the Langbase Studio playground. Navigate to [Langbase Studio](https://langbase.com/studio) and open your `chat-with-docs` pipe. Ask questions about your documentation and see the answers. Here is how it looks for the RAG pipe we created for the Langbase documentation:

Testing Langbase Docs RAG Pipe in playground

As you can see, it accurately answers queries by retrieving relevant chunks and sending them to the LLM, i.e., Retrieval-Augmented Generation (RAG).

## Step 7: Integrate in a Chatbot Web App

You can integrate your pipe anywhere using the Langbase API. Here are a few ideas:

- **Chatbot Web App** Use our [Next.js Ask Docs app example](https://github.com/LangbaseInc/langbase-examples/tree/main/examples/ask-docs-rag) and integrate your pipe to create a chatbot webapp.
- **Discord/Slack Bot** Use our [Discord bot example](https://github.com/LangbaseInc/langbase/tree/main/examples/ai-discord-bot), integrate your pipe and offer an Ask Docs Bot to your Discord server members for querying your docs directly.
- **CLI** Build a CLI tool that uses your pipe to answer questions from your documentation (see the sketch after this list).
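For instance, here is a minimal CLI sketch that queries the deployed `chat-with-docs` pipe through the `/v1/pipes/run` endpoint, assuming a user/org API key is set in the `LANGBASE_API_KEY` environment variable (the file name `ask-docs.mjs` is just an example):

```js
// ask-docs.mjs - minimal CLI sketch for querying the chat-with-docs pipe.
// Assumes Node.js 18+ (built-in fetch) and LANGBASE_API_KEY in the environment.
async function askDocs(question) {
	const response = await fetch('https://api.langbase.com/v1/pipes/run', {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${process.env.LANGBASE_API_KEY}`,
		},
		body: JSON.stringify({
			name: 'chat-with-docs', // the pipe deployed in the previous step
			stream: false,
			messages: [{ role: 'user', content: question }],
		}),
	});

	const res = await response.json();
	return res.completion;
}

// Usage: node ask-docs.mjs "How do I get started?"
askDocs(process.argv.slice(2).join(' ')).then(console.log);
```

This is the same user/org key plus pipe name pattern covered in the "Run a Pipe using user/org API keys" guide above; switch to `stream: true` if you want token-by-token output in the terminal.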
For this guide, let's build a Chatbot Web App using our Chat with Docs RAG agent.

## Step 8: Build a Chatbot Web App for your Docs RAG

Clone the [Ask My Docs Example](https://github.com/LangbaseInc/langbase-examples/tree/main/examples/ask-docs-rag) project to get started. The example app contains a single page with a complete chatbot UI for your use case.

Run this command to clone the example project:

```bash
npx degit LangbaseInc/langbase-examples/examples/ask-docs-rag
cd ask-docs-rag
```

Install the dependencies:

```bash
npm install
```

## Step 9: Add your Pipe API key

Copy the `.env.example` file, rename it to `.env`, and add your pipe API key:

```bash
LANGBASE_PIPE_API_KEY=
```

To get your pipe API key, navigate to your `chat-with-docs` pipe on the Langbase dashboard, open the API tab, and copy the **API key**. Paste it in the `.env` file.

## Step 10: Run the Chatbot

Run the project using the following command:

```bash
npm run dev
```

Voila! Your chatbot web app is ready. It is that easy. Navigate to its link (usually on [http://localhost:3000](http://localhost:3000)). You can now ask questions about your documentation and get answers from your RAG pipe in this app.

## Live Demo

You can see the live demo of this project [here](https://ask-react-query-docs.langbase.dev/), which we built for React Query Docs. Ask any questions about React Query and see the RAG chatbot in action.

Ask My Docs RAG Demo – Built on React Query Docs

---

By following this guide, you've leveraged Langbase and BaseAI to effortlessly build an automated and powerful RAG Agent on top of your documentation. The good thing is, it is highly composable, and you can customize it or even connect it to another pipe for improving the agent's quality.

Join our [Discord community](https://langbase.com/discord) and share what you build and ask for support while building with Langbase.

Quickstart: Build efficient RAG with Langbase https://langbase.com/docs/memory/quickstart/ import { generateMetadata } from '@/lib/generate-metadata';

# Quickstart: Build efficient RAG with Langbase

### A step-by-step guide to building a RAG solution with Langbase

---

In this guide, you will learn how to build a highly scalable RAG-powered AI support agent. You will:

- Create an **AI Memory agent** that will contain FAQs data.
- Create an **AI support agent** pipe configured with the `gpt-4o-mini` model.
- **Connect** the AI Memory agent to the AI support agent pipe to create a RAG solution.

---

## Let's get started

There are two ways to follow this guide:

- [SDK](/sdk) - TypeScript SDK to create and run pipes, memory, and more.
- [Studio](https://langbase.com/studio) - Web-based AI Studio.

---

## Step #1: Generate Langbase API key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not have an API key, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## Step #2: Add LLM API keys

If you have set up LLM API keys in your profile, the Pipe will automatically use them.
If not, navigate to the [LLM API keys](https://langbase.com/settings/llm-keys) page and add keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

You can add LLM API keys in your account using [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `LLM API keys` link.
4. From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

---

## Step #3: Set up your project

Create a new directory for your project and navigate to it.

```bash
mkdir ai-support-agent && cd ai-support-agent
```

### Initialize the project

Create a new Node.js project.

```bash {{ title: 'npm' }}
npm init -y
```

```bash {{ title: 'pnpm' }}
pnpm init
```

```bash {{ title: 'yarn' }}
yarn init -y
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to connect to the AI agent pipes and `dotenv` to manage environment variables. So, let's install these dependencies.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

### Create an env file

Create a `.env` file in the root of your project and add the following environment variables:

```bash {{ title: '.env' }}
LANGBASE_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your Langbase API key.

---

## Step #4: Create an AI Memory agent

To create an AI Memory agent, you will use the [Langbase SDK](/sdk) to create a new Memory.

Create a new file `create-memory.ts` and add the following code:

```ts
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const memory = await langbase.memories.create({
		name: 'support-memory-agent',
		description: 'Support chatbot memory agent',
	});

	console.log('Memory:', memory);
}

main();
```

Now run the following command in your terminal to create an AI Memory agent:

```bash
npx tsx create-memory.ts
```

---

## Step #5: Upload FAQs data

You will use the [`langbase.memories.documents.upload()`](/sdk/memory/document-upload) function to upload content to the memory agent. Let's upload FAQs to the memory agent.

Create a new file `upload-doc.ts` and add the following code:

```ts
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main(content: string) {
	// Create a Blob object with the FAQs content
	const documentBlob = new Blob([content], { type: 'text/plain' });

	// Create a File object from the Blob object
	const document = new File([documentBlob], 'langbase-faqs.txt', {
		type: 'text/plain'
	});

	// Upload the document to the memory agent
	const response = await langbase.memories.documents.upload({
		document,
		memoryName: 'support-memory-agent',
		contentType: 'text/plain',
		documentName: 'langbase-faqs.txt',
	});

	console.log('Document uploaded:', response);
}

// FAQs content
const langbaseFAQs = `
How to reset password?
Step 1: Access Profile Settings
1. Log in to your Langbase account.
2. Navigate to the [Profile Settings page](https://langbase.com/settings/profile)

Step 2: Change Your Password
1. Scroll to the Reset Password section.
2. Enter your new password in the New Password box.
3. Click Set Password to confirm the change.

How to edit user profile?
Step 1: Access Profile Settings
1. Log in to your Langbase account.
2. Navigate to the Profile Settings page.
Step 2: Edit Your Profile
1. Click on Change Profile Picture to upload an image from your computer.
• Note: The image should be in PNG or JPG format and must not exceed 1MB in size.
2. Enter a name for your profile in the Name field.
3. Create a username for your profile using a hyphen-separated format in the Username field.
4. Write a short bio in the Bio field.

Step 3: Save Your Changes
1. Click Save Changes to apply your edits.

How to upgrade organization plan?
Step 1: Access the Organization Billing Page
1. Log in to your Langbase account.
2. Navigate to your Organization Profile page.
3. Click the Billing button in the top-right corner.

Step 2: Review Current Billing Status
1. In the Subscription section, view your current plan, term, and status details.
2. The Usage section summarizes your organization’s activity, including the total number of pipes, requests, and memory usage.

Step 3: Upgrade to Pro or Enterprise
1. Go to the Plans section to compare and explore available subscription options.
2. Click Subscribe or Let’s Talk and follow the prompts to complete the upgrade.
`;

main(langbaseFAQs);
```

You can also import a file using the `fs` module and pass it as a value to `document`.

Let's run the script to upload the FAQs data to the memory agent:

```bash
npx tsx upload-doc.ts
```

---

## Step #6: Retrieval testing of FAQs data

Let's do a retrieval test to check if the FAQs data was uploaded successfully. You will use the [`langbase.memories.retrieve()`](/sdk/memory/retrieve) function to list the documents in the memory agent.

Create a new file `retrieval.ts` and add the following code:

```ts
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const chunks = await langbase.memories.retrieve({
		memory: [
			{
				name: 'support-memory-agent',
			},
		],
		query: 'How to reset password?',
		topK: 1
	});

	console.log('Memory chunks:', chunks);
}

main();
```

Run the script to test the retrieval of FAQs data:

```bash
npx tsx retrieval.ts
```

This will print the memory chunks containing the FAQs data on the console.

```md {{ title: 'Retrieval output' }}
[
	{
		"text": 'How to reset password?\n' +
			'Step 1: Access Profile Settings\n' +
			'\t1.\tLog in to your Langbase account.\n' +
			'\t2.\tNavigate to the [Profile Settings page](https://langbase.com/settings/profile)\n' +
			'Step 2: Change Your Password\n' +
			'\t1.\tScroll to the Reset Password section.\n' +
			'\t2.\tEnter your new password in the New Password box.\n' +
			'\t3.\tClick Set Password to confirm the change.\n' +
			'\n' +
			'How to edit user profile?\n' +
			'Step 1: Access Profile Settings\n' +
			'\t1.\tLog in to your Langbase account.\n' +
			'\t2.\tNavigate to the Profile Settings page.\n' +
			'\n' +
			'Step 2: Edit Your Profile\n' +
			'\t1.\tClick on Change Profile Picture to upload an image from your computer.\n' +
			'\t\t\tNote: The image should be in PNG or JPG format and must not exceed 1MB in size.\n' +
			'\t2.\tEnter a name for your profile in the Name field.\n' +
			'\t3.\tCreate a username for your profile using a hyphen-separated format in the Username field.\n' +
			'\t4.\tWrite a short bio in the Bio field.\n' +
			'\n' +
			'Step 3: Save Your Changes\n' +
			'\t1.\tClick Save Changes to apply your edits.\n' +
			'\n' +
			'How to upgrade organization plan?',
		similarity: 0.5440677404403687,
		meta: { documentName: 'langbase-faqs.txt' }
	}
]
```

---

## Step #7: Create an AI support agent pipe

You need to create an AI support agent pipe.
It will use the `gpt-4o-mini` model and the AI memory you created earlier to answer user queries.

Create a new file `create-pipe.ts` and add the following code:

```ts
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const supportAgent = await langbase.pipes.create({
		name: `ai-support-agent`,
		description: `An AI agent to support users with their queries.`,
		messages: [
			{
				role: `system`,
				content: `You're a helpful AI assistant. You will assist users with their queries about Langbase. Always ensure that you provide accurate and to the point information.`,
			},
		],
		memory: [{ name: 'support-memory-agent' }],
	});

	console.log('Support agent:', supportAgent);
}

main();
```

You have defined a system prompt and linked the memory agent to the pipe by specifying the memory agent's name while creating the pipe.

Run the script to create the AI support agent pipe:

```bash
npx tsx create-pipe.ts
```

---

## Step #8: Run the AI support agent pipe

Now that you have created the AI support agent pipe, let's run it to test the agent. You will use the [`langbase.pipes.run()`](/sdk/pipe/run) function to run the pipe.

Create a new file `run-pipe.ts` and add the following code:

```ts
import 'dotenv/config';
import { Langbase, getRunner } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const { stream } = await langbase.pipes.run({
		name: `ai-support-agent`,
		stream: true,
		messages: [
			{
				role: `user`,
				content: `How to request payment API?`,
			},
		],
	});

	const runner = getRunner(stream);

	runner.on('content', content => {
		process.stdout.write(content);
	});
}

main();
```

You are using the `stream` option to get real-time completions from the AI model. Now let's run the above file to see the AI generate completions for the user message:

```bash
npx tsx run-pipe.ts
```

The AI support agent will **generate a response** to the user query using the OpenAI `gpt-4o-mini` model based on the **FAQs data** uploaded to the memory agent.

```md {{ title: 'LLM generation' }}
To reset your password on Langbase, follow these steps:

Step 1: Access Profile Settings
1. Log in to your Langbase account.
2. Navigate to the [Profile Settings page](https://langbase.com/settings/profile).

Step 2: Change Your Password
1. Scroll to the Reset Password section.
2. Enter your new password in the New Password box.
3. Click Set Password to confirm the change.
```

You can also ask other questions like:

- How to edit user profile?
- How to upgrade organization plan?

And the AI support agent will generate responses based on the FAQs data uploaded to the memory agent.

---

## Step #1: Create an AI memory agent

To get started with Langbase, you'll need to [create a free personal account on Langbase.com](https://langbase.com/signup) and verify your email address.

1. When logged in, click on Memory in the sidebar and then click on `[Add New]`.
2. Give your memory a name. Let’s call it `support-memory-agent`.
3. Click on the `[Create Memory]` button.

---

## Step #2: Upload FAQs data

Let's upload a sample FAQs document to the memory agent. You can upload any sample FAQs file you have or you can download the sample FAQs document by clicking the button below.

1. Click on the memory agent you created earlier.
2. Click on the upload area or drag & drop documents to upload them to the memory.
3. Click “Refresh” to fetch the latest status.
---

## Step #3: Create an AI support agent pipe

Now, let's create an AI support agent pipe. You will use the `gpt-4o-mini` model and the AI memory you created earlier to answer user queries.

1. Click on Pipes in the sidebar and then click on `[Add New]`.
2. Give your pipe a name. Let's call it `ai-support-agent`.
3. Click on the `[Create Pipe]` button.

---

## Step #4: Update system prompt

The system message in a prompt acts as the set of instructions for the AI model. Let's update the system prompt for the AI support agent pipe with:

`You're a helpful AI assistant. You will assist users with their queries about Langbase. Always ensure that you provide accurate and to the point information.`

---

## Step #5: Connect memory agent to the pipe

You need to link the memory agent to the pipe. This will allow the AI model to use the FAQs data uploaded to the memory agent to answer user queries.

All you need to do is click on the "Memory" button and select the memory agents you want to attach to the Pipe. Langbase takes care of the rest.

---

## Step #6: Deploy the AI support agent pipe

Now that you have created the AI support agent pipe, let's deploy it to test the agent. Click on the `[Deploy to Production]` button to deploy the pipe.

---

## Step #7: Run the AI support agent pipe

Now that you have deployed the AI support agent pipe, let's test it inside the Pipe playground. You can ask a question about resetting a password to see if the AI generates a response based on the FAQs data uploaded to the memory agent.

The AI support agent will **generate a response** to the user query using the OpenAI `gpt-4o-mini` model, based on the **FAQs data** uploaded to the memory agent.

```md {{ title: 'LLM generation' }}
To reset your password on Langbase, follow these steps:

Step 1: Access Profile Settings
1. Log in to your Langbase account.
2. Navigate to the [Profile Settings page](https://langbase.com/settings/profile).

Step 2: Change Your Password
1. Scroll to the Reset Password section.
2. Enter your new password in the New Password box.
3. Click Set Password to confirm the change.
```

You can also ask other questions like:

- How to edit user profile?
- How to upgrade organization plan?

And the AI support agent will generate responses based on the FAQs data uploaded to the memory agent.

---

✨ That's it! You have successfully created an **AI support agent** that uses **AI Memory** (RAG) to answer user queries. You can now integrate this agent into your application to provide instant support to your users.

---

## Next Steps

- Explore [SDK](/sdk) & [API reference](/api-reference) to learn more about Langbase APIs.
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

Build a composable AI Devin https://langbase.com/docs/guides/build-composable-ai-devin/ import { generateMetadata } from '@/lib/generate-metadata';

# Build a composable AI Devin

### A step-by-step guide to build an AI coding agent using the Langbase SDK.

---

In this guide, you will build an AI coding agent, **CodeAlchemist** aka **Devin**, that uses multiple pipe agents to:

- Analyze the user prompt to identify whether it relates to coding, database architecture, or is a random prompt.
- Use a ReAct-based architecture that decides whether to call the code pipe agent or the database pipe agent.
- Generate raw React code for a coding query.
- Generate optimized SQL for a database query.
![CodeAlchemist aka Devin][cover]

---

You will create a basic Next.js application that will use the [Langbase SDK](/sdk) to connect to the pipe agents and stream the final response back to the user. Let's get started!

---

## Step 0: Setup your project

Let's quickly set up the project.

### Initialize the project

To build the agent, you need a Next.js starter application. If you don't have one, you can create a new Next.js application using the following command:

```bash {{ title: 'npm' }}
npx create-next-app@latest code-alchemist
```

```bash {{ title: 'pnpm' }}
pnpm dlx create-next-app@latest code-alchemist
```

```bash {{ title: 'yarn' }}
yarn create next-app code-alchemist
```

This will create a new Next.js application in the `code-alchemist` directory. Navigate to the directory and start the development server:

```bash {{ title: 'npm' }}
npm run dev
```

```bash {{ title: 'pnpm' }}
pnpm run dev
```

```bash {{ title: 'yarn' }}
yarn dev
```

### Install dependencies

Install the Langbase SDK in this project using npm, pnpm, or yarn.

```bash {{ title: 'npm' }}
npm install langbase
```

```bash {{ title: 'pnpm' }}
pnpm add langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

## Step 1: Get Langbase API Key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not have an API key, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

Create an `.env` file in the root of your project and add your Langbase API key.

```bash {{ title: '.env' }}
LANGBASE_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your Langbase API key.

## Step 2: Fork agent pipes

Fork the following agent pipes in Langbase studio. These agent pipes will power CodeAlchemist:

- [**Code Alchemist**][code-alchemist]: The decision-maker pipe agent. It analyzes the user prompt and decides which pipe agent to call.
- [**React Copilot**][react-copilot]: Generates a single React component for a given user prompt.
- [**Database Architect**][database-architect]: Generates optimized SQL for a query or an entire database schema.

## Step 3: Create a Generate API Route

Create a new file `app/api/generate/route.ts`. This API route will generate the AI response for the user prompt. Please add the following code:

```ts {{ title: 'app/api/generate/route.ts' }}
import { runCodeGenerationAgent } from '@/utils/run-agents';
import { validateRequestBody } from '@/utils/validate-request-body';

/**
 * Handles the POST request for the specified route.
 *
 * @param req - The request object.
 * @returns A response object.
 */
export async function POST(req: Request) {
	try {
		const { prompt, error } = await validateRequestBody(req);

		if (error || !prompt) {
			return new Response(JSON.stringify(error), { status: 400 });
		}

		const { stream, pipe } = await runCodeGenerationAgent(prompt);

		if (stream) {
			return new Response(stream, { headers: { pipe } });
		}

		return new Response(JSON.stringify({ error: 'No stream' }), {
			status: 500
		});
	} catch (error: any) {
		console.error('Uncaught API Error:', error.message);
		return new Response(JSON.stringify(error.message), { status: 500 });
	}
}
```

Here is a quick explanation of what's happening in the code above:

- Import the `runCodeGenerationAgent` function from the `run-agents.ts` file.
- Import the `validateRequestBody` function from the `validate-request-body.ts` file.
- Define the `POST` function that handles POST requests made to the `/api/generate` endpoint.
- Validate the request body using the `validateRequestBody` [function](https://github.com/LangbaseInc/langbase-examples/blob/main/examples/code-alchemist/utils/validate-request-body.ts).
- Call the `runCodeGenerationAgent` function with the user prompt.
- Return the response stream and pipe name.

You can find all these functions in the [utils](https://github.com/LangbaseInc/langbase-examples/tree/main/examples/code-alchemist/utils) directory of the CodeAlchemist [source code](https://github.com/LangbaseInc/langbase-examples/tree/main/examples/code-alchemist).

## Step 4: Decision Maker Pipe

You are building a **ReAct-based architecture**, which means the system first reasons over the information it has and then decides how to act. The [Code Alchemist][code-alchemist] agent pipe is a **decision-making** agent. It contains two tool calls, `runPairProgrammer` and `runDatabaseArchitect`, which are called depending on the user's query.

Create a new file `app/utils/run-agents.ts`. This file will contain the logic to call all the pipe agents and stream the final response back to the user. Please add the following code:

```ts {{ title: 'app/utils/run-agents.ts' }}
import 'server-only';
import { Langbase, getToolsFromRunStream } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

/**
 * Runs a code generation agent with the provided prompt.
 *
 * @param prompt - The input prompt for code generation
 * @returns A promise that resolves to an object containing:
 * - pipe: The name of the pipe used for generation
 * - stream: The output stream containing the generated content
 *
 * @remarks
 * This function processes the prompt through a langbase pipe and handles potential tool calls.
 * If tool calls are detected, it delegates to the appropriate PipeAgent.
 * Otherwise, it returns the direct output from the code-alchemist pipe.
 */
export async function runCodeGenerationAgent(prompt: string) {
	const { stream } = await langbase.pipes.run({
		stream: true,
		name: 'code-alchemist',
		messages: [{ role: 'user', content: prompt }],
	});

	const [streamForCompletion, streamForReturn] = stream.tee();

	const toolCalls = await getToolsFromRunStream(streamForCompletion);
	const hasToolCalls = toolCalls.length > 0;

	if (hasToolCalls) {
		const toolCall = toolCalls[0];
		const toolName = toolCall.function.name;

		const response = await PipeAgents[toolName](prompt);

		return {
			pipe: response.pipe,
			stream: response.stream
		};
	} else {
		return {
			pipe: 'code-alchemist',
			stream: streamForReturn
		};
	}
}

type PipeAgentsT = Record<
	string,
	(prompt: string) => Promise<{ stream: ReadableStream; pipe: string }>
>;

export const PipeAgents: PipeAgentsT = {
	runPairProgrammer: runPairProgrammerAgent,
	runDatabaseArchitect: runDatabaseArchitectAgent
};

// Placeholder stubs; implemented in Steps 5 and 6.
async function runPairProgrammerAgent(prompt: string) {}
async function runDatabaseArchitectAgent(prompt: string) {}
```

Let's go through the above code.

- Import `Langbase` and `getToolsFromRunStream` from the Langbase SDK.
- Initialize the Langbase SDK with the API key.
- Define the `runCodeGenerationAgent` function that sends the prompt through the `code-alchemist` agent pipe.
- Inside `runCodeGenerationAgent`, get the tools from the stream using the `getToolsFromRunStream` function.
- Check if there are any tool calls in the stream.
- If there are tool calls, call the appropriate pipe agent. In this example, only one pipe agent will be called by the decision-making agent.
- If there are no tool calls, return the direct output from the `code-alchemist` pipe.

## Step 5: React Copilot Pipe

The [React Copilot][react-copilot] is a **code generation** pipe agent. It takes the user prompt and generates a React component based on it. It also writes clear and concise comments and uses Tailwind CSS classes for styling.

You have already defined `runPairProgrammerAgent` in the previous step. Let's write its implementation. Add the following code to the `run-agents.ts` file:

```ts {{ title: 'app/utils/run-agents.ts' }}
/**
 * Executes a pair programmer agent with React-specific configuration.
 *
 * @param prompt - The user input prompt to be processed by the agent
 * @returns {Promise<{stream: ReadableStream, pipe: string}>} An object containing:
 * - stream: A ReadableStream of the agent's response
 * - pipe: The name of the pipe being used ('react-copilot')
 *
 * @example
 * const result = await runPairProgrammerAgent("Create a React button component");
 * // Use the returned stream to process the agent's response
 */
async function runPairProgrammerAgent(prompt: string) {
	const { stream } = await langbase.pipes.run({
		stream: true,
		name: 'react-copilot',
		messages: [{ role: 'user', content: prompt }],
		variables: {
			framework: 'React',
		}
	});

	return {
		stream,
		pipe: 'react-copilot'
	};
}
```

Let's break down the above code:

- Called the `react-copilot` pipe agent with the user prompt and `React` as the `framework` variable.
- Returned the stream and the pipe name `react-copilot`.

## Step 6: Database Architect Pipe

The [Database Architect][database-architect] pipe agent generates SQL queries. It takes the user prompt and generates either a SQL query or a whole database schema. It automatically incorporates partitioning strategies when necessary, as well as indexing options, to optimize query performance.

You have already defined `runDatabaseArchitectAgent` in Step 4. Let's write its implementation.
Add the following code to the `run-agents.ts` file:

```ts {{ title: 'app/utils/run-agents.ts' }}
/**
 * Executes the Database Architect Agent with the given prompt.
 *
 * @param prompt - The input prompt string to be processed by the database architect agent
 * @returns An object containing:
 * - stream: The output stream from the agent
 * - pipe: The name of the pipe used ('database-architect')
 * @async
 */
async function runDatabaseArchitectAgent(prompt: string) {
	const { stream } = await langbase.pipes.run({
		stream: true,
		name: 'database-architect',
		messages: [{ role: 'user', content: prompt }]
	});

	return {
		stream,
		pipe: 'database-architect'
	};
}
```

Here is a quick explanation of what's happening in the code above:

- Call the `database-architect` pipe agent with the user prompt.
- Return the stream and the pipe name `database-architect`.

## Step 7: Build the CodeAlchemist

Now that you have all the pipes in place, you can call the `/api/generate` endpoint to generate either a React component or a SQL query based on the user prompt.

You will write a custom React hook `useLangbase` that will call the `/api/generate` endpoint with the user prompt and handle the response stream.

```ts {{ title: 'hooks/use-langbase.ts' }}
import { getRunner } from 'langbase';
import { FormEvent, useState } from 'react';

const useLangbase = () => {
	const [loading, setLoading] = useState(false);
	const [completion, setCompletion] = useState('');
	const [hasFinishedRun, setHasFinishedRun] = useState(false);

	/**
	 * Executes the Code Alchemist agent with the provided prompt and handles the response stream.
	 *
	 * @param {Object} params - The parameters for running the Code Alchemist agent
	 * @param {FormEvent} [params.e] - Optional form event to prevent default behavior
	 * @param {string} params.prompt - The prompt to send to the Code Alchemist agent
	 *
	 * @throws {Error} When the API request fails or returns an error response
	 *
	 * @returns {Promise}
	 */
	async function runCodeAlchemistAgent({
		e,
		prompt,
	}: {
		prompt: string;
		e?: FormEvent;
	}) {
		e && e.preventDefault();

		// if the prompt is empty, return
		if (!prompt.trim()) {
			console.info('Please enter a prompt first.');
			return;
		}

		try {
			setLoading(true);
			setHasFinishedRun(false);

			// make a POST request to the API endpoint to call AI pipes
			const response = await fetch('/api/generate', {
				method: 'POST',
				headers: { 'Content-Type': 'application/json' },
				body: JSON.stringify({ prompt })
			});

			// if the response is not ok, log the error and return
			if (!response.ok) {
				const error = await response.json();
				console.error(error);
				return;
			}

			// get the response body as a stream
			if (response.body) {
				const runner = getRunner(response.body);

				runner.on('content', (content) => {
					content && setCompletion(prev => prev + content);
				});
			}
		} catch (error) {
			console.error('Something went wrong. Please try again later.');
		} finally {
			setLoading(false);
			setHasFinishedRun(true);
		}
	}

	return {
		loading,
		completion,
		hasFinishedRun,
		runCodeAlchemistAgent
	};
};

export default useLangbase;
```

Here's what you have done in this step:

- Created a custom React hook `useLangbase` that calls the `/api/generate` endpoint with the user prompt.
- Used the `getRunner` function from the Langbase SDK to read the response body as a stream.
- Listened for the `content` event on the runner and updated the `completion` state with the generated content.
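To see how this hook wires into the UI, here is a minimal sketch of a page component that uses it; the markup, file path, and placeholder text are illustrative and not taken from the CodeAlchemist source:

```tsx {{ title: 'app/page.tsx' }}
'use client';

import { useState } from 'react';
import useLangbase from '@/hooks/use-langbase';

export default function Home() {
	const [prompt, setPrompt] = useState('');
	const { loading, completion, runCodeAlchemistAgent } = useLangbase();

	return (
		<form onSubmit={e => runCodeAlchemistAgent({ e, prompt })}>
			<input
				value={prompt}
				onChange={e => setPrompt(e.target.value)}
				placeholder="Build a pricing card component"
			/>
			<button type="submit" disabled={loading}>
				{loading ? 'Generating...' : 'Generate'}
			</button>
			{/* Streamed output accumulates in `completion` as chunks arrive */}
			<pre>{completion}</pre>
		</form>
	);
}
```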
You can find the complete code for the CodeAlchemist app in the [GitHub repository][gh].

---

## Live demo

You can try out the live demo of CodeAlchemist [here][demo].

![CodeAlchemist aka Devin][cover]

---

## Next Steps

- Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk).
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

[demo]: https://code-alchemist.langbase.dev
[lb]: https://langbase.com
[code-alchemist]: https://langbase.com/examples/code-alchemist
[react-copilot]: https://langbase.com/examples/react-copilot
[database-architect]: https://langbase.com/examples/database-architect
[gh]: https://github.com/LangbaseInc/langbase-examples/tree/main/examples/code-alchemist
[cover]: https://raw.githubusercontent.com/LangbaseInc/docs-images/main/examples/code-alchemist/demo.jpg
[download]: https://download-directory.github.io/?url=https://github.com/LangbaseInc/langbase-examples/tree/main/examples/code-alchemist
[signup]: https://langbase.fyi/io
[qs]: https://langbase.com/docs/pipe/quickstart
[docs]: https://langbase.com/docs
[xsi]: https://x.com/MSaaddev
[local]: http://localhost:3000
[mit]: https://img.shields.io/badge/license-MIT-blue.svg?style=for-the-badge&color=%23000000
[fork]: https://img.shields.io/badge/FORK%20ON-%E2%8C%98%20Langbase-000000.svg?style=for-the-badge&logo=%E2%8C%98%20Langbase&logoColor=000000

Guide: Build a composable AI email agent https://langbase.com/docs/guides/ai-email-agent/ import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Build a composable AI email agent

### A step-by-step guide to build an AI email agent using the Langbase SDK.

---

In this guide, you will build an AI email agent that uses multiple Langbase agent pipes to:

- **Summarize** an email
- **Analyze** the sentiment of the email
- **Decide** whether the email needs a response or not
- **Pick** the tone of the response email
- **Generate** a response email

---

You will build a basic Node.js application that uses the [Langbase SDK](/sdk) to connect to the AI agent pipes and generate responses using **parallelization** and **prompt-chaining** agent architectures.

## Flow reference architecture

There are two flows in the email agent: the **User Email Flow** and the **Spam Email Flow**.

The **User Email Flow** is a normal email flow where the user sends an email, and the AI agent analyzes the email sentiment, summarizes the email content, decides whether the email needs a response, picks the tone of the response email, and generates the response email.

The **Spam Email Flow** is a spam email flow where the AI agent analyzes the email sentiment, summarizes the email content, and decides that the email does not need a response.

Let's get started!

---

## Step 0: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir ai-email-agent && cd ai-email-agent
```

### Initialize the project

Initialize a Node.js project and create the `index.ts` and `agents.ts` files.

```bash {{ title: 'npm' }}
npm init -y && touch index.ts && touch agents.ts
```

```bash {{ title: 'pnpm' }}
pnpm init && touch index.ts && touch agents.ts
```

```bash {{ title: 'yarn' }}
yarn init -y && touch index.ts && touch agents.ts
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to connect to the AI agent pipes and `dotenv` to manage environment variables. So, let's install these dependencies.
```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

## Step 1: Get Langbase API Key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not have an API key, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

Create an `.env` file in the root of your project and add your Langbase API key.

```bash {{ title: '.env' }}
LANGBASE_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your Langbase API key.

## Step 2: Fork the AI agent pipes

Fork the following agent pipes needed for the AI email agent in the Langbase dashboard:

1. [Email Sentiment][email-sentiment] → An agent pipe to analyze the sentiment of the incoming email
2. [Summarizer][summarizer] → Summarizes the content of the email and makes it less wordy for you
3. [Decision Maker][decision-maker] → Decides if the email needs a response, plus the category and priority of the response
4. [Pick Email Writer][pick-email-writer] → An AI agent pipe that picks the tone for writing the response email
5. [Email Writer][email-writer] → An agent pipe that writes the response email

## Step 3: Sentiment Analysis

In this first step, you will **analyze** the **email sentiment** using a Langbase AI agent pipe. Go ahead and add the following code to your `agents.ts` file:

```ts {{ title: 'agents.ts' }}
import { Langbase } from 'langbase';
import 'dotenv/config';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

// Sentiment analysis
export const emailSentimentAgent = async (email: string) => {
	// Run the forked Email Sentiment agent pipe in JSON mode.
	const response = await langbase.pipes.run({
		name: 'email-sentiment',
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.sentiment;
};
```

Let's take a look at what is happening in this code:

- Initialize the [Langbase SDK](/sdk) with the API key from the environment variables.
- Define an `emailSentimentAgent` function that takes an email and returns its **sentiment analysis**.
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.

Stream the response:

- When it's displayed **directly** to users in the UI.
- This creates a **better user experience** by showing the AI's response being generated in real-time.

Do not stream:

- When the AI response is being **processed internally** (e.g., for data analysis, content moderation, or generating metadata).
- Set stream to false in these cases since real-time display isn't needed.

## Step 4: Summarize Email

Now let's write a function in the same `agents.ts` file to **summarize** the email content.

```ts {{ title: 'agents.ts' }}
// Summarize email
export const emailSummaryAgent = async (email: string) => {
	// Run the forked Summarizer agent pipe in JSON mode.
	const response = await langbase.pipes.run({
		name: 'summarizer',
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.summary;
};
```

Let's break down the above code:

- Define a function `emailSummaryAgent` that takes an email and returns its **summarized content**.
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.
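At this point you can sanity-check the two agents in isolation before composing the full workflow. A minimal sketch, assuming the `agents.ts` exports above; the sample email text is illustrative:

```ts
import { emailSentimentAgent, emailSummaryAgent } from './agents';

async function test() {
	const email = 'The product arrived broken and support never replied.';

	// Both agents run their pipes in JSON mode and return a single parsed field.
	const sentiment = await emailSentimentAgent(email);
	const summary = await emailSummaryAgent(email);

	console.log({ sentiment, summary });
}

test();
```

Run it with `npx tsx`, the same way as the final workflow in Step 9.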
## Step 5: Decision Maker

You are building a **ReAct-based architecture**, which means the system first reasons over the information it has and then decides how to act. In this example, the results of the email sentiment and summary are passed to the decision-making agent pipe to **decide** whether to respond to the email or not.

![ReAct architecture](/docs/react-architecture-2.jpg)

Go ahead and add the following code to your `agents.ts` file:

```ts {{ title: 'agents.ts' }}
// Determine if a response is needed
export const shouldRespondToEmailAgent = async (
	summary: string,
	sentiment: string
) => {
	// Run the forked Decision Maker agent pipe in JSON mode.
	// The prompt shape below is illustrative.
	const response = await langbase.pipes.run({
		name: 'decision-maker',
		stream: false,
		json: true,
		messages: [
			{
				role: 'user',
				content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}`
			}
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.respond;
};
```

Let's go through the above code.

- Define a function `shouldRespondToEmailAgent` that takes the summarized content and sentiment of the email and returns a decision.
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.

## Step 6: Pick Email Writer

In cases where the email needs a response, you will use the **Pick Email Writer** agent pipe to pick the tone of the email response.

```ts {{ title: 'agents.ts' }}
// Pick an email writer
export const pickEmailWriterAgent = async (
	summary: string,
	sentiment: string
) => {
	// Run the forked Pick Email Writer agent pipe in JSON mode.
	const response = await langbase.pipes.run({
		name: 'pick-email-writer',
		stream: false,
		json: true,
		messages: [
			{
				role: 'user',
				content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}`
			}
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.tone;
};
```

Here's what you have done in this step:

- Created a function `pickEmailWriterAgent` that uses the **summarized email** and **sentiment** to pick one of the following tones for the response:
	- Professional
	- Formal
	- Informal
	- Casual
	- Friendly
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.

## Step 7: Write Email Response

Finally, you will use the **Email Writer** agent pipe to generate the response email based on the tone picked in the previous step.

```ts {{ title: 'agents.ts' }}
// Generate an email reply
export const emailResponseAgent = async (tone: string, summary: string) => {
	// Run the forked Email Writer agent pipe with streaming enabled.
	const { stream } = await langbase.pipes.run({
		name: 'email-writer',
		stream: true,
		messages: [
			{
				role: 'user',
				content: `Tone: ${tone}\nEmail summary: ${summary}`
			}
		]
	});

	return stream;
};
```

Let's take a look at the above code:

- Created a function `emailResponseAgent` that takes the **tone** and **summarized email** and returns the **response email**.
- Set the `stream` parameter to `true` to get the response in stream format, as you will be writing the response to the console.

## Step 8: Final composable multi-agent workflow

Now that you have all these agents, you can combine them in a multi-agent workflow so they can:

1. **analyze** the email
2. **summarize** the email
3. **decide** if it needs a response
4. **pick** the tone of the response
5. **generate** the response email if needed

In the `index.ts` file, let's import all the agents and define a `workflow` function to run with a user email and a spam email.
```ts {{ title: 'index.ts' }}
import { getRunner } from 'langbase';
import {
	emailResponseAgent,
	emailSentimentAgent,
	emailSummaryAgent,
	pickEmailWriterAgent,
	shouldRespondToEmailAgent,
} from './agents';
import { stdout } from 'process';

const workflow = async (emailContent: string) => {
	console.log('Email:', emailContent);

	// Run the sentiment and summary agent pipes in parallel.
	const [emailSentiment, emailSummary] = await Promise.all([
		emailSentimentAgent(emailContent),
		emailSummaryAgent(emailContent),
	]);

	console.log('Sentiment:', emailSentiment);
	console.log('Summary:', emailSummary);

	const respond = await shouldRespondToEmailAgent(emailSummary, emailSentiment);
	console.log('Respond:', respond);

	if (!respond) {
		return 'No response needed for this email.';
	}

	const writer = await pickEmailWriterAgent(emailSummary, emailSentiment);
	console.log('Writer:', writer);

	const emailStream = await emailResponseAgent(writer, emailSummary);

	const runner = getRunner(emailStream);
	runner.on('content', (content: string) => {
		stdout.write(content);
	});
};

const userEmail = `I'm really disappointed with the service I received yesterday. The product was faulty and customer support was unhelpful.`;

const spamEmail = `Congratulations! You have been selected as the winner of a $100 million lottery!`;

workflow(userEmail);
// To try the spam flow, run: workflow(spamEmail);
```

```ts {{ title: './agents.ts' }}
import { Langbase } from 'langbase';
import 'dotenv/config';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

// Sentiment analysis
export const emailSentimentAgent = async (email: string) => {
	const response = await langbase.pipes.run({
		name: 'email-sentiment',
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.sentiment;
};

// Summarize email
export const emailSummaryAgent = async (email: string) => {
	const response = await langbase.pipes.run({
		name: 'summarizer',
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.summary;
};

// Determine if a response is needed
export const shouldRespondToEmailAgent = async (summary: string, sentiment: string) => {
	const response = await langbase.pipes.run({
		name: 'decision-maker',
		stream: false,
		json: true,
		messages: [
			{ role: 'user', content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}` }
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.respond;
};

// Pick an email writer
export const pickEmailWriterAgent = async (summary: string, sentiment: string) => {
	const response = await langbase.pipes.run({
		name: 'pick-email-writer',
		stream: false,
		json: true,
		messages: [
			{ role: 'user', content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}` }
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.tone;
};

// Generate an email reply
export const emailResponseAgent = async (tone: string, summary: string) => {
	const { stream } = await langbase.pipes.run({
		name: 'email-writer',
		stream: true,
		messages: [
			{ role: 'user', content: `Tone: ${tone}\nEmail summary: ${summary}` }
		]
	});

	return stream;
};
```

Here's what you have done in this step:

- Defined a `workflow` function that takes the email content and runs the sentiment, summary, decision-making, writer-picking, and response agents.
- Used the `getRunner` function to get the stream of the email response and write it to the console.
- Recapped the `emailSentimentAgent`, `emailSummaryAgent`, `shouldRespondToEmailAgent`, `pickEmailWriterAgent`, and `emailResponseAgent` functions that run the respective agents.

## Step 9: Run the workflow

Run the email agent workflow by running the following command in the terminal:

```bash {{ title: 'npx' }}
npx tsx index.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx index.ts
```

You should see the following output in the console:

```md {{ title: 'Email Agent output' }}
Email: I'm really disappointed with the service I received yesterday. The product was faulty and customer support was unhelpful.
Sentiment: frustrated
Summary: Disappointed with faulty product and unhelpful customer support.
Respond: true
Writer: formal
Subject: Re: Concern Regarding Faulty Product and Support Experience

Dear [User's Name],

Thank you for reaching out to us regarding your experience with our product and customer support. I sincerely apologize for the inconvenience you've encountered. We strive to provide high-quality products and exceptional service, and it is disappointing to hear that we did not meet those standards in your case. Please rest assured that your feedback is taken seriously, and we are committed to resolving this matter promptly.
To assist you further, could you please provide details about the specific issues you're facing? This will enable us to address your concerns more effectively.

Thank you for bringing this to our attention, and I look forward to assisting you.

Best regards,
[Your Name]
[Your Position]
[Company Name]
[Contact Information]
```

This is how you can build an AI email agent using Langbase agent pipes.

---

## Next Steps

- Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk).
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

[cover]: https://raw.githubusercontent.com/LangbaseInc/docs-images/main/examples/ai-email-agent/ai-email-agent.jpg
[email-agent]: https://ai-email-agent.langbase.dev/
[gh-repo]: https://github.com/LangbaseInc/langbase-examples/blob/main/examples/email-agent-node
[local]: http://localhost:3000
[email-sentiment]: https://langbase.com/examples/email-sentiment
[summarizer]: https://langbase.com/examples/summarizer
[decision-maker]: https://langbase.com/examples/decision-maker
[pick-email-writer]: https://langbase.com/examples/pick-email-writer
[email-writer]: https://langbase.com/examples/email-writer

FAQs https://langbase.com/docs/memory/faqs/ import { generateMetadata } from '@/lib/generate-metadata';

# FAQs

Let's take a look at some frequently asked questions about Memory.

---

## What is a Memory?

Memory allows you to store, organize, and retrieve information. It can be used to build powerful Retrieval Augmented Generation (RAG) based AI agents which can use your data to assist with your queries.

---

## What is a Retrieval Augmented Generation (RAG) system?

RAG is an AI system that uses a combination of a large language model (LLM) and your data (Memory) to provide more accurate and relevant responses to user queries.

---

## Where is RAG used?

LLMs are powerful at understanding text, but they can't store your information. Memory is used to store and organize your information. RAG uses LLMs on your information to provide more accurate and relevant responses, which makes it ideal for building AI agents personalized to your data.

---

## Why should I store information in Memory instead of feeding it to LLMs directly?

Querying LLMs directly can be expensive and slow. Not to mention, they have a limited context window, which limits the amount of information they can process. Memory allows you to store and organize your information in a way that is optimized for retrieval. In a RAG system, only the relevant pieces of information are retrieved from memory and passed to the LLM for generating a response.

---

## Which file formats are supported for importing data into Memory?

Currently we support `.txt`, `.pdf`, `.md`, `.csv`, and all the major plain-text code file formats.

---

## How can I import data from unsupported file formats?

You can convert your data into a supported format before importing it into Memory. There are many online tools available to convert files into different formats. We are continuously working on adding support for more file formats. If you have a specific file format you would like us to support, please let us know and we will prioritize it.

---

## How do the chunking parameters affect the performance of RAG?

A chunk is a piece of information given to the LLM. The size of these chunks affects how well RAG performs. During retrieval, the query is converted into a vector, and a search returns only the relevant chunks. If chunks are too small, they might not have enough information to be useful.
If they are too large, they might include irrelevant information. You can adjust the chunk size based on your needs to optimize RAG's performance.

---

## Can I attach multiple Memory Sets to a single Pipe?

Yes, you can attach multiple Memory Sets to a single Pipe.

---

## What is the maximum file size that can be imported into Memory?

The maximum file size that can be imported into Memory is 10MB.

---

## Is there an API available for Memory?

Yes, the [Langbase Memory API](/api-reference/memory) provides programmatic access for managing memories in your Langbase account. Since documents are stored in memories, you can also manage documents using the Memory API.

---

Concepts https://langbase.com/docs/memory/concepts/ import { generateMetadata } from '@/lib/generate-metadata';

# Concepts

Using memory is the fastest way to use your knowledge data with a Pipe to build a RAG system. Let's understand the key concepts of Memory to successfully build a RAG system:

---

## RAG Prompt

When a memory is attached to a Pipe, a RAG prompt appears by default, which is fed to the LLM to utilize the memory. The default prompt works fine in most cases, but you can customize it based on your use case.

## Document Processing

Once a document is uploaded, it is submitted to a queue for processing, and its status is reflected on the memory page. You can "Refresh" the status to get the latest status of the document on the memory page. Below are the possible statuses of a document:

- **Queued**: Document is in the queue waiting to be processed
- **Processing**: Document is being processed
- **Ready**: Document is ready to be used
- **Failed**: Document processing failed due to some error

_If the document processing fails or takes too long, you can re-upload the document to process it again. In case of failure, the error message can be seen by hovering over the "Failed" status. If the issue persists, please contact support._

Following are the steps involved in document processing:

1. **Chunking**: The document is split into chunks, each containing a distinct piece of information. During retrieval, only the relevant chunks are fed to the LLM to generate the response.
2. **Embedding**: Each chunk is converted into an embedding. Embeddings are used to find similar pieces of information during retrieval.
3. **Indexing**: Embeddings are stored in a vector store and indexed for faster retrieval.

Only documents with selectable text are supported; document processing may fail otherwise. Small and simple documents are recommended for better results.

### Enabled

The Enabled/Disabled toggle button will include/exclude the document from the memory when the memory is used in a Pipe.

---

## Chunking

Before embedding, the document is split into chunks, and each chunk is then embedded. The chunking parameters can be adjusted to control the number of chunks generated from the document. The chunking parameters are:

1. **Chunk Max Length**: The maximum number of characters per chunk. The chunk size can range from `1024` to `30000` characters. _We recommend a small chunk size so the vector representation can capture the distinctive information in each chunk._
2. **Chunks Overlap**: The number of characters by which consecutive chunks overlap. The minimum overlap is `256` characters. It helps in tracking chunk boundaries and managing document flow.

The default chunking parameters are set to **Chunk Max Length: 10000** and **Chunks Overlap: 2048**. Chunking settings can be adjusted based on the document size and the use case.
To change the chunking settings, click on the document name in the memory. The document details page will open, where you can adjust the chunking parameters. Below is how you can adjust the chunking parameters:

1. Change the `Chunk Max Length` and `Chunks Overlap` values and click on the `Draft changes` button. This will only preview the chunks: _it will not embed and save the changes_.
2. If you are satisfied with the preview, click on the `Update Embeddings` button to save the changes. The document will be submitted for re-processing with the new chunking parameters.

Embeddings can only be updated once the chunks are drafted.

During chunking, another parameter called `Separator` is used to split the document into chunks. By default, `["\n\n", "\n"]` are used as separators to split the document into chunks based on paragraphs and lines. If a separator is found in the document, the document is split into chunks at that point, as it takes precedence over the chunking parameters.

_Soon, you will be able to customize the separator based on your document structure._

---

## Embeddings

Embeddings are the vector representation of the document. All documents in a memory are converted into embeddings. These embeddings are used to find similar documents during retrieval. By default, OpenAI's latest embedding model `text-embedding-3-large` with `256` dimensions is used to generate embeddings.

> Embeddings are a way to represent text in a way that captures the meaning of the text. The embeddings are generated in such a way that similar texts have similar embeddings. This allows us to find similar texts by comparing their embeddings.

---

## Retrieval Testing

Retrieval testing is a way to test which parts of the document will be retrieved and passed on to the LLM for a given prompt. It helps in debugging the document chunking parameters for your use case. If the retrieval testing is not showing the expected results, you can adjust the chunking parameters and re-run the retrieval test.

Below are the steps to run retrieval testing:

1. In a memory, click on the Retrieval Testing tab.
2. Enter a prompt in the input box and click on the `Find Similar Chunks` button.

Guide: Build an AI Discord Bot https://langbase.com/docs/guides/discord-bot/ import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Build an AI Discord Bot

### A step-by-step guide to building an AI Discord bot using Langbase Pipes.

---

Want to add an AI-powered chatbot to your Discord server? This comprehensive guide walks you through the process of building an AI Discord Bot using Langbase Pipes. You'll be able to create a bot that interacts with users, answers questions, and provides a more engaging Discord experience. Let's get started!

## Example Bot

We built a bot named **Ask Langbase Docs**, a Discord bot that can answer any questions about Langbase documentation. It is currently available on the Langbase Discord server. You can ask any question about Langbase documentation using the `/ask` command, and the bot will respond with the answer.

Here is a preview of the bot in action:

## Prerequisites

Here's what you need:

* **Node.js and npm:** Make sure you have Node.js (version 18 or higher) and npm installed.
* **Langbase Account:** If you don't have one already, sign up for free at [https://langbase.com](https://langbase.com).
* **Discord Account and Server:** You'll need a Discord account and a server where you can add and test your bot.
* **Discord Developer Portal Access:** You'll be creating a Discord application and bot, so access to the [Discord Developer Portal](https://discord.com/developers/applications) is required. * **Cloudflare Account:** We'll be using Cloudflare Workers for hosting the bot. ## Step 1: Set Up Your Discord Application 1. **Create a Discord Application:** - Go to the [Discord Developer Portal](https://discord.com/developers/applications). - Click "New Application" and give it a name (e.g., "My Langbase Bot"). 2. **Add a Bot:** - Navigate to the "Bot" section in the left-hand menu. - Click "Add Bot" and confirm to create a bot user for your application. 3. **Bot Permissions:** - In the "Bot" settings, under "Privileged Gateway Intents," enable the "Server Members Intent." This is important for the bot to identify users on your server. 4. **Copy Important Credentials:** - Note down your application's **Client ID** and **Client Secret** – you'll need these later. ## Step 2: Set Up Your Discord Server 1. **Create a Discord Server:** - If you don't have a server already, create one for testing your bot. 2. **Invite the Bot to Your Server:** - Back in the Discord Developer Portal, in your application's settings, go to the "OAuth2" section. - In the "Scopes" section, select "bot". - Under "Bot Permissions", select the following: - **Text Permissions:** Send Messages, Read Message History - **Presence Permissions:** Connect - Copy the generated invite link and use it to add the bot to your Discord server. ## Step 3: Create Your Langbase Pipe Now, let's create the Langbase Pipe that will power the AI capabilities of your Discord bot: 1. **Create a New Pipe:** - Log in to your Langbase account. - Go to the "Pipes" section and click "Create Pipe." - Give your pipe a name (e.g., "My Discord Bot Pipe"). 2. **Configure Your Pipe:** - You can choose a pre-built template from Langbase's library or customize your own. Choose the template that best suits your Discord bot's purpose. For our example, we are using a `Question Answering` template that uses a Langbase RAG Pipe to answer questions about Langbase. 3. **Get Your Pipe API Key:** - Once your pipe is created, navigate to the pipe's settings and locate the **API key**. Copy this key – you'll need it to allow your Discord bot to communicate with Langbase. ## Step 4: Set up Your Code Editor 1. **Clone the Example Repository:** - Open your terminal or command prompt. - Clone the example repository provided by Langbase: ```bash git clone https://github.com/LangbaseInc/langbase-examples.git cd langbase-examples/examples/ai-discord-bot ``` 2. **Install Dependencies:** - Install the project dependencies using npm: ```bash npm install ``` ## Step 5: Configure Your Bot 1. **Rename `.dev.vars`:** - Rename the `example.dev.vars` file to `.dev.vars`. - **Important:** Add `.dev.vars` to your `.gitignore` file. This file will contain sensitive information that should not be publicly exposed. 2. 
**Environment Variables:** - Open the `.dev.vars` file and fill in the following placeholders with your Discord and Langbase credentials: ```bash DISCORD_TOKEN='YOUR_DISCORD_BOT_TOKEN' DISCORD_PUBLIC_KEY='YOUR_DISCORD_APPLICATION_PUBLIC_KEY' DISCORD_APPLICATION_ID='YOUR_DISCORD_APPLICATION_ID' DISCORD_GUILD_ID='YOUR_DISCORD_GUILD_ID' LANGBASE_PIPE_KEY='YOUR_LANGBASE_PIPE_API_KEY' ``` - **Where to find these values:** - **`DISCORD_TOKEN`**: Discord Developer Portal > Your Application > Bot > Token (click "Reset Token" to generate a new one) - **`DISCORD_PUBLIC_KEY`**: Discord Developer Portal > Your Application > General Information > Public Key - **`DISCORD_APPLICATION_ID`**: Discord Developer Portal > Your Application > General Information > Application ID - **`DISCORD_GUILD_ID`**: In Discord, right-click your server > Server Settings > Advanced > Enable Developer Mode. Now, right-click your server again > Copy ID. - **`LANGBASE_PIPE_KEY`**: From the Langbase Pipe you created in Step 3. ## Step 6: Define Your Bot's Commands In this example, we're creating a slash command, `/ask`, which will allow users to interact with our bot. 1. **Command Registration:** - To make the command usable, you need to register it. For testing, it's easiest to register it specifically for your server (guild): ```bash npm run register:guild ``` - Replace placeholders with your actual credentials. 2. **Understanding the Command Code:** - Open the `src/server.ts` file. This file contains the core logic for handling interactions with your bot. - The key part is the code that handles the `/ask` command: ``` // ... inside the fetch request handler in server.ts if (command === 'ask') { const pipe = new Pipe(LANGBASE_PIPE_KEY) const question = input const response = await pipe.generateText({ messages: [{ role: 'user', content: question }], }) return new Response(JSON.stringify({ type: 4, data: { content: response.generations[0].text }, })) } // ... rest of the code ``` - Explanation: - This code intercepts the `/ask` command, retrieves the user's input (their question), sends it to your Langbase Pipe, gets the AI-generated response, and formats it to send back to the Discord user. ## Step 7: Run Locally with ngrok 1. **Start Your Bot:** ```bash npm run dev ``` This will start your bot locally. 2. **Set Up ngrok:** - Open a new terminal window and run: ```bash npm run ngrok ``` - ngrok creates a public URL that forwards requests to your local development server. Copy the HTTPS URL provided by ngrok (e.g., `https://8098-24-22-245-250.ngrok.io`). 3. **Update Discord with ngrok URL:** - Go back to the Discord Developer Portal > Your Application > Settings > Bot. - Under "Interactions Endpoint URL," paste your ngrok HTTPS URL. - **Important:** Click "Save" to apply the changes. 4. **Test Your Bot:** - On your Discord server, type the `/ask` command followed by a question. Your bot should respond with an answer powered by Langbase! ## Step 8: Deploy to Cloudflare Workers To make your bot publicly accessible, deploy it to Cloudflare Workers: 1. **Cloudflare Worker Setup:** - Follow the instructions in the [Cloudflare Workers documentation](https://developers.cloudflare.com/workers/get-started/create-a-worker/) to create a new worker. 2. **Publish Your Code:** - From your project's root directory, run: ```bash npm run publish ``` 3. **Set Environment Variables on Cloudflare:** - In your Cloudflare Worker's settings, go to the "Settings" tab and then "Variables." - Add the environment variables (`DISCORD_TOKEN`, etc.) 
you defined in your `.dev.vars` file as secrets in your Cloudflare Worker settings.

4. **Point Discord to Cloudflare Worker:**
   - In the Discord Developer Portal, update the "Interactions Endpoint URL" for your bot to point to your Cloudflare Worker's URL.

## You Did It!

Congratulations! You have a working AI Discord bot powered by Langbase. Now, start exploring more advanced features, customizations, and integrations. Join the [Langbase Discord community](https://langbase.com/discord) for support and to share your creations.

Page https://langbase.com/docs/embed/platform/ import { generateMetadata } from '@/lib/generate-metadata';

## Platform

Limits and pricing for the Embed primitive on the Langbase Platform are as follows:

1. **[Limits](/embed/platform/limits)**: Rate and usage limits.
2. **[Pricing](/embed/platform/pricing)**: Pricing details for the Embed primitive.

Versions https://langbase.com/docs/features/versions/ import { generateMetadata } from '@/lib/generate-metadata';

# Versions

Versions in a Pipe let you see how its config has changed over time. You can go back and forth between different versions without having to change the Pipe config every time. There are four types of Pipe versions.

---

## 1. Production

As the name suggests, it is the Pipe config that is deployed on production. If your Pipe has multiple versions, the version deployed on production will be the one that the Pipe API uses.

---

## 2. Draft Fork

When you start updating your Pipe configuration, such as system prompts, metadata, variables, etc., a Draft fork version is created. This draft is based on the currently selected version. So if the selected version is v9, the new draft will be called Draft Fork of v9.

---

## 3. Preview

When you run a Pipe that is on a Draft fork version, a Preview version gets created. This version helps you track your pipe usage so that later you can check the model, params, and prompt/completion pair, or convert it into a Sandbox version.

---

## 4. Sandbox

Changing the Pipe config makes a Draft Fork version. Instead of deploying this Draft Fork version to Production, you can save it as a Sandbox version. You can get back to this Sandbox version later without having to manually change all the Pipe config.

---

## Compare Version Diff

Compare versions helps you see what changed between the current production version and any of the previous Pipe versions. It's similar to the VS Code source control editor.

---

Variables https://langbase.com/docs/features/variables/ import { generateMetadata } from '@/lib/generate-metadata';

# Variables

Pipes have built-in support for variables to handle dynamic prompts. Any text written between `{{}}` in your prompt instructions acts as a variable to which you can assign different values using the variable section inside a Pipe. Variables will appear once you add them using `{{variableName}}`.

Variables let you build applications where the variable values come from your external or internal users. Based on these variable values, you can generate completions using the Pipe API.

---

## How to use Variables

Follow this quick guide to learn how to use Variables inside a Pipe.

---

## Step 1: Create a Variable

Variables are created using the `{{variableName}}` syntax. Navigate to any of your Langbase Pipes and, anywhere in the system, user, or assistant prompt instructions, write a variable name using the syntax.

An example of a **system prompt** with a **variable**:

```
You are a helpful AI assistant that creates creative blog {{titles}}.
```

## Step 2: Assign a Value

Inside the Pipe, there is a **Variables** section on the left where you can assign a value to the variable you created. Let's give the value **AI applications** to the variable `{{titles}}`.

---

Usage https://langbase.com/docs/features/usage/ import { generateMetadata } from '@/lib/generate-metadata';

# Usage

Pipe's usage feature provides insights into how it's being utilized. Along with detailed logs of every Pipe request, this feature includes data on the total cost incurred by these requests, the total number of tokens used across all requests, their overall usage, and the average response time.

---

## Usage Prediction

The Pipe also offers insights on the estimated cost of running it a million times with the selected LLM. This cost prediction provides an estimate to the user and can help them optimize the Pipe for cost.

---

Store Messages https://langbase.com/docs/features/store-messages/ import { generateMetadata } from '@/lib/generate-metadata';

# Store Messages

A Pipe can store both prompts and their completions if the **Store messages** option in the Pipe meta is enabled. Otherwise, only system prompts and few-shot messages will be saved; no completions, final prompts, or variables will be retained, to ensure privacy.

---

## How to enable/disable message storage?

Follow this quick guide to learn how to enable or disable storage of your messages in Langbase.

---

## Step 1: Toggle the store message option

Navigate to your Langbase pipe. In the **Meta** section, find the **Store messages** option. Toggle the switch to enable or disable message storage.

Here is what your logs will look like if the Pipe does not store messages. Your logs will look like this if the Pipe stores messages.

---

What is tool calling? https://langbase.com/docs/features/tool-calling/ import { generateMetadata } from '@/lib/generate-metadata';

# What is tool calling?

LLM tool calling allows a language model (like GPT) to use external tools (functions inside your codebase) to **perform** tasks it can't handle alone. Instead of just generating text, the model can **respond with a tool call** (the name of the function to call with parameters) that triggers a function in your code.

You can use tool calling to get the model to do things like fetch live information, run code for complex calculations, get data from a database, call another pipe, or interact with other systems.

## How does it work?

Langbase offers tool calling with multiple providers ([view complete list](/supported-models-and-providers#tool-support)) to provide more flexibility and control over the conversation. You can use this feature to call tools in your code and pass the results back to the model.

1. **Describe the tool**: You can describe a tool in your Pipe by providing the tool name, description, and arguments. You can also specify the data type of the arguments and whether they are required or optional. These tools are then passed to the model.
2. **User prompt**: You send a user prompt that requires data that the tool can provide. The model will generate a JSON object with the tool name and its arguments.
3. **Call the tool**: You can use this JSON object to call the tool in your code.
4. **Pass the result**: You can pass the result back to the model to continue the conversation.

---

## Tool definition schema

The tool calling feature requires you to add a tool definition schema in your Pipe. This tool definition is passed to the model.
The model then generates a JSON object with the tool name and its arguments based on the user prompt.

The tool definition schema must contain a `name` key inside `function`, which is the name of the tool. You can also provide a description of the tool, the parameters it accepts, and their data types, among other key/value pairs. Here is an example of a valid tool definition schema:

```json
{
	"type": "function",
	"function": {
		"name": "get_current_weather",
		"description": "Get the current weather of a given location",
		"parameters": {
			"type": "object",
			"required": ["location"],
			"properties": {
				"unit": {
					"enum": ["celsius", "fahrenheit"],
					"type": "string"
				},
				"location": {
					"type": "string",
					"description": "The city and state, e.g. San Francisco, CA"
				}
			}
		}
	}
}
```

---

## Prerequisites: Generate Langbase API Key

We will use the Langbase SDK in this guide. To work with it, you need to generate an API key. Visit the [User/Org API key documentation](/api-reference/api-keys) page to learn more.

---

## How to make tool calls in Pipes using Langbase SDK

Follow this quick guide to learn how to make tool calls in Pipes using the Langbase SDK.

## Step 0: Create a Pipe

Create a [new](https://pipe.new) Pipe or open an existing Pipe in your Langbase account. Alternatively, you can fork this [`tool-calling-example`](https://langbase.com/examples/tool-calling-example) pipe and skip to step 2.

## Step 1: Add a tool to the Pipe

Let's add a tool to get the current weather of a given location. Click on the `Add` button in the Tools section to add a new tool. This will open a modal where you can define the tool.

The tool we are defining will take two arguments: `location` and `unit`. The `location` argument is required and the `unit` argument is optional. The tool definition will look something like the following.

```json
{
	"type": "function",
	"function": {
		"name": "get_current_weather",
		"description": "Get the current weather of a given location",
		"parameters": {
			"type": "object",
			"required": ["location"],
			"properties": {
				"unit": {
					"enum": ["celsius", "fahrenheit"],
					"type": "string"
				},
				"location": {
					"type": "string",
					"description": "The city and state, e.g. San Francisco, CA"
				}
			}
		}
	}
}
```

Go ahead and deploy the Pipe to production. If a Pipe has tools, the playground will be disabled. You can only test tool calling with our `run` API or SDK.

## Step 2: Install dependencies

We will use the Langbase SDK to run the pipe. We will also use `dotenv` to load env variables from a `.env` file.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm i langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

## Step 3: Add env variables

Create a `.env` file in the root of your project and add the following environment variables.

```bash
LANGBASE_API_KEY=""
```

Add your Langbase API key between the quotes.

## Step 4: Run pipe with user prompt to make a tool call

Now let's create an `index.js` file where we will define the `get_current_weather` function and run the pipe with a user prompt that triggers the tool call.
```js
import 'dotenv/config';
import { Langbase, getToolsFromStream, getRunner } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY, // https://langbase.com/docs/api-reference/api-keys
});

function get_current_weather() {
	return "It's 70 degrees and sunny in SF.";
}

const tools = {
	get_current_weather,
};

async function main() {
	const messages = [
		{
			role: 'user',
			content: 'Whats the weather like in SF?',
		},
	];

	const { stream: initialStream, threadId } = await langbase.pipes.run({
		messages,
		stream: true,
		name: 'tool-calling-example', // name of the pipe to run
	});
}

main();
```

Replace `tool-calling-example` with your pipe name if it has a different name. Here is what we did so far:

1. **Initialized** the Langbase SDK with the API key.
2. **Defined** a `get_current_weather` function that returns the current weather of San Francisco.
3. **Created** a `tools` object that contains the `get_current_weather` function.
4. **Wrote** a `main` function that **runs** the Pipe using the **SDK** with a user prompt that triggers the tool call.

Because the user prompt requires the current weather of San Francisco, the model will respond with a tool call like the following:

```json
{
	"role": "assistant",
	"content": null,
	"tool_calls": [
		{
			"id": "call_u28sPmmCAWkop0OdgDYDJ9OG",
			"type": "function",
			"function": {
				"name": "get_current_weather",
				"arguments": "{\"location\": \"San Francisco\"}"
			}
		}
	]
}
```

## Step 5: Check for tool calls

Next, we check if we received any tool calls from the model. If the model has called a tool, we **call the functions** specified in the tool calls and send the **tool results** back to the model using the Langbase SDK.

```js
import 'dotenv/config';
import { Langbase, getToolsFromStream, getRunner } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY, // https://langbase.com/docs/api-reference/api-keys
});

function get_current_weather() {
	return "It's 70 degrees and sunny in SF.";
}

const tools = {
	get_current_weather,
};

async function main() {
	const messages = [
		{
			role: 'user',
			content: 'Whats the weather like in SF?',
		},
	];

	const { stream: initialStream, threadId } = await langbase.pipes.run({
		messages,
		name: 'tool-calling-example', // name of the pipe to run
		stream: true,
	});

	const [streamForToolCalls, streamForResponse] = initialStream.tee();

	const toolCalls = await getToolsFromStream(streamForToolCalls);
	const hasToolCalls = toolCalls.length > 0;

	if (hasToolCalls) {
		const messages = [];

		// call all the functions in the toolCalls array
		toolCalls.forEach(toolCall => {
			const toolName = toolCall.function.name;
			const toolParameters = JSON.parse(toolCall.function.arguments);
			const toolFunction = tools[toolName];
			const toolResult = toolFunction(toolParameters);

			messages.push({
				tool_call_id: toolCall.id, // required: id of the tool call
				role: 'tool', // required: role of the message
				name: toolName, // required: name of the tool
				content: toolResult, // required: response of the tool
			});
		});

		const { stream: chatStream } = await langbase.pipes.run({
			messages,
			name: 'tool-calling-example', // name of the pipe to run
			threadId: threadId,
			stream: true,
		});

		const runner = getRunner(chatStream);

		runner.on('content', content => {
			process.stdout.write(content);
		});
	} else {
		// no tool calls, handle the response stream
	}
}

main();
```

Lastly, here is what we did:

1. **Cloned** the initial stream to use it later for the tool calls check.
2. **Used** the `getToolsFromStream` SDK function to **get** tool calls from the stream, if any.
## Step 6: Run the code

Let's run the code using the following command:

```bash
node index.js
```

You should see the following output printed on your terminal:

```
The weather in San Francisco is 70 degrees and sunny.
```

And that's it! You have successfully made a tool call using pipes & the Langbase SDK.

---

## Next Steps

- Build something cool with Langbase.
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

Structured Outputs
https://langbase.com/docs/features/structured-outputs/
import { generateMetadata } from '@/lib/generate-metadata';

# Structured Outputs

Structured Outputs is a feature that guarantees responses from language models adhere strictly to your supplied JSON schema. Langbase supports Structured Outputs in pipe and agent primitives. You can read our API reference for [pipes](/api-reference/pipe) and [agents](/api-reference/agent).

Structured Outputs offer benefits such as:

- **Boost reliability**: By enforcing the schema, the chances of runtime errors are reduced.
- **Eliminate post-processing**: Responses are already shaped exactly as you expect, with no need for manual parsing.
- **Explicit refusals**: If the model can't fulfill a request, it returns a clear, standardized error or refusal, making edge cases easier and safer to handle.

Learn [how to use Structured Outputs](https://langbase.com/docs/guides/structured-outputs) in Langbase.

---

Rerank
https://langbase.com/docs/features/rerank/
import { generateMetadata } from '@/lib/generate-metadata';

# Rerank

Our custom rerank models reorder search results based on relevance, delivering more accurate and meaningful rankings. We train our models on carefully curated, high-quality datasets to outperform industry standards.

For enterprise customers, we take it a step further by training rerank models on your own data for even better relevance tailored to your needs. Want to improve your search relevance? Let's discuss a custom rerank model for your use case.

Stream
https://langbase.com/docs/features/stream/
import { generateMetadata } from '@/lib/generate-metadata';

# Stream

Pipes support streaming LLM responses for all of their supported LLM models, on both the API and the Langbase dashboard. Streaming can be enabled or disabled inside a Pipe if required.

## Streaming Response

When streaming is enabled, the API will return a streaming response in [OpenAI SSE format](https://platform.openai.com/docs/api-reference/streaming). The response will be a series of JSON objects, each representing a chunk of the completion.

To implement streaming, you will need to parse the response and display the chunks as they arrive. Parsing the stream is non-trivial, and you may want to use a library to help with this. Any library that helps with OpenAI streaming will work with Langbase Pipe's streaming responses. You can also check our [Chatbot Example code](https://github.com/LangbaseInc/langbase-examples/tree/main/examples/ai-chatbot) to implement streaming in your app.
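If you are using the Langbase SDK, its `getRunner` helper takes care of parsing the stream for you. Here is a minimal sketch; the pipe name is a placeholder for one of your own pipes:

```js
import 'dotenv/config';
import { Langbase, getRunner } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY,
});

async function main() {
	const { stream } = await langbase.pipes.run({
		name: 'summary-agent', // placeholder: use your own pipe name
		stream: true,
		messages: [{ role: 'user', content: 'Hello!' }],
	});

	// getRunner parses the SSE chunks and emits plain text content.
	const runner = getRunner(stream);

	runner.on('content', content => {
		process.stdout.write(content);
	});
}

main();
```

Here is what a streaming response looks like.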
```txt {{ title: 'Example response' }} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"!"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" How"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" can"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" I"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" assist"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" you"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":" today"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{"content":"?"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r","object":"chat.completion.chunk","created":1719848588,"model":"gpt-3.5-turbo-0125","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} ``` ```json {{ title: 'Example chunk' }} { "id":"chatcmpl-9gDTI1K8XnBBhGou2ZnF7wQRU3M7r", "object":"chat.completion.chunk", "created":1719848588, "model":"gpt-3.5-turbo-0125", "system_fingerprint":null, "choices":[ { "index":0, "delta":{ "content":"Hello" }, "logprobs":null, "finish_reason":null } ] } ``` --- Prompt https://langbase.com/docs/features/prompt/ import { generateMetadata } from '@/lib/generate-metadata'; # Prompt A prompt sets the context and guidelines for both the LLM and the user, shaping the conversation and responses. Pipe can contain system, user, and AI assistant prompts. --- ## 1. System Prompt A system prompt sets the context, instructions, and guidelines for a language model before it receives questions or tasks. 
It helps define the model's role, personality, tone, and other details to improve its responses to user input.

---

## 2. User Prompt

A user prompt is a text input that a user provides to an LLM, to which the model responds.

---

## 3. AI Assistant Prompt

An AI assistant prompt is the LLM's generated output in response to a user prompt. It is the response that the LLM generates based on the user's input.

All of these prompts are present inside a Pipe and can also be customized. These prompts can use [variables](/features/variables) that make them dynamic.

---

## How to use System, User & AI assistant prompts

Follow this quick guide to learn how to use System, User & AI assistant prompts.

---

## Step 1: System Prompt

Navigate to any of your Langbase Pipes and click on the **System Prompt Instructions** text area to write a system prompt.

```
You are a helpful AI assistant that creates creative blog titles.
```

## Step 2: User Prompt

Click on the **User** button in front of **Add Message By** to write a user prompt. It will open a text area with role **User**. Write a user prompt in this text area.

```
Can you suggest a creative blog title for my blog post?
```

---

## Step 3: AI Assistant Prompt

Click on the **AI Assistant** button in front of **Add Message By** to write an AI assistant prompt. It will open a text area with role **AI Assistant**. Write an AI assistant prompt in this text area.

```
"Unleashing the Power of AI: Exploring Innovative Applications"
```

---

Safety
https://langbase.com/docs/features/safety/
import { generateMetadata } from '@/lib/generate-metadata';

# Safety

Define an AI safety prompt for any LLM inside a Pipe. For instance, do not answer questions outside of the given context. One of its use cases can be to ensure the LLM does not provide any sensitive information in its response from the provided context.

---

## How to define Safety instructions

Following is a quick guide on how to define Safety instructions inside a Pipe.

---

## Step 1: Write Safety prompt

Navigate to any of your Pipes on Langbase. Click once on the Safety text area to add a safety prompt.

---

## Step 2: Deploy the Pipe

Once you have added the safety prompt, click on the **Deploy to Production** button to deploy the Pipe.

That's it! Now the LLM inside the Pipe will follow the safety instructions you have defined.

---

Parser
https://langbase.com/docs/features/parser/
import { generateMetadata } from '@/lib/generate-metadata';

# Parser

Parser is a Langbase AI primitive that allows you to extract text content from various document formats. This is particularly useful when you need to process documents before using them in your AI applications.

Parser can handle a variety of formats, including PDFs, CSVs, and more. By converting these documents into plain text, you can easily analyze, search, or manipulate the content as needed.

Parser can be helpful in scenarios where you need to:

- **Extract text from documents:** Parser can read and extract text from various document formats, including PDFs, CSVs, DOCX, and HTML.
- **Enable full-text search:** By converting documents into plain text, you can perform full-text searches and indexing on the content.
- **Automate content analysis:** Parser can help automate the process of analyzing and processing documents, reducing manual effort.
- **Create complex AI Agents:** Parser integrates seamlessly with other Langbase primitives and can be used inside [Agent Workflows](/workflow) to create AI Agents that process and analyze documents.

Learn [how to use the Parser primitive](/parser) in Langbase.
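As a rough sketch of what this looks like with the Langbase SDK (the file name is a placeholder, and you should confirm the exact method signature in the Parser reference linked above):

```js
import 'dotenv/config';
import { readFile } from 'fs/promises';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY,
});

async function main() {
	// Read a local PDF and extract its plain-text content.
	const document = await readFile('report.pdf'); // placeholder file

	const parsed = await langbase.parser({
		document,
		documentName: 'report.pdf',
		contentType: 'application/pdf',
	});

	console.log(parsed);
}

main();
```

---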
Organizations
https://langbase.com/docs/features/organizations/
import { generateMetadata } from '@/lib/generate-metadata';

# Organizations

Langbase supports organizations, fostering collaboration among users within a shared workspace. You can easily establish your organization and invite your team members to it for collaboration. All organization members get seamless access to all pipes within the organization.

To get started with Langbase, you'll need to [create a free personal account on Langbase.com][signup] and verify your email address.

Some of the organization features include:

- Create organizations and invite members.
- Create unlimited pipes within the organization.
- View and edit any pipe within the organization.
- Organization-based billing.
- Organization-level keys.

Here is how you can create an organization on Langbase to make the most of these features.

---

## Step 1: Create an organization

In the Langbase dashboard, click on your user panel to access the organization settings. Click on the Create organization button in the dropdown.

You should see a New Organization dialog. Fill in your organization details, like name and description, and confirm the creation.

Once created, you will be redirected to your org's dashboard. This is where you will access your org's pipes, settings, and billing information.

---

## Step 2: Invite members to your organization

Within your organization's dashboard, navigate to its settings. In your org settings, scroll down to the members section. Here you will be able to manage the current members of your organization, their roles, and invite new members.

To invite a new member, please ensure that the member is signed up on Langbase. Then, enter their username or email address to add them. They will be added to your organization.

---

## Step 3: Collaborate

Your team will be able to collaborate in your organization using the following features:

- **Access Pipes:** All members can access and run any pipe in the organization.
- **Create Pipes:** All members can create new pipes in the organization.
- **Edit Pipes:** All members can edit any pipe within the organization, facilitating collaborative prompt engineering, testing, evaluation, and refinement.
- **Organization keysets:** Enable organization-wide keysets to effortlessly integrate keys from your LLM Providers. For instance, by setting up keysets for OpenAI, Anthropic, and Together at the organization level, all members can seamlessly utilize them without the need for individual configuration.
- **Organization billing:** Manage your organization's billing information and subscription plans from the organization settings.
- **Admin Control:** Organization admins can manage their organization's settings, including members, billing, and other configurations. Pipes created by the admins are automatically shared with all members, but only admins can access the pipe's settings.

[signup]: https://langbase.fyi/awesome

Open Pipes
https://langbase.com/docs/features/open-pipes/
import { generateMetadata } from '@/lib/generate-metadata';

# Open Pipes

Langbase now allows users to create and share Open pipes, making it easier to collaborate and share your work.
Both user and organization Pipes can be made public, enabling you to share them with anyone.

## Pipe Privacy

When creating a new pipe, you now have the option to set its privacy in the Privacy section, allowing you to choose between making your pipe open or private to other users.

### Public

Anyone with the link can view and fork your pipe to their account. Open pipes are visible to anyone through search engines (Google, Bing, etc.), and all information about the pipe, except for sensitive data, is visible to everyone.

### Private

Only you or your organization members can access the pipe. Private pipes cannot be shared with others and are visible only to you or your organization members.

### Features of Open Pipes

- Visibility: Anyone can view the pipe.
- Forking: Anyone can fork the pipe to their account.
- Metrics: Anyone can view the total number of runs, tokens, and contributors.
- Information: Anyone can view the pipe's description and tags.

### Pipe Sensitive Data

Sensitive data remains private and includes:

- Pipe API keys (or LLM API keys)
- Pipe runs and messages
- Pipe logs and settings
- Pipe cost and billing information
- Runnable playground of a pipe

### Managing Existing Pipes

All existing pipes are private by default. You can change the privacy settings for each pipe from the pipe settings, making it open or private as needed.

## Sharing Profiles

You can share both your personal profile and your organization profile. Only open pipes will be visible to others.

## Public Profile

Your public profile includes:

- Profile Picture: Upload a profile picture to make your profile more personal.
- Bio: Share a little bit about yourself with the Langbase community.
- Public Pipes: All the public pipes you've created are listed on your profile.

### Example

Check out our first public pipe: [AI Title Generator](https://langbase.com/langbase/ai-title-generator)

## Public Runs

Public Runs allow you to share live demonstrations of your pipes without incurring any LLM API costs. Demonstrations are streamed directly from Langbase.

### Using Public Runs

- Once your Pipe is public, you can use Public Runs as a feature.
- In your Pipes window, scroll down and go to the "Runs" section.
- Choose one of the runs you have conducted from your experimentation list.
- Scroll down in the selected run's details and click on the "Save as completion" button.

### Sharing your Public Pipes

After saving the run as a completion, share your Pipe URL with others to showcase a live demonstration streamed from Langbase.

## FAQs

### What is a public pipe?

A public pipe, or an open pipe, is a pipe that is visible to anyone with the link or through search engines (Google, Bing, etc.). Anyone can view the pipe and fork it to their account. Except for the sensitive data, all the information about the pipe is visible to everyone.

### What about my existing pipes?

All your existing pipes are private by default. You can change the privacy settings for each pipe from the pipe settings.

### Can I share my profile?

Yes, you can share your profile with anyone. Only your public pipes will be visible to others.

### Can I share my org profile?

Yes, you can share your org profile with anyone. Only your org's public pipes will be visible to others.

### Can I share my private pipe?

No, only public pipes can be shared with others. Private pipes are visible only to you or your org members.

### Can I make my private pipe public?

Yes, you can change the privacy settings of your private pipe to public from the pipe settings. Or vice versa.
### What's included in the Public Profile?

- Profile Picture: You can upload a profile picture to make your profile more personal.
- Bio: Share a little bit about yourself with the Langbase community.
- Public Pipes: All the public Pipes you've created are listed on your profile.

More public profile improvements are on the roadmap and coming soon.

---

Prompt Optimization
https://langbase.com/docs/features/prompt-optimization/
import { generateMetadata } from '@/lib/generate-metadata';

# Prompt Optimization

Our advanced prompt optimizer models refine and enhance prompts to deliver more accurate, high-quality outputs. Whether you're generating text, retrieving information, or running complex AI workflows, our models help you get the best possible results.

For enterprises, we offer custom-trained prompt optimizers tailored to your specific data and use cases, ensuring even greater accuracy and relevance for your business needs. Ready to optimize your prompts? Let's talk!

Readme
https://langbase.com/docs/features/readme/
import { generateMetadata } from '@/lib/generate-metadata';

# Readme

Every Pipe contains a readme at the very bottom that you can edit to include any documentation relevant to it. The readme editor supports markdown, so you can format your text or add images through a remote URL.

Keysets
https://langbase.com/docs/features/keysets/
import { generateMetadata } from '@/lib/generate-metadata';

# Keysets

Langbase offers multiple LLM models through different providers like OpenAI, TogetherAI, Groq, Google, etc. Each Pipe can be configured with any of the supported LLM models. Since each LLM provider has its own unique API key, you can add these keysets to your profile/org settings and every Pipe will use those LLM keys by default.

If needed, you can specify a different LLM keyset for individual Pipes by adding a custom keyset directly in the Pipe's settings. Keysets allow you to seamlessly transition between various LLM models within your pipe without needing to add the API key again, provided that the model provider's key is already present in your keyset.

Langbase has three types of keysets:

1. Pipe Keysets
2. User Keysets
3. Organization Keysets

---

## Pipe Keysets

A pipe-level LLM API keyset can be set for each pipe individually inside the pipe settings. This keyset carries the highest priority. You can use Pipe Keysets to override the user/organization LLM API keysets.

To add a pipe keyset, navigate to your **Pipe > Settings**. In the **LLM API Keysets** section, select **Pipe level keys** and add your keys.

---

## User Keysets

User-level LLM API keysets are configurable in your profile settings. For user accounts, they are the default keyset used for all pipes unless you have set a pipe keyset for a specific pipe.

To add a user keyset, go to your **User profile > Settings > LLM API Keys**. Alternatively, you can select **User level keys** inside any pipe's settings and click **Configure Keysets**. Here, add your keys for all the providers you need.

---

## Organization Keysets

Organizations use organization-level keysets, which are configurable in your organization's settings. They are similar to user keysets, but for an organization. For orgs, they are the default keysets for all pipes unless you have set a pipe-level keyset in a specific pipe.

To add an org keyset, go to your **Organization > Settings > LLM API Keys**. Alternatively, you can select **Org level keys** inside any pipe's settings in your organization and click **Configure Keysets**. Here, add your keys for all the providers you need.
JSON Mode https://langbase.com/docs/features/json-mode/ import { generateMetadata } from '@/lib/generate-metadata'; # JSON Mode
JSON mode instructs the LLM to give output in JSON and asks it to conform to a provided schema in the prompt. To activate JSON mode, you need to select a model that supports it.
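For example, with JSON mode enabled on a pipe and a schema described in its system prompt, the completion can be parsed directly. A minimal sketch (the pipe name is a placeholder for one of your own pipes):

```js
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY,
});

async function main() {
	const { completion } = await langbase.pipes.run({
		name: 'json-mode-example', // placeholder: a pipe with JSON mode enabled
		stream: false,
		messages: [
			{
				role: 'user',
				content: 'Suggest a blog title about serverless AI agents.',
			},
		],
	});

	// With JSON mode on, the completion should be valid JSON
	// conforming to the schema described in the system prompt.
	const data = JSON.parse(completion);
	console.log(data);
}

main();
```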
---

## Supported Models

Currently, the following models support JSON mode.

### OpenAI

- `o3-mini`
- `gpt-4o`
- `gpt-4o-2024-08-06`
- `gpt-4o-mini`
- `gpt-4-turbo`
- `gpt-4-turbo-preview`
- `gpt-4-0125-preview`
- `gpt-4-1106-preview`
- `gpt-3.5-turbo`
- `gpt-3.5-turbo-0125`
- `gpt-3.5-turbo-1106`

### Google

- `gemini-1.5-pro`
- `gemini-1.5-flash`
- `gemini-1.5-flash-8b`

### Together

- `Mistral-7B-Instruct-v0.1`
- `Mixtral-8x7B-Instruct-v0.1`

### Deepseek

- `deepseek-chat`

---

## Use JSON Mode in your Pipe

To use JSON mode, ensure that you have selected a model that supports it. You should see a JSON mode toggle in the Pipe IDE. Turn the toggle ON to activate JSON mode. Additionally, you can provide a schema in the system prompt or messages to further optimize the output.

### Alternative

If you are using a model that does not support JSON mode, try asking the model to produce output in JSON and providing the schema in your prompt. The LLM will try to conform to the schema as much as possible.
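For instance, a system prompt along these lines (illustrative only) nudges the model toward schema-conformant output:

```
You are a helpful assistant. Respond ONLY with valid JSON that conforms to this schema:
{ "title": "string", "tags": ["string"] }
Do not include any text outside the JSON object.
```

---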
Model Presets
https://langbase.com/docs/features/model-presets/
import { generateMetadata } from '@/lib/generate-metadata';

# Model Presets

When configuring a model in a Langbase pipe, you have the option to fine-tune its response parameters. These parameters, such as temperature, max-tokens, frequency penalty, presence penalty, and top p, directly influence the model's response. However, these parameters can be confusing.

To get started quickly, you can select a model preset. These presets include:

- **Precise**: Tuned for precise and accurate responses.
- **Balanced**: Strikes a balance between accuracy and creativity.
- **Creative**: Prioritizes creativity and diversity in the generated responses.
- **Custom**: Allows you to manually configure the response parameters.

You can select the preset that fits your use case.

---

## How to use Model Presets?

Follow this quick guide to learn how to use model presets in Langbase.

---

## Step 1: Select a Model

Navigate to your Langbase pipe and click open the model selector. You will see a list of models available in Langbase. Select a model to configure.

## Step 2: Choose a Preset

After selecting a model, you will see the model presets below. Choose the preset that best suits your use case. You can choose from Precise, Balanced, and Creative.

If you want to manually configure the response parameters, you can change any parameter and the preset will change to Custom. Once satisfied with the preset, close the model dialog and save the pipe.

Every model behaves differently after changing these parameters. We have created these presets after thorough research, but feel free to try custom settings of your own if the response is not ideal.

Logs
https://langbase.com/docs/features/logs/
import { generateMetadata } from '@/lib/generate-metadata';

# Logs

Every Pipe request translates into a detailed log that is present inside the `Usage` tab. The log contains the following information:

- **Pipe Version**: The Pipe version that made the request. Read more about versions [here](https://langbase.com/docs/features/versions).
- **Type**: The type of the Pipe, i.e., [Generate](https://langbase.com/docs/features/generate) & [Chat](https://langbase.com/docs/features/chat).
- **Model**: The LLM used in the request.
- **Prompt tokens**: The total number of prompt tokens that the user sent to the LLM.
- **Completion tokens**: The total number of tokens returned in the completion by the LLM.
- **Total tokens**: Sum of prompt and completion tokens.
- **Prompt cost**: LLM cost for processing the sent prompt.
- **Completion cost**: LLM cost for generating the completion.
- **Total cost**: Sum of prompt and completion cost.
- **Cached**: Whether the request was cached or not.

---

## How to view Pipe logs

Following is a quick guide on how to view detailed logs for every run inside a Pipe.

---

## Step 1: Navigate to Usage tab

Navigate to the `Usage` tab inside any of your Pipes on Langbase.

---

## Step 2: Select a log

Click on any of the logs to view detailed information about that run.

---

Moderation
https://langbase.com/docs/features/moderation/
import { generateMetadata } from '@/lib/generate-metadata';

# Moderation

Moderation is a feature that the Pipe offers for all the OpenAI models. By default, it is turned on; if it is turned off, the Pipe will not call the OpenAI moderation endpoint to identify harmful content.
---

Generate
https://langbase.com/docs/features/generate/
import { generateMetadata } from '@/lib/generate-metadata';

# Generate

Generate is a Pipe type that is designed to generate LLM completions. It's not intended for chat use-cases. Since it is still a Pipe, it has all the Pipe features like streaming, logs, versions, safety, etc.

Experiments is a Pipe feature that is only available for Generate Pipes. Read more about Experiments [here](/features/experiments).

---

## How to create a Generate Pipe

Follow this quick guide to learn how to create a Generate Pipe.

---

## Step 1: Create a new Pipe

Navigate to the **Pipes** tab from the sidebar and click on the **Add New** button. Alternatively, you can visit [pipe.new](https://pipe.new) to create a new Pipe.

---

## Step 2: Select Pipe type

By default, the Pipe type is set to **Generate**. Give your Pipe a name and click on the **Create Pipe** button. This will create a new Generate Pipe with the default settings.

---

Fork
https://langbase.com/docs/features/fork/
import { generateMetadata } from '@/lib/generate-metadata';

# Fork

Fork allows you to duplicate a pipe. Once duplicated, you can rename, modify, and manage the new pipe independently from the original. You can fork any pipe in your account or within any of your organizations.

Much like a GitHub fork, forking a pipe is useful for various scenarios, such as:

- **Experimentation:** Safely test changes or new ideas on a copy without affecting the original pipe.
- **Customization:** Quickly customize any pipe with a fork. Like changing meta, LLMs, or instructions.
- **Collaboration:** Share a modified version of the pipe with team members or across different departments while keeping the original intact.

---

## How to fork a pipe?

Follow this quick guide to learn how to fork a pipe in Langbase.

---

## Step 1: Open and fork the pipe

Navigate to your Langbase pipe and click on the **Fork** button located at the top right corner of the pipe editor.

## Step 2: Enter details of the forked pipe

First, select the owner of the forked pipe, which can be you or any of your organizations. Then, you can change the name and description of the forked pipe. Click on the **Fork** button to complete the fork.

## Step 3: Customize and use the forked pipe

Once forked, you will be navigated to the new forked pipe. You can now modify, run, or save it as per your requirements.

---

Few-shot Learning
https://langbase.com/docs/features/few-shot/
import { generateMetadata } from '@/lib/generate-metadata';

# Few-shot Learning

Few-shot learning helps an LLM pick up and apply knowledge from just a handful of examples. It involves using multiple sets of prompts between the user and the AI assistant that we internally send to the LLM with every API request.

Pipe enables developers to define multiple user and AI assistant prompt and completion pairs that can be used to few-shot train any LLM.

---

## How to do Few-shot Learning

Following is a quick guide on how to do few-shot learning inside a Pipe.

---

## Step 1: Create user and AI assistant prompts

Navigate to any of your Pipes on Langbase. Click once on User and then on AI Assistant beside **Add Message By**. It will create two text areas, one for the user prompt and the other for the AI assistant prompt.

---

## Step 2: Add completions

Add completions for both user and AI assistant prompts. These completions will be used by the LLM to learn and generate responses based on the incoming user prompts.
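When calling a Pipe through the SDK, you can achieve the same effect by sending the example pairs in the `messages` array before the real user prompt. A minimal sketch (the pipe name and examples are placeholders):

```js
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY,
});

async function main() {
	const { completion } = await langbase.pipes.run({
		name: 'few-shot-example', // placeholder: use your own pipe name
		stream: false,
		messages: [
			// Few-shot pairs: each user prompt is followed by the kind
			// of completion you want the model to imitate.
			{ role: 'user', content: 'Blog title about RAG?' },
			{ role: 'assistant', content: '"Grounded and Found: RAG for Reliable LLMs"' },
			{ role: 'user', content: 'Blog title about AI agents?' },
			{ role: 'assistant', content: '"Agents at Work: When LLMs Take Action"' },
			// The actual prompt to answer.
			{ role: 'user', content: 'Blog title about serverless AI?' },
		],
	});

	console.log(completion);
}

main();
```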
---

Chat
https://langbase.com/docs/features/chat/
import { generateMetadata } from '@/lib/generate-metadata';

# Chat

Chat is another Pipe type that is intended for generating chat-like completions. It can be used to build chatbots, another ChatGPT, etc.

In every Chat Pipe, you'll find an extra `Chat` link in the Pipe navbar. Clicking on this takes you to a page with a start chatting CTA. This action opens up the Langbase Chat UI, tailored uniquely for each Pipe, meaning chat threads will vary from one Chat Pipe to another.

---

## How to create a Chat Pipe

Follow this quick guide to learn how to create a Chat Pipe.

---

## Step 1: Create a new Pipe

Navigate to the **Pipes** tab from the sidebar and click on the **Add New** button. Alternatively, you can visit [pipe.new](https://pipe.new) to create a new Pipe.

---

## Step 2: Select Pipe type

By default, the Pipe type is set to **Generate**. Change the Pipe type to **Chat**. Give your Pipe a name and click on the **Create Pipe** button. This will create a new Chat Pipe with the default settings.

## Step 3: LangUI for Chat Pipe

LangUI helps you Build & Deploy your own ChatGPT with any LLM and any Data for RAG. One-click deploy LangUI to Langbase, a ChatGPT-style, open-source chat assistant pipe, to create a chatbot with any LLM and give it access to any data for RAG.

Click on the **Chat** link in the Pipe navbar to open the LangUI for your chat Pipe. Now you have your own custom ChatGPT-style chatbot.

---

Langbase API
https://langbase.com/docs/features/api/
import { generateMetadata } from '@/lib/generate-metadata';

# Langbase API

Langbase is a serverless AI developer platform, offering robust APIs for pipes and memory to build composable Large Language Model (LLM) AI agents. The Pipe and Memory APIs enable developers to build AI agents and features that can perform various tasks, such as generating text, creating embeddings, retrieving similar chunks, building RAG, and more.

---

Langbase offers the following APIs:

## Pipe API

Pipes are the core building blocks of Langbase, and they can be used to create various AI agents for different use cases. Pipes can be used to generate text, chat with users, and more.

The Pipe API allows you to create, update, and delete pipes. The API also provides endpoints to run the AI Pipe for chat and generation.

Learn [more](/api-reference/pipe) in the Pipe API reference.

---

## Memory API

Memory is a managed search engine as an API for developers. Imagine an all-in-one serverless RAG (Retrieval-Augmented Generation) with a vector store, file storage, attribution data, parsing + chunking, and a semantic similarity search engine.

The Memory API allows you to manage the memory of your Langbase account. You can use the Memory API to create, list, and update memories. The API also provides endpoints to upload documents, list them, and retry generating embeddings for documents.

Learn [more](/api-reference/memory) in the Memory API reference.

---

Examples
https://langbase.com/docs/features/examples/
import { generateMetadata } from '@/lib/generate-metadata';

# Examples

Every Langbase pipe comes with multiple ready-to-use examples to quickly set up the Pipe without needing much assistance from the user. These examples aim to quickly get you started with building your first Pipe.

These examples include:

- `Learn: Few Shot Prompting`: This is an onboarding example to get you started with few-shot prompting in a pipe.
- `Using Variables`: This example demonstrates how to use variables in a pipe.
- `Less Wordy GPT`: A less wordy prompt example in a pipe. Great for chat purposes.
- `Prompt Builder`: An advanced pipe example which uses few-shot prompts and variables to build better prompts.

---

## How to use Pipe Examples?

Follow this quick guide to learn how to use pipe examples in Langbase.

---

## Step 1: Select an Example

Navigate to your Langbase pipe and click open the examples selector. You will see a list of examples available in Langbase. Let's select the **Few Shot Prompting** example.

## Step 2: Configure the Example

Selecting the example will add the example's prompts and other parameters to your pipe. You can see the few-shot prompts added in the pipe. Now, you can modify the example as per your requirements, then Run or Save the pipe.

P
https://langbase.com/docs/chai/features/p

## How Chai works

Chai builds agents based on AI primitives by Langbase. It makes building and deploying agents super easy. There are many features that make Chai the best agent coding tool.

### AI Primitives

At Langbase (the company behind Chai), we believe AI primitives are the best way to build AI agents, not frameworks. AI primitives are a set of tools to perform common AI-related tasks such as creating embeddings, chunking text data, creating agents, creating memory agents, and so on.

- **[Agents](/agent)** - for reasoning and planning
- **[Tools](/tools)** - for taking actions in the world
- **[Memory](/memory)** - for human-like context & learning
- **[Workflows](/workflow)** - for orchestrating complex tasks

### 1-Click Deployments

Deployments are 1-click with Chai. Agents can be instantly deployed and used. Chai removes the complexity of servers and scaling. Focus on what to build; Chai will handle the logic and logistics.

Place your API keys under environment variables and begin deployment. If you previously added API keys, they will be auto-imported.

Deployments in Chai

### Environment variables

Environment variables are key-value pairs stored outside your code, used to configure your app's behavior without hardcoding values. They typically store API keys and other sensitive information.

Sensitive data like API keys from LLM providers should not be in your codebase. To help with this, Chai lets you securely add API keys as environment variables. Your agent won't work properly without the correct key for the LLM you're using. There are over 600 LLMs that Chai supports. When in doubt, prompt Chai to use Claude, Mistral, or Llama.

Environment variables in Chai

### App Mode

Every agent built with Chai has an App. This is the look and feel of your agent. It's also the gateway to your agent. By default, Chai designs the App to be user-friendly based on the agent type. However, you can fully customize the UI by prompting Chai while in the App mode.

- **App mode**: To control the visual experience and layout of your agent.
- **Agent mode**: To control the logic and functionality of your agent.

Agent & App Modes in Chai

You can also view your agent in full screen by using the URL from the **App** tab.

View agent in full screen

### API

Every Chai agent comes with a ready-to-use Agent API. This means you can interact with your agent programmatically. You can use the API in any programming language to talk to your agent. Integrating the API gives you full control over how your agent looks and interacts with users.

Agent API in Chai

### Version Control

Agents built with Chai are version-controlled natively.
Every prompt creates a new, isolated version of your agent—separate from the previous one. You can revisit any version at any time to see exactly how Chai updated the code in response to your prompt. A "Live" tag marks the version that is currently live in production.

Version Control in Chai

### Forking Agents

Just like GitHub repositories, agents can be forked. Forking allows you to copy someone else's agent and make it your own. Only public agents can be forked. Once duplicated, you can rename, modify, or manage the agent independently from the original agent. You can also fork any agent in your account or organization.

Forking is useful for running experiments, customizing the agent, and collaborating with your team (keeping the original intact).

Forking agent in Chai

- Select Agent Owner
- Change name, description and confirm "Create Fork"

Forking agent in Chai

### Shareability of agents

There are two shareability modes for agents: private and public. Chai allows you to configure who can access the agent, its chat history, and UI.

- **Private**: Only you and members of your organization can see the agent
- **Public**: Anyone can access the agent and its UI, but your API keys and data remain secure

Configure who has access to agent

### Flow diagrams

To help you understand the agent flow, Chai automatically generates visual diagrams. These diagrams give you a clear view of the agent's logic and decision-making flow—so you're not left guessing what happens under the hood. Diagrams highlight key steps, tools, branches, and conditions involved in your agent's behavior.

Diagrams help you understand how your agent processes inputs and makes decisions. They update in real time as your agent evolves, ensuring your mental model always matches the current logic.

Flow diagram

## Troubleshooting

**My agent works, but the UI looks broken. What should I check?**

To fix UI-related issues, prompt Chai while in App mode with a description of what is broken and what needs to be fixed.

**My agent isn't behaving as expected. What should I do first?**

Prompt Chai to fix it, including any error you're seeing.

**How do I run this agent on my local machine?**

Chai builds its agents with AI primitives. The Langbase SDK and API are two ways to use these primitives. To run your agent on your local machine, you will need the SDK.

**How to self-host a Chai agent on my own cloud?**

Chai gives you all the code: take it to your cloud, install the necessary dependencies, and that's all.

Experiments
https://langbase.com/docs/features/experiments/
import { generateMetadata } from '@/lib/generate-metadata';

# Experiments

Experiments help you learn how your latest Pipe config will affect the LLM response by running it against your previous five `generate` requests. They are only available for `Generate` Pipes.

Experiments contain a `Previous Completions` column showing the results of your past five runs. When you run an Experiment, the Pipe executes the most recent Pipe configuration against the last `generate` request prompts and shows the outcomes in the `New Completions` column.

One example can be changing the Pipe's LLM model to `gemma-7b-it` from `gpt-4-turbo-preview` to see how the response will look.

Chai Limits
https://langbase.com/docs/chai/limits/
import { generateMetadata } from '@/lib/generate-metadata';

# Chai Limits

To ensure stability and speed, and to prevent misuse, we have set certain limits on how much a user or organization can use [Chai](https://Chai.new).
The following limits are enforced on Chai based on your subscribed plan.

---

## Message Limits

Message limits are enforced every time Chai is prompted to create agents. Following are the monthly message credit limits on Chai:

| Plan       | Message Credits                                   |
|------------|---------------------------------------------------|
| Hobby      | 20                                                |
| Pro        | Based on credits selected `100/200/500/1000/4000` |
| Teams      | Based on credits selected `100/200/500/1000/4000` |
| Enterprise | [Contact Us][contact-us]                          |

Each time you prompt Chai, it intelligently sends 1–3 messages to create your agent, its UI, workflow, and more. Every message generated from your prompt is counted as one message credit.

**Example 1**

You send the first prompt to Chai to create an Agent. It results in 3 messages to create the agent, its UI, flow, and more. We count it as 3 messages.

**Example 2**

You send a regeneration request in the Agent app. It results in 1 message to create the agent. We count it as 1 message.

---

## Deployment Limits

Deployment limits are enforced every time you deploy an agent on Chai. Following are the monthly deployment limits on Chai:

| Plan       | Deployments                                       |
|------------|---------------------------------------------------|
| Hobby      | 10                                                |
| Pro        | Based on credits selected `100/200/500/1000/4000` |
| Teams      | Based on credits selected `100/200/500/1000/4000` |
| Enterprise | [Contact Us][contact-us]                          |

---

## Deployed Agent Limits

Agents built with Chai are deployed on Langbase. As such, there are some platform limits. These apply to both the agent and the agent app, regardless of the pricing plan. Following are the deployed agent limits on Chai:

| Type of Limit  | Limit              |
|----------------|--------------------|
| CPU time       | 10 ms              |
| Workflow Code  | JavaScript Runtime |

**CPU Time**: This refers to the execution time of your agent's code.

**Workflow Code**: This refers to the code that the platform can execute. Agents can only run JavaScript functions and libraries.

---

### About Limits

- If you exceed the usage limit, you will receive an error message indicating that you have exceeded your monthly message limit. You can either wait until the next month for the limit to reset or upgrade your plan to increase your limits.
- Usage limits are applied on a per-user or per-organization basis. For organizations, all runs made by the organization are collectively restricted within a single limit window.

[contact-us]: mailto:support@langbase.com

---

Workflow Example
https://langbase.com/docs/examples/workflow/
import { ExampleAccordion } from '@/components/mdx/example-card';
import { generateMetadata } from '@/lib/generate-metadata';

# Workflow Example

Here are some examples of how to use the Langbase Workflow functionality. These examples are designed to help you get started with workflow using Langbase.

---

Tools Examples
https://langbase.com/docs/examples/tools/
import { ExampleAccordion } from '@/components/mdx/example-card';
import { generateMetadata } from '@/lib/generate-metadata';

# Tools Examples

Here are some examples of how to use the Langbase Tools. These examples are designed to help you get started with tools using Langbase.

---

Threads Examples
https://langbase.com/docs/examples/threads/
import { ExampleAccordion } from '@/components/mdx/example-card';
import { generateMetadata } from '@/lib/generate-metadata';

# Threads Examples

Here are some examples of how to use the Langbase Threads. These examples are designed to help you get started with threads using Langbase.
--- Pipe Agent Examples https://langbase.com/docs/examples/pipe-agent/ import { ExampleAccordion } from '@/components/mdx/example-card'; import { generateMetadata } from '@/lib/generate-metadata'; # Pipe Agent Examples Here are some examples of how to use the Langbase Pipe Agent. These examples are designed to help you get started with the Langbase Pipe Agent. --- Parser Example https://langbase.com/docs/examples/parser/ import { generateMetadata } from '@/lib/generate-metadata'; import { ExampleAccordion } from '@/components/mdx/example-card'; # Parser Example Here are some examples of how to use the Langbase parser functionality. These examples are designed to help you get started with parsing documents using Langbase. --- Memory Examples https://langbase.com/docs/examples/memory/ import { ExampleAccordion } from '@/components/mdx/example-card'; import { generateMetadata } from '@/lib/generate-metadata'; # Memory Examples Here are some examples of how to use the Langbase Memory. These examples are designed to help you get started with memory using Langbase. --- Use internet search with AI Agent https://langbase.com/docs/examples/internet-research-tool/ import { generateMetadata } from '@/lib/generate-metadata'; # Use internet search with AI Agent --- In this guide, you will: - **Create a pipe**: Create an AI agent pipe using Langbase SDK. - **Define a tool**: Define a `searchInternet` tool function. - **Run the AI Agent**: Execute the pipe to search the web using tool function. --- ## Pre-requisites - **Langbase API Key**: A [Langbase API key](/api-reference/api-keys) to authenticate your requests with Langbase. - **Exa API Key**: An [Exa API key](https://dashboard.exa.ai/api-keys) to authenticate your requests with Exa. --- ## Step 0: Setup your project ```bash mkdir ai-search-agent && cd ai-search-agent ``` ### Initialize the project ```bash {{ title: 'npm' }} npm init -y ``` ```bash {{ title: 'pnpm' }} pnpm init ``` ```bash {{ title: 'yarn' }} yarn init -y ``` ### Install dependencies ```bash {{ title: 'npm' }} npm i langbase exa-js dotenv ``` ```bash {{ title: 'pnpm' }} pnpm add langbase exa-js dotenv ``` ```bash {{ title: 'yarn' }} yarn add langbase exa-js dotenv ``` ### Install dev dependencies ```bash {{ title: 'npm' }} npm i -D @types/node ``` ```bash {{ title: 'pnpm' }} pnpm add -D @types/node ``` ```bash {{ title: 'yarn' }} yarn add -D @types/node ``` ### Create a `.env` file Create a `.env` file in the root of your project and add the following environment variables: ```bash LANGBASE_API_KEY="YOUR_API_KEY" EXA_API_KEY="YOUR_EXA_API_KEY" ``` Replace `YOUR_API_KEY` and `YOUR_EXA_API_KEY` with your Langbase and Exa API keys respectively. --- ## Step 1: Create a new pipe Create a new file named `create-pipe.ts` and add the following code to create a new pipe using [`langbase.pipes.create`](/sdk/pipe/create) function from Langbase SDK: ```js import 'dotenv/config'; import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); (async () => { // Create a pipe await langbase.pipes.create({ name: `internet-research-agent`, description: `An AI search agent powered by Langbase and Exa`, messages: [{ role: `system`, content: `You are a research assistant. 
Your job is to take a query, search for relevant content on the web using the provided domain, and then answer user's questions.`,
		}],
	});
})();
```

---

## Step 2: Add Exa to pipe

In this step, we will add Exa to our pipe as a tool in the `create-pipe.ts` file:

```js
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

(async () => {
	// Exa as a tool
	const exa = {
		type: `function`,
		function: {
			name: `searchInternet`,
			description: `Tool for running semantic search using the provided domain on web using Exa API`,
			parameters: {
				type: `object`,
				required: [`query`, `domain`],
				properties: {
					query: {
						type: `string`,
						description: `The search query to find relevant content`,
					},
					domain: {
						type: `string`,
						description: `Domain to search content from`,
					},
					numResults: {
						type: `integer`,
						description: `Number of results to return (default: 5)`,
						minimum: 1,
						maximum: 20,
					},
					useAutoprompt: {
						type: `boolean`,
						description: `Type of search to perform (default: neural)`,
						default: false,
					},
				},
			},
		},
	};

	// Create a pipe
	await langbase.pipes.create({
		name: `internet-research-agent`,
		description: `An AI search agent powered by Langbase and Exa`,
		messages: [{
			role: `system`,
			content: `You are a research assistant. Your job is to take a query, search for relevant content on the web using the provided domain, and then answer user's questions.`,
		}],
		tools: [exa],
	});
})();
```

Now run the above file to create the `internet-research-agent` pipe using the following command:

```bash
npx tsx create-pipe.ts
```

---

## Step 3: Integrate Exa with pipe

Create a new `exa.ts` file to define the `searchInternet` tool function. The model will use this function to execute web searches using the Exa SDK.

```js
import 'dotenv/config';
import Exa from 'exa-js';

// Exa search parameters
interface ExaSearchParams {
	query: string;
	domain: string;
	numResults?: number;
	useAutoprompt?: boolean;
}

// Exa search result
interface SearchResult {
	title: string | null;
	url: string;
	text: string;
	publishedDate?: string;
	highlights?: string[];
}

export async function searchInternet({
	query,
	domain,
	numResults,
	useAutoprompt,
}: ExaSearchParams) {
	try {
		const exa = new Exa(process.env.EXA_API_KEY);

		const searchResponse = await exa.searchAndContents(query, {
			text: true,
			highlight: true,
			type: `keyword`,
			includeDomains: [domain],
			numResults: numResults || 5,
			useAutoprompt: useAutoprompt || false,
		});

		// Transform search results into a formatted string for LLM
		const content = searchResponse.results.map(
			(result: SearchResult, index: number) => {
				// 1. Title
				let formattedResult = `${index + 1}. ${result.title}\n`;

				// 2. URL
				formattedResult += `URL: ${result.url}\n`;

				// 3. Published date
				if (result.publishedDate) {
					formattedResult += `Published: ${result.publishedDate}\n`;
				}

				// 4. Content and highlights
				formattedResult += `Content: ${result.text}\n`;

				if (result.highlights?.length) {
					formattedResult += `Relevant excerpts:\n`;
					result.highlights.forEach((highlight) => {
						formattedResult += `- ${highlight.trim()}\n`;
					});
				}

				// 5.
Return the formatted result return formattedResult; } ); return content.join(`\n\n`); } catch (error: any) { console.error(`Error performing search:`, error.message); return `Error performing search: ${error.message}`; } } ``` --- ## Step 4: Run the pipe Finally, let's create `ai-search-agent.ts` file and add the following code to run `internet-research-agent` pipe: ```js import { Langbase, Message, Runner, getRunner, getToolsFromStream } from 'langbase'; import { searchInternet } from './exa'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); const tools = { searchInternet }; (async () => { // Run the pipe const { stream, threadId } = await langbase.pipes.run({ name: `internet-research-agent`, stream: true, messages: [{ role: `user`, content: `Explain the concept of Pipes from Langbase.com` }] }); const toolCalls = await getToolsFromStream(stream); const hasToolCalls = toolCalls.length > 0; let runner: Runner; if (hasToolCalls) { const messages: Message[] = []; // Call all the functions in the tool_calls array for (const toolCall of toolCalls) { const toolName = toolCall.function.name; const toolParameters = JSON.parse(toolCall.function.arguments); const toolFunction = tools[toolName]; // Execute the function const toolResult = await toolFunction(toolParameters); messages.push({ tool_call_id: toolCall.id, // Required: id of the tool call role: `tool`, // Required: role of the message name: toolName, // Required: name of the tool content: toolResult // Required: response of the tool }); } const { stream: newStream } = await langbase.pipes.run({ messages, threadId: threadId!, name: `internet-research-agent`, stream: true }); runner = getRunner(newStream); } else { runner = getRunner(stream); } runner.on(`content`, content => { process.stdout.write(content); }); })(); ``` Once done, let's run our AI search agent: ```bash npx tsx ai-search-agent.ts ``` Here is a sample LLM response: ```markdown The concept of "Pipes" from Langbase.com revolves around creating custom-built AI agents that serve as APIs. These Pipes allow developers to build AI features and applications quickly, without needing to manage servers or infrastructure. Here are the key aspects of Pipes: 1. **Definition**: Pipes function as a high-level layer on top of Large Language Models (LLMs), enabling the creation of personalized AI assistants tailored to specific queries and prompts. 2. **Core Components**: - **Prompt**: Involves prompt engineering and orchestration. - **Instructions**: Provides instruction training using few-shot learning, personas, and character definitions. - **Personalization**: Integrates knowledge bases and variables while managing safety to mitigate hallucinations in responses. - **Engine**: Supports experiments and evaluations, allowing for API engine integration and governance. 3. **Streaming and Storage**: Pipes can stream responses in real-time and can store messages (prompts and completions) if configured to do so. This feature ensures privacy by limiting the storage of certain messages. 4. **Safety Features**: Includes moderation capabilities for harmful content, particularly for OpenAI models, and defines safety prompts to restrict LLM responses to relevant contexts. 5. **Variables**: Pipes support dynamic prompts using variables, which can be defined and populated during execution, enhancing the flexibility and interactivity of AI responses. 6. **Open Pipes**: Langbase allows users to create "Open Pipes" that can be shared publicly. 
These pipes can be forked by other users, promoting collaboration and community engagement.

7. **API Integration**: Pipes can connect any LLM to various datasets and workflows, enabling developers to create tailored applications efficiently.

The overall goal of Pipes is to simplify the process of building and deploying AI applications by providing a robust framework that handles various aspects of AI interaction and customization. For more detailed information, you can refer to the official documentation on Langbase's website.
```

---

## Next Steps

- Build something cool with Langbase.
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

Embed Example
https://langbase.com/docs/examples/embed/
import { ExampleAccordion } from '@/components/mdx/example-card';
import { generateMetadata } from '@/lib/generate-metadata';

# Embed Example

Here are some examples of how to use Langbase to generate embeddings for a given text. These examples are designed to help you get started with embedding documents using Langbase.

---

Chunker Example
https://langbase.com/docs/examples/chunker/
import { ExampleAccordion } from '@/components/mdx/example-card';
import { generateMetadata } from '@/lib/generate-metadata';

# Chunker Example

Here are some examples of how to use the Langbase chunker functionality. These examples are designed to help you get started with chunking documents using Langbase.

---

Build RAG AI Agents with TypeScript
https://langbase.com/docs/examples/build-agentic-rag/
import { generateMetadata } from '@/lib/generate-metadata';

# Build RAG AI Agents with TypeScript

### A step-by-step guide to building an agentic RAG system with TypeScript using the Langbase SDK.

---

In this guide, you will build an agentic RAG system. You will:

- Create an agentic AI memory
- Use custom embedding models
- Add documents to AI memory
- Perform RAG retrieval against a query
- Generate comprehensive responses using LLMs

---

You will build a basic Node.js app in TypeScript that uses the Langbase SDK to create an agentic RAG system. Let's get started.

---

## Step 0: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir agentic-rag && cd agentic-rag
```

### Initialize the project

Initialize a Node.js project and create the TypeScript files.

```bash {{ title: 'npm' }}
npm init -y && touch index.ts agents.ts create-memory.ts upload-docs.ts create-pipe.ts
```

```bash {{ title: 'pnpm' }}
pnpm init && touch index.ts agents.ts create-memory.ts upload-docs.ts create-pipe.ts
```

```bash {{ title: 'yarn' }}
yarn init -y && touch index.ts agents.ts create-memory.ts upload-docs.ts create-pipe.ts
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to create memory agents and `dotenv` to manage environment variables. So, let's install these dependencies.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

## Step 1: Get Langbase API Key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not have one, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

## Step 2: Create an env file

Create an `.env` file in the root of your project and add your Langbase API key.

```bash {{ title: '.env' }}
LANGBASE_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your Langbase API key.

## Step 3: Add LLM API keys

If you have set up LLM API keys in your profile, the AI memory and agent pipe will automatically use them. Otherwise, navigate to the [LLM API keys](https://langbase.com/settings/llm-keys) page and add keys for different providers like OpenAI, Anthropic, etc.

You can add LLM API keys in your account using [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `LLM API keys` link.
4. From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

## Step 4: Create an agentic AI memory

In this step, you will create an AI memory using the Langbase SDK. Go ahead and add the following code to the `create-memory.ts` file.

```ts {{ title: 'create-memory.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const memory = await langbase.memories.create({
		name: 'knowledge-base',
		description: 'An AI memory for agentic memory workshop',
		embedding_model: 'openai:text-embedding-3-large'
	});

	console.log('AI Memory:', memory);
}

main();
```

Let's take a look at what is happening in this code:

- Import the `dotenv` package to load environment variables.
- Import the `Langbase` class from the `langbase` package.
- Create a new instance of the `Langbase` class with your API key.
- Use the `memories.create` method to create a new AI memory.
- Set the name and description of the memory.
- Use the `openai:text-embedding-3-large` model for embedding.
- Log the created memory to the console.

Let's create the agentic memory by running the `create-memory.ts` file.

```bash {{ title: 'npm' }}
npx tsx create-memory.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx create-memory.ts
```

This will create an AI memory and log the memory details to the console.

## Step 5: Add documents to AI memory

In this step, you will add documents to the AI memory you created in the previous step.

Click on the following buttons to download sample documents.

Once the sample docs are downloaded, create a `docs` directory in your project and move the downloaded documents to this directory.

Now go ahead and add the following code to the `upload-docs.ts` file.

```ts {{ title: 'upload-docs.ts' }}
import 'dotenv/config';
import { Langbase } from 'langbase';
import { readFile } from 'fs/promises';
import path from 'path';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const cwd = process.cwd();
	const memoryName = 'knowledge-base';

	// Upload agent architecture document
	const agentArchitecture = await readFile(path.join(cwd, 'docs', 'agent-architectures.txt'));

	const agentResult = await langbase.memories.documents.upload({
		memoryName,
		contentType: 'text/plain',
		documentName: 'agent-architectures.txt',
		document: agentArchitecture,
		meta: {
			category: 'Examples',
			topic: 'Agent architecture',
		},
	});

	console.log(agentResult.ok ?
---

Build RAG AI Agents with TypeScript https://langbase.com/docs/examples/build-agentic-rag/
import { generateMetadata } from '@/lib/generate-metadata';

# Build RAG AI Agents with TypeScript

### A step-by-step guide to building an agentic RAG system with TypeScript using the Langbase SDK.

---

In this guide, you will build an agentic RAG system. You will:

- Create an agentic AI memory
- Use custom embedding models
- Add documents to AI memory
- Perform RAG retrieval against a query
- Generate comprehensive responses using LLMs

---

You will build a basic Node.js app in TypeScript that uses the Langbase SDK to create an agentic RAG system. Let's get started.

---

## Step 0: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir agentic-rag && cd agentic-rag
```

### Initialize the project

Initialize a Node.js project and create the TypeScript files.

```bash {{ title: 'npm' }}
npm init -y && touch index.ts agents.ts create-memory.ts upload-docs.ts create-pipe.ts
```

```bash {{ title: 'pnpm' }}
pnpm init && touch index.ts agents.ts create-memory.ts upload-docs.ts create-pipe.ts
```

```bash {{ title: 'yarn' }}
yarn init -y && touch index.ts agents.ts create-memory.ts upload-docs.ts create-pipe.ts
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to create memory agents and `dotenv` to manage environment variables. So, let's install these dependencies.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

## Step 1: Get Langbase API Key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

## Step 2: Create an `.env` file

Create an `.env` file in the root of your project and add your Langbase API key.

```bash {{ title: '.env' }}
LANGBASE_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your Langbase API key.

## Step 3: Add LLM API keys

If you have set up LLM API keys in your profile, the AI memory and agent pipe will automatically use them. Otherwise, navigate to the [LLM API keys](https://langbase.com/settings/llm-keys) page and add keys for different providers like OpenAI, Anthropic, etc.

You can add LLM API keys in your account using [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `LLM API keys` link.
4. From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

## Step 4: Create an agentic AI memory

In this step, you will create an AI memory using the Langbase SDK. Go ahead and add the following code to the `create-memory.ts` file.

```ts {{ title: 'create-memory.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const memory = await langbase.memories.create({
		name: 'knowledge-base',
		description: 'An AI memory for agentic memory workshop',
		embedding_model: 'openai:text-embedding-3-large'
	});

	console.log('AI Memory:', memory);
}

main();
```

Let's take a look at what is happening in this code:

- Import the `dotenv` package to load environment variables.
- Import the `Langbase` class from the `langbase` package.
- Create a new instance of the `Langbase` class with your API key.
- Use the `memories.create` method to create a new AI memory.
- Set the name and description of the memory.
- Use the `openai:text-embedding-3-large` model for embedding.
- Log the created memory to the console.

Let's create the agentic memory by running the `create-memory.ts` file.

```bash {{ title: 'npm' }}
npx tsx create-memory.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx create-memory.ts
```

This will create an AI memory and log the memory details to the console.

## Step 5: Add documents to AI memory

In this step, you will add documents to the AI memory you created in the previous step.

Click on the following buttons to download the sample documents. Once the sample docs are downloaded, create a `docs` directory in your project and move the downloaded documents to this directory.

Now go ahead and add the following code to the `upload-docs.ts` file.

```ts {{ title: 'upload-docs.ts' }}
import 'dotenv/config';
import { Langbase } from 'langbase';
import { readFile } from 'fs/promises';
import path from 'path';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const cwd = process.cwd();
	const memoryName = 'knowledge-base';

	// Upload agent architecture document
	const agentArchitecture = await readFile(path.join(cwd, 'docs', 'agent-architectures.txt'));

	const agentResult = await langbase.memories.documents.upload({
		memoryName,
		contentType: 'text/plain',
		documentName: 'agent-architectures.txt',
		document: agentArchitecture,
		meta: {
			category: 'Examples',
			topic: 'Agent architecture'
		},
	});

	console.log(agentResult.ok ? '✓ Agent doc uploaded' : '✗ Agent doc failed');

	// Upload FAQ document
	const langbaseFaq = await readFile(path.join(cwd, 'docs', 'langbase-faq.txt'));

	const faqResult = await langbase.memories.documents.upload({
		memoryName,
		contentType: 'text/plain',
		documentName: 'langbase-faq.txt',
		document: langbaseFaq,
		meta: {
			category: 'Support',
			topic: 'Langbase FAQs'
		},
	});

	console.log(faqResult.ok ? '✓ FAQ doc uploaded' : '✗ FAQ doc failed');
}

main();
```

Let's break down the above code:

- Import the `readFile` function from the `fs/promises` module to read files asynchronously.
- Import the `path` module to work with file paths.
- Use the `memories.documents.upload` method to upload documents to the AI memory.
- Log the result of the document upload to the console.
- Upload the `agent-architectures.txt` and `langbase-faq.txt` documents to the AI memory.

Run the `upload-docs.ts` file to upload the documents to the AI memory.

```bash {{ title: 'npm' }}
npx tsx upload-docs.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx upload-docs.ts
```

This will upload the documents to the AI memory.

## Step 6: Perform RAG retrieval

In this step, you will perform RAG retrieval against a query. Add the following code to the `agents.ts` file.

```ts {{ title: 'agents.ts' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

export async function runMemoryAgent(query: string) {
	const chunks = await langbase.memories.retrieve({
		query,
		topK: 4,
		memory: [
			{
				name: 'knowledge-base',
			},
		],
	});

	return chunks;
}
```

Let's break down the above code:

- Import the `Langbase` class from the `langbase` package.
- Create a function `runMemoryAgent` that takes a query as input.
- Use the `memories.retrieve` method to perform RAG retrieval against the query.
- Retrieve the top 4 chunks from the agentic AI memory.
- Return the retrieved chunks.

Now let's add the following code to the `index.ts` file to run the memory agent.

```ts {{ title: 'index.ts' }}
import { runMemoryAgent } from './agents';

async function main() {
	const chunks = await runMemoryAgent('What is agent parallelization?');
	console.log('Memory chunk:', chunks);
}

main();
```

Now run the `index.ts` file to perform RAG retrieval against the query.

```bash {{ title: 'npm' }}
npx tsx index.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx index.ts
```

You will see the retrieved memory chunks in the console.

```js {{ title: 'Memory agent output' }}
[ { text: '---\n' + '\n' + '## Agent Parallelization\n' + '\n' + 'Parallelization runs multiple LLM tasks at the same time to improve speed or accuracy. It works by splitting a task into independent parts (sectioning) or generating multiple responses for comparison (voting).\n' + '\n' + 'Voting is a parallelization method where multiple LLM calls generate different responses for the same task. The best result is selected based on agreement, predefined rules, or quality evaluation, improving accuracy and reliability.\n' + '\n' + "`This code implements an email analysis system that processes incoming emails through multiple parallel AI agents to determine if and how they should be handled. 
Here's the breakdown:", similarity: 0.7146744132041931, meta: { docName: 'agent-architectures.txt', documentName: 'agent-architectures.txt', category: 'Examples', topic: 'Agent architecture' } }, { text: 'async function main(inputText: string) {\n' + '\ttry {\n' + '\t\t// Create pipes first\n' + '\t\tawait createPipes();\n' + '\n' + '\t\t// Step A: Determine which agent to route to\n' + '\t\tconst route = await routerAgent(inputText);\n' + "\t\tconsole.log('Router decision:', route);\n" + '\n' + '\t\t// Step B: Call the appropriate agent\n' + '\t\tconst agent = agentConfigs[route.agent];\n' + '\n' + '\t\tconst response = await langbase.pipes.run({\n' + '\t\t\tstream: false,\n' + '\t\t\tname: agent.name,\n' + '\t\t\tmessages: [\n' + "\t\t\t\t{ role: 'user', content: `${agent.prompt} ${inputText}` }\n" + '\t\t\t]\n' + '\t\t});\n' + '\n' + '\t\t// Final output\n' + '\t\tconsole.log(\n' + '\t\t\t`Agent: ${agent.name} \\n\\n Response: ${response.completion}`\n' + '\t\t);\n' + '\t} catch (error) {\n' + "\t\tconsole.error('Error in main workflow:', error);\n" + '\t}\n' + '}\n' + '\n' + '// Example usage:\n' + "const inputText = 'Why days are shorter in winter?';\n" + '\n' + 'main(inputText);\n' + '```\n' + '\n' + '\n' + '---\n' + '\n' + '## Agent Parallelization\n' + '\n' + 'Parallelization runs multiple LLM tasks at the same time to improve speed or accuracy. It works by splitting a task into independent parts (sectioning) or generating multiple responses for comparison (voting).', similarity: 0.5911030173301697, meta: { docName: 'agent-architectures.txt', documentName: 'agent-architectures.txt', category: 'Examples', topic: 'Agent architecture' } }, { text: "`This code implements a sophisticated task orchestration system with dynamic subtask generation and parallel processing. Here's how it works:\n" + '\n' + '1. Orchestrator Agent (Planning Phase):\n' + ' - Takes a complex task as input\n' + ' - Analyzes the task and breaks it down into smaller, manageable subtasks\n' + ' - Returns both an analysis and a list of subtasks in JSON format\n' + '\n' + '2. Worker Agents (Execution Phase):\n' + ' - Multiple workers run in parallel using Promise.all()\n' + ' - Each worker gets:\n' + ' - The original task for context\n' + ' - Their specific subtask to complete\n' + ' - All workers use Gemini 2.0 Flash model\n' + '\n' + '3. Synthesizer Agent (Integration Phase):\n' + ' - Takes all the worker outputs\n' + ' - Combines them into a cohesive final result\n' + ' - Ensures the pieces flow together naturally', similarity: 0.5393730401992798, meta: { docName: 'agent-architectures.txt', documentName: 'agent-architectures.txt', category: 'Examples', topic: 'Agent architecture' } }, { text: "`This code implements an email analysis system that processes incoming emails through multiple parallel AI agents to determine if and how they should be handled. Here's the breakdown:\n" + '\n' + '1. Three Specialized Agents running in parallel:\n' + ' - Sentiment Analysis Agent: Determines if the email tone is positive, negative, or neutral\n' + ' - Summary Agent: Creates a concise summary of the email content\n' + ' - Decision Maker Agent: Takes the outputs from the other agents and decides:\n' + ' - If the email needs a response\n' + " - Whether it's spam\n" + ' - Priority level (low, medium, high, urgent)\n' + '\n' + '2. 
The workflow:\n' + ' - Takes an email input\n' + ' - Runs sentiment analysis and summary generation in parallel using Promise.all()\n' + ' - Feeds those results to the decision maker agent\n' + ' - Outputs a final decision object with response requirements\n' + '\n' + '3. All agents use Gemini 2.0 Flash model and are structured to return parsed JSON responses', similarity: 0.49115753173828125, meta: { docName: 'agent-architectures.txt', documentName: 'agent-architectures.txt', category: 'Examples', topic: 'Agent architecture' } } ] ``` ## Step 7: Create support pipe agent In this step, you will create a support agent using the Langbase SDK. Go ahead and add the following code to the `create-pipe.ts` file. ```ts {{ title: 'create-pipe.ts' }} import 'dotenv/config'; import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const supportAgent = await langbase.pipes.create({ name: `ai-support-agent`, description: `An AI agent to support users with their queries.`, messages: [ { role: `system`, content: `You're a helpful AI assistant. You will assist users with their queries. Always ensure that you provide accurate and to the point information.`, }, ], }); console.log('Support agent:', supportAgent); } main(); ``` Let's go through the above code: - Initialize the Langbase SDK with your API key. - Use the `pipes.create` method to create a new pipe agent. - Log the created pipe agent to the console. Now run the `create-pipe.ts` file to create the pipe agent. ```bash {{ title: 'npm' }} npx tsx create-pipe.ts ``` ```bash {{ title: 'pnpm' }} pnpm dlx tsx create-pipe.ts ``` This will create a support agent and log the agent details to the console. ## Step 8: Generate RAG responses In this step, you will generate comprehensive responses using LLMs. Add the following code to the `agents.ts` file. ```ts {{ title: 'agents.ts' }} import 'dotenv/config'; import { Langbase, MemoryRetrieveResponse } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); export async function runAiSupportAgent({ chunks, query, }: { chunks: MemoryRetrieveResponse[]; query: string; }) { const systemPrompt = await getSystemPrompt(chunks); const { completion } = await langbase.pipes.run({ stream: false, name: 'ai-support-agent', messages: [ { role: 'system', content: systemPrompt, }, { role: 'user', content: query, }, ], }); return completion; } async function getSystemPrompt(chunks: MemoryRetrieveResponse[]) { let chunksText = ''; for (const chunk of chunks) { chunksText += chunk.text + '\n'; } const systemPrompt = ` You're a helpful AI assistant. You will assist users with their queries. Always ensure that you provide accurate and to the point information. Below is some CONTEXT for you to answer the questions. ONLY answer from the CONTEXT. CONTEXT consists of multiple information chunks. Each chunk has a source mentioned at the end. For each piece of response you provide, cite the source in brackets like so: [1]. At the end of the answer, always list each source with its corresponding number and provide the document name. like so [1] Filename.doc. If there is a URL, make it hyperlink on the name. If you don't know the answer, say so. Ask for more context if needed. 
${chunksText}`;

	return systemPrompt;
}

export async function runMemoryAgent(query: string) {
	const chunks = await langbase.memories.retrieve({
		query,
		topK: 4,
		memory: [
			{
				name: 'knowledge-base',
			},
		],
	});

	return chunks;
}
```

Let's break down the above code:

- Create a function `runAiSupportAgent` that takes chunks and a query as input.
- Use the `pipes.run` method to generate responses using the LLM.
- Create a function `getSystemPrompt` to generate a system prompt for the LLM.
- Combine the retrieved chunks to create the system prompt.
- Return the generated completion.

Let's run the support agent with the AI memory chunks. Add the following code to the `index.ts` file.

```ts {{ title: 'index.ts' }}
import { runMemoryAgent, runAiSupportAgent } from './agents';

async function main() {
	const query = 'What is agent parallelization?';
	const chunks = await runMemoryAgent(query);

	const completion = await runAiSupportAgent({
		chunks,
		query,
	});

	console.log('Completion:', completion);
}

main();
```

Let's run the `index.ts` file to generate responses using the LLM.

```bash {{ title: 'npm' }}
npx tsx index.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx index.ts
```

You will see the generated completion in the console.

```md {{ title: 'Support agent output' }}
Completion: Agent parallelization is a process that runs multiple LLM (Language Model) tasks simultaneously to enhance speed or accuracy. This technique can be implemented in two main ways:

1. **Sectioning**: A task is divided into independent parts that can be processed concurrently.
2. **Voting**: Multiple LLM calls generate different responses for the same task, and the best result is selected based on agreement, predefined rules, or quality evaluation. This approach improves accuracy and reliability by comparing various outputs.

In practice, agent parallelization involves orchestrating multiple specialized agents to handle different aspects of a task, allowing for efficient processing and improved outcomes.

If you need more detailed examples or further clarification, feel free to ask!
```

This is how you can build an agentic RAG system with TypeScript using the Langbase SDK.

---

## Next Steps

- Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk).
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

Build AI Email Agents: Composable Multi-agent https://langbase.com/docs/examples/ai-email-agent/
import { generateMetadata } from '@/lib/generate-metadata';

# Build AI Email Agents: Composable Multi-agent

### A step-by-step guide to building a composable multi-agent architecture using the Langbase SDK.

---

In this guide, you will build an AI email agent that uses multiple Langbase agent pipes to:

- **Summarize** an email
- **Analyze** sentiment of the email
- **Decide** whether the email needs a response or not
- **Pick** the tone of the response email
- **Generate** a response email

---

You will build a basic Node.js application that uses the [Langbase SDK](/sdk) to connect to the AI agent pipes and generate responses using **parallelization** and **prompt-chaining** agent architectures.

## Flow reference architecture

There are two flows in the email agent, i.e., the **User Email Flow** and the **Spam Email Flow**.

The **User Email Flow** is a normal email flow where the user sends an email, and the AI agent analyzes the email sentiment, summarizes the email content, decides whether the email needs a response, picks the tone of the response email, and generates the response email.
The **Spam Email Flow** is a spam email flow where the AI agent analyzes the email sentiment, summarizes the email content, and decides that the email does not need a response.

Let's get started!

---

## Step 0: Setup your project

Create a new directory for your project and navigate to it.

```bash
mkdir ai-email-agent && cd ai-email-agent
```

### Initialize the project

Initialize a Node.js project and create the `index.ts` and `agents.ts` files.

```bash {{ title: 'npm' }}
npm init -y && touch index.ts && touch agents.ts
```

```bash {{ title: 'pnpm' }}
pnpm init && touch index.ts && touch agents.ts
```

```bash {{ title: 'yarn' }}
yarn init -y && touch index.ts && touch agents.ts
```

### Install dependencies

You will use the [Langbase SDK](/sdk) to connect to the AI agent pipes and `dotenv` to manage environment variables. So, let's install these dependencies.

```bash {{ title: 'npm' }}
npm i langbase dotenv
```

```bash {{ title: 'pnpm' }}
pnpm add langbase dotenv
```

```bash {{ title: 'yarn' }}
yarn add langbase dotenv
```

## Step 1: Get Langbase API Key

Every request you send to Langbase needs an [API key](/api-reference/api-keys). This guide assumes you already have one. If you do not, please check the instructions below.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

Create an `.env` file in the root of your project and add your Langbase API key.

```bash {{ title: '.env' }}
LANGBASE_API_KEY=xxxxxxxxx
```

Replace `xxxxxxxxx` with your Langbase API key.

## Step 2: Add LLM API keys

If you have set up LLM API keys in your profile, the Pipe will automatically use them. If not, navigate to the [LLM API keys](https://langbase.com/settings/llm-keys) page and add keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

You can add LLM API keys in your account using [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `LLM API keys` link.
4. From here you can add LLM API keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

## Step 3: Fork the AI agent pipes

Fork the following agent pipes needed for the AI email agent in the Langbase dashboard:

1. [Email Sentiment][email-sentiment] → An agent pipe to analyze the sentiment of the incoming email
2. [Summarizer][summarizer] → Summarizes the content of the email and makes it less wordy for you
3. [Decision Maker][decision-maker] → Decides if the email needs a response, the category and priority of the response
4. [Pick Email Writer][pick-email-writer] → An AI agent pipe that picks the tone for writing the response of the email
5. [Email Writer][email-writer] → An agent pipe that will write a response email

## Step 4: Sentiment Analysis

In our first step, you will **analyze** the **email sentiment** using the Langbase AI agent pipe.
Go ahead and add the following code to your `agents.ts` file. The pipe names below match the agent pipes you forked in Step 3; adjust them if you renamed your forks.

```ts {{ title: 'agents.ts' }}
import { Langbase } from 'langbase';
import 'dotenv/config';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

// Sentiment analysis
export const emailSentimentAgent = async (email: string) => {
	const response = await langbase.pipes.run({
		name: 'email-sentiment', // forked in Step 3
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.sentiment;
};
```

Let's take a look at what is happening in this code:

- Initialize the [Langbase SDK](/sdk) with the API key from the environment variables.
- Define an `emailSentimentAgent` function that takes an email and returns its **sentiment analysis**.
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.

Stream the response:

- When it's displayed **directly** to users in the UI.
- This creates a **better user experience** by showing the AI's response being generated in real time.

Do not stream:

- When the AI response is being **processed internally** (e.g., for data analysis, content moderation, or generating metadata).
- Set `stream` to `false` in these cases since real-time display isn't needed.
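To make the difference concrete, here is a minimal sketch of both modes. The pipe name and input are illustrative only; any pipe works.

```ts
import 'dotenv/config';
import { Langbase, getRunner } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

async function main() {
	// Internal processing: stream off, work with the full completion.
	const { completion } = await langbase.pipes.run({
		name: 'summarizer', // illustrative pipe name
		stream: false,
		messages: [{ role: 'user', content: 'Thanks for the quick fix yesterday!' }]
	});
	console.log(completion);

	// User-facing UI: stream on, render tokens as they arrive.
	const { stream } = await langbase.pipes.run({
		name: 'summarizer',
		stream: true,
		messages: [{ role: 'user', content: 'Thanks for the quick fix yesterday!' }]
	});

	const runner = getRunner(stream);
	runner.on('content', (chunk: string) => process.stdout.write(chunk));
}

main();
```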
## Step 5: Summarize Email

Now let's write a function in the same `agents.ts` file to **summarize** the email content.

```ts {{ title: 'agents.ts' }}
// Summarize email
export const emailSummaryAgent = async (email: string) => {
	const response = await langbase.pipes.run({
		name: 'summarizer', // forked in Step 3
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.summary;
};
```

Let's break down the above code:

- Define a function `emailSummaryAgent` that takes an email and returns its **summarized content**.
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.

## Step 6: Decision Maker

You are building a **ReAct based architecture**, which means the system first reasons over the information it has and then decides how to act. In this example, the results of the email sentiment and summary are passed to the decision-making agent pipe to **decide** whether to respond to the email or not.

![ReAct architecture](/docs/react-architecture-2.jpg)

Go ahead and add the following code to your `agents.ts` file:

```ts {{ title: 'agents.ts' }}
// Determine if a response is needed
export const shouldRespondToEmailAgent = async (summary: string, sentiment: string) => {
	const response = await langbase.pipes.run({
		name: 'decision-maker', // forked in Step 3
		stream: false,
		json: true,
		messages: [
			{
				role: 'user',
				content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}`
			}
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.respond;
};
```

Let's go through the above code.

- Define a function `shouldRespondToEmailAgent` that takes the summarized content and sentiment of the email and returns a decision.
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.

## Step 7: Pick Email Writer

In cases where the email needs a response, you will use the **Pick Email Writer** agent pipe to pick the tone of the email response.

```ts {{ title: 'agents.ts' }}
// Pick an email writer
export const pickEmailWriterAgent = async (summary: string, sentiment: string) => {
	const response = await langbase.pipes.run({
		name: 'pick-email-writer', // forked in Step 3
		stream: false,
		json: true,
		messages: [
			{
				role: 'user',
				content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}`
			}
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.tone;
};
```

Here's what you have done in this step:

- Created a function `pickEmailWriterAgent` that uses the **summarized email** and **sentiment** to pick one of the following tones for the response:
	- Professional
	- Formal
	- Informal
	- Casual
	- Friendly
- Set the `json` parameter to `true` to get the response in JSON format.
- Set the `stream` parameter to `false` because the content generation will be processed internally.

## Step 8: Write Email Response

Finally, you will use the **Email Writer** agent pipe to generate the response email based on the tone picked in the previous step.

```ts {{ title: 'agents.ts' }}
// Generate an email reply
export const emailResponseAgent = async (tone: string, summary: string) => {
	const { stream } = await langbase.pipes.run({
		name: 'email-writer', // forked in Step 3
		stream: true,
		messages: [
			{
				role: 'user',
				content: `Write a ${tone} email response for the following email summary: ${summary}`
			}
		]
	});

	return stream;
};
```

Let's take a look at the above code:

- Created a function `emailResponseAgent` that takes the **tone** and **summarized email** and returns the **response email**.
- Set the `stream` parameter to `true` to get the response in stream format, as you will be writing the response to the console.

## Step 9: Final composable multi-agent workflow

Now that you have all these agents, you can combine them in a multi-agent workflow so they can:

1. **analyze** the email
2. **summarize** the email
3. **decide** if it needs a response
4. **pick** the tone of the response
5. **generate** the response email if needed

In the `index.ts` file, let's import all our agents and define a `workflow` function to run with a user email and a spam email.

```ts {{ title: 'index.ts' }}
import { getRunner } from 'langbase';
import {
	emailResponseAgent,
	emailSentimentAgent,
	emailSummaryAgent,
	pickEmailWriterAgent,
	shouldRespondToEmailAgent,
} from './agents';
import { stdout } from 'process';

const workflow = async (emailContent: string) => {
	console.log('Email:', emailContent);

	// Run the sentiment and summary agent pipes in parallel
	const [emailSentiment, emailSummary] = await Promise.all([
		emailSentimentAgent(emailContent),
		emailSummaryAgent(emailContent),
	]);

	console.log('Sentiment:', emailSentiment);
	console.log('Summary:', emailSummary);

	const respond = await shouldRespondToEmailAgent(emailSummary, emailSentiment);
	console.log('Respond:', respond);

	if (!respond) {
		return 'No response needed for this email.';
	}

	const writer = await pickEmailWriterAgent(emailSummary, emailSentiment);
	console.log('Writer:', writer);

	const emailStream = await emailResponseAgent(writer, emailSummary);

	const runner = getRunner(emailStream);
	runner.on('content', (content: string) => {
		stdout.write(content);
	});
};

const userEmail = `I'm really disappointed with the service I received yesterday. The product was faulty and customer support was unhelpful.`;

const spamEmail = `Congratulations! You have been selected as the winner of a $100 million lottery!`;

workflow(userEmail);
```

```ts {{ title: './agents.ts' }}
import { Langbase } from 'langbase';
import 'dotenv/config';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

// Sentiment analysis
export const emailSentimentAgent = async (email: string) => {
	const response = await langbase.pipes.run({
		name: 'email-sentiment',
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.sentiment;
};

// Summarize email
export const emailSummaryAgent = async (email: string) => {
	const response = await langbase.pipes.run({
		name: 'summarizer',
		stream: false,
		json: true,
		messages: [{ role: 'user', content: email }]
	});

	const completion = JSON.parse(response.completion);
	return completion.summary;
};

// Determine if a response is needed
export const shouldRespondToEmailAgent = async (summary: string, sentiment: string) => {
	const response = await langbase.pipes.run({
		name: 'decision-maker',
		stream: false,
		json: true,
		messages: [
			{
				role: 'user',
				content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}`
			}
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.respond;
};

// Pick an email writer
export const pickEmailWriterAgent = async (summary: string, sentiment: string) => {
	const response = await langbase.pipes.run({
		name: 'pick-email-writer',
		stream: false,
		json: true,
		messages: [
			{
				role: 'user',
				content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}`
			}
		]
	});

	const completion = JSON.parse(response.completion);
	return completion.tone;
};

// Generate an email reply
export const emailResponseAgent = async (tone: string, summary: string) => {
	const { stream } = await langbase.pipes.run({
		name: 'email-writer',
		stream: true,
		messages: [
			{
				role: 'user',
				content: `Write a ${tone} email response for the following email summary: ${summary}`
			}
		]
	});

	return stream;
};
```

Here's what you have done in this step:

- Defined a `workflow` function that takes an email's content and runs the email sentiment, summary, decision-making, email writer, and email response agents.
- Used the `getRunner` function to get the stream of the email response and write it to the console.
- Also defined the `emailSentimentAgent`, `emailSummaryAgent`, `shouldRespondToEmailAgent`, `pickEmailWriterAgent`, and `emailResponseAgent` functions to run the respective agents.
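The script above only runs the **User Email Flow**. To exercise the **Spam Email Flow** as well, add a second call at the bottom of `index.ts`; based on the decision-maker agent, the workflow should return early without writing a reply.

```ts
// The decision maker should flag this as spam (respond: false),
// so the workflow returns 'No response needed for this email.'
workflow(spamEmail);
```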
## Step 10: Run the workflow

Run the email agent workflow with the following command in the terminal:

```bash {{ title: 'npm' }}
npx tsx index.ts
```

```bash {{ title: 'pnpm' }}
pnpm dlx tsx index.ts
```

You should see the following output in the console:

```md {{ title: 'Email Agent output' }}
Email: I'm really disappointed with the service I received yesterday. The product was faulty and customer support was unhelpful.

Sentiment: frustrated
Summary: Disappointed with faulty product and unhelpful customer support.
Respond: true
Writer: formal

Subject: Re: Concern Regarding Faulty Product and Support Experience

Dear [User's Name],

Thank you for reaching out to us regarding your experience with our product and customer support. I sincerely apologize for the inconvenience you've encountered. We strive to provide high-quality products and exceptional service, and it is disappointing to hear that we did not meet those standards in your case.

Please rest assured that your feedback is taken seriously, and we are committed to resolving this matter promptly. To assist you further, could you please provide details about the specific issues you're facing? This will enable us to address your concerns more effectively.

Thank you for bringing this to our attention, and I look forward to assisting you.

Best regards,

[Your Name]
[Your Position]
[Company Name]
[Contact Information]
```

This is how you can build an AI email agent using Langbase agent pipes.

---

## Next Steps

- Build something cool with Langbase [APIs](/api-reference) and [SDK](/sdk).
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

[cover]: https://raw.githubusercontent.com/LangbaseInc/docs-images/main/examples/ai-email-agent/ai-email-agent.jpg
[email-agent]: https://ai-email-agent.langbase.dev/
[gh-repo]: https://github.com/LangbaseInc/langbase-examples/blob/main/examples/email-agent-node
[local]: http://localhost:3000
[email-sentiment]: https://langbase.com/examples/email-sentiment
[summarizer]: https://langbase.com/examples/summarizer
[decision-maker]: https://langbase.com/examples/decision-maker
[pick-email-writer]: https://langbase.com/examples/pick-email-writer
[email-writer]: https://langbase.com/examples/email-writer

Agent Architectures https://langbase.com/docs/examples/agent-architectures/
import { generateMetadata } from '@/lib/generate-metadata';

# Agent Architectures

At Langbase, we believe you don't need a framework. Build AI agents without any frameworks. We’ll cover several different agent architectures that leverage Langbase to build, deploy, and scale autonomous agents. Define how your agents use LLMs, tools, memory, and durable workflows to process inputs, make decisions, and achieve goals.

---

### Reference agent architectures

1. [Augmented LLM](#augmented-llm-pipe-agent)
2. [Prompt chaining](#prompt-chaining-and-composition)
3. [Agentic Routing](#agent-routing)
4. [Agent Parallelization](#agent-parallelization)
5. [Orchestration workers](#agentic-orchestration-workers)
6. [Evaluator-optimizer](#evaluator-optimizer)
7. [Augmented LLM with Tools](#augmented-llm-with-tools)
8. [Memory Agent](#memory-agent)

---

## Augmented LLM (Pipe Agent)

Langbase Augmented LLM (Pipe Agent) is the fundamental component of an agentic system. It is a Large Language Model (LLM) enhanced with augmentations such as retrieval, tools, and memory.
Our current models can actively utilize these capabilities—generating their own search queries, selecting appropriate tools, and determining what information to retain using memory.

```ts
import dotenv from 'dotenv';
import { Langbase } from 'langbase';

dotenv.config();

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Create a pipe agent
	const summaryAgent = await langbase.pipes.create({
		name: 'summary-agent',
		model: 'google:gemini-2.5-flash',
		messages: [
			{
				role: 'system',
				content: 'You are a helpful assistant that summarizes text.'
			}
		]
	});

	// Run the pipe agent
	const inputText = `Langbase is the most powerful serverless platform for building AI agents with memory. Build, scale, and evaluate AI agents with semantic memory (RAG) and world-class developer experience. We process billions of AI messages tokens daily. Built for every developer, not just AI/ML experts. Compared to complex AI frameworks, Langbase is simple, serverless, and the first composable AI platform`;

	const { completion } = await langbase.pipes.run({
		name: summaryAgent.name,
		stream: false,
		messages: [
			{
				role: 'user',
				content: inputText,
			}
		],
	});

	console.log(completion);
}

main();
```

---

## Prompt chaining and composition

Prompt chaining splits a task into steps, with each LLM call using the previous step's result. It improves accuracy by simplifying each step, making it ideal for structured tasks like content generation and verification.

```ts {{ title: 'prompt-chaining.ts' }}
import dotenv from 'dotenv';
import { Langbase } from 'langbase';

dotenv.config();

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

async function main(inputText: string) {
	// Prompt chaining steps
	const steps = [
		{
			name: 'summary-agent',
			model: 'google:gemini-2.5-flash',
			description: 'summarize the product description into two concise sentences',
			prompt: `Please summarize the following product description into two concise sentences:\n`
		},
		{
			name: 'features-agent',
			model: 'google:gemini-2.5-flash',
			description: 'extract key product features as bullet points',
			prompt: `Based on the following summary, list the key product features as bullet points:\n`
		},
		{
			name: 'marketing-copy-agent',
			model: 'google:gemini-2.5-flash',
			description: 'generate a polished marketing copy using the bullet points',
			prompt: `Using the following bullet points of product features, generate a compelling and refined marketing copy for the product, be precise:\n`
		}
	];

	// Create the pipe agents
	await Promise.all(
		steps.map(step =>
			langbase.pipes.create({
				name: step.name,
				model: step.model,
				messages: [
					{
						role: 'system',
						content: `You are a helpful assistant that can ${step.description}.`
					}
				]
			})
		)
	);

	// Initialize the data with the raw input.
	let data = inputText;

	try {
		// Process each step in the workflow sequentially.
		for (const step of steps) {
			// Call the LLM for the current step.
			const response = await langbase.pipes.run({
				stream: false,
				name: step.name,
				messages: [{ role: 'user', content: `${step.prompt} ${data}` }]
			});

			data = response.completion;
			console.log(`Step: ${step.name} \n\n Response: ${data}`);

			// Gate on summary agent output to ensure it is not too brief.
			// If the summary is less than 10 words, throw an error to stop the workflow.
			if (step.name === 'summary-agent' && data.split(' ').length < 10) {
				throw new Error(
					'Gate triggered for summary agent. Summary is too brief. Exiting workflow.'
				);
			}
		}
	} catch (error) {
		console.error('Error in main workflow:', error);
	}

	// The final refined marketing copy
	console.log('Final Refined Product Marketing Copy:', data);
}

const inputText = `Our new smartwatch is a versatile device featuring a high-resolution display, long-lasting battery life, fitness tracking, and smartphone connectivity. It's designed for everyday use and is water-resistant. With cutting-edge sensors and a sleek design, it's perfect for tech-savvy individuals.`;

main(inputText);
```

---

## Agent Routing

Routing classifies inputs and directs them to specialized LLMs for better accuracy. It helps optimize performance by handling different tasks separately, like sorting queries or assigning models based on complexity.

```ts {{ title: 'routing.ts' }}
import dotenv from 'dotenv';
import { Langbase } from 'langbase';

dotenv.config();

// Initialize Langbase with your API key
const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!
});

// Specialized agent configurations
const agentConfigs = {
	router: {
		name: 'router-agent',
		model: 'google:gemini-2.5-flash',
		prompt: `You are a router agent. Your job is to read the user's request and decide which specialized agent is best suited to handle it.

Below are the agents you can route to:
- summary: Summarizes the text
- reasoning: Analyzes and provides reasoning for the text
- coding: Provides a coding solution for the text

Only respond with valid JSON that includes a single field "agent" whose value must be one of ["summary", "reasoning", "coding"] based on the user's request. For example: {"agent":"summary"}. No additional text or explanation—just the JSON.
`
	},
	summary: {
		name: 'summary-agent',
		model: 'google:gemini-2.5-flash',
		prompt: 'Summarize the following text:\n'
	},
	reasoning: {
		name: 'reasoning-agent',
		model: 'groq:deepseek-r1-distill-llama-70b',
		prompt: 'Analyze and provide reasoning for:\n'
	},
	coding: {
		name: 'coding-agent',
		model: 'anthropic:claude-3-5-sonnet-latest',
		prompt: 'Provide a coding solution for:\n'
	}
};

// Create the router and specialized agent pipes
async function createPipes() {
	// Create the router and all specialized agents
	await Promise.all(
		Object.entries(agentConfigs).map(([key, config]) =>
			langbase.pipes.create({
				name: config.name,
				model: config.model,
				messages: [
					{
						role: 'system',
						content: config.prompt
					}
				]
			})
		)
	);
}

// Router agent
async function routerAgent(inputText: string) {
	const response = await langbase.pipes.run({
		stream: false,
		name: 'router-agent',
		messages: [
			{
				role: 'user',
				content: inputText
			}
		]
	});

	// The router's response should look like: {"agent":"summary"} or {"agent":"reasoning"} or {"agent":"coding"}
	// We parse the completion to extract the agent value
	return JSON.parse(response.completion);
}

async function main(inputText: string) {
	try {
		// Create pipes first
		await createPipes();

		// Step A: Determine which agent to route to
		const route = await routerAgent(inputText);
		console.log('Router decision:', route);

		// Step B: Call the appropriate agent
		const agent = agentConfigs[route.agent];

		const response = await langbase.pipes.run({
			stream: false,
			name: agent.name,
			messages: [
				{ role: 'user', content: `${agent.prompt} ${inputText}` }
			]
		});

		// Final output
		console.log(
			`Agent: ${agent.name} \n\n Response: ${response.completion}`
		);
	} catch (error) {
		console.error('Error in main workflow:', error);
	}
}

// Example usage:
const inputText = 'Why are days shorter in winter?';

main(inputText);
```

---

## Agent Parallelization

Parallelization runs multiple LLM tasks at the
same time to improve speed or accuracy. It works by splitting a task into independent parts (sectioning) or generating multiple responses for comparison (voting). Voting is a parallelization method where multiple LLM calls generate different responses for the same task. The best result is selected based on agreement, predefined rules, or quality evaluation, improving accuracy and reliability. ```ts {{ title: 'parallelization.ts' }} import { Langbase } from 'langbase'; import dotenv from 'dotenv'; dotenv.config(); const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); // Agent configurations const agentConfigs = { sentiment: { name: 'email-sentiment', model: 'google:gemini-2.5-flash', prompt: ` You are a helpful assistant that can analyze the sentiment of the email. Only respond with the sentiment, either "positive", "negative" or "neutral". Do not include any markdown formatting, code blocks, or backticks in your response. The response should be a raw JSON object that can be directly parsed. Example response: { "sentiment": "positive" } ` }, summary: { name: 'email-summary', model: 'google:gemini-2.5-flash', prompt: ` You are a helpful assistant that can summarize the email. Only respond with the summary. Do not include any markdown formatting, code blocks, or backticks in your response. The response should be a raw JSON object that can be directly parsed. Example response: { "summary": "The email is about a product that is not working." } ` }, decisionMaker: { name: 'email-decision-maker', model: 'google:gemini-2.5-flash', prompt: ` You are a decision maker that analyzes and decides if the given email requires a response or not. Make sure to check if the email is spam or not. If the email is spam, then it does not need a response. If it requires a response, based on the email urgency, decide the response date. Also define the response priority. Use following keys and values accordingly - respond: true or false - category: spam or not spam - priority: low, medium, high, urgent Do not include any markdown formatting, code blocks, or backticks in your response. The response should be a raw JSON object that can be directly parsed. 
` } }; // Create all pipes async function createPipes() { await Promise.all( Object.entries(agentConfigs).map(([key, config]) => langbase.pipes.create({ name: config.name, model: config.model, json: true, messages: [ { role: 'system', content: config.prompt } ] }) ) ); } async function main(emailInput: string) { try { // Create pipes first await createPipes(); // Sentiment analysis const emailSentimentAgent = async (email: string) => { const response = await langbase.pipes.run({ name: agentConfigs.sentiment.name, stream: false, messages: [ { role: 'user', content: email } ] }); return JSON.parse(response.completion).sentiment; }; // Summarize email const emailSummaryAgent = async (email: string) => { const response = await langbase.pipes.run({ name: agentConfigs.summary.name, stream: false, messages: [ { role: 'user', content: email } ] }); return JSON.parse(response.completion).summary; }; // Determine if a response is needed const emailDecisionMakerAgent = async ( summary: string, sentiment: string ) => { const response = await langbase.pipes.run({ name: agentConfigs.decisionMaker.name, stream: false, messages: [ { role: 'user', content: `Email summary: ${summary}\nEmail sentiment: ${sentiment}` } ] }); return JSON.parse(response.completion); }; // Parallelize the requests const [emailSentiment, emailSummary] = await Promise.all([ emailSentimentAgent(emailInput), emailSummaryAgent(emailInput) ]); console.log('Email Sentiment:', emailSentiment); console.log('Email Summary:', emailSummary); // aggregator based on the results const emailDecision = await emailDecisionMakerAgent( emailSummary, emailSentiment ); console.log('should respond:', emailDecision.respond); console.log('category:', emailDecision.category); console.log('priority:', emailDecision.priority); return emailDecision; } catch (error) { console.error('Error in main workflow:', error); } } const email = ` Hi John, I'm really disappointed with the service I received yesterday. The product was faulty and customer support was unhelpful. How can I apply for a refund? Thanks, `; main(email); ``` --- ## Agentic Orchestration-workers The orchestrator-workers workflow has a main LLM (orchestrator) that breaks a task into smaller parts and assigns them to worker LLMs. The orchestrator then gathers their results to complete the task, making it useful for complex and unpredictable jobs. ```ts {{ title: 'orchestration-worker.ts' }} import dotenv from 'dotenv'; import { Langbase } from 'langbase'; dotenv.config(); const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); // Agent configurations const agentConfigs = { orchestrator: { name: 'orchestrator', model: 'google:gemini-2.5-flash', prompt: ` You are an orchestrator agent. Analyze the user's task and break it into smaller and distinct subtasks. Return your response in JSON format with: - An analysis of the task extracted from the task. No extra steps like proofreading, summarizing, etc. - A list of subtasks, each with a "description" (what needs to be done). Do not include any markdown formatting, code blocks, or backticks in your response. The response should be a raw JSON object that can be directly parsed. Example response: { "analysis": "The task is to describe benefits and drawbacks of electric cars", "subtasks": [ { "description": "Write about the benefits of electric cars." }, { "description": "Write about the drawbacks of electric cars." 
} ] } ` }, worker: { name: 'worker-agent', model: 'google:gemini-2.5-flash', prompt: ` You are a worker agent working on a specific part of a larger task. You are given a subtask and you need to complete it. You are given the original task and the subtask. You need to complete the subtask. ` }, synthesizer: { name: 'synthesizer-agent', model: 'google:gemini-2.5-flash', prompt: `You are an expert synthesizer agent. Combine the following results into a cohesive final output.` } }; // Create all pipes async function createPipes() { await Promise.all( Object.entries(agentConfigs).map(([key, config]) => langbase.pipes.create({ name: config.name, model: config.model, messages: [ { role: 'system', content: config.prompt } ] }) ) ); } // Main orchestration workflow async function orchestratorAgent(task: string) { try { // Create pipes first await createPipes(); // Step 1: Use the orchestrator LLM to break down the task const orchestrationResults = await langbase.pipes.run({ stream: false, name: agentConfigs.orchestrator.name, messages: [ { role: 'user', content: `Task: ${task}` } ] }); // Parse the orchestrator's JSON response const { analysis, subtasks } = JSON.parse( orchestrationResults.completion ); console.log('Task Analysis:', analysis); console.log('Generated Subtasks:', subtasks); // Step 2: Process each subtask in parallel using worker LLMs const workerAgentsResults = await Promise.all( subtasks.map(async subtask => { const workerAgentResult = await langbase.pipes.run({ stream: false, name: agentConfigs.worker.name, messages: [ { role: 'user', content: ` The original task is: "${task}" Your specific subtask is: "${subtask.description}" Provide a concise response for this subtask. Only respond with the response for the subtask. ` } ] }); return { subtask, result: workerAgentResult.completion }; }) ); // Step 3: Synthesize all results into a final output const synthesizerAgentInput = workerAgentsResults .map( workerResponse => `Subtask Description: ${workerResponse.subtask.description}\n Result:\n${workerResponse.result}` ) .join('\n\n'); const synthesizerAgentResult = await langbase.pipes.run({ stream: false, name: agentConfigs.synthesizer.name, messages: [ { role: 'user', content: `Combine the following results into a complete solution:\n ${synthesizerAgentInput}` } ] }); console.log( 'Final Synthesized Output:\n', synthesizerAgentResult.completion ); return synthesizerAgentResult.completion; } catch (error) { console.error('Error in orchestration workflow:', error); throw error; } } // Example usage async function main() { const task = ` Write a blog post about the benefits of remote work. Include sections on productivity, work-life balance, and environmental impact. `; await orchestratorAgent(task); } main(); ``` --- ## Evaluator-optimizer The evaluator-optimizer workflow uses one LLM to generate a response and another to review and improve it. This cycle repeats until the result meets the desired quality, making it useful for tasks that benefit from iterative refinement. ```ts {{ title: 'evaluator-optimizer.ts' }} import dotenv from 'dotenv'; import { Langbase } from 'langbase'; dotenv.config(); const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); // Agent configurations const agentConfigs = { generator: { name: 'generator-agent', model: 'google:gemini-2.5-flash', prompt: ` You are a skilled assistant tasked with creating or improving a product description. 
Your goal is to produce a concise, engaging, and informative description based on the task and feedback provided. ` }, evaluator: { name: 'evaluator-agent', model: 'google:gemini-2.5-flash', prompt: ` You are an evaluator agent tasked with reviewing a product description. If the description matches the requirements and needs no further changes, respond just with "ACCEPTED" and nothing else. ONLY suggest changes if it is not ACCEPTED. If not accepted, provide constructive feedback on how to improve it based on the original task requirements. ` } }; // Create all pipes async function createPipes() { await Promise.all( Object.entries(agentConfigs).map(([key, config]) => langbase.pipes.create({ name: config.name, model: config.model, messages: [ { role: 'system', content: config.prompt } ] }) ) ); } // Main evaluator-optimizer workflow async function evaluatorOptimizerWorkflow(task: string) { try { // Create pipes first await createPipes(); let solution = ''; // The solution being refined let feedback = ''; // Feedback from the evaluator let iteration = 0; const maxIterations = 5; // Limit to prevent infinite loops while (iteration < maxIterations) { console.log(`\n--- Iteration ${iteration + 1} ---`); // Step 1: Generator creates or refines the solution const generatorResponse = await langbase.pipes.run({ stream: false, name: agentConfigs.generator.name, messages: [ { role: 'user', content: ` Task: ${task} Previous Feedback (if any): ${feedback} Create or refine the product description accordingly. ` } ] }); solution = generatorResponse.completion; console.log('Generated Solution:', solution); // Step 2: Evaluator provides feedback const evaluatorResponse = await langbase.pipes.run({ stream: false, name: agentConfigs.evaluator.name, messages: [ { role: 'user', content: ` Original requirements: "${task}" Current description: "${solution}" Please evaluate it and provide feedback or indicate if it is acceptable. ` } ] }); feedback = evaluatorResponse.completion.trim(); console.log('Evaluator Feedback:', feedback); // Step 3: Check if solution is accepted if (feedback.toUpperCase() === 'ACCEPTED') { console.log('\nFinal Solution Accepted:', solution); return solution; } iteration++; } console.log('\nMax iterations reached. Final Solution:', solution); return solution; } catch (error) { console.error('Error in evaluator-optimizer workflow:', error); throw error; } } // Example usage async function main() { const task = ` Write a product description for an eco-friendly water bottle. The target audience is environmentally conscious millennials. Key features include: plastic-free materials, insulated design, and a lifetime warranty. `; await evaluatorOptimizerWorkflow(task); } main(); ``` --- ## Augmented LLM with Tools Langbase pipe agent is the fundamental component of an agentic system. It is a Large Language Model (LLM) enhanced with augmentations such as retrieval, tools, and memory. Our current models can actively utilize these capabilities—generating their own search queries, selecting appropriate tools, and determining what information to retain using memory. 
```ts
import dotenv from 'dotenv';
import { Langbase, getToolsFromRun } from 'langbase';

dotenv.config();

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Create a pipe agent
	const weatherPipeAgent = await langbase.pipes.create({
		name: 'langbase-weather-pipe-agent',
		model: 'google:gemini-2.5-flash',
		messages: [
			{
				role: 'system',
				content: 'You are a helpful assistant that can get the weather of a given location.',
			}
		]
	});

	// Define a weather tool
	const weatherTool = {
		type: 'function',
		function: {
			name: 'get_current_weather',
			description: 'Get the current weather of a given location',
			parameters: {
				type: 'object',
				required: ['location'],
				properties: {
					unit: {
						enum: ['celsius', 'fahrenheit'],
						type: 'string'
					},
					location: {
						type: 'string',
						description: 'The city and state, e.g. San Francisco, CA'
					}
				}
			}
		}
	};

	function get_current_weather() {
		return "It's 70 degrees and sunny in SF.";
	}

	const tools = {
		get_current_weather,
	};

	// Run the pipe agent with the tool
	const response = await langbase.pipes.run({
		name: weatherPipeAgent.name,
		stream: false,
		messages: [
			{
				role: 'user',
				content: 'What is the weather in San Francisco?',
			}
		],
		tools: [weatherTool]
	});

	const toolsFromRun = await getToolsFromRun(response);
	const hasToolCalls = toolsFromRun.length > 0;

	if (hasToolCalls) {
		const messages = [];

		// Call each tool the model asked for and collect the results
		for (const toolCall of toolsFromRun) {
			const toolName = toolCall.function.name;
			const toolParameters = JSON.parse(toolCall.function.arguments);
			const toolFunction = tools[toolName];

			// Call the tool function with the tool parameters
			const toolResult = toolFunction(toolParameters);

			messages.push({
				tool_call_id: toolCall.id,
				role: 'tool',
				name: toolName,
				content: toolResult,
			});
		}

		// Send the tool results back to the pipe for the final answer
		const { completion } = await langbase.pipes.run({
			messages,
			name: weatherPipeAgent.name,
			threadId: response.threadId,
			stream: false,
		});

		console.log(completion);
	} else {
		// No tool calls, just return the completion
		console.log(response.completion);
	}
}

main();
```

---

## Memory Agent

Langbase [memory agents](https://langbase.com/docs/memory) represent the next frontier in semantic retrieval-augmented generation (RAG): a serverless and infinitely scalable API designed for developers. It is 30-50x less expensive than the competition, with industry-leading accuracy in advanced agentic routing, retrieval, and more.
```ts {{ title: 'index.ts' }}
import { Langbase } from "langbase";
import dotenv from "dotenv";
import downloadFile from "./download-file";

dotenv.config();

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Step 1: Create a memory
	const memory = await langbase.memories.create({
		name: "employee-handbook-memory",
		description: "Memory for employee handbook",
	});
	console.log("Memory created:", memory.name);

	// Step 2: Download the employee handbook file
	const fileUrl = "https://raw.githubusercontent.com/LangbaseInc/langbase-examples/main/assets/employee-handbook.txt";
	const fileName = "employee-handbook.txt";
	const fileContent = await downloadFile(fileUrl);

	// Step 3: Upload the employee handbook to memory
	await langbase.memories.documents.upload({
		memoryName: memory.name,
		contentType: "text/plain",
		documentName: fileName,
		document: fileContent,
	});
	console.log("Employee handbook uploaded to memory");

	// Step 4: Create a pipe
	const pipe = await langbase.pipes.create({
		name: "employee-handbook-pipe",
		model: "google:gemini-2.5-flash",
		description: "Pipe for querying employee handbook",
		memory: [{ name: "employee-handbook-memory" }],
	});
	console.log("Pipe created:", pipe.name);

	// Step 5: Ask a question
	const question = "What is the company policy on remote work?";

	const { completion } = await langbase.pipes.run({
		name: "employee-handbook-pipe",
		messages: [{ role: "user", content: question }],
		stream: false,
	});

	console.log("Question:", question);
	console.log("Answer:", completion);
}

main().catch(console.error);
```

```ts {{ title: 'download-file.ts' }}
import * as https from "https";

const downloadFile = async (fileUrl: string): Promise<Buffer> => {
	return new Promise((resolve, reject) => {
		https.get(fileUrl, (response) => {
			// Check for redirect or error status codes
			if (response.statusCode && (response.statusCode < 200 || response.statusCode >= 300)) {
				reject(new Error(`Failed to download: ${response.statusCode}`));
				return;
			}

			const chunks: Buffer[] = [];

			response.on("data", (chunk) => {
				chunks.push(chunk);
			});

			response.on("end", () => {
				const fileBuffer = Buffer.concat(chunks);
				console.log(`File downloaded successfully (${fileBuffer.length} bytes)`);
				resolve(fileBuffer);
			});
		}).on("error", (err) => {
			console.error("Error downloading file:", err);
			reject(err);
		});
	});
};

export default downloadFile;
```

---

Agent Run Examples https://langbase.com/docs/examples/agent/
import { ExampleAccordion } from '@/components/mdx/example-card';
import { generateMetadata } from '@/lib/generate-metadata';

# Agent Run Examples

Here are some examples of how to use the Langbase Agent primitive. These examples are designed to help you get started with the Langbase runtime agent `agent.run()`.
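For instance, here is a minimal sketch of a single runtime agent call. The `agent.run()` parameter names (`model`, `apiKey`, `instructions`, `input`) and the `output` field are assumptions based on the Agent primitive; check the Agent API reference for the exact signature.

```ts
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Run an agent at runtime, no pipe required (assumed signature).
	const response = await langbase.agent.run({
		model: 'openai:gpt-4o-mini',
		apiKey: process.env.LLM_API_KEY!, // your LLM provider key
		instructions: 'You are a helpful assistant.',
		input: 'What is an AI agent?',
		stream: false,
	});

	console.log(response.output);
}

main();
```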
---

Page https://langbase.com/docs/chunker/platform/
import { generateMetadata } from '@/lib/generate-metadata';

## Platform

Limits and pricing for the Chunk primitive on the Langbase Platform are as follows:

1. **[Limits](/chunk/platform/limits)**: Rate and usage limits.
2. **[Pricing](/chunk/platform/pricing)**: Pricing details for the Chunk primitive.

Page https://langbase.com/docs/agent/platform/
import { generateMetadata } from '@/lib/generate-metadata';

## Platform

Limits and pricing for the Agent primitive on the Langbase Platform are as follows:

1. **[Limits](/agent/platform/limits)**: Rate and usage limits.
2. **[Pricing](/agent/platform/pricing)**: Pricing details for the Agent primitive.

Tools API <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/tools/
import { generateMetadata } from '@/lib/generate-metadata';

# Tools API v1

The tools API is a collection of endpoints that allow you to interact with various services.

- [Crawl](/api-reference/tools/crawl)
- [Web Search](/api-reference/tools/web-search)

---

Threads API <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/threads/
import { generateMetadata } from '@/lib/generate-metadata';

# Threads API v1

Use the Threads API to manage conversation threads. Threads help you organize and maintain conversation history, making it easier to build conversational applications.

- [Create Thread](/api-reference/threads/create)
- [Update Thread](/api-reference/threads/update)
- [Get Thread](/api-reference/threads/get)
- [Delete Thread](/api-reference/threads/delete)
- [Append Messages](/api-reference/threads/append-messages)
- [List Messages](/api-reference/threads/list-messages)

---

Memory API https://langbase.com/docs/api-reference/memory/
import { generateMetadata } from '@/lib/generate-metadata';

# Memory API

The Langbase Memory API provides programmatic access for managing memories in your Langbase account. Since documents are stored in memories, you can also manage documents using the Memory API.

---

## Memory

- [List memory](/api-reference/memory/list) — List all memories in your account
- [Create memory](/api-reference/memory/create) — Create a new memory
- [Delete memory](/api-reference/memory/delete) — Delete a memory
- [Retrieve memory](/api-reference/memory/retrieve) — Similarity search for a given query

---

## Documents

- [List documents](/api-reference/memory/document-list) — List all documents in a memory
- [Delete document](/api-reference/memory/document-delete) — Delete a document
- [Upload document](/api-reference/memory/document-upload) — Upload a document
- [Embeddings Retry](/api-reference/memory/document-embeddings-retry) — Retry generating embeddings for a document

---

Migration Guide from Langbase beta API to v1 https://langbase.com/docs/api-reference/migrate-to-api-v1/
import { generateMetadata } from '@/lib/generate-metadata';

# Migration Guide from Langbase beta API to v1

---

This guide will help you migrate from the Langbase `beta` API to the `v1` API. The new API introduces simplified request endpoints and a streamlined body structure.

---

The `beta` version of the API has been deprecated and will remain supported until **February 28, 2025**. We strongly encourage all users to migrate to the new `v1` API.

---

## Major Changes

1. **API base URL**: A new API base URL has been introduced for `v1` API endpoints.
   - `https://api.langbase.com/v1`
2. **Deprecated Endpoints**: The `generate` and `chat` endpoints have been deprecated and replaced by the `run` endpoint.
3. **Unified endpoints**: Several user/org specific endpoints have been unified for better consistency.
4. **Request body**: The structure of the request body for some endpoints has been simplified.
---

## API Endpoint Changes

Here are the key changes in the `v1` API endpoints:

| **`beta` endpoint** | **`v1` endpoint** | Description |
|-----------------|-----------------|-----------------|
| `/beta/org/:org/pipes` | `/v1/pipes` | Create/List pipes |
| `/beta/user/pipes` | `/v1/pipes` | Create/List pipes |
| `/beta/pipes/:ownerLogin/:pipeName` | `/v1/pipes/:pipeName` | Update pipe |
| `/beta/pipes/run` | `/v1/pipes/run` | Run pipe |
| `/beta/generate` | `/v1/pipes/run` | Generate (now unified) |
| `/beta/chat` | `/v1/pipes/run` | Chat (now unified) |
| `/beta/org/:org/memorysets` | `/v1/memory` | Create/List memory |
| `/beta/user/memorysets` | `/v1/memory` | Create/List memory |
| `/beta/memorysets/:owner/:memoryName` | `/v1/memory/:memoryName` | Delete memory |
| `/beta/memory/retrieve` | `/v1/memory/retrieve` | Retrieve memory |
| `/beta/memorysets/:owner/:memoryName/documents` | `/v1/memory/:memoryName/documents` | List documents |
| `/beta/user/memorysets/documents` | `/v1/memory/:memoryName/documents` | Upload document |
| `/beta/org/:org/memorysets/documents` | `/v1/memory/:memoryName/documents` | Upload document |
| `/beta/memorysets/:owner/documents/embeddings/retry` | `/v1/memory/:memoryName/documents/:documentName/embeddings/retry` | Retry embeddings |

---

## Authentication

The authentication process remains the same for the `v1` API. You can use the same API key to authenticate your requests.

---

## Run Pipe

Here are the code examples to migrate from the `beta` to `v1` API for running a pipe.

```js {{ title: 'v1 API' }}
async function runPipe() {
	const url = 'https://api.langbase.com/v1/pipes/run';
	const apiKey = ''; // Replace with your user/org API key

	const data = {
		messages: [{ role: 'user', content: 'Hello!' }],
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(data),
	});

	const result = await response.json();
	return result;
}
```

```js {{ title: 'beta API (deprecated)' }}
async function runPipe() {
	const url = 'https://api.langbase.com/beta/pipes/run'; // Unified endpoint
	// const url = 'https://api.langbase.com/beta/generate'; // Deprecated
	// const url = 'https://api.langbase.com/beta/chat'; // Deprecated
	const apiKey = ''; // Replace with your user/org API key

	const data = {
		messages: [{ role: 'user', content: 'Hello!' }],
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(data),
	});

	const result = await response.json();
	return result;
}
```

**Key Changes:**

- The `generate` and `chat` endpoints have been deprecated and replaced by the `run` endpoint.

---

## Create Pipe

Here are the code examples to migrate from the `beta` to `v1` API for creating a pipe.

```js {{ title: 'v1 API' }}
async function createNewPipe() {
	const url = 'https://api.langbase.com/v1/pipes';
	const apiKey = ''; // Replace with your user/org API key

	const pipe = {
		name: 'ai-agent',
		upsert: true,
		description: 'This is a test ai-agent pipe',
		status: 'public',
		model: 'openai:gpt-4o-mini',
		stream: true,
		json: true,
		store: false,
		moderate: true,
		top_p: 1,
		max_tokens: 1000,
		temperature: 0.7,
		presence_penalty: 1,
		frequency_penalty: 1,
		stop: [],
		tool_choice: 'auto',
		parallel_tool_calls: false,
		messages: [
			{ role: 'system', content: "You're a helpful AI assistant." },
			{ role: 'system', content: "Don't ignore these instructions", name: 'safety' }
		],
		variables: [],
		tools: [],
		memory: [],
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(pipe),
	});

	const newPipe = await response.json();
	return newPipe;
}
```

```js {{ title: 'beta API (deprecated)' }}
async function createNewPipe() {
	const url = `https://api.langbase.com/beta/org/${org}/pipes`; // Org endpoint
	// const url = `https://api.langbase.com/beta/user/pipes`; // User endpoint
	const apiKey = ''; // Replace with your user/org API key

	const pipe = {
		name: 'ai-agent',
		description: 'This is a test ai-agent pipe',
		status: 'public',
		type: 'chat',
		config: {
			meta: {
				stream: true,
				json: false,
				store: true,
				moderate: false,
			},
			model: {
				name: 'gpt-4o-mini',
				provider: 'OpenAI',
				params: {
					max_tokens: 1000,
					temperature: 0.7,
					top_p: 1,
					frequency_penalty: 1,
					presence_penalty: 1,
					stop: [],
				},
				tool_choice: 'required',
				parallel_tool_calls: false
			},
			prompt: {
opening: 'Welcome to Langbase. Prompt away!', system: 'You are a helpful AI assistant.', messages: [], variables: [], }, tools: [], memorysets: [] } }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(pipe), }); const newPipe = await response.json(); return newPipe; } ``` **Key Changes:** - The new `upsert` property allows you to update a pipe if it already exists. - Default: `false` - The `model` property now accepts a model string in the `provider:model_id` format. - Default: `openai:gpt-4o-mini` - You can find the [list of supported models](https://langbase.com/docs/supported-models-and-providers) in the API documentation. Use the copy button to copy/paste the model string for the request body. - The `config` object has been replaced by individual properties in the `v1` API. - The `memorysets` property has been replaced by the `memory` property. - The `prompt` object has been replaced by the `messages` array. --- ## Update Pipe Here are the code examples to migrate from the `beta` to `v1` API for updating a pipe. ```js {{ title: 'v1 API' }} async function updatePipe() { const url = `https://api.langbase.com/v1/pipes/${pipeName}`; const apiKey = ''; // Replace with your user/org API key const pipe = { name: 'ai-agent', description: 'This is a test ai-agent pipe', status: 'public', model: 'openai:gpt-4o-mini', stream: true, json: true, store: false, moderate: true, top_p: 1, max_tokens: 1000, temperature: 0.7, presence_penalty: 1, frequency_penalty: 1, stop: [], tool_choice: 'auto', parallel_tool_calls: false, messages: [ { role: 'system', content: "You're a helpful AI assistant." }, { role: 'system', content: "Don't ignore these instructions", name: 'safety' } ], variables: [], tools: [], memory: [], }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(pipe), }); const updatedPipe = await response.json(); return updatedPipe; } ``` ```js {{ title: 'beta API (deprecated)' }} async function updatePipe() { const url = `https://api.langbase.com/beta/pipes/${ownerLogin}/${pipeName}`; const apiKey = ''; // Replace with your user/org API key const pipe = { name: 'ai-agent', description: 'This is a test ai-agent pipe', status: 'public', type: 'chat', config: { meta: { stream: true, json: false, store: true, moderate: false, }, model: { name: 'gpt-4o-mini', provider: 'OpenAI', params: { max_tokens: 1000, temperature: 0.7, top_p: 1, frequency_penalty: 1, presence_penalty: 1, stop: [], }, tool_choice: 'required', parallel_tool_calls: false }, prompt: { opening: 'Welcome to Langbase. Prompt away!', system: 'You are a helpful AI assistant.', messages: [], variables: [], }, tools: [], memorysets: [] } }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(pipe), }); const updatedPipe = await response.json(); return updatedPipe; } ``` **Key Changes:** - The `ownerLogin` has been removed from the `v1` update pipe API endpoint. - The `model` property now accepts a model string in the `provider:model_id` format. - Default: `openai:gpt-4o-mini` - You can find the [list of supported models](https://langbase.com/docs/supported-models-and-providers) in the API documentation. Use the copy button to copy/paste the model string for the request body. - The `config` object has been replaced by individual properties in the `v1` API. - The `memorysets` property has been replaced by the `memory` property. - The `prompt` object has been replaced by the `messages` array. --- ## List Pipes Here are the code examples to migrate from the `beta` to `v1` API for listing pipes. ```js {{ title: 'v1 API' }} async function listPipes() { const url = 'https://api.langbase.com/v1/pipes'; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const pipes = await response.json(); return pipes; } ``` ```js {{ title: 'beta API (deprecated)' }} async function listPipes() { const url = `https://api.langbase.com/beta/org/${org}/pipes`; // Org endpoint // const url = `https://api.langbase.com/beta/user/pipes`; // User endpoint const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const pipes = await response.json(); return pipes; } ``` **Key Changes:** - The user/org endpoints have been unified in the `v1` list pipes API endpoint. --- ## List Memory Here are the code examples to migrate from the `beta` to `v1` API for listing memory. ```js {{ title: 'v1 API' }} async function listMemory() { const url = 'https://api.langbase.com/v1/memory'; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const memory = await response.json(); return memory; } ``` ```js {{ title: 'beta API (deprecated)' }} async function listMemory() { const url = `https://api.langbase.com/beta/org/${org}/memorysets`; // Org endpoint // const url = `https://api.langbase.com/beta/user/memorysets`; // User endpoint const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const memory = await response.json(); return memory; } ``` **Key Changes:** - The user/org endpoints have been unified in the `v1` list memory API endpoint. --- ## Create Memory Here are the code examples to migrate from the `beta` to `v1` API for creating memory. ```js {{ title: 'v1 API' }} async function createMemory() { const url = 'https://api.langbase.com/v1/memory'; const apiKey = ''; // Replace with your user/org API key const memory = { name: 'memory-agent', description: 'This is a memory for ai-agent', }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(memory), }); const newMemory = await response.json(); return newMemory; } ``` ```js {{ title: 'beta API (deprecated)' }} async function createMemory() { const url = `https://api.langbase.com/beta/org/${org}/memorysets`; // Org endpoint // const url = `https://api.langbase.com/beta/user/memorysets`; // User endpoint const apiKey = ''; // Replace with your user/org API key const memory = { name: 'memory-agent', description: 'This is a memory for ai-agent', }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(memory), }); const newMemory = await response.json(); return newMemory; } ``` **Key Changes:** - The user/org endpoints have been unified in the `v1` create memory API endpoint.
--- ## Delete Memory Here are the code examples to migrate from the `beta` to `v1` API for deleting memory. ```js {{ title: 'v1 API' }} async function deleteMemory() { const url = `https://api.langbase.com/v1/memory/${memoryName}`; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'DELETE', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const result = await response.json(); return result; } ``` ```js {{ title: 'beta API (deprecated)' }} async function deleteMemory() { const url = `https://api.langbase.com/beta/memorysets/${ownerLogin}/${memoryName}`; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'DELETE', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const result = await response.json(); return result; } ``` **Key Changes:** - The `ownerLogin` has been removed from the `v1` delete memory API endpoint. --- ## Retrieve Memory Here are the code examples to migrate from the `beta` to `v1` API for retrieving memory. ```js {{ title: 'v1 API' }} async function retrieveMemory() { const url = 'https://api.langbase.com/v1/memory/retrieve'; const apiKey = ''; // Replace with your user/org API key const data = { query: 'your query here', memory: [ { name: 'memory1' }, { name: 'memory2' } ] }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); const result = await response.json(); return result; } ``` ```js {{ title: 'beta API (deprecated)' }} async function retrieveMemory() { const url = 'https://api.langbase.com/beta/memory/retrieve'; const apiKey = ''; // Replace with your user/org API key const data = { ownerLogin: '', query: 'your query here', memory: [ { name: 'memory1' }, { name: 'memory2' } ] }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); const result = await response.json(); return result; } ``` **Key Changes:** - The `ownerLogin` property has been removed from the `v1` retrieve memory API request body. --- ## List Memory Documents Here are the code examples to migrate from the `beta` to `v1` API for listing memory documents. ```js {{ title: 'v1 API' }} async function listMemoryDocuments() { const url = `https://api.langbase.com/v1/memory/${memoryName}/documents`; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const documents = await response.json(); return documents; } ``` ```js {{ title: 'beta API (deprecated)' }} async function listMemoryDocuments() { const url = `https://api.langbase.com/beta/memorysets/${ownerLogin}/${memoryName}/documents`; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const documents = await response.json(); return documents; } ``` **Key Changes:** - The user/org endpoints have been unified in the `v1` list memory documents API endpoint. - The `ownerLogin` has been removed from the `v1` list memory documents API endpoint. --- ## Upload Memory Document Here are the code examples to migrate from the `beta` to `v1` API for uploading memory documents. 
```js {{ title: 'v1 API' }} async function uploadMemoryDocument() { const url = `https://api.langbase.com/v1/memory/${memoryName}/documents`; const apiKey = ''; // Replace with your user/org API key const data = { memoryName: 'memory-agent', fileName: 'file.pdf', }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); const result = await response.json(); return result; } ``` ```js {{ title: 'beta API (deprecated)' }} async function uploadMemoryDocument() { const url = `https://api.langbase.com/beta/org/${org}/memorysets/documents`; // Org endpoint // const url = `https://api.langbase.com/beta/user/memorysets/documents`; // User endpoint const apiKey = ''; // Replace with your user/org API key const data = { memoryName: 'memory-agent', ownerLogin: '', fileName: 'file.pdf', }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); const result = await response.json(); return result; } ``` **Key Changes:** - The user/org endpoints have been unified in the `v1` upload memory document API endpoint. - The `ownerLogin` has been removed from the `v1` upload memory document API request body. --- ## Retry Memory Document Embeddings Here are the code examples to migrate from the `beta` to `v1` API for retrying memory document embeddings. ```js {{ title: 'v1 API' }} async function retryMemoryDocumentEmbeddings() { const url = `https://api.langbase.com/v1/memory/${memoryName}/documents/${documentName}/embeddings/retry`; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const result = await response.json(); return result; } ``` ```js {{ title: 'beta API (deprecated)' }} async function retryMemoryDocumentEmbeddings() { const url = `https://api.langbase.com/beta/memorysets/${owner}/documents/embeddings/retry`; const apiKey = ''; // Replace with your user/org API key const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, }); const result = await response.json(); return result; } ``` **Key Changes:** - The `owner` has been replaced by `memoryName` and `documentName` in the `v1` retry memory document embeddings API endpoint. ---
Page https://langbase.com/docs/api-reference/limits/ import { generateMetadata } from '@/lib/generate-metadata'; ## Limits To ensure stability and speed, and to prevent misuse, we have set certain limits on how much a user or organization can use the Langbase API. These limits are subject to change and may vary based on your subscription plan. We have two types of limits: 1. **[Rate Limits](/api-reference/limits/rate-limits)**: The number of requests you can make in a given time period. 2. **[Usage Limits](/api-reference/limits/usage-limits)**: The number of requests you can make per month, based on your subscription plan. For more information on the limits, please refer to the individual pages corresponding to each limit. Pipe API https://langbase.com/docs/api-reference/pipe/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe API Use the Pipe API to manage the pipes in your Langbase account. Create, update, list, and run AI Pipes: - [Run pipe](/api-reference/pipe/run) (NEW) - [Create pipe](/api-reference/pipe/create) - [Update pipe](/api-reference/pipe/update) - [List pipes](/api-reference/pipe/list) - [Deprecated endpoints](/api-reference/deprecated) --- Parser API <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/parser/ import { generateMetadata } from '@/lib/generate-metadata'; # Parser API v1 The `parser` API endpoint allows you to extract text content from various document formats. This is particularly useful when you need to process documents before using them in your AI applications. --- ## Limitations - Maximum file size: **10 MB** - Supported file formats: - Text files (`.txt`) - Markdown (`.md`) - PDF documents (`.pdf`) - CSV files (`.csv`) - Excel spreadsheets (`.xlsx`, `.xls`) - Common programming language files (`.js`, `.py`, `.java`, etc.) --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Parse documents {{ tag: 'POST', label: '/v1/parser' }} Parse documents by sending them to the parser API endpoint. ### Headers Request content type. Needs to be `multipart/form-data`. Replace `` with your user/org API key. --- ### Request Body The input document to be parsed. Must be one of the supported file formats and under **10 MB** in size. The name of the document including its extension (e.g., `document.pdf`). The MIME type of the document. Supported MIME types based on file format: - Text file: `text/plain` - Markdown: `text/markdown` - PDF documents: `application/pdf` - CSV files: `text/csv` - Excel spreadsheets: - `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet` - `application/vnd.ms-excel` - Programming language files: `text/plain` ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Parse a document
```ts {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const document = new File(['Your document content'], 'document.txt', { type: 'text/plain' }); const result = await langbase.parser({ document: document, documentName: 'document.txt', contentType: 'text/plain' }); console.log('Parsed content:', result); } main(); ``` ```python import requests def parse_document(): url = 'https://api.langbase.com/v1/parser' api_key = 'YOUR_API_KEY' headers = { 'Authorization': f'Bearer {api_key}' } with open('document.pdf', 'rb') as f: files = { 'document': f, } data = { 'documentName': 'document.pdf', 'contentType': 'application/pdf' } response = requests.post( url, headers=headers, files=files, data=data ) parsed_content = response.json() return parsed_content ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/parser \ -X POST \ -H 'Authorization: Bearer ' \ -F 'document=@/path/to/document.pdf' \ -F 'documentName=document.pdf' \ -F 'contentType=application/pdf' ``` --- ### Response The response is a JSON object with the following structure: ```ts {{title: 'Parser API Response'}} interface ParserResponse { documentName: string; content: string; } ``` The name of the parsed document. The extracted text content from the document. ```json {{ title: 'API Response' }} { "documentName": "document.pdf", "content": "Extracted text content from the document..." } ``` --- Error Codes https://langbase.com/docs/api-reference/errors/ import { generateMetadata } from '@/lib/generate-metadata'; # Error Codes Your requests to Langbase may fail due to various reasons. These errors can originate from either Langbase itself or the LLM (Language Model) provider. Below is a list of potential error codes to help you understand the issue. Please refer to the individual pages corresponding to the error code you encounter for detailed information and resolution steps. - [Forbidden (403)](/api-reference/errors/forbidden) - [Not found (404)](/api-reference/errors/not_found) - [Conflict (409)](/api-reference/errors/conflict) - [Bad request (400)](/api-reference/errors/bad_request) - [Unauthorized (401)](/api-reference/errors/unauthorized) - [Usage exceeded (403)](/api-reference/errors/usage_exceeded) - [Precondition failed (412)](/api-reference/errors/precondition_failed) - [Internal Server Error (500)](/api-reference/errors/internal_server_error) - [Insufficient permissions (403)](/api-reference/errors/insufficient_permissions) If you encounter an error not listed here, please reach out to our support team for assistance. --- Chunker API <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/chunker/ import { generateMetadata } from '@/lib/generate-metadata'; # Chunker API v1 The `chunker` API endpoint allows you to split your content into smaller chunks. This is particularly useful when you want to use document chunks in a RAG pipeline or just use specific parts of a document for your tasks. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 
4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Chunk content {{ tag: 'POST', label: '/v1/chunker' }} Split content into chunks by sending it to the chunker API endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. --- ### Request Body The content of the document to be chunked. The maximum length for each document chunk. Must be between `1024` and `30000` characters. Default: `1024` The number of characters to overlap between chunks. Must be greater than or equal to `256` and less than `chunkMaxLength`. Default: `256` ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Chunk content ```ts {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const content = `Langbase is the most powerful serverless AI platform for building AI agents with memory. Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience without using any frameworks.`; const chunks = await langbase.chunker({ content, chunkMaxLength: 1024, chunkOverlap: 256 }); console.log('Chunks:', chunks); } main(); ``` ```python import requests import json def chunk_document(): url = 'https://api.langbase.com/v1/chunker' api_key = 'YOUR_API_KEY' headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json', } data = { 'content': 'Langbase is the most powerful serverless AI platform for building AI agents with memory. Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience without using any frameworks.', 'chunkMaxLength': 1024, 'chunkOverlap': 256 } response = requests.post( url, headers=headers, data=json.dumps(data) ) chunks = response.json() return chunks ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/chunker \ -X POST \ -H 'Authorization: Bearer ' \ -H 'Content-Type: application/json' \ -d '{ "content": "Langbase is the most powerful serverless AI platform for building AI agents with memory. Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience without using any frameworks.", "chunkMaxLength": 1024, "chunkOverlap": 256 }' ``` --- ### Response The response is an array of chunks created from the content. ```ts {{title: 'Chunker API Response'}} type ChunkerResponse = string[]; ``` ```json {{ title: 'API Response' }} [ "Langbase is the most powerful serverless AI platform for building AI agents with memory. Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience without using any frameworks." ] ``` --- Embed API <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/embed/ import { generateMetadata } from '@/lib/generate-metadata'; # Embed API v1 The `embed` API endpoint allows you to generate vector embeddings for text chunks. This is particularly useful for semantic search, text similarity comparisons, and other NLP tasks. --- ## Limitations - Maximum number of chunks per request: **100** - Maximum length per chunk: **8192 characters** - Available embedding models: - `openai:text-embedding-3-large` - `cohere:embed-v4.0` - `cohere:embed-multilingual-v3.0` - `cohere:embed-multilingual-light-v3.0` - `google:text-embedding-004` --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. Please add the [LLM API keys](/features/keysets) for the embedding models you want to use in your API key settings.
--- ## Generate embeddings {{ tag: 'POST', label: '/v1/embed' }} Generate vector embeddings for text chunks by sending them to the embed API endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. --- ### Request Body An array of text chunks to generate embeddings for. Maximum 100 chunks per request, with each chunk limited to 8192 characters. The embedding model to use. Available options: - `openai:text-embedding-3-large` - `cohere:embed-multilingual-v3.0` - `cohere:embed-multilingual-light-v3.0` - `google:text-embedding-004` Default: `openai:text-embedding-3-large` ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Generate embeddings
```ts {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const embeddings = await langbase.embed({ chunks: [ "The quick brown fox", "jumps over the lazy dog" ] }); console.log('Embeddings:', embeddings); } main(); ``` ```python import requests import json def generate_embeddings(): url = 'https://api.langbase.com/v1/embed' api_key = 'YOUR_API_KEY' headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json' } data = { 'chunks': [ "The quick brown fox", "jumps over the lazy dog" ], 'embeddingModel': 'openai:text-embedding-3-large' } response = requests.post( url, headers=headers, data=json.dumps(data) ) embeddings = response.json() return embeddings ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/embed \ -X POST \ -H 'Authorization: Bearer ' \ -H 'Content-Type: application/json' \ -d '{ "chunks": [ "The quick brown fox", "jumps over the lazy dog" ], "embeddingModel": "openai:text-embedding-3-large" }' ``` ```ts {{ title: 'Custom Model' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const embeddings = await langbase.embed({ chunks: [ "Hello, world!", "Bonjour, monde!", "¡Hola, mundo!" ], embeddingModel: "cohere:embed-multilingual-v3.0" }); console.log('Multilingual embeddings:', embeddings); } main(); ``` --- ### Response The response is a 2D array where each inner array represents the embedding vector for the corresponding input chunk. ```ts {{title: 'Embed API Response'}} type EmbedResponse = number[][]; ``` ```json {{ title: 'API Response' }} [ [-0.023, 0.128, -0.194, ...], [0.067, -0.022, 0.289, ...], ] ``` --- Agent Run API https://langbase.com/docs/api-reference/agent/ import { generateMetadata } from '@/lib/generate-metadata'; # Agent Run API You can use the Agent Run endpoint as a runtime LLM agent. You can specify all parameters at runtime and get the response from the agent. Agent uses our unified LLM API to provide a consistent interface for interacting with 100+ LLMs across all the top LLM providers. See the list of [supported models and providers here](/supported-models-and-providers). --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Run an Agent {{ tag: 'POST', label: '/v1/agent/run' }} Run the Agent by sending the required data with the request. ## Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. LLM API key for the LLM being used in the request.
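To make the header requirements concrete, here is a minimal sketch of a raw `fetch` call with all three headers set. The environment variable names are assumptions; full runnable examples follow below.

```js {{ title: 'Node.js' }}
// Minimal sketch: calling the Agent Run endpoint with the required headers.
// LANGBASE_API_KEY and LLM_API_KEY are assumed environment variable names.
async function main() {
	const response = await fetch('https://api.langbase.com/v1/agent/run', {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json', // request body is JSON
			Authorization: `Bearer ${process.env.LANGBASE_API_KEY}`, // Langbase user/org API key
			'LB-LLM-API-KEY': process.env.LLM_API_KEY, // API key of the LLM provider in use
		},
		body: JSON.stringify({
			model: 'openai:gpt-4o-mini',
			input: 'Hello!',
		}),
	});
	const result = await response.json();
	console.log(result);
}

main();
```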
--- ## options ```ts {{title: 'AgentRunOptions Object'}} interface AgentRunOptions { model: string; input: string | InputMessage[]; instructions?: string; stream?: boolean; tools?: Tool[]; tool_choice?: 'auto' | 'required' | ToolChoice; parallel_tool_calls?: boolean; mcp_servers?: McpServerSchema[]; top_p?: number; max_tokens?: number; temperature?: number; presence_penalty?: number; frequency_penalty?: number; stop?: string[]; customModelParams?: Record<string, any>; } ``` *Following are the properties of the options object.* --- ### model LLM model. Combination of model provider and model id, like `openai:gpt-4o-mini`. Format: `provider:model_id` You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase. --- ### input A string (for simple text queries) or an array of input messages. When using a string, it will be treated as a single user message. Use it for simple queries. For example: ```ts {{title: 'String Input Example'}} langbase.agent.run({ input: 'What is an AI Agent?', ... }); ``` When using an array of input messages (`InputMessage[]`), each input message should include the following properties: ```ts {{title: 'Input Message Object'}} interface InputMessage { role: 'user' | 'assistant' | 'system' | 'tool'; content: string | ContentType[] | null; name?: string; tool_call_id?: string; } ``` ```ts {{title: 'Array Input Messages Example'}} langbase.agent.run({ input: [ { role: 'user', content: 'What is an AI Agent?', }, ], ... }); ``` --- The role of the author of this message. The content of the message. 1. `String` For text generation, it's a plain string. 2. `Null` or `undefined` Tool call messages can have no content. 3. `ContentType[]` Array used in vision and audio models, where content consists of structured parts (e.g., text, image URLs). ```js {{ title: 'ContentType Object' }} interface ContentType { type: string; text?: string | undefined; image_url?: | { url: string; detail?: string | undefined; } | undefined; }; ``` The name of the tool called by the LLM. The ID of the tool called by the LLM. --- ### instructions Used to give high-level instructions to the model about the task it should perform, including tone, goals, and examples of correct responses. This is equivalent to a system/developer role message at the top of the LLM's context. --- ### stream Whether to stream the response or not. If `true`, the response will be streamed. --- ### tools A list of tools the model may call. ```ts {{title: 'Tools Object'}} interface ToolsOptions { type: 'function'; function: FunctionOptions } ``` The type of the tool. Currently, only `function` is supported. The function that the model may call. ```ts {{title: 'FunctionOptions Object'}} export interface FunctionOptions { name: string; description?: string; parameters?: Record<string, any> } ``` The name of the function to call. The description of the function. The parameters of the function. --- ### tool_choice Tool usage configuration. Model decides when to use tools. Model must use specified tools. Forces use of a specific function. ```ts {{title: 'ToolChoice Object'}} interface ToolChoice { type: 'function'; function: { name: string; }; } ``` --- ### parallel_tool_calls Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.
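Putting `tools` and `tool_choice` together, here is a minimal sketch that forces the model to call one specific function via the `ToolChoice` object. The `get_current_weather` definition is borrowed from the usage examples further below.

```js {{ title: 'Node.js' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.agent.run({
		model: 'openai:gpt-4o-mini',
		input: 'What is the weather like in SF today?',
		tools: [
			{
				type: 'function',
				function: {
					name: 'get_current_weather',
					description: 'Get the current weather in a given location',
					parameters: {
						type: 'object',
						properties: {
							location: {
								type: 'string',
								description: 'The city and state, e.g. San Francisco, CA',
							},
						},
						required: ['location'],
					},
				},
			},
		],
		// Force the model to call get_current_weather instead of letting it decide.
		tool_choice: {
			type: 'function',
			function: { name: 'get_current_weather' },
		},
		apiKey: process.env.LLM_API_KEY!,
		stream: false,
	});
	console.log(response);
}

main();
```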
--- ### mcp_servers An array of SSE-type MCP servers. ```js {{ title: 'McpServerSchema Object' }} interface McpServerSchema { name: string; type: 'url'; url: string; authorization_token?: string; tool_configuration?: { allowed_tools?: string[]; enabled?: boolean; }; custom_headers?: Record<string, string>; } ``` The name of the MCP server. Type of the MCP server. The URL of the MCP server. For MCP servers that require OAuth authentication, you'll need to obtain an access token and pass it as the authorization token. Please note that we do not store this token. Tool configuration for the MCP server has the following properties: 1. `allowed_tools` - Specify the tool names that the MCP server is permitted to use. 2. `enabled` - Whether to enable tools from this server. Custom headers are additional headers to send to the MCP server, if required. --- ### temperature What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic. Default: `0.7` --- ### top_p An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Default: `1` --- ### max_tokens Maximum number of tokens in the response message returned. Default: `1000` --- ### presence_penalty Penalizes a word based on its occurrence in the input text. Default: `0` --- ### frequency_penalty Penalizes a word based on how frequently it appears in the training data. Default: `0` --- ### stop Up to 4 sequences where the API will stop generating further tokens. --- ### customModelParams Additional parameters to pass to the model as key-value pairs. These parameters are passed on to the model as-is. ```ts {{title: 'CustomModelParams Object'}} interface CustomModelParams { [key: string]: any; } ``` Example: ```ts { "logprobs": true, "service_tier": "auto", } ``` ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Run an agent
```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const {output} = await langbase.agent.run({ model: 'openai:gpt-4o-mini', instructions: 'You are a helpful AI Agent.', input: 'Who is an AI Engineer?', apiKey: process.env.LLM_API_KEY!, // Replace with the LLM API key. stream: false, }); console.log('Agent response:', output); } main(); ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/v1/agent/run' api_key = '' llm_api_key = '' body_data = { "input": [ {"role": "user", "content": "Hello!"} ], "stream": False } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', 'LB-LLM-API-KEY': llm_api_key } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() completion = res['completion'] return completion ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/agent/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -H 'LB-LLM-API-KEY: ' \ -d '{ "input": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} import {getRunner, Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const {stream, rawResponse} = await langbase.agent.run({ model: 'openai:gpt-4o-mini', instructions: 'You are a helpful AI Agent.', input: 'Who is an AI Engineer?', apiKey: process.env.LLM_API_KEY!, stream: true, }); // Convert the stream to a stream runner. const runner = getRunner(stream); runner.on('connect', () => { console.log('Stream started.\n'); }); runner.on('content', content => { process.stdout.write(content); }); runner.on('end', () => { console.log('\nStream ended.'); }); runner.on('error', error => { console.error('Error:', error); }); } main(); ``` ```python import requests import json def main(): url = 'https://api.langbase.com/v1/agent/run' api_key = '' # TODO: Replace with your Langbase user/org API key. llm_api_key = '' # TODO: Replace with your LLM API key. data = { "input": [{"role": "user", "content": "Hello!"}], "stream": True } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}", 'LB-LLM-API-KEY': llm_api_key } response = requests.post(url, headers=headers, data=json.dumps(data)) if not response.ok: print(response.json()) return for line in response.iter_lines(): if line: try: decoded_line = line.decode('utf-8') if decoded_line.startswith('data: '): json_str = decoded_line[6:] if json_str.strip() and json_str != '[DONE]': data = json.loads(json_str) if data['choices'] and len(data['choices']) > 0: delta = data['choices'][0].get('delta', {}) if 'content' in delta and delta['content']: print(delta['content'], end='', flush=True) except json.JSONDecodeError: print("Failed to parse JSON") # Debug JSON parsing continue except Exception as e: print(f"Error processing line: {e}") if __name__ == "__main__": main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/agent/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -H 'LB-LLM-API-KEY: ' \ -d '{ "input": [ { "role": "user", "content": "Hello!" 
} ], "stream": true }' ``` ```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const tools = [ { type: 'function', function: { name: 'get_current_weather', description: 'Get the current weather in a given location', parameters: { type: 'object', properties: { location: { type: 'string', description: 'The city and state, e.g. San Francisco, CA', }, unit: { type: 'string', enum: ['celsius', 'fahrenheit'] }, }, required: ['location'], }, }, }, ]; const response = await langbase.agent.run({ model: 'openai:gpt-4o-mini', input: 'What is the weather like in SF today?', tools: tools, tool_choice: 'auto', apiKey: process.env.LLM_API_KEY!, stream: false, }); console.log(response); } main(); ``` ```python import requests import json def get_weather_info(): url = 'https://api.langbase.com/v1/agent/run' api_key = '' llm_api_key = '' body_data = { "stream": False, "input": [ {"role": "user", "content": "What's the weather in SF"} ], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather of a given location", "parameters": { "type": "object", "required": ["location"], "properties": { "unit": { "enum": ["celsius", "fahrenheit"], "type": "string" }, "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } } } } } ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', 'LB-LLM-API-KEY': llm_api_key } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() return res ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/agent/run \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -H 'LB-LLM-API-KEY: ' \ -d '{ "input": [ { "role": "user", "content": "What is the weather in SF" } ], "stream": false, "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather of a given location", "parameters": { "type": "object", "required": ["location"], "properties": { "unit": { "enum": ["celsius", "fahrenheit"], "type": "string" }, "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } } } } } ] }' ``` --- ## Response Response of `langbase.agent.run()` is a `Promise<RunResponse>` object. ### RunResponse Object ```ts {{title: 'AgentRunResponse Object'}} interface RunResponse { output: string | null; id: string; object: string; created: number; model: string; choices: ChoiceGenerate[]; usage: Usage; system_fingerprint: string | null; rawResponse?: { headers: Record<string, string>; }; } ``` The generated text response (also called completion) from the agent. It can be a string or null if the model called a tool. The ID of the raw response. The object type name of the response. The timestamp of the response creation. The model used to generate the response. A list of chat completion choices. Can contain more than one element if `n` is greater than 1. ```ts {{title: 'Choice Object for langbase.agent.run() with stream off'}} interface ChoiceGenerate { index: number; message: Message; logprobs: boolean | null; finish_reason: string; } ``` The index of the choice in the list of choices. A messages array including `role` and `content` params. ```ts {{title: 'Message Object'}} interface Message { role: 'user' | 'assistant' | 'system' | 'tool'; content: string | null; tool_calls?: ToolCall[]; } ``` The role of the author of this message. The contents of the message. Null if a tool is called. The array of the tools called by the agent. ```ts {{title: 'ToolCall Object'}} interface ToolCall { id: string; type: 'function'; function: Function; } ``` The ID of the tool call. The type of the tool. Currently, only `function` is supported. The function that the model called. ```ts {{title: 'Function Object'}} export interface Function { name: string; arguments: string; } ``` The name of the function to call. The arguments to call the function with, as generated by the model in JSON format. Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence), depending on the type of LLM; check the provider's docs. The usage object including the following properties. ```ts {{title: 'Usage Object'}} interface Usage { prompt_tokens: number; completion_tokens: number; total_tokens: number; } ``` The number of tokens in the prompt (input). The number of tokens in the completion (output). The total number of tokens. This fingerprint represents the backend configuration that the model runs with. The different headers of the response. --- ### RunResponseStream Object Response of `langbase.agent.run()` with `stream: true` is a `Promise<RunResponseStream>`. ```ts {{title: 'AgentRunResponseStream Object'}} interface RunResponseStream { stream: ReadableStream; rawResponse?: { headers: Record<string, string>; }; } ``` The different headers of the response. Stream is an object with a streamed sequence of StreamChunk objects. ```ts {{title: 'StreamResponse Object'}} type StreamResponse = ReadableStream; ``` ### StreamChunk Represents a streamed chunk of a completion response returned by the model, based on the provided input. ```js {{title: 'StreamChunk Object'}} interface StreamChunk { id: string; object: string; created: number; model: string; choices: ChoiceStream[]; } ``` A `StreamChunk` object has the following properties. The ID of the response. The object type name of the response. The timestamp of the response creation. The model used to generate the response. A list of chat completion choices. Can contain more than one element if `n` is greater than 1. ```js {{title: 'Choice Object for langbase.agent.run() with stream true'}} interface ChoiceStream { index: number; delta: Delta; logprobs: boolean | null; finish_reason: string; } ``` The index of the choice in the list of choices. A chat completion delta generated by streamed model responses. ```js {{title: 'Delta Object'}} interface Delta { role?: Role; content?: string | null; tool_calls?: ToolCall[]; } ``` The role of the author of this message. The contents of the chunk message. Null if a tool is called. The array of the tools called by the LLM. ```js {{title: 'ToolCall Object'}} interface ToolCall { id: string; type: 'function'; function: Function; } ``` The ID of the tool call. The type of the tool. Currently, only `function` is supported. The function that the model called. ```js {{title: 'Function Object'}} export interface Function { name: string; arguments: string; } ``` The name of the function to call. The arguments to call the function with, as generated by the model in JSON format. Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence), depending on the type of LLM; check the provider's docs. ```json {{ title: 'RunResponse type of langbase.agent.run()' }} { "output": "AI Engineer is a person who designs, builds, and maintains AI systems.", "id": "chatcmpl-123", "object": "chat.completion", "created": 1720131129, "model": "gpt-4o-mini", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "AI Engineer is a person who designs, builds, and maintains AI systems." }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 28, "completion_tokens": 36, "total_tokens": 64 }, "system_fingerprint": "fp_123" } ``` ```js {{ title: 'RunResponseStream of langbase.agent.run() with stream true' }} { "stream": StreamResponse // example of streamed chunks below. } ``` ```json {{ title: 'StreamResponse has stream chunks' }} // A stream chunk looks like this … { "id": "chatcmpl-123", "object": "chat.completion.chunk", "created": 1719848588, "model": "gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices": [{ "index": 0, "delta": { "content": "Hi" }, "logprobs": null, "finish_reason": null }] } // More chunks as they come in... {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]} … {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} ``` --- Deprecated API https://langbase.com/docs/api-reference/deprecated/ import { generateMetadata } from '@/lib/generate-metadata'; # Deprecated API Deprecated API endpoints are no longer supported and should not be used. Below is a list of deprecated API endpoints. Click on the endpoint to view detailed information about it. --- ### Pipe API Below is the list of deprecated Pipe API endpoints. - [Run (beta)](/api-reference/deprecated/pipe-run) - [Generate (beta)](/api-reference/deprecated/pipe-generate) - [Chat (beta)](/api-reference/deprecated/pipe-chat) - [Create (beta)](/api-reference/deprecated/pipe-create) - [Update (beta)](/api-reference/deprecated/pipe-update) - [List (beta)](/api-reference/deprecated/pipe-list) Please refer to the [Pipe API](/api-reference/pipe) for the latest endpoints. --- ### Memory API Below is the list of deprecated Memory API endpoints.
- [List (beta)](/api-reference/deprecated/memory-list) - [Create (beta)](/api-reference/deprecated/memory-create) - [Delete (beta)](/api-reference/deprecated/memory-delete) - [Retrieve (beta)](/api-reference/deprecated/memory-retrieve) Please refer to the [Memory API](/api-reference/memory) for the latest endpoints. --- ### Documents API Below is the list of deprecated Documents API endpoints. - [List (beta)](/api-reference/deprecated/document-list) - [Upload (beta)](/api-reference/deprecated/document-upload) - [Embeddings Retry (beta)](/api-reference/deprecated/document-embeddings-retry) Please refer to the [Documents API](/api-reference/memory#document) for the latest endpoints. --- Langbase API Keys for users and organizations https://langbase.com/docs/api-reference/api-keys/ import { generateMetadata } from '@/lib/generate-metadata'; # Langbase API Keys for users and organizations The Langbase API uses API keys for authentication. You can create API keys at a user or org account level. --- ### Table of contents - [Get your Langbase API key](#get-your-langbase-api-key) - [Org API Key](#org-api-key) - [User API Key](#user-api-key) --- ## Get your Langbase API key You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details read the docs below. Treat your API keys like passwords. Keep them secret and use them only on the server side. Remember to keep your API key secret! Your Langbase API key is server-side only. Never share it or expose it in client-side code like browsers or apps. For production requests, route them through your own backend server where you can securely load your API key from an environment variable or key management service. --- All API requests should include your API key in an Authorization HTTP header as follows: ```bash Authorization: Bearer LANGBASE_API_KEY ``` With the Langbase SDK, you can set your API key as follows: ```js import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY }); ``` --- ## Org API Key Get an API key for your organization on Langbase. ## Step #1 Go to organization settings Login to your account on [Langbase](https://langbase.com/). 1. Navigate to your organization profile page. 2. Click on the `Settings` button and select 'Langbase API Keys'. --- ## Step #2 Org API Keys section 1. In the `Org API Keys` section, click on the `Generate API Key` button. --- ## Step #3 Generate a new API key 1. Give your org API key a name. Let's call it `Org API Key`. 2. Click on the `Generate API Key` button to generate the API key. --- ## Step #4 Copy the API Key 1. Use the `Copy` button to copy your org API key. 2. Click on the `Done` button to close the modal. Make sure to copy your org API key now. You will not be able to view it again. --- ## User API Key Get an API key for your user account on Langbase. ## Step #1 Go to profile settings Login to your account on [Langbase](https://langbase.com/). 1. Navigate to your profile `Settings` page. 2. Select 'Langbase API Keys'. --- ## Step #2 User API Keys section 1. In the `User API Keys` section, click on the `Generate API Key` button. --- ## Step #3 Generate a new API key 1. Give your user API key a name. Let's call it `API Key`. 2. Click on the `Generate API Key` button to generate the API key.
--- ## Step #4 Copy the API Key 1. Use the `Copy` button to copy your user API key. 2. Click on the `Done` button to close the modal. Make sure to copy your user API key now. You will not be able to view it again. --- ✨ Congrats, you have created your first API key. We're excited to see what you build with it. --- Pricing for Threads Primitive https://langbase.com/docs/threads/platform/pricing/ import { generateMetadata } from '@/lib/generate-metadata'; # Pricing for Threads Primitive `Create` and `Update` requests to the Threads primitive are counted as **Runs** against your subscription plan. Runs are counted against the content of messages in the Threads primitive. The content of the message is counted as the number of tokens in the message. For example, if a message has 1000 tokens, it counts as 1 run. If a message has 1500 tokens, it counts as 2 runs. `Get`, `Delete`, and `List` requests are not charged.

| Plan | Runs | Overage |
|------------|----------|----------|
| Hobby | 500 | - |
| Enterprise | [Contact Us][contact-us] | [Contact Us][contact-us] |

Each run is an API request which can have at most 1,000 tokens in it, which is equivalent to almost 750 words (an article). If your API request has, for instance, 1500 tokens in it, it will count as 2 runs. ### Free Users - **Limit**: 500 runs per month. - **Overage**: No overage. ### Enterprise Users There are no hard limits for Enterprise. Billing is based solely on the number of runs during each billing period. If you have questions about your usage or need assistance, please don't hesitate to [contact us](mailto:support@langbase.com). --- [contact-us]: mailto:support@langbase.com Limits for Threads Primitive https://langbase.com/docs/threads/platform/limits/ import { generateMetadata } from '@/lib/generate-metadata'; # Limits for Threads Primitive The following Rate and Usage Limits apply for the Threads primitive: ### Rate Limits Threads primitive requests follow our standard rate limits. See the [Rate Limits](/api-reference/limits/rate-limits) page for more details. ### Usage Limits `Create` and `Update` requests to the Threads primitive are counted as **Runs** against your subscription plan. See the [Run Usage Limits](/api-reference/limits/usage-limits) page for more details. `Get`, `Delete`, and `List` requests have no usage limits. usePipe() https://langbase.com/docs/sdk/pipe/use-pipe/ import { generateMetadata } from '@/lib/generate-metadata'; # usePipe() You can use the Langbase `usePipe()` React hook to generate text or handle a stream from a model provider. It internally manages the state and provides all the necessary callbacks and properties to work with an LLM. Check out how to use the `usePipe()` hook in [this example](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nextjs/components/langbase/chat-advanced.tsx). --- ## API reference ## `usePipe(options)` Handle text or a stream from a model provider. ```tsx {{title: 'hook Signature'}} usePipe(options); // With types. usePipe(options: UsePipeOptions) ``` ## options ### UsePipeOptions ```ts {{title: 'UsePipeOptions Object'}} interface UsePipeOptions { apiRoute?: string; onResponse?: (message: Message) => void; onFinish?: (messages: Message[]) => void; onConnect?: () => void; onError?: (error: Error) => void; threadId?: string; initialMessages?: Message[]; stream?: boolean; } ``` *Following are the properties of the options object.* --- ### apiRoute The API route to call that returns the LLM response.
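For instance, a minimal sketch of wiring the hook to your own API route might look like this. The `/langbase/pipe` route path is an assumed placeholder; a complete example appears at the end of this reference.

```tsx {{ title: 'page.tsx' }}
import { usePipe } from 'langbase/react';

export default function Chat() {
	// '/langbase/pipe' is a placeholder; point this at your own route
	// that runs the pipe on the server (see the route.ts example below).
	const { messages, input, handleInputChange, handleSubmit } = usePipe({
		apiRoute: '/langbase/pipe',
	});

	return (
		<form onSubmit={handleSubmit}>
			{messages.map((m, i) => (
				<p key={i}>
					{m.role}: {m.content}
				</p>
			))}
			<input value={input} onChange={handleInputChange} />
		</form>
	);
}
```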
--- ### onResponse The callback function that is called when a response is received from the API. The message object. ```ts {{title: 'Message Object'}} interface Message { role: MessageRole; content: string; name?: string; } ``` The role of the author of this message. The contents of the message. Null if a tool is called. A name for the participant. Provides the model information to differentiate between participants of the same role. --- ### onFinish The callback function that is called when the API call is finished. The message object. ```ts {{title: 'Message Object'}} interface Message { role: MessageRole; content: string; name?: string; } ``` The role of the author of this message. The contents of the message. Null if a tool is called. A name for the participant. Provides the model information to differentiate between participants of the same role. --- ### onConnect The callback function that is called when the API call is connected. --- ### onError The callback function that is called when an error occurs. The error object containing information about what went wrong. --- ### threadId The ID of the thread. Enable if you want to continue the conversation in the same thread from the second message onwards. Works only with deployed pipes. - If `threadId` is not provided, a new thread will be created. E.g., the first message of a new chat will not have a `threadId`. - After the first message, a new `threadId` will be returned. - Use this `threadId` to continue the conversation in the same thread from the second message onwards. --- ### initialMessages An array of messages to be sent to the LLM. The message object. ```ts {{title: 'Message Object'}} interface Message { role: MessageRole; content: string; name?: string; } ``` The role of the author of this message. The contents of the message. Null if a tool is called. A name for the participant. Provides the model information to differentiate between participants of the same role. --- ### stream Whether to stream the response from the API. Default: `true` --- ## Return Object The `usePipe` hook returns the following object: ```ts {{ title: 'usePipe return object' }} interface UsePipeReturn { input: string; stop: () => void; isLoading: boolean; error: Error | null; messages: Message[]; threadId: string | null; setMessages: (newMessages: Message[]) => void; regenerate: (options: PipeRequestOptions) => Promise<void>; sendMessage: (content: string, options: PipeRequestOptions) => Promise<void>; handleInputChange: (event: React.ChangeEvent<HTMLInputElement>) => void; handleSubmit: (event?: React.FormEvent, options?: PipeRequestOptions) => void; } ``` The input value of the input field. A function that stops the response from the API. A boolean value that indicates whether the API call is in progress. The error object containing information about what went wrong. The message object. ```ts {{title: 'Message Object'}} interface Message { role: MessageRole; content: string; name?: string; } ``` The role of the author of this message. The contents of the message. Null if a tool is called. A name for the participant. Provides the model information to differentiate between participants of the same role. The ID of the thread. Enable if you want to continue the conversation in the same thread from the second message onwards. Works only with deployed pipes. - If `threadId` is not provided, a new thread will be created. E.g., the first message of a new chat will not have a `threadId`. - After the first message, a new `threadId` will be returned.
- Use this `threadId` to continue the conversation in the same thread from the second message onwards.

A function that sets the messages.

The message object.

```ts {{title: 'Message Object'}}
interface Message {
	role: MessageRole;
	content: string;
	name?: string;
}
```

The role of the author of this message. The contents of the chunk message. Null if a tool is called. An optional name for the participant. Provides the model information to differentiate between participants of the same role.

A function that regenerates the response from the API.

```ts {{title: 'PipeRequestOptions'}}
interface PipeRequestOptions {
	headers?: Record<string, string> | Headers;
	body?: any;
	data?: any;
	allowEmptySubmit?: boolean;
}
```

Additional headers to be sent with the request. The body of the request. The data to be sent with the request. Whether to allow an empty submit. If `true`, the request will be sent even if the input is empty.

A function that sends a message to the API. The content of the message.

```ts {{title: 'PipeRequestOptions'}}
interface PipeRequestOptions {
	headers?: Record<string, string> | Headers;
	body?: any;
	data?: any;
	allowEmptySubmit?: boolean;
}
```

Additional headers to be sent with the request. The body of the request. The data to be sent with the request. Whether to allow an empty submit. If `true`, the request will be sent even if the input is empty.

A function that handles the input change event.

A function that handles the form submit and calls the API.

```ts {{title: 'PipeRequestOptions'}}
interface PipeRequestOptions {
	headers?: Record<string, string> | Headers;
	body?: any;
	data?: any;
	allowEmptySubmit?: boolean;
}
```

Additional headers to be sent with the request. The body of the request. The data to be sent with the request. Whether to allow an empty submit. If `true`, the request will be sent even if the input is empty.

### `usePipe` hook example

```tsx {{ title: 'page.tsx' }}
import { usePipe } from 'langbase/react';

export default function ChatComponent() {
	const {
		stop,
		input,
		error,
		messages,
		threadId,
		isLoading,
		regenerate,
		setMessages,
		sendMessage,
		handleSubmit,
		handleInputChange,
	} = usePipe({
		stream: true,
		apiRoute: '',
		onResponse: (message) => {},
		onFinish: (messages) => {},
		onError: (error) => {},
		initialMessages: [
			{role: 'assistant', content: 'Hello! How can I help you?'},
			{role: 'user', content: 'Who is an AI engineer?'},
		], // You can set initial messages here if needed
	});

	// UI
	return <></>;
}
```

```ts {{ title: 'route.ts' }}
import {Langbase, RunResponse} from 'langbase';
import {NextRequest} from 'next/server';

export async function POST(req: NextRequest) {
	const options = await req.json();

	// 1. Initiate the Pipe.
	const langbase = new Langbase({
		apiKey: process.env.LANGBASE_API_KEY!,
	});

	if (options.stream) {
		const {stream, threadId} = await langbase.pipes.run({
			...options,
			stream: true,
			name: 'summary-agent',
		});

		return new Response(stream, {
			status: 200,
			headers: {
				'lb-thread-id': threadId ?? '',
			},
		});
	} else {
		const response = (await langbase.pipes.run({
			...options,
			stream: false,
			name: 'summary-agent',
		})) as unknown as RunResponse;

		return new Response(JSON.stringify(response), {
			status: 200,
			headers: {
				'lb-thread-id': response.threadId ?? '',
			},
		});
	}
}
```

---

Update Pipe <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.pipes.update()</span> https://langbase.com/docs/sdk/pipe/update/ import { generateMetadata } from '@/lib/generate-metadata';

# Update Pipe langbase.pipes.update()

Update an existing AI agent pipe on Langbase using the `langbase.pipes.update()` function.
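As a quick sketch before the full reference below, assuming a pipe named `summary-agent` already exists in your account, a minimal update call looks like this:

```ts {{ title: 'quick-sketch.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({apiKey: process.env.LANGBASE_API_KEY!});

// Assumes a pipe named `summary-agent` already exists.
const updated = await langbase.pipes.update({
	name: 'summary-agent',
	description: 'Updated pipe description',
});
```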
---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.pipes.update(options)`

```ts {{ title: 'index.ts' }}
langbase.pipes.update(options);

// with types.
langbase.pipes.update(options: PipeUpdateOptions);
```

## options

```ts {{title: 'PipeUpdateOptions Object'}}
interface PipeUpdateOptions {
	name: string;
	description?: string;
	status?: 'public' | 'private';
	model?: string;
	stream?: boolean;
	json?: boolean;
	store?: boolean;
	moderate?: boolean;
	top_p?: number;
	max_tokens?: number;
	temperature?: number;
	presence_penalty?: number;
	frequency_penalty?: number;
	stop?: string[];
	tools?: {
		type: 'function';
		function: {
			name: string;
			description?: string;
			parameters?: Record<string, any>;
		};
	}[];
	tool_choice?: 'auto' | 'required' | ToolChoice;
	parallel_tool_calls?: boolean;
	messages?: Message[];
	variables?: Record<string, string>;
	memory?: {
		name: string;
	}[];
	response_format?: ResponseFormat;
}
```

*Following are the properties of the options object.*

Name of the pipe to update.

Description of the AI pipe.

Status of the pipe. Defaults to `public`.

Pipe LLM model. Combination of model provider and model id. Format: `provider:model_id`

You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase.

Default: `openai:gpt-4o-mini`

Tool usage configuration. Model decides when to use tools. Model must use specified tools. Forces use of a specific function.

```ts {{title: 'ToolChoice Object'}}
interface ToolChoice {
	type: 'function';
	function: {
		name: string;
	};
}
```

An array of memories that the Pipe should use.

```ts {{title: 'Memory Object'}}
interface Memory {
	name: string;
}
```

If enabled, the output will be streamed in real-time like ChatGPT. This is helpful if the user is directly reading the text.

Default: `true`

Enforce the output to be in JSON format.

Default: `false`

If enabled, both prompt and completions will be stored in the database. Otherwise, only the system prompt and few-shot messages will be saved.

Default: `true`

If enabled, Langbase blocks flagged requests automatically.

Default: `false`

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.

Default: `0.7`

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Default: `1`

Maximum number of tokens in the response message returned.

Default: `1000`

Penalizes a word based on whether it has already appeared in the text.

Default: `1`

Penalizes a word based on how frequently it has already appeared in the text.

Default: `1`

Up to 4 sequences where the API will stop generating further tokens.

Default: `[]`

Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.
Default: `true`

---

### messages

A messages array including the following properties. Optional if variables are provided.

```ts {{title: 'Message Object'}}
interface Message {
	role: 'user' | 'assistant' | 'system' | 'tool';
	content: string | null;
	name?: string;
	tool_call_id?: string;
	tool_calls?: ToolCall[];
}
```

---

The role of the author of this message. The contents of the chunk message. The name of the tool called by the LLM. The id of the tool called by the LLM. The array of tools sent to the LLM.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

Function definition sent to the LLM.

```ts {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

---

### variables

An object containing pipe variables. The key is the variable name and the value is the variable value.

Default: `{}`

---

### response_format

Defines the format of the response. Primarily used for Structured Outputs. To enforce Structured Outputs, set type to `json_schema`, and provide a JSON schema for your response with the `strict: true` option.

Default: `text`

```ts {{title: 'ResponseFormat Object'}}
type ResponseFormat =
	| {type: 'text'}
	| {type: 'json_object'}
	| {
			type: 'json_schema';
			json_schema: {
				description?: string;
				name: string;
				schema?: Record<string, any>;
				strict?: boolean | null;
			};
	  };
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Update pipe

```ts {{ title: 'update-basic-pipe.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const summaryAgent = await langbase.pipes.update({
		name: 'summary-agent',
		description: 'Updated pipe description',
		temperature: 0.8,
	});

	console.log('Summary agent:', summaryAgent);
}

main();
```

```ts {{ title: 'update-advanced-pipe.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const dataProcessingAgent = await langbase.pipes.update({
		name: 'data-processing-agent',
		model: 'anthropic:claude-3-5-sonnet-latest',
		json: true,
		tools: [
			{
				type: 'function',
				function: {
					name: 'processNewData',
					description: 'Process updated data',
					parameters: {
						type: 'object',
						properties: {
							data: {
								type: 'string',
								description: 'Data to process',
							},
						},
					},
				},
			},
		],
		memory: [{name: 'knowledge-base'}],
		messages: [
			{
				role: 'system',
				content: 'You are an enhanced data processing assistant.',
			},
		],
	});

	console.log('Data processing agent:', dataProcessingAgent);
}

main();
```

---

## Response

The response object returned by the `langbase.pipes.update()` function.

```ts {{title: 'PipeUpdateResponse'}}
interface PipeUpdateResponse {
	name: string;
	description: string;
	status: 'public' | 'private';
	owner_login: string;
	url: string;
	type: 'chat' | 'generate' | 'run';
	api_key: string;
}
```

Name of the updated pipe. Updated description of the pipe. Updated pipe visibility status. Login of the pipe owner. Pipe access URL. The type of the pipe. API key for pipe access.
```json {{ title: 'PipeUpdateResponse type of langbase.pipes.update()' }}
{
	"name": "summary-agent",
	"description": "Updated AI pipe for summarization",
	"status": "public",
	"owner_login": "user123",
	"url": "https://langbase.com/user123/summary-agent",
	"type": "run",
	"api_key": "pipe_xyz123"
}
```

---

Run Pipe <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.pipes.run()</span> https://langbase.com/docs/sdk/pipe/run/ import { generateMetadata } from '@/lib/generate-metadata';

# Run Pipe langbase.pipes.run()

You can use the `langbase.pipes.run()` function to run a pipe. It can either **generate** or **stream** text for a user prompt like "Who is an AI Engineer?", or you can give it an entire doc and ask it to summarize it.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.pipes.run(options)`

Request the LLM by running a pipe with the `langbase.pipes.run()` function.

```ts {{ title: 'index.ts' }}
langbase.pipes.run(options);

// with types.
langbase.pipes.run(options: RunOptions);
```

## options

```ts {{title: 'RunOptions Object'}}
interface RunOptions {
	name: string;
	apiKey?: string;
	llmKey?: string;
	messages: Message[];
	variables?: Record<string, string>;
	threadId?: string;
	rawResponse?: boolean;
	tools?: Tool[];
	memory?: Memory[];
}
```

*Following are the properties of the options object.*

---

### name

The name of the pipe to run. E.g. `ai-agent`. The pipe name and the User/Org API key are used to run the pipe.

---

### apiKey

API key of the pipe you want to run. If provided, `pipe.run()` will use this key instead of the user/org key to identify and run the pipe.

---

### llmKey

LLM API key for the pipe. If not provided, the LLM key from the Pipe/User/Organization keyset will be used. Explore the [example](https://github.com/LangbaseInc/langbase-sdk/blob/main/examples/nodejs/pipes/pipe.run.stream.llmkey.ts) to learn more.

---

### messages

A messages array including the following properties. Optional if variables are provided.

```ts {{title: 'Message Object'}}
interface Message {
	role: 'user' | 'assistant' | 'system' | 'tool';
	content: string | null;
	name?: string;
	tool_call_id?: string;
	tool_calls?: ToolCall[];
}
```

---

The role of the author of this message. The contents of the chunk message. The name of the tool called by the LLM. The id of the tool called by the LLM. The array of tools called by the LLM.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

The function that the LLM called.

```ts {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

---

### variables

An object containing pipe variables. The key is the variable name and the value is the variable value.

Default: `{}`

---

### threadId

The ID of the thread. Enable if you want to continue the conversation in the same thread from the second message onwards. Works only with deployed pipes.

- If `threadId` is not provided, a new thread will be created. E.g.
the first message of a new chat will not have a threadId.
- After the first message, a new `threadId` will be returned.
- Use this `threadId` to continue the conversation in the same thread from the second message onwards.

---

### rawResponse

Enable if you want to get the complete raw LLM response.

Default: `false`

---

### tools

A list of tools the model may call.

```ts {{title: 'Tools Object'}}
interface ToolsOptions {
	type: 'function';
	function: FunctionOptions;
}
```

The type of the tool. Currently, only `function` is supported. The function that the model may call.

```ts {{title: 'FunctionOptions Object'}}
export interface FunctionOptions {
	name: string;
	description?: string;
	parameters?: Record<string, any>;
}
```

The name of the function to call. The description of the function. The parameters of the function.

---

### memory

An array of memory objects that specify the memories your pipe should use at run time. If memories are defined here, they will override the default pipe memories, which will be ignored. All referenced memories must exist in your account.

```json {{title: 'Run time Memory array example'}}
"memory": [
	{ "name": "runtime-memory-1" },
	{ "name": "runtime-memory-2" }
]
```

If this property is not set or is empty, the pipe will fall back to using its default memories.

Default: `undefined`

Each memory in the array follows this structure:

```js {{title: 'Memory Object'}}
interface Memory {
	name: string;
}
```

The name of the memory.

---

## options

### RunOptionsStream

```ts {{title: 'RunOptionsStream Object'}}
interface RunOptionsStream extends RunOptions {
	stream: boolean;
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.pipes.run()` examples

```ts {{ title: 'Non-stream' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const {completion} = await langbase.pipes.run({
		name: 'summary-agent',
		messages: [
			{
				role: 'user',
				content: 'Who is an AI Engineer?',
			},
		],
		stream: false,
	});

	console.log('Summary agent completion:', completion);
}

main();
```

```ts {{ title: 'Stream' }}
import {getRunner, Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const {stream, threadId, rawResponse} = await langbase.pipes.run({
		stream: true,
		name: 'summary-agent',
		messages: [{role: 'user', content: 'Who is an AI Engineer?'}],
	});

	// Convert the stream to a stream runner.
	const runner = getRunner(stream);

	runner.on('connect', () => {
		console.log('Stream started.\n');
	});

	runner.on('content', content => {
		process.stdout.write(content);
	});

	runner.on('end', () => {
		console.log('\nStream ended.');
	});

	runner.on('error', error => {
		console.error('Error:', error);
	});
}

main();
```

```ts {{ title: 'Chat w/ pipe.run()' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Message 1: Tell something to the LLM.
	const response1 = await langbase.pipes.run({
		name: 'summary-agent',
		stream: false,
		messages: [{role: 'user', content: 'My company is called Langbase'}],
	});

	console.log(response1.completion);

	// Message 2: Ask something about the first message.
	// Continue the conversation in the same thread by sending
	// `threadId` from the second message onwards.
	const response2 = await langbase.pipes.run({
		name: 'summary-agent',
		stream: false,
		threadId: response1.threadId,
		messages: [{role: 'user', content: 'Tell me the name of my company.'}],
	});

	console.log(response2.completion);

	// You'll see any LLM will know the company is `Langbase`
	// since it's the same chat thread. This is how you can
	// continue a conversation in the same thread.
}

main();
```

```ts {{ title: 'Tool Calling' }}
import 'dotenv/config';
import {Langbase, getToolsFromRun} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const userMsg = "What's the weather in SF";

	const response = await langbase.pipes.run({
		stream: false,
		name: 'summary-agent',
		messages: [{role: 'user', content: userMsg}],
		tools: [
			{
				type: 'function',
				function: {
					name: 'get_current_weather',
					description: 'Get the current weather of a given location',
					parameters: {
						type: 'object',
						required: ['location'],
						properties: {
							unit: {
								enum: ['celsius', 'fahrenheit'],
								type: 'string',
							},
							location: {
								type: 'string',
								description: 'The city and state, e.g. San Francisco, CA',
							},
						},
					},
				},
			},
		],
	});

	const toolCalls = await getToolsFromRun(response);
	const hasToolCalls = toolCalls.length > 0;

	if (hasToolCalls) {
		// handle the tool calls
		console.log('Tools:', toolCalls);
	} else {
		// handle the response
		console.log('Response:', response);
	}
}

main();
```

```ts {{ title: 'Use Pipe API key' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const {completion} = await langbase.pipes.run({
		// do not add pipe name
		apiKey: '', // Replace with your pipe API key.
		stream: false,
		messages: [
			{
				role: 'user',
				content: 'Hello, I am a {{profession}}',
			},
		],
	});

	console.log('Agent generation:', completion);
}

main();
```

```ts {{ title: 'Variables' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const {completion} = await langbase.pipes.run({
		name: 'summary-agent',
		stream: false,
		messages: [
			{
				role: 'user',
				content: 'Hello, I am a {{profession}}',
			},
		],
		variables: {
			profession: 'AI Engineer',
		},
	});

	console.log('Summary Agent completion:', completion);
}

main();
```

---

## Response

Response of `langbase.pipes.run()` is a `Promise<RunResponse>` object.

### RunResponse Object

```ts {{title: 'RunResponse Object'}}
interface RunResponse {
	completion: string;
	threadId?: string;
	id: string;
	object: string;
	created: number;
	model: string;
	choices: ChoiceGenerate[];
	usage: Usage;
	system_fingerprint: string | null;
	rawResponse?: {
		headers: Record<string, string>;
	};
}
```

The generated text completion. The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional. The ID of the response. The object type name of the response. The timestamp of the response creation. The model used to generate the response. A list of chat completion choices. Can contain more than one element if `n` is greater than 1.

```ts {{title: 'Choice Object for langbase.pipes.run() with stream off'}}
interface ChoiceGenerate {
	index: number;
	message: Message;
	logprobs: boolean | null;
	finish_reason: string;
}
```

The index of the choice in the list of choices. A messages array including `role` and `content` params.

```ts {{title: 'Message Object'}}
interface Message {
	role: 'user' | 'assistant' | 'system' | 'tool';
	content: string | null;
	tool_calls?: ToolCall[];
}
```

The role of the author of this message.
The contents of the chunk message. Null if a tool is called. The array of the tools called by the LLM.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

The ID of the tool call. The type of the tool. Currently, only `function` is supported. The function that the model called.

```ts {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

The name of the function to call. The arguments to call the function with, as generated by the model in JSON format.

Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.

The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence); this depends on the type of LLM, so check the provider's docs.

The usage object including the following properties.

```ts {{title: 'Usage Object'}}
interface Usage {
	prompt_tokens: number;
	completion_tokens: number;
	total_tokens: number;
}
```

The number of tokens in the prompt (input). The number of tokens in the completion (output). The total number of tokens.

This fingerprint represents the backend configuration that the model runs with. The different headers of the response.

---

### RunResponseStream Object

Response of `langbase.pipes.run()` with `stream: true` is a `Promise<RunResponseStream>`.

```ts {{title: 'RunResponseStream Object'}}
interface RunResponseStream {
	stream: ReadableStream<StreamChunk>;
	threadId: string | null;
	rawResponse?: {
		headers: Record<string, string>;
	};
}
```

The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional. The different headers of the response. Stream is an object with a streamed sequence of StreamChunk objects.

```ts {{title: 'StreamResponse Object'}}
type StreamResponse = ReadableStream<StreamChunk>;
```

### StreamChunk

Represents a streamed chunk of a completion response returned by the model, based on the provided input.

```js {{title: 'StreamChunk Object'}}
interface StreamChunk {
	id: string;
	object: string;
	created: number;
	model: string;
	choices: ChoiceStream[];
}
```

A `StreamChunk` object has the following properties. The ID of the response. The object type name of the response. The timestamp of the response creation. The model used to generate the response. A list of chat completion choices. Can contain more than one element if `n` is greater than 1.

```js {{title: 'Choice Object for langbase.pipes.run() with stream true'}}
interface ChoiceStream {
	index: number;
	delta: Delta;
	logprobs: boolean | null;
	finish_reason: string;
}
```

The index of the choice in the list of choices. A chat completion delta generated by streamed model responses.

```js {{title: 'Delta Object'}}
interface Delta {
	role?: Role;
	content?: string | null;
	tool_calls?: ToolCall[];
}
```

The role of the author of this message. The contents of the chunk message. Null if a tool is called. The array of the tools called by the LLM.

```js {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

The ID of the tool call. The type of the tool. Currently, only `function` is supported.
The function that the model called.

```js {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

The name of the function to call. The arguments to call the function with, as generated by the model in JSON format.

Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.

The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence); this depends on the type of LLM, so check the provider's docs.

```json {{ title: 'RunResponse type of langbase.pipes.run()' }}
{
	"completion": "AI Engineer is a person who designs, builds, and maintains AI systems.",
	"threadId": "thread_123",
	"id": "chatcmpl-123",
	"object": "chat.completion",
	"created": 1720131129,
	"model": "gpt-4o-mini",
	"choices": [
		{
			"index": 0,
			"message": {
				"role": "assistant",
				"content": "AI Engineer is a person who designs, builds, and maintains AI systems."
			},
			"logprobs": null,
			"finish_reason": "stop"
		}
	],
	"usage": {
		"prompt_tokens": 28,
		"completion_tokens": 36,
		"total_tokens": 64
	},
	"system_fingerprint": "fp_123"
}
```

```js {{ title: 'RunResponseStream of langbase.pipes.run() with stream true' }}
{
	"threadId": "string-uuid-123",
	"stream": StreamResponse // example of streamed chunks below.
}
```

```json {{ title: 'StreamResponse has stream chunks' }}
// A stream chunk looks like this …
{
	"id": "chatcmpl-123",
	"object": "chat.completion.chunk",
	"created": 1719848588,
	"model": "gpt-4o-mini",
	"system_fingerprint": "fp_44709d6fcb",
	"choices": [{
		"index": 0,
		"delta": {"content": "Hi"},
		"logprobs": null,
		"finish_reason": null
	}]
}

// More chunks as they come in...
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]}
…
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
```

---

List Pipes <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.pipes.list()</span> https://langbase.com/docs/sdk/pipe/list/ import { generateMetadata } from '@/lib/generate-metadata';

# List Pipes langbase.pipes.list()

Retrieve a list of all your AI agent pipes on Langbase using the `langbase.pipes.list()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.
---

## API reference

### `langbase.pipes.list()`

```ts {{ title: 'index.ts' }}
langbase.pipes.list();
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### List pipes

```ts {{ title: 'list-pipes.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const pipeAgents = await langbase.pipes.list();
	console.log('Pipe agents:', pipeAgents);
}

main();
```

---

## Response

An array of pipe objects returned by the `langbase.pipes.list()` function.

```ts {{title: 'PipeListResponse'}}
interface PipeListResponse {
	name: string;
	description: string;
	status: 'public' | 'private';
	owner_login: string;
	url: string;
	model: string;
	stream: boolean;
	json: boolean;
	store: boolean;
	moderate: boolean;
	top_p: number;
	max_tokens: number;
	temperature: number;
	presence_penalty: number;
	frequency_penalty: number;
	stop: string[];
	tool_choice: 'auto' | 'required' | ToolChoice;
	parallel_tool_calls: boolean;
	messages: Message[];
	variables: Variable[] | [];
	tools: ToolFunction[] | [];
	memory: Memory[] | [];
}
```

Name of the pipe. Description of the AI pipe. Status of the pipe. Login of the pipe owner. Pipe access URL.

Pipe LLM model. Combination of model provider and model id. Format: `provider:model_id`

Pipe stream status. If enabled, the pipe will stream the response. Pipe JSON status. If enabled, the pipe will return the response in JSON format. Whether to store the prompt and completions in the database. Whether to moderate the completions returned by the model.

Pipe configured top_p value. Configured maximum tokens for the pipe. Configured temperature for the pipe. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic. Configured presence penalty for the pipe. Configured frequency penalty for the pipe. Configured stop sequences for the pipe.

Tool usage configuration. Model decides when to use tools. Model must use specified tools. Forces use of a specific function.

```ts {{title: 'ToolChoice Object'}}
interface ToolChoice {
	type: 'function';
	function: {
		name: string;
	};
}
```

If enabled, the pipe will make parallel tool calls.

A messages array including the following properties.

```ts {{title: 'Message Object'}}
interface Message {
	role: 'user' | 'assistant' | 'system' | 'tool';
	content: string | null;
	name?: string;
	tool_call_id?: string;
	tool_calls?: ToolCall[];
}
```

The role of the author of this message. The contents of the chunk message. The name of the tool called by the LLM. The id of the tool called by the LLM. The array of tools sent to the LLM.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

Function definition sent to the LLM.

```ts {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

A variables array including the `name` and `value` params.

```ts {{title: 'Variable Object'}}
interface Variable {
	name: string;
	value: string;
}
```

The name of the variable. The value of the variable.

An array of memories the pipe has access to.
```ts {{title: 'Memory Object'}}
interface Memory {
	name: string;
}
```

```json {{ title: 'Example PipeListResponse' }}
[
	{
		"name": "summary-agent",
		"description": "AI pipe for summarization",
		"status": "public",
		"owner_login": "user123",
		"url": "https://langbase.com/user123/summary-agent",
		"model": "openai:gpt-4o-mini",
		"stream": true,
		"json": false,
		"store": true,
		"moderate": false,
		"top_p": 1,
		"max_tokens": 1000,
		"temperature": 0.7,
		"presence_penalty": 1,
		"frequency_penalty": 1,
		"stop": [],
		"tool_choice": "auto",
		"parallel_tool_calls": true,
		"messages": [],
		"variables": [],
		"tools": [],
		"memory": []
	}
]
```

---

Create Pipe <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.pipes.create()</span> https://langbase.com/docs/sdk/pipe/create/ import { generateMetadata } from '@/lib/generate-metadata';

# Create Pipe langbase.pipes.create()

Create a new AI agent pipe on Langbase using the `langbase.pipes.create()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.pipes.create(options)`

```ts {{ title: 'index.ts' }}
langbase.pipes.create(options);

// with types.
langbase.pipes.create(options: PipeCreateOptions);
```

## options

```ts {{title: 'PipeCreateOptions Object'}}
interface PipeCreateOptions {
	name: string;
	description?: string;
	status?: 'public' | 'private';
	upsert?: boolean;
	model?: string;
	stream?: boolean;
	json?: boolean;
	store?: boolean;
	moderate?: boolean;
	top_p?: number;
	max_tokens?: number;
	temperature?: number;
	presence_penalty?: number;
	frequency_penalty?: number;
	stop?: string[];
	tools?: {
		type: 'function';
		function: {
			name: string;
			description?: string;
			parameters?: Record<string, any>;
		};
	}[];
	tool_choice?: 'auto' | 'required' | ToolChoice;
	parallel_tool_calls?: boolean;
	messages?: Message[];
	variables?: Record<string, string>;
	memory?: {
		name: string;
	}[];
	response_format?: ResponseFormat;
}
```

*Following are the properties of the options object.*

---

Name of the pipe. Description of the AI pipe. Status of the pipe. Defaults to `public`.

Pipe LLM model. Combination of model provider and model id. Format: `provider:model_id`

You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase.

Default: `openai:gpt-4o-mini`

Tool usage configuration. Model decides when to use tools. Model must use specified tools. Forces use of a specific function.

```ts {{title: 'ToolChoice Object'}}
interface ToolChoice {
	type: 'function';
	function: {
		name: string;
	};
}
```

An array of memories that the Pipe should use.

```ts {{title: 'Memory Object'}}
interface Memory {
	name: string;
}
```

If enabled, the output will be streamed in real-time like ChatGPT. This is helpful if the user is directly reading the text.

Default: `true`

Enforce the output to be in JSON format.

Default: `false`

If enabled, both prompt and completions will be stored in the database. Otherwise, only the system prompt and few-shot messages will be saved.
Default: `true`

If enabled, Langbase blocks flagged requests automatically.

Default: `false`

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.

Default: `0.7`

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Default: `1`

Maximum number of tokens in the response message returned.

Default: `1000`

Penalizes a word based on whether it has already appeared in the text.

Default: `1`

Penalizes a word based on how frequently it has already appeared in the text.

Default: `1`

Up to 4 sequences where the API will stop generating further tokens.

Default: `[]`

Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.

Default: `true`

---

### messages

A messages array including the following properties. Optional if variables are provided.

```ts {{title: 'Message Object'}}
interface Message {
	role: 'user' | 'assistant' | 'system' | 'tool';
	content: string | null;
	name?: string;
	tool_call_id?: string;
	tool_calls?: ToolCall[];
}
```

---

The role of the author of this message. The contents of the chunk message. The name of the tool called by the LLM. The id of the tool called by the LLM. The array of tools sent to the LLM.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

Function definition sent to the LLM.

```ts {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

---

### variables

An object containing pipe variables. The key is the variable name and the value is the variable value.

Default: `{}`

---

### response_format

Defines the format of the response. Primarily used for Structured Outputs. To enforce Structured Outputs, set type to `json_schema`, and provide a JSON schema for your response with the `strict: true` option.
Default: `text`

```ts {{title: 'ResponseFormat Object'}}
type ResponseFormat =
	| {type: 'text'}
	| {type: 'json_object'}
	| {
			type: 'json_schema';
			json_schema: {
				description?: string;
				name: string;
				schema?: Record<string, any>;
				strict?: boolean | null;
			};
	  };
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Create pipe

```ts {{ title: 'create-basic-pipe.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const summaryAgent = await langbase.pipes.create({
		name: 'summary-agent',
		description: 'A simple pipe example',
	});

	console.log('Pipe created:', summaryAgent);
}

main();
```

```ts {{ title: 'create-advanced-pipe.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const dataProcessingAgent = await langbase.pipes.create({
		name: 'data-processing-agent',
		description: 'Advanced pipe with tools and memory',
		model: 'google:gemini-1.5-pro-latest',
		json: true,
		tools: [
			{
				type: 'function',
				function: {
					name: 'processData',
					description: 'Process input data',
					parameters: {
						type: 'object',
						properties: {
							data: {
								type: 'string',
								description: 'Data to process',
							},
						},
					},
				},
			},
		],
		memory: [{name: 'knowledge-base'}],
		messages: [
			{
				role: 'system',
				content: 'You are a data processing assistant.',
			},
		],
		variables: {
			apiEndpoint: 'https://api.example.com',
		},
	});

	console.log('Pipe created:', dataProcessingAgent);
}

main();
```

---

## Response

The response object returned by the `langbase.pipes.create()` function.

```ts {{title: 'PipeCreateResponse'}}
interface PipeCreateResponse {
	name: string;
	description: string;
	status: 'public' | 'private';
	owner_login: string;
	url: string;
	type: 'chat' | 'generate' | 'run';
	api_key: string;
}
```

Name of the pipe. Description of the pipe. Pipe visibility status. Login of the pipe owner. Pipe access URL. The type of the pipe. API key for pipe access.

```json {{ title: 'PipeCreateResponse type of langbase.pipes.create()' }}
{
	"name": "summary-agent",
	"description": "AI pipe for summarization",
	"status": "public",
	"owner_login": "user123",
	"url": "https://langbase.com/user123/summary-agent",
	"type": "run",
	"api_key": "pipe_xyz123"
}
```

---

Get Thread `langbase.threads.get()` API reference of the `langbase.threads.get()` function in the Langbase AI SDK. https://langbase.com/docs/sdk/threads/get

# Get Thread langbase.threads.get()

You can use the `threads.get()` function to retrieve thread information and its metadata. This helps you access conversation history and thread details when needed.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.
---

## API reference

## `langbase.threads.get(options)`

Retrieve a thread by its ID.

```ts {{ title: 'index.ts' }}
langbase.threads.get(options);

// with types
langbase.threads.get(options: ThreadsGet);
```

## options

```ts {{title: 'ThreadsGet Object'}}
interface ThreadsGet {
	threadId: string;
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.threads.get()` example

```ts {{ title: 'Get Example' }}
const thread = await langbase.threads.get({
	threadId: "thread_123"
});
```

---

### Response

The response of the `threads.get()` function is a promise that resolves to a `ThreadsBaseResponse` object.

```ts {{title: 'ThreadsBaseResponse'}}
interface ThreadsBaseResponse {
	id: string;
	object: 'thread';
	created_at: number;
	metadata: Record<string, string>;
}
```

```json {{ title: 'Response Example' }}
{
	"id": "thread_123",
	"object": "thread",
	"created_at": 1709544000,
	"metadata": {
		"userId": "user123",
		"topic": "support",
		"status": "resolved"
	}
}
```

Update Thread `langbase.threads.update()` API reference of the `langbase.threads.update()` function in the Langbase AI SDK. https://langbase.com/docs/sdk/threads/update

# Update Thread langbase.threads.update()

You can use the `threads.update()` function to modify an existing thread's metadata. This helps you manage and organize your conversation threads effectively.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.threads.update(options)`

Update an existing thread's metadata.

```ts {{ title: 'index.ts' }}
langbase.threads.update(options);

// with types
langbase.threads.update(options: ThreadsUpdate);
```

## options

```ts {{title: 'ThreadsUpdate Object'}}
export interface ThreadsUpdate {
	threadId: string;
	metadata: Record<string, string>;
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.threads.update()` example

```ts {{ title: 'Update Example' }}
const updated = await langbase.threads.update({
	threadId: "thread_123",
	metadata: {status: "resolved"}
});
```

---

### Response

The response of the `threads.update()` function is a promise that resolves to a `ThreadsBaseResponse` object.
```ts {{title: 'ThreadsBaseResponse'}}
export interface ThreadsBaseResponse {
	id: string;
	object: 'thread';
	created_at: number;
	metadata: Record<string, string>;
}
```

```json {{ title: 'Response Example' }}
{
	"id": "thread_123",
	"object": "thread",
	"created_at": 1709544000,
	"metadata": {
		"status": "resolved"
	}
}
```

List Messages `langbase.threads.messages.list()` API reference of the `langbase.threads.messages.list()` function in the Langbase AI SDK. https://langbase.com/docs/sdk/threads/list-messages

# List Messages langbase.threads.messages.list()

You can use the `threads.messages.list()` function to retrieve all messages in a thread. This helps you access the complete conversation history of a specific thread.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.threads.messages.list(options)`

List all messages in a thread.

```ts {{ title: 'index.ts' }}
langbase.threads.messages.list(options);

// with types
langbase.threads.messages.list(options: ThreadMessagesList);
```

## options

```ts {{title: 'ThreadMessagesList Object'}}
interface ThreadMessagesList {
	threadId: string;
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.threads.messages.list()` example

```ts {{ title: 'List Messages Example' }}
const messages = await langbase.threads.messages.list({
	threadId: "thread_123"
});
```

---

### Response

The response of the `threads.messages.list()` function is a promise that resolves to an array of `ThreadMessagesBaseResponse` objects.

```ts {{title: 'ThreadMessagesBaseResponse'}}
export interface ThreadMessagesBaseResponse {
	id: string;
	created_at: number;
	thread_id: string;
	content: string;
	role: Role;
	tool_call_id: string | null;
	tool_calls: ToolCall[] | [];
	name: string | null;
	attachments: any[] | [];
	metadata: Record<string, string> | {};
}
```

```json {{ title: 'Response Example' }}
[
	{
		"id": "msg_125",
		"thread_id": "thread_123",
		"created_at": 1709544120,
		"role": "assistant",
		"content": "How can I help you today?",
		"tool_call_id": null,
		"tool_calls": [],
		"name": null,
		"attachments": [],
		"metadata": {}
	}
]
```

Delete Thread `langbase.threads.delete()` API reference of the `langbase.threads.delete()` function in the Langbase AI SDK. https://langbase.com/docs/sdk/threads/delete

# Delete Thread langbase.threads.delete()

You can use the `threads.delete()` function to remove threads that are no longer needed. This helps you manage your conversation history and clean up unused threads.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests.
For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.threads.delete(options)`

Delete a thread by its ID.

```ts {{ title: 'index.ts' }}
langbase.threads.delete(options);

// with types
langbase.threads.delete(options: DeleteThreadOptions);
```

## options

```ts {{title: 'DeleteThreadOptions Object'}}
interface DeleteThreadOptions {
	threadId: string;
}
```

```ts {{ title: 'Delete Example' }}
const result = await langbase.threads.delete({
	threadId: "thread_123"
});
```

---

### Response

The response of the `langbase.threads.delete()` function is a JSON object with the following structure:

```ts {{title: 'DeleteResponse'}}
interface DeleteResponse {
	success: boolean;
}
```

*The response will contain a `success` property indicating whether the thread was successfully deleted.*

```json {{ title: 'Response Example' }}
{
	"success": true
}
```

Crawler `langbase.tools.crawl()` API reference of the `langbase.tools.crawl()` function in the Langbase AI SDK. https://langbase.com/docs/sdk/tools/crawl

# Crawler langbase.tools.crawl()

You can use the `tools.crawl()` function to extract content from web pages. This is particularly useful when you need to gather information from websites for your AI applications.

The crawling functionality is powered by the following services:

- [Spider.cloud](https://spider.cloud)
- [Firecrawl](https://firecrawl.dev)

---

## Pre-requisites

1. **Langbase API Key**: Generate your API key from the [User/Org API key documentation](/api-reference/api-keys).
2. **Crawl API Key**: Sign up at [Spider.cloud](https://spider.cloud) OR [Firecrawl](https://firecrawl.dev) to get your crawl API key.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.tools.crawl(options)`

Crawl web pages by running the `langbase.tools.crawl()` function.

```ts {{ title: 'index.ts' }}
langbase.tools.crawl(options);

// with types
langbase.tools.crawl(options: ToolCrawlOptions);
```

## options

```ts {{title: 'ToolCrawlOptions Object'}}
interface ToolCrawlOptions {
	url: string[];
	apiKey: string;
	maxPages?: number;
	service?: 'spider' | 'firecrawl';
}
```

*Following are the properties of the options object.*

---

### url

An array of URLs to crawl. Each URL should be a valid web address.

---

### apiKey

Your crawl service API key – get one from [Spider.cloud](https://spider.cloud) or [Firecrawl](https://firecrawl.dev), depending on the service you use.

---

### maxPages

Maximum number of pages to crawl. Limits crawl depth.

---

### service

The crawling service to use. Options are `spider` or `firecrawl`. Default is `spider`.
## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
CRAWL_KEY=""
```

### `langbase.tools.crawl()` examples

```ts {{ title: 'Basic' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const results = await langbase.tools.crawl({
		url: ['https://example.com'],
		apiKey: process.env.CRAWL_KEY!,
		maxPages: 5
	});

	console.log('Crawled content:', results);
}

main();
```

```ts {{ title: 'Firecrawl' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const results = await langbase.tools.crawl({
		url: ['https://example.com'],
		apiKey: process.env.CRAWL_KEY!,
		maxPages: 5,
		service: 'firecrawl' // Use Firecrawl service
	});

	console.log('Crawled content:', results);
}

main();
```

```ts {{ title: 'Multiple URLs' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const results = await langbase.tools.crawl({
		url: [
			'https://example.com/page1',
			'https://example.com/page2',
			'https://example.com/page3'
		],
		apiKey: process.env.CRAWL_KEY!, // Spider.cloud API key
		maxPages: 10
	});

	console.log('Crawled content:', results);
}

main();
```

---

## Response

An array of objects containing the URL and the extracted content returned by the `langbase.tools.crawl()` function.

```ts {{title: 'ToolCrawlResponse Type'}}
interface ToolCrawlResponse {
	url: string;
	content: string;
}
```

The URL of the crawled page. The extracted content from the crawled page.

```json {{ title: 'ToolCrawlResponse Example' }}
[
	{
		"url": "https://example.com/page1",
		"content": "Extracted content from the webpage..."
	},
	{
		"url": "https://example.com/page2",
		"content": "More extracted content..."
	}
]
```

Append Messages `langbase.threads.append()` API reference of the `langbase.threads.append()` function in the Langbase AI SDK. https://langbase.com/docs/sdk/threads/append-messages

# Append Messages langbase.threads.append()

You can use the `threads.append()` function to add new messages to an existing thread. This helps you maintain conversation history and build interactive chat experiences.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.threads.append(options)`

Add new messages to an existing thread.
```ts {{ title: 'index.ts' }}
langbase.threads.append(options);

// with types
langbase.threads.append(options: ThreadMessagesCreate);
```

## options

```ts {{title: 'ThreadMessagesCreate Object'}}
interface ThreadMessagesCreate {
	threadId: string;
	messages: ThreadMessage[];
}

interface ThreadMessage extends Message {
	attachments?: any[];
	metadata?: Record<string, string>;
}

interface Message {
	role: 'user' | 'assistant' | 'system' | 'tool';
	content: string | null;
	name?: string;
	tool_call_id?: string;
	tool_calls?: ToolCall[];
}

interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}

interface Function {
	name: string;
	arguments: string;
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.threads.append()` example

```ts {{ title: 'Append Example' }}
const messages = await langbase.threads.append({
	threadId: "thread_125",
	messages: [{
		role: "assistant",
		content: "How can I help you today?"
	}]
});
```

---

### Response

The response of the `threads.append()` function is a promise that resolves to an array of `ThreadMessagesBaseResponse` objects.

```ts {{title: 'ThreadMessagesBaseResponse'}}
export interface ThreadMessagesBaseResponse {
	id: string;
	created_at: number;
	thread_id: string;
	content: string;
	role: Role;
	tool_call_id: string | null;
	tool_calls: ToolCall[] | [];
	name: string | null;
	attachments: any[] | [];
	metadata: Record<string, string> | {};
}
```

```json {{ title: 'Response Example' }}
[
	{
		"id": "msg_124",
		"thread_id": "thread_123",
		"created_at": 1709544120,
		"role": "user",
		"content": "Hello, I need help!",
		"tool_call_id": null,
		"tool_calls": [],
		"name": null,
		"attachments": [],
		"metadata": {}
	},
	{
		"id": "msg_125",
		"thread_id": "thread_123",
		"created_at": 1709544120,
		"role": "assistant",
		"content": "How can I help you today?",
		"tool_call_id": null,
		"tool_calls": [],
		"name": null,
		"attachments": [],
		"metadata": {}
	}
]
```

Web Search `langbase.tools.webSearch()` API reference of the `langbase.tools.webSearch()` function in the Langbase AI SDK. https://langbase.com/docs/sdk/tools/web-search

# Web Search langbase.tools.webSearch()

You can use the `tools.webSearch()` function to search the web for relevant information. This functionality is powered by [Exa](https://exa.ai), and you'll need to obtain an API key from them to use this feature.

---

## Pre-requisites

1. **Langbase API Key**: Generate your API key from the [User/Org API key documentation](/api-reference/api-keys).
2. **Exa API Key**: Sign up at [Exa Dashboard](https://dashboard.exa.ai/api-keys) to get your web search API key.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.tools.webSearch(options)`

Search the web by running the `langbase.tools.webSearch()` function.
Web Search `langbase.tools.webSearch()` API reference of `langbase.tools.webSearch()` function in Langbase AI SDK. https://langbase.com/docs/sdk/tools/web-search

# Web Search langbase.tools.webSearch()

You can use the `tools.webSearch()` function to search the web for relevant information. This functionality is powered by [Exa](https://exa.ai), and you'll need to obtain an API key from them to use this feature.

---

## Pre-requisites

1. **Langbase API Key**: Generate your API key from the [User/Org API key documentation](/api-reference/api-keys).
2. **Exa API Key**: Sign up at [Exa Dashboard](https://dashboard.exa.ai/api-keys) to get your web search API key.

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.tools.webSearch(options)`

Search the web by running the `langbase.tools.webSearch()` function.

```ts {{ title: 'index.ts' }}
langbase.tools.webSearch(options);

// with types
langbase.tools.webSearch(options: ToolWebSearchOptions);
```

## options

```ts {{title: 'ToolWebSearchOptions Object'}}
interface ToolWebSearchOptions {
  query: string;
  service: 'exa';
  apiKey: string;
  totalResults?: number;
  domains?: string[];
}
```

*Following are the properties of the options object.*

---

### query

The search query to execute.

---

### service

Currently only supports `'exa'` as the search service provider.

---

### apiKey

Your Exa API key – get one from the [Exa Dashboard](https://dashboard.exa.ai/api-keys).

---

### totalResults

The maximum number of results to return from the search.

---

### domains

Optional array of domains to restrict the search to.

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
EXA_API_KEY=""
```

### `langbase.tools.webSearch()` examples

```ts {{ title: 'Basic' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const results = await langbase.tools.webSearch({
    query: 'What is Langbase?',
    service: 'exa',
    apiKey: process.env.EXA_API_KEY!,
    totalResults: 2
  });

  console.log('Search results:', results);
}

main();
```

```ts {{ title: 'Domain-Specific Search' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const results = await langbase.tools.webSearch({
    query: 'What is Langbase?',
    service: 'exa',
    apiKey: process.env.EXA_API_KEY!,
    totalResults: 2,
    domains: ['https://langbase.com']
  });

  console.log('Domain-specific results:', results);
}

main();
```

---

## Response

An array of web search result objects returned by the `langbase.tools.webSearch()` function.

```ts {{title: 'ToolWebSearchResponse Type'}}
interface ToolWebSearchResponse {
  url: string;
  content: string;
}
```

The URL of the search result.

The extracted content from the search result.

```json {{ title: 'ToolWebSearchResponse Example' }}
[
  {
    "url": "https://langbase.com/docs/introduction",
    "content": "Langbase is a powerful AI development platform..."
  },
  {
    "url": "https://langbase.com/docs/getting-started",
    "content": "Get started with Langbase by installing our SDK..."
  }
]
```

Retrieve from Memory <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.retrieve()</span> https://langbase.com/docs/sdk/memory/retrieve/ import { generateMetadata } from '@/lib/generate-metadata';

# Retrieve from Memory langbase.memories.retrieve()

Retrieve similar chunks from an AI memory on Langbase for a query using the `langbase.memories.retrieve()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.
---

## API reference

## `langbase.memories.retrieve(options)`

```ts {{ title: 'index.ts' }}
langbase.memories.retrieve(options);

// with types.
langbase.memories.retrieve(options: MemoryRetrieveOptions);
```

## options

```ts {{title: 'MemoryRetrieveOptions Object'}}
interface MemoryRetrieveOptions {
  query: string;
  memory: Memory[];
  topK?: number;
}
```

*Following are the properties of the options object.*

---

The search query for retrieving similar chunks.

An array of memory objects from which to retrieve similar chunks. Each object can include optional filters to narrow down the search.

```ts {{title: 'Memory Object'}}
interface Memory {
  name: string;
  filters?: MemoryFilters;
}
```

The name of the memory.

Optional array of filters to narrow down the search results.

```ts {{title: 'MemoryFilters Type'}}
type FilterOperator = 'Eq' | 'NotEq' | 'In' | 'NotIn' | 'And' | 'Or';
type FilterConnective = 'And' | 'Or';
type FilterValue = string | string[];
type FilterCondition = [string, FilterOperator, FilterValue];
type MemoryFilters = [FilterConnective, MemoryFilters[]] | FilterCondition;
```

Filters can be either:

- A single condition: `["field", "operator", "value"]`
- A nested structure: `["And"|"Or", MemoryFilters]`

The number of top similar chunks to return from memory. Default is 20, minimum is 1, and maximum is 100.

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Retrieve from memory

```ts {{ title: 'Basic' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const chunks = await langbase.memories.retrieve({
    query: "What are the key features?",
    memory: [{
      name: "knowledge-base"
    }]
  });

  console.log('Memory chunk:', chunks);
}

main();
```

```ts {{ title: 'With Filters' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const chunks = await langbase.memories.retrieve({
    query: 'What are the key features?',
    memory: [
      {
        name: 'knowledge-base',
        filters: ['category', 'Eq', 'features'],
      },
    ],
  });

  console.log('Memory chunk:', chunks);
}

main();
```

```ts {{ title: 'Advanced Filters' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const chunks = await langbase.memories.retrieve({
    query: 'What are the key features?',
    memory: [
      {
        name: 'knowledge-base',
        filters: [
          'And',
          [
            ['category', 'Eq', 'features'],
            ['section', 'In', ['overview', 'features']],
          ],
        ],
      },
    ],
  });

  console.log('Memory chunk:', chunks);
}

main();
```

---

## Response

The response array returned by the `langbase.memories.retrieve()` function.

```ts {{title: 'MemoryRetrieveResponse'}}
interface MemoryRetrieveResponse {
  text: string;
  similarity: number;
  meta: Record<string, string>;
}
```

Retrieved text segment from memory.

Similarity score between the query and retrieved text (0-1 range).

Additional metadata associated with the retrieved text.
```json {{ title: 'Basic' }}
[
  {
    "text": "Key features of Langbase include: semantic search capabilities, flexible memory management, and scalable architecture for handling large datasets.",
    "similarity": 0.92,
    "meta": {
      "category": "features",
      "section": "overview"
    }
  },
  {
    "text": "Our platform offers advanced features like real-time memory updates, custom metadata filtering, and enterprise-grade security.",
    "similarity": 0.87,
    "meta": {
      "category": "updates",
      "section": "highlights"
    }
  },
  {
    "text": "Platform highlights include AI-powered memory retrieval, customizable embedding models, and advanced filtering capabilities.",
    "similarity": 0.85,
    "meta": {
      "category": "features",
      "section": "highlights"
    }
  }
]
```

```json {{ title: 'With Filters' }}
[
  {
    "text": "Key features of Langbase include: semantic search capabilities, flexible memory management, and scalable architecture for handling large datasets.",
    "similarity": 0.92,
    "meta": {
      "category": "features",
      "section": "overview"
    }
  },
  {
    "text": "Platform highlights include AI-powered memory retrieval, customizable embedding models, and advanced filtering capabilities.",
    "similarity": 0.85,
    "meta": {
      "category": "features",
      "section": "highlights"
    }
  }
]
```

```json {{ title: 'With Advanced Filters' }}
[
  {
    "text": "Key features of Langbase include: semantic search capabilities, flexible memory management, and scalable architecture for handling large datasets.",
    "similarity": 0.92,
    "meta": {
      "category": "features",
      "section": "overview"
    }
  }
]
```
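Retrieved chunks are usually joined into a context block and passed to an LLM. As an illustration (not part of the `memories.retrieve()` API), here is a minimal RAG sketch that grounds `langbase.agent.run()`, documented later in this reference, in the retrieved chunks; it assumes an `LLM_API_KEY` environment variable for the model provider.

```ts {{ title: 'RAG sketch with agent.run()' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const query = 'What are the key features?';

  // 1. Retrieve the most similar chunks from memory.
  const chunks = await langbase.memories.retrieve({
    query,
    memory: [{ name: 'knowledge-base' }],
    topK: 5,
  });

  // 2. Join the chunks into a single context block.
  const context = chunks.map(chunk => chunk.text).join('\n\n');

  // 3. Ask the model, grounding it in the retrieved context.
  const { output } = await langbase.agent.run({
    model: 'openai:gpt-4o-mini',
    apiKey: process.env.LLM_API_KEY!,
    instructions: 'Answer using only the provided context.',
    input: `Context:\n${context}\n\nQuestion: ${query}`,
    stream: false,
  });

  console.log(output);
}

main();
```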
---

Create Thread `langbase.threads.create()` API reference of `langbase.threads.create()` function in Langbase AI SDK. https://langbase.com/docs/sdk/threads/create

# Create Thread langbase.threads.create()

You can use the `threads.create()` function to create new conversation threads. Threads help you organize and maintain conversation history, making it easier to build conversational applications.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.threads.create(options)`

Create a new thread with optional initial messages and metadata.

```ts {{ title: 'index.ts' }}
langbase.threads.create(options);

// with types
langbase.threads.create(options: ThreadsCreate);
```

## options

```ts {{title: 'ThreadsCreate Object'}}
interface ThreadsCreate {
  threadId?: string;
  metadata?: Record<string, string>;
  messages?: ThreadMessage[];
}

interface ThreadMessage extends Message {
  attachments?: any[];
  metadata?: Record<string, string>;
}

interface Message {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string | null;
  name?: string;
  tool_call_id?: string;
  tool_calls?: ToolCall[];
}

interface ToolCall {
  id: string;
  type: 'function';
  function: Function;
}

interface Function {
  name: string;
  arguments: string;
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### `langbase.threads.create()` example

```ts {{ title: 'Create Example' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

const thread = await langbase.threads.create({
  metadata: {
    userId: "user123",
    topic: "support"
  },
  messages: [{
    role: "user",
    content: "Hello, I need help!"
  }]
});
```

---

### Response

The response of the `threads.create()` function is a `Promise` that resolves to a `ThreadsBaseResponse` object.

```ts {{title: 'ThreadsBaseResponse'}}
interface ThreadsBaseResponse {
  id: string;
  object: 'thread';
  created_at: number;
  metadata: Record<string, string>;
}
```

```json {{ title: 'Response Example' }}
{
  "id": "thread_123",
  "object": "thread",
  "created_at": 1709544000,
  "metadata": {
    "userId": "user123",
    "topic": "support"
  }
}
```

Upload Document <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.documents.upload()</span> https://langbase.com/docs/sdk/memory/document-upload/ import { generateMetadata } from '@/lib/generate-metadata';

# Upload Document langbase.memories.documents.upload()

Upload documents to a memory on Langbase using the `langbase.memories.documents.upload()` function.

This function can also be used to replace an existing document in a memory. To do this, you need to provide the same `documentName` and `memoryName` attributes as the existing document.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.memories.documents.upload(options)`

```ts {{ title: 'index.ts' }}
langbase.memories.documents.upload(options);

// with types.
langbase.memories.documents.upload(options: MemoryUploadDocOptions);
```

## options

```ts {{title: 'MemoryUploadDocOptions Object'}}
interface MemoryUploadDocOptions {
  memoryName: string;
  documentName: string;
  document: Buffer | File | FormData | ReadableStream;
  contentType:
    | 'application/pdf'
    | 'text/plain'
    | 'text/markdown'
    | 'text/csv'
    | 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
    | 'application/vnd.ms-excel';
  meta?: Record<string, string>;
}
```

*Following are the properties of the options object.*

---

Name of the memory to upload the document to.

Name of the document.
The body of the document to be stored in the bucket. It can be:

- `Buffer`
- `File`
- `FormData`
- `ReadableStream`

MIME type of the document. Supported types:

- `application/pdf`: PDF documents
- `text/plain`: Plain text files and all major code files
- `text/markdown`: Markdown files
- `text/csv`: CSV files
- `application/vnd.openxmlformats-officedocument.spreadsheetml.sheet`: Excel files
- `application/vnd.ms-excel`: Excel files

Custom metadata for the document, limited to string key-value pairs. A maximum of 10 pairs is allowed.

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Upload document

```ts {{ title: 'upload-file-node.ts' }}
import {Langbase} from 'langbase';
import {readFileSync} from 'fs';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const hasDocumentUploaded = await langbase.memories.documents.upload({
    memoryName: 'knowledge-base',
    contentType: 'application/pdf',
    documentName: 'technical-doc.pdf',
    document: readFileSync('document.pdf'),
    meta: {
      category: 'technical',
      section: 'overview',
    },
  });

  if (hasDocumentUploaded.ok) {
    console.log('Document uploaded successfully');
  }
}

main();
```

---

## Response

The response object returned by the `langbase.memories.documents.upload()` function.

```ts {{title: 'Response'}}
interface Response {
  ok: boolean;
  status: number;
  statusText: string;
}
```

Indicates whether the upload was successful.

HTTP status code of the upload response.

HTTP status message corresponding to the status code.

```json {{ title: 'Response of langbase.memories.documents.upload()' }}
{
  "ok": true,
  "status": 200,
  "statusText": "OK"
}
```

---

List Memories <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.list()</span> https://langbase.com/docs/sdk/memory/list/ import { generateMetadata } from '@/lib/generate-metadata';

# List Memories langbase.memories.list()

Retrieve a list of all AI memories present in an account on Langbase using the `langbase.memories.list()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.memories.list()`

```ts {{ title: 'index.ts' }}
langbase.memories.list();
```

The `langbase.memories.list()` method takes no parameters and returns an array of memories present in your account.
## Usage example

```bash {{ title: 'Install the SDK' }}
npm i langbase
pnpm i langbase
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### List memories

```ts {{ title: 'list-memories.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const memoryList = await langbase.memories.list();
  console.log('Memories:', memoryList);
}

main();
```

---

## Response

The response array returned by the `langbase.memories.list()` function.

```ts {{title: 'MemoryListResponse'}}
interface MemoryListResponse {
  name: string;
  description: string;
  owner_login: string;
  url: string;
  embeddingModel:
    | 'openai:text-embedding-3-large'
    | 'cohere:embed-v4.0'
    | 'cohere:embed-multilingual-v3.0'
    | 'cohere:embed-multilingual-light-v3.0';
}
```

Name of the memory.

Description of the memory.

Login of the memory owner.

Memory access URL.

The embedding model used by the AI memory.

- `openai:text-embedding-3-large`
- `cohere:embed-v4.0`
- `cohere:embed-multilingual-v3.0`
- `cohere:embed-multilingual-light-v3.0`

```json {{ title: 'Response of langbase.memories.list()' }}
[
  {
    "name": "knowledge-base",
    "description": "An AI memory for storing company internal docs.",
    "owner_login": "user123",
    "url": "https://langbase.com/user123/document-memory",
    "embeddingModel": "openai:text-embedding-3-large"
  },
  {
    "name": "multilingual-knowledge-base",
    "description": "Advanced memory with multilingual support",
    "owner_login": "user123",
    "url": "https://langbase.com/user123/multilingual-memory",
    "embeddingModel": "cohere:embed-multilingual-v3.0"
  }
]
```

---

List Documents <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.documents.list()</span> https://langbase.com/docs/sdk/memory/document-list/ import { generateMetadata } from '@/lib/generate-metadata';

# List Documents langbase.memories.documents.list()

List documents in a memory on Langbase using the `langbase.memories.documents.list()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.memories.documents.list(options)`

```ts {{ title: 'index.ts' }}
langbase.memories.documents.list(options);

// with types.
langbase.memories.documents.list(options: MemoryListDocOptions);
```

## options

```ts {{title: 'MemoryListDocOptions Object'}}
interface MemoryListDocOptions {
  memoryName: string;
}
```

*Following are the properties of the options object.*

---

The memory name.
## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### List documents

```ts {{ title: 'list-documents.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const documents = await langbase.memories.documents.list({
    memoryName: 'knowledge-base'
  });

  console.log('Documents:', documents);
}

main();
```

---

## Response

The response array returned by the `langbase.memories.documents.list()` function.

```ts {{title: 'MemoryListDocResponse'}}
interface MemoryListDocResponse {
  name: string;
  status: 'queued' | 'in_progress' | 'completed' | 'failed';
  status_message: string | null;
  metadata: {
    size: number;
    type:
      | 'application/pdf'
      | 'text/plain'
      | 'text/markdown'
      | 'text/csv'
      | 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet'
      | 'application/vnd.ms-excel';
  };
  enabled: boolean;
  chunk_size: number;
  chunk_overlap: number;
  owner_login: string;
}
```

Name of the document.

Current processing status of the document. Can be one of:

- `queued`: Document is waiting to be processed
- `in_progress`: Document is currently being processed
- `completed`: Document has been successfully processed
- `failed`: Document processing failed

Additional details about the document's status, particularly useful when status is `failed`.

Document metadata including:

- `size`: Size of the document in bytes
- `type`: MIME type of the document

Whether the document is enabled for retrieval.

Size of text chunks used for document processing.

Overlap size between consecutive text chunks.

Login of the document owner.

```json {{ title: 'Response of langbase.memories.documents.list()' }}
[
  {
    "name": "product-manual.pdf",
    "status": "completed",
    "status_message": null,
    "metadata": {
      "size": 1156,
      "type": "application/pdf"
    },
    "enabled": true,
    "chunk_size": 10000,
    "chunk_overlap": 2048,
    "owner_login": "user123"
  },
  {
    "name": "technical-specs.md",
    "status": "in_progress",
    "status_message": null,
    "metadata": {
      "size": 1156,
      "type": "text/markdown"
    },
    "enabled": true,
    "chunk_size": 10000,
    "chunk_overlap": 2048,
    "owner_login": "user123"
  }
]
```

---

Retry Doc Embedding <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.documents.embeddings.retry()</span> https://langbase.com/docs/sdk/memory/document-embeddings-retry/ import { generateMetadata } from '@/lib/generate-metadata';

# Retry Doc Embedding langbase.memories.documents.embeddings.retry()

Retry the embedding process for a failed document using the `langbase.memories.documents.embeddings.retry()` function.

Document embedding generation may fail for various reasons, such as OpenAI API rate limits, invalid API keys, document parsing errors, special characters, corrupted or locked PDFs, and excessively large documents. If the issue is related to the API key, it needs to be corrected; before retrying, ensure that the document is accessible and can be parsed correctly.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.memories.documents.embeddings.retry(options)`

```ts {{ title: 'index.ts' }}
langbase.memories.documents.embeddings.retry(options);

// with types.
langbase.memories.documents.embeddings.retry(options: MemoryRetryDocEmbedOptions);
```

## options

```ts {{title: 'MemoryRetryDocEmbedOptions Object'}}
interface MemoryRetryDocEmbedOptions {
  memoryName: string;
  documentName: string;
}
```

*Following are the properties of the options object.*

---

The name of the memory to which the document belongs.

The name of the document.

## Usage example

```bash {{ title: 'Install the SDK' }}
npm i langbase
pnpm i langbase
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Retry document embedding

```ts {{ title: 'retry-embedding.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const hasEmbeddingRetried = await langbase.memories.documents.embeddings.retry({
    memoryName: 'knowledge-base',
    documentName: 'technical-doc.pdf',
  });

  console.log('Embedding retry initiated:', hasEmbeddingRetried);
}

main();
```

---

## Response

The response object returned by the `langbase.memories.documents.embeddings.retry()` function.

```ts {{title: 'MemoryRetryDocEmbedResponse'}}
interface MemoryRetryDocEmbedResponse {
  success: boolean;
}
```

Indicates whether the embedding retry was successfully initiated.

```json {{ title: 'Response of langbase.memories.documents.embeddings.retry()' }}
{
  "success": true
}
```

---

Delete Document <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.documents.delete()</span> https://langbase.com/docs/sdk/memory/document-delete/ import { generateMetadata } from '@/lib/generate-metadata';

# Delete Document langbase.memories.documents.delete()

Delete an existing document from a memory on Langbase using the `langbase.memories.documents.delete()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.memories.documents.delete(options)`

```ts {{ title: 'index.ts' }}
langbase.memories.documents.delete(options);

// with types.
langbase.memories.documents.delete(options: MemoryDeleteDocOptions);
```

## options

```ts {{title: 'MemoryDeleteDocOptions Object'}}
interface MemoryDeleteDocOptions {
  memoryName: string;
  documentName: string;
}
```

*Following are the properties of the options object.*

---

Name of the memory instance containing the document.

Name of the document to delete.
## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Delete document

```ts {{ title: 'delete-document.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const hasDocDeleted = await langbase.memories.documents.delete({
    memoryName: 'knowledge-base',
    documentName: 'old-report.pdf'
  });

  console.log('Document deleted:', hasDocDeleted);
}

main();
```

---

## Response

The response object returned by the `langbase.memories.documents.delete()` function.

```ts {{title: 'MemoryDeleteDocResponse'}}
interface MemoryDeleteDocResponse {
  success: boolean;
}
```

Indicates whether the document deletion was successful.

```json {{ title: 'Response of langbase.memories.documents.delete()' }}
{
  "success": true
}
```

---

Delete Memory <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.delete()</span> https://langbase.com/docs/sdk/memory/delete/ import { generateMetadata } from '@/lib/generate-metadata';

# Delete Memory langbase.memories.delete()

Delete an AI memory on Langbase using the `langbase.memories.delete()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.memories.delete(options)`

```ts {{ title: 'index.ts' }}
langbase.memories.delete(options);

// with types.
langbase.memories.delete(options: MemoryDeleteOptions);
```

## options

```ts {{title: 'MemoryDeleteOptions Object'}}
interface MemoryDeleteOptions {
  name: string;
}
```

*Following are the properties of the options object.*

---

Name of the AI memory to delete.

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Delete memory

```ts {{ title: 'delete-memory.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const hasMemoryDeleted = await langbase.memories.delete({
    name: 'knowledge-base'
  });

  console.log('Memory deleted:', hasMemoryDeleted);
}

main();
```

---

## Response

The response object returned by the `langbase.memories.delete()` function.

```ts {{title: 'MemoryDeleteResponse'}}
interface MemoryDeleteResponse {
  success: boolean;
}
```

Indicates whether the deletion was successful.
```json {{ title: 'Response of langbase.memories.delete()' }}
{
  "success": true
}
```

---

Create Memory <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.memories.create()</span> https://langbase.com/docs/sdk/memory/create/ import { generateMetadata } from '@/lib/generate-metadata';

# Create Memory langbase.memories.create()

Create a new AI memory on Langbase using the `langbase.memories.create()` function.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.memories.create(options)`

```ts {{ title: 'index.ts' }}
langbase.memories.create(options);

// with types.
langbase.memories.create(options: MemoryCreateOptions);
```

## options

```ts {{title: 'MemoryCreateOptions Object'}}
interface MemoryCreateOptions {
  name: string;
  description?: string;
  top_k?: number;
  chunk_size?: number;
  chunk_overlap?: number;
  embedding_model?:
    | 'openai:text-embedding-3-large'
    | 'cohere:embed-v4.0'
    | 'cohere:embed-multilingual-v3.0'
    | 'cohere:embed-multilingual-light-v3.0'
    | 'google:text-embedding-004';
}
```

*Following are the properties of the options object.*

---

Name of the memory.

Description of the memory.

Number of chunks to return. Default: `10` Minimum: `1` Maximum: `100`

Maximum number of characters in a single chunk. Default: `10000` Maximum: `30000`

Cohere has a limit of 512 tokens (1 token ~= 4 characters in English). If you are using Cohere models, adjust the `chunk_size` accordingly. For most use cases, default values should work fine.

Number of characters to overlap between chunks. Default: `2048` Maximum: Less than `chunk_size`

The model to use for text embeddings.
Available options:

- `openai:text-embedding-3-large`
- `cohere:embed-v4.0`
- `cohere:embed-multilingual-v3.0`
- `cohere:embed-multilingual-light-v3.0`
- `google:text-embedding-004`

Default: `openai:text-embedding-3-large`

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
```

### Create memory

```ts {{ title: 'create-memory.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const memory = await langbase.memories.create({
    name: 'knowledge-base',
    description: 'An AI memory for storing company internal docs.',
  });

  console.log('Memory created:', memory);
}

main();
```

```ts {{ title: 'configure-embedding-model.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const memory = await langbase.memories.create({
    name: 'knowledge-base',
    description: 'Advanced memory with multilingual support',
    embedding_model: 'cohere:embed-multilingual-v3.0',
  });

  console.log('Memory created:', memory);
}

main();
```

```ts {{ title: 'custom-chunking.ts' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  const memory = await langbase.memories.create({
    name: "knowledge-base",
    description: 'An AI memory for storing company internal docs.',
    top_k: 10,
    chunk_size: 10000,
    chunk_overlap: 2048,
  });

  console.log('Memory created:', memory);
}

main();
```

---

## Response

The response object returned by the `langbase.memories.create()` function.

```ts {{title: 'MemoryCreateResponse'}}
interface MemoryCreateResponse {
  name: string;
  description: string;
  owner_login: string;
  url: string;
  embedding_model:
    | 'openai:text-embedding-3-large'
    | 'cohere:embed-v4.0'
    | 'cohere:embed-multilingual-v3.0'
    | 'cohere:embed-multilingual-light-v3.0';
  chunk_size: number;
  chunk_overlap: number;
  top_k: number;
}
```

Name of the memory.

Description of the AI memory.

Login of the memory owner.

Memory access URL.

The embedding model used by the AI memory.

- `openai:text-embedding-3-large`
- `cohere:embed-v4.0`
- `cohere:embed-multilingual-v3.0`
- `cohere:embed-multilingual-light-v3.0`

```json {{ title: 'Response of langbase.memories.create()' }}
{
  "name": "knowledge-base",
  "description": "An AI memory for storing company internal docs.",
  "owner_login": "user123",
  "url": "https://langbase.com/user123/document-memory",
  "embedding_model": "openai:text-embedding-3-large",
  "chunk_size": 10000,
  "chunk_overlap": 2048,
  "top_k": 10
}
```

---

Stream Text <span className="text-sm font-mono text-muted-foreground/70 ml-2">pipe.streamText()</span> https://langbase.com/docs/sdk/deprecated/stream-text/ import { generateMetadata } from '@/lib/generate-metadata';

# Stream Text pipe.streamText()

You can use a pipe to get any LLM to stream text based on a user prompt. Streaming provides a better user experience: the moment an LLM starts generating text, the user can start seeing words print out in a stream, just like ChatGPT.

For example, you can ask a pipe to stream a text completion based on a user prompt like "Who is an AI Engineer?" or give it an entire doc and ask it to summarize it.

The Langbase AI SDK provides a `streamText()` function to stream text using pipes with any LLM.

This SDK method has been deprecated. Please use the new [`run`](/sdk/pipe/run) SDK method with `stream: true`.
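As a migration sketch (assuming a pipe named `my-pipe` exists on your account), the equivalent `langbase.pipes.run()` call with `stream: true` looks roughly like this:

```ts {{ title: 'Migration sketch: pipes.run() with stream' }}
import { getRunner, Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // 'my-pipe' is a placeholder; use the name of your own pipe.
  const { stream } = await langbase.pipes.run({
    name: 'my-pipe',
    messages: [{ role: 'user', content: 'Who is an AI Engineer?' }],
    stream: true,
  });

  // Convert the stream to a runner and print content as it arrives.
  const runner = getRunner(stream);
  runner.on('content', content => process.stdout.write(content));
}

main();
```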
---

## API reference

## `streamText(options)` {{ tag: 'Deprecated', status: 'deprecated' }}

Stream a text completion using the `streamText()` function.

```js {{title: 'Function Signature'}}
streamText(options)

// With types.
streamText(options: StreamOptions)
```

## options

```js {{title: 'StreamOptions Object'}}
interface StreamOptions {
  messages?: Message[];
  variables?: Variable[];
  chat?: boolean;
  threadId?: string | null;
}
```

*Following are the properties of the options object.*

---

### messages

A messages array including the following properties. Optional if variables are provided.

```js {{title: 'Message Object'}}
interface Message {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string;
  name?: string;
  tool_call_id?: string;
}
```

The role of the author of this message.

The contents of the chunk message.

The name of the tool called by the LLM.

The id of the tool called by the LLM.

---

### variables

A variables array including the `name` and `value` params. Optional if messages are provided.

```js {{title: 'Variable Object'}}
interface Variable {
  name: string;
  value: string;
}
```

The name of the variable.

The value of the variable.

---

### chat

For a chat pipe, set `chat` to `true`. This is useful when you want to use a chat pipe to generate text as it returns a `threadId`. Defaults to `false`.

---

### threadId

The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.

- If `threadId` is not provided, a new thread will be created. E.g. the first message of a new chat will not have a threadId.
- After the first message, a new `threadId` will be returned.
- Use this `threadId` to continue the conversation in the same thread from the second message onwards.

## Usage example

```bash {{ title: 'Install the SDK' }}
npm i langbase
pnpm i langbase
yarn add langbase
```

```bash {{ title: '.env file' }}
# Add your Pipe API key here.
LANGBASE_MY_PIPE_API_KEY="pipe_12345"

# … add more keys if you have more pipes.
```

```js {{ title: 'Generate Pipe: Use streamText()' }}
import { Pipe } from 'langbase';

// 1. Initiate your Pipes. `myPipe` as an example.
const myPipe = new Pipe({
  apiKey: process.env.LANGBASE_MY_PIPE_API_KEY!,
});

// 2. SIMPLE example. Stream text by asking a question.
const {stream} = await myPipe.streamText({
  messages: [{role: 'user', content: 'Who is an AI Engineer?'}],
});

// 3. Print the stream
// NOTE: This is a Node.js only example.
// Stream works differently in browsers.
// For browsers, Next.js, and more examples:
// https://langbase.com/docs/sdk/examples
for await (const chunk of stream) {
  // Streaming text part — a single word or several.
  const textPart = chunk.choices[0]?.delta?.content || '';

  // Demo: Print the stream to shell output — you can use it however.
  process.stdout.write(textPart);
}
```

```js {{ title: 'Variables with streamText()' }}
// 1. Initiate the Pipe.
// … same as above

// 2. Stream text by asking a question.
const {stream} = await myPipe.streamText({
  messages: [{role: 'user', content: 'Who is {{question}}?'}],
  variables: [{name: 'question', value: 'AI Engineer'}],
});

// 3. Print the stream
// … same as above
```

```js {{ title: 'Chat Pipe: Use streamText()' }}
import { Pipe } from 'langbase';

// 1. Initiate the Pipe.
const myPipe = new Pipe({
  apiKey: process.env.LANGBASE_MY_PIPE_API_KEY!,
});

// 2. Stream text by asking a question.
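// `chat: true` marks this as a chat pipe call: along with the
// stream, the response also returns a `threadId` for this thread.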
const {stream, threadId} = await myPipe.streamText({
  messages: [{role: 'user', content: 'My company is called Langbase'}],
  chat: true,
});

// 3. Print the stream
// NOTE: This is a Node.js only example.
// Stream works differently in browsers.
// For browsers, Next.js, and more examples:
// https://langbase.com/docs/sdk/examples
for await (const chunk of stream) {
  // Streaming text part — a single word or several.
  const textPart = chunk.choices[0]?.delta?.content || '';

  // Demo: Print the stream to shell output — you can use it however.
  process.stdout.write(textPart);
}

// 4. Continue the conversation in the same thread by sending
// `threadId` from the second message onwards.
const {stream: stream2} = await myPipe.streamText({
  messages: [{role: 'user', content: 'Tell me the name of my company?'}],
  chat: true,
  threadId,
});

// You'll see any LLM will know the company is `Langbase`
// since it's the same chat thread. This is how you can
// continue a conversation in the same thread.
```

---

## Response

Response of `streamText()` is a `Promise<StreamResponse>`: an object with `stream` and, for chat pipes, `threadId`.

```js {{title: 'StreamResponse Object'}}
interface StreamResponse {
  threadId: string | null;
  stream: StreamText;
}
```

---

### threadId

The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.

---

### stream

Stream is a `StreamText` object with a streamed sequence of `StreamChunk` objects.

```js {{title: 'StreamText Type'}}
type StreamText = Stream<StreamChunk>;
```

### StreamChunk

Represents a streamed chunk of a completion response returned by the model, based on the provided input.

```js {{title: 'StreamChunk Object'}}
interface StreamChunk {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: ChoiceStream[];
}
```

A `StreamChunk` object has the following properties.

The ID of the response.

The object type name of the response.

The timestamp of the response creation.

The model used to generate the response.

A list of chat completion choices. Can contain more than one element if n is greater than 1.

```js {{title: 'Choice Object for streamText()'}}
interface ChoiceStream {
  index: number;
  delta: Delta;
  logprobs: boolean | null;
  finish_reason: string;
}
```

The index of the choice in the list of choices.

A chat completion delta generated by streamed model responses.

```js {{title: 'Delta Object'}}
interface Delta {
  role?: Role;
  content?: string | null;
  tool_calls?: ToolCall[];
}
```

The role of the author of this message.

The contents of the chunk message. Null if a tool is called.

The array of the tools called by the LLM.

```js {{title: 'ToolCall Object'}}
interface ToolCall {
  id: string;
  type: 'function';
  function: Function;
}
```

The ID of the tool call.

The type of the tool. Currently, only `function` is supported.

The function that the model called.

```js {{title: 'Function Object'}}
export interface Function {
  name: string;
  arguments: string;
}
```

The name of the function to call.

The arguments to call the function with, as generated by the model in JSON format.

Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.
The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It can also be `eos` (end of sequence) depending on the LLM; check the provider's docs.

```js {{ title: 'Response of streamText()' }}
// Response of a streamText() call is a Promise.
{
  threadId: 'string-uuid-123',
  stream: StreamText // example of streamed chunks below.
}
```

```js {{ title: 'StreamText has stream chunks' }}
// A stream chunk looks like this …
{
  "id": "chatcmpl-123",
  "object": "chat.completion.chunk",
  "created": 1719848588,
  "model": "gpt-4o-mini",
  "system_fingerprint": "fp_44709d6fcb",
  "choices": [{
    "index": 0,
    "delta": {"content": "Hi"},
    "logprobs": null,
    "finish_reason": null
  }]
}

// More chunks as they come in...
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]}
…
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
```

```js {{ title: 'Response stream with tool fn calls' }}
// Stream chunks with tool fn calls have content null and include a `tool_calls` array.
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723757387,"model":"gpt-4o-mini","system_fingerprint":null,"choices":[{"index":0,"delta":{"role":"assistant","content":null,"tool_calls":[{"index":0,"id":"call_123","type":"function","function":{"name":"get_current_weather","arguments":""}}]},"logprobs":null,"finish_reason":null}]}

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723757387,"model":"gpt-4o-mini","system_fingerprint":null,"choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\""}}]},"logprobs":null,"finish_reason":null}]}

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723757387,"model":"gpt-4o-mini","system_fingerprint":null,"choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"location"}}]},"logprobs":null,"finish_reason":null}]}

...

{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1723757387,"model":"gpt-4o-mini","system_fingerprint":null,"choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"tool_calls"}]}
```

---

Generate Text <span className="text-sm font-mono text-muted-foreground/70 ml-2">pipe.generateText()</span> https://langbase.com/docs/sdk/deprecated/generate-text/ import { generateMetadata } from '@/lib/generate-metadata';

# Generate Text pipe.generateText()

You can use a pipe to get any LLM to generate text based on a user prompt.

For example, you can ask a pipe to generate a text completion based on a user prompt like "Who is an AI Engineer?" or give it an entire doc and ask it to summarize it.

The Langbase AI SDK provides a `generateText()` function to generate text using pipes with any LLM.

This SDK method has been deprecated. Please use the new [`run`](/sdk/pipe/run) SDK method with `stream: false`.
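A matching migration sketch (again assuming a pipe named `my-pipe`): the equivalent `langbase.pipes.run()` call with `stream: false` resolves with the completion directly.

```ts {{ title: 'Migration sketch: pipes.run() without stream' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // 'my-pipe' is a placeholder; use the name of your own pipe.
  const { completion } = await langbase.pipes.run({
    name: 'my-pipe',
    messages: [{ role: 'user', content: 'Who is an AI Engineer?' }],
    stream: false,
  });

  console.log(completion);
}

main();
```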
---

## API reference

## `generateText(options)` {{ tag: 'Deprecated', status: 'deprecated' }}

Generate a text completion using the `generateText()` function.

```js {{title: 'Function Signature'}}
generateText(options)

// With types.
generateText(options: GenerateOptions)
```

## options

```js {{title: 'GenerateOptions Object'}}
interface GenerateOptions {
  messages?: Message[];
  variables?: Variable[];
  chat?: boolean;
  threadId?: string;
}
```

*Following are the properties of the options object.*

---

### messages

A messages array including the following properties. Optional if variables are provided.

```js {{title: 'Message Object'}}
interface Message {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string;
  name?: string;
  tool_call_id?: string;
}
```

---

The role of the author of this message.

The contents of the chunk message.

The name of the tool called by the LLM.

The id of the tool called by the LLM.

---

### variables

A variables array including the `name` and `value` params. Optional if messages are provided.

```js {{title: 'Variable Object'}}
interface Variable {
  name: string;
  value: string;
}
```

The name of the variable.

The value of the variable.

---

### chat

For a chat pipe, set `chat` to `true`. This is useful when you want to use a chat pipe to generate text as it returns a `threadId`. Defaults to `false`.

---

### threadId

The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.

- If `threadId` is not provided, a new thread will be created. E.g. the first message of a new chat will not have a threadId.
- After the first message, a new `threadId` will be returned.
- Use this `threadId` to continue the conversation in the same thread from the second message onwards.

## Usage example

```bash {{ title: 'Install the SDK' }}
npm i langbase
pnpm i langbase
yarn add langbase
```

```bash {{ title: '.env file' }}
# Add your Pipe API key here.
LANGBASE_MY_PIPE_API_KEY="pipe_12345"

# … add more keys if you have more pipes.
```

```js {{ title: 'Generate Pipe: Use generateText()' }}
import {Pipe} from 'langbase';

// 1. Initiate the Pipe.
const myPipe = new Pipe({
  apiKey: process.env.LANGBASE_MY_PIPE_API_KEY!,
});

// 2. Generate the text by asking a question.
const {completion} = await myPipe.generateText({
  messages: [{role: 'user', content: 'Who is an AI Engineer?'}],
});

console.log(completion);
```

```js {{ title: 'Variables with generateText()' }}
// 1. Initiate the Pipe.
// … same as above

// 2. Generate text by asking a question.
const {completion} = await myPipe.generateText({
  messages: [{role: 'user', content: 'Who is {{question}}?'}],
  variables: [{name: 'question', value: 'AI Engineer'}],
});
```

```js {{ title: 'Chat Pipe: Use generateText()' }}
import {Pipe} from 'langbase';

// Initiate the Pipe.
const myPipe = new Pipe({
  apiKey: process.env.LANGBASE_MY_PIPE_API_KEY!,
});

// Message 1: Tell something to the LLM.
const {completion, threadId} = await myPipe.generateText({
  messages: [{role: 'user', content: 'My company is called Langbase'}],
  chat: true,
});

console.log(completion);

// Message 2: Ask something about the first message.
// Continue the conversation in the same thread by sending
// `threadId` from the second message onwards.
const {completion: completion2} = await myPipe.generateText({
  messages: [{role: 'user', content: 'Tell me the name of my company?'}],
  chat: true,
  threadId,
});

console.log(completion2);

// You'll see any LLM will know the company is `Langbase`
// since it's the same chat thread. This is how you can
// continue a conversation in the same thread.
```

---

## Response

Response of `generateText()` is a `Promise<GenerateResponse>`.
```js {{title: 'GenerateResponse Object'}}
interface GenerateResponse {
  completion: string;
  threadId?: string;
  id: string;
  object: string;
  created: number;
  model: string;
  system_fingerprint: string | null;
  choices: ChoiceGenerate[];
  usage: Usage;
}
```

The generated text completion.

The ID of the thread. Useful for a chat pipe to continue the conversation in the same thread. Optional.

The ID of the raw response.

The object type name of the response.

The timestamp of the response creation.

The model used to generate the response.

This fingerprint represents the backend configuration that the model runs with.

A list of chat completion choices. Can contain more than one element if n is greater than 1.

```js {{title: 'Choice Object for generateText()'}}
interface ChoiceGenerate {
  index: number;
  message: Message;
  logprobs: boolean | null;
  finish_reason: string;
}
```

The index of the choice in the list of choices.

A messages array including `role` and `content` params.

```js {{title: 'Message Object'}}
interface Message {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string | null;
  tool_calls?: ToolCall[];
}
```

The role of the author of this message.

The contents of the chunk message. Null if a tool is called.

The array of the tools called by the LLM.

```js {{title: 'ToolCall Object'}}
interface ToolCall {
  id: string;
  type: 'function';
  function: Function;
}
```

The ID of the tool call.

The type of the tool. Currently, only `function` is supported.

The function that the model called.

```js {{title: 'Function Object'}}
export interface Function {
  name: string;
  arguments: string;
}
```

The name of the function to call.

The arguments to call the function with, as generated by the model in JSON format.

Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.

The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It can also be `eos` (end of sequence) depending on the LLM; check the provider's docs.

The usage object including the following properties.

```js {{title: 'Usage Object'}}
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```

The number of tokens in the prompt (input).

The number of tokens in the completion (output).

The total number of tokens.

```js {{ title: 'Response of generateText()' }}
{
  "completion": "AI Engineer is a person who designs, builds, and maintains AI systems.",
  "raw": {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1720131129,
    "model": "gpt-4o-mini",
    "choices": [
      {
        "index": 0,
        "message": {
          "role": "assistant",
          "content": "AI Engineer is a person who designs, builds, and maintains AI systems."
        },
        "logprobs": null,
        "finish_reason": "stop"
      }
    ],
    "usage": {
      "prompt_tokens": 28,
      "completion_tokens": 36,
      "total_tokens": 64
    },
    "system_fingerprint": "fp_123"
  }
}
```

```js {{ title: 'Response with function call' }}
// Completion is null when an LLM responds
// with a tool function call.
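// The call details arrive in choices[0].message.tool_calls;
// execute the named function yourself, then send its result
// back as a `tool` role message in a follow-up request.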
{
  "completion": null,
  "raw": {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1720131129,
    "model": "gpt-3.5-turbo-0125",
    "choices": [
      {
        "index": 0,
        "message": {
          "role": "assistant",
          "content": null,
          "tool_calls": [
            {
              "id": "call_abc123",
              "type": "function",
              "function": {
                "name": "get_current_weather",
                "arguments": "{\n\"location\": \"Boston, MA\"\n}"
              }
            }
          ]
        },
        "logprobs": null,
        "finish_reason": "tool_calls"
      }
    ],
    "usage": {
      "prompt_tokens": 28,
      "completion_tokens": 36,
      "total_tokens": 64
    },
    "system_fingerprint": null
  }
}
```

---

Agent Run <span className="text-sm font-mono text-muted-foreground/70 ml-2">langbase.agent.run()</span> https://langbase.com/docs/sdk/agent/run/ import { generateMetadata } from '@/lib/generate-metadata';

# Agent Run langbase.agent.run()

You can use the `agent.run()` function as a runtime LLM agent. You can specify all parameters at runtime and get the response from the agent. This makes `agent.run()` ideal for scenarios where you want maximum flexibility — you can dynamically set the input, tools, and configuration without predefining them.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps:

1. Switch to your user or org account.
2. From the sidebar, click on the `Settings` menu.
3. In the developer settings section, click on the `Langbase API keys` link.
4. From here you can create a new API key or manage existing ones.

For more details follow the [Langbase API keys](/api-reference/api-keys) documentation.

---

## API reference

## `langbase.agent.run(options)`

Request the Agent by running the `langbase.agent.run()` function.

```ts {{ title: 'index.ts' }}
langbase.agent.run(options);

// with types.
langbase.agent.run(options: AgentRunOptions);
```

## options

```ts {{title: 'AgentRunOptions Object'}}
interface AgentRunOptions {
  model: string;
  apiKey: string;
  input: string | Array<InputMessage>;
  instructions?: string;
  stream?: boolean;
  tools?: Tool[];
  tool_choice?: 'auto' | 'required' | ToolChoice;
  parallel_tool_calls?: boolean;
  mcp_servers?: McpServerSchema[];
  top_p?: number;
  max_tokens?: number;
  temperature?: number;
  presence_penalty?: number;
  frequency_penalty?: number;
  stop?: string[];
  customModelParams?: Record<string, any>;
}
```

*Following are the properties of the options object.*

---

### model

LLM model. Combination of model provider and model id, like `openai:gpt-4o-mini`

Format: `provider:model_id`

You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase.

---

### apiKey

LLM API key for the selected model.

---

### input

A string (for simple text queries) or an array of input messages.

When using a string, it will be treated as a single user message. Use it for simple queries. For example:

```ts {{title: 'String Input Example'}}
langbase.agent.run({
  input: 'What is an AI Agent?',
  ...
});
```

When using an array of input messages `InputMessage[]`, each input message should include the following properties:

```ts {{title: 'Input Message Object'}}
interface InputMessage {
  role: 'user' | 'assistant' | 'system' | 'tool';
  content: string | ContentType[] | null;
  name?: string;
  tool_call_id?: string;
}
```

```ts {{title: 'Array Input Messages Example'}}
langbase.agent.run({
  input: [
    {
      role: 'user',
      content: 'What is an AI Agent?',
    },
  ],
  ...
});
```

---
The role of the author of this message.

The content of the message.

1. `String`: for text generation, it's a plain string.
2. `Null` or `undefined`: tool call messages can have no content.
3. `ContentType[]`: an array used in vision and audio models, where content consists of structured parts (e.g., text, image URLs).

```js {{ title: 'ContentType Object' }}
interface ContentType {
  type: string;
  text?: string | undefined;
  image_url?:
    | {
        url: string;
        detail?: string | undefined;
      }
    | undefined;
}
```

The name of the tool called by the LLM.

The id of the tool called by the LLM.

---

### instructions

Used to give high level instructions to the model about the task it should perform, including tone, goals, and examples of correct responses. This is equivalent to a system/developer role message at the top of the LLM's context.

---

### stream

Whether to stream the response or not. If `true`, the response will be streamed.

---

### tools

A list of tools the model may call.

```ts {{title: 'Tools Object'}}
interface ToolsOptions {
  type: 'function';
  function: FunctionOptions;
}
```

The type of the tool. Currently, only `function` is supported.

The function that the model may call.

```ts {{title: 'FunctionOptions Object'}}
export interface FunctionOptions {
  name: string;
  description?: string;
  parameters?: Record<string, any>;
}
```

The name of the function to call.

The description of the function.

The parameters of the function.

---

### tool_choice

Tool usage configuration.

Model decides when to use tools.

Model must use specified tools.

Forces use of a specific function.

```ts {{title: 'ToolChoice Object'}}
interface ToolChoice {
  type: 'function';
  function: {
    name: string;
  };
}
```

---

### parallel_tool_calls

Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.

---

### mcp_servers

An array of SSE-type MCP servers.

```js {{ title: 'McpServerSchema Object' }}
interface McpServerSchema {
  name: string;
  type: 'url';
  url: string;
  authorization_token?: string;
  tool_configuration?: {
    allowed_tools?: string[];
    enabled?: boolean;
  };
  custom_headers?: Record<string, string>;
}
```

The name of the MCP server.

Type of the MCP server.

The URL of the MCP server.

For MCP servers that require OAuth authentication, you'll need to obtain an access token and pass it here. Please note that we do not store this token.

Tool configuration for the MCP server has the following properties:

1. `allowed_tools` - Specify the tool names that the MCP server is permitted to use.
2. `enabled` - Whether to enable tools from this server.

Additional headers to send to the MCP server, if required.

---

### temperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.

Default: `0.7`

---

### top_p

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Default: `1`

---

### max_tokens

Maximum number of tokens in the response message returned.

Default: `1000`

---

### presence_penalty

Penalizes a word based on its occurrence in the input text.

Default: `0`

---

### frequency_penalty

Penalizes a word based on how frequently it appears in the training data.

Default: `0`

---

### stop

Up to 4 sequences where the API will stop generating further tokens.

---

### customModelParams

Additional parameters to pass to the model as key-value pairs.
---

### temperature

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.

Default: `0.7`

---

### top_p

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Default: `1`

---

### max_tokens

Maximum number of tokens in the response message returned.

Default: `1000`

---

### presence_penalty

Penalizes a word based on its occurrence in the input text.

Default: `0`

---

### frequency_penalty

Penalizes a word based on how frequently it appears in the training data.

Default: `0`

---

### stop

Up to 4 sequences where the API will stop generating further tokens.

---

### customModelParams

Additional parameters to pass to the model as key-value pairs. These parameters are passed on to the model as-is.

```ts {{title: 'CustomModelParams Object'}}
interface CustomModelParams {
	[key: string]: any;
}
```

Example:

```ts
{
	"logprobs": true,
	"service_tier": "auto",
}
```

## Usage example

```bash {{ title: 'npm' }}
npm i langbase
```

```bash {{ title: 'pnpm' }}
pnpm i langbase
```

```bash {{ title: 'yarn' }}
yarn add langbase
```

### Environment variables

```bash {{ title: '.env file' }}
LANGBASE_API_KEY=""
LLM_API_KEY=""
```

### `langbase.agent.run()` examples

```ts {{ title: 'Non-stream' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const {output} = await langbase.agent.run({
		model: 'openai:gpt-4o-mini',
		instructions: 'You are a helpful AI Agent.',
		input: 'Who is an AI Engineer?',
		apiKey: process.env.LLM_API_KEY!,
		stream: false,
	});

	console.log('Agent response:', output);
}

main();
```

```ts {{ title: 'Stream' }}
import {getRunner, Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const {stream, rawResponse} = await langbase.agent.run({
		model: 'openai:gpt-4o-mini',
		instructions: 'You are a helpful AI Agent.',
		input: 'Who is an AI Engineer?',
		apiKey: process.env.LLM_API_KEY!,
		stream: true,
	});

	// Convert the stream to a stream runner.
	const runner = getRunner(stream);

	runner.on('connect', () => {
		console.log('Stream started.\n');
	});

	runner.on('content', content => {
		process.stdout.write(content);
	});

	runner.on('end', () => {
		console.log('\nStream ended.');
	});

	runner.on('error', error => {
		console.error('Error:', error);
	});
}

main();
```

```ts {{ title: 'Tool Calling' }}
// Tool Calling Example
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const tools = [
		{
			type: 'function',
			function: {
				name: 'get_current_weather',
				description: 'Get the current weather in a given location',
				parameters: {
					type: 'object',
					properties: {
						location: {
							type: 'string',
							description: 'The city and state, e.g. San Francisco, CA',
						},
						unit: { type: 'string', enum: ['celsius', 'fahrenheit'] },
					},
					required: ['location'],
				},
			},
		},
	];

	const response = await langbase.agent.run({
		model: 'openai:gpt-4o-mini',
		input: 'What is the weather like in SF today?',
		tools: tools,
		tool_choice: 'auto',
		apiKey: process.env.LLM_API_KEY!,
		stream: false,
	});

	console.log(response);
}

main();
```

```ts {{ title: 'Vision' }}
// Vision Example
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.agent.run({
		model: 'openai:gpt-4o-mini',
		input: [
			{
				role: 'user',
				content: [
					{
						type: 'text',
						text: 'Extract the text from this image',
					},
					{
						type: 'image_url',
						image_url: {
							url: 'https://upload.wikimedia.org/wikipedia/commons/a/a7/Handwriting.png',
						},
					},
				],
			},
		],
		apiKey: process.env.LLM_API_KEY!,
		stream: false,
	});

	console.log(response);
}

main();
```
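In the Tool Calling example above, the model's reply is only logged. When the model decides to call a tool, `output` is `null` and the call details arrive in `choices[0].message.tool_calls` (see the Response section below). A minimal sketch of dispatching such a call inside `main()`, right after the run; the `toolHandlers` map of local implementations is an assumption mirroring the schema defined above:

```ts {{ title: 'Handling a tool call (sketch)' }}
// Hypothetical local implementations matching the tool schema above.
const toolHandlers: Record<string, (args: any) => Promise<string>> = {
	get_current_weather: async ({location, unit}) => `25°C in ${location}`,
};

const toolCalls = response.choices[0]?.message?.tool_calls ?? [];

for (const toolCall of toolCalls) {
	const name = toolCall.function.name;
	// The model returns arguments as a JSON string.
	const args = JSON.parse(toolCall.function.arguments);
	const result = await toolHandlers[name](args);
	console.log(`Tool ${name} returned:`, result);
}
```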
---

## Response

Response of `langbase.agent.run()` is a `Promise<RunResponse>` object.

### RunResponse Object

```ts {{title: 'AgentRunResponse Object'}}
interface RunResponse {
	output: string | null;
	id: string;
	object: string;
	created: number;
	model: string;
	choices: ChoiceGenerate[];
	usage: Usage;
	system_fingerprint: string | null;
	rawResponse?: {
		headers: Record<string, string>;
	};
}
```

The generated text response (also called completion) from the agent. It can be a string or null if the model called a tool.

The ID of the raw response.

The object type name of the response.

The timestamp of the response creation.

The model used to generate the response.

A list of chat completion choices. Can contain more than one element if `n` is greater than 1.

```ts {{title: 'Choice Object for langbase.agent.run() with stream off'}}
interface ChoiceGenerate {
	index: number;
	message: Message;
	logprobs: boolean | null;
	finish_reason: string;
}
```

The index of the choice in the list of choices.

A message object including the `role` and `content` params.

```ts {{title: 'Message Object'}}
interface Message {
	role: 'user' | 'assistant' | 'system' | 'tool';
	content: string | null;
	tool_calls?: ToolCall[];
}
```

The role of the author of this message.

The contents of the chunk message. Null if a tool is called.

The array of the tools called by the agent.

```ts {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

The ID of the tool call.

The type of the tool. Currently, only `function` is supported.

The function that the model called.

```ts {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

The name of the function to call.

The arguments to call the function with, as generated by the model in JSON format.

Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.

The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence), depending on the LLM; check the provider's docs.

The usage object including the following properties.

```ts {{title: 'Usage Object'}}
interface Usage {
	prompt_tokens: number;
	completion_tokens: number;
	total_tokens: number;
}
```

The number of tokens in the prompt (input).

The number of tokens in the completion (output).

The total number of tokens.

This fingerprint represents the backend configuration that the model runs with.

The different headers of the response.

---

### RunResponseStream Object

Response of `langbase.agent.run()` with `stream: true` is a `Promise<RunResponseStream>`.

```ts {{title: 'AgentRunResponseStream Object'}}
interface RunResponseStream {
	stream: ReadableStream;
	rawResponse?: {
		headers: Record<string, string>;
	};
}
```

The different headers of the response.

Stream is an object with a streamed sequence of StreamChunk objects.

```ts {{title: 'StreamResponse Object'}}
type StreamResponse = ReadableStream<StreamChunk>;
```

### StreamChunk

Represents a streamed chunk of a completion response returned by model, based on the provided input.

```js {{title: 'StreamChunk Object'}}
interface StreamChunk {
	id: string;
	object: string;
	created: number;
	model: string;
	choices: ChoiceStream[];
}
```

A `StreamChunk` object has the following properties.

The ID of the response.

The object type name of the response.

The timestamp of the response creation.

The model used to generate the response.

A list of chat completion choices. Can contain more than one element if `n` is greater than 1.
```js {{title: 'Choice Object for langbase.agent.run() with stream true'}}
interface ChoiceStream {
	index: number;
	delta: Delta;
	logprobs: boolean | null;
	finish_reason: string;
}
```

The index of the choice in the list of choices.

A chat completion delta generated by streamed model responses.

```js {{title: 'Delta Object'}}
interface Delta {
	role?: Role;
	content?: string | null;
	tool_calls?: ToolCall[];
}
```

The role of the author of this message.

The contents of the chunk message. Null if a tool is called.

The array of the tools called by the LLM.

```js {{title: 'ToolCall Object'}}
interface ToolCall {
	id: string;
	type: 'function';
	function: Function;
}
```

The ID of the tool call.

The type of the tool. Currently, only `function` is supported.

The function that the model called.

```js {{title: 'Function Object'}}
export interface Function {
	name: string;
	arguments: string;
}
```

The name of the function to call.

The arguments to call the function with, as generated by the model in JSON format.

Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`.

The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence), depending on the LLM; check the provider's docs.

```json {{ title: 'RunResponse type of langbase.agent.run()' }}
{
	"output": "AI Engineer is a person who designs, builds, and maintains AI systems.",
	"id": "chatcmpl-123",
	"object": "chat.completion",
	"created": 1720131129,
	"model": "gpt-4o-mini",
	"choices": [
		{
			"index": 0,
			"message": {
				"role": "assistant",
				"content": "AI Engineer is a person who designs, builds, and maintains AI systems."
			},
			"logprobs": null,
			"finish_reason": "stop"
		}
	],
	"usage": {
		"prompt_tokens": 28,
		"completion_tokens": 36,
		"total_tokens": 64
	},
	"system_fingerprint": "fp_123"
}
```

```js {{ title: 'RunResponseStream of langbase.agent.run() with stream true' }}
{
	"stream": StreamResponse // example of streamed chunks below.
}
```

```json {{ title: 'StreamResponse has stream chunks' }}
// A stream chunk looks like this …
{
	"id": "chatcmpl-123",
	"object": "chat.completion.chunk",
	"created": 1719848588,
	"model": "gpt-4o-mini",
	"system_fingerprint": "fp_44709d6fcb",
	"choices": [{
		"index": 0,
		"delta": { "content": "Hi" },
		"logprobs": null,
		"finish_reason": null
	}]
}

// More chunks as they come in...
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]}
…
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
```
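As an alternative to the event-listener style shown in the Stream example above, the runner returned by `getRunner()` can also be consumed with `for await`, as the pipe agent examples elsewhere in these docs do. A minimal sketch, assuming the `stream` from that example:

```ts {{ title: 'Async iteration (sketch)' }}
const runner = getRunner(stream);

for await (const chunk of runner) {
	// Each chunk carries a partial delta; content can be absent for tool calls.
	const content = chunk.choices[0]?.delta?.content;
	if (content) process.stdout.write(content);
}
```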
---

Pricing for Parser Primitive

https://langbase.com/docs/parser/platform/pricing/

import { generateMetadata } from '@/lib/generate-metadata';

# Pricing for Parser Primitive

Requests to the Parser primitive are counted as **Runs** against your subscription plan.

| Plan       | Runs                     | Overage                  |
|------------|--------------------------|--------------------------|
| Hobby      | 500                      | -                        |
| Enterprise | [Contact Us][contact-us] | [Contact Us][contact-us] |

Each run is an API request that can contain at most 1,000 tokens, which is roughly 750 words (about the length of an article). If your API request contains, for instance, 1,500 tokens, it counts as 2 runs.

### Free Users

- **Limit**: 500 runs per month.
- **Overage**: No overage.

### Enterprise Users

There are no hard limits for Enterprise. Billing is based solely on the number of runs during each billing period.

If you have questions about your usage or need assistance, please don't hesitate to [contact us](mailto:support@langbase.com).

---

[contact-us]: mailto:support@langbase.com

Limits for Parser Primitive

https://langbase.com/docs/parser/platform/limits/

import { generateMetadata } from '@/lib/generate-metadata';

# Limits for Parser Primitive

The following Rate and Usage Limits apply for the Parser primitive:

### Rate Limits

Parser primitive requests follow our standard rate limits. See the [Rate Limits](/api-reference/limits/rate-limits) page for more details.

### Usage Limits

Requests to the Parser primitive are counted as **Runs** against your subscription plan. See the [Run Usage Limits](/api-reference/limits/usage-limits) page for more details.

Guide: Setup Chatbot in Next.js

https://langbase.com/docs/guides/setup-docs-agent/setup-chatbot/

import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Setup Chatbot in Next.js

### A step-by-step guide to set up a chatbot in Next.js using Langbase components.

---

## Prerequisites: Create an AI memory & AI agent

This guide assumes you have already created an AI memory and agent on Langbase. If you haven't, please follow these guides:

- [Create an AI memory](/guides/setup-docs-agent/create-memory)
- [Create an AI agent](/guides/setup-docs-agent/create-agent)

---

In this guide, we will learn how to set up a chatbot in Next.js using Langbase components. We will:

- **Langbase SDK and components**: Install the Langbase SDK and components in your Next.js app.
- **API route**: Create an API route to call our AI agent.
- **Chatbot component**: Set up the chatbot component in your Next.js app.

Let's get started.

---

## Step 1: Install Langbase SDK and components

We will use the Langbase SDK to call our [AI agent](/guides/setup-docs-agent/create-agent) and Langbase components to import a pre-built chatbot component.

```bash
npm i langbase @langbase/components
```

## Step 2: Create an API route

Go ahead and create an API route in your Next.js app where we will request our AI agent pipe using the Langbase SDK. Add the following code in the API route:

```ts
import { NextRequest } from 'next/server';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY! // Your Langbase API key
});

export async function POST(req: NextRequest) {
	const options = await req.json();

	const { stream, threadId } = await langbase.pipes.run({
		...options,
		name: 'REPLACE-WITH-PIPE-NAME' // e.g. 'ask-xyz-docs'
	});

	return new Response(stream, {
		status: 200,
		headers: {
			'lb-thread-id': threadId ?? ''
		}
	});
}
```

Please replace `REPLACE-WITH-PIPE-NAME` with the name of your AI agent pipe.
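To sanity-check the route before wiring up the UI, you can call it directly from the client. A minimal sketch; the `/api/chat` path is an assumption, so use whatever route path you created above:

```ts
// e.g. inside a client-side submit handler
const res = await fetch('/api/chat', {
	method: 'POST',
	headers: { 'Content-Type': 'application/json' },
	body: JSON.stringify({
		stream: true,
		messages: [{ role: 'user', content: 'Hello!' }],
	}),
});

// The agent's reply streams back in the body; the thread id is in a header.
const threadId = res.headers.get('lb-thread-id');
```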
## Step 3: Set up the chatbot component

First we need to import the CSS for the chatbot component. Add the following code in your root layout:

```tsx
import '@langbase/components/styles'
```

Now go ahead and import the chatbot component in the root page of your Next.js app:

```tsx
'use client';

import { Chatbot } from '@langbase/components';
```

You can read the complete API reference of the `Chatbot` component [here](https://www.npmjs.com/package/@langbase/components).
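To render it, mount the component in your page. A minimal sketch; whether `Chatbot` works without props (or needs to be pointed at the API route from Step 2) depends on the component's API, so check the reference linked above:

```tsx
'use client';

import { Chatbot } from '@langbase/components';

export default function Home() {
	return (
		<main>
			{/* Assumed to render a floating chat widget; see the package's API reference for supported props. */}
			<Chatbot />
		</main>
	);
}
```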
## Step 4: Test your chatbot

Go ahead and run the Next.js app. You should see the chatbot in the bottom right corner of your app.

You can now ask questions about your documentation and the chatbot will answer them using the Docs agent.

## Wrap up

That's all you need to add AI to your docs using Langbase. To summarize, we:

1. [Created an AI Memory](/guides/setup-docs-agent/create-memory)
2. [Created an AI Agent](/guides/setup-docs-agent/create-agent)
3. [Setup a Chatbot](/guides/setup-docs-agent/setup-chatbot)

The chatbot is now ready to answer questions from your documentation.

---

## Next Steps

- Build something cool with Langbase.
- Join our [Discord community](https://langbase.com/discord) for feedback, requests, and support.

---

Guide: Create an AI agent

https://langbase.com/docs/guides/setup-docs-agent/create-agent/

import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Create an AI agent

### A step-by-step guide to create an AI agent pipe on Langbase.

---

## Prerequisites: Create an AI memory

This guide assumes you have already created an AI memory on Langbase using BaseAI. If you haven't, you can follow the [Create an AI memory](/guides/setup-docs-agent/create-memory) guide to create one.

---

In this guide, we will learn how to create an AI agent pipe on Langbase that will use OpenAI `gpt-4o-mini` to answer user queries from documentation. We will:

- **Create an AI Agent**: Generate an AI agent pipe to interact with the AI memory.
- **System prompts**: Add system prompts to the AI agent.
- **Attach AI Memory**: Attach the AI memory to the AI agent.
- **RAG prompts**: Update the RAG prompt of the AI agent.

Let's get started.

---

## Step 1: Create an AI Agent pipe

To get started, you'll need to [create a free personal account on Langbase.com](https://langbase.com/signup) and verify your email address.

0. When logged in, you can always go to [pipe.new](https://pipe.new) to create a new Pipe.
1. Give your Pipe a name. Let's call it `ask-xyz-docs`.
2. Click on the `[Create Pipe]` button.

And just like that, you have created an AI agent pipe.

You can also fork the [`ask-xyz-docs`](https://langbase.com/examples/ask-xyz-docs) pipe by clicking on the `Fork` button. Forking a pipe is a great way to start experimenting with it.

## Step 2: System prompts

A system prompt sets the context, instructions, and guidelines for a language model before it receives questions or tasks. It helps define the model's role, personality, tone, and other details to improve its responses to user input.

Let's add the following system prompts to the AI agent pipe:

```
You are an AI chatbot designed to assist users with questions about {{company}}, which helps {{tagline}}.

Your responses should be based strictly on the provided {{company}} documentation. Provide accurate and concise answers, avoiding any information not included in the documentation.

If you are unsure of an answer, inform the user and suggest consulting the official {{company}} documentation or support for further assistance.

Always ensure your answers are clear, accurate, and derived directly from the {{company}} documentation. Include any relevant extra detail.
```

## Step 3: Attach AI Memory

Inside the Pipe IDE, click on Memory. It will open a sidebar where you can select and attach the AI memory you created [earlier](/guides/setup-docs-agent/create-memory).

This will allow the AI agent to access the AI memory and provide accurate responses to user queries based on the documentation.

## Step 4: RAG prompt

A RAG prompt gives the AI model instructions for working with the retrieved context, guiding it to generate responses that are relevant, accurate, and properly attributed.

Let's update the RAG prompt of the AI agent pipe:

```
Below is the CONTEXT to answer the questions. ONLY use information from the CONTEXT to respond. The CONTEXT consists of multiple information chunks, each attributed to a specific source.

When providing responses:

1. Cite the source for each piece of information in brackets, e.g., [1].
2. At the end of your response, list all referenced sources in a new section, formatted as follows:

**Sources:**
[1] [Document Name](url)
[2] Document Name

- Use the filename given in the CONTEXT and remove prefixes like `src-app` or file extensions (e.g., `.mdx`).
- Capitalize words and insert spaces, e.g., `src-app-user-guide.mdx` becomes `User Guide`.
- If a URL is present, hyperlink the document name.
- If no URL is provided, list only the document name.

If the answer is not found in the CONTEXT, state: "I could not find the answer in the provided CONTEXT. Please provide more details or ask a more specific question."
```

Feel free to **customize** the RAG prompt to suit your use case.

---

## Next steps

Next, we will set up the chatbot using Langbase components.

---

Guide: Create an AI Memory

https://langbase.com/docs/guides/setup-docs-agent/create-memory/

import { generateMetadata } from '@/lib/generate-metadata';

# Guide: Create an AI Memory

### A step-by-step guide to create an AI memory using BaseAI.

---

In this guide, we will learn how to create an AI memory using BaseAI and sync it with a Git repository. We will:

- **Initialize BaseAI**: Set up BaseAI in your Next.js app.
- **Create an AI Memory**: Generate an AI memory to track changes in your Git repository.
- **Add doc metadata**: Add metadata to each document in the memory.
- **Add Langbase API Key**: Add your Langbase API key to the `.env` file.
- **Deploy the Memory**: Deploy the AI memory to Langbase.

BaseAI makes it easy to build an AI memory within a code repo and deploy it to the **Langbase serverless** AI cloud.

---

## Prerequisites: Generate Langbase API Key

We will use BaseAI and the Langbase SDK in this guide. To work with both, you need to generate an API key. Visit the [User/Org API key documentation](/api-reference/api-keys) page to learn more.

---

## Why use BaseAI?

It's useful if your company **docs** are present inside a **Git repository**. An AI memory created via BaseAI can track changes in your repository. This is useful:

1. When you **add** a new doc to the repository, you can deploy and only add the new doc to Langbase.
2. When you **update** a doc in the repository, you can sync the **changes** to Langbase.
3. When you **delete** a doc in the repository, you can deploy and **delete** the doc from Langbase.
*Let's now create an AI memory using BaseAI.*

---

## Step 1: Initialize BaseAI

Run the following command to initialize BaseAI in your Next.js app:

```bash
npx baseai@latest init
```

## Step 2: Create an AI Memory

Run the following command to create an AI memory:

```bash
npx baseai@latest memory
```

It will also ask whether you want to create a memory from the current project's git repository. Select yes.

This will create a memory at `baseai/memory/company-docs.ts` and track files in the current git repository. It prints the path of the memory created. Open that file in your editor; it looks like this:

```ts {{title: 'baseai/memory/company-docs.ts'}}
import {MemoryI} from '@baseai/core';

const docsMemory = (): MemoryI => ({
	name: 'company-docs',
	description: "All docs as memory for an AI agent pipe",
	git: {
		enabled: true,
		include: ['**/*'],
		gitignore: true,
		deployedAt: '',
		embeddedAt: ''
	}
});

export default docsMemory;
```

Below is the explanation of the fields in the memory file:

- **enabled**: Set to true to enable tracking of the git repository.
- **include**: Follows glob pattern to include files from the git repository. You can change the pattern to include only specific files or directories where docs are present.
- **gitignore**: Set to true to respect the `.gitignore` file when including files in the memory.
- **deployedAt**: Set to the commit hash where the memory was last deployed. It is used to track the changes in the memory for deployment. Try to avoid changing this field manually.
- **embeddedAt**: Set to the commit hash where the memory was last embedded. It is used to track the changes in the memory for local development. Try to avoid changing this field manually.

## Step 3: Add doc metadata

Go ahead and add metadata to each document, like its hosting URL. The AI agent will then **include the URL** in its response when referencing the document.

Let's edit the memory file to add a `documents` field. Set `meta` to a function that returns an object with string key/value pairs, like a `url` for each document.

```ts {{title: 'baseai/memory/company-docs.ts'}}
import {MemoryI} from '@baseai/core';

const docsMemory = (): MemoryI => ({
	name: 'company-docs',
	description: 'All docs as memory for an AI agent pipe',
	git: {
		enabled: true,
		include: ['**/*'],
		gitignore: true,
		deployedAt: '',
		embeddedAt: '',
	},
	documents: {
		meta: doc => {
			// generate a URL for each document
			const url = `https://example.com/${doc.path}`;
			return {
				url,
				name: doc.name,
			};
		},
	},
});

export default docsMemory;
```

The `doc` is of type `MemoryDocumentI`.

```ts {{title: 'MemoryDocumentI Object'}}
interface MemoryDocumentI {
	name: string;
	size: string;
	content: string;
	blob: Blob;
	path: string;
}
```

Following are the properties of the `MemoryDocumentI` object:

- **name**: name of the document.
- **size**: size of the document.
- **content**: content of the document.
- **blob**: blob of the document.
- **path**: path of the document.

If your docs have a path-based URL structure, you can use the `path` field to generate the URL.
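For example, if your site's URLs mirror the repository's folder structure without file extensions, `meta` could derive them from `doc.path`. A sketch; the domain and the `.md`/`.mdx` convention are assumptions:

```ts
documents: {
	meta: doc => {
		// e.g. 'guides/setup.mdx' -> 'https://example.com/guides/setup'
		const slug = doc.path.replace(/\.(md|mdx)$/, '');
		return { url: `https://example.com/${slug}`, name: doc.name };
	},
},
```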
git commit -m "Setup AI BaseAI memory" ``` ## Step 5: Add Langbase API Key Add your Langbase API key to the `.env` file: ```bash LANGBASE_API_KEY=YOUR_API_KEY ``` ## Step 6: Deploy the Memory Run the following command to deploy the memory to serverless cloud: ```bash npx baseai@latest deploy -m ``` Go ahead and replace `` with the name of the memory. In this case, it is `company-docs`. This will deploy the AI memory and all its documents to Langbase. Next time you make changes to the git repository, you can run the deploy command again to update the memory. Make sure to commit all the changes before deploying. BaseAI memory git sync helps you to **track** changes in the git repository and deploy only the changes to Langbase instead of the whole memory every time. This is useful to **avoid** embedding generation for every document in the memory. --- ## Next steps Now that you have created an AI memory, you can proceed to create an AI agent pipe using Langbase. --- How to use tool calling with Generate API? https://langbase.com/docs/deprecated/tool-calling/generate-api/ import { generateMetadata } from '@/lib/generate-metadata'; # How to use tool calling with Generate API? Follow this quick guide to learn how to use tool calling with the Generate API in Langbase. We will not use `stream` mode for this example. The Generate API endpoint has been deprecated. Learn how to use tool calling with Pipe run API [here](/features/tool-calling). --- ## Step 0: Create a Pipe Create a new `generate` type Pipe or open an existing `generate` Pipe in your Langbase account. Go ahead and turn off the `stream` mode for this example and deploy the Pipe. _Alternatively, you can fork this [tool call generate Pipe](https://langbase.com/examples/tool-calling-generate-example) Pipe and skip to step 3._ --- ## Step 1: Select OpenAI model Tool calling is available with **OpenAI models**. So select any of the available OpenAI models in your Pipe. --- ## Step 2: Add a tool to the Pipe Let's add a tool to get the current weather of a given location. Click on the `Add` button in the Tools section to add a new tool. This will open a modal where you can define the tool. The tool we are defining will take two arguments: 1. location 2. unit. The `location` argument is required and the `unit` argument is optional. The tool definition will look something like the following. ```json { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather of a given location", "parameters": { "type": "object", "required": [ "location" ], "properties": { "unit": { "enum": [ "celsius", "fahrenheit" ], "type": "string" }, "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } } } } } ``` Go ahead and deploy the Pipe to production. If a Pipe has tools, the playground will be disabled. You can only test tool calling with our Generate and Chat API. --- ## Step 3: User prompt to call the tool Go ahead and copy your Pipe API key from the Pipe API page. You will need this key to call the Generate API. Now let's create an `index.js` file where we will define `get_current_weather` function and also call the Pipe. ```js const get_current_weather = ({ location, unit }) => { // get weather for the location and return the temperature }; const tools = { get_current_weather }; (async () => { const messages = [ { role: 'user', content: 'Whats the weather in San Francisco?' 
		}
	];

	// replace this with your Pipe API key
	const pipeApiKey = ``;

	const res = await fetch('https://api.langbase.com/beta/generate', {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${pipeApiKey}`
		},
		body: JSON.stringify({ messages })
	});
})();
```

Because the user prompt requires the current weather of San Francisco, the model will respond with a tool call like the following:

```json
{
	"role": "assistant",
	"content": null,
	"tool_calls": [
		{
			"id": "call_u28sPmmCAWkop0OdgDYDJ9OG",
			"type": "function",
			"function": {
				"name": "get_current_weather",
				"arguments": "{\"location\": \"San Francisco\"}"
			}
		}
	]
}
```

---

## Step 4: Handle the tool call

To check if the model has called the tool, you can check the `tool_calls` array in the model's response. If it exists, call the functions specified in the `tool_calls` array and send the response back to Langbase.

In the Generate API, you must push the assistant response (the one above) that contained `tool_calls` into the `messages` array before pushing the tool responses.

```js
(async () => {
	const messages = [
		{
			role: 'user',
			content: 'Whats the weather in San Francisco?'
		}
	];

	// add your Pipe API key here
	const pipeApiKey = ``;

	const res = await fetch('https://api.langbase.com/beta/generate', {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${pipeApiKey}`
		},
		body: JSON.stringify({ messages })
	});

	const data = await res.json();
	const { raw } = data;

	// get the response message from the model
	const responseMessage = raw.choices[0].message;

	// get the tool calls from the response message
	const toolCalls = responseMessage.tool_calls;

	if (toolCalls) {
		// push the assistant response to the messages array to send back to the API
		messages.push(responseMessage);

		// Call all the functions in the tool_calls array
		toolCalls.forEach(toolCall => {
			const toolName = toolCall.function.name;
			const toolParameters = JSON.parse(toolCall.function.arguments);
			const toolFunction = tools[toolName];
			const toolResponse = toolFunction(toolParameters);

			// push the tool response to the messages array to send back to the API
			messages.push({
				tool_call_id: toolCall.id, // required: id of the tool call
				role: 'tool', // required: role of the message
				name: toolName, // required: name of the tool
				content: JSON.stringify(toolResponse) // required: response of the tool
			});
		});

		// send the messages array back to the API
		const res = await fetch('https://api.langbase.com/beta/generate', {
			method: 'POST',
			headers: {
				'Content-Type': 'application/json',
				Authorization: `Bearer ${pipeApiKey}`
			},
			body: JSON.stringify({ messages })
		});

		const data = await res.json();
		console.log(data.completion);
	}
})();
```

This is what a typical model response will look like after calling the tool:

```json
{
	"completion": "The current temperature in San Francisco, CA is 25°C.",
	"raw": {
		"id": "chatcmpl-9hQG8k2pD1A6JoFKQ0O6BKKvJzogS",
		"object": "chat.completion",
		"created": 1720136072,
		"model": "gpt-4o-2024-05-13",
		"choices": [
			{
				"index": 0,
				"message": {
					"role": "assistant",
					"content": "The current temperature in San Francisco, CA is 25°C."
				},
				"logprobs": null,
				"finish_reason": "stop"
			}
		],
		"usage": {
			"prompt_tokens": 121,
			"completion_tokens": 14,
			"total_tokens": 135
		},
		"system_fingerprint": "fp_ce0793330f"
	}
}
```

And that's it! You have successfully used tool calling with the Generate API in Langbase.

How to use tool calling with Chat API?
https://langbase.com/docs/deprecated/tool-calling/chat-api/

import { generateMetadata } from '@/lib/generate-metadata';

# How to use tool calling with Chat API?

Follow this quick guide to learn how to use tool calling with the Chat API in Langbase. We will not use `stream` mode for this example.

The Chat API endpoint has been deprecated. Learn how to use tool calling with the Pipe run API [here](/features/tool-calling).

---

## Step 0: Create a Pipe

Create a new `chat` type Pipe or open an existing `chat` Pipe in your Langbase account. Go ahead and turn off the `stream` mode for this example and deploy the Pipe.

_Alternatively, you can fork this [tool call chat Pipe](https://langbase.com/examples/tool-calling-chat-example) and skip to step 3._

---

## Step 1: Select OpenAI model

Tool calling is available with **OpenAI models**. Select any of the available OpenAI models in your Pipe.

---

## Step 2: Add a tool to the Pipe

Let's add a tool to get the current weather of a given location. Click on the `Add` button in the Tools section to add a new tool. This will open a modal where you can define the tool.

The tool we are defining takes two arguments: `location` and `unit`. The `location` argument is required; `unit` is optional. The tool definition will look something like the following.

```json
{
	"type": "function",
	"function": {
		"name": "get_current_weather",
		"description": "Get the current weather of a given location",
		"parameters": {
			"type": "object",
			"required": [
				"location"
			],
			"properties": {
				"unit": {
					"enum": [
						"celsius",
						"fahrenheit"
					],
					"type": "string"
				},
				"location": {
					"type": "string",
					"description": "The city and state, e.g. San Francisco, CA"
				}
			}
		}
	}
}
```

Go ahead and deploy the Pipe to production.

If a Pipe has tools, the playground will be disabled. You can only test tool calling with our Generate and Chat API.

---

## Step 3: User prompt to call the tool

Go ahead and copy your Pipe API key from the Pipe API page. You will need this key to call the Chat API.

Now let's create an `index.js` file where we will define the `get_current_weather` function and also call the Pipe.

```js
const get_current_weather = ({ location, unit }) => {
	// Get weather for the location and return the temperature.
	// Mock implementation for this example:
	return { temperature: 25, unit: unit ?? 'celsius' };
};

const tools = { get_current_weather };

(async () => {
	const messages = [
		{
			role: 'user',
			content: 'Whats the weather in SF?'
		}
	];

	// replace this with your Pipe API key
	const pipeApiKey = ``;

	const res = await fetch('https://api.langbase.com/beta/chat', {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${pipeApiKey}`
		},
		body: JSON.stringify({ messages })
	});
})();
```

Because the user prompt requires the current weather of San Francisco, the model will respond with a tool call like the following:

```json
{
	"role": "assistant",
	"content": null,
	"tool_calls": [
		{
			"id": "call_u28sPmmCAWkop0OdgDYDJ9OG",
			"type": "function",
			"function": {
				"name": "get_current_weather",
				"arguments": "{\"location\": \"San Francisco\"}"
			}
		}
	]
}
```

---

## Step 4: Handle the tool call

To check if the model has called the tool, you can check the `tool_calls` array in the model's response. If it exists, call the functions specified in the `tool_calls` array and send the response back to Langbase.
```js
(async () => {
	const messages = [
		{
			role: 'user',
			content: 'Whats the weather in SF?',
		},
	];

	// replace this with your Pipe API key
	const pipeApiKey = ``;

	const res = await fetch('https://api.langbase.com/beta/chat', {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${pipeApiKey}`,
		},
		body: JSON.stringify({
			messages,
		}),
	});

	const data = await res.json();

	// get the threadId from the response headers
	const threadId = res.headers.get('lb-thread-id');

	const { raw } = data;

	// get the response message from the model
	const responseMessage = raw.choices[0].message;

	// get the tool calls from the response message
	const toolCalls = responseMessage.tool_calls;

	if (toolCalls) {
		const toolMessages = [];

		// call all the functions in the tool_calls array
		toolCalls.forEach(toolCall => {
			const toolName = toolCall.function.name;
			const toolParameters = JSON.parse(toolCall.function.arguments);
			const toolFunction = tools[toolName];
			const toolResponse = toolFunction(toolParameters);

			toolMessages.push({
				tool_call_id: toolCall.id, // required: id of the tool call
				role: 'tool', // required: role of the message
				name: toolName, // required: name of the tool
				content: JSON.stringify(toolResponse), // required: response of the tool
			});
		});

		// send the tool responses back to the API
		const res = await fetch('https://api.langbase.com/beta/chat', {
			method: 'POST',
			headers: {
				'Content-Type': 'application/json',
				Authorization: `Bearer ${pipeApiKey}`,
			},
			body: JSON.stringify({
				messages: toolMessages,
				threadId,
			}),
		});

		const data = await res.json();
		console.log(data.completion);
	}
})();
```

Unlike the Generate API, you don't need to send the assistant's response back to the Chat API. Instead, just send the `role: 'tool'` responses.

Also, make sure to include the `threadId` in the request body of your next requests to the Chat API. You can get the `threadId` from the response headers of the first request to the Chat API.

This is what a typical model response will look like after calling the tool:

```json
{
	"completion": "The current temperature in San Francisco, CA is 25°C.",
	"raw": {
		"id": "chatcmpl-9hQG8k2pD1A6JoFKQ0O6BKKvJzogS",
		"object": "chat.completion",
		"created": 1720136072,
		"model": "gpt-4o-2024-05-13",
		"choices": [
			{
				"index": 0,
				"message": {
					"role": "assistant",
					"content": "The current temperature in San Francisco, CA is 25°C."
				},
				"logprobs": null,
				"finish_reason": "stop"
			}
		],
		"usage": {
			"prompt_tokens": 121,
			"completion_tokens": 14,
			"total_tokens": 135
		},
		"system_fingerprint": "fp_ce0793330f"
	}
}
```

And that's it! You have successfully used tool calling with the Chat API in Langbase.

Pricing for Embed Primitive

https://langbase.com/docs/embed/platform/pricing/

import { generateMetadata } from '@/lib/generate-metadata';

# Pricing for Embed Primitive

Requests to the Embed primitive are counted as **Runs** against your subscription plan.

| Plan       | Runs                     | Overage                  |
|------------|--------------------------|--------------------------|
| Hobby      | 500                      | -                        |
| Enterprise | [Contact Us][contact-us] | [Contact Us][contact-us] |

Each run is an API request that can contain at most 1,000 tokens, which is roughly 750 words (about the length of an article). If your API request contains, for instance, 1,500 tokens, it counts as 2 runs.

### Free Users

- **Limit**: 500 runs per month.
- **Overage**: No overage.

### Enterprise Users

There are no hard limits for Enterprise. Billing is based solely on the number of runs during each billing period.
If you have questions about your usage or need assistance, please don't hesitate to [contact us](mailto:support@langbase.com).

---

[contact-us]: mailto:support@langbase.com

Limits for Embed Primitive

https://langbase.com/docs/embed/platform/limits/

import { generateMetadata } from '@/lib/generate-metadata';

# Limits for Embed Primitive

The following Rate and Usage Limits apply for the Embed primitive:

### Rate Limits

Embed primitive requests follow our standard rate limits. See the [Rate Limits](/api-reference/limits/rate-limits) page for more details.

### Usage Limits

Requests to the Embed primitive are counted as **Runs** against your subscription plan. See the [Run Usage Limits](/api-reference/limits/usage-limits) page for more details.

Summarization workflow

https://langbase.com/docs/examples/workflow/summarization/

import { generateMetadata } from '@/lib/generate-metadata';

# Summarization workflow

This example demonstrates how to create a workflow that summarizes text input in a single durable step with retries.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import { Langbase, Workflow } from 'langbase';

async function processText({ input }: { input: string }) {
	// Initialize Langbase
	const langbase = new Langbase({
		apiKey: process.env.LANGBASE_API_KEY!,
	});

	// Create workflow with debug mode
	const workflow = new Workflow({ debug: true });

	try {
		// Define a single step with retries
		const response = await workflow.step({
			id: 'process_text',
			retries: { limit: 2, delay: 1000, backoff: 'exponential' },
			run: async () => {
				const response = await langbase.agent.run({
					model: 'openai:gpt-4o',
					instructions: `Summarize the following text in a single paragraph. Be concise but capture the key information.`,
					apiKey: process.env.LLM_API_KEY!,
					input: [{ role: 'user', content: input }],
					stream: false
				});

				return response.output;
			}
		});

		// Return the result
		return { response };
	} catch (error) {
		console.error('Workflow step failed:', error);
		throw error;
	}
}

async function main() {
	const sampleText = `
	Langbase is the most powerful serverless AI platform for building AI agents with memory.
	Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with
	a world-class developer experience without using any frameworks.

	Compared to complex AI frameworks, Langbase is serverless and the first composable AI platform.
	Build AI agents without any bloated frameworks. You write the logic, we handle the logistics.

	Langbase offers AI Pipes (serverless agents with tools), AI Memory (serverless RAG), and AI Studio
	(developer platform). The platform is 30-50x less expensive than competitors, supports 250+ LLM models,
	and enables collaboration among team members.
	`;

	const results = await processText({ input: sampleText });
	console.log(JSON.stringify(results, null, 2));
}

main();
```
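To try it locally, run the file with a TypeScript runner of your choice; for instance (assuming `tsx` is installed and your `.env` contains both keys):

```bash {{ title: 'bash' }}
npx tsx index.ts
```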
---

Page

https://langbase.com/docs/examples/workflow/email-processing/

import { generateMetadata } from '@/lib/generate-metadata';

## Create an email processing workflow

This example demonstrates how to create a workflow that analyzes an email and generates a response when needed.

---

```ts {{ title: 'index.ts' }}
import "dotenv/config";
import { Langbase, Workflow } from "langbase";

async function processEmail({ emailContent }: { emailContent: string }) {
	// Initialize Langbase
	const langbase = new Langbase({
		apiKey: process.env.LANGBASE_API_KEY!,
	});

	// Create a new workflow
	const workflow = new Workflow();

	try {
		// Steps 1 & 2: Run summary and sentiment analysis in parallel
		const [summary, sentiment] = await Promise.all([
			workflow.step({
				id: "summarize_email", // The id for the step
				run: async () => {
					const response = await langbase.agent.run({
						model: "openai:gpt-4.1-mini",
						instructions: `Create a concise summary of this email. Focus on the main points, requests, and any action items mentioned.`,
						apiKey: process.env.LLM_API_KEY!,
						input: [{ role: "user", content: emailContent }],
						stream: false,
					});
					return response.output;
				},
			}),
			workflow.step({
				id: "analyze_sentiment",
				run: async () => {
					const response = await langbase.agent.run({
						model: "openai:gpt-4.1-mini",
						instructions: `Analyze the sentiment of this email. Provide a brief analysis that includes the overall tone (positive, neutral, or negative) and any notable emotional elements.`,
						apiKey: process.env.LLM_API_KEY!,
						input: [{ role: "user", content: emailContent }],
						stream: false,
					});
					return response.output;
				},
			}),
		]);

		// Step 3: Determine if response is needed (using the results from previous steps)
		const responseNeeded = await workflow.step({
			id: "determine_response_needed",
			run: async () => {
				const response = await langbase.agent.run({
					model: "openai:gpt-4.1-mini",
					instructions: `Based on the email summary and sentiment analysis, determine if a response is needed. Answer with 'yes' if a response is required, or 'no' if no response is needed. Consider factors like: Does the email contain a question? Is there an explicit request? Is it urgent?`,
					apiKey: process.env.LLM_API_KEY!,
					input: [
						{
							role: "user",
							content: `Email: ${emailContent}\n\nSummary: ${summary}\n\nSentiment: ${sentiment}\n\nDoes this email require a response?`,
						},
					],
					stream: false,
				});
				// output can be null if the model called a tool, so guard before lowercasing
				return response.output?.toLowerCase().includes("yes") ?? false;
			},
		});

		// Step 4: Generate response if needed
		let response = null;
		if (responseNeeded) {
			response = await workflow.step({
				id: "generate_response",
				run: async () => {
					const response = await langbase.agent.run({
						model: "openai:gpt-4.1-mini",
						instructions: `Generate a professional email response. Address all questions and requests from the original email. Be helpful, clear, and maintain a professional tone that matches the original email sentiment.`,
						apiKey: process.env.LLM_API_KEY!,
						input: [
							{
								role: "user",
								content: `Original Email: ${emailContent}\n\nSummary: ${summary}\n\n Sentiment Analysis: ${sentiment}\n\nPlease draft a response email.`,
							},
						],
						stream: false,
					});
					return response.output;
				},
			});
		}

		// Return the results
		return {
			summary,
			sentiment,
			responseNeeded,
			response,
		};
	} catch (error) {
		console.error("Email processing workflow failed:", error);
		throw error;
	}
}

async function main() {
	const sampleEmail = `
	Subject: Pricing Information and Demo Request

	Hello,

	I came across your platform and I'm interested in learning more about your product for our growing company. Could you please send me some information on your pricing tiers? We're particularly interested in the enterprise tier as we now have a team of about 50 people who would need access.

	Would it be possible to schedule a demo sometime next week?

	Thanks in advance for your help!
Best regards, Jamie `; const results = await processEmail({ emailContent: sampleEmail }); console.log(JSON.stringify(results, null, 2)); } main(); ``` --- Website Crawler https://langbase.com/docs/examples/tools/website-crawler/ import { generateMetadata } from '@/lib/generate-metadata'; # Website Crawler This example demonstrates how to crawl a website using Langbase website crawler tool. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); /** * Crawls specified URLs using spider.cloud service. * * Get your API key from the following link and set it in .env file. * * @link https://spider.cloud/docs/quickstart */ async function main() { const results = await langbase.tools.crawl({ url: ['https://langbase.com', 'https://langbase.com/about'], maxPages: 1, apiKey: process.env.CRAWL_KEY, }); console.log(results); } main(); ``` --- Web Search https://langbase.com/docs/examples/tools/web-search/ import { generateMetadata } from '@/lib/generate-metadata'; # Web Search This example demonstrates how to perform a web search using Langbase web search tool with Exa. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; // Learn more about Langbase SDK keys: https://langbase.com/docs/api-reference/api-keys const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const results = await langbase.tools.webSearch({ service: 'exa', totalResults: 2, query: 'What is Langbase?', domains: ['https://langbase.com'], apiKey: process.env.EXA_API_KEY!, // Find Exa key: https://dashboard.exa.ai/api-keys }); console.log(results); } main(); ``` --- Update a Thread https://langbase.com/docs/examples/threads/update-message/ import { generateMetadata } from '@/lib/generate-metadata'; # Update a Thread This example demonstrates how to update a thread. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.threads.update({ threadId: 'REPLACE_WITH_THREAD_ID', metadata: { company: 'langbase', about: 'Langbase is the most powerful serverless platform for building AI agents with memory.', }, }); console.log(response); } main(); ``` --- Get a Thread https://langbase.com/docs/examples/threads/get/ import { generateMetadata } from '@/lib/generate-metadata'; # Get a Thread This example demonstrates how to get a thread. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.threads.get({ threadId: 'REPLACE_WITH_THREAD_ID', }); console.log(response); } main(); ``` --- List Messages in a Thread https://langbase.com/docs/examples/threads/list-messages/ import { generateMetadata } from '@/lib/generate-metadata'; # List Messages in a Thread This example demonstrates how to list messages in a thread. 
--- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.threads.messages.list({ threadId: 'REPLACE_WITH_THREAD_ID', }); console.log(response); } main(); ``` --- Delete a Thread https://langbase.com/docs/examples/threads/delete/ import { generateMetadata } from '@/lib/generate-metadata'; # Delete a Thread This example demonstrates how to delete a thread. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.threads.delete({ threadId: 'REPLACE_WITH_THREAD_ID', }); console.log(response); } main(); ``` --- Create a Thread https://langbase.com/docs/examples/threads/create/ import { generateMetadata } from '@/lib/generate-metadata'; # Create a Thread This example demonstrates how to create a thread. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.threads.create({ messages: [{role: 'user', content: 'hello', metadata: {size: 'small'}}], metadata: {company: 'langbase'}, }); console.log(response); } main(); ``` --- Append to a Thread https://langbase.com/docs/examples/threads/append/ import { generateMetadata } from '@/lib/generate-metadata'; # Append to a Thread This example demonstrates how to append to a thread. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.threads.append({ threadId: 'REPLACE_WITH_THREAD_ID', messages: [ { role: 'assistant', content: 'Nice to meet you', metadata: {size: 'small'}, }, ], }); console.log(response); } main(); ``` --- Update Pipe Agent https://langbase.com/docs/examples/pipe-agent/update-pipe-agent/ import { generateMetadata } from '@/lib/generate-metadata'; # Update Pipe Agent This example demonstrates how to update a pipe agent. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.pipes.update({ name: 'summary-agent', description: 'This is a pipe updated with the SDK', model: 'google:gemini-1.5-flash-8b-latest', }); console.log(response); } main(); ``` --- Run Pipe Agent with Tools https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-tools/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Pipe Agent with Tools This example demonstrates how to run a pipe agent with tools and use function calling capabilities with a Langbase pipe agent. 
---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import { Langbase, Message, Tools, getToolsFromRun } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	await createSummaryAgent();

	const response = await langbase.pipes.run({
		stream: false,
		name: 'summary-agent',
		messages: [
			{
				role: 'user',
				content: "What's the weather in SF?",
			},
		],
		tools: [weatherToolSchema],
	});

	const toolCalls = await getToolsFromRun(response);
	const hasToolCalls = toolCalls.length > 0;
	const threadId = response.threadId;

	if (hasToolCalls) {
		// Process each tool call
		const toolResultPromises = toolCalls.map(async (toolCall): Promise<Message> => {
			const toolName = toolCall.function.name;
			const toolParameters = JSON.parse(toolCall.function.arguments);
			const toolFunction = tools[toolName as keyof typeof tools];

			// Call the tool function with the parameters
			const toolResponse = await toolFunction(toolParameters);

			// Return the tool result
			return {
				role: 'tool',
				name: toolName,
				content: toolResponse,
				tool_call_id: toolCall.id,
			};
		});

		// Wait for all tool calls to complete
		const toolResults = await Promise.all(toolResultPromises);

		// Call the agent pipe again with the updated messages
		const finalResponse = await langbase.pipes.run({
			threadId,
			stream: false,
			name: 'summary-agent',
			messages: toolResults,
			tools: [weatherToolSchema],
		});

		console.log(JSON.stringify(finalResponse, null, 2));
	} else {
		console.log('Direct response (no tools called):');
		console.log(JSON.stringify(response, null, 2));
	}
}

// Mock implementation of the weather function
async function getCurrentWeather(args: { location: string }) {
	return 'Sunny, 75°F';
}

// Weather tool schema
const weatherToolSchema: Tools = {
	type: 'function',
	function: {
		name: 'getCurrentWeather',
		description: 'Get the current weather of a given location',
		parameters: {
			type: 'object',
			required: ['location'],
			properties: {
				unit: {
					enum: ['celsius', 'fahrenheit'],
					type: 'string',
				},
				location: {
					type: 'string',
					description: 'The city and state, e.g. San Francisco, CA',
				},
			},
		},
	},
};

// Object to hold all tools
const tools = { getCurrentWeather };

/**
 * Creates a summary agent pipe if it doesn't already exist.
 *
 * This function checks if a pipe with the name 'summary-agent' exists in the system.
 * If the pipe doesn't exist, it creates a new private pipe with a system message
 * configuring it as a helpful assistant.
 *
 * @async
 * @returns {Promise<void>} A promise that resolves when the operation is complete
 * @throws {Error} Logs any errors encountered during the creation process
 */
async function createSummaryAgent() {
	try {
		await langbase.pipes.create({
			name: 'summary-agent',
			upsert: true,
			status: 'private',
			messages: [
				{
					role: 'system',
					content: 'You are a helpful assistant that help users summarize text.',
				},
			],
		});
	} catch (error) {
		console.error('Error creating summary agent:', error);
	}
}

main();
```

---

Run Pipe Agent with Tools and Streaming

https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-tools-streaming/

import { generateMetadata } from '@/lib/generate-metadata';

# Run Pipe Agent with Tools and Streaming

This example demonstrates how to run a pipe agent with tools to use function calling with streaming responses.
---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {
	Langbase,
	Message,
	Tools,
	getRunner,
	getToolsFromRunStream,
	getTextPart,
	ChunkStream
} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	await createSummaryAgent();

	// Call summary agent pipe
	const response = await langbase.pipes.run({
		stream: true,
		name: 'summary-agent',
		messages: [
			{ role: 'user', content: "What's the weather in SF?" },
		],
		tools: [weatherToolSchema],
	});

	const [streamForResponse, streamForToolCall] = response.stream.tee();

	const toolCalls = await getToolsFromRunStream(streamForToolCall);
	const hasToolCalls = toolCalls.length > 0;
	const threadId = response.threadId;

	if (hasToolCalls) {
		// Process each tool call
		const toolResultPromises = toolCalls.map(async (toolCall): Promise<Message> => {
			const toolName = toolCall.function.name;
			const toolParameters = JSON.parse(toolCall.function.arguments);
			const toolFunction = tools[toolName as keyof typeof tools];

			// Call the tool function with the parameters
			const toolResponse = await toolFunction(toolParameters);

			// Return the tool result
			return {
				role: 'tool',
				name: toolName,
				content: toolResponse,
				tool_call_id: toolCall.id,
			};
		});

		// Wait for all tool calls to complete
		const toolResults = await Promise.all(toolResultPromises);

		// Call the agent pipe again with the updated messages
		const finalResponse = await langbase.pipes.run({
			stream: true,
			name: 'summary-agent',
			threadId: threadId!,
			messages: toolResults,
			tools: [weatherToolSchema],
		});

		const runner = await getRunner(finalResponse.stream);

		for await (const chunk of runner) {
			const textContent = getTextPart(chunk as ChunkStream);
			process.stdout.write(textContent);
		}

		console.log("\n");
	} else {
		console.log("Direct response (no tools called):");

		const runner = await getRunner(streamForResponse);

		for await (const chunk of runner) {
			const textContent = getTextPart(chunk as ChunkStream);
			process.stdout.write(textContent);
		}
	}
}

// Mock implementation of the weather function
async function getCurrentWeather(args: { location: string }) {
	return 'Sunny, 75°F';
}

// Weather tool schema
const weatherToolSchema: Tools = {
	type: 'function',
	function: {
		name: 'getCurrentWeather',
		description: 'Get the current weather of a given location',
		parameters: {
			type: 'object',
			required: ['location'],
			properties: {
				unit: {
					enum: ['celsius', 'fahrenheit'],
					type: 'string',
				},
				location: {
					type: 'string',
					description: 'The city and state, e.g. San Francisco, CA',
				},
			},
		},
	},
};

// Object to hold all tools
const tools = { getCurrentWeather };

/**
 * Creates a summary agent pipe if it doesn't already exist.
 *
 * This function checks if a pipe with the name 'summary-agent' exists in the system.
 * If the pipe doesn't exist, it creates a new private pipe with a system message
 * configuring it as a helpful assistant.
 *
 * @async
 * @returns {Promise<void>} A promise that resolves when the operation is complete
 * @throws {Error} Logs any errors encountered during the creation process
 */
async function createSummaryAgent() {
	try {
		await langbase.pipes.create({
			name: 'summary-agent',
			upsert: true,
			status: 'private',
			messages: [
				{
					role: 'system',
					content: 'You are a helpful assistant that can answer questions and help with tasks in json format',
				}
			]
		});
	} catch (error) {
		console.error('Error creating summary agent:', error);
	}
}

main();
```

---

Run Pipe Agent with Structured Output

https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-structured-output/

import { generateMetadata } from '@/lib/generate-metadata';

# Run Pipe Agent with Structured Output

This example demonstrates how to run a pipe agent with structured output.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';
import {z} from 'zod';
import {zodToJsonSchema} from 'zod-to-json-schema';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

// Define the Structured Output JSON schema with Zod
const MathReasoningSchema = z.object({
	steps: z.array(
		z.object({
			explanation: z.string(),
			output: z.string(),
		}),
	),
	final_answer: z.string(),
});

const jsonSchema = zodToJsonSchema(MathReasoningSchema, {target: 'openAi'});

async function main() {
	if (!process.env.LANGBASE_API_KEY) {
		console.error('❌ Missing LANGBASE_API_KEY in environment variables.');
		process.exit(1);
	}

	await createMathTutorPipe();
	await runMathTutorPipe('How can I solve 8x + 22 = -23?');
}

/**
 * Create the pipe agent with the name 'math-tutor-agent'
 * Set the model to 'openai:gpt-4o'
 * Set the response_format to the JSON schema.
 * Set the system message to 'You are a helpful math tutor. Guide the user through the solution step by step.'
 */
async function createMathTutorPipe() {
	await langbase.pipes.create({
		name: 'math-tutor-agent',
		model: 'openai:gpt-4o',
		upsert: true,
		messages: [
			{
				role: 'system',
				content: 'You are a helpful math tutor. Guide the user through the solution step by step.'
			},
		],
		json: true,
		response_format: {
			type: 'json_schema',
			json_schema: {
				name: 'math_reasoning',
				schema: jsonSchema,
			},
		},
	});
}

/**
 * Run the pipe agent with the name 'math-tutor-agent'
 * Send the user message to the pipe agent.
 * Parse the response using Zod and validate the response is correct.
 * Display the response to the user.
 **/
async function runMathTutorPipe(question: string) {
	const {completion} = await langbase.pipes.run({
		stream: false,
		name: 'math-tutor-agent',
		messages: [
			{
				role: 'user',
				content: question
			}
		],
	});

	// Parse and validate the response using Zod
	const solution = MathReasoningSchema.parse(JSON.parse(completion));

	console.log('✅ Structured Output Response:', solution);
}

main();
```

---

Run Pipe Agent Streaming Chat

https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-streaming-chat/

import { generateMetadata } from '@/lib/generate-metadata';

# Run Pipe Agent Streaming Chat

This example demonstrates how to use streaming responses with a Langbase pipe agent while maintaining conversation context.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {Langbase, getRunner} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	await createSummaryAgent();

	// Message 1: Tell something to the LLM.
	const response1 = await langbase.pipes.run({
		stream: true,
		name: 'summary-agent',
		messages: [
			{
				role: 'user',
				content: 'My company is called Langbase'
			}
		]
	});

	const runner = getRunner(response1.stream);

	for await (const chunk of runner) {
		if (chunk.choices[0]?.delta?.content) {
			process.stdout.write(chunk.choices[0].delta.content);
		}
	}
	console.log("\n");

	const threadId = response1.threadId;

	// Message 2: Ask something about the first message.
	// Continue the conversation in the same thread by sending
	// `threadId` from the second message onwards.
	const response2 = await langbase.pipes.run({
		stream: true,
		name: 'summary-agent',
		threadId: threadId!,
		messages: [
			{
				role: 'user',
				content: 'Tell me the name of my company?'
			}
		]
	});

	const runner2 = getRunner(response2.stream);

	runner2.on('content', content => {
		process.stdout.write(content);
	});
}

/**
 * Creates a summary agent pipe if it doesn't already exist.
 *
 * This function checks if a pipe with the name 'summary-agent' exists in the system.
 * If the pipe doesn't exist, it creates a new private pipe with a system message
 * configuring it as a helpful assistant.
 *
 * @async
 * @returns {Promise} A promise that resolves when the operation is complete
 * @throws {Error} Logs any errors encountered during the creation process
 */
async function createSummaryAgent() {
	try {
		await langbase.pipes.create({
			name: 'summary-agent',
			upsert: true,
			status: 'private',
			messages: [
				{
					role: 'system',
					content: 'You are a helpful assistant that helps users summarize text.',
				},
			],
		});
	} catch (error) {
		console.error('Error creating summary agent:', error);
	}
}

main();
```

---

Run Pipe Agent Streaming https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-streaming/ import { generateMetadata } from '@/lib/generate-metadata';

# Run Pipe Agent Streaming

This example demonstrates how to run a pipe agent with streaming response.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {getRunner, Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	await createSummaryAgent();

	// Get readable stream
	const {stream, threadId, rawResponse} = await langbase.pipes.run({
		stream: true,
		name: 'summary-agent',
		rawResponse: true,
		messages: [
			{
				role: 'user',
				content: 'Who is an AI Engineer?'
			}
		]
	});

	// Convert the stream to a stream runner.
	const runner = getRunner(stream);

	// Method 1: Using event listeners
	runner.on('connect', () => {
		console.log('Stream started.\n');
	});

	runner.on('content', content => {
		process.stdout.write(content);
	});

	runner.on('end', () => {
		console.log('\nStream ended.');
	});

	runner.on('error', error => {
		console.error('Error:', error);
	});
}

/**
 * Creates a summary agent pipe if it doesn't already exist.
 *
 * This function checks if a pipe with the name 'summary-agent' exists in the system.
 * If the pipe doesn't exist, it creates a new private pipe with a system message
 * configuring it as a helpful assistant.
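 * Because `upsert: true` is set, calling this on every run updates the existing pipe instead of failing.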
* * @async * @returns {Promise} A promise that resolves when the operation is complete * @throws {Error} Logs any errors encountered during the creation process */ async function createSummaryAgent() { try { await langbase.pipes.create({ name: 'summary-agent', upsert: true, status: 'private', messages: [ { role: 'system', content: 'You are a helpful assistant that help users summarize text.', }, ], }); } catch (error) { console.error('Error creating summary agent:', error); } } main(); ``` --- Run Pipe Agent Chat with Pipe API Keys https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-chat-pipe-api-keys/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Pipe Agent Chat with Pipe API Keys This example demonstrates how to run a pipe agent chat with pipe API keys. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {getRunner, Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { await createSummaryAgent(); // Get readable stream if (!process.env.PIPE_API_KEY) { console.log(`PIPE_API_KEY is not set in the environment variables get it from https://langbase.com/pipes`); } else { const {stream, threadId, rawResponse} = await langbase.pipes.run({ stream: true, rawResponse: true, apiKey: process.env.PIPE_API_KEY!, messages: [ { role: 'user', content: 'Who is an AI Engineer?' } ] }); // Convert the stream to a stream runner. const runner = getRunner(stream); // Method 1: Using event listeners runner.on('connect', () => { console.log('Stream started.\n'); }); runner.on('content', content => { process.stdout.write(content); }); runner.on('end', () => { console.log('\nStream ended.'); }); runner.on('error', error => { console.error('Error:', error); }); } } /** * Creates a summary agent pipe if it doesn't already exist. * * This function checks if a pipe with the name 'summary-agent' exists in the system. * If the pipe doesn't exist, it creates a new private pipe with a system message * configuring it as a helpful assistant. * * @async * @returns {Promise} A promise that resolves when the operation is complete * @throws {Error} Logs any errors encountered during the creation process */ async function createSummaryAgent() { try { await langbase.pipes.create({ name: 'summary-agent', upsert: true, status: 'private', messages: [ { role: 'system', content: 'You are a helpful assistant that help users summarize text.', }, ], }); } catch (error) { console.error('Error creating summary agent:', error); } } main(); ``` --- Run Pipe Agent Chat with LLM API Keys https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-chat-llm-api-keys/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Pipe Agent Chat with LLM API Keys This example demonstrates how to run a pipe agent chat with LLM API keys. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {getRunner, Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); async function main() { await createSummaryAgent(); // Get readable stream const {stream, threadId, rawResponse} = await langbase.pipes.run({ stream: true, name: 'summary-agent', rawResponse: true, messages: [ { role: 'user', content: 'Who is an AI Engineer?' } ], llmKey: process.env.LLM_KEY!, // Your LLM API key }); // Convert the stream to a stream runner. 
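// getRunner wraps the readable stream in an event-emitter style runner
// exposing `connect`, `content`, `end`, and `error` events, used below.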
const runner = getRunner(stream); // Method 1: Using event listeners runner.on('connect', () => { console.log('Stream started.\n'); }); runner.on('content', content => { process.stdout.write(content); }); runner.on('end', () => { console.log('\nStream ended.'); }); runner.on('error', error => { console.error('Error:', error); }); } /** * Creates a summary agent pipe if it doesn't already exist. * * This function checks if a pipe with the name 'summary-agent' exists in the system. * If the pipe doesn't exist, it creates a new private pipe with a system message * configuring it as a helpful assistant. * * @async * @returns {Promise} A promise that resolves when the operation is complete * @throws {Error} Logs any errors encountered during the creation process */ async function createSummaryAgent() { try { await langbase.pipes.create({ name: 'summary-agent', upsert: true, status: 'private', messages: [ { role: 'system', content: 'You are a helpful assistant that help users summarize text.', }, ], }); } catch (error) { console.error('Error creating summary agent:', error); } } main(); ``` --- Run Pipe Agent Chat https://langbase.com/docs/examples/pipe-agent/run-pipe-agent-chat/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Pipe Agent Chat This example demonstrates how to maintain a conversational thread with a Langbase pipe agent. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { await createSummaryAgent(); // Message 1: Tell something to the LLM. const response1 = await langbase.pipes.run({ stream: false, name: 'summary-agent', messages: [{role: 'user', content: 'My company is called Langbase'}], }); console.log(response1.completion); // Message 2: Ask something about the first message. // Continue the conversation in the same thread by sending // `threadId` from the second message onwards. const response2 = await langbase.pipes.run({ stream: false, name: 'summary-agent', threadId: response1.threadId!, messages: [{role: 'user', content: 'Tell me the name of my company?'}], }); console.log(response2.completion); // You'll see any LLM will know the company is `Langbase` // since it's the same chat thread. This is how you can // continue a conversation in the same thread. } /** * Creates a summary agent pipe if it doesn't already exist. * * This function checks if a pipe with the name 'summary-agent' exists in the system. * If the pipe doesn't exist, it creates a new private pipe with a system message * configuring it as a helpful assistant. * * @async * @returns {Promise} A promise that resolves when the operation is complete * @throws {Error} Logs any errors encountered during the creation process */ async function createSummaryAgent() { try { await langbase.pipes.create({ name: 'summary-agent', upsert: true, status: 'private', messages: [ { role: 'system', content: 'You are a helpful assistant that help users summarize text.', }, ], }); } catch (error) { console.error('Error creating summary agent:', error); } } main(); ``` --- Run Pipe Agent https://langbase.com/docs/examples/pipe-agent/run-pipe/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Pipe Agent This example demonstrates how to run a pipe agent with a user message. 
---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	await createSummaryAgent();

	const response = await langbase.pipes.run({
		stream: false,
		name: 'summary-agent',
		messages: [
			{
				role: 'user',
				content: 'Who is an AI Engineer?',
			},
		],
	});

	console.log('response: ', response.completion);
}

/**
 * Creates a summary agent pipe if it doesn't already exist.
 *
 * This function checks if a pipe with the name 'summary-agent' exists in the system.
 * If the pipe doesn't exist, it creates a new private pipe with a system message
 * configuring it as a helpful assistant.
 *
 * @async
 * @returns {Promise} A promise that resolves when the operation is complete
 * @throws {Error} Logs any errors encountered during the creation process
 */
async function createSummaryAgent() {
	try {
		await langbase.pipes.create({
			name: 'summary-agent',
			upsert: true,
			status: 'private',
			messages: [
				{
					role: 'system',
					content: 'You are a helpful assistant that helps users summarize text.',
				},
			],
		});
	} catch (error) {
		console.error('Error creating summary agent:', error);
	}
}

main();
```

---

Multi Agent https://langbase.com/docs/examples/pipe-agent/multi-agent/ import { generateMetadata } from '@/lib/generate-metadata';

# Multi Agent

This example demonstrates how to create a multi-agent workflow by chaining multiple agents, where one agent's output feeds into the next.

---

```ts {{ title: 'index.ts' }}
// Basic example to demonstrate how to feed the output of an agent as an input to another agent.
import dotenv from "dotenv";
import { Langbase } from "langbase";

dotenv.config();

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	await createSummaryAgent();

	// First agent: Summarize text
	const summarizeAgent = async (text: string) => {
		const response = await langbase.pipes.run({
			stream: false,
			name: "summary-agent",
			messages: [
				{ role: "user", content: `Summarize the following text: ${text}` }
			]
		});
		return response.completion;
	};

	await createGenerateQuestionsAgent();

	// Second agent: Generate questions
	const questionsAgent = async (summary: string) => {
		const response = await langbase.pipes.run({
			stream: false,
			name: "generate-questions-agent",
			messages: [
				{ role: "user", content: `Generate 3 questions based on this summary: ${summary}` }
			]
		});
		return response.completion;
	};

	// Orchestrate the flow: the summary feeds into question generation
	const workflow = async (inputText: string) => {
		const summary = await summarizeAgent(inputText);
		const questions = await questionsAgent(summary);
		return { summary, questions };
	};

	// Example usage
	const inputText = "Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by humans. AI research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.";

	const result = await workflow(inputText);
	console.log("Summary:", result.summary);
	console.log("Questions:", result.questions);
}

/**
 * Creates a summary agent pipe if it doesn't already exist.
 *
 * This function checks if a pipe with the name 'summary-agent' exists in the system.
 * If the pipe doesn't exist, it creates a new private pipe with a system message
 * configuring it as a helpful assistant.
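 * The same try/catch and `upsert: true` pattern is reused below for the question-generation agent.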
 *
 * @async
 * @returns {Promise} A promise that resolves when the operation is complete
 * @throws {Error} Logs any errors encountered during the creation process
 */
async function createSummaryAgent() {
	try {
		await langbase.pipes.create({
			name: 'summary-agent',
			upsert: true,
			status: 'private',
			messages: [
				{
					role: 'system',
					content: 'You are a helpful assistant that helps users summarize text.',
				},
			],
		});
	} catch (error) {
		console.error('Error creating summary agent:', error);
	}
}

/**
 * Creates a questions agent pipe if it doesn't already exist.
 *
 * This function checks if a pipe with the name 'generate-questions-agent' exists in the system.
 * If the pipe doesn't exist, it creates a new private pipe with a system message
 * configuring it as a helpful assistant.
 *
 * @async
 * @returns {Promise} A promise that resolves when the operation is complete
 * @throws {Error} Logs any errors encountered during the creation process
 */
async function createGenerateQuestionsAgent() {
	try {
		await langbase.pipes.create({
			name: 'generate-questions-agent',
			upsert: true,
			status: 'private',
			messages: [
				{
					role: 'system',
					content: 'You are a helpful assistant that helps users generate questions based on the text.',
				}
			]
		});
	} catch (error) {
		console.error('Error creating questions agent:', error);
	}
}

main();
```

---

List Pipe Agents https://langbase.com/docs/examples/pipe-agent/list-pipe-agents/ import { generateMetadata } from '@/lib/generate-metadata';

# List Pipe Agents

This example demonstrates how to list all the pipe agents.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.pipes.list();
	console.log(response);
}

main();
```

---

Create a Pipe Agent https://langbase.com/docs/examples/pipe-agent/create-pipe-example/ import { generateMetadata } from '@/lib/generate-metadata';

# Create a Pipe Agent

This example demonstrates how to create a private pipe in Langbase.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.pipes.create({
		name: 'summary-agent',
		status: 'private',
		messages: [{
			role: 'system',
			content: 'You are a helpful assistant that helps users summarize text.',
		}]
	});

	console.log(response);
}

main();
```

---

Page https://langbase.com/docs/examples/parser/parse-document/ import { generateMetadata } from '@/lib/generate-metadata';

## Parse a document

This example demonstrates how to parse a document.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';
import fs from 'fs';
import path from 'path';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const documentPath = path.join(
		process.cwd(),
		'examples',
		'parse',
		'composable-ai.md',
	);

	const results = await langbase.parser({
		document: fs.readFileSync(documentPath),
		documentName: 'composable-ai.md',
		contentType: 'text/markdown',
	});

	console.log(results);
}

main();
```

---

Upload Documents to a Memory https://langbase.com/docs/examples/memory/upload-documents/ import { generateMetadata } from '@/lib/generate-metadata';

# Upload Documents to a Memory

This example demonstrates how to upload documents to a memory.
---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';
import fs from 'fs';
import path from 'path';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const src = path.join(
		process.cwd(),
		'examples',
		'memory',
		'memory.docs.upload.ts',
	);

	const response = await langbase.memories.documents.upload({
		document: fs.readFileSync(src),
		memoryName: 'memory-sdk',
		documentName: 'memory.docs.upload.ts',
		contentType: 'text/plain',
		meta: {
			extension: 'ts',
			description: 'This is a test file',
		},
	});

	console.log(response);
}

main();
```

---

Retry Embedding https://langbase.com/docs/examples/memory/retry-embedding/ import { generateMetadata } from '@/lib/generate-metadata';

# Retry Embedding

This example demonstrates how to retry embedding generation for a document in Langbase memory.

---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.memories.documents.embeddings.retry({
		memoryName: 'memory-sdk',
		documentName: 'memory.upload.doc.ts',
	});

	console.log(response);
}

main();
```

---

Retrieve Memories using multiple filters https://langbase.com/docs/examples/memory/retrieve-filters-advanced/ import { generateMetadata } from '@/lib/generate-metadata';

# Retrieve Memories using multiple filters

This example demonstrates how to retrieve memories from Langbase with multiple filters.

---

```ts {{ title: 'index.ts' }}
/**
 * Advanced example to demonstrate how to retrieve memories with filters.
 *
 * - And: This filter is used to retrieve memories that match all the filters.
 * - Or: This filter is used to retrieve memories that match any of the filters.
 * - In: This filter is used to retrieve memories that match any of the value/values in the array.
 * - Eq: This filter is used to retrieve memories that match the exact value.
 *
 * In this example, we retrieve memories with the following filters:
 * - company: Langbase
 * - category: docs or examples
 * - primitive: Chunk or Threads
 *
 * We expect to get all chunks of memory from the Langbase Docs memory
 * that have the company Langbase, the category docs or examples, and the primitive can be Chunk or Threads.
 *
 */
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.memories.retrieve({
		memory: [
			{
				name: "memory-sdk",
				filters: [
					"And",
					[
						["company", "Eq", "Langbase"],
						["Or", [
							["category", "Eq", "docs"],
							["category", "Eq", "examples"]
						]],
						["primitive", "In", ["Chunk", "Threads"]]
					]
				]
			}
		],
		query: "What are primitives in Langbase?",
		topK: 3
	});

	console.log(response);
}

main();
```

---

Retrieve Memories with Filters NotEq https://langbase.com/docs/examples/memory/retrieve-filters-NotEq/ import { generateMetadata } from '@/lib/generate-metadata';

# Retrieve Memories with Filters NotEq

This example demonstrates how to retrieve memories from Langbase with filters.

---

```ts {{ title: 'index.ts' }}
/**
 * Basic example to demonstrate how to retrieve memories with filters.
 *
 * - NotEq: This filter is used to retrieve memories that do not match the exact value.
 *
 * In this example, we retrieve memories with the following filters:
 * - company: Google
 *
 * We expect to get all chunks of memory from the Langbase Docs memory that do not have the company Google.
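 * (Filter keys like `company` are matched against the `meta` key-value pairs attached to documents at upload time.)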
 *
 */
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.memories.retrieve({
		memory: [
			{
				name: "memory-sdk",
				filters: ["company", "NotEq", "Google"],
			},
		],
		query: "What is Langbase?",
		topK: 3
	});

	console.log(response);
}

main();
```

---

Retrieve Memories with Filters In https://langbase.com/docs/examples/memory/retrieve-filters-In/ import { generateMetadata } from '@/lib/generate-metadata';

# Retrieve Memories with Filters In

This example demonstrates how to retrieve memories from Langbase with filters.

---

```ts {{ title: 'index.ts' }}
/**
 * Basic example to demonstrate how to retrieve memories with filters.
 *
 * - In: This filter is used to retrieve memories that match any of the value/values in the array.
 *
 * In this example, we retrieve memories with the following filters:
 * - company: Langbase or Google
 *
 * We expect to get all chunks of memory from the Langbase Docs memory that have the company Langbase or Google.
 *
 */
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.memories.retrieve({
		memory: [
			{
				name: "memory-sdk",
				filters: ["company", "In", ["Langbase", "Google"]],
			},
		],
		query: "What are pipes in Langbase?",
		topK: 3
	});

	console.log(response);
}

main();
```

---

Retrieve Memories with Filters NotIn https://langbase.com/docs/examples/memory/retrieve-filters-NotIn/ import { generateMetadata } from '@/lib/generate-metadata';

# Retrieve Memories with Filters NotIn

This example demonstrates how to retrieve memories from Langbase with filters.

---

```ts {{ title: 'index.ts' }}
/**
 * Basic example to demonstrate how to retrieve memories with filters.
 *
 * - NotIn: This filter is used to retrieve memories that do not match any of the value/values in the array.
 *
 * In this example, we retrieve memories with the following filters:
 * - company: Google
 *
 * We expect to get all chunks of memory from the Langbase Docs memory that do not have the company Google.
 *
 */
import 'dotenv/config';
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.memories.retrieve({
		memory: [
			{
				name: "memory-sdk",
				filters: ["company", "NotIn", ["Google"]],
			},
		],
		query: "What are pipes in Langbase?",
		topK: 3
	});

	console.log(response);
}

main();
```

---

Retrieve Memories with Filters And https://langbase.com/docs/examples/memory/retrieve-filters-And/ import { generateMetadata } from '@/lib/generate-metadata';

# Retrieve Memories with Filters And

This example demonstrates how to retrieve memories from Langbase with filters.

---

```ts {{ title: 'index.ts' }}
/**
 * Basic example to demonstrate how to retrieve memories with filters.
 *
 * - And: This filter is used to retrieve memories that match all the filters.
 * - Eq: This filter is used to retrieve memories that match the exact value.
 *
 * In this example, we retrieve memories with the following filters:
 * - company: Langbase
 * - category: docs
 *
 * We expect to get all chunks of memory from the Langbase Docs memory that have the company Langbase and the category docs.
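 * Chunks that lack either metadata key cannot satisfy the `Eq` checks and are excluded.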
* */ import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.retrieve({ memory: [ { name: "memory-sdk", filters: ["And", [ ["company", "Eq", "Langbase"], ["category", "Eq", "docs"] ]] }, ], query: "What are pipes in Langbase Docs?", topK: 3 }); console.log(response); } main(); ``` --- List Memories https://langbase.com/docs/examples/memory/list-memories/ import { generateMetadata } from '@/lib/generate-metadata'; # List Memories This example demonstrates how to list memories from Langbase. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.list(); console.log(response); } main(); ``` --- Retrieve from a Memory https://langbase.com/docs/examples/memory/retrieve/ import { generateMetadata } from '@/lib/generate-metadata'; # Retrieve from a Memory This example demonstrates how to retrieve from a memory. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.retrieve({ memory: [ { name: 'memory-sdk', }, ], query: 'What are pipes in Langbase?', topK: 2, }); console.log(response); } main(); ``` --- Multi-Agent Routing https://langbase.com/docs/examples/memory/multi-agent-memory-routing/ import { generateMetadata } from '@/lib/generate-metadata'; # Multi-Agent Routing This example demonstrates how to implement intelligent routing between memory and non-memory agents based on query context. --- ```ts {{ title: 'index.ts' }} import dotenv from "dotenv"; import { Langbase } from 'langbase'; dotenv.config(); // Initialize Langbase with your API key const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! }); // Router agent checks whether or not to use memory agent. async function runRouterAgent(query) { const response = await langbase.pipes.run({ stream: false, name: 'router-agent', model: 'openai:gpt-4o-mini', // Ensure this model supports JSON mode messages: [ { role: 'system', // Update the content with your memory description content: `You are an expert query analyzer. Given a query, analyze whether it needs to use the memory agent or not. The memory agent contains a knowledge base that provides context-aware responses about AI, machine learning, and related topics. If the query is related to these topics, indicate that the memory agent should be used. Otherwise, indicate that it should not be used. 
Always respond in JSON format with the following structure: {"useMemory": true/false}.` }, { role: 'user', content: query } ] }); // Parse the response to determine if we should use the memory agent const parsedResponse = JSON.parse(response.completion); return parsedResponse.useMemory; } // Example usage async function main() { const query = 'What is AI?'; const useMemory = await runRouterAgent(query); console.log('Use Memory:', useMemory); if (useMemory) { // Run the memory agent const response = await langbase.pipes.run({ stream: false, name: 'memory-agent', // Name of your memory agent messages: [ { role: 'user', content: query } ] }); console.log('Response from memory agent:', response); } else { // Run the non-memory agent const response = await langbase.pipes.run({ stream: false, name: 'non-memory-agent', // Name of your non-memory agent messages: [ { role: 'user', content: query } ] }); console.log('Response from non-memory agent:', response); } } main(); ``` --- Delete a Memory https://langbase.com/docs/examples/memory/delete-memory/ import { generateMetadata } from '@/lib/generate-metadata'; # Delete a Memory This example demonstrates how to delete a memory from Langbase. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.delete({ name: 'memory-sdk', }); console.log(response); } main(); ``` --- Delete Documents from a Memory https://langbase.com/docs/examples/memory/delete-documents/ import { generateMetadata } from '@/lib/generate-metadata'; # Delete Documents from a Memory This example demonstrates how to delete documents from a memory. ## Key Features - Delete a specific document from a memory - Delete multiple documents from a memory - Delete all documents from a memory --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.documents.delete({ memoryName: 'memory-sdk', documentName: 'readme.md', }); console.log(response); } main(); ``` --- List Documents from a Memory https://langbase.com/docs/examples/memory/list-documents/ import { generateMetadata } from '@/lib/generate-metadata'; # List Documents from a Memory This example demonstrates how to list documents with name and metadata from a memory. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.documents.list({ memoryName: 'memory-sdk', }); console.log(response); } main(); ``` --- Retrieve Memories with Filters Eq https://langbase.com/docs/examples/memory/retrieve-filters-Eq/ import { generateMetadata } from '@/lib/generate-metadata'; # Retrieve Memories with Filters Eq This example demonstrates how to retrieve memories from Langbase with filters. --- ```ts {{ title: 'index.ts' }} /** * Basic example to demonstrate how to retrieve memories with filters. * * - And: This filter is used to retrieve memories that match all the filters. * - Eq: This filter is used to retrieve memories that match the exact value. 
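 * Filter values are compared against the `meta` key-value pairs attached to documents at upload time.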
* * In this example, we retrieve memories with the following filters: * - company: Langbase * - category: docs * * We expect to get all chunks of memory from the Langbase Docs memory that have the company Langbase and the category docs. * */ import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.retrieve({ memory: [ { name: "memory-sdk", filters: ["And", [ ["company", "Eq", "Langbase"], ["category", "Eq", "docs"] ]] }, ], query: "What are pipes in Langbase Docs?", topK: 3 }); console.log(response); } main(); ``` --- Create a Memory https://langbase.com/docs/examples/memory/create-memory/ import { generateMetadata } from '@/lib/generate-metadata'; # Create a Memory This example demonstrates how to create a memory. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.memories.create({ name: 'memory-sdk', embedding_model: 'cohere:embed-multilingual-v3.0' }); console.log(response); } main(); ``` --- Page https://langbase.com/docs/examples/embed/generate-embeddings/ import { generateMetadata } from '@/lib/generate-metadata'; ## Generate Embeddings from chunks This example demonstrates how to generate embeddings for a given text. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); /** * Generates embeddings for the given text chunks. */ async function main() { const response = await langbase.embed({ chunks: [ 'Langbase is the most powerful serverless platform for building AI agents with memory. Build, scale, and evaluate AI agents with semantic memory (RAG) and world-class developer experience. We process billions of AI messages/tokens daily. Built for every developer, not just AI/ML experts.', ], embeddingModel: 'openai:text-embedding-3-large', }); console.log(response); } main(); ``` --- Page https://langbase.com/docs/examples/chunker/chunk-text/ import { generateMetadata } from '@/lib/generate-metadata'; ## Chunk Text This example demonstrates how to chunk text. --- ```ts {{ title: 'index.ts' }} import {Langbase} from 'langbase'; import 'dotenv/config'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const content = `Langbase is the most powerful serverless AI platform for building AI agents with memory. Build, deploy, and scale AI agents with tools and memory (RAG). Simple AI primitives with a world-class developer experience without using any frameworks.`; const results = await langbase.chunker({ content, chunkMaxLength: 1024, chunkOverlap: 256, }); console.log(results); } main(); ``` --- Run Agent Streaming https://langbase.com/docs/examples/agent/run-stream/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Agent Streaming This example demonstrates how to run an agent with streaming response. 
--- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {getRunner, Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { // Get readable stream const {stream} = await langbase.agent.run({ stream: true, name: 'summary-agent', model: 'openai:gpt-4.1-mini', instructions: 'You are a helpful assistant that help users summarize text.', input: [ { role: 'user', content: 'Who is an AI Engineer?' } ] }); // Convert the stream to a stream runner. const runner = getRunner(stream); // Method 1: Using event listeners runner.on('connect', () => { console.log('Stream started.\n'); }); runner.on('content', content => { process.stdout.write(content); }); runner.on('end', () => { console.log('\nStream ended.'); }); runner.on('error', error => { console.error('Error:', error); }); } main(); ``` --- Run Agent with Structured Output https://langbase.com/docs/examples/agent/run-agent-structured-output/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Agent with Structured Output This example demonstrates how to run an agent with structured output. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; import {z} from 'zod'; import {zodToJsonSchema} from 'zod-to-json-schema'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); // Define the Strucutred Output JSON schema with Zod const MathReasoningSchema = z.object({ steps: z.array( z.object({ explanation: z.string(), output: z.string(), }), ), final_answer: z.string(), }); const jsonSchema = zodToJsonSchema(MathReasoningSchema, {target: 'openAi'}); async function main() { if (!process.env.LANGBASE_API_KEY) { console.error('❌ Missing LANGBASE_API_KEY in environment variables.'); process.exit(1); } const {output} = await langbase.agent.run({ name: 'math-tutor-agent', model: 'openai:gpt-4.1', apiKey: process.env.LLM_API_KEY!, instructions: 'You are a helpful math tutor. Guide the user through the solution step by step.', input: [ { role: 'user', content: 'How can I solve 8x + 22 = -23?', }, ], json: true, response_format: { type: 'json_schema', json_schema: { name: 'math_reasoning', schema: jsonSchema, }, }, }); // Parse and validate the response using Zod const solution = MathReasoningSchema.parse(JSON.parse(output)); console.log('✅ Structured Output Response:', solution); } main(); ``` --- Run Agent with MCP https://langbase.com/docs/examples/agent/run-mcp/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Agent with MCP This example demonstrates how to run an agent with MCP. --- ```ts {{ title: 'index.ts' }} import 'dotenv/config'; import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const response = await langbase.agent.run({ stream: false, mcp_servers: [ { type: 'url', name: 'deepwiki', url: 'https://mcp.deepwiki.com/sse', }, ], model: 'openai:gpt-4.1-mini', apiKey: process.env.OPENAI_API_KEY!, instructions: 'You are a helpful assistant that help users summarize text.', input: [ { role: 'user', content: 'What transport protocols does the 2025-03-26 version of the MCP spec (modelcontextprotocol/modelcontextprotocol) support?', }, ], }); console.log('response: ', response.output); } main(); ``` --- Run Agent https://langbase.com/docs/examples/agent/run/ import { generateMetadata } from '@/lib/generate-metadata'; # Run Agent This example demonstrates how to run an agent with a user message. 
---

```ts {{ title: 'index.ts' }}
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	const response = await langbase.agent.run({
		stream: false,
		name: 'summary-agent',
		model: 'openai:gpt-4.1-mini',
		apiKey: process.env.LLM_API_KEY!,
		instructions: 'You are a helpful assistant that helps users summarize text.',
		input: [
			{
				role: 'user',
				content: 'Who is an AI Engineer?',
			},
		],
	});

	console.log('response: ', response.output);
}

main();
```

---

Pricing for Chunk Primitive https://langbase.com/docs/chunker/platform/pricing/ import { generateMetadata } from '@/lib/generate-metadata';

# Pricing for Chunk Primitive

Requests to the Chunk primitive are counted as **Runs** against your subscription plan.

| Plan       | Runs                     | Overage                  |
|------------|--------------------------|--------------------------|
| Hobby      | 500                      | -                        |
| Enterprise | [Contact Us][contact-us] | [Contact Us][contact-us] |

Each run is an API request containing at most 1,000 tokens, which is roughly 750 words (about one article). If your API request contains, for instance, 1,500 tokens, it counts as 2 runs.

### Free Users

- **Limit**: 500 runs per month.
- **Overage**: No overage.

### Enterprise Users

There are no hard limits for Enterprise. Billing is based solely on the number of runs during each billing period.

If you have questions about your usage or need assistance, please don't hesitate to [contact us](mailto:support@langbase.com).

---

[contact-us]: mailto:support@langbase.com

Limits for Chunk Primitive https://langbase.com/docs/chunker/platform/limits/ import { generateMetadata } from '@/lib/generate-metadata';

# Limits for Chunk Primitive

The following Rate and Usage Limits apply to the Chunk primitive:

### Rate Limits

Chunk primitive requests follow our standard rate limits. See the [Rate Limits](/api-reference/limits/rate-limits) page for more details.

### Usage Limits

Requests to the Chunk primitive are counted as **Runs** against your subscription plan. See the [Run Usage Limits](/api-reference/limits/usage-limits) page for more details.

Pricing for Agent Primitive https://langbase.com/docs/agent/platform/pricing/ import { generateMetadata } from '@/lib/generate-metadata';

# Pricing for Agent Primitive

Requests to the Agent primitive are counted as **Runs** against your subscription plan.

| Plan       | Runs                     | Overage                  |
|------------|--------------------------|--------------------------|
| Hobby      | 500                      | -                        |
| Enterprise | [Contact Us][contact-us] | [Contact Us][contact-us] |

Each run is an API request containing at most 1,000 tokens, which is roughly 750 words (about one article). If your API request contains, for instance, 1,500 tokens, it counts as 2 runs.

### Free Users

- **Limit**: 500 runs per month.
- **Overage**: No overage.

### Enterprise Users

There are no hard limits for Enterprise. Billing is based solely on the number of runs during each billing period.

If you have questions about your usage or need assistance, please don't hesitate to [contact us](mailto:support@langbase.com).

---

[contact-us]: mailto:support@langbase.com

Limits for Agent Primitive https://langbase.com/docs/agent/platform/limits/ import { generateMetadata } from '@/lib/generate-metadata';

# Limits for Agent Primitive

The following Rate and Usage Limits apply to the Agent primitive:

### Rate Limits

Agent primitive requests follow our standard rate limits. See the [Rate Limits](/api-reference/limits/rate-limits) page for more details.
### Usage Limits Requests to the Agent primitive are counted as **Runs** against your subscription plan. See the [Run Usage Limits](/api-reference/limits/usage-limits) page for more details. Tools: Web Search <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/tools/web-search/ import { generateMetadata } from '@/lib/generate-metadata'; # Tools: Web Search v1 The `web-search` API endpoint allows you to search the web for relevant information. This is particularly useful when you need to gather up-to-date information from the internet for your AI applications. The web search functionality is powered by [Exa](https://exa.ai). --- ## Pre-requisites 1. **Langbase API Key**: Generate your API key from the [User/Org API key documentation](/api-reference/api-keys). 2. **Exa API Key**: Sign up at [Exa Dashboard](https://dashboard.exa.ai/api-keys) to get your web search API key. You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Search the web {{ tag: 'POST', label: '/v1/tools/web-search' }} Search the web for relevant information by sending queries to the web search API endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. Your Exa API key – obtain one from [Exa Dashboard](https://dashboard.exa.ai/api-keys). Replace `YOUR_EXA_API_KEY` with your key. --- ### Request Body The search query to execute. Currently only supports `'exa'` as the search service provider. The maximum number of results to return from the search. Optional array of domains to restrict the search to. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" EXA_API_KEY="" ``` ### Search the web
```ts {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const results = await langbase.tools.webSearch({ query: 'What is Langbase?', service: 'exa', apiKey: process.env.EXA_API_KEY!, totalResults: 2 }); console.log('Search results:', results); } main(); ``` ```python import requests import json def search_web(): url = 'https://api.langbase.com/v1/tools/web-search' api_key = 'YOUR_API_KEY' exa_api_key = 'YOUR_EXA_API_KEY' headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json', 'LB-WEB-SEARCH-KEY': exa_api_key } data = { 'query': 'What is Langbase?', 'service': 'exa', 'totalResults': 2, 'domains': ['https://langbase.com'] } response = requests.post( url, headers=headers, data=json.dumps(data) ) results = response.json() return results ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/tools/web-search \ -X POST \ -H 'Authorization: Bearer ' \ -H 'Content-Type: application/json' \ -H 'LB-WEB-SEARCH-KEY: ' \ -d '{ "query": "What is Langbase?", "service": "exa", "totalResults": 2, "domains": ["https://langbase.com"] }' ``` --- ### Response An array of objects containing the URL and the extracted content returned by the web search operation. ```ts {{title: 'Web Search API Response'}} interface ToolWebSearchResponse { url: string; content: string; } type WebSearchResponse = ToolWebSearchResponse[]; ``` The URL of the search result. The extracted content from the search result. ```json {{ title: 'API Response' }} [ { "url": "https://langbase.com/docs/introduction", "content": "Langbase is a powerful AI development platform..." }, { "url": "https://langbase.com/docs/getting-started", "content": "Get started with Langbase by installing our SDK..." } ] ``` --- Tools: Crawl <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/tools/crawl/ import { generateMetadata } from '@/lib/generate-metadata'; # Tools: Crawl v1 The `crawl` API endpoint allows you to extract content from web pages. This is particularly useful when you need to gather information from websites for your AI applications. The crawling functionality is powered by the following services: - [Spider.cloud](https://spider.cloud) - [Firecrawl](https://firecrawl.dev) --- ## Limitations - Maximum number of URLs per request: **100** - Maximum crawl depth (pages): **50** --- ## Pre-requisites 1. **Langbase API Key**: Generate your API key from the [User/Org API key documentation](/api-reference/api-keys). 2. **Crawl API Key**: Sign up at [Spider.cloud](https://spider.cloud) OR [Firecrawl](https://firecrawl.dev) to get your crawl API key. You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Crawl web pages {{ tag: 'POST', label: '/v1/tools/crawl' }} Extract content from web pages by sending URLs to the crawl API endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. Your crawl API key – obtain one from [Spider.cloud](https://spider.cloud) OR [Firecrawl](https://firecrawl.dev). Replace `` with your key. 
--- ### Request Body An array of URLs to crawl. Each URL should be a valid web address. Maximum 100 URLs per request. The maximum number of pages to crawl. This limits the depth of the crawl operation. Maximum value: 50. The crawling service to use. Options are `spider` or `firecrawl`. Default is `spider`. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" CRAWL_KEY="" ``` ### Crawl web pages
```ts {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const results = await langbase.tools.crawl({ url: ['https://example.com'], apiKey: process.env.CRAWL_KEY!, // Spider.cloud API key maxPages: 5 }); console.log('Crawled content:', results); } main(); ``` ```python import requests import json def crawl_webpages(): url = 'https://api.langbase.com/v1/tools/crawl' api_key = '' crawl_api_key = '' headers = { 'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json', 'LB-CRAWL-KEY': crawl_api_key } data = { 'url': [ 'https://example.com/page1', 'https://example.com/page2' ], 'maxPages': 5 } response = requests.post( url, headers=headers, data=json.dumps(data) ) results = response.json() return results ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/tools/crawl \ -X POST \ -H 'Authorization: Bearer ' \ -H 'Content-Type: application/json' \ -H 'LB-CRAWL-KEY: ' \ -d '{ "url": [ "https://example.com/page1", "https://example.com/page2" ], "maxPages": 5 }' ``` --- ### Response An array of objects containing the URL and the extracted content returned by the crawl operation. ```ts {{title: 'Crawl API Response'}} interface ToolCrawlResponse { url: string; content: string; } type CrawlResponse = ToolCrawlResponse[]; ``` The URL of the crawled page. The extracted content from the crawled page. ```json {{ title: 'API Response' }} [ { "url": "https://example.com/page1", "content": "Extracted content from the webpage..." }, { "url": "https://example.com/page2", "content": "More extracted content..." } ] ``` --- Thread: Update <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/threads/update/ import { generateMetadata } from '@/lib/generate-metadata'; # Thread: Update v1 The Threads API allows you to update existing conversation threads. This endpoint helps you manage thread metadata for better organization and filtering of your conversational applications. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Update an existing thread {{ tag: 'POST', label: '/v1/threads/{threadId}' }} Update an existing thread's metadata. ### Headers Request content type. Needs to be `application/json` Replace `LANGBASE_API_KEY` with your User/Org API key --- ### Path Parameters The ID of the thread to update. --- ### Body Parameters Key-value pairs to update or add to the thread's metadata. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Update an existing thread
```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY // Your User/Org API key }); async function main() { const threadId = "thread_abc123xyz456"; const updatedThread = await langbase.threads.update({ threadId: threadId, metadata: { status: "resolved", priority: "high" } }); console.log('Thread updated:', updatedThread.id); return updatedThread; } main(); ``` ```python import requests import json import os def main(): thread_id = "thread_abc123xyz456" url = f"https://api.langbase.com/v1/threads/{thread_id}" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "metadata": { "status": "resolved", "priority": "high" } } response = requests.post(url, headers=headers, json=payload) updated_thread = response.json() print(f"Thread updated: {updated_thread['id']}") return updated_thread main() ``` ```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/v1/threads/thread_abc123xyz456 \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "metadata": { "status": "resolved", "priority": "high" } }' ``` --- ### Response The response is a `ThreadsBaseResponse` object with information about the updated thread. The unique identifier for the thread. The type of object. Always "thread". The Unix timestamp (in seconds) for when the thread was created. The updated metadata associated with the thread. ```json {{ title: 'Response Example' }} { "id": "thread_abc123xyz456", "object": "thread", "created_at": 1714322048, "metadata": { "userId": "user123", "topic": "support", "status": "resolved", "priority": "high" } } ``` --- Thread: Get <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/threads/get/ import { generateMetadata } from '@/lib/generate-metadata'; # Thread: Get v1 The Get Thread API allows you to retrieve thread information like its metadata and other details. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Get a thread {{ tag: 'GET', label: '/v1/threads/{threadId}' }} Retrieve a thread by its ID. ### Headers Replace `LANGBASE_API_KEY` with your User/Org API key --- ### Path Parameters The unique identifier of the thread to retrieve. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Retrieve a thread
```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY // Your User/Org API key }); async function main() { const threadId = "thread_abc123xyz456"; const thread = await langbase.threads.get({ threadId: threadId }); console.log('Thread retrieved:', thread); return thread; } main(); ``` ```python import requests import os def main(thread_id): url = f"https://api.langbase.com/v1/threads/{thread_id}" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Authorization": f"Bearer {api_key}" } response = requests.get(url, headers=headers) if response.status_code == 200: thread = response.json() print(f"Thread retrieved: {thread['id']}") return thread else: print(f"Error: {response.status_code}") print(response.json()) return None # Call the function with your thread ID thread = main("thread_abc123xyz456") ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/threads/thread_abc123xyz456 \ -H 'Authorization: Bearer ' ``` --- ### Response The response is a `ThreadsBaseResponse` object with information about the requested thread. The unique identifier for the thread. The type of object. Always "thread". The Unix timestamp (in seconds) for when the thread was created. The metadata associated with the thread. This may include custom fields that were added when the thread was created or updated. ```json {{ title: 'Response Example' }} { "id": "thread_abc123xyz456", "object": "thread", "created_at": 1714322048, "metadata": { "userId": "user123", "topic": "support", "status": "active", "priority": "high" } } ``` --- Thread: Delete <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/threads/delete/ import { generateMetadata } from '@/lib/generate-metadata'; # Thread: Delete v1 The Delete Thread API allows you to delete threads that are no longer needed. This helps you manage conversation history and clean up unused threads in your applications. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Delete a thread {{ tag: 'DELETE', label: '/v1/threads/{threadId}' }} Delete a thread by its ID. ### Headers Replace `LANGBASE_API_KEY` with your User/Org API key --- ### Path Parameters The unique identifier of the thread to delete. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Delete a thread
```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY // Your User/Org API key }); async function main() { const threadId = "thread_abc123xyz456"; const result = await langbase.threads.delete({ threadId: threadId }); console.log('Thread deleted:', result.success); return result; } main(); ``` ```python import requests import os def main(thread_id): url = f"https://api.langbase.com/v1/threads/{thread_id}" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Authorization": f"Bearer {api_key}" } response = requests.delete(url, headers=headers) if response.status_code == 200: result = response.json() print(f"Thread deleted: {result['success']}") return result else: print(f"Error: {response.status_code}") print(response.json()) return None # Call the function with your thread ID result = main("thread_abc123xyz456") ``` ```bash {{ title: 'cURL' }} curl -X DELETE https://api.langbase.com/v1/threads/thread_abc123xyz456 \ -H 'Authorization: Bearer ' ``` --- ### Response The response is a simple JSON object that confirms the deletion operation. Indicates whether the thread was successfully deleted. ```json {{ title: 'Response Example' }} { "success": true } ``` --- Thread Messages: List <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/threads/list-messages/ import { generateMetadata } from '@/lib/generate-metadata'; # Thread Messages: List v1 The List Messages API allows you to retrieve all messages in a thread. This is essential for accessing complete conversation history and building interfaces that display past interactions. Messages are returned in chronological order with the oldest messages appearing first in the array. This makes it easy to reconstruct the conversation flow as it occurred. The List Messages API provides: - Complete conversation history for a specific thread - Chronological ordering of messages - Access to message metadata and attachments --- ## List messages in a thread {{ tag: 'GET', label: '/v1/threads/{threadId}/messages' }} Retrieve all messages in a specific thread. ### Headers Replace `LANGBASE_API_KEY` with your User/Org API key --- ### Path Parameters The unique identifier of the thread to retrieve messages from. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### List messages in a thread
```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY // Your User/Org API key }); async function listMessages() { const threadId = "thread_abc123xyz456"; const messages = await langbase.threads.messages.list({ threadId: threadId }); console.log(`Retrieved ${messages.length} messages from thread`); return messages; } listMessages(); ``` ```python import requests import os def list_messages(thread_id): url = f"https://api.langbase.com/v1/threads/{thread_id}/messages" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Authorization": f"Bearer {api_key}" } response = requests.get(url, headers=headers) if response.status_code == 200: messages = response.json() print(f"Retrieved {len(messages)} messages from thread") return messages else: print(f"Error: {response.status_code}") print(response.json()) return None # Call the function with your thread ID messages = list_messages("thread_abc123xyz456") ``` ```bash {{ title: 'cURL' }} curl -X GET https://api.langbase.com/v1/threads/thread_abc123xyz456/messages \ -H 'Authorization: Bearer YOUR_LANGBASE_API_KEY' ``` --- ### Response The response is an array of `ThreadMessagesBaseResponse` objects representing all messages in the thread. The unique identifier for the message. The ID of the thread that this message belongs to. The Unix timestamp (in seconds) for when the message was created. The role of the message author. One of 'user', 'assistant', 'system', or 'tool'. The content of the message. Will be null for messages that only contain tool calls. If the message is a tool response, this is the ID of the tool call it is responding to. If the message contains tool calls, this array will contain the tools called by the assistant. If the message is a tool response, this is the name of the tool that was called. Any attachments associated with the message. Key-value pairs of metadata associated with the message. ```json {{ title: 'Response Example' }} [ { "id": "msg_abc123", "thread_id": "thread_abc123xyz456", "created_at": 1714322048, "role": "system", "content": "You are a helpful assistant that provides concise responses.", "tool_call_id": null, "tool_calls": [], "name": null, "attachments": [], "metadata": {} }, { "id": "msg_def456", "thread_id": "thread_abc123xyz456", "created_at": 1714322148, "role": "user", "content": "Can you help me track my order?", "tool_call_id": null, "tool_calls": [], "name": null, "attachments": [], "metadata": { "userId": "user123", "source": "mobile_app" } }, { "id": "msg_ghi789", "thread_id": "thread_abc123xyz456", "created_at": 1714322248, "role": "assistant", "content": null, "tool_call_id": null, "tool_calls": [ { "id": "call_abc123", "type": "function", "function": { "name": "track_order", "arguments": "{\"order_id\":\"ORD-12345\"}" } } ], "name": null, "attachments": [], "metadata": {} }, { "id": "msg_jkl012", "thread_id": "thread_abc123xyz456", "created_at": 1714322348, "role": "tool", "content": "{\"status\":\"shipped\",\"estimated_delivery\":\"2025-05-01\",\"carrier\":\"FedEx\",\"tracking_number\":\"TRK123456789\"}", "tool_call_id": "call_abc123", "tool_calls": [], "name": "track_order", "attachments": [], "metadata": {} }, { "id": "msg_mno345", "thread_id": "thread_abc123xyz456", "created_at": 1714322448, "role": "assistant", "content": "Your order #ORD-12345 has been shipped via FedEx and is expected to arrive on May 1, 2025. 
You can track it with tracking number TRK123456789.", "tool_call_id": null, "tool_calls": [], "name": null, "attachments": [], "metadata": {} } ] ``` --- Thread: Create <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/threads/create/ import { generateMetadata } from '@/lib/generate-metadata'; # Thread: Create v1 The Threads API allows you to create and manage conversation threads for building conversational applications. Threads help you organize and maintain conversation history across multiple interactions. The Threads API supports: - Creating new threads with optional initial messages - Adding metadata to threads for organization and filtering - Attaching initial messages to establish conversation context --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Create a new thread {{ tag: 'POST', label: '/v1/threads' }} Create a new thread with optional initial messages and metadata. ### Headers Request content type. Needs to be `application/json` Replace `LANGBASE_API_KEY` with your User/Org API key --- ### Body Parameters Optional custom ID for the thread. If not provided, a unique ID will be generated. Optional key-value pairs to store with the thread for organizational purposes. Optional initial messages to populate the thread. ```ts {{ title: 'ThreadMessage Object' }} interface ThreadMessage extends Message { attachments?: any[]; metadata?: Record<string, string>; } interface Message { role: 'user' | 'assistant' | 'system' | 'tool'; content: string | null; name?: string; tool_call_id?: string; tool_calls?: ToolCall[]; } ``` The role of the message author: `system` | `user` | `assistant` | `tool` The content of the message. Can be null for tool calls. Optional name identifier for the message author. ID of the tool call this message is responding to, if applicable. Tool calls made in this message, if any. ```ts {{ title: 'ToolCall Object' }} interface ToolCall { id: string; type: 'function'; function: { name: string; arguments: string; }; } ``` Optional attachments for the message. Optional metadata for the message. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Create a new thread
```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY // Your User/Org API key }); async function main() { const thread = await langbase.threads.create({ metadata: { userId: "user123", topic: "support" }, messages: [{ role: "user", content: "Hello, I need help with my order!" }] }); console.log('Thread created:', thread.id); return thread; } main(); ``` ```python import requests import json import os def main(): url = "https://api.langbase.com/v1/threads" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "metadata": { "userId": "user123", "topic": "support" }, "messages": [{ "role": "user", "content": "Hello, I need help with my order!" }] } response = requests.post(url, headers=headers, json=payload) thread = response.json() print(f"Thread created: {thread['id']}") return thread main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/threads \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "metadata": { "userId": "user123", "topic": "support" }, "messages": [ { "role": "user", "content": "Hello, I need help with my order!" } ] }' ``` ```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY }); async function main() { const thread = await langbase.threads.create({ threadId: "cust-thread-2024-05", metadata: { customerId: "cust-456", department: "billing" } }); console.log('Custom thread created:', thread.id); return thread; } main(); ``` ```python import requests import json import os def main(): url = "https://api.langbase.com/v1/threads" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "threadId": "cust-thread-2024-05", "metadata": { "customerId": "cust-456", "department": "billing" } } response = requests.post(url, headers=headers, json=payload) thread = response.json() print(f"Custom thread created: {thread['id']}") return thread main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/threads \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "threadId": "cust-thread-2024-05", "metadata": { "customerId": "cust-456", "department": "billing" } }' ``` ```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY }); async function main() { const thread = await langbase.threads.create({ messages: [ { role: "system", content: "You are a helpful customer support agent for Acme Corp. Be concise and friendly." }, { role: "user", content: "I haven't received my order yet. It's been 5 days." } ] }); console.log('Thread with system message created:', thread.id); return thread; } main(); ``` ```python import requests import json import os def main(): url = "https://api.langbase.com/v1/threads" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "messages": [ { "role": "system", "content": "You are a helpful customer support agent for Acme Corp. Be concise and friendly." }, { "role": "user", "content": "I haven't received my order yet. It's been 5 days." 
} ] } response = requests.post(url, headers=headers, json=payload) thread = response.json() print(f"Thread with system message created: {thread['id']}") return thread main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/threads \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer LANGBASE_API_KEY' \ -d '{ "messages": [ { "role": "system", "content": "You are a helpful customer support agent for Acme Corp. Be concise and friendly." }, { "role": "user", "content": "I haven'\''t received my order yet. It'\''s been 5 days." } ] }' ``` --- ### Response The response is a `ThreadsBaseResponse` object with information about the created thread. The unique identifier for the thread. The type of object. Always "thread". The Unix timestamp (in seconds) for when the thread was created. The metadata associated with the thread. ```json {{ title: 'Response Example' }} { "id": "thread_abc123xyz456", "object": "thread", "created_at": 1714322048, "metadata": { "userId": "user123", "topic": "support" } } ``` --- Thread Messages: Append <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/threads/append-messages/ import { generateMetadata } from '@/lib/generate-metadata'; # Thread Messages: Append v1 The Append Messages API allows you to add new messages to an existing thread. This is essential for building interactive chat experiences and maintaining conversation history in your applications. The Append Messages API supports: - Adding user messages to represent input from your users - Adding assistant messages to represent AI responses - Adding system messages to guide conversation behavior - Adding tool messages to represent function call results - Including metadata and attachments with messages --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Append messages to a thread {{ tag: 'POST', label: '/v1/threads/{threadId}/messages' }} Add new messages to an existing thread. ### Headers Request content type. Needs to be `application/json` Replace `LANGBASE_API_KEY` with your User/Org API key --- ### Path Parameters The unique identifier of the thread to append messages to. --- ### Body Parameters An array containing message objects to append to the thread. ```ts {{ title: 'ThreadMessage Object' }} interface ThreadMessage extends Message { attachments?: any[]; metadata?: Record<string, string>; } interface Message { role: 'user' | 'assistant' | 'system' | 'tool'; content: string | null; name?: string; tool_call_id?: string; tool_calls?: ToolCall[]; } ``` The role of the message author: `system` | `user` | `assistant` | `tool` The content of the message. Can be null for tool calls. Optional name identifier for the message author. ID of the tool call this message is responding to, if applicable. Tool calls made in this message, if any.
```ts {{ title: 'ToolCall Object' }} interface ToolCall { id: string; type: 'function'; function: { name: string; arguments: string; }; } ``` Optional attachments for the message. Optional metadata for the message. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Append messages to a thread
```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY // Your User/Org API key }); async function main() { const threadId = "thread_abc123xyz456"; const messages = await langbase.threads.append({ threadId: threadId, messages: [{ role: "user", content: "I have a question about my order #12345" }] }); console.log('Messages appended:', messages); return messages; } main(); ``` ```python import requests import json import os def main(thread_id, messages): url = f"https://api.langbase.com/v1/threads/{thread_id}/messages" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "messages": messages } response = requests.post(url, headers=headers, json=payload) if response.status_code == 200: result = response.json() print(f"Messages appended: {len(result)} message(s)") return result else: print(f"Error: {response.status_code}") print(response.json()) return None # Call the function with your thread ID and messages messages = main( "thread_abc123xyz456", [{ "role": "user", "content": "I have a question about my order #12345" }] ) ``` ```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/v1/threads/thread_abc123xyz456/messages \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "messages": [ { "role": "user", "content": "I have a question about my order #12345" } ] }' ``` ```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY }); async function main() { const threadId = "thread_abc123xyz456"; const messages = await langbase.threads.append({ threadId: threadId, messages: [ { role: "user", content: "Can you help me with my account?" }, { role: "assistant", content: "Sure, I'd be happy to help with your account. What specific issue are you having?" }, { role: "user", content: "I can't reset my password." } ] }); console.log(`Appended ${messages.length} messages to thread`); return messages; } main(); ``` ```python import requests import json import os def main(thread_id): url = f"https://api.langbase.com/v1/threads/{thread_id}/messages" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "messages": [ { "role": "user", "content": "Can you help me with my account?" }, { "role": "assistant", "content": "Sure, I'd be happy to help with your account. What specific issue are you having?" }, { "role": "user", "content": "I can't reset my password." } ] } response = requests.post(url, headers=headers, json=payload) if response.status_code == 200: result = response.json() print(f"Appended {len(result)} messages to thread") return result else: print(f"Error: {response.status_code}") print(response.json()) return None # Call the function with your thread ID messages = main("thread_abc123xyz456") ``` ```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/v1/threads/thread_abc123xyz456/messages \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "messages": [ { "role": "user", "content": "Can you help me with my account?" }, { "role": "assistant", "content": "Sure, I'\''d be happy to help with your account. What specific issue are you having?" }, { "role": "user", "content": "I can'\''t reset my password." 
} ] }' ``` ```js {{ title: 'Node.js' }} import { Langbase } from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY }); async function main() { const threadId = "thread_abc123xyz456"; const messages = await langbase.threads.append({ threadId: threadId, messages: [ { role: "user", content: "Did you ship my order yet?", metadata: { orderId: "order-789012", userId: "user-456", source: "mobile_app", location: "customer_support" } } ] }); console.log('Message with metadata appended:', messages[0]); return messages; } main(); ``` ```python import requests import json import os def main(thread_id): url = f"https://api.langbase.com/v1/threads/{thread_id}/messages" api_key = os.environ["LANGBASE_API_KEY"] headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } payload = { "messages": [ { "role": "user", "content": "Did you ship my order yet?", "metadata": { "orderId": "order-789012", "userId": "user-456", "source": "mobile_app", "location": "customer_support" } } ] } response = requests.post(url, headers=headers, json=payload) if response.status_code == 200: result = response.json() print(f"Message with metadata appended: {result[0]['id']}") return result else: print(f"Error: {response.status_code}") print(response.json()) return None # Call the function with your thread ID messages = main("thread_abc123xyz456") ``` ```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/v1/threads/thread_abc123xyz456/messages \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "messages": [ { "role": "user", "content": "Did you ship my order yet?", "metadata": { "orderId": "order-789012", "userId": "user-456", "source": "mobile_app", "location": "customer_support" } } ] }' ``` --- ### Response The response is an array of `ThreadMessagesBaseResponse` objects representing the newly added messages. The unique identifier for the message. The ID of the thread that this message belongs to. The Unix timestamp (in seconds) for when the message was created. The role of the message author. One of 'user', 'assistant', 'system', or 'tool'. The content of the message. Will be null for messages that only contain tool calls. If the message is a tool response, this is the ID of the tool call it is responding to. If the message contains tool calls, this array will contain the tools called by the assistant. If the message is a tool response, this is the name of the tool that was called. Any attachments associated with the message. Key-value pairs of metadata associated with the message. ```json {{ title: 'Response Example - Single Message' }} [ { "id": "msg_abc123xyz456", "thread_id": "thread_abc123xyz456", "created_at": 1714322048, "role": "user", "content": "I have a question about my order #12345", "tool_call_id": null, "tool_calls": [], "name": null, "attachments": [], "metadata": {} } ] ``` ```json {{ title: 'Response Example - Multiple Messages' }} [ { "id": "msg_abc123xyz456", "thread_id": "thread_abc123xyz456", "created_at": 1714322048, "role": "user", "content": "Can you help me with my account?", "tool_call_id": null, "tool_calls": [], "name": null, "attachments": [], "metadata": {} }, { "id": "msg_def456uvw789", "thread_id": "thread_abc123xyz456", "created_at": 1714322048, "role": "assistant", "content": "Sure, I'd be happy to help with your account. 
What specific issue are you having?", "tool_call_id": null, "tool_calls": [], "name": null, "attachments": [], "metadata": {} }, { "id": "msg_ghi789rst012", "thread_id": "thread_abc123xyz456", "created_at": 1714322048, "role": "user", "content": "I can't reset my password.", "tool_call_id": null, "tool_calls": [], "name": null, "attachments": [], "metadata": {} } ] ``` --- Memory: Retrieve <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/retrieve/ import { generateMetadata } from '@/lib/generate-metadata'; # Memory: Retrieve v1 The `retrieve` memory API endpoint allows you to retrieve similar chunks from an existing memory on Langbase based on a query. This endpoint requires an Org or User API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Retrieve similar chunks from multiple memory {{ tag: 'POST', label: '/v1/memory/retrieve' }} Retrieve similar chunks by specifying the query and memory names in the request body. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. ### Body Parameters The search query for retrieving similar chunks. An array of memory objects from which to retrieve similar chunks. ```js {{title: 'Memory Object'}} interface Memory { name: string; filters?: MemoryFilters; } ``` The name of the memory. Optional array of filters to narrow down the search results. ```ts {{title: 'MemoryFilters Type'}} type FilterOperator = 'Eq' | 'NotEq' | 'In' | 'NotIn' | 'And' | 'Or'; type FilterConnective = 'And' | 'Or'; type FilterValue = string | string[]; type FilterCondition = [string, FilterOperator, FilterValue]; type MemoryFilters = [FilterConnective, MemoryFilters[]] | FilterCondition; ``` Filters can be either: - A single condition: `["field", "operator", "value"]` - A nested structure: `["And"|"Or", MemoryFilters]` The number of top similar chunks to return from memory. Default is 20, minimum is 1, and maximum is 100. 
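As an illustration of the `MemoryFilters` shape described above, here is a sketch of both forms; `category` and `section` are example metadata keys, not required fields:

```ts {{ title: 'MemoryFilters examples' }}
// A single condition: match chunks whose `category` metadata equals "features".
const simple: MemoryFilters = ['category', 'Eq', 'features'];

// A nested structure: `category` must equal "features" AND
// `section` must be one of the listed values.
const nested: MemoryFilters = [
	'And',
	[
		['category', 'Eq', 'features'],
		['section', 'In', ['overview', 'features']],
	],
];
```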
## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Retrieve Similar Chunks ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const chunks = await langbase.memories.retrieve({ query: "What are the key features?", memory: [{ name: "knowledge-base" }] }); console.log('Memory chunk:', chunks); } main(); ``` ```python import requests def retrieve_similar_chunks(): url = 'https://api.langbase.com/v1/memory/retrieve' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } data = { 'query': 'What are the key features?', 'memory': [ { 'name': 'knowledge-base' } ] } response = requests.post(url, headers=headers, json=data) result = response.json() return result ``` ```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/v1/memory/retrieve \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "query": "What are the key features?", "memory": [ { "name": "knowledge-base" } ] }' ``` ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); const results = await langbase.memories.retrieve({ query: "What are the key features?", memory: [{ name: "knowledge-base", filters: [ ["category", "Eq", "features"] ] }] }); ``` ```python import requests def retrieve_similar_chunks(): url = 'https://api.langbase.com/v1/memory/retrieve' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } data = { 'query': 'What are the key features?', 'memory': [ { 'name': 'knowledge-base', 'filters': [ ['category', 'Eq', 'features'] ] } ] } response = requests.post(url, headers=headers, json=data) result = response.json() return result ``` ```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/v1/memory/retrieve \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "query": "What are the key features?", "memory": [ { "name": "knowledge-base", "filters": [ ["category", "Eq", "features"] ] } ] }' ``` ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); const results = await langbase.memories.retrieve({ query: "What are the key features?", memory: [{ name: "knowledge-base", filters: [ ["category", "Eq", "features"], ["And", [ ["category", "Eq", "features"], ["section", "In", ["overview", "features"]] ]] ] }] }); ``` ```python import requests def retrieve_similar_chunks(): url = 'https://api.langbase.com/v1/memory/retrieve' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } data = { 'query': 'What are the key features?', 'memory': [ { 'name': 'knowledge-base', 'filters': [ ['category', 'Eq', 'features'], ['And', [ ['category', 'Eq', 'features'], ['section', 'In', ['overview', 'features']] ]] ] } ] } response = requests.post(url, headers=headers, json=data) result = response.json() return result ``` ```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/v1/memory/retrieve \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "query": "What are the key features?", "memory": [ { "name": "knowledge-base", "filters": [ ["category", "Eq", 
"features"], ["And", [ ["category", "Eq", "features"], ["section", "In", ["overview", "features"]] ]] ] } ] }' ``` --- ### Response The array of retrieve response objects returned by the API endpoint. ```ts {{title: 'MemoryRetrieveResponse'}} interface MemoryRetrieveResponse { text: string; similarity: number; meta: Record; } ``` Retrieved text segment from memory. Similarity score between the query and retrieved text (0-1 range). Additional metadata associated with the retrieved text. ```json {{ title: 'Basic' }} [ { "text": "Key features of Langbase include: semantic search capabilities, flexible memory management, and scalable architecture for handling large datasets.", "similarity": 0.92, "meta": { "category": "features", "section": "overview" } }, { "text": "Our platform offers advanced features like real-time memory updates, custom metadata filtering, and enterprise-grade security.", "similarity": 0.87, "meta": { "category": "updates", "section": "highlights" } }, { "text": "Platform highlights include AI-powered memory retrieval, customizable embedding models, and advanced filtering capabilities.", "similarity": 0.85, "meta": { "category": "features", "section": "highlights" } } ] ``` ```json {{ title: 'With Filters' }} [ { "text": "Key features of Langbase include: semantic search capabilities, flexible memory management, and scalable architecture for handling large datasets.", "similarity": 0.92, "meta": { "category": "features", "section": "overview" } }, { "text": "Platform highlights include AI-powered memory retrieval, customizable embedding models, and advanced filtering capabilities.", "similarity": 0.85, "meta": { "category": "features", "section": "highlights" } } ] ``` ```json {{ title: 'Advanced Filters' }} [ { "text": "Key features of Langbase include: semantic search capabilities, flexible memory management, and scalable architecture for handling large datasets.", "similarity": 0.92, "meta": { "category": "features", "section": "overview" } } ] ``` --- Document: List <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/document-list/ import { generateMetadata } from '@/lib/generate-metadata'; # Document: List v1 The `list` document API endpoint allows you to list documents in a memory on Langbase dynamically with API. This endpoint requires a User or Org API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Get a list of memory documents {{ tag: 'GET', label: '/v1/memory/{memoryName}/documents' }} Get a list of documents in a memory by sending a GET request to this endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. ### Path parameters The memory name. Replace `{memoryName}` with the memory name. 
## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Get memory documents ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const documents = await langbase.memories.documents.list({ memoryName: 'knowledge-base' }); console.log('Documents:', documents); } main(); ``` ```python import requests import os def get_memory_documents_list(memory_name): url = f'https://api.langbase.com/v1/memory/{memory_name}/documents' api_key = os.environ['LANGBASE_API_KEY'] headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.get(url, headers=headers) memory_documents_list = response.json() return memory_documents_list ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/memory/{memoryName}/documents \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer LANGBASE_API_KEY" ``` --- ### Response The response array returned by the API endpoint. ```ts {{title: 'MemoryDocument'}} interface MemoryDocument { name: string; status: | 'queued' | 'in_progress' | 'completed' | 'failed'; status_message: string | null; metadata: { size: number; type: | 'application/pdf' | 'text/plain' | 'text/markdown' | 'text/csv' | 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' | 'application/vnd.ms-excel'; }; enabled: boolean; chunk_size: number; chunk_overlap: number; owner_login: string; } ``` Name of the document. Current processing status of the document. Can be one of: - `queued`: Document is waiting to be processed - `in_progress`: Document is currently being processed - `completed`: Document has been successfully processed - `failed`: Document processing failed Additional details about the document's status, particularly useful when status is 'failed'. Document metadata including: - `size`: Size of the document in bytes - `type`: MIME type of the document Whether the document is enabled for retrieval. Size of text chunks used for document processing. Overlap size between consecutive text chunks. Login of the document owner. ```json {{ title: 'API Response' }} [ { "name": "product-manual.pdf", "status": "completed", "status_message": null, "metadata": { "size": 1156, "type": "application/pdf" }, "enabled": true, "chunk_size": 10000, "chunk_overlap": 2048, "owner_login": "user123" }, { "name": "technical-specs.md", "status": "in_progress", "status_message": null, "metadata": { "size": 1156, "type": "text/markdown" }, "enabled": true, "chunk_size": 10000, "chunk_overlap": 2048, "owner_login": "user123" } ] ``` --- Memory: List <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/list/ import { generateMetadata } from '@/lib/generate-metadata'; # Memory: List v1 The `list` memory API endpoint allows you to get a list of memory sets on Langbase with the API. This endpoint requires a User or Org API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3.
In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Get a list of memory {{ tag: 'GET', label: '/v1/memory' }} Get a list of all memory by sending a GET request to this endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `LANGBASE_API_KEY` with your user/org API key. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### List Memory ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const memoryList = await langbase.memories.list(); console.log('Memories:', memoryList); } main(); ``` ```python import requests import os def get_memory_list(): url = 'https://api.langbase.com/v1/memory' api_key = os.environ['LANGBASE_API_KEY'] headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.get(url, headers=headers) memory_list = response.json() return memory_list ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/memory \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer LANGBASE_API_KEY" ``` --- ### Response An array of Memory objects returned by the API endpoint. ```ts {{title: 'Memory'}} interface MemoryListResponse { name: string; description: string; owner_login: string; url: string; embeddingModel: | 'openai:text-embedding-3-large' | 'cohere:embed-v4.0' | 'cohere:embed-multilingual-v3.0' | 'cohere:embed-multilingual-light-v3.0'; } ``` Name of the memory. Description of the AI memory. Login of the memory owner. Memory studio URL. The embedding model used by the AI memory. - `openai:text-embedding-3-large` - `cohere:embed-v4.0` - `cohere:embed-multilingual-v3.0` - `cohere:embed-multilingual-light-v3.0` ```json {{ title: 'API Response' }} [ { "name": "knowledge-base", "description": "An AI memory for storing company internal docs.", "owner_login": "user123", "url": "https://langbase.com/user123/document-memory", "embeddingModel": "openai:text-embedding-3-large" }, { "name": "multilingual-knowledge-base", "description": "Advanced memory with multilingual support", "owner_login": "user123", "url": "https://langbase.com/user123/multilingual-memory", "embeddingModel": "cohere:embed-multilingual-v3.0" } ] ``` --- Document: Upload <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/document-upload/ import { generateMetadata } from '@/lib/generate-metadata'; # Document: Upload v1 The `upload` document API endpoint allows you to upload documents to a memory in Langbase with the API. This endpoint requires a User or Org API key. This endpoint can also be used to replace an existing document in a memory. To do this, you need to provide the same `documentName` and `memoryName` attributes as the existing document. We also have a separate guide on [how to replace an existing document in memory](/guides/memory-document-replace). --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1.
Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Step 1: Get SignedUrl to Upload Document {{ tag: 'POST', label: '/v1/memory/documents' }} Uploading a document to a memory requires a signed URL. `POST` a request to this endpoint to get a signed URL and use `PUT` method to upload the document. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. ### Body Parameters Name of the memory to which the document will be uploaded. Name of the document. ### Optional attributes Custom metadata for the document, limited to string key-value pairs. A maximum of 10 pairs is allowed. ### Deprecated attributes Name of the document. ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; import {readFileSync} from 'fs'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const hasDocumentUploaded = await langbase.memories.documents.upload({ memoryName: 'knowledge-base', contentType: 'application/pdf', documentName: 'technical-doc.pdf', document: readFileSync('document.pdf'), meta: { category: 'technical', section: 'overview', }, }); if (hasDocumentUploaded.ok) { console.log('Document uploaded successfully'); } } main(); ``` ```python import requests import json def get_signed_upload_url(): url = 'https://api.langbase.com/v1/memory/documents' api_key = '' newDoc = { "memoryName": "rag-memory", "documentName": "test-document.pdf", "meta": { "category": "technical", "section": "overview" } } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(newDoc)) signed_upload_url = response.json() return signed_upload_url ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/memory/documents \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "memoryName": "rag-memory", "documentName": "test-document.pdf", "meta": { "category": "technical", "section": "overview" } }' ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Response'}} interface Response { signedUrl: string; } ``` Signed URL that can be used to upload a document. ```json {{ title: 'API Response' }} { "signedUrl": "https://b.langbase.com/..." } ``` --- ## Step 2: Upload Document on SignedUrl {{ tag: 'PUT', label: '{SignedUrl}' }} Use the signed URL to upload the document to the memory. The signed URL is valid for 2 hours. ### Headers Request content type. Needs to be the MIME type of the document. Currently, we support `application/pdf`, `text/plain`, `text/markdown`, `text/csv`, and all major code files as `text/plain`. For csv, pdf, text, and markdown files, it should correspond to the file type used in the `documentName` attribute in step 1. For code files, it should be `text/plain`. ### Body Parameters The body of the file to be stored in the bucket. It can be `Buffer`, `File`, `FormData`, or `ReadableStream` type. 
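For completeness, here is a minimal Node.js sketch of this `PUT` step using the signed URL returned by Step 1 (the SDK's `documents.upload` call shown in Step 1 performs both steps for you). It assumes Node 18+ with global `fetch`; the file name is illustrative:

```js {{ title: 'Node.js' }}
import { readFileSync } from 'fs';

// `signedUrl` is the URL returned by POST /v1/memory/documents in Step 1.
async function uploadToSignedUrl(signedUrl) {
	const response = await fetch(signedUrl, {
		method: 'PUT',
		headers: { 'Content-Type': 'application/pdf' }, // must match the document type
		body: readFileSync('document.pdf'), // a Buffer works as a fetch body in Node 18+
	});
	console.log('Upload ok:', response.ok);
	return response;
}
```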
## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Upload Document ```python import requests def upload_document(signed_url, file_path): with open(file_path, 'rb') as file: headers = {'Content-Type': 'application/pdf'} response = requests.put(signed_url, headers=headers, data=file) return response ``` ```bash {{ title: 'cURL' }} curl -X PUT \ -H 'Content-Type: application/pdf' \ --data-binary "@path/to/pdfFile" \ "{SignedUrl}" ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Response'}} interface Response { ok: boolean; status: number; statusText: string; } ``` Indicates whether the upload was successful. HTTP status code of the upload response. HTTP status message corresponding to the status code. ```json {{ title: 'API Response' }} { "ok": true, "status": 200, "statusText": "OK" } ``` --- Document: Delete <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/document-delete/ import { generateMetadata } from '@/lib/generate-metadata'; # Document: Delete v1 The `delete` document API endpoint allows you to delete an existing document from a memory on Langbase dynamically with the API. This endpoint requires an Org or User API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Delete a document {{ tag: 'DELETE', label: '/v1/memory/{memoryName}/documents/{documentName}' }} Delete a document by specifying the memory name and document name in the path. ### Headers Request content type. Needs to be `application/json`. Replace `LANGBASE_API_KEY` with your user/org API key. ### Path Parameters The name of the memory containing the document. Replace `{memoryName}` with the name of the memory. The name of the document to delete. Replace `{documentName}` with the name of the document.
## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Delete Document ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const hasDocDeleted = await langbase.memories.documents.delete({ memoryName: 'knowledge-base', documentName: 'old-report.pdf' }); console.log('Document deleted:', hasDocDeleted); } main(); ``` ```python import requests import os def delete_document(memory_name, document_name): url = f'https://api.langbase.com/v1/memory/{memory_name}/documents/{document_name}' api_key = os.environ['LANGBASE_API_KEY'] headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.delete(url, headers=headers) result = response.json() return result ``` ```bash {{ title: 'cURL' }} curl -X DELETE https://api.langbase.com/v1/memory/{memoryName}/documents/{documentName} \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer LANGBASE_API_KEY" ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Response'}} interface Response { success: boolean; } ``` Indicates whether the deletion was successful. ```json {{ title: 'API Response' }} { "success": true } ``` --- Document: Embeddings Retry Generate <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/document-embeddings-retry/ import { generateMetadata } from '@/lib/generate-metadata'; # Document: Embeddings Retry Generate v1 Document embeddings generation may fail for various reasons, such as OpenAI API rate limits, invalid API keys, document parsing errors, special characters, corrupted or locked PDFs, and excessively large documents. If the issue is related to the API key, correct the key first; before retrying, ensure that the document is accessible and can be parsed correctly. You need to regenerate document embeddings before you can use the document in a memory. The `retry` document API endpoint allows you to retry generating document embeddings in a memory on Langbase with the API. This endpoint requires a User or Org API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Retry generating document embeddings in a memory {{ tag: 'GET', label: '/v1/memory/{memoryName}/documents/{documentName}/embeddings/retry' }} Retry generating document embeddings in a memory by sending a GET request to this endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `LANGBASE_API_KEY` with your user/org API key. ### Path Parameters The name of the memory to which the document belongs. Replace `{memoryName}` with the name of the memory. The name of the document. Replace `{documentName}` with the name of the document.
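A retry is only useful for documents whose embeddings actually failed. As an illustration, this sketch combines the endpoint with the Document: List API to retry every `failed` document in a memory; `retryFailedEmbeddings` is an illustrative helper, not part of the SDK:

```js {{ title: 'Node.js' }}
import { Langbase } from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY, // Your User/Org API key
});

// Retry embeddings generation for every document that previously failed.
async function retryFailedEmbeddings(memoryName) {
	const documents = await langbase.memories.documents.list({ memoryName });
	for (const doc of documents.filter(d => d.status === 'failed')) {
		const result = await langbase.memories.documents.embeddings.retry({
			memoryName,
			documentName: doc.name,
		});
		// Response shape per the Response section below: { success: boolean }
		console.log(`Retried ${doc.name}:`, result.success);
	}
}
```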
## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Retry generating document embeddings ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const hasEmbeddingRetried = await langbase.memories.documents.embeddings.retry( { memoryName: 'knowledge-base', documentName: 'technical-doc.pdf', }, ); console.log('Embeddings retry initiated:', hasEmbeddingRetried); } main(); ``` ```python import requests import os def retry_document_embeddings(memory_name, document_name): url = f'https://api.langbase.com/v1/memory/{memory_name}/documents/{document_name}/embeddings/retry' api_key = os.environ['LANGBASE_API_KEY'] headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.get(url, headers=headers) res = response.json() return res ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/memory/{memoryName}/documents/{documentName}/embeddings/retry \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer LANGBASE_API_KEY" ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Response'}} interface Response { success: boolean; } ``` Indicates whether the embedding retry was successfully initiated. ```json {{ title: 'API Response' }} { "success": true } ``` --- Memory: Create <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/create/ import { generateMetadata } from '@/lib/generate-metadata'; # Memory: Create v1 The `create` memory API endpoint allows you to create a new memory on Langbase dynamically with the API. This endpoint requires a User or Org API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Create a new memory {{ tag: 'POST', label: '/v1/memory' }} Create a new memory by sending the memory data inside the request body. ### Headers Request content type. Needs to be `application/json`. Replace `LANGBASE_API_KEY` with your user/org API key. ### Body Parameters Name of the memory. Short description of the memory. Default: `''` ### Optional Parameters Name of the embedding model to use for the memory. Default: `openai:text-embedding-3-large` Supported models: - `openai:text-embedding-3-large` - `cohere:embed-v4.0` - `cohere:embed-multilingual-v3.0` - `cohere:embed-multilingual-light-v3.0` - `google:text-embedding-004` Maximum number of characters in a single chunk. Default: `10000` Maximum: `30000` Cohere has a limit of 512 tokens (1 token ~= 4 characters in English). If you are using Cohere models, adjust the `chunk_size` accordingly. For most use cases, default values should work fine. Number of characters to overlap between chunks. Default: `2048` Maximum: Less than `chunk_size` Number of chunks to return.
Default: `10` Minimum: `1` Maximum: `100` ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Create memory ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const memory = await langbase.memories.create({ name: 'knowledge-base', description: 'Advanced memory with multilingual support', embedding_model: 'cohere:embed-multilingual-v3.0', }); console.log('Memory created:', memory); } main(); ``` ```python import requests import json def create_new_memory(): url = 'https://api.langbase.com/v1/memory' api_key = '' memory = { "name": 'knowledge-base', "description": 'An AI memory for storing company internal docs.', "embedding_model": "openai:text-embedding-3-large" } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(memory)) new_memory = response.json() return new_memory ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/memory \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "name": "knowledge-base", "description": "An AI memory for storing company internal docs.", "embedding_model": "openai:text-embedding-3-large", "chunk_size": 10000, "chunk_overlap": 2048, "top_k": 10 }' ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Memory'}} interface Memory { name: string; description: string; owner_login: string; url: string; embedding_model: | 'openai:text-embedding-3-large' | 'cohere:embed-multilingual-v3.0' | 'cohere:embed-multilingual-light-v3.0' | 'google:text-embedding-004'; } ``` Name of the memory. Description of the AI memory. Login of the memory owner. Memory studio URL. The embedding model used by the AI memory. - `openai:text-embedding-3-large` - `cohere:embed-multilingual-v3.0` - `cohere:embed-multilingual-light-v3.0` - `google:text-embedding-004` ```json {{ title: 'API Response' }} { "name": "knowledge-base", "description": "An AI memory for storing company internal docs.", "chunk_size": 10000, "chunk_overlap": 2048, "owner_login": "user123", "url": "https://langbase.com/user123/document-memory", "embedding_model": "openai:text-embedding-3-large", "top_k": 10 } ``` --- Memory: Delete <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/memory/delete/ import { generateMetadata } from '@/lib/generate-metadata'; # Memory: Delete v1 The `delete` memory API endpoint allows you to delete an existing memory on Langbase dynamically with the API. This endpoint requires an Org or User API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. 
--- ## Delete a memory {{ tag: 'DELETE', label: '/v1/memory/{memoryName}' }} Delete a memory by sending a DELETE request to this endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `LANGBASE_API_KEY` with your user/org API key. ### Path Parameters The name of the memory to delete. Replace `{memoryName}` with the name of the memory. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Delete Memory ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const hasMemoryDeleted = await langbase.memories.delete({ name: 'knowledge-base' }); console.log('Memory deleted:', hasMemoryDeleted); } main(); ``` ```python import requests import os def delete_memory(memory_name): url = f'https://api.langbase.com/v1/memory/{memory_name}' api_key = os.environ['LANGBASE_API_KEY'] headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.delete(url, headers=headers) result = response.json() return result ``` ```bash {{ title: 'cURL' }} curl -X DELETE https://api.langbase.com/v1/memory/{memoryName} \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer LANGBASE_API_KEY" ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Response'}} interface Response { success: boolean; } ``` Indicates whether the deletion was successful. ```json {{ title: 'API Response' }} { "success": true } ``` --- Usage Limits https://langbase.com/docs/api-reference/limits/usage-limits/ import { generateMetadata } from '@/lib/generate-metadata'; # Usage Limits Usage limits are the number of runs that a user can make to the Langbase API within their subscription plan. --- ## Overview We offer the following runs and overage limits for each subscription plan: | Plan | Runs | Overage | |------------|----------|----------| | Hobby | 500 | - | | Enterprise | [Contact Us][contact-us] | [Contact Us][contact-us] | Each run is an API request containing at most 1,000 tokens, which is equivalent to almost 750 words (roughly an article). If your API request has, for instance, 1,500 tokens in it, it will count as 2 runs. ### Free Users The usage limit for free tier users is as follows: - **Limit**: 500 runs per month. - **Overage**: No overage. ### Enterprise Users There are no hard limits for Enterprise. Billing is based solely on the number of runs during each billing period. If you have questions about your usage or need assistance, please don't hesitate to [contact us](mailto:support@langbase.com). --- ### Usage Limit Headers Free tier runs to the Langbase API return the following response headers. These headers convey the status of your usage limits. | Header | Description | | --------------------------- | ------------------------------------------------------------ | | `lb-usagelimit-limit` | The usage limit applicable to your request. | | `lb-usagelimit-remaining` | The number of runs remaining in the current usage limit window. | | `lb-usagelimit-used` | The number of runs made in the current usage limit window. | ### About Usage Limits - If a free tier user exceeds the usage limit, an HTTP status code `403 USAGE_EXCEEDED` error will be sent. - Usage limits are applied on a per-user or per-organization basis. For organizations, all runs made by the organization are collectively restricted within a single limit window.
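As an illustration, a minimal sketch that reads these headers from any authenticated response (assuming an ESM module on Node 18+ with global `fetch`; the endpoint is just an example):

```js {{ title: 'Node.js' }}
// Read the usage limit headers from a Langbase API response.
const response = await fetch('https://api.langbase.com/v1/memory', {
	headers: { Authorization: `Bearer ${process.env.LANGBASE_API_KEY}` },
});

const limit = Number(response.headers.get('lb-usagelimit-limit'));
const remaining = Number(response.headers.get('lb-usagelimit-remaining'));

// Warn when fewer than 10% of the monthly runs are left.
if (remaining < limit * 0.1) {
	console.warn(`Only ${remaining} of ${limit} runs left this month.`);
}
```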
[contact-us]: mailto:support@langbase.com --- Rate Limits https://langbase.com/docs/api-reference/limits/rate-limits/ import { generateMetadata } from '@/lib/generate-metadata'; # Rate Limits Rate limits are the number of requests that a client can make to the Langbase API within a specific time period. These limits are enforced to ensure fair usage of the API and to prevent abuse. --- ## Overview We have implemented the following rate limits on the Langbase API. The limits vary by your subscription plan. | Plan | Rate Limit | |---------------|----------------------------------------------------------------------------------------| | **Hobby** | 25 requests per minute (RPM) | | **Pro** | 300 requests per minute (RPM) | | **Enterprise**| 300 requests per minute (RPM) – please [contact us][contact-us] for higher rate limits | Custom enterprise packages can go up to 1K requests per second. Please [contact us][contact-us] for higher limits. --- ### Pipe API v1 Following is the list of Pipe API (v1) endpoints and their rate limits based on your subscription plan. | API Endpoints | Hobby Plan | Pro Plan | Enterprise Plan | |-------------------------------------|------------|----------|--------------------------| | Run: `/v1/pipes/run` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Create pipe: `/v1/pipes` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List pipes: `/v1/pipes` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Update pipe: `/v1/pipes/{pipeName}` | 25 RPM | 300 RPM | [Contact Us][contact-us] | *RPM = requests per minute* --- ### Pipe API beta Following is the list of Pipe API (beta) endpoints and their rate limits based on your subscription plan. | API Endpoints | Hobby Plan | Pro Plan | Enterprise Plan | |-------------------------------------------|------------|----------|--------------------------| | Run: `/beta/run` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Generate: `/beta/generate` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Chat: `/beta/chat` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Create pipe (user): `/beta/user/pipes` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Create pipe (org): `/beta/org/{org}/pipes`| 25 RPM | 300 RPM | [Contact Us][contact-us] | | Update pipe: `/beta/pipes/{owner}/{pipe}` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List pipes (user): `/beta/user/pipes` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List pipes (org): `/beta/org/{org}/pipes` | 25 RPM | 300 RPM | [Contact Us][contact-us] | *RPM = requests per minute* --- ### Memory API v1 Following is the list of Memory API (v1) endpoints and their rate limits based on your subscription plan. | API Endpoints | Hobby Plan | Pro Plan | Enterprise Plan | |------------------------------------------|------------|----------|--------------------------| | Create memory: `/v1/memory` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List memory: `/v1/memory` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Delete memory: `/v1/memory/{memoryName}` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Retrieve memory: `/v1/memory/retrieve` | 25 RPM | 300 RPM | [Contact Us][contact-us] |
*RPM = requests per minute* --- ### Memory API beta Following is the list of Memory API (beta) endpoints and their rate limits based on your subscription plan. | API Endpoints | Hobby Plan | Pro Plan | Enterprise Plan | |-------------------------------------------------------------|------------|----------|--------------------------| | Create memory (user): `/beta/user/memorysets` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Create memory (org): `/beta/org/{org}/memorysets` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List memory (user): `/beta/user/memorysets` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List memory (org): `/beta/org/{org}/memorysets` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Delete memory: `/beta/memorysets/{ownerLogin}/{memoryName}` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Retrieve memory: `/beta/memory/retrieve` | 25 RPM | 300 RPM | [Contact Us][contact-us] | *RPM = requests per minute* --- ### Document API v1 Following is the list of Document API (v1) endpoints and their rate limits based on your subscription plan. | API Endpoints | Hobby Plan | Pro Plan | Enterprise Plan | |---------------------------------------------------------------------------------------|------------|----------|--------------------------| | Upload document: `/v1/memory/documents` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List documents: `/v1/memory/{memoryName}/documents` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Delete document: `/v1/memory/{memoryName}/documents/{documentName}` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Retry embeddings: `/v1/memory/{memoryName}/documents/{documentName}/embeddings/retry` | 25 RPM | 300 RPM | [Contact Us][contact-us] | *RPM = requests per minute* --- ### Document API beta Following is the list of Document API (beta) endpoints and their rate limits based on your subscription plan. | API Endpoints | Hobby Plan | Pro Plan | Enterprise Plan | |-------------------------------------------------------------------------|------------|----------|--------------------------| | Upload document (user): `/beta/user/memorysets/documents` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Upload document (org): `/beta/org/{org}/memorysets/documents` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | List documents: `/beta/memorysets/{owner}/{memoryName}/documents` | 25 RPM | 300 RPM | [Contact Us][contact-us] | | Retry embeddings: `/beta/memorysets/{owner}/documents/embeddings/retry` | 25 RPM | 300 RPM | [Contact Us][contact-us] | *RPM = requests per minute* --- ### Unauthenticated Requests If your request does not have a valid API key, it is considered an unauthenticated request. Limits for them are: - **Limit**: 10 requests per minute - **Reset Interval**: 60 seconds Langbase is in the public beta stage right now. These limits and pricing are subject to change without notice. Please bear with us as we improve and get ready for a stable release and massive scale. We already process tens of billions of AI tokens every month, so you're in good hands. Feedback is welcome. --- ### Rate Limit Headers When you make a request to the Langbase API, it returns the following response headers. These headers convey the rate limit status. | Header | Description | |-----------------------------|--------------------------------------------------------------------| | `lb-ratelimit-limit` | The rate limit applicable to your request. | | `lb-ratelimit-remaining` | The number of requests remaining in the current rate limit window. 
| | `lb-ratelimit-reset` | The time at which the current rate limit window resets. | --- ### About Rate Limits - If you exceed the rate limit, you will receive an HTTP status code `429 Too Many Requests` error. - Rate limits are applied on a per-user or per-organization basis. For organizations, all requests made by the organization are collectively restricted within a single limit window. - These limits are subject to adjustment and may differ depending on your subscription plan. - If you need higher rate limits, please [contact us](mailto:support@langbase.com). [contact-us]: mailto:support@langbase.com
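If you do hit a `429`, waiting and retrying is usually enough. The sketch below is an illustration rather than an official client: it assumes Node.js 18+ (built-in `fetch`) and simply backs off exponentially instead of parsing the reset header.

```js {{ title: 'Node.js' }}
// Minimal sketch: retry a Langbase API call on 429 with exponential backoff.
// Assumes Node.js 18+ (built-in fetch). Tune attempts and delays to your workload.
async function fetchWithRetry(url, options, maxAttempts = 3) {
	for (let attempt = 1; attempt <= maxAttempts; attempt++) {
		const response = await fetch(url, options);

		// Anything other than a rate limit error goes straight back to the caller.
		if (response.status !== 429) return response;
		if (attempt === maxAttempts) break;

		// Back off exponentially before the next attempt: 1s, 2s, 4s, …
		const delayMs = 1000 * 2 ** (attempt - 1);
		await new Promise(resolve => setTimeout(resolve, delayMs));
	}
	throw new Error('Rate limited: retries exhausted');
}
```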
Pipe: Update <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/pipe/update/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe: Update v1 The `update` pipe API endpoint allows you to update a pipe on Langbase dynamically with the API. You can use this endpoint to update a pipe with all the custom configuration. This endpoint requires a User or Org API key. To generate a User or Org API key visit your profile/organization settings page on Langbase. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Update a pipe {{ tag: 'POST', label: '/v1/pipes/{pipeName}' }} Update a pipe by sending the pipe configuration inside the request body. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. ### Body Parameters Name of the pipe. Short description of the pipe. Default: `''` Status of the pipe. Default: `public` Can be one of: `public`, `private` Pipe LLM model. This is a combination of model provider and model id. Format: `provider:model_id` You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase. Default: `openai:gpt-4o-mini` If enabled, the output will be streamed in real-time like ChatGPT. This is helpful if user is directly reading the text. Default: `true` Enforce the output to be in JSON format. Default: `false` If enabled, both prompt and completions will be stored in the database. Otherwise, only system prompt and few shot messages will be saved. Default: `true` If enabled, Langbase blocks flagged requests automatically. Default: `false` An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Default: `1` Maximum number of tokens in the response message returned. Default: `1000` What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic. Default: `0.7` Penalizes a word based on its occurrence in the input text. Default: `1` Penalizes a word based on how frequently it appears in the training data. Default: `1` Up to 4 sequences where the API will stop generating further tokens. Default: `[]` Controls which (if any) tool is called by the model. - `auto` - the model can pick between generating a message or calling one or more tools. - `required` - the model must call one or more tools. - `object` - Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. Default: `auto` Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel. Default: `true` An array containing message objects. 
Default: `[]` ```js {{title: 'Message Object'}} interface Message { role: 'user' | 'assistant' | 'system'| 'tool'; content: string; name?: 'json' | 'safety' | 'opening' | 'rag'; } ``` The role of the author of this message. The contents of the message. The name of the `system` message type. An object containing pipe variables. The key is the variable name and the value is the variable value. Default: `{}` An array of objects with valid tool definitions. Read more about valid [tool definition](/features/tool-calling#tool-definition-schema) Default: `[]` An array of memory objects. Default: `[]` ```js {{title: 'Memory Object'}} interface Memory { name: string; } ``` The name of the memory. Defines the format of the response. Primarily used for Structured Outputs. To enforce Structured Outputs, set type to `json_schema`, and provide a JSON schema for your response with the `strict: true` option. Default: `text` ```ts {{title: 'ResponseFormat Object'}} type ResponseFormat = | {type: 'text'} | {type: 'json_object'} | { type: 'json_schema'; json_schema: { description?: string; name: string; schema?: Record; strict?: boolean | null; }; }; ```
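As a concrete illustration, the sketch below updates an existing pipe to enforce Structured Outputs. It assumes the SDK accepts a `response_format` field mirroring the ResponseFormat object above; the `summary` schema itself is made up for this example.

```js {{ title: 'Node.js' }}
import {Langbase} from 'langbase';

const langbase = new Langbase({
	apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
	// Hypothetical schema: force the model to return {title, summary}.
	const summaryAgent = await langbase.pipes.update({
		name: 'summary-agent',
		response_format: {
			type: 'json_schema',
			json_schema: {
				name: 'summary',
				strict: true,
				schema: {
					type: 'object',
					properties: {
						title: {type: 'string'},
						summary: {type: 'string'},
					},
					required: ['title', 'summary'],
					additionalProperties: false,
				},
			},
		},
	});

	console.log('Summary agent:', summaryAgent);
}

main();
```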
## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Update a pipe ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const summaryAgent = await langbase.pipes.update({ name: 'summary-agent', description: 'Updated pipe description', temperature: 0.8, }); console.log('Summary agent:', summaryAgent); } main(); ``` ```python import requests import json def update_pipe(): url = 'https://api.langbase.com/v1/pipes/{pipeName}' api_key = 'YOUR_API_KEY' pipe = { "name": "summary-agent", "upsert": True, "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini" } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(pipe)) updated_pipe = response.json() return updated_pipe ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes/{pipeName} \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "name": "summary-agent", "upsert": true, "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini" }' ``` ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const summaryAgent = await langbase.pipes.update({ name: 'data-processing-agent', model: 'anthropic:claude-3-5-sonnet-latest', json: true, tools: [ { type: 'function', function: { name: 'processNewData', description: 'Process updated data', parameters: { type: 'object', properties: { data: { type: 'string', description: 'Data to process', }, }, }, }, }, ], memory: [{name: 'knowledge-base'}], messages: [ { role: 'system', content: 'You are an enhanced data processing assistant.', }, ], }); console.log('Summary agent:', summaryAgent); } main(); ``` ```python import requests import json def update_pipe(): url = 'https://api.langbase.com/v1/pipes/{pipeName}' api_key = 'YOUR_API_KEY' pipe = { "name": "summary-agent", "upsert": True, "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini", "stream": True, "json": True, "store": False, "moderate": True, "top_p": 1, "max_tokens": 1000, "temperature": 0.7, "presence_penalty": 1, "frequency_penalty": 1, "stop": [], "tool_choice": "auto", "parallel_tool_calls": False, "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. 
San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } } ], "memory": [] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(pipe)) updated_pipe = response.json() return updated_pipe ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes/{pipeName} \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "name": "summary-agent", "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini", "stream": true, "json": true, "store": false, "moderate": true, "top_p": 1, "max_tokens": 1000, "temperature": 0.7, "presence_penalty": 1, "frequency_penalty": 1, "stop": [], "tool_choice": "auto", "parallel_tool_calls": false, "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } } ], "memory": [] }' ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Pipe update response'}} interface Pipe { name: string; description: string; status: 'public' | 'private'; owner_login: string; url: string; type: 'chat' | 'generate' | 'run'; api_key: string; } ``` Name of the pipe. Description of the pipe. Pipe visibility status. Login of the pipe owner. Pipe studio URL. The type of the pipe. API key for pipe access. ```json {{ title: 'API Response' }} { "name": "summary-agent", "description": "AI pipe for summarization", "status": "public", "owner_login": "user123", "url": "https://langbase.com/user123/summary-agent", "type": "run", "api_key": "pipe_4FVBn2DgrzfJf..." } ``` --- Pipe: Run <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/pipe/run/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe: Run v1 The Run API allows you to execute any pipe and receive its response. It supports all use cases of Pipes, including chat interactions, single generation tasks, and function calls. The `/run` API consolidates the functionality of the previously separate `/generate` and `/chat` endpoints, providing a unified interface. As a result, we will soon be deprecating both `/generate` and `/chat` in favor of `/run`. The Run API supports: - Single generation requests for straightforward tasks. - Dynamic variables to create adaptable prompts in real-time. - Thread management for handling multi-turn conversations. - Seamless conversation continuation, ensuring smooth transitions across interactions. If needed, Langbase can store messages and conversation threads, allowing for persistent conversation history for chat use cases. --- ## Run a pipe {{ tag: 'POST', label: '/v1/pipes/run' }} Run a pipe by sending the required data with the request. For basic request, send a messages array inside request body. ### Headers Request content type. Needs to be `application/json` Replace `PIPE_API_KEY` with your Pipe API key LLM API key for the request. If not provided, the LLM key from Pipe/User/Organization keyset will be used. --- ### Body Parameters An array containing message objects. 
```js {{ title: 'Message Object' }} interface Message { role: string; content?: string | ContentType[] | null; tool_call_id?: string; name?: string; } ``` The role of the message, i.e., `system` | `user` | `assistant` | `tool` The content of the message. 1. `String` For text generation, it's a plain string. 2. `Null` or `undefined` Tool call messages can have no content. 3. `ContentType[]` Array used in vision and audio models, where content consists of structured parts (e.g., text, image URLs). ```js {{ title: 'ContentType Object' }} interface ContentType { type: string; text?: string | undefined; image_url?: | { url: string; detail?: string | undefined; } | undefined; }; ``` The id of the called LLM tool if the role is `tool` The name of the called tool if the role is `tool` --- An object containing pipe variables. The key is the variable name and the value is the variable value. Default: `{}` --- The ID of an existing chat thread. The conversation will continue in this thread. --- A list of tools the model may call. ```ts {{title: 'Tools Object'}} interface ToolsOptions { type: 'function'; function: FunctionOptions } ``` The type of the tool. Currently, only `function` is supported. The function that the model may call. ```ts {{title: 'FunctionOptions Object'}} export interface FunctionOptions { name: string; description?: string; parameters?: Record } ``` The name of the function to call. The description of the function. The parameters of the function. An array of memory objects that specify the memories your pipe should use at run time. If memories are defined here, they will override the default pipe memories, which will be ignored. All referenced memories must exist in your account. ```json {{title: 'Run time Memory array example'}} "memory": [ { "name": "runtime-memory-1" }, { "name": "runtime-memory-2" } ] ``` If this property is not set or is empty, the pipe will fall back to using its default memories. Default: `undefined` Each memory in the array follows this structure: ```js {{title: 'Memory Object'}} interface Memory { name: string; } ``` The name of the memory. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Run an agent pipe
```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase(); async function main() { const {completion} = await langbase.pipes.run({ apiKey: '', // Replace with your pipe API key. messages: [ { role: 'user', content: 'Who is an AI Engineer?', }, ], stream: false, }); console.log('Summary agent completion:', completion); } main(); ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/v1/pipes/run' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Hello!"} ], "stream": False } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() completion = res['completion'] return completion ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} import {getRunner, Langbase} from 'langbase'; const langbase = new Langbase(); async function main() { const {stream} = await langbase.pipes.run({ stream: true, apiKey: '', // Replace with your pipe API key. messages: [{role: 'user', content: 'Who is an AI Engineer?'}], }); // Convert the stream to a stream runner. const runner = getRunner(stream); runner.on('content', content => { process.stdout.write(content); }); } main(); ``` ```python import requests import json def main(): url = 'https://api.langbase.com/v1/pipes/run' api_key = '' # TODO: Replace with your Pipe API key. data = { "messages": [{"role": "user", "content": "Hello!"}], "stream": True } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } response = requests.post(url, headers=headers, data=json.dumps(data)) if not response.ok: print(response.json()) return for line in response.iter_lines(): if line: try: decoded_line = line.decode('utf-8') if decoded_line.startswith('data: '): json_str = decoded_line[6:] if json_str.strip() and json_str != '[DONE]': data = json.loads(json_str) if data['choices'] and len(data['choices']) > 0: delta = data['choices'][0].get('delta', {}) if 'content' in delta and delta['content']: print(delta['content'], end='', flush=True) except json.JSONDecodeError: print("Failed to parse JSON") # Debug JSON parsing continue except Exception as e: print(f"Error processing line: {e}") if __name__ == "__main__": main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ], "stream": true }' ``` ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase(); async function main() { const {completion} = await langbase.pipes.run({ apiKey: '', // Replace with your pipe API key. 
messages: [ { role: 'user', content: 'Hello, I am a {{profession}}', }, ], variables: { profession: 'AI Engineer' }, stream: false, }); console.log('Summary Agent completion:', completion); } main(); ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/v1/pipes/run' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Hello, I am a {{profession}}"} ], "variables": { "profession": "AI Engineer" }, } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() completion = res['completion'] return completion ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer ' \ -d '{ "messages": [ { "role": "user", "content": "Hello, I am a {{profession}}"} ], "variables": { "profession": "AI Engineer" } }' ``` ```js {{ title: 'Node.js' }} import {getRunner, Langbase} from 'langbase'; const langbase = new Langbase(); async function main() { // Message 1: Tell something to the LLM. const response1 = await langbase.pipes.run({ apiKey: '', // Replace with your pipe API key. stream: true, messages: [{role: 'user', content: 'My company is called Langbase'}], }); const runner1 = getRunner(response1.stream); runner1.on('content', content => { process.stdout.write(content); }); // Message 2: Ask something about the first message. // Continue the conversation in the same thread by sending // `threadId` from the second message onwards. const response2 = await langbase.pipes.run({ apiKey: '', // Replace with your pipe API key. stream: true, threadId: response1.threadId!, messages: [{role: 'user', content: 'Tell me the name of my company?'}], }); const runner2 = getRunner(response2.stream); runner2.on('content', content => { process.stdout.write(content); }); } main(); ``` ```python import requests import json def main(): url = 'https://api.langbase.com/v1/pipes/run' api_key = '' # TODO: Replace with your Pipe API key. # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. thread_id = None data = { "messages": [{"role": "user", "content": "Hello!"}], "stream": True, "threadId": thread_id # In first request, thread_id is None, in next requests, it's the `lb-thread-id` from response headers. } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {api_key}" } response = requests.post(url, headers=headers, data=json.dumps(data)) if not response.ok: print(response.json()) return # Get thread ID from response headers for continuing the conversation after first request. 
thread_id = response.headers.get('lb-thread-id') for line in response.iter_lines(): if line: try: decoded_line = line.decode('utf-8') if decoded_line.startswith('data: '): json_str = decoded_line[6:] # Remove 'data: ' prefix if json_str.strip() and json_str != '[DONE]': # Check if there's actual content and not the end marker data = json.loads(json_str) if data['choices'] and len(data['choices']) > 0: delta = data['choices'][0].get('delta', {}) if 'content' in delta and delta['content']: print(delta['content'], end='', flush=True) except json.JSONDecodeError: print("Failed to parse JSON") # Debug JSON parsing continue except Exception as e: print(f"Error processing line: {e}") if __name__ == "__main__": main() ``` ```bash {{ title: 'cURL' }} # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "threadId": "", "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} import 'dotenv/config'; import { Langbase, getToolsFromRun } from 'langbase'; const langbase = new Langbase(); async function main() { const userMsg = "What's the weather in SF"; const response = await langbase.pipes.run({ stream: false, apiKey: '', // Replace with your pipe API key. messages: [{ role: 'user', content: userMsg }], tools: [ { type: 'function', function: { name: 'get_current_weather', description: 'Get the current weather of a given location', parameters: { type: 'object', required: ['location'], properties: { unit: { enum: ['celsius', 'fahrenheit'], type: 'string' }, location: { type: 'string', description: 'The city and state, e.g. San Francisco, CA' } } } } } ] }); const toolCalls = await getToolsFromRun(response); const hasToolCalls = toolCalls.length > 0; if (hasToolCalls) { // handle the tool calls console.log('Tools:', toolCalls); } else { // handle the response console.log('Response:', response); } } main(); ``` ```python import requests import json import os def get_weather_info(): url = 'https://api.langbase.com/v1/pipes/run' api_key = '' body_data = { "stream": False, "messages": [ {"role": "user", "content": "What's the weather in SF"} ], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather of a given location", "parameters": { "type": "object", "required": ["location"], "properties": { "unit": { "enum": ["celsius", "fahrenheit"], "type": "string" }, "location": { "type": "string", "description": "The city and state, e.g. 
San Francisco, CA" } } } } } ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() return res ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes/run \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "messages": [ { "role": "user", "content": "What\'s the weather in SF" } ], "stream": false, "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather of a given location", "parameters": { "type": "object", "required": ["location"], "properties": { "unit": { "enum": ["celsius", "fahrenheit"], "type": "string" }, "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" } } } } } ] }' ``` --- ### Response Headers The ID of the new/existing thread. If you want to continue conversation in this thread, send it as `threadId` in the next request. _[Learn how to use tool calling with this API.](/features/tool-calling/chat-api)_ ```json {{ title: 'Response Header' }} HTTP/2 200 lb-thread-id: "…-…-…-…-… ID of the thread" … … … rest of the headers … : … … … ``` --- ### Response Body Response of the endpoint is a `Promise` object. ### RunResponse Object ```ts {{title: 'RunResponse Object'}} interface RunResponse { completion: string; raw: RawResponse; } ``` The generated text completion. The raw response object. ```ts {{title: 'RawResponse Object'}} interface RawResponse { id: string; object: string; created: number; model: string; choices: ChoiceGenerate[]; usage: Usage; system_fingerprint: string | null; } ``` The ID of the raw response. The object type name of the response. The timestamp of the response creation. The model used to generate the response. A list of chat completion choices. Can contain more than one elements if n is greater than 1. ```ts {{title: 'Choice Object for langbase.pipes.run() with stream off'}} interface ChoiceGenerate { index: number; message: Message; logprobs: boolean | null; finish_reason: string; } ``` The index of the choice in the list of choices. A messages array including `role` and `content` params. ```ts {{title: 'Message Object'}} interface Message { role: 'user' | 'assistant' | 'system'| 'tool'; content: string | null; tool_calls?: ToolCall[]; } ``` The role of the author of this message. The contents of the chunk message. Null if a tool is called. The array of the tools called by LLM ```ts {{title: 'ToolCall Object'}} interface ToolCall { id: string; type: 'function'; function: Function; } ``` The ID of the tool call. The type of the tool. Currently, only `function` is supported. The function that the model called. ```ts {{title: 'Function Object'}} export interface Function { name: string; arguments: string; } ``` The name of the function to call. The arguments to call the function with, as generated by the model in JSON format. Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. The reason the model stopped generating tokens. 
This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence), depending on the type of LLM; check the provider's docs. The usage object including the following properties. ```ts {{title: 'Usage Object'}} interface Usage { prompt_tokens: number; completion_tokens: number; total_tokens: number; } ``` The number of tokens in the prompt (input). The number of tokens in the completion (output). The total number of tokens. This fingerprint represents the backend configuration that the model runs with. --- ### RunResponseStream Object Response of the endpoint with `stream: true` is a `Promise`. ```ts {{title: 'RunResponseStream Object'}} interface RunResponseStream { id: string; object: string; created: number; model: string; system_fingerprint: string | null; choices: ChoiceStream[]; } ``` The ID of the response. The object type name of the response. The timestamp of the response creation. The model used to generate the response. This fingerprint represents the backend configuration that the model runs with. A list of chat completion choices. Can contain more than one element if n is greater than 1. ```js {{title: 'Choice Object with stream true'}} interface ChoiceStream { index: number; delta: Delta; logprobs: boolean | null; finish_reason: string; } ``` The index of the choice in the list of choices. A chat completion delta generated by streamed model responses. ```js {{title: 'Delta Object'}} interface Delta { role?: Role; content?: string | null; tool_calls?: ToolCall[]; } ``` The role of the author of this message. The contents of the chunk message. Null if a tool is called. The array of the tools called by the LLM. ```js {{title: 'ToolCall Object'}} interface ToolCall { id: string; type: 'function'; function: Function; } ``` The ID of the tool call. The type of the tool. Currently, only `function` is supported. The function that the model called. ```js {{title: 'Function Object'}} export interface Function { name: string; arguments: string; } ``` The name of the function to call. The arguments to call the function with, as generated by the model in JSON format. Log probability information for the choice. Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool, or `function_call` (deprecated) if the model called a function. It could also be `eos` (end of sequence), depending on the type of LLM; check the provider's docs. ```json {{ title: 'RunResponse type' }} { "completion": "AI Engineer is a person who designs, builds, and maintains AI systems.", "raw": { "id": "chatcmpl-123", "object": "chat.completion", "created": 1720131129, "model": "gpt-4o-mini", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "AI Engineer is a person who designs, builds, and maintains AI systems."
}, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 28, "completion_tokens": 36, "total_tokens": 64 }, "system_fingerprint": "fp_123" } } ``` ```json {{ title: 'RunResponseStream type with stream true' }} // A stream chunk looks like this … { "id": "chatcmpl-123", "object": "chat.completion.chunk", "created": 1719848588, "model": "gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices": [{ "index": 0, "delta": { "content": "Hi" }, "logprobs": null, "finish_reason": null }] } // More chunks as they come in... {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]} … {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} ``` --- Pipe: List <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/pipe/list/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe: List v1 The `list` pipe API endpoint allows you to get a list of pipes on Langbase with API. This endpoint requires a User or Org API key. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Get a list of pipes {{ tag: 'GET', label: '/v1/pipes' }} Get a list of all pipes by sending a GET request to this endpoint. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### List pipes ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const pipeAgents = await langbase.pipes.list(); console.log('Pipe agents:', pipeAgents); } main(); ``` ```python import requests def get_pipes(): url = 'https://api.langbase.com/v1/pipes' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.get(url, headers=headers) pipes_list = response.json() return pipes_list ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " ``` --- ### Response An array of pipe objects returned by the API endpoint. 
```ts {{title: 'Pipe'}} interface Pipe { name: string; description: string; status: 'public' | 'private'; owner_login: string; url: string; model: string; stream: boolean; json: boolean; store: boolean; moderate: boolean; top_p: number; max_tokens: number; temperature: number; presence_penalty: number; frequency_penalty: number; stop: string[]; tool_choice: 'auto' | 'required' | ToolChoice; parallel_tool_calls: boolean; messages: Message[]; variables: Variable[] | []; tools: ToolFunction[] | []; memory: Memory[] | []; } ``` Name of the pipe. Description of the AI pipe. Status of the pipe. Login of the pipe owner. Pipe access URL. Pipe LLM model. Combination of model provider and model id. Format: `provider:model_id` Pipe stream status. If enabled, the pipe will stream the response. Pipe JSON status. If enabled, the pipe will return the response in JSON format. Whether to store the prompt and completions in the database. Whether to moderate the completions returned by the model. Pipe configured top_p value. Configured maximum tokens for the pipe. Configured temperature for the pipe. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic. Configured presence penalty for the pipe. Configured frequency penalty for the pipe. Configured stop sequences for the pipe. Tool usage configuration. Model decides when to use tools. Model must use specified tools. Forces use of a specific function. ```ts {{title: 'ToolChoice Object'}} interface ToolChoice { type: 'function'; function: { name: string; }; } ``` If enabled, the pipe will make parallel tool calls. A messages array including the following properties. ```ts {{title: 'Message Object'}} interface Message { role: 'user' | 'assistant' | 'system'| 'tool'; content: string | null; name?: 'json' | 'safety' | 'opening' | 'rag'; } ``` The role of the author of this message. The contents of the message. The name of the `system` message type. A variables array including the `name` and `value` params. ```ts {{title: 'Variable Object'}} interface Variable { name: string; value: string; } ``` The name of the variable. The value of the variable. An array of memories the pipe has access to. ```ts {{title: 'Memory Object'}} interface Memory { name: string; } ``` ```json {{ title: 'API Response' }} [ { "name": "summary-agent", "description": "AI pipe for summarization", "status": "public", "owner_login": "user123", "url": "https://langbase.com/user123/summary-agent", "model": "openai:gpt-4o-mini", "stream": true, "json": false, "store": true, "moderate": false, "top_p": 1, "max_tokens": 1000, "temperature": 0.7, "presence_penalty": 1, "frequency_penalty": 1, "stop": [], "tool_choice": "auto", "parallel_tool_calls": true, "messages": [], "variables": [], "tools": [], "memory": [] } ] ``` --- Pipe: Create <span className="text-xl font-mono text-muted-foreground/70">v1</span> https://langbase.com/docs/api-reference/pipe/create/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe: Create v1 The `create` pipe API endpoint allows you to create a new pipe on Langbase dynamically with API. You can use this endpoint to create a new pipe with all the custom configuration. This endpoint requires a User or Org API key. To generate a User or Org API key visit your profile/organization settings page on Langbase. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. 
For more information, visit the [User/Org API key documentation](/api-reference/api-keys). You can generate API keys from the [Langbase studio](https://studio.langbase.com) by following these steps: 1. Switch to your user or org account. 2. From the sidebar, click on the `Settings` menu. 3. In the developer settings section, click on the `Langbase API keys` link. 4. From here you can create a new API key or manage existing ones. For more details follow the [Langbase API keys](/api-reference/api-keys) documentation. --- ## Create a new pipe {{ tag: 'POST', label: '/v1/pipes' }} Create a new pipe by sending the pipe configuration inside the request body. ### Headers Request content type. Needs to be `application/json`. Replace `` with your user/org API key. --- ### Body Parameters Name of the pipe. Upsert pipe. Default: `false` Short description of the pipe. Default: `''` Status of the pipe. Default: `public` Can be one of: `public`, `private` Pipe LLM model. This is a combination of model provider and model id. Format: `provider:model_id` You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase. Default: `openai:gpt-4o-mini` If enabled, the output will be streamed in real-time like ChatGPT. This is helpful if user is directly reading the text. Default: `true` Enforce the output to be in JSON format. Default: `false` If enabled, both prompt and completions will be stored in the database. Otherwise, only system prompt and few shot messages will be saved. Default: `true` If enabled, Langbase blocks flagged requests automatically. Default: `false` An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Default: `1` Maximum number of tokens in the response message returned. Default: `1000` What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic. Default: `0.7` Penalizes a word based on its occurrence in the input text. Default: `1` Penalizes a word based on how frequently it appears in the training data. Default: `1` Up to 4 sequences where the API will stop generating further tokens. Default: `[]` Controls which (if any) tool is called by the model. - `auto` - the model can pick between generating a message or calling one or more tools. - `required` - the model must call one or more tools. - `object` - Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. Default: `auto` Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel. Default: `true` An array containing message objects. Default: `[]` ```js {{title: 'Message Object'}} interface Message { role: 'user' | 'assistant' | 'system'| 'tool'; content: string; name?: 'json' | 'safety' | 'opening' | 'rag'; } ``` The role of the author of this message. The contents of the message. The name of the `system` message type. An object containing pipe variables. The key is the variable name and the value is the variable value. Default: `{}` An array of objects with valid tool definitions. Read more about valid [tool definition](/features/tool-calling#tool-definition-schema) Default: `[]` An array of memory objects. 
Default: `[]` ```js {{title: 'Memory Object'}} interface Memory { name: string; } ``` The name of the memory. Defines the format of the response. Primarily used for Structured Outputs. To enforce Structured Outputs, set type to `json_schema`, and provide a JSON schema for your response with `strict: true` option. Default: `text` ```ts {{title: 'ResponseFormat Object'}} type ResponseFormat = | {type: 'text'} | {type: 'json_object'} | { type: 'json_schema'; json_schema: { description?: string; name: string; schema?: Record; strict?: boolean | null; }; }; ``` ## Usage example ```bash {{ title: 'npm' }} npm i langbase ``` ```bash {{ title: 'pnpm' }} pnpm i langbase ``` ```bash {{ title: 'yarn' }} yarn add langbase ``` ### Environment variables ```bash {{ title: '.env file' }} LANGBASE_API_KEY="" ``` ### Create a new pipe
```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const summaryAgent = await langbase.pipes.create({ name: 'summary-agent', description: 'A simple pipe example', }); console.log('Pipe created:', summaryAgent); } main(); ``` ```python import requests import json def create_new_pipe(): url = 'https://api.langbase.com/v1/pipes' api_key = 'YOUR_API_KEY' pipe = { "name": "summary-agent", "upsert": True, "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini" } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(pipe)) new_pipe = response.json() return new_pipe ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "name": "summary-agent", "upsert": true, "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini" }' ``` ```js {{ title: 'Node.js' }} import {Langbase} from 'langbase'; const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY!, }); async function main() { const summaryAgent = await langbase.pipes.create({ name: 'data-processing-agent', description: 'Advanced pipe with tools and memory', model: 'google:gemini-1.5-pro-latest', json: true, tools: [ { type: 'function', function: { name: 'processData', description: 'Process input data', parameters: { type: 'object', properties: { data: { type: 'string', description: 'Data to process', }, }, }, }, }, ], memory: [{name: 'knowledge-base'}], messages: [ { role: 'system', content: 'You are a data processing assistant.', }, ], variables: { apiEndpoint: 'https://api.example.com', }, }); console.log('Pipe created:', summaryAgent); } main(); ``` ```python import requests import json def create_new_pipe(): url = 'https://api.langbase.com/v1/pipes' api_key = 'YOUR_API_KEY' pipe = { "name": "summary-agent", "upsert": True, "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini", "stream": True, "json": True, "store": False, "moderate": True, "top_p": 1, "max_tokens": 1000, "temperature": 0.7, "presence_penalty": 1, "frequency_penalty": 1, "stop": [], "tool_choice": "auto", "parallel_tool_calls": False, "messages": [ { "role": "system", "content": "You're a helpful AI assistant.", }, { "role": "system", "content": "Don't ignore these instructions", "name": "safety", }, ], "variables": [], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. 
San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } } ], "memory": [] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(pipe)) new_pipe = response.json() return new_pipe ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/v1/pipes \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "name": "summary-agent", "upsert": true, "description": "AI pipe for summarization", "status": "public", "model": "openai:gpt-4o-mini", "stream": true, "json": true, "store": false, "moderate": true, "top_p": 1, "max_tokens": 1000, "temperature": 0.7, "presence_penalty": 1, "frequency_penalty": 1, "stop": [], "tool_choice": "auto", "parallel_tool_calls": false, "messages": [ { "role": "system", "content": "You'\''re a helpful AI assistant." }, { "role": "system", "content": "Don'\''t ignore these instructions", "name": "safety" } ], "variables": [], "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } } ], "memory": [] }' ``` --- ### Response The response object returned by the API endpoint. ```ts {{title: 'Pipe create response'}} interface Pipe { name: string; description: string; status: 'public' | 'private'; owner_login: string; url: string; type: 'chat' | 'generate' | 'run'; api_key: string; } ``` Name of the pipe. Description of the pipe. Pipe visibility status. Login of the pipe owner. Pipe studio URL. The type of the pipe. API key for pipe access. ```json {{ title: 'API Response' }} { "name": "summary-agent", "description": "AI pipe for summarization", "status": "public", "owner_login": "user123", "url": "https://langbase.com/user123/summary-agent", "type": "run", "api_key": "pipe_4FVBn2DgrzfJf..." } ``` --- Usage exceeded (403) https://langbase.com/docs/api-reference/errors/usage_exceeded/ import { generateMetadata } from '@/lib/generate-metadata'; # Usage exceeded (403) The Usage Exceeded error occurs on Langbase when you have exceeded your allowed usage limits. This error indicates that you have consumed more resources or made more requests than the allocated limits within Langbase. --- ## Possible Causes Reaching the maximum number of requests allowed by Langbase. --- ## Troubleshooting Steps If consistently hitting usage limits within Langbase, consider upgrading service plans on Langbase. Please refer to the [Langbase Pricing page](https://langbase.com/pricing) for more details on available plans and their respective limits. If you are already subscribed, please check your billing page from the sidebar in [Langbase Studio](https://langbase.com/studio) to see your current plan and usage. --- ## Recommendation If you are consistently hitting usage limits within Langbase, consider upgrading your service plan to accommodate your usage needs. Rate limited (429) https://langbase.com/docs/api-reference/errors/rate_limited/ import { generateMetadata } from '@/lib/generate-metadata'; # Rate limited (429) This error occurs when the [rate limit](/api-reference/limits/rate-limits) for your account has been exceeded. 
You will need to wait for the rate limit to reset before making additional requests. --- ## Troubleshooting Steps 1. **Wait for Rate Limit Reset**: The rate limit will automatically reset after a certain period. You can check the [rate limit headers](/api-reference/limits/rate-limits) in the response to determine when the rate limit will reset. 2. **Optimize Requests**: Review your application's request patterns and optimize them to reduce the number of requests made to Langbase's API. 3. **Upgrade Plan**: If you consistently exceed the rate limit, consider upgrading your plan to accommodate higher request volumes. --- ## Recommendation Monitor your application's request volume and optimize your requests to avoid exceeding the rate limit. Upgrading your plan can also provide additional capacity to handle higher request volumes. Unauthorized (401) https://langbase.com/docs/api-reference/errors/unauthorized/ import { generateMetadata } from '@/lib/generate-metadata'; # Unauthorized (401) The unauthorized error occurs when Langbase detects that the client's request lacks valid authentication credentials or the provided credentials are insufficient to access the requested resource. This error signals that proper authentication is required to proceed with the request. --- ## Possible Causes - Missing or invalid authentication credentials (e.g., API keys, tokens, username/password) within Langbase. - Use of expired or revoked authentication tokens. - Attempting to access a protected resource within Langbase without providing adequate authentication. --- ## Troubleshooting Steps 1. Check Authentication Credentials: Ensure that the client provides valid and up-to-date authentication credentials (such as pipe API keys) with the request to Langbase. 2. Verify Token Validity: If using tokens for authentication within Langbase, check if the token is expired, revoked, or otherwise invalid. 3. Review Authentication Requirements: Refer to Langbase's API documentation to understand the required authentication method (e.g., OAuth, API keys) and ensure compliance with Langbase's authentication mechanisms. 4. Test with Valid Credentials: If possible, test the request with known valid credentials within Langbase to confirm if the issue is related to authentication. --- ## Recommendation Ensure that valid authentication credentials (such as tokens, keys, etc.) are provided with the request to Langbase in order to access the requested resource successfully. Precondition failed (412) https://langbase.com/docs/api-reference/errors/precondition_failed/ import { generateMetadata } from '@/lib/generate-metadata'; # Precondition failed (412) You can run into this error when the preconditions specified in the request headers are not satisfied by the current state of the resource on Langbase's server. --- ## Possible Causes - Missing or incorrect precondition headers in your request within Langbase. - Preconditions specified in the request headers are not satisfied by the current state of the resource on Langbase's server. - Conflicting preconditions between client and server expectations within Langbase. --- ## Troubleshooting Steps 1. Check Request Headers: Verify that your request includes all necessary precondition headers and that their values are correct. 2. Review Preconditions: Check the preconditions expected by the server and ensure that they are compatible with the current state of the resource. 3. 
Resolve Conflicts: If there are conflicting preconditions between client and server expectations, communicate with relevant parties to resolve the conflicts. 4. Update Request: Adjust your request to meet the required preconditions as specified by the server. --- ## Recommendation Review your request headers to ensure that the specified preconditions are met and compatible with Langbase's server expectations to avoid encountering the Precondition Failed error. Not found (404) https://langbase.com/docs/api-reference/errors/not_found/ import { generateMetadata } from '@/lib/generate-metadata'; # Not found (404) The Not Found error occurs within Langbase when the server cannot find the requested resource. This error can occur due to an incorrect URL or a resource that no longer exists. --- ## Possible Causes - Use of an incorrect pipe API endpoint. - The requested resource within Langbase has been removed or moved. --- ## Troubleshooting Steps 1. Check endpoint: Ensure that your API endpoint is correct and points to a valid resource within Langbase. 2. Verify Resource Existence: Check the documentation to find out whether the requested resource exists. --- ## Recommendation Review the API endpoint integrated in your app and check the documentation to confirm that the API you are requesting exists. Internal Server Error (500) https://langbase.com/docs/api-reference/errors/internal_server_error/ import { generateMetadata } from '@/lib/generate-metadata'; # Internal Server Error (500) The Internal Server Error is a generic error message indicating that something went wrong on Langbase's server while processing your request. This error is often used to indicate unexpected or unhandled server-side issues that prevented the request from being fulfilled. --- ## Possible Causes - Configuration issues within Langbase's server environment. - Database connectivity problems within Langbase. - Resource exhaustion or server overload within Langbase. --- ## Troubleshooting Steps If you encounter an Internal Server Error, contact Langbase's support team or system administrators for assistance. --- ## Recommendation If you consistently encounter an Internal Server Error from the Langbase API, please report the issue to Langbase's support team for further investigation and resolution. Insufficient permissions (403) https://langbase.com/docs/api-reference/errors/insufficient_permissions/ import { generateMetadata } from '@/lib/generate-metadata'; # Insufficient permissions (403) The Insufficient Permissions error occurs within Langbase when your request lacks the necessary permissions to access the requested resource or perform the intended action. This error indicates that you are authenticated but do not have the required level of authorization. --- ## Possible Causes - Attempting to access a restricted resource within Langbase without the necessary permissions. - Trying to perform an action that requires higher privileges than what the user possesses within Langbase. - Insufficient role-based access controls (RBAC) configured for the user within Langbase. --- ## Troubleshooting Steps 1. Check User Permissions: Verify that the user or client within Langbase has the appropriate permissions assigned to access the requested resource or perform the intended action. 2. Review Role-Based Access Controls (RBAC): If applicable within Langbase, ensure that the user's role or permissions are correctly configured in the organization. 
--- ## Recommendation Ensure that the user has the necessary permissions granted in your organization to access the requested resource or perform the intended action successfully. Forbidden (403) https://langbase.com/docs/api-reference/errors/forbidden/ import { generateMetadata } from '@/lib/generate-metadata'; # Forbidden (403) The Forbidden error occurs when Langbase accepts the client's request but refuses to authorize it. This refusal can be due to various reasons, such as insufficient permissions, access restrictions, or account-related issues. --- ## Possible Causes - Lack of necessary permissions to access the resource within Langbase. - Attempting to access a resource that requires authentication without providing valid credentials. - Trying to perform an action that the user account or role is not permitted to do. - Accessing a resource from an IP address or network that is blocked by Langbase's security policies. --- ## Troubleshooting Steps 1. Check Permissions: Ensure that your pipe API key is correct and you have appropriate permissions within Langbase. 2. Review Access Policies: Check if Langbase has any access restrictions or policies in place that might be blocking the request. This includes IP whitelisting/blacklisting, role-based access controls, etc. 3. Check Account Status: Verify that the user account associated with the request in Langbase is active and not disabled or restricted in any way. --- ## Recommendation Review the access permissions, authentication credentials, and account status to ensure that the request meets the server's authorization requirements. Conflict (409) https://langbase.com/docs/api-reference/errors/conflict/ import { generateMetadata } from '@/lib/generate-metadata'; # Conflict (409) The conflict error occurs when the client attempts to create or update a resource with an identifier that conflicts with an existing one. This error typically arises in situations where different resources cannot have the same name. For instance, creating a pipe through the API with a name that already exists in the profile will result in a conflict error. --- ## Possible Causes - Attempting to create a resource within Langbase with an identifier (e.g., username, email, ID, file) that already exists in the system. - Updating a resource within Langbase with a value that violates a unique constraint, such as trying to assign an already-used identifier. --- ## Troubleshooting Steps 1. Check Existing Records: Verify that the identifier being used (e.g., username, email) is not already in use. 2. Use Different Identifier: Use a unique identifier that is not already in use. --- ## Recommendation Make sure to always use unique names for resources to avoid this error. If you encounter this error, check the existing records and use a different identifier to resolve it. Bad request (400) https://langbase.com/docs/api-reference/errors/bad_request/ import { generateMetadata } from '@/lib/generate-metadata'; # Bad request (400) The Bad Request error occurs when Langbase cannot process the client's request due to an invalid request body. This error indicates that there is an issue with the request itself. --- ## Possible Causes - Missing or incorrect request parameters. - Invalid data format or structure in the request body. - Malformed request headers. - Unsupported request method or HTTP version. - Encoding issues, such as improperly encoded characters in the request. --- ## Troubleshooting Steps 1.
Check Request Body: Ensure that all required parameters are included in the request body and that their values are valid according to Langbase's API documentation. Also, ensure that the format of the request body is correct. 2. Validate Data Format: If the request includes a payload (e.g., JSON or XML), validate that it follows the expected format without any syntax errors. 3. Verify Request Headers: Check the request headers for correctness and completeness as per Langbase's API requirements. 4. Use Supported Methods: Confirm that you are using the correct HTTP method (e.g., GET, POST, PUT, DELETE) for the intended operation. 5. Check Encoding: If dealing with character encoding, ensure that special characters are properly encoded according to the specified encoding standard, such as UTF-8. --- ## Recommendation Review the request and make the necessary corrections as per the API documentation or guidelines. Pipe API: Update <span className="text-xl font-mono text-muted-foreground/70">beta</span> https://langbase.com/docs/api-reference/deprecated/pipe-update/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe API: Update beta The `update` pipe API endpoint allows you to update a pipe on Langbase dynamically with the API. You can use this endpoint to update a pipe with a custom configuration. This endpoint requires a User or Org API key. To generate a User or Org API key, visit your profile/organization settings page on Langbase. --- This API endpoint has been deprecated. Please use the new [`update`](/api-reference/pipe/update) pipe API endpoint. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). --- ## Update a pipe {{ tag: 'Deprecated', label: '/beta/pipes/{owner}/{pipe}', status: 'deprecated' }} Update a pipe by sending the pipe configuration inside the request body. ### Required headers Request content type. Needs to be `application/json`. Replace `YOUR_API_KEY` with your User/Org API key. ### Required path parameters Your organization name or username. Replace `{owner}` with your organization name or username. Name of the pipe. Replace `{pipe}` with the name of the pipe. ### Optional attributes Name of the pipe. Short description of the pipe. Status of the pipe. Can be one of: `public`, `private` Configuration object of the pipe. If enabled, the output will be streamed in real-time like ChatGPT. This is helpful if the user is directly reading the text. Enforce the output to be in JSON format. If enabled, both prompt and completions will be stored in the database. Otherwise, only the system prompt and few-shot messages will be saved. If enabled, Langbase blocks flagged requests automatically. ID of the LLM model. You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase. Name of the LLM model provider. Check out the list of all the [supported LLM providers](/supported-models-and-providers) at Langbase. Can be one of the [supported providers](/supported-models-and-providers): `OpenAI`, `Together`, `Anthropic`, `Groq`, `Google`, `Cohere`. Controls which (if any) tool is called by the model. - `auto` - the model can pick between generating a message or calling one or more tools. - `required` - the model must call one or more tools. - `object` - Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.
Default: `auto` Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel. An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Maximum number of tokens in the response message returned. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic. Penalizes a word based on its occurrence in the input text. Penalizes a word based on how frequently it appears in the training data. Up to 4 sequences where the API will stop generating further tokens. System prompt. Insert variables in the prompt with syntax like {{variable}}. Chat opening prompt. AI Safety prompt. An array containing message objects. An array containing different variable objects. Use this prompt to define the JSON output format, schema, and more. It will be appended to the system prompt. Use this prompt to make the LLM answer questions from Memoryset documents. An array of objects with valid tool definitions. Read more about valid [tool definition](/features/tool-calling#tool-definition-schema) Default: `[]` An array of memoryset names. ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/pipes/{owner}/{pipe} \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "name": "Test Pipe", "description": "This is a test pipe", "status": "public" }' ``` ```js {{ title: 'Node.js' }} async function updatePipe() { const url = 'https://api.langbase.com/beta/pipes/{owner}/{pipe}'; const apiKey = 'YOUR_API_KEY'; const pipe = { name: 'Test Pipe', description: 'This is a test pipe', status: 'public' }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(pipe), }); const updatedPipe = await response.json(); return updatedPipe; } ``` ```python import requests import json def update_pipe(): url = 'https://api.langbase.com/beta/pipes/{owner}/{pipe}' api_key = 'YOUR_API_KEY' pipe = { "name": "Test Pipe", "description": "This is a test pipe", "status": "public" } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(pipe)) updated_pipe = response.json() return updated_pipe ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/pipes/{owner}/{pipe} \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "name": "Test Pipe", "description": "This is a test pipe", "status": "public", "config": { "meta": { "stream": true, "json": false, "store": true, "moderate": false }, "model": { "name": "gpt-3.5-turbo", "provider": "OpenAI", "params": { "top_p": 1, "max_tokens": 1000, "temperature": 0.7, "presence_penalty": 1, "frequency_penalty": 1, "stop": [] }, "tool_choice": "required", "parallel_tool_calls": false }, "prompt": { "system": "You are a helpful AI assistant.", "opening": "Welcome to Langbase. Prompt away!", "safety": "", "messages": [], "variables": [], "json": "", "rag": "" }, "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g.
San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } } ], "memorysets": [] } }' ``` ```js {{ title: 'Node.js' }} async function updatePipe() { const url = 'https://api.langbase.com/beta/pipes/{owner}/{pipe}'; const apiKey = 'YOUR_API_KEY'; const pipe = { name: 'Test Pipe', description: 'This is a test pipe', status: 'public', config: { meta: { stream: true, json: false, store: true, moderate: false, }, model: { name: 'gpt-3.5-turbo', provider: 'OpenAI', params: { max_tokens: 1000, temperature: 0.7, top_p: 1, frequency_penalty: 1, presence_penalty: 1, stop: [], }, tool_choice: 'required', parallel_tool_calls: false }, prompt: { opening: 'Welcome to Langbase. Prompt away!', system: 'You are a helpful AI assistant.', safety: '', messages: [], variables: [], json: '', rag: '', }, tools: [ { type: 'function', function: { name: 'get_current_weather', description: 'Get the current weather in a given location', parameters: { type: 'object', properties: { location: { type: 'string', description: 'The city and state, e.g. San Francisco, CA' }, unit: { type: 'string', enum: ['celsius', 'fahrenheit'] } }, required: ['location'] } } } ], memorysets: [] } }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(pipe), }); const updatedPipe = await response.json(); return updatedPipe; } ``` ```python import requests import json def update_pipe(): url = 'https://api.langbase.com/beta/pipes/{owner}/{pipe}' api_key = 'YOUR_API_KEY' pipe = { "name": "Test Pipe", "description": "This is a test pipe", "status": "public", "config": { "meta": { "stream": True, "json": False, "store": True, "moderate": False }, "model": { "name": "gpt-3.5-turbo", "provider": "OpenAI", "params": { "top_p": 1, "max_tokens": 1000, "temperature": 0.7, "presence_penalty": 1, "frequency_penalty": 1, "stop": [] }, "tool_choice": "required", "parallel_tool_calls": False }, "prompt": { "system": "You are a helpful AI assistant.", "opening": "Welcome to Langbase. Prompt away!", "safety": "", "messages": [], "variables": [], "json": "", "rag": "" }, "tools": [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] } }, "required": ["location"] } } } ], "memorysets": [] } } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(pipe)) updated_pipe = response.json() return updated_pipe ``` ```json {{ title: 'Response' }} { "name": "test-pipe", "type": "chat", "description": "This is a create Pipe test from API", "status": "private", "api_key": "pipe_4FVBn2DgrzfJf...", "owner_login": "langbase", "url": "https://langbase.com/langbase/test-pipe" } ``` Pipe API: Run <span className="text-xl font-mono text-muted-foreground/70">beta</span> https://langbase.com/docs/api-reference/deprecated/pipe-run/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe API: Run beta The Run API allows you to execute any pipe and receive its response. It supports all use cases of Pipes, including chat interactions, single generation tasks, and function calls. This API endpoint has been deprecated.
Please use the new [`run`](/api-reference/pipe/run) pipe API endpoint. The `Run` API consolidates the functionality of the previously separate `Generate` and `Chat` endpoints, providing a unified interface. As a result, we will soon be deprecating both `Generate` and `Chat` in favor of `Run`. The Run API supports: - Single generation requests for straightforward tasks. - Dynamic variables to create adaptable prompts in real-time. - Thread management for handling multi-turn conversations. - Seamless conversation continuation, ensuring smooth transitions across interactions. If needed, Langbase can store messages and conversation threads, allowing for persistent conversation history for chat use cases. --- ## Run a pipe {{ tag: 'Deprecated', label: '/beta/pipes/run', status: 'deprecated' }} Run a pipe by sending the required data with the request. For a basic request, send a messages array inside the request body. ### Required headers Request content type. Needs to be `application/json`. Replace `PIPE_API_KEY` with your Pipe API key. ### Required attributes An array containing message objects The role of the message, i.e., `system` | `user` | `assistant` | `tool` The content of the message ### Optional attributes The id of the called LLM tool if the role is `tool` The name of the called tool if the role is `tool` An array containing different variable objects The name of the variable The value of the variable The ID of an existing chat thread. The conversation will continue in this thread. ### Response headers The ID of the new/existing thread. If you want to continue the conversation in this thread, send it as `threadId` in the next request. _[Learn how to use function calling with this API.](/features/tool-calling/chat-api)_ ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateCompletion() { const url = 'https://api.langbase.com/beta/pipes/run' const apiKey = '' const data = { messages: [{ role: 'user', content: 'Hello!' }], } const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }) const res = await response.json(); const completion = res.completion; return completion; } ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/beta/pipes/run' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Hello!"} ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() completion = res['completion'] return completion ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} async function main() { const url = 'https://api.langbase.com/beta/pipes/run'; const apiKey = 'LANGBASE_PIPE_API_KEY'; // TODO: Replace with your Pipe API key.
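// With streaming on, this endpoint replies with Server-Sent Events in the OpenAI chunk format (see the STREAMING sample further below); the loop below reads the stream, decodes each "data:" line, and prints the incremental `delta.content` tokens.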
const data = { messages: [{role: 'user', content: 'Hello!'}], }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); if (!response.ok) return console.error(await response.json()); // Read SSE stream response (OpenAI Format) and log the response // You can also use any OpenAI streaming helper library const reader = response.body.getReader(); const decoder = new TextDecoder('utf-8'); while (true) { const {done, value} = await reader.read(); if (done) break; const chunk = decoder.decode(value); const lines = chunk.split('\n').filter(line => line.trim() !== ''); for (const line of lines) { if (line.startsWith('data:')) { const json = JSON.parse(line.substring('data:'.length).trim()); if (json.choices[0].delta.content) { console.log(json.choices[0].delta.content); } } } } } main(); ``` ```python import requests import json def main(): url = 'https://api.langbase.com/beta/pipes/run' apiKey = 'LANGBASE_PIPE_API_KEY' # TODO: Replace with your Pipe API key. data = { "messages": [{"role": "user", "content": "Hello!"}] } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {apiKey}" } response = requests.post(url, headers=headers, data=json.dumps(data)) if not response.ok: print(response.json()) return # Read SSE stream response (OpenAI Format) and log the response # Here, we manually process the response stream for line in response.iter_lines(): if line: line = line.decode('utf-8') if line.startswith('data:'): json_data = json.loads(line[len('data:'):].strip()) if "choices" in json_data and json_data["choices"]: content = json_data["choices"][0].get("delta", {}).get("content") if content: print(content) if __name__ == "__main__": main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ], "variables": [ { "name": "", "value": "" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateCompletion() { const url = 'https://api.langbase.com/beta/pipes/run' const apiKey = '' const data = { messages: [{ role: 'user', content: 'Hello!' }], variables: [{ name: '', value: '' }], } const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }) const res = await response.json(); const completion = res.completion; return completion; } ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/beta/pipes/run' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Hello!"} ], "variables": [ {"name": "", "value": ""} ], } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() completion = res['completion'] return completion ``` ```bash {{ title: 'cURL' }} # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. 
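# Tip: add -i to the curl command below to print response headers, so you can capture `lb-thread-id` for follow-up requests.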
curl https://api.langbase.com/beta/pipes/run \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer PIPE_API_KEY" \ -d '{ "threadId": "", "messages": [{ "role": "user", "content": "Hello!" }] }' ``` ```js {{ title: 'Node.js' }} async function main() { const url = 'https://api.langbase.com/beta/pipes/run'; const apiKey = 'LANGBASE_PIPE_API_KEY'; // TODO: Replace with your Pipe API key. const data = { messages: [{role: 'user', content: 'Hello!'}], // NOTE: How chat thread works // 1. You send first request without a threadId // 2. In response headers you get back the `lb-thread-id` // 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests // NOTE: To start a new thread, you send a request without `threadId`. threadId: '', // TODO: Add "threadId" to all requests after first request. }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); if (!response.ok) return console.error(await response.json()); // Read SSE stream response (OpenAI Format) and log the response // You can also use any OpenAI streaming helper library const reader = response.body.getReader(); const decoder = new TextDecoder('utf-8'); while (true) { const {done, value} = await reader.read(); if (done) break; const chunk = decoder.decode(value); const lines = chunk.split('\n').filter(line => line.trim() !== ''); for (const line of lines) { if (line.startsWith('data:')) { const json = JSON.parse(line.substring('data:'.length).trim()); if (json.choices[0].delta.content) { console.log(json.choices[0].delta.content); } } } } } main(); ``` ```python import requests import json def main(): url = 'https://api.langbase.com/beta/pipes/run' apiKey = 'LANGBASE_PIPE_API_KEY' # TODO: Replace with your Pipe API key. data = { "messages": [{"role": "user", "content": "Hello!"}], # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. "threadId": "", # TODO: Add "threadId" to all requests after first request. } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {apiKey}" } response = requests.post(url, headers=headers, data=json.dumps(data)) if not response.ok: print(response.json()) return # Read SSE stream response (OpenAI Format) and log the response # Here, we manually process the response stream for line in response.iter_lines(): if line: line = line.decode('utf-8') if line.startswith('data:'): json_data = json.loads(line[len('data:'):].strip()) if "choices" in json_data and json_data["choices"]: content = json_data["choices"][0].get("delta", {}).get("content") if content: print(content) if __name__ == "__main__": main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/pipes/run \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Whats the weather in SF?" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateChatCompletion() { const url = 'https://api.langbase.com/beta/pipes/run'; const apiKey = ''; const data = { messages: [ { role: 'user', content: 'Whats the weather in SF?'
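// With a tool-enabled pipe, a prompt like this can return a tool call instead of plain text (see the function-calling guide linked above).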
} ] }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` }, body: JSON.stringify(data) }); const res = await response.json(); return res; } ``` ```python import requests import json def generate_chat_completion(): url = 'https://api.langbase.com/beta/pipes/run' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Whats the weather in SF?"} ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() return res ``` ```json {{ title: 'STREAMING' }} {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]} ... {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} ``` ```json {{ title: 'STREAM-OFF' }} { "completion": "Hello! How can I assist you today?", "raw": { "id": "chatcmpl-123", "object": "chat.completion", "created": 1720131129, "model": "gpt-4o-mini", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! How can I assist you today?" }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 20, "completion_tokens": 9, "total_tokens": 29 }, "system_fingerprint": "fp_44709d6fcb" } } ``` ```json {{ title: 'Response Header' }} HTTP/2 200 lb-thread-id: "…-…-…-…-… ID of the thread" … … … rest of the headers … : … … … ``` --- Pipe API: List <span className="text-xl font-mono text-muted-foreground/70">beta</span> https://langbase.com/docs/api-reference/deprecated/pipe-list/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe API: List beta The `list` pipe API endpoint allows you to get a list of pipes on Langbase with the API. This endpoint requires a User or Org API key. --- This API endpoint has been deprecated. Please use the new [`list`](/api-reference/pipe/list) pipe API endpoint. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). --- ## Get a list of org pipes {{ tag: 'Deprecated', label: '/beta/org/{org}/pipes', status: 'deprecated' }} Get a list of org pipes by sending a GET request to this endpoint. ### Required headers Request content type. Needs to be `application/json`. Replace `` with your organization API key. ### Required path parameters The organization username. Replace `{org}` with your organization username.
```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/org/{org}/pipes \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " ``` ```js {{ title: 'Node.js' }} async function getPipes() { const url = 'https://api.langbase.com/beta/org/{org}/pipes'; const apiKey = ''; const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, } }); const pipesList = await response.json(); return pipesList; } ``` ```python import requests def get_pipes(): url = 'https://api.langbase.com/beta/org/{org}/pipes' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.get(url, headers=headers) pipes_list = response.json() return pipes_list ``` ```json {{ title: 'Response' }} { "pipes" : [ { "name": "test-pipe", "type": "chat", "description": "This is a pipe test from API", "status": "public", "api_key": "pipe_4FVBn2DgrzfJf...", "owner_login": "langbase", "url": "https://langbase.com/langbase/test-pipe" }, ... ] } ``` --- ## Get a list of user pipes {{ tag: 'Deprecated', label: '/beta/user/pipes', status: 'deprecated' }} Get a list of user pipes by sending a GET request to this endpoint. ### Required headers Request content type. Needs to be `application/json`. Replace `` with your User API key. ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/user/pipes \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " ``` ```js {{ title: 'Node.js' }} async function getPipes() { const url = 'https://api.langbase.com/beta/user/pipes'; const apiKey = ''; const response = await fetch(url, { method: 'GET', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, } }); const pipesList = await response.json(); return pipesList; } ``` ```python import requests def get_pipes(): url = 'https://api.langbase.com/beta/user/pipes' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.get(url, headers=headers) pipes_list = response.json() return pipes_list ``` ```json {{ title: 'Response' }} { "pipes" : [ { "name": "test-pipe", "type": "chat", "description": "This is a pipe test from API", "status": "public", "api_key": "pipe_4FVBn2DgrzfJf...", "owner_login": "langbase", "url": "https://langbase.com/langbase/test-pipe" }, ... ] } ``` --- Pipe API: Generate <span className="text-xl font-mono text-muted-foreground/70">beta</span> https://langbase.com/docs/api-reference/deprecated/pipe-generate/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe API: Generate beta The `generate` endpoint allows easy integration of Large Language Models (LLMs) like OpenAI or Claude into your app, enabling few-shot training and sending various prompts. It supports dynamic variables to send dynamic prompts. --- This API endpoint has been deprecated. Please use the new [`run`](/api-reference/pipe/run) pipe API endpoint. --- ## Generate a completion {{ tag: 'Deprecated', label: '/beta/generate', status: 'deprecated' }} Generate a completion by sending a messages array inside the request body. ### Required headers Request content type.
Needs to be `application/json`. Replace `PIPE_API_KEY` with your Pipe API key. ### Required attributes An array containing message objects The role of the message, i.e., `system` | `user` | `assistant` | `tool` The content of the message ### Optional attributes The tool calls array returned by the assistant The id of the called LLM tool if the role is `tool` The name of the called tool if the role is `tool` An array containing different variable objects The name of the variable The value of the variable _[Learn how to use function calling with the Generate API.](/features/tool-calling/generate-api)_ ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/generate \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} async function main() { const url = 'https://api.langbase.com/beta/generate'; const apiKey = 'LANGBASE_PIPE_API_KEY'; // TODO: Replace with your Pipe API key. const data = { messages: [{role: 'user', content: 'Hello!'}], }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); if (!response.ok) return console.error(await response.json()); // Read SSE stream response (OpenAI Format) and log the response // You can also use any OpenAI streaming helper library const reader = response.body.getReader(); const decoder = new TextDecoder('utf-8'); while (true) { const {done, value} = await reader.read(); if (done) break; const chunk = decoder.decode(value); const lines = chunk.split('\n').filter(line => line.trim() !== ''); for (const line of lines) { if (line.startsWith('data:')) { const json = JSON.parse(line.substring('data:'.length).trim()); if (json.choices[0].delta.content) { console.log(json.choices[0].delta.content); } } } } } main(); ``` ```python import requests import json def main(): url = 'https://api.langbase.com/beta/generate' apiKey = 'LANGBASE_PIPE_API_KEY' # TODO: Replace with your Pipe API key. data = { "messages": [{"role": "user", "content": "Hello!"}] } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {apiKey}" } response = requests.post(url, headers=headers, data=json.dumps(data)) if not response.ok: print(response.json()) return # Read SSE stream response (OpenAI Format) and log the response # Here, we manually process the response stream for line in response.iter_lines(): if line: line = line.decode('utf-8') if line.startswith('data:'): json_data = json.loads(line[len('data:'):].strip()) if "choices" in json_data and json_data["choices"]: content = json_data["choices"][0].get("delta", {}).get("content") if content: print(content) if __name__ == "__main__": main() ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/generate \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateCompletion() { const url = 'https://api.langbase.com/beta/generate' const apiKey = '' const data = { messages: [{ role: 'user', content: 'Hello!'
}], } const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }) const res = await response.json(); const completion = res.completion; return completion; } ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/beta/generate' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Hello!"} ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() completion = res['completion'] return completion ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/generate \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Hello!" } ], "variables": [ { "name": "", "value": "" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateCompletion() { const url = 'https://api.langbase.com/beta/generate' const apiKey = '' const data = { messages: [{ role: 'user', content: 'Hello!' }], variables: [{ name: '', value: '' }], } const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }) const res = await response.json(); const completion = res.completion; return completion; } ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/beta/generate' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Hello!"} ], "variables": [ {"name": "", "value": ""} ], } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() completion = res['completion'] return completion ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/generate \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Whats the weather in SF?" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateCompletion() { const url = 'https://api.langbase.com/beta/generate'; const apiKey = ''; const data = { messages: [ { role: 'user', content: 'Whats the weather in SF?' } ] }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` }, body: JSON.stringify(data) }); const res = await response.json(); return res; } ``` ```python import requests import json def generate_completion(): url = 'https://api.langbase.com/beta/generate' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Whats the weather in SF?"} ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() return res ``` ```json {{ title: 'STREAMING' }} {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"role":"assistant","content":"Hi"},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"there"},"logprobs":null,"finish_reason":null}]} ... 
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} ``` ```json {{ title: 'STREAM-OFF' }} { "completion": "Hello! How can I assist you today?", "raw": { "id": "chatcmpl-123", "object": "chat.completion", "created": 1720131129, "model": "gpt-4o-mini", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! How can I assist you today?" }, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 20, "completion_tokens": 9, "total_tokens": 29 }, "system_fingerprint": "fp_44709d6fcb" } } ``` --- Memory API: Retrieve <span className="text-xl font-mono text-muted-foreground/70">beta</span> https://langbase.com/docs/api-reference/deprecated/memory-retrieve/ import { generateMetadata } from '@/lib/generate-metadata'; # Memory API: Retrieve beta The `retrieve` memory API endpoint allows you to retrieve similar chunks from an existing memory on Langbase based on a query. This endpoint requires an Org or User API key. --- This API endpoint has been deprecated. Please use the new [`retrieve`](/api-reference/memory/retrieve) memory API endpoint. --- ## Generate an Org/User API key You will need to generate an API key to authenticate your requests. For more information, visit the [Org/User API key documentation](/api-reference/api-keys). --- ## Retrieve similar chunks from multiple memory {{ tag: 'Deprecated', label: '/beta/memory/retrieve', status: 'deprecated' }} Retrieve similar chunks by specifying the owner login, query, and memory names in the request body. ### Required headers Request content type. Needs to be `application/json`. Replace `` with your Org/User API key. ### Required body parameters The username of the owner (either an organization or a user). Replace `` with your organization or user username. The search query for retrieving similar chunks. An array of memory names from which to retrieve similar chunks. ### Optional body parameters The number of top similar chunks to return from memory. Default is 20, minimum is 1, and maximum is 100. 
```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/beta/memory/retrieve \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "ownerLogin": "", "query": "your query here", "memory": [ { "name": "memory1" }, { "name": "memory2" } ] }' ``` ```js {{ title: 'Node.js' }} async function retrieveSimilarChunks() { const url = 'https://api.langbase.com/beta/memory/retrieve'; const apiKey = ''; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify({ ownerLogin: '', query: 'your query here', memory: [ { name: 'memory1' }, { name: 'memory2' } ] }), }); const result = await response.json(); return result; } ``` ```python import requests def retrieve_similar_chunks(): url = 'https://api.langbase.com/beta/memory/retrieve' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } data = { 'ownerLogin': '', 'query': 'your query here', 'memory': [ {'name': 'memory1'}, {'name': 'memory2'} ] } response = requests.post(url, headers=headers, json=data) result = response.json() return result ``` ```json {{ title: 'Response' }} [ { "text": "This is the first similar chunk", "similarity": 0.98, "meta": { "docName": "Filename.ext" } }, { "text": "This is the second similar chunk", "similarity": 0.95, "meta": { "docName": "Filename.ext" } } ] ``` --- ## Retrieve similar chunks from memory {{ tag: 'Deprecated', label: '/beta/memorysets/{ownerLogin}/{memoryName}/retrieve', status: 'deprecated' }} This endpoint is deprecated. Please use the [new retrieve endpoint](/api-reference/memory/retrieve#retrieve-similar-chunks-from-multiple-memory) `/beta/memory/retrieve` instead. Retrieve similar chunks by specifying the owner login and memory name in the path and providing the query in the request body. ### Required headers Request content type. Needs to be `application/json`. Replace `` with your Org/User API key. ### Required path parameters The username of the owner (either an organization or a user). Replace `{ownerLogin}` with your organization or user username. The name of the memory from which to retrieve similar chunks. Replace `{memoryName}` with the name of the memory. ### Required body parameters The search query for retrieving similar chunks. ### Optional body parameters The number of top similar chunks to return from memory. Default is 20, minimum is 1, and maximum is 100. 
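Each returned chunk carries a `similarity` score and `meta.docName` (see the Response examples). A common follow-up step is to drop low-scoring chunks and join the rest into a single context string before passing it to a pipe. A minimal sketch with an arbitrary threshold:

```js {{ title: 'Node.js' }}
// Sketch: filter retrieved chunks by similarity and merge them into one
// context string. The 0.8 threshold is arbitrary; tune it for your data.
function buildContext(chunks) {
	return chunks
		.filter(chunk => chunk.similarity >= 0.8)
		.map(chunk => `[${chunk.meta.docName}] ${chunk.text}`)
		.join('\n\n');
}
```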
```bash {{ title: 'cURL' }} curl -X POST https://api.langbase.com/beta/memorysets/{ownerLogin}/{memoryName}/retrieve \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer " \ -d '{ "query": "your query here" }' ``` ```js {{ title: 'Node.js' }} async function retrieveSimilarChunks() { const url = 'https://api.langbase.com/beta/memorysets/{ownerLogin}/{memoryName}/retrieve'; const apiKey = ''; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify({ query: 'your query here' }), }); const result = await response.json(); return result; } ``` ```python import requests def retrieve_similar_chunks(): url = 'https://api.langbase.com/beta/memorysets/{ownerLogin}/{memoryName}/retrieve' api_key = '' headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } data = { 'query': 'your query here' } response = requests.post(url, headers=headers, json=data) result = response.json() return result ``` ```json {{ title: 'Response' }} [ { "text": "This is the first similar chunk", "similarity": 0.98, "meta": { "docName": "Filename.ext" } }, { "text": "This is the second similar chunk", "similarity": 0.95, "meta": { "docName": "Filename.ext" } } ] ``` --- Pipe API: Chat <span className="text-xl font-mono text-muted-foreground/70">beta</span> https://langbase.com/docs/api-reference/deprecated/pipe-chat/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe API: Chat beta For chat-style LLM integration with OpenAI, Mistral, etc., use the chat endpoint with a chat pipe. It supports thread creation, history tracking, and seamless conversation continuation. If desired, Langbase can store all messages and threads for easy chat app development. --- This API endpoint has been deprecated. Please use the new [`run`](/api-reference/pipe/run) pipe API endpoint. --- ## Generate a chat completion {{ tag: 'Deprecated', label: '/beta/chat', status: 'deprecated' }} Generate a chat completion by sending a messages array inside the request body. ### Required headers Request content type. Needs to be `application/json`. Replace `PIPE_API_KEY` with your Pipe API key. ### Required attributes An array containing message objects The role of the message, i.e., `system` | `user` | `assistant` | `tool` The content of the message ### Optional attributes The id of the called LLM tool if the role is `tool` The name of the called tool if the role is `tool` An array containing different variable objects The name of the variable The value of the variable The ID of an existing chat thread. The conversation will continue in this thread. ### Response headers The ID of the new chat thread. Use this ID in the next request to continue the conversation. _[Learn how to use function calling with the Chat API.](/features/tool-calling/chat-api)_ ```bash {{ title: 'cURL' }} # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. curl https://api.langbase.com/beta/chat \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer PIPE_API_KEY" \ -d '{ "threadId": "", "messages": [{ "role": "user", "content": "Hello!"
}] }' ``` ```js {{ title: 'Node.js' }} async function main() { const url = 'https://api.langbase.com/beta/chat'; const apiKey = 'LANGBASE_PIPE_API_KEY'; // TODO: Replace with your Pipe API key. const data = { messages: [{role: 'user', content: 'Hello!'}], // NOTE: How chat thread works // 1. You send first request without a threadId // 2. In response headers you get back the `lb-thread-id` // 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests // NOTE: To start a new thread, you send a request without `threadId`. threadId: '', // TODO: Add "threadId" to all requests after first request. }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(data), }); if (!response.ok) return console.error(await response.json()); // Read SSE stream response (OpenAI Format) and log the response // You can also use any OpenAI streaming helper library const reader = response.body.getReader(); const decoder = new TextDecoder('utf-8'); while (true) { const {done, value} = await reader.read(); if (done) break; const chunk = decoder.decode(value); const lines = chunk.split('\n').filter(line => line.trim() !== ''); for (const line of lines) { if (line.startsWith('data:')) { const json = JSON.parse(line.substring('data:'.length).trim()); if (json.choices[0].delta.content) { console.log(json.choices[0].delta.content); } } } } } main(); ``` ```python import requests import json def main(): url = 'https://api.langbase.com/beta/chat' apiKey = 'LANGBASE_PIPE_API_KEY' # TODO: Replace with your Pipe API key. data = { "messages": [{"role": "user", "content": "Hello!"}], # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. "threadId": "", # TODO: Add "threadId" to all requests after first request. } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {apiKey}" } response = requests.post(url, headers=headers, data=json.dumps(data)) if not response.ok: print(response.json()) return # Read SSE stream response (OpenAI Format) and log the response # Here, we manually process the response stream for line in response.iter_lines(): if line: line = line.decode('utf-8') if line.startswith('data:'): json_data = json.loads(line[len('data:'):].strip()) if "choices" in json_data and json_data["choices"]: content = json_data["choices"][0].get("delta", {}).get("content") if content: print(content) if __name__ == "__main__": main() ``` ```bash {{ title: 'cURL' }} # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. # TODO: Add "threadId" to all chats in the same thread. curl https://api.langbase.com/beta/chat \ -H 'Content-Type: application/json' \ -H "Authorization: Bearer PIPE_API_KEY" \ -d '{ "threadId": "", "messages": [ { "role": "user", "content": "Hello!" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateChatCompletion() { const url = 'https://api.langbase.com/beta/chat' const apiKey = '' const bodyData = { // NOTE: How chat thread works // 1. You send first request without a threadId // 2.
In response headers you get back the `lb-thread-id` // 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests // NOTE: To start a new thread, you send a request without `threadId`. threadId: '', // TODO: Add "threadId" to all requests after first request. messages: [ { role: 'user', content: 'Hello!', }, ] } const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}`, }, body: JSON.stringify(bodyData), }) const resText = await response.text() return resText } ``` ```python import requests import json def generate_chat_completion(): url = 'https://api.langbase.com/beta/chat' api_key = '' body_data = { # NOTE: How chat thread works # 1. You send first request without a threadId # 2. In response headers you get back the `lb-thread-id` # 3. To maintain the same chat thread, you send the `lb-thread-id` in all next requests # NOTE: To start a new thread, you send a request without `threadId`. "threadId": "", # TODO: Add "threadId" to all requests after first request. "messages": [ { "role": "user", "content": "Hello!", }, ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}', } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res_text = response.text return res_text ``` ```bash {{ title: 'cURL' }} curl https://api.langbase.com/beta/chat \ -H 'Content-Type: application/json' \ -H 'Authorization: Bearer PIPE_API_KEY' \ -d '{ "messages": [ { "role": "user", "content": "Whats the weather in SF?" } ] }' ``` ```js {{ title: 'Node.js' }} async function generateChatCompletion() { const url = 'https://api.langbase.com/beta/chat'; const apiKey = ''; const data = { messages: [ { role: 'user', content: 'Whats the weather in SF?' } ] }; const response = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` }, body: JSON.stringify(data) }); const res = await response.json(); return res; } ``` ```python import requests import json def generate_chat_completion(): url = 'https://api.langbase.com/beta/chat' api_key = '' body_data = { "messages": [ {"role": "user", "content": "Whats the weather in SF?"} ] } headers = { 'Content-Type': 'application/json', 'Authorization': f'Bearer {api_key}' } response = requests.post(url, headers=headers, data=json.dumps(body_data)) res = response.json() return res ``` ```json {{ title: 'STREAMING' }} {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]} {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]} ... {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1719848588,"model":"gpt-4o-mini","system_fingerprint":"fp_44709d6fcb","choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]} ``` ```json {{ title: 'STREAM-OFF' }} { "completion": "Hello! How can I assist you today?", "raw": { "id": "chatcmpl-123", "object": "chat.completion", "created": 1720131129, "model": "gpt-4o-mini", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! How can I assist you today?"
}, "logprobs": null, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 20, "completion_tokens": 9, "total_tokens": 29 }, "system_fingerprint": "fp_44709d6fcb" } } ``` ```json {{ title: 'Response Header' }} HTTP/2 200 lb-thread-id: "…-…-…-…-… ID of chat thread" … … … rest of the headers … : … … … ``` --- Pipe API: Create <span className="text-xl font-mono text-muted-foreground/70">beta</span> https://langbase.com/docs/api-reference/deprecated/pipe-create/ import { generateMetadata } from '@/lib/generate-metadata'; # Pipe API: Create beta The `create` pipe API endpoint allows you to create a new pipe on Langbase dynamically with API. You can use this endpoint to create a new pipe with all the custom configuration. This endpoint requires a User or Org API key. To generate a User or Org API key visit your profile/organization settings page on Langbase. --- This API endpoint has been deprecated. Please use the new [`create`](/api-reference/pipe/create) pipe API endpoint. --- ## Generate a User/Org API key You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys). --- ## Create a new org pipe {{ tag: 'Deprecated', label: '/beta/org/{org}/pipes', status: 'deprecated' }} Create a new organization pipe by sending the pipe configuration inside the request body. ### Required headers Request content type. Needs to be `application/json`. Replace `YOUR_API_KEY` with your Organization API key. ### Required path parameters The organization name. Replace `{org}` with your organization name. ### Required attributes Name of the pipe. ### Optional attributes Short description of the pipe. Default: `''` Status of the pipe. Default: `public` Can be one of: `public`, `private` Type of the pipe. Default: `generate` Can be one of: `generate`, `chat` Configuration object of the pipe. Default: `{}` If enabled, the output will be streamed in real-time like ChatGPT. This is helpful if user is directly reading the text. Default: `true` Enforce the output to be in JSON format. Default: `false` If enabled, both prompt and completions will be stored in the database. Otherwise, only system prompt and few shot messages will be saved. Default: `true` If enabled, Langbase blocks flagged requests automatically. Default: `false` ID of the LLM model. You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase. Default: `gpt-4o-mini` Name of the LLM model provider. Check out the list of all the [supported LLM providers](/supported-models-and-providers) at Langbase. Default: `OpenAI` Can be one of the [supported providers](/supported-models-and-providers): `OpenAI`, `Together`, `Anthropic`, `Groq`, `Google`, `Cohere`. Controls which (if any) tool is called by the model. - `auto` - the model can pick between generating a message or calling one or more tools. - `required` - the model must call one or more tools. - `object` - Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. Default: `auto` Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel. Default: `true` An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. 
Default: `1`

Maximum number of tokens in the response message returned.
Default: `1000`

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.
Default: `0.7`

Penalizes a word based on whether it has already occurred in the text so far.
Default: `1`

Penalizes a word based on how frequently it has already appeared in the text so far.
Default: `1`

Up to 4 sequences where the API will stop generating further tokens.
Default: `[]`

System prompt. Insert variables in the prompt with syntax like {{variable}}.
Default: `You're a helpful AI assistant.`

Chat opening prompt.
Default: `Welcome to Langbase. Prompt away!`

AI Safety prompt.
Default: `''`

An array containing message objects.
Default: `[]`

An array containing different variable objects.
Default: `[]`

Use this prompt to define the JSON output format, schema, and more. It will be appended to the system prompt.
Default: `''`

Use this prompt to make the LLM answer questions from Memoryset documents.
Default: `''`

An array of objects with valid tool definitions. Read more about valid [tool definitions](/features/tool-calling#tool-definition-schema).
Default: `[]`

An array of memoryset names.
Default: `[]`

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/org/{org}/pipes \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"name": "Test Pipe",
	"description": "This is a test pipe",
	"status": "public",
	"type": "chat"
}'
```

```js {{ title: 'Node.js' }}
async function createNewPipe() {
	const url = 'https://api.langbase.com/beta/org/{org}/pipes';
	const apiKey = 'YOUR_API_KEY';

	const pipe = {
		name: 'Test Pipe',
		description: 'This is a test pipe',
		status: 'public',
		type: 'chat'
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(pipe),
	});

	const newPipe = await response.json();
	return newPipe;
}
```

```python
import requests
import json

def create_new_pipe():
    url = 'https://api.langbase.com/beta/org/{org}/pipes'
    api_key = 'YOUR_API_KEY'

    pipe = {
        "name": "Test Pipe",
        "description": "This is a test pipe",
        "status": "public",
        "type": "chat"
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(pipe))
    new_pipe = response.json()
    return new_pipe
```

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/org/{org}/pipes \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"name": "Test Pipe",
	"description": "This is a test pipe",
	"status": "public",
	"type": "chat",
	"config": {
		"meta": {
			"stream": true,
			"json": false,
			"store": true,
			"moderate": false
		},
		"model": {
			"name": "gpt-3.5-turbo",
			"provider": "OpenAI",
			"params": {
				"top_p": 1,
				"max_tokens": 1000,
				"temperature": 0.7,
				"presence_penalty": 1,
				"frequency_penalty": 1,
				"stop": []
			},
			"tool_choice": "required",
			"parallel_tool_calls": false
		},
		"prompt": {
			"system": "You are a helpful AI assistant.",
			"opening": "Welcome to Langbase. Prompt away!",
			"safety": "",
			"messages": [],
			"variables": [],
			"json": "",
			"rag": ""
		},
		"tools": [
			{
				"type": "function",
				"function": {
					"name": "get_current_weather",
					"description": "Get the current weather in a given location",
					"parameters": {
						"type": "object",
						"properties": {
							"location": {
								"type": "string",
								"description": "The city and state, e.g. San Francisco, CA"
							},
							"unit": {
								"type": "string",
								"enum": ["celsius", "fahrenheit"]
							}
						},
						"required": ["location"]
					}
				}
			}
		],
		"memorysets": []
	}
}'
```

```js {{ title: 'Node.js' }}
async function createNewPipe() {
	const url = 'https://api.langbase.com/beta/org/{org}/pipes';
	const apiKey = 'YOUR_API_KEY';

	const pipe = {
		name: 'Test Pipe',
		description: 'This is a test pipe',
		status: 'public',
		type: 'chat',
		config: {
			meta: {
				stream: true,
				json: false,
				store: true,
				moderate: false,
			},
			model: {
				name: 'gpt-3.5-turbo',
				provider: 'OpenAI',
				params: {
					max_tokens: 1000,
					temperature: 0.7,
					top_p: 1,
					frequency_penalty: 1,
					presence_penalty: 1,
					stop: [],
				},
				tool_choice: 'required',
				parallel_tool_calls: false
			},
			prompt: {
				opening: 'Welcome to Langbase. Prompt away!',
				system: 'You are a helpful AI assistant.',
				safety: '',
				messages: [],
				variables: [],
				json: '',
				rag: '',
			},
			tools: [
				{
					type: 'function',
					function: {
						name: 'get_current_weather',
						description: 'Get the current weather in a given location',
						parameters: {
							type: 'object',
							properties: {
								location: {
									type: 'string',
									description: 'The city and state, e.g. San Francisco, CA'
								},
								unit: {
									type: 'string',
									enum: ['celsius', 'fahrenheit']
								}
							},
							required: ['location']
						}
					}
				}
			],
			memorysets: []
		}
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(pipe),
	});

	const newPipe = await response.json();
	return newPipe;
}
```

```python
import requests
import json

def create_new_pipe():
    url = 'https://api.langbase.com/beta/org/{org}/pipes'
    api_key = 'YOUR_API_KEY'

    pipe = {
        "name": "Test Pipe",
        "description": "This is a test pipe",
        "status": "public",
        "type": "chat",
        "config": {
            "meta": {
                "stream": True,
                "json": False,
                "store": True,
                "moderate": False
            },
            "model": {
                "name": "gpt-3.5-turbo",
                "provider": "OpenAI",
                "params": {
                    "top_p": 1,
                    "max_tokens": 1000,
                    "temperature": 0.7,
                    "presence_penalty": 1,
                    "frequency_penalty": 1,
                    "stop": []
                },
                "tool_choice": "required",
                "parallel_tool_calls": False
            },
            "prompt": {
                "system": "You are a helpful AI assistant.",
                "opening": "Welcome to Langbase. Prompt away!",
                "safety": "",
                "messages": [],
                "variables": [],
                "json": "",
                "rag": ""
            },
            "tools": [
                {
                    "type": "function",
                    "function": {
                        "name": "get_current_weather",
                        "description": "Get the current weather in a given location",
                        "parameters": {
                            "type": "object",
                            "properties": {
                                "location": {
                                    "type": "string",
                                    "description": "The city and state, e.g. San Francisco, CA"
                                },
                                "unit": {
                                    "type": "string",
                                    "enum": ["celsius", "fahrenheit"]
                                }
                            },
                            "required": ["location"]
                        }
                    }
                }
            ],
            "memorysets": []
        }
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(pipe))
    new_pipe = response.json()
    return new_pipe
```

```json {{ title: 'Response' }}
{
	"name": "test-pipe",
	"type": "chat",
	"description": "This is a create Pipe test from API",
	"status": "private",
	"api_key": "pipe_4FVBn2DgrzfJf...",
	"owner_login": "langbase",
	"url": "https://langbase.com/langbase/test-pipe"
}
```

---

## Create a new user pipe {{ tag: 'Deprecated', label: '/beta/user/pipes', status: 'deprecated' }}

Create a new user pipe by sending the pipe configuration inside the request body.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your User API key.

### Required attributes

Name of the pipe.

### Optional attributes

Short description of the pipe.
Default: `''`

Status of the pipe.
Default: `public`
Can be one of: `public`, `private`

Type of the pipe.
Default: `generate`
Can be one of: `generate`, `chat`

Configuration object of the pipe.
Default: `{}`

If enabled, the output will be streamed in real-time like ChatGPT. This is helpful if the user is reading the text directly.
Default: `true`

Enforce the output to be in JSON format.
Default: `false`

If enabled, both prompts and completions will be stored in the database. Otherwise, only the system prompt and few-shot messages will be saved.
Default: `true`

If enabled, Langbase blocks flagged requests automatically.
Default: `false`

ID of the LLM model. You can copy the ID of a model from the list of [supported LLM models](/supported-models-and-providers) at Langbase.
Default: `gpt-3.5-turbo`

Name of the LLM model provider. Check out the list of all the [supported LLM providers](/supported-models-and-providers) at Langbase.
Default: `OpenAI`
Can be one of the [supported providers](/supported-models-and-providers): `OpenAI`, `Together`, `Anthropic`, `Groq`, `Google`, `Cohere`.

Controls which (if any) tool is called by the model.

- `auto` - the model can pick between generating a message or calling one or more tools.
- `required` - the model must call one or more tools.
- `object` - Specifying a particular tool via `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool.

Default: `auto`

Call multiple tools in parallel, allowing the effects and results of these function calls to be resolved in parallel.
Default: `true`

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
Default: `1`

Maximum number of tokens in the response message returned.
Default: `1000`

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random. Lower values like 0.2 will make it more focused and deterministic.
Default: `0.7`

Penalizes a word based on whether it has already occurred in the text so far.
Default: `1`

Penalizes a word based on how frequently it has already appeared in the text so far.
Default: `1`

Up to 4 sequences where the API will stop generating further tokens.
Default: `[]`

System prompt. Insert variables in the prompt with syntax like {{variable}}.
Default: `You're a helpful AI assistant.`

Chat opening prompt.
Default: `Welcome to Langbase. Prompt away!`

AI Safety prompt.
Default: `''`

An array containing message objects.
Default: `[]`

An array containing different variable objects.
Default: `[]`

Use this prompt to define the JSON output format, schema, and more. It will be appended to the system prompt.
Default: `''`

Use this prompt to make the LLM answer questions from Memoryset documents.
Default: `''`

An array of objects with valid tool definitions. Read more about valid [tool definitions](/features/tool-calling#tool-definition-schema).
Default: `[]`

An array of memoryset names.
Default: `[]`

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/user/pipes \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"name": "Test Pipe",
	"description": "This is a test pipe",
	"status": "public",
	"type": "chat"
}'
```

```js {{ title: 'Node.js' }}
async function createNewPipe() {
	const url = 'https://api.langbase.com/beta/user/pipes';
	const apiKey = 'YOUR_API_KEY';

	const pipe = {
		name: 'Test Pipe',
		description: 'This is a test pipe',
		status: 'public',
		type: 'chat'
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(pipe),
	});

	const newPipe = await response.json();
	return newPipe;
}
```

```python
import requests
import json

def create_new_pipe():
    url = 'https://api.langbase.com/beta/user/pipes'
    api_key = 'YOUR_API_KEY'

    pipe = {
        "name": "Test Pipe",
        "description": "This is a test pipe",
        "status": "public",
        "type": "chat"
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(pipe))
    new_pipe = response.json()
    return new_pipe
```

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/user/pipes \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"name": "Test Pipe",
	"description": "This is a test pipe",
	"status": "public",
	"type": "chat",
	"config": {
		"meta": {
			"stream": true,
			"json": false,
			"store": true,
			"moderate": false
		},
		"model": {
			"name": "gpt-3.5-turbo",
			"provider": "OpenAI",
			"params": {
				"top_p": 1,
				"max_tokens": 1000,
				"temperature": 0.7,
				"presence_penalty": 1,
				"frequency_penalty": 1,
				"stop": []
			},
			"tool_choice": "required",
			"parallel_tool_calls": false
		},
		"prompt": {
			"system": "You are a helpful AI assistant.",
			"opening": "Welcome to Langbase. Prompt away!",
			"safety": "",
			"messages": [],
			"variables": [],
			"json": "",
			"rag": ""
		},
		"tools": [
			{
				"type": "function",
				"function": {
					"name": "get_current_weather",
					"description": "Get the current weather in a given location",
					"parameters": {
						"type": "object",
						"properties": {
							"location": {
								"type": "string",
								"description": "The city and state, e.g. San Francisco, CA"
							},
							"unit": {
								"type": "string",
								"enum": ["celsius", "fahrenheit"]
							}
						},
						"required": ["location"]
					}
				}
			}
		],
		"memorysets": []
	}
}'
```

```js {{ title: 'Node.js' }}
async function createNewPipe() {
	const url = 'https://api.langbase.com/beta/user/pipes';
	const apiKey = 'YOUR_API_KEY';

	const pipe = {
		name: 'Test Pipe',
		description: 'This is a test pipe',
		status: 'public',
		type: 'chat',
		config: {
			meta: {
				stream: true,
				json: false,
				store: true,
				moderate: false,
			},
			model: {
				name: 'gpt-3.5-turbo',
				provider: 'OpenAI',
				params: {
					max_tokens: 1000,
					temperature: 0.7,
					top_p: 1,
					frequency_penalty: 1,
					presence_penalty: 1,
					stop: [],
				},
				tool_choice: 'required',
				parallel_tool_calls: false
			},
			prompt: {
				opening: 'Welcome to Langbase. Prompt away!',
				system: 'You are a helpful AI assistant.',
				safety: '',
				messages: [],
				variables: [],
				json: '',
				rag: '',
			},
			tools: [
				{
					type: 'function',
					function: {
						name: 'get_current_weather',
						description: 'Get the current weather in a given location',
						parameters: {
							type: 'object',
							properties: {
								location: {
									type: 'string',
									description: 'The city and state, e.g. San Francisco, CA'
								},
								unit: {
									type: 'string',
									enum: ['celsius', 'fahrenheit']
								}
							},
							required: ['location']
						}
					}
				}
			],
			memorysets: []
		}
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(pipe),
	});

	const newPipe = await response.json();
	return newPipe;
}
```

```python
import requests
import json

def create_new_pipe():
    url = 'https://api.langbase.com/beta/user/pipes'
    api_key = 'YOUR_API_KEY'

    pipe = {
        "name": "Test Pipe",
        "description": "This is a test pipe",
        "status": "public",
        "type": "chat",
        "config": {
            "meta": {
                "stream": True,
                "json": False,
                "store": True,
                "moderate": False
            },
            "model": {
                "name": "gpt-3.5-turbo",
                "provider": "OpenAI",
                "params": {
                    "top_p": 1,
                    "max_tokens": 1000,
                    "temperature": 0.7,
                    "presence_penalty": 1,
                    "frequency_penalty": 1,
                    "stop": []
                },
                "tool_choice": "required",
                "parallel_tool_calls": False
            },
            "prompt": {
                "system": "You are a helpful AI assistant.",
                "opening": "Welcome to Langbase. Prompt away!",
                "safety": "",
                "messages": [],
                "variables": [],
                "json": "",
                "rag": ""
            },
            "tools": [
                {
                    "type": "function",
                    "function": {
                        "name": "get_current_weather",
                        "description": "Get the current weather in a given location",
                        "parameters": {
                            "type": "object",
                            "properties": {
                                "location": {
                                    "type": "string",
                                    "description": "The city and state, e.g. San Francisco, CA"
                                },
                                "unit": {
                                    "type": "string",
                                    "enum": ["celsius", "fahrenheit"]
                                }
                            },
                            "required": ["location"]
                        }
                    }
                }
            ],
            "memorysets": []
        }
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(pipe))
    new_pipe = response.json()
    return new_pipe
```

```json {{ title: 'Response' }}
{
	"name": "test-pipe",
	"type": "chat",
	"description": "This is a create Pipe test from API",
	"status": "private",
	"api_key": "pipe_4FVBn2DgrzfJf...",
	"owner_login": "langbase",
	"url": "https://langbase.com/langbase/test-pipe"
}
```

---

Memory API: Delete <span className="text-xl font-mono text-muted-foreground/70">beta</span>
https://langbase.com/docs/api-reference/deprecated/memory-delete/
import { generateMetadata } from '@/lib/generate-metadata';

# Memory API: Delete beta

The `delete` memory API endpoint allows you to delete an existing memory on Langbase dynamically via the API. This endpoint requires an Org or User API key.

---

This API endpoint has been deprecated. Please use the new [`delete`](/api-reference/memory/delete) memory API endpoint.

---

## Generate an Org/User API key

You will need to generate an API key to authenticate your requests. For more information, visit the [Org/User API key documentation](/api-reference/api-keys).

---

## Delete a memory {{ tag: 'Deprecated', label: '/beta/memorysets/{ownerLogin}/{memoryName}', status: 'deprecated' }}

Delete a memory by specifying the owner login and memory name in the path.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your Org/User API key.

### Required path parameters

The login name of the owner (either an organization or a user). Replace `{ownerLogin}` with your organization or user username.

The name of the memory to delete. Replace `{memoryName}` with the name of the memory.
```bash {{ title: 'cURL' }}
curl -X DELETE https://api.langbase.com/beta/memorysets/{ownerLogin}/{memoryName} \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY"
```

```js {{ title: 'Node.js' }}
async function deleteMemory() {
	const url = 'https://api.langbase.com/beta/memorysets/{ownerLogin}/{memoryName}';
	const apiKey = 'YOUR_API_KEY';

	const response = await fetch(url, {
		method: 'DELETE',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
	});

	const result = await response.json();
	return result;
}
```

```python
import requests

def delete_memory():
    url = 'https://api.langbase.com/beta/memorysets/{ownerLogin}/{memoryName}'
    api_key = 'YOUR_API_KEY'

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.delete(url, headers=headers)
    result = response.json()
    return result
```

```json {{ title: 'Response' }}
{
	"success": true
}
```

---

Document API: Upload <span className="text-xl font-mono text-muted-foreground/70">beta</span>
https://langbase.com/docs/api-reference/deprecated/document-upload/
import { generateMetadata } from '@/lib/generate-metadata';

# Document API: Upload beta

The `upload` document API endpoint allows you to upload documents to a memory in Langbase via the API. This endpoint requires a User or Org API key.

This endpoint can also be used to replace an existing document in a memory. To do this, you need to provide the same `fileName` and `memoryName` attributes as the existing document. We also have a separate guide on [how to replace an existing document in memory](/guides/memory-document-replace).

---

This API endpoint has been deprecated. Please use the new [`upload`](/api-reference/memory/document-upload) document API endpoint.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

---

## Step 1: Get SignedUrl to Upload Document for org {{ tag: 'Deprecated', label: '/beta/org/{org}/memorysets/documents', status: 'deprecated' }}

Uploading a document to a memory requires a signed URL. `POST` a request to this endpoint to get a signed URL, then use the `PUT` method to upload the document.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your organization API key.

### Required path parameters

The organization username. Replace `{org}` with the organization username.

### Required attributes

`memoryName`: Name of the memory to which the document will be uploaded.

`ownerLogin`: The username of the org. It is returned as `owner_login` in the [Memory List endpoint](memory-list).

`fileName`: Name of the document.

Once you have the signed URL, you can upload the document using the `PUT` method.

## Step 2: Upload Document on SignedUrl {{ tag: 'PUT', label: '{SignedUrl}', status: 'deprecated' }}

Use the signed URL to upload the document to the memory. The signed URL is valid for 2 hours.

### Required headers

Request content type. Needs to be the MIME type of the document. Currently, we support `application/pdf`, `text/plain`, `text/markdown`, `text/csv`, and all major code files as `text/plain`. For CSV, PDF, text, and markdown files, it should correspond to the file type used in the `fileName` attribute in step 1. For code files, it should be `text/plain`.

### Required attributes

The body of the file to be stored in the bucket. It can be `Buffer`, `File`, `FormData`, or `ReadableStream` type.
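Taken together, the response of step 1 feeds directly into the `PUT` of step 2. Before the individual per-step examples below, here is a minimal end-to-end sketch in Node.js (the helper name and file path are illustrative; it assumes a PDF, Node 18+ with global `fetch`, and the same request shapes shown in the examples that follow):

```js {{ title: 'Node.js' }}
const fs = require('fs');

// Hypothetical helper: request a signed URL (step 1), then PUT the file to it (step 2).
async function uploadPdfToMemory(org, apiKey, memoryName, ownerLogin, filePath) {
	// Step 1: request a signed upload URL for the document.
	const signedUrlResponse = await fetch(
		`https://api.langbase.com/beta/org/${org}/memorysets/documents`,
		{
			method: 'POST',
			headers: {
				'Content-Type': 'application/json',
				Authorization: `Bearer ${apiKey}`,
			},
			body: JSON.stringify({
				memoryName,
				ownerLogin,
				fileName: filePath.split('/').pop(),
			}),
		}
	);
	const { signedUrl } = await signedUrlResponse.json();

	// Step 2: upload the raw file body to the signed URL (valid for 2 hours).
	return fetch(signedUrl, {
		method: 'PUT',
		headers: { 'Content-Type': 'application/pdf' },
		body: fs.readFileSync(filePath),
	});
}
```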
```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/org/{org}/memorysets/documents \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"memoryName": "rag-memory",
	"ownerLogin": "langbase",
	"fileName": "file.pdf"
}'
```

```js {{ title: 'Node.js' }}
async function getSignedUploadUrl() {
	const url = 'https://api.langbase.com/beta/org/{org}/memorysets/documents';
	const apiKey = 'YOUR_API_KEY';

	const newDoc = {
		memoryName: 'rag-memory',
		ownerLogin: 'langbase',
		fileName: 'file.pdf',
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(newDoc),
	});

	const res = await response.json();
	return res;
}
```

```python
import requests
import json

def get_signed_upload_url():
    url = 'https://api.langbase.com/beta/org/{org}/memorysets/documents'
    api_key = 'YOUR_API_KEY'

    newDoc = {
        "memoryName": "rag-memory",
        "ownerLogin": "langbase",
        "fileName": "file.pdf"
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(newDoc))
    signed_upload_url = response.json()
    return signed_upload_url
```

```json {{ title: 'Response' }}
{
	"signedUrl": "https://b.langbase.com/..."
}
```

```bash {{ title: 'cURL' }}
curl -X PUT \
  -H 'Content-Type: application/pdf' \
  --data-binary "@path/to/pdfFile" \
  "{SignedUrl}"
```

```js {{ title: 'Node.js' }}
const fs = require("fs");

async function uploadDocument(signedUrl, filePath) {
	const file = fs.readFileSync(filePath);

	const response = await fetch(signedUrl, {
		method: 'PUT',
		headers: {
			'Content-Type': 'application/pdf',
		},
		body: file,
	});

	return response;
}
```

```python
import requests

def upload_document(signed_url, file_path):
    with open(file_path, 'rb') as file:
        headers = {'Content-Type': 'application/pdf'}
        response = requests.put(signed_url, headers=headers, data=file)
    return response
```

```json {{ title: 'Response' }}
{
	// ...
	"status": 200,
	"statusText": "OK"
	// ...
}
```

---

## Step 1: Get SignedUrl to Upload Document for user {{ tag: 'Deprecated', label: '/beta/user/memorysets/documents', status: 'deprecated' }}

Uploading a document to a memory requires a signed URL. `POST` a request to this endpoint to get a signed URL, then use the `PUT` method to upload the document.

### Required headers

Request content type. Needs to be `application/json`.

Replace `USER_API_KEY` with your User API key.

### Required attributes

`memoryName`: Name of the memory to which the document will be uploaded.

`ownerLogin`: The username of the org/user. It is returned as `owner_login` in the [Memory List endpoint](memory-list).

`fileName`: Name of the document.

Once you have the signed URL, you can upload the document using the `PUT` method.

## Step 2: Upload Document on SignedUrl {{ tag: 'PUT', label: '{SignedUrl}', status: 'deprecated' }}

Use the signed URL to upload the document to the memory. The signed URL is valid for 2 hours.

### Required headers

Request content type. Needs to be the MIME type of the document. Currently, we support `application/pdf`, `text/plain`, `text/markdown`, `text/csv`, and all major code files as `text/plain`. For CSV, PDF, text, and markdown files, it should correspond to the file type used in the `fileName` attribute in step 1. For code files, it should be `text/plain`.

### Required attributes

The body of the file to be stored in the bucket. It can be `Buffer`, `File`, `FormData`, or `ReadableStream` type.
```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/user/memorysets/documents \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer USER_API_KEY" \
  -d '{
	"memoryName": "rag-memory",
	"ownerLogin": "langbase",
	"fileName": "file.pdf"
}'
```

```js {{ title: 'Node.js' }}
async function getSignedUploadUrl() {
	const url = 'https://api.langbase.com/beta/user/memorysets/documents';
	const apiKey = 'USER_API_KEY';

	const newDoc = {
		memoryName: 'rag-memory',
		ownerLogin: 'langbase',
		fileName: 'file.pdf',
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(newDoc),
	});

	const res = await response.json();
	return res;
}
```

```python
import requests
import json

def get_signed_upload_url():
    url = 'https://api.langbase.com/beta/user/memorysets/documents'
    api_key = 'USER_API_KEY'

    newDoc = {
        "memoryName": "rag-memory",
        "ownerLogin": "langbase",
        "fileName": "file.pdf"
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(newDoc))
    signed_upload_url = response.json()
    return signed_upload_url
```

```json {{ title: 'Response' }}
{
	"signedUrl": "https://b.langbase.com/..."
}
```

```bash {{ title: 'cURL' }}
curl -X PUT \
  -H 'Content-Type: application/pdf' \
  --data-binary "@path/to/pdfFile" \
  "{SignedUrl}"
```

```js {{ title: 'Node.js' }}
const fs = require("fs");

async function uploadDocument(signedUrl, filePath) {
	const file = fs.readFileSync(filePath);

	const response = await fetch(signedUrl, {
		method: 'PUT',
		headers: {
			'Content-Type': 'application/pdf',
		},
		body: file,
	});

	return response;
}
```

```python
import requests

def upload_document(signed_url, file_path):
    with open(file_path, 'rb') as file:
        headers = {'Content-Type': 'application/pdf'}
        response = requests.put(signed_url, headers=headers, data=file)
    return response
```

```json {{ title: 'Response' }}
{
	// ...
	"status": 200,
	"statusText": "OK"
	// ...
}
```

---

Memory API: Create <span className="text-xl font-mono text-muted-foreground/70">beta</span>
https://langbase.com/docs/api-reference/deprecated/memory-create/
import { generateMetadata } from '@/lib/generate-metadata';

# Memory API: Create beta

The `create` memory API endpoint allows you to create a new memory on Langbase dynamically via the API. This endpoint requires a User or Org API key.

---

This API endpoint has been deprecated. Please use the new [`create`](/api-reference/memory/create) memory API endpoint.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

---

## Create a new org memory {{ tag: 'Deprecated', label: '/beta/org/{org}/memorysets', status: 'deprecated' }}

Create a new organization memory by sending the memory data inside the request body.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your organization API key.

### Required path parameters

The organization name. Replace `{org}` with your organization username.

### Required attributes

Name of the memory.

### Optional attributes

Short description of the memory.
Default: `''`

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/org/{org}/memorysets \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"name": "rag-memory",
	"description": "RAG memory"
}'
```

```js {{ title: 'Node.js' }}
async function createNewMemory() {
	const url = 'https://api.langbase.com/beta/org/{org}/memorysets';
	const apiKey = 'YOUR_API_KEY';

	const memory = {
		name: 'rag-memory',
		description: 'RAG memory',
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(memory),
	});

	const newMemory = await response.json();
	return newMemory;
}
```

```python
import requests
import json

def create_new_memory():
    url = 'https://api.langbase.com/beta/org/{org}/memorysets'
    api_key = 'YOUR_API_KEY'

    memory = {
        "name": "rag-memory",
        "description": "RAG memory",
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(memory))
    new_memory = response.json()
    return new_memory
```

```json {{ title: 'Response' }}
{
	"name": "rag-memory",
	"description": "RAG memory",
	"owner_login": "langbase",
	"url": "https://langbase.com/memorysets/langbase/rag-memory"
}
```

---

## Create a new user memory {{ tag: 'Deprecated', label: '/beta/user/memorysets', status: 'deprecated' }}

Create a new user memory by sending the memory data inside the request body.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your User API key.

### Required attributes

Name of the memory.

### Optional attributes

Short description of the memory.
Default: `''`

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/user/memorysets \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"name": "rag-memory",
	"description": "RAG memory"
}'
```

```js {{ title: 'Node.js' }}
async function createNewMemory() {
	const url = 'https://api.langbase.com/beta/user/memorysets';
	const apiKey = 'YOUR_API_KEY';

	const memory = {
		name: 'rag-memory',
		description: 'RAG memory',
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		},
		body: JSON.stringify(memory),
	});

	const newMemory = await response.json();
	return newMemory;
}
```

```python
import requests
import json

def create_new_memory():
    url = 'https://api.langbase.com/beta/user/memorysets'
    api_key = 'YOUR_API_KEY'

    memory = {
        "name": "rag-memory",
        "description": "RAG memory",
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(memory))
    new_memory = response.json()
    return new_memory
```

```json {{ title: 'Response' }}
{
	"name": "rag-memory",
	"description": "RAG memory",
	"owner_login": "langbase",
	"url": "https://langbase.com/memorysets/langbase/rag-memory"
}
```

---

Document API: Embeddings Retry Generate <span className="text-xl font-mono text-muted-foreground/70">beta</span>
https://langbase.com/docs/api-reference/deprecated/document-embeddings-retry/
import { generateMetadata } from '@/lib/generate-metadata';

# Document API: Embeddings Retry Generate beta

Document embeddings generation may fail due to various reasons such as OpenAI API rate limits, invalid API keys, document parsing errors, special characters, corrupted or locked PDFs, and excessively large documents.
If the issue is related to the API key, correct the key first; before retrying, ensure that the document is accessible and can be parsed correctly. You need to regenerate a document's embeddings in a memory before you can use it.

The `retry` document API endpoint allows you to retry generating document embeddings in a memory on Langbase via the API. This endpoint requires a User or Org API key.

---

This API endpoint has been deprecated. Please use the new [`embeddings retry`](/api-reference/memory/document-embeddings-retry) document API endpoint.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

---

## Retry generating document embeddings in a memory {{ tag: 'Deprecated', label: '/beta/memorysets/{owner}/documents/embeddings/retry', status: 'deprecated' }}

Retry generating document embeddings in a memory by sending a POST request to this endpoint.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your User or Org API key.

### Required path parameters

The organization or user username. Replace `{owner}` with the organization or user username.

### Request attributes

`memoryName`: The name of the memory to which the document belongs.

`documentName`: Name of the document.

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/memorysets/{owner}/documents/embeddings/retry \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
	"memoryName": "rag-memory",
	"documentName": "file.pdf"
}'
```

```js {{ title: 'Node.js' }}
async function retryDocumentEmbeddings() {
	const url = 'https://api.langbase.com/beta/memorysets/{owner}/documents/embeddings/retry';
	const apiKey = 'YOUR_API_KEY';

	const body = {
		memoryName: 'rag-memory',
		documentName: 'file.pdf'
	};

	const response = await fetch(url, {
		method: 'POST',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`
		},
		body: JSON.stringify(body)
	});

	const res = await response.json();
	return res;
}
```

```python
import requests
import json

def retry_document_embeddings():
    url = 'https://api.langbase.com/beta/memorysets/{owner}/documents/embeddings/retry'
    api_key = 'YOUR_API_KEY'

    body = {
        'memoryName': 'rag-memory',
        'documentName': 'file.pdf'
    }

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.post(url, headers=headers, data=json.dumps(body))
    res = response.json()
    return res
```

```json {{ title: 'Response' }}
{
	"success": true
}
```

---

Memory API: List <span className="text-xl font-mono text-muted-foreground/70">beta</span>
https://langbase.com/docs/api-reference/deprecated/memory-list/
import { generateMetadata } from '@/lib/generate-metadata';

# Memory API: List beta

The `list` memory API endpoint allows you to get a list of memory sets on Langbase via the API. This endpoint requires a User or Org API key.

---

This API endpoint has been deprecated. Please use the new [`list`](/api-reference/memory/list) memory API endpoint.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

---

## Get a list of org memory sets {{ tag: 'Deprecated', label: '/beta/org/{org}/memorysets', status: 'deprecated' }}

Get a list of organization memory sets by sending a GET request to this endpoint.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your organization API key.
### Required path parameters

The organization username. Replace `{org}` with your organization username.

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/org/{org}/memorysets \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY"
```

```js {{ title: 'Node.js' }}
async function listMemorySets() {
	const url = 'https://api.langbase.com/beta/org/{org}/memorysets';
	const apiKey = 'YOUR_API_KEY';

	const response = await fetch(url, {
		method: 'GET',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`
		},
	});

	const memorySetsList = await response.json();
	return memorySetsList;
}
```

```python
import requests

def get_memory_sets_list():
    url = 'https://api.langbase.com/beta/org/{org}/memorysets'
    api_key = 'YOUR_API_KEY'

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.get(url, headers=headers)
    memory_sets_list = response.json()
    return memory_sets_list
```

```json {{ title: 'Response' }}
{
	"memorySets": [
		{
			"name": "rag-memory",
			"description": "RAG memory",
			"owner_login": "langbase",
			"url": "https://langbase.com/memorysets/langbase/rag-memory"
		},
		...
	]
}
```

---

## Get a list of user memory sets {{ tag: 'Deprecated', label: '/beta/user/memorysets', status: 'deprecated' }}

Get a list of user memory sets by sending a GET request to this endpoint.

### Required headers

Request content type. Needs to be `application/json`.

Replace `YOUR_API_KEY` with your User API key.

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/user/memorysets \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY"
```

```js {{ title: 'Node.js' }}
async function getMemorySets() {
	const url = 'https://api.langbase.com/beta/user/memorysets';
	const apiKey = 'YOUR_API_KEY';

	const response = await fetch(url, {
		method: 'GET',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`,
		}
	});

	const memorySetsList = await response.json();
	return memorySetsList;
}
```

```python
import requests

def get_memory_sets_list():
    url = 'https://api.langbase.com/beta/user/memorysets'
    api_key = 'YOUR_API_KEY'

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.get(url, headers=headers)
    memory_sets_list = response.json()
    return memory_sets_list
```

```json {{ title: 'Response' }}
{
	"memorySets": [
		{
			"name": "rag-memory",
			"description": "RAG memory",
			"owner_login": "langbase",
			"url": "https://langbase.com/memorysets/langbase/rag-memory"
		},
		...
	]
}
```

---

Document API: List <span className="text-xl font-mono text-muted-foreground/70">beta</span>
https://langbase.com/docs/api-reference/deprecated/document-list/
import { generateMetadata } from '@/lib/generate-metadata';

# Document API: List beta

The `list` document API endpoint allows you to list documents in a memory on Langbase dynamically via the API. This endpoint requires a User or Org API key.

---

This API endpoint has been deprecated. Please use the new [`list`](/api-reference/memory/document-list) document API endpoint.

---

## Generate a User/Org API key

You will need to generate an API key to authenticate your requests. For more information, visit the [User/Org API key documentation](/api-reference/api-keys).

---

## Get a list of org/user memory documents {{ tag: 'Deprecated', label: '/beta/memorysets/{owner}/{memoryName}/documents', status: 'deprecated' }}

Get a list of documents in a memory by sending a GET request to this endpoint.

### Required headers

Request content type. Needs to be `application/json`.
Replace `YOUR_API_KEY` with your User or Org API key.

### Required path parameters

The organization or user username. Replace `{owner}` with the organization or user username.

The memory name. Replace `{memoryName}` with the memory name.

```bash {{ title: 'cURL' }}
curl https://api.langbase.com/beta/memorysets/{owner}/{memoryName}/documents \
  -H 'Content-Type: application/json' \
  -H "Authorization: Bearer YOUR_API_KEY"
```

```js {{ title: 'Node.js' }}
async function listMemoryDocuments() {
	const url = 'https://api.langbase.com/beta/memorysets/{owner}/{memoryName}/documents';
	const apiKey = 'YOUR_API_KEY';

	const response = await fetch(url, {
		method: 'GET',
		headers: {
			'Content-Type': 'application/json',
			Authorization: `Bearer ${apiKey}`
		},
	});

	const memoryDocumentsList = await response.json();
	return memoryDocumentsList;
}
```

```python
import requests

def get_memory_documents_list():
    url = 'https://api.langbase.com/beta/memorysets/{owner}/{memoryName}/documents'
    api_key = 'YOUR_API_KEY'

    headers = {
        'Content-Type': 'application/json',
        'Authorization': f'Bearer {api_key}',
    }

    response = requests.get(url, headers=headers)
    memory_documents_list = response.json()
    return memory_documents_list
```

```json {{ title: 'Response' }}
{
	"docs": [
		{
			"name": "file.pdf",
			"status": "completed",
			"status_message": null,
			"metadata": {
				"size": 1156,
				"type": "application/pdf"
			},
			"enabled": true,
			"chunk_size": 10000,
			"chunk_overlap": 2048,
			"owner_login": "langbase"
		},
		// ...
	]
}
```
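Since each document in the list response carries an embedding `status`, this endpoint pairs naturally with the embeddings-retry endpoint documented above. A minimal sketch in Node.js (the helper name is illustrative, and treating any non-`completed` status as a failed run is an assumption; only `completed` appears in the response above):

```js {{ title: 'Node.js' }}
// Hypothetical helper: list a memory's documents and retry embeddings for any
// document that is not in the "completed" state (assumed failure indicator).
async function retryIncompleteEmbeddings(owner, memoryName, apiKey) {
	const headers = {
		'Content-Type': 'application/json',
		Authorization: `Bearer ${apiKey}`,
	};

	// List all documents in the memory.
	const listResponse = await fetch(
		`https://api.langbase.com/beta/memorysets/${owner}/${memoryName}/documents`,
		{ headers }
	);
	const { docs } = await listResponse.json();

	// Retry embeddings generation for each document that did not complete.
	for (const doc of docs.filter(d => d.status !== 'completed')) {
		await fetch(
			`https://api.langbase.com/beta/memorysets/${owner}/documents/embeddings/retry`,
			{
				method: 'POST',
				headers,
				body: JSON.stringify({ memoryName, documentName: doc.name }),
			}
		);
	}
}
```

---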