Langbase Docs

Langbase helps developers ship composable AI agents with hyper-personalized memory (RAG).

Langbase is the composable AI infrastructure and developer experience for building, collaborating on, and deploying AI agents (AI features). Our infrastructure is serverless, pay-as-you-go, and processes billions of tokens and messages every day. Our mission is to make AI accessible to every developer, not just AI/ML experts. We are the only composable AI infrastructure, and that is all we do.

  1. Start by building AI agents with Pipes
  2. Then create managed semantic memory (RAG) so your AI can talk to your data

Products

AI Pipes (Agents)

Your custom-built AI agent, available as an API. Works with any language or framework, is highly scalable, dynamic, and inexpensive, and deploys in seconds. Pipe is a new LLM computing primitive and the fastest way to ship your AI features to production. It's like having a composable GPT anywhere.

AI Memory (RAG)

Memory is a managed search engine as an API for developers. Our long-term memory solution can acquire, process, retain, and later retrieve information. It combines vector storage, Retrieval-Augmented Generation (RAG), and internet access to help you build powerful AI features and products. A minimal retrieval sketch follows this section.

AI Studio

Langbase Studio is your playground to build, collaborate, and deploy AI. It lets you experiment with your Pipes in real time with real data, store messages, version your prompts, and take an idea from prototype to production with LLMOps on usage, cost, and quality.

A complete AI developer platform:

- Collaborate: Invite all team members to collaborate on a Pipe. Build AI together.
- Developers & stakeholders: Your whole R&D team, engineering, product, and GTM (marketing and sales), literally every stakeholder, can collaborate on the same Pipe. It's like a powerful version of GitHub x Google Docs for AI.
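
To make the Memory (RAG) idea concrete, here is a minimal sketch of retrieving memory chunks over HTTP so they can be passed to a Pipe as context. The endpoint path, request fields, and response shape are assumptions for illustration only, not the documented Memory API; see the Memory docs for the real contract.

```ts
// Hypothetical sketch: retrieve relevant chunks from a Langbase Memory,
// then join them into a context block for a Pipe prompt. Endpoint path and
// field names are illustrative assumptions.
const LANGBASE_API_KEY = process.env.LANGBASE_API_KEY!;

async function retrieveContext(query: string): Promise<string> {
  // Assumed endpoint and body shape; check the Memory API reference for the real ones.
  const res = await fetch('https://api.langbase.com/v1/memory/retrieve', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${LANGBASE_API_KEY}`,
    },
    body: JSON.stringify({ query, memory: 'docs-memory', topK: 4 }),
  });
  const chunks: { text: string }[] = await res.json(); // assumed response shape
  // Separate chunks so the LLM can tell retrieved passages apart.
  return chunks.map((c) => c.text).join('\n---\n');
}
```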

Join today

Langbase is free for anyone to get started. We process billions of AI messages and tokens daily and are used by thousands of developers. Tweet us — what will you ship with Langbase? It all started with a developer thinking: GPT is amazing, I want it everywhere. That's what ⌘ Langbase does for me.


API Reference

Generate

Learn about the generate API and how to use it to generate completions from a generate pipe.
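
As a rough sketch of what a generate call looks like, the snippet below sends a single prompt to a generate pipe and reads back the completion. The endpoint path and response field are assumptions for illustration; the Generate API reference documents the exact request and response shapes.

```ts
// Hypothetical sketch of a generate request; endpoint and field names are
// illustrative assumptions, not the documented contract.
const res = await fetch('https://api.langbase.com/v1/pipes/generate', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.LANGBASE_PIPE_API_KEY}`,
  },
  body: JSON.stringify({ prompt: 'Summarize the Langbase docs in one line.' }),
});
const { completion } = await res.json(); // assumed response field
console.log(completion);
```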

Chat

Learn about the chat API and how to use it to generate chat completions from a chat pipe.
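
The chat API follows the same call shape as the generate sketch above, but takes an array of messages so the pipe can keep conversational context. The body below is a sketch under the same assumptions; thread handling and exact field names may differ in the actual API.

```ts
// Hypothetical chat request body: a messages array instead of a single prompt.
const body = {
  messages: [
    { role: 'user', content: 'What is a Pipe?' },
    { role: 'assistant', content: 'A Pipe is a composable AI agent exposed as an API.' },
    { role: 'user', content: 'How do I deploy one?' },
  ],
  // threadId: '...', // assumed option for continuing an existing conversation
};
```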

Guides

Quickstart Guide

Get started with pipes through an AI example.

RAG Quickstart Guide

Begin creating your own RAG applications.

Integration: Next.js

Integrate a Pipe into your Next.js application.

Composable RAG on docs

Build a composable RAG app on your docs.

Build a composable AI email agent

Learn how to build an AI email agent.

Build a composable AI coding agent

Learn how to build an AI coding agent.

Features

Generate

A Pipe tailored for LLM completions, with all standard Pipe features.

Chat

A Pipe crafted for chat-like completions. Ideal for building chatbots and ChatGPT-like apps.

Prompt

A prompt sets the context for the LLM and the user, shaping the conversation and responses.

Variables

Add variables to prompts in Pipes to make them dynamic.
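
For example, a prompt such as "Write a product description for {{product}} in a {{tone}} tone" can be filled in per request. The `{{variable}}` placeholder syntax and the `variables` payload below are a sketch of how this is typically wired up; check the Variables docs for the exact format.

```ts
// Sketch: a Pipe prompt with {{product}} and {{tone}} placeholders is filled
// at request time by sending a value for each variable. The payload shape is
// an assumption for illustration.
const body = {
  variables: [
    { name: 'product', value: 'Langbase Pipes' },
    { name: 'tone', value: 'playful' },
  ],
};
```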

Few-shot

Few-shot messages enable LLMs to learn from simple system and AI prompt examples.
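
A few-shot setup is just a handful of example exchanges placed before the real user message, so the model can imitate the demonstrated format. The messages below sketch that pattern; roles follow the usual system/user/assistant convention.

```ts
// Sketch: two worked examples teach the model the expected answer format
// before the real question arrives.
const fewShotMessages = [
  { role: 'system', content: 'Answer with a single emoji.' },
  { role: 'user', content: 'How is the weather in a desert?' },
  { role: 'assistant', content: '🔥' },
  { role: 'user', content: 'How is the weather at the North Pole?' },
  { role: 'assistant', content: '🥶' },
  { role: 'user', content: 'How is the weather in a rainforest?' }, // the real query
];
```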

Safety

Define a safety prompt for any LLM.

Logs

Detailed logs of each Pipe request, with information like the LLM request cost.

Stream

Stream LLM responses for all supported models on both the API and Langbase dashboard inside a Pipe.
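
Consuming a streamed response from the API looks like reading any other streamed `fetch` body: decode chunks as they arrive and append them to the output. The snippet below is a generic sketch, assuming `res` is a response from a Pipe with streaming enabled.

```ts
// Generic sketch of reading a streamed fetch response chunk by chunk.
// Assumes `res` is a streaming Response from a Pipe with stream enabled.
const reader = res.body!.getReader();
const decoder = new TextDecoder();
let output = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  output += decoder.decode(value, { stream: true });
  // Render `output` incrementally here, e.g. update the UI.
}
```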

Moderation

Set custom moderation settings for OpenAI models in a Pipe.

JSON mode

JSON mode in a Pipe instructs the LLM to return its output as JSON.
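
With JSON mode on, the completion itself is a JSON string, so the client can parse it directly into a typed object. A minimal sketch, assuming the completion arrives in a `completion` field as in the earlier examples:

```ts
// Sketch: with JSON mode enabled, parse the completion straight into an object.
type Sentiment = { label: 'positive' | 'negative' | 'neutral'; score: number };
const { completion } = await res.json(); // assumed response field
const result: Sentiment = JSON.parse(completion);
```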

Store messages

Store user prompts and LLM completions in a Pipe to review in Usage.

Versions

Versions in a Pipe let you see how its config has changed over time.

Experiments

Experiments test the latest Pipe config against the last five generate requests to see its impact on LLM responses.

Readme

Add a README to a Pipe to provide additional information.

Pipe API

Pipe offers two APIs, Generate and Chat, to interact with LLMs and integrate them into your applications.

Keysets

Add all your LLM API keys once to seamlessly switch between models in a Pipe.

Usage

View insights for each Pipe request.

Fork

Make a copy of any Pipe either in your account or within any of your organizations.

Examples

Multiple ready-to-use examples to quickly set up a Pipe.

Model Presets

Configure response parameters of LLMs in a Pipe using model presets.
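
Presets are the familiar LLM sampling and length knobs, attached to the Pipe rather than to each request. The object below sketches the usual parameters; names mirror common LLM APIs and are assumptions, not the exact Langbase field names.

```ts
// Sketch of typical model preset values; parameter names follow common
// LLM APIs and are illustrative only.
const preset = {
  model: 'openai:gpt-4o-mini', // assumed provider:model format
  temperature: 0.7,     // randomness of sampling
  max_tokens: 512,      // cap on completion length
  top_p: 1,             // nucleus sampling cutoff
  frequency_penalty: 0,
  presence_penalty: 0,
};
```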

Organizations

Foster collaboration among users within a shared workspace via organizations.

Open Pipes

Open Pipes on Langbase allow users to create and share Pipes with the public.

Tool calling

Call tools to perform operations such as fetching data from an API.
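
A tool is declared to the LLM as a named function with a JSON Schema for its parameters; when the model decides the tool is needed, it returns the tool name and arguments for your code to execute. The definition below follows the common OpenAI-style function schema as a sketch; the exact shape expected by a Pipe may differ.

```ts
// Sketch of an OpenAI-style tool definition the LLM can choose to call.
// Your code performs the actual fetch when the model returns this tool's
// name and arguments.
const weatherTool = {
  type: 'function',
  function: {
    name: 'get_current_weather',
    description: 'Fetch the current weather for a given city.',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'City name, e.g. "Paris"' },
        unit: { type: 'string', enum: ['celsius', 'fahrenheit'] },
      },
      required: ['city'],
    },
  },
};
```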