Langbase Docs

Langbase is the most powerful serverless platform for building AI agents with memory.

Build, scale, and evaluate AI agents with semantic memory (RAG) and world-class developer experience. We process billions of AI messages/tokens daily. Built for every developer, not just AI/ML experts.

Compared to complex AI frameworks, Langbase is simple, serverless, and the first composable AI platform.

  1. Start by building simple AI agents (pipes)
  2. Then train serverless semantic Memory agents (RAG) to get accurate and trusted results

Langbase provides several options to get started:

  • AI Studio: Build, collaborate, and deploy AI (Pipes) and Memory (RAG) agents.
  • Langbase SDK: The easiest TypeScript/Node.js developer experience (see the sketch after this list).
  • HTTP API: Works with any language (Python, Go, PHP, etc.).
  • BaseAI.dev: Local-first, open-source web AI framework.
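
For a sense of the SDK flavor, here is a minimal sketch of running a Pipe with the TypeScript SDK. The pipe name `summarize-agent` is hypothetical, and the `pipes.run()` call and its response shape are assumptions based on the SDK docs, so check the SDK reference for exact names.

```ts
import { Langbase } from 'langbase';

// Assumes a LANGBASE_API_KEY from the Langbase dashboard is set in the environment.
const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // 'summarize-agent' is a hypothetical pipe name; the pipes.run() signature and
  // response shape are assumptions -- verify against the SDK reference.
  const { completion } = await langbase.pipes.run({
    name: 'summarize-agent',
    messages: [
      { role: 'user', content: 'Summarize: Langbase is a serverless platform for AI agents.' },
    ],
  });

  console.log(completion);
}

main();
```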

Products

AI Pipes (Serverless Agents)
Pipes are serverless AI agents with agentic memory and tools. They work with any language or framework. Deploy thousands of serverless agent pipes as easily as a website, and build and scale AI experiences powered by 100+ industry-leading LLMs and tools. Learn more about AI Pipes.
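Because every pipe is exposed over HTTP, any language can call it. Below is a hedged sketch using `fetch`; the `/v1/pipes/run` path, the auth header, and the payload shape are assumptions to verify against the API reference.

```ts
// Hedged sketch: the endpoint path and payload shape are assumptions; check the API reference.
async function runPipe(prompt: string) {
  const res = await fetch('https://api.langbase.com/v1/pipes/run', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Assumes an API key from the Langbase dashboard.
      Authorization: `Bearer ${process.env.LANGBASE_API_KEY}`,
    },
    body: JSON.stringify({
      name: 'summarize-agent', // hypothetical pipe name
      messages: [{ role: 'user', content: prompt }],
    }),
  });

  return res.json();
}

runPipe('Hello, Langbase!').then(console.log);
```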
AI Memory (Serverless RAG)
Langbase memory agents are the next frontier in semantic retrieval-augmented generation (RAG): a serverless, infinitely scalable API designed for developers. They are 30-50x less expensive than the competition, with industry-leading accuracy in advanced agentic routing and intelligent reranking.
Memory is multi-tenant by design: you can have tens of millions of memory RAG stores, per user or per use case. Memory is a powerful tool for developers to build AI features and products. Learn more about AI Memory agents.
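
As a hedged sketch of a per-use-case memory, the snippet below retrieves similar chunks from a memory named `product-docs` (hypothetical). The `memories.retrieve()` method and its parameters are assumptions based on the SDK docs.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

// Hedged sketch: memories.retrieve() and its parameters are assumptions; 'product-docs' is hypothetical.
const chunks = await langbase.memories.retrieve({
  query: 'How do I rotate an API key?',
  memory: [{ name: 'product-docs' }],
  topK: 4, // number of similar chunks to return; parameter name is an assumption
});

// Each chunk is expected to carry the matched text plus a similarity score (assumed shape).
console.log(chunks);
```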
AI Studio (Dev Platform)
Langbase Studio is your playground to build, collaborate, and deploy AI. It lets you experiment with your pipes in real time with real data, store messages, version your prompts, and take your idea from prototype to production with LLMOps on usage, cost, and quality. Access Langbase Studio.
A complete AI developer platform.
- Collaborate: Invite all team members to collaborate on a pipe and build AI together.
- Developers & stakeholders: Your R&D, engineering, product, and GTM (marketing and sales) teams can all collaborate on the same pipe. It's like a powerful version of GitHub x Google Docs for AI.

Join today

Langbase is free for anyone to get started. We process billions of AI messages/tokens daily, used by thousands of developers. Tweet us: what will you ship with Langbase? It all started with a developer thinking … GPT is amazing, I want it everywhere, and that's what ⌘ Langbase does for me.


API Reference

Pipe

Learn about the Pipe API and how to use it to build AI agents.

Memory

Learn about the Memory API and how to use it to build RAG.

Guides

Quickstart Guide

Get started with pipes through an AI example.

RAG Quickstart Guide

Begin creating your own RAG applications.

Integration: Next.js

Integrate a Pipe into your Next.js application.

Serverless RAG on docs

Build a Serverless RAG app on your docs.

Build a Serverless AI Email agent

Learn how to build an AI email agent.

Build a Serverless AI coding agent

Learn how to build an AI coding agent.

Features

Generate

A Pipe tailored for LLM completions, with all standard Pipe features.

Chat

A Pipe crafted for chat-like completions. Ideal for building chatbots, ChatGPT-style apps, and similar experiences.

Prompt

A prompt sets the context for the LLM and the user, shaping the conversation and responses.

Variables

Add variables to prompts in Pipes to make them dynamic.
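
As a hedged illustration, a Pipe prompt can reference a variable in double curly braces and receive its value at request time; the `variables` payload shape shown here is an assumption to verify against the docs.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

// Assumed prompt inside the Pipe: "Translate the user's text to {{language}}."
// Hedged sketch: the variables field and its shape are assumptions.
const { completion } = await langbase.pipes.run({
  name: 'translate-agent', // hypothetical pipe name
  messages: [{ role: 'user', content: 'Hello, world!' }],
  variables: [{ name: 'language', value: 'French' }],
});

console.log(completion);
```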

Few-shot

Few-shot messages enable LLMs to learn from simple system and AI prompt examples.
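
As a hedged sketch, few-shot prompting can be expressed as a short system instruction followed by one example exchange that the model imitates; whether a Pipe stores these in a dedicated few-shot section or as plain messages is left to the docs.

```ts
// Hedged sketch: few-shot examples expressed as ordinary chat messages.
const messages = [
  { role: 'system', content: 'Classify the sentiment of each review as positive or negative.' },
  // One worked example the model can imitate:
  { role: 'user', content: 'Review: "The battery lasts all day, love it."' },
  { role: 'assistant', content: 'positive' },
  // The real input follows the examples:
  { role: 'user', content: 'Review: "It stopped working after a week."' },
];
```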

Safety

Define a safety prompt for any LLM.

Logs

Detailed logs of each Pipe request, with information like the LLM request cost.

Stream

Stream LLM responses for all supported models in a Pipe, both via the API and in the Langbase dashboard.
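
A hedged sketch of streaming with the SDK is below; the `stream: true` flag, the returned stream object, and the chunk shape are all assumptions, so check the SDK reference.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

// Hedged sketch: the stream flag and the returned stream's shape are assumptions.
const { stream } = await langbase.pipes.run({
  name: 'summarize-agent', // hypothetical pipe name
  messages: [{ role: 'user', content: 'Write a short note on serverless agents.' }],
  stream: true,
});

// Assuming the stream is an async iterable of OpenAI-style chunks with content deltas.
for await (const chunk of stream) {
  process.stdout.write(chunk.choices?.[0]?.delta?.content ?? '');
}
```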

Moderation

Set custom moderation settings for OpenAI models in a Pipe.

JSON mode

JSON mode in a Pipe instructs the LLM to return its output as JSON.

Store messages

Store user prompts and LLM completions in a Pipe to review in Usage.

Versions

Versions in a Pipe let you see how its config has changed over time.

Experiments

Experiments test the latest Pipe config against the last five "generate" requests to see its impact on LLM responses.

Readme

Add a README to a Pipe to provide additional information.

API

Langbase offers robust serverless APIs for Pipes and Memory to interact with LLMs and integrate them into your applications.

Keysets

Add all your LLM API keys once to seamlessly switch between models in a Pipe.

Usage

View insights of each Pipe request.

Fork

Make a copy of any Pipe either in your account or within any of your organizations.

Examples

Multiple ready-to-use examples to quickly set up a Pipe.

Model Presets

Configure response parameters of LLMs in a Pipe using model presets.

Organizations

Foster collaboration among users within a shared workspace via organizations.

Open Pipes

Open Pipes on Langbase allow users to create and share pipes with the public.

Tool calling

Call tools to perform operations like fetching data from an external API.
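
As a hedged sketch, a tool can be described with a JSON-schema style definition (as in OpenAI-style function calling) and passed along with the messages; the `tools` field name and the response shape are assumptions to verify against the Pipe API reference.

```ts
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

// Hypothetical tool your app implements; the definition follows JSON-schema style function calling.
const weatherTool = {
  type: 'function',
  function: {
    name: 'get_weather',
    description: 'Get the current weather for a city.',
    parameters: {
      type: 'object',
      properties: {
        city: { type: 'string', description: 'City name, e.g. "Paris"' },
      },
      required: ['city'],
    },
  },
};

// Hedged sketch: the tools field name is an assumption; check the Pipe API reference.
const response = await langbase.pipes.run({
  name: 'weather-agent', // hypothetical pipe name
  messages: [{ role: 'user', content: 'What is the weather in Paris right now?' }],
  tools: [weatherTool],
});

// If the LLM decides to call the tool, the response should include the tool call and its
// arguments; your code then runs get_weather and sends the result back in a follow-up request.
console.log(response);
```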