Langbase Docs

Langbase is the most powerful serverless AI cloud for building and deploying AI agents.

Build, deploy, and scale serverless AI agents with tools and memory (RAG). Simple AI primitives, not bloated frameworks, all with a world-class developer experience.

CHAI.new: Vibe code AI agents.
Get started

| Product | Description |
| --- | --- |
| CHAI.new ✨ | Vibe code any AI agent. Chai turns prompts into prod-ready agents |
| Pipe Agents | Pipes are a standard way to build serverless AI agents with tools and memory |
| AI Memory | Auto RAG AI primitive that gives agents human-like long-term memory |
| Workflow | Build multi-step agents with built-in durable features like timeouts and retries |
| Threads | Store and manage AI agent context and conversation history without managing DBs |
| Agent | AI primitive to build serverless agents with a unified API over 600+ LLMs |
| Parser | AI primitive to extract text content from various document formats, for RAG |
| Chunker | AI primitive to split text into smaller, manageable chunks for context, for RAG |
| Embed | AI primitive to convert text into vector embeddings with a unified API across multiple models, for RAG |
| Tools | Pre-built and hosted tools for AI agents, like web crawler and web search with Exa |
AI Studio
Langbase Studio is your playground to build, collaborate on, and deploy AI agents. Experiment with agent changes and memory retrieval in real time, with real data; store messages, version your prompts, optimize cost, and see usage and traces. Access Langbase Studio.
A complete AI developer platform.
- Collaborate: Invite all team members to collaborate on pipes. Build AI together.
- Developers & stakeholders: Work with your entire R&D team: engineering, product, and GTM (marketing and sales). It's like a powerful version of GitHub x Google Docs for AI.

Why Langbase?

Langbase is the best way to build and deploy AI agents.

Our mission: AI for all. Not just ML wizards. Every. Single. Developer.

Build AI agents without any bloated frameworks. You write the logic, we handle the logistics.

Compared to complex AI frameworks, Langbase is serverless and the first composable AI platform.

  1. Start by building simple AI agents (pipes)
  2. Then train serverless semantic Memory agents (RAG) to get accurate and trusted results

Get started for free:

  • CHAI.new: Vibe code any AI agent. Chai turns prompts into prod-ready agents.
  • AI Studio: Build, collaborate, and deploy AI Agents with tools and Memory (RAG).
  • Langbase SDK: Easiest way to build AI agents with TypeScript. (recommended)
  • HTTP API: Build AI agents with any language (Python, Go, PHP, etc.).
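To make the HTTP API option concrete, here is a minimal TypeScript sketch that calls a deployed pipe agent over the pipe run endpoint. The pipe name `my-agent` and the `LANGBASE_API_KEY` environment variable are placeholder assumptions; the exact payload shape may vary by API version, so treat this as a sketch rather than a definitive client.

```typescript
// Minimal sketch: run a deployed pipe agent over the Langbase HTTP API.
// Assumptions: a pipe named "my-agent" exists in your account, and your
// API key is available as the LANGBASE_API_KEY environment variable.
async function runPipe(prompt: string): Promise<string> {
  const res = await fetch('https://api.langbase.com/v1/pipes/run', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LANGBASE_API_KEY}`,
    },
    body: JSON.stringify({
      name: 'my-agent', // name of your deployed pipe agent (placeholder)
      stream: false, // request a single JSON response instead of a stream
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  if (!res.ok) {
    throw new Error(`Pipe run failed: ${res.status} ${await res.text()}`);
  }
  const data = await res.json();
  return data.completion; // assumed field name for the non-streaming reply
}
```

The same request works from any language with an HTTP client (Python, Go, PHP, etc.); the Langbase SDK wraps this call for you in TypeScript.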

Join today

Langbase is free for anyone to get started. We process billions of AI message tokens daily, serving thousands of developers. Tweet us — what will you ship with Langbase? It all started with a developer thinking … GPT is amazing, I want it everywhere, that's what ⌘ Langbase does for me.


Get started

Agent Architectures

Explore different agent architectures and how to build them.

Langbase SDK

Learn how to use the Langbase SDK to build AI agents with TypeScript.

AI Agents (Pipes)

Build and deploy serverless AI agents (we call them pipes) with tools.

Memory Agents

Build AI agents with Memory for Retrieval-augmented generation (RAG).


Guides

Docs AI Agent: AI in docs

Build an AI agent with RAG that answers questions from your docs.

AI Email Agent: Handle your emails

Build a composable multi-agent architecture using the Langbase SDK.

Build your own v0, Lovable, or Bolt

Build a coding agent like v0, Lovable, or Bolt with the Langbase SDK.

Understand how RAG works

Understand how RAG works and how to use it in your app.

Langbase Agent Examples

Explore open-source Langbase agent examples on GitHub.

Internet Research Agent Tool

Build a tool for AI agents that can help you research on the internet.



Features

CHAI.new ✨

Vibe code any AI agent. Chai turns prompts into prod-ready agents

Pipe Agents

Pipes are a standard way to build serverless AI agents with memory

AI Memory

Auto RAG AI primitive that gives agents human-like long-term memory

Workflow

Build multi-step agents with built-in durable features like timeouts and retries

Threads

Store and manage AI agents context and conversation history

Agent

AI primitive to build serverless agents with a unified API over 600+ LLMs

Parser

AI primitive to extract text content from various document formats, for RAG

Chunker

AI primitive to split text into smaller, manageable chunks for context, for RAG

Embed

Convert text into vector embeddings with a unified API across multiple models, for RAG

Tools

Hosted tools for AI agents like web crawler and web search with Exa

AI Studio

AI playground to experiment, build, collaborate, and deploy AI agents

Memory

Use agentic RAG memory with 30-50x less expensive vector storage

Threads

Store and retrieve messages in a Pipe for a conversation-like experience

Versions

Create and manage versions of agents to track changes and compare results

Fork

Fork to make a copy of any open agent in your account and ship faster

Variables

Add variables to prompts in Pipes to make them dynamic

Logs

Detailed logs of each Pipe request, with information like LLM request cost

Auto tool calling

Auto tool calling enables agents to call external tools and APIs

Few-shot

Few-shot messages enable LLMs to learn from simple system and AI prompt examples

Prompt

Context for the LLM and the user, shaping the conversation and responses

Rerank

Rerank to reorder a list of documents based on semantic relevance

Prompt Optimization

Optimizer models optimize a prompt for enhanced results

Safety

Define a safety prompt for any LLM

Stream

Stream agent responses to improve your users' experience of the agent

Moderation

Set custom moderation settings for OpenAI models in a Pipe

JSON mode

JSON mode in a Pipe instructs the LLM to return its output as JSON

Experiments

Run experiments to test the performance of different LLMs and prompts

Readme

Add a README to a Pipe to provide additional information

API & SDK

Managed API to deploy and scale serverless AI agents with Langbase

Keysets

Add all LLM API keys once to seamlessly switch between models in a Pipe

Usage

View insights of each Pipe request

Examples

Multiple ready-to-use examples to quickly set up a Pipe

Model Presets

Configure response parameters of LLMs in a Pipe using model presets

Organizations

Foster collaboration among users within a shared workspace via organizations

Open Pipes

Open Pipes on Langbase let users create and share pipes with the public