Langbase Docs
Serverless AI developer platform. Creators of the open-source web AI framework: BaseAI.dev
Langbase is the most powerful serverless platform for building AI products.
Ship serverless AI agents with hyper-personalized memory (RAG). Best-in-class developer experience to build, collaborate on, and deploy any AI agents (AI features). Our infra is serverless and pay-as-you-go, processing billions of tokens and messages every day.
Our mission is to make AI accessible to every developer, not just AI/ML experts. We are the only serverless composable AI infrastructure. That's all we do.
- Start by building AI agent Pipes
- Then create managed semantic memory (RAG) so your AI can talk to your data
| Products | Description |
| --- | --- |
⌘ AI Pipes (Serverless Agents) | Pipes are serverless AI agents with agentic memory and tools. They work with any language or framework. Deploy thousands of serverless agent pipes as easily as a website. Build and scale AI experiences powered by 100+ industry-leading LLMs and tools. Learn more about AI Pipes. |
⌘ AI Memory (Serverless RAG) | Memory is a managed search engine as an API for developers. Imagine an all-in-one serverless RAG (Retrieval-Augmented Generation) with a vector store, file storage, attribution data, parsing + chunking, and a semantic similarity search engine. Memory is multi-tenant by design: have tens of millions of memory RAG stores, per user or per use case. Memory is a powerful tool for developers to build AI features and products. Learn more about AI Memory. |
⌘ AI Studio (Dev Platform) | Langbase Studio is your playground to build, collaborate, and deploy AI. It lets you experiment with your pipes in real time with real data, store messages, version your prompts, and take your idea from prototype to production with LLMOps on usage, cost, and quality. Access Langbase Studio. A complete AI developer platform. - Collaborate: invite all team members to collaborate on a pipe and build AI together. - Developers & stakeholders: your R&D team, engineering, product, and GTM (marketing and sales) can all collaborate on the same pipe. It's like a powerful version of GitHub x Google Docs for AI. |
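As a sketch of what "deploy an agent as easily as a website" looks like from code, here is a hypothetical Pipe run over HTTP. The endpoint path and payload shape are assumptions modeled on a typical chat-completions-style API; consult the Pipe API reference for the exact contract.

```typescript
// Chat-style message, assuming OpenAI-style role names.
type Message = { role: "system" | "user" | "assistant"; content: string };

// Build the request for a Pipe run.
// NOTE: the URL and body shape below are assumptions for illustration.
function buildPipeRunRequest(apiKey: string, messages: Message[]) {
  return {
    url: "https://api.langbase.com/v1/pipes/run", // assumed endpoint
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ messages }),
    },
  };
}

// Usage (sketch):
// const { url, options } = buildPipeRunRequest(key, [{ role: "user", content: "Hi" }]);
// const res = await fetch(url, options);
```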
Guides
Quickstart Guide
Get started with pipes through an AI example.
RAG Quickstart Guide
Begin creating your own RAG applications.
Integration: Next.js
Integrate a Pipe into your Next.js application.
Serverless RAG on docs
Build a Serverless RAG app on your docs.
Build a Serverless AI Email agent
Learn how to build an AI email agent.
Build a Serverless AI coding agent
Learn how to build an AI coding agent.
Features
Generate
A Pipe tailored for LLM completions, with all standard Pipe features.
Chat
A Pipe crafted for chat-like completions. Ideal for building chatbots, ChatGPT-style apps, and similar products.
Prompt
A prompt sets the context for the LLM and the user, shaping the conversation and responses.
Variables
Add variables to prompts in Pipes to make them dynamic.
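To make the idea concrete, here is a minimal sketch of how prompt variables could be interpolated. The `{{variable}}` syntax and the behavior of leaving unknown placeholders untouched are assumptions for illustration, not Langbase's documented semantics.

```typescript
// Replace {{name}} placeholders in a prompt template with provided values.
// Unknown placeholders are left as-is (an assumption for this sketch).
function renderPrompt(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

// Usage (sketch):
// renderPrompt("Summarize for {{audience}}.", { audience: "executives" });
```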
Few-shot
Few-shot messages enable LLMs to learn from simple system and AI prompt examples.
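A common way to structure few-shot examples is as alternating user/assistant message pairs placed before the real user input. The role names and message shape below are assumed OpenAI-style conventions, used here only to illustrate the pattern.

```typescript
// Chat message with assumed OpenAI-style roles.
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Prepend few-shot (question, answer) pairs before the actual user input.
function withFewShot(
  system: string,
  examples: [string, string][],
  userInput: string
): Msg[] {
  const msgs: Msg[] = [{ role: "system", content: system }];
  for (const [q, a] of examples) {
    msgs.push({ role: "user", content: q }, { role: "assistant", content: a });
  }
  msgs.push({ role: "user", content: userInput });
  return msgs;
}
```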
Safety
Define a safety prompt for any LLM.
Logs
Detailed logs of each Pipe request, with information such as LLM request cost.
Stream
Stream LLM responses for all supported models on both the API and Langbase dashboard inside a Pipe.
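On the client side, a streamed response is typically consumed as a sequence of text chunks. How Langbase transports the stream (SSE, chunked JSON, etc.) is not shown here; this sketch only illustrates the accumulation pattern, with a mock stream standing in for the real one.

```typescript
// Accumulate text chunks from any async iterable of strings.
// In a UI you would render each chunk as it arrives instead of waiting.
async function collectStream(chunks: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const chunk of chunks) {
    text += chunk; // append each delta as it arrives
  }
  return text;
}

// Mock stream for illustration (a real stream would come from the API):
async function* mockStream() {
  yield "Hello, ";
  yield "world!";
}
```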
Moderation
Set custom moderation settings for OpenAI models in a Pipe.
JSON mode
JSON mode instructs the LLM in a Pipe to return its output as JSON.
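Even with JSON mode enabled, it is prudent to validate the model's output before using it. This guard is a general defensive pattern, not a Langbase API; the function name is hypothetical.

```typescript
// Parse an LLM's JSON-mode output defensively: return the object on
// success, or null if the output is not valid JSON or not an object.
function parseLLMJson(output: string): Record<string, unknown> | null {
  try {
    const value = JSON.parse(output);
    return typeof value === "object" && value !== null
      ? (value as Record<string, unknown>)
      : null;
  } catch {
    return null;
  }
}
```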
Store messages
Store user prompts and LLM completions in a Pipe to review in Usage.
Versions
Versions in a Pipe let you see how its config has changed over time.
Experiments
Experiments test the latest Pipe config against the last five "generate" requests to see its impact on LLM responses.
Readme
Add a README to a Pipe to provide additional information.
Pipe API
Pipe offers two APIs, Generate and Chat, to interact with LLMs and integrate them into your applications.
Keysets
Add all LLM API keys once to seamlessly switch between models in a Pipe.
Usage
View insights of each Pipe request.
Fork
Make a copy of any Pipe either in your account or within any of your organizations.
Examples
Multiple ready-to-use examples to quickly set up a Pipe.
Model Presets
Configure response parameters of LLMs in a Pipe using model presets.
Organizations
Foster collaboration among users within a shared workspace via organizations.
Open Pipes
Open Pipes on Langbase let users create and share pipes with the public.
Tool calling
Call tools to perform operations like fetching data from an external API.
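A tool is usually described to the LLM with a name, a description, and a JSON Schema for its parameters. The shape below assumes an OpenAI-style function-tool definition, and the tool name and fields are hypothetical; check the tool calling docs for the exact format Pipes expect.

```typescript
// Hypothetical tool definition in an assumed OpenAI-style schema.
const getWeatherTool = {
  type: "function",
  function: {
    name: "get_weather", // hypothetical tool name
    description: "Fetch current weather for a city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. Paris" },
      },
      required: ["city"],
    },
  },
};
```

When the model decides to call the tool, your code receives the tool name and arguments, runs the actual operation (e.g. the API fetch), and returns the result to the model to continue the conversation.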