Langbase Documentation

Ship hyper-personalized AI apps in seconds.


What is ⌘ Langbase

Ship hyper-personalized AI assistants with memory! Build AI assistants with Pipes and Semantic RAG, and extend LLM Memory by 10M+ tokens. The most "developer-friendly" full-stack LLM platform.

⌘ Langbase provides the developer experience and infrastructure to build, collaborate, and deploy secure, composable AI apps. Our mission is to make AI accessible to everyone, not just AI/ML experts.

19K developers are on the waitlist, and we're already processing billions of tokens and millions of requests every month. If you found this link, you can sign up right away. Langbase is free for anyone to get started.

⌘ Langbase Pipes: Build custom AI assistants and agents with the LLM Computing primitive called Pipe. A Pipe is the fastest way to ship your AI apps and features to production. It's like having a composable GPT anywhere.

⌘ Langbase AI Studio: Our dashboard is your playground to build, collaborate, and deploy AI. It lets you experiment with your pipes in real time with real data, store messages, version your prompts, and take an idea from prototype to production, with predictions on usage, cost, and effectiveness. Langbase is a complete developer platform for AI.

  • Collaborate: Invite all team members to collaborate on the pipe. Build AI together.
  • Developers & Stakeholders: Your entire R&D, engineering, product, and GTM (marketing, sales) teams, and even stakeholders, can collaborate on the same pipe. It's like a Google Doc x GitHub for AI. That's what makes it so powerful.

GPT is amazing, I want it everywhere, that's what Langbase does for me. — AA

API Reference

Generate

Learn about the generate API and how to use it to generate completions from a generate pipe.
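
As a rough orientation, here is a minimal TypeScript sketch of calling a generate pipe over HTTP. The endpoint path, header names, and request/response shapes are assumptions for illustration; the Generate API reference is authoritative.

```ts
// Minimal sketch of a Generate request (endpoint path, auth header, and
// body shape are illustrative assumptions; see the Generate API reference).
async function generate(topic: string) {
  const res = await fetch('https://api.langbase.com/beta/generate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Pipe API key from Langbase AI Studio.
      Authorization: `Bearer ${process.env.LANGBASE_PIPE_API_KEY}`,
    },
    body: JSON.stringify({
      // Variables fill placeholders defined in the pipe's prompt.
      variables: [{ name: 'topic', value: topic }],
    }),
  });
  if (!res.ok) throw new Error(`Generate request failed: ${res.status}`);
  return res.json(); // Contains the LLM completion.
}
```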

Chat

Learn about the chat API and how to use it to generate chat completions from a chat pipe.
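
Similarly, a chat pipe takes a running message history. A minimal sketch, again with an assumed endpoint and body shape:

```ts
// Minimal sketch of a Chat request with a running message history.
// Endpoint path and body shape are illustrative assumptions; see the
// Chat API reference for the exact contract.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

async function chat(messages: Message[]) {
  const res = await fetch('https://api.langbase.com/beta/chat', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LANGBASE_PIPE_API_KEY}`,
    },
    body: JSON.stringify({ messages }),
  });
  if (!res.ok) throw new Error(`Chat request failed: ${res.status}`);
  return res.json();
}
```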

Guides

Quickstart Guide

Get started with pipes through an AI example.

Integration: Next.js

Integrate a Pipe into your Next.js application.
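
As a sketch of what the integration can look like (the route name, request shape, and endpoint are assumptions; the guide covers the exact setup), a Next.js App Router route handler can proxy requests to a Pipe so the API key stays on the server:

```ts
// app/api/generate/route.ts (hypothetical route for illustration)
export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Forward the request to the Pipe; endpoint and body shape are
  // assumptions, see the Generate API reference.
  const res = await fetch('https://api.langbase.com/beta/generate', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.LANGBASE_PIPE_API_KEY}`,
    },
    body: JSON.stringify({ variables: [{ name: 'prompt', value: prompt }] }),
  });

  // Pass the Pipe's response straight through to the client.
  return new Response(res.body, {
    status: res.status,
    headers: { 'Content-Type': 'application/json' },
  });
}
```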

Features

Generate

A Pipe tailored for LLM completions, with all standard Pipe features.

Chat

A Pipe crafted for chat-like completions. Ideal for building chatbots, ChatGPT-style apps, and similar experiences.

Prompt

A prompt sets the context for the LLM and the user, shaping the conversation and responses.

Variables

Add variables to prompts in Pipes to make them dynamic.
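
For illustration, a prompt can reference placeholders that are filled per request. The {{variable}} template syntax and the variables array below are assumptions based on common usage; the Variables page documents the exact syntax.

```ts
// Illustrative only: a pipe prompt with two placeholders.
const prompt =
  'Write a product description for {{product}} aimed at {{audience}}.';

// At request time, each variable gets a concrete value.
const body = {
  variables: [
    { name: 'product', value: 'a standing desk' },
    { name: 'audience', value: 'remote developers' },
  ],
};
```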

Few-shot

Few-shot messages enable LLMs to learn from simple system and AI prompt examples.
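
A rough sketch of what few-shot examples can look like as alternating user/assistant messages placed before the real user input (the exact roles and fields Langbase expects are documented on the Few-shot page):

```ts
// Hypothetical few-shot examples that teach the LLM a terse summary style.
const fewShot = [
  { role: 'user', content: 'Summarize: "The meeting moved to Friday."' },
  { role: 'assistant', content: 'Meeting rescheduled to Friday.' },
  { role: 'user', content: 'Summarize: "Invoice #42 is overdue by 10 days."' },
  { role: 'assistant', content: 'Invoice #42: 10 days overdue.' },
];
```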

Safety

Define a safety prompt for any LLM.

Logs

Detailed logs of each Pipe request, with information such as the LLM request cost.

Stream

Stream LLM responses for all supported models on both the API and Langbase dashboard inside a Pipe.
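
On the API side, a streamed response can be consumed incrementally. A minimal sketch assuming the Pipe has streaming enabled; the exact chunk format (for example SSE) is described on the Stream page.

```ts
// Read a streamed completion chunk by chunk with the Fetch streams API.
async function printStream(res: Response) {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value)); // print chunks as they arrive
  }
}
```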

Moderation

Set custom moderation settings for OpenAI models in a Pipe.

JSON mode

JSON mode in a Pipe instructs the LLM to return its output as JSON.
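
For example, with JSON mode enabled the completion can be parsed directly into a typed object. The schema below is made up for illustration:

```ts
// Hypothetical schema; define whatever shape your prompt asks the LLM for.
interface Sentiment {
  sentiment: 'positive' | 'neutral' | 'negative';
  confidence: number;
}

const completion = '{"sentiment": "positive", "confidence": 0.92}';
const parsed: Sentiment = JSON.parse(completion);
console.log(parsed.sentiment, parsed.confidence);
```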

Store messages

Store user prompts and LLM completions in a Pipe to review in Usage.

Versions

Versions in a Pipe let you see how its config has changed over time.

Experiments

Experiments test the latest Pipe config against the last five generate requests to see its impact on LLM responses.

Readme

Add a README to a Pipe to provide additional information.

Pipe API

A Pipe offers two APIs, Generate and Chat, to interact with LLMs and integrate them into your applications.

Keysets

Add all your LLM API keys once to seamlessly switch between models in a Pipe.

Usage

View insights of each Pipe request.

Fork

Make a copy of any Pipe either in your account or within any of your organizations.

Examples

Multiple ready-to-use examples to quickly set up a Pipe.

Model Presets

Configure response parameters of LLMs in a Pipe using model presets.
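
As a rough illustration, presets map to the familiar sampling parameters of the underlying LLM; the exact names and ranges depend on the model and are listed on the Model Presets page.

```ts
// Illustrative preset values only; actual parameter names and ranges
// come from the Model Presets page and the chosen LLM provider.
const preset = {
  temperature: 0.7,      // randomness of sampling
  max_tokens: 512,       // cap on completion length
  top_p: 1,              // nucleus sampling threshold
  frequency_penalty: 0,  // discourage repeated tokens
  presence_penalty: 0,   // encourage new topics
};
```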

Organizations

Foster collaboration among users within a shared workspace via organizations.

Open Pipes

Open Pipes on Langbase allow users to create and share pipes with the public.