Quickstart: Build an AI to Generate Titles

Let's build your first AI pipe in a minute.


In this quickstart guide, you will:

  • Create and use a Pipe on Langbase.
  • Use an LLM model like GPT, Llama, Mistral, etc.
  • Build your pipe with configuration and meta settings.
  • Design a prompt with system, safety, and few-shot messages.
  • Experiment with your AI pipe in playground (Langbase AI Studio).
  • Observe real-time performance, usage, and per-million-request cost predictions.
  • Deploy your AI features seamlessly using the Pipe API (global, highly available, and scalable).

Pipe is an AI assistant that helps you build your own sophisticated AI apps and features in a minute.

Ever found yourself amazed by what ChatGPT can do and wished you could integrate similar AI features into your own apps? That's exactly what Pipe is designed for. It’s like ChatGPT, but simple (simplest API), powerful (works with any LLM), and developer-ready (comes with a suite of dev-friendly features).


Let's get started

Let’s build a Pipe together that will generate title ideas for your next blog using any LLM from OpenAI, Together, Anthropic, etc. This is going to be fun. Much fun!


Step #0: What's an AI Pipe? (A refresher)

Pipe is your custom-built AI assistant. It's the fastest way to turn ideas into AI (apps/features).


Pipe lets you build AI assistants, features, and apps without thinking about servers, GPUs, RAG, or infra.

It is a high-level layer on top of Large Language Models (LLMs) that creates a personalized AI assistant for your queries and prompts. A pipe can leverage any LLM model, tools, and a knowledge base built from your datasets to assist with your queries.

Pipe can connect any LLM to any data to build any developer API workflow.



P → Prompt: Prompt engineering and orchestration.

I → Instructions: Instruction training (few-shot, persona, character, etc.).

P → Personalization: Knowledge base, variables, and a safety and anti-hallucination engine.

E → Engine: API engine with custom inference and enterprise governance.


Step #1: Create a Pipe

To get started with Langbase, you'll need to create a free personal account on Langbase.com and verify your email address. Done? Cool, cool!

  1. When logged in, you can always go to pipe.new to create a new Pipe.
  2. Give your Pipe a name. Let’s call it AI Title Generator. It's a generate-type pipe.
  3. Click on the [Create Pipe] button. And just like that, you have created your first Pipe.
Note

Start with a fork

You can also fork the AI Title Generator pipe we'll be creating in this guide by clicking on the Fork button. Forking a pipe is a great way to start experimenting with it.


Step #2: Using an LLM model

If you have set up LLM API keys in your profile, the Pipe will automatically use them. If not, just hit the Add LLM Key button or head over to Settings to add Pipe-level LLM API keys.

Let's add an LLM provider API key now.

  1. Go to the pipe Settings tab.
  2. In the LLM API Keysets section, choose any LLM. For example, you can use OpenAI (for GPT) or Together (for Llama, Mistral, etc.) or any other supported model on ⌘ Langbase.
  3. Click on the OpenAI [ADD KEY] button and add your LLM API key. Inside each key modal, you'll find a Get a new key from here link; click it to create a new API key on the provider's website.
Warning

Known issue with OpenAI Key

OpenAI expects you to add credits to your account to use their API. Sometimes it can take up to an hour or so for your OpenAI keys to start working. It's an OpenAI issue which they're working to fix.


Step #3: Build your Pipe: Configure the LLM model

Let's start building our pipe. Go back to the Pipe tab.

  1. Click on the gpt-3.5-turbo button to select and configure the LLM model for your Pipe.
  2. By default, OpenAI's gpt-3.5-turbo is selected. You can also pick any Llama or Mistral model.
  3. Choose one of the pre-configured presets for your model.
  4. You can also modify any of the model params. Learn more via the info icon next to each param name.

Step #4: Build your Pipe: Configure the Pipe's Meta

Use the Meta section to configure how your AI Title Generator Pipe should work.

  1. You can see it's a generate type pipe (set when creating the pipe).
  2. You can set the output format of the Pipe to JSON.
  3. Moderation mode can be turned on to filter out inappropriate content, as required by OpenAI.
  4. You can turn the streaming mode on and off.
  5. Turn off storing messages (input prompt and generated completion) for sensitive data like emails.

Step #5: Design a Prompt

Now that you have your LLM model and Pipe meta configured, it's time to design your prompt.


What is a Prompt?

A prompt is the input you provide to the AI model to generate the output.

Typically, a prompt starts a chat thread with a system message, then alternates between user and assistant messages. Prompt design is important. At Langbase, we have a few key components to help you design a prompt:


Prompt: System Instructions

A system message in a prompt acts as the set of instructions for the AI model.

  1. It sets the initial context and helps the model understand your intent.
  2. Now let's add a system instruction message. You can add this: You're a helpful AI assistant. Give me 5 title ideas for an article about {{Topic}}

Prompt: User Message

  1. Now let's add a user message. Click on the USER button to add a new message.
  2. You can add this: Generate 5 blog title ideas for an article about {{Topic}}

Prompt: Variables

  1. Any text written between double curly brackets {{}} becomes a variable.
  2. Variables section will display all your variable keys and values.
  3. Since you added a variable {{Topic}}, notice it appears in the Variables section.
  4. Now assign the Topic variable the value Large Language Models. The pipe will now replace {{Topic}} with this value in all messages.

✨ Variables allow you to use the same pipe to generate titles for any topic.
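
Under the hood, variable substitution is plain string templating. Here's a minimal sketch of how {{Topic}} might get replaced in each message — our own illustration, not Langbase's actual implementation:

```typescript
// Minimal sketch of {{variable}} substitution in prompt messages.
// Illustrative only — not Langbase's actual implementation.

interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Replace every {{Key}} occurrence with its value from the variables map.
function substituteVariables(
  messages: Message[],
  variables: Record<string, string>
): Message[] {
  return messages.map(message => ({
    ...message,
    content: message.content.replace(
      /\{\{(\w+)\}\}/g,
      (match, key) => variables[key] ?? match // unknown keys stay untouched
    ),
  }));
}

const messages: Message[] = [
  { role: 'system', content: "You're a helpful AI assistant. Give me 5 title ideas for an article about {{Topic}}" },
  { role: 'user', content: 'Generate 5 blog title ideas for an article about {{Topic}}' },
];

const resolved = substituteVariables(messages, { Topic: 'Large Language Models' });
// resolved[1].content → 'Generate 5 blog title ideas for an article about Large Language Models'
```

Because the substitution happens per message, the same variable can appear in the system message, user message, or both.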

Hit the Run button to see the AI generate titles for you.

👏 Congrats, you've created your first AI assistant to generate creative blog titles.


Prompt as Code

We're not writing code here, but if you were to write this prompt as code, it would look like this:

  1. Prompt is a messages array. Inside it are message objects.
  2. Each message object typically consists of two properties:
    1. role either "system", "user", or "assistant".
    2. content that you're sending or expecting the LLM to generate.
// Prompt example:
{
	messages: [
		{ role: 'system', content: 'You are a helpful assistant.' },
		{ role: 'user', content: 'Give me 5 title ideas' },
		{ role: 'assistant', content: 'Sure, here you go … …' }
	]
}
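
If you did want to write it out in TypeScript, the same structure could be typed like the sketch below. The type names are our own; only the shape mirrors the example above:

```typescript
// Typed sketch of the prompt structure described above.
// Type names are illustrative; the shape mirrors the example.

type Role = 'system' | 'user' | 'assistant';

interface Message {
  role: Role;
  content: string;
}

interface Prompt {
  messages: Message[];
}

const prompt: Prompt = {
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Give me 5 title ideas' },
  ],
};
```

Typing the role as a union catches misspelled roles at compile time instead of at request time.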

Step #6: AI Studio: Playground & Experimentation

Now that you have your Pipe ready, it's time to run and experiment with it in the Langbase AI Studio.

Note

Langbase provides the developer experience and infrastructure to build, collaborate, and deploy secure, composable AI apps. Our mission is to make AI accessible to everyone, not just AI/ML experts. Langbase is free for anyone to get started.

Langbase is your AI Studio: Our dashboard is your AI playground to build, collaborate, and deploy AI. It lets you experiment with your pipes in real-time with real data, store messages, version your prompts, and take your idea from prototype to production deployment (with predictions on usage, cost, and effectiveness). Langbase is a complete developer platform for AI.

  • Collaborate: Invite all team members to collaborate on the pipe. Build AI together.
  • Developers & Stakeholders: All of your R&D team, engineering, product, GTM (marketing, sales), and even stakeholders can collaborate on the same pipe. It's like a Google Doc x GitHub for AI. That's what makes it so powerful.

Step #7: Save and Deploy

Pipes can be saved as a sandbox or deployed to production.

  1. Deploy to Production: Make your changes available on the API (global, highly available, and scalable).
  2. Sandbox versions: You can save your changes without deploying them to production.
  3. Preview versions: Running your pipe with unsaved changes will create a new preview version.
  4. Version History: Use the version selector at the top left to go back to any deployed or sandbox version.

Pipe: Sandbox version

  1. When you make changes, a Draft fork of the current version (e.g. v1) is created but not saved.
  2. Press Save as Sandbox button to save your changes as a sandbox version.

Pipe: Deploy to production

  1. You can deploy any sandbox version or draft fork to production.
  2. Once you're ready, press the Deploy to Production button to make your changes available on the API.

✨ Woohoo! You've created and deployed your first AI pipe in production.


Step #8: Pipe API

Now that you have deployed your AI Title Generator Pipe, you can use it in your apps, websites, or literally anywhere you want.

  1. Go to the API tab.
  2. Retrieve your API base URL and API key.
  3. Make sure to never use your API key in client-side code. Always use it in server-side code. Your API key is like a password; keep it safe. If you think it's compromised, you can always regenerate it.

Using the API key and base URL, you can now make requests to your Pipe.

curl https://api.langbase.com/generate \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <PIPE_API_KEY>' \
-d '{
  "messages": [
    {
      "role": "user",
      "content": "Make the titles less wordy and more engaging"
    }
  ]
}'
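
The same request from server-side TypeScript might look like the sketch below. The endpoint and headers mirror the curl call above; the helper name and structure are our own illustration:

```typescript
// Sketch of calling the Pipe API from server-side TypeScript.
// Endpoint and headers mirror the curl example; helper names are illustrative.

const PIPE_API_URL = 'https://api.langbase.com/generate';

interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Build the fetch options for a generate request.
function buildGenerateRequest(apiKey: string, messages: Message[]): RequestInit {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`, // keep the key server-side only
    },
    body: JSON.stringify({ messages }),
  };
}

// Usage (server-side only — never ship the API key to the browser):
// const res = await fetch(PIPE_API_URL, buildGenerateRequest(process.env.PIPE_API_KEY!, [
//   { role: 'user', content: 'Make the titles less wordy and more engaging' },
// ]));
// const data = await res.json();
```

Reading the key from an environment variable keeps it out of your source code and out of client bundles.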

You can also send new values to the variables in your prompt. Note that the variable key for {{Topic}} is topic, and any dynamic value works. (JSON doesn't allow comments, so keep the payload comment-free.)

curl https://api.langbase.com/generate \
-H 'Content-Type: application/json' \
-H 'Authorization: Bearer <PIPE_API_KEY>' \
-d '{
  "messages": [
    {
      "role": "user",
      "content": "Make the titles less wordy and more engaging"
    }
  ],
  "variables": [
    {
      "name": "topic",
      "value": "Building AI with Langbase"
    }
  ]
}'

On the API tab, you can also find interactive API components to test your pipe in real-time.


Step #9: Usage

  1. In the Pipe tab, scroll down and expand the Runs section.
  2. Click on any row of the runs to see detailed logs.
  3. Here you can see detailed logs of all your pipe runs, including each request's cost, tokens, latency, etc.

For overall Pipe stats, navigate to the Usage tab.

  1. Here you can see the total number of requests, cost, and token usage.
  2. You can also check our AI prediction engine, which predicts the cost per million requests for your pipe.
  3. Finally, you can see the real-time run of your pipe in the Runs section again.
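
At its simplest, a per-million-requests prediction is an extrapolation from your average observed cost per request. A back-of-the-envelope sketch — our own illustration, not Langbase's actual prediction engine:

```typescript
// Back-of-the-envelope cost extrapolation.
// Illustrative only — not Langbase's actual prediction engine.

// Average the cost of observed runs, then scale to one million requests.
function predictCostPerMillion(runCostsUsd: number[]): number {
  if (runCostsUsd.length === 0) return 0;
  const total = runCostsUsd.reduce((sum, cost) => sum + cost, 0);
  const avgPerRequest = total / runCostsUsd.length;
  return avgPerRequest * 1_000_000;
}

// e.g. three runs costing $0.002, $0.003, and $0.0025:
// average is $0.0025 per request, so roughly $2,500 per million requests.
const prediction = predictCostPerMillion([0.002, 0.003, 0.0025]);
```

A real prediction engine would also weigh token counts, model pricing tiers, and traffic patterns; this just shows the core arithmetic.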

Congrats, you have created your first AI pipe. We're excited to see what you build with it.


Next steps

Feel free to experiment with different LLM models, prompts, and configurations.

  • Next up, use the API Reference to learn more about pipe's API.
  • You can also check out 20+ pipe features from the left sidebar.

Share your feedback and suggestions with us. Post on 𝕏 (Twitter), LinkedIn, or email us.

We're here to help you turn your ideas into AI.

Let's go!