Integrations: Configure Langbase with Next.js

Let's integrate an AI title generator pipe agent into a Next.js app.


In this integration guide, you will:

  • Set up a Next.js app.
  • Integrate Langbase SDK into your app via API route and Server Actions.
  • Generate title ideas for your next blog using Pipe.

Step #0

To build the agent, we need to have a Next.js starter app. If you don't have one, you can create a new app using the following command:

Initialize project

npx create-next-app@latest generate-titles

This will create a new Next.js application in the generate-titles directory. Navigate to the directory and start the development server:

Run the dev server

cd generate-titles && npm run dev

Step #1

Install the Langbase SDK in this project using npm or pnpm.

Install Langbase SDK

npm install langbase

Step #2

Every request you send to Langbase needs an API key. This guide assumes you already have one. If you don't, check the instructions below.

Create an .env file in the root of your project and add your Langbase API key.

.env

LANGBASE_API_KEY=xxxxxxxxx

Replace xxxxxxxxx with your Langbase API key.
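Because the route reads the key with a non-null assertion (`process.env.LANGBASE_API_KEY!`), a missing key fails only when the first request runs. A small helper can fail fast instead. `requireEnv` below is a hypothetical name of our own, not part of the Langbase SDK:

```typescript
// Hypothetical helper (not part of the Langbase SDK):
// read a required environment variable or throw immediately.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage inside the API route:
// const apiKey = requireEnv('LANGBASE_API_KEY');
```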

Step #3

If you have set up LLM API keys in your profile, the Pipe will automatically use them. If not, navigate to LLM API keys page and add keys for different providers like OpenAI, TogetherAI, Anthropic, etc.

Step #4

Go ahead and fork the AI title generator agent pipe in Langbase Studio. This agent generates 5 different titles on a given topic.

Step #5

Let's learn how to use the Langbase SDK in your app. You can call it either from a serverless Next.js API route or from a Server Action; this guide walks through the API route approach.

Let's create an API Route to generate titles using the Langbase SDK.

api/run-agent/route.ts

import { Langbase } from 'langbase';

export async function POST(req: Request) {
  try {
    // Get request body from the client.
    const body = await req.json();
    const variables = body.variables;
    const messages = body.messages;
    const shouldStream = body.stream;

    const langbase = new Langbase({
      apiKey: process.env.LANGBASE_API_KEY!
    });

    // STREAM
    if (shouldStream) {
      const { stream } = await langbase.pipes.run({
        stream: true,
        messages,
        variables,
        name: 'ai-title-generator'
      });

      return new Response(stream, { status: 200 });
    }

    // NOT STREAM
    const { completion } = await langbase.pipes.run({
      stream: false,
      messages,
      variables,
      name: 'ai-title-generator'
    });

    return Response.json(completion);
  } catch (error: any) {
    console.error('Uncaught API Error:', error);
    return new Response(JSON.stringify(error), { status: 500 });
  }
}

Let's take a look at the code above:

  1. Create a new file api/run-agent/route.ts in the app directory of your Next.js app.
  2. Write a function called POST that uses Langbase Pipe to generate titles.
  3. Get the variables, messages, and stream from the request body.
  4. Based on the stream value, call the langbase.pipes.run method with stream: true or stream: false.
  5. Return the response from the API route.
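When stream is true, the route returns the raw stream in the Response body. The Langbase SDK ships a getRunner helper that parses this stream for you; purely to illustrate the underlying idea, here is a minimal sketch using only standard web APIs, assuming the body were plain text chunks (the actual Langbase stream format is what getRunner handles):

```typescript
// Minimal sketch: read a text Response stream chunk by chunk.
// Uses only standard web APIs (ReadableStream, TextDecoder);
// the Langbase SDK's getRunner helper wraps a similar loop.
async function readTextStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void
): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let full = '';
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text); // e.g. append each chunk to React state as it arrives
  }
  return full;
}
```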

Step #6

Go ahead and add the following code to the app/page.tsx file to call the API route.

Response on client side without streaming

app/page.tsx
'use client'; import { useState } from 'react'; import { getRunner } from 'langbase'; export default function Home() { const [topic, setTopic] = useState(''); const [completion, setCompletion] = useState<string>(''); const [loading, setLoading] = useState(false); const handleGenerateCompletion = async () => { setLoading(true); try { const response = await fetch('/api/run-agent', { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ stream: false, messages: [{ role: 'user', content: topic }], variables: { topic: topic }, }) }) const completionData = await response.json(); const parsedData = JSON.parse(completionData); setCompletion(JSON.stringify(parsedData)); } catch (error) { console.error('Error generating completion:', error); } finally { setLoading(false); } }; return ( <main className="flex min-h-screen flex-col items-center justify-between p-24"> <div className="flex flex-col items-center"> <h1 className="text-4xl font-bold"> Generate Text Completions </h1> <p className="mt-4 text-lg"> Enter a topic and click the button to generate titles using LLM </p> <input type="text" placeholder="Enter a topic" className="mt-4 rounded-lg border border-gray-300 p-2 text-sm" value={topic} onChange={e => setTopic(e.target.value)} /> <button className="mt-4 rounded-lg bg-blue-500 p-2 text-white" onClick={handleGenerateCompletion} > Generate titles </button> {loading && <p className="mt-4">Loading...</p>} {completion && ( <div className="mt-4 w-full max-w-md"> <h2 className="text-xl font-semibold mb-2">Generated Titles:</h2> <ul className="space-y-2"> {completion} </ul> </div> )} </div> </main> ); }

Let's break down the code above:

  1. Create states for topic, completion, and loading.
  2. Create a function called handleGenerateCompletion that calls the API route.
  3. Use the fetch API to call the API route, passing the topic in both the messages and variables of the request body.
  4. Since stream is false here, read the JSON response and update the completion state with the generated titles.
  5. Display the generated titles in a list format.
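The completion state above renders as one string. If you want each title as its own list item, a naive helper could split the numbered output. `parseTitles` below is our own hypothetical name; it assumes the model returns titles in a `1. "..." 2. "..."` format and will misfire on titles that themselves contain a numbered pattern:

```typescript
// Hypothetical helper: split a numbered-list completion string into titles.
// Assumes output shaped like: 1. "First title" 2. "Second title" ...
function parseTitles(completion: string): string[] {
  return completion
    .split(/\s*\d+\.\s+/)                           // split on "1. ", "2. ", ...
    .map(part => part.trim().replace(/^"|"$/g, '')) // strip surrounding quotes
    .filter(part => part.length > 0);               // drop the empty leading piece
}
```

You could then map over `parseTitles(completion)` to render one `<li>` per title.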

Step #7

To run the Next.js app, execute the following command:

Run the Next.js app

npm run dev

Open your browser and navigate to http://localhost:3000. You should see the app running with an input field to enter a topic, a button to generate titles, and a paragraph to display the generated titles.

Give it a try by entering a topic and clicking the Generate titles button.

Here are example AI-generated titles for the topic Large Language Models:

1. "Unlocking the Power of Large Language Models"
2. "The Future of AI: Large Language Models Explained"
3. "Large Language Models: Transforming Communication"
4. "Understanding Large Language Models in AI"
5. "The Impact of Large Language Models on Society"