Build a composable AI Devin
A step-by-step guide to build an AI coding agent using Langbase SDK.
In this guide, we will build an AI coding agent, CodeAlchemist aka Devin, that uses multiple AI Pipes to:
- Analyze the user prompt to identify whether it is related to coding, database architecture, or is a random prompt.
- Based on the user prompt, decide whether to call the AI code Pipe or the database Pipe.
- Generate raw React code for code queries.
- Generate optimized SQL for database queries.

We will create a basic Next.js application that uses the Langbase SDK to connect to the AI Pipes and stream the final response back to the user.
Let's get started!
Step #0: Create a Next.js Application
To build the agent, we need to have a Next.js starter application. If you don't have one, you can create a new Next.js application using the following command:
npx create-next-app@latest code-alchemist
# or with pnpm
pnpm dlx create-next-app@latest code-alchemist
This will create a new Next.js application in the code-alchemist directory. Navigate to the directory and start the development server:
cd code-alchemist
npm run dev
# or with pnpm
pnpm run dev
Step #1: Install the Langbase SDK
Install the Langbase SDK in this project using npm or pnpm.
npm install langbase
# or with pnpm
pnpm add langbase
Step #2: Fork the AI Pipes
Fork the following AI Pipes in Langbase dashboard. These Pipes will power CodeAlchemist:
- Code Alchemist: The decision-maker Pipe. It analyzes the user prompt and decides which AI Pipe to call.
- React Copilot: Generates a single React component for a given user prompt.
- Database Architect: Generates optimized SQL for a query or an entire database schema.
When you fork a Pipe, navigate to the API tab located in the Pipe's navbar. There, you'll find API keys specific to each Pipe, which are essential for making calls to the Pipes using the Langbase SDK.
Create a .env.local file in the root directory of your project and add the following environment variables:
# Replace `PIPE_API_KEY` with the copied API key of Code Alchemist Pipe.
LANGBASE_CODE_ALCHEMY_PIPE_API_KEY="PIPE_API_KEY"
# Replace `PIPE_API_KEY` with the copied API key of React Copilot Pipe.
LANGBASE_REACT_COPILOT_PIPE_API_KEY="PIPE_API_KEY"
# Replace `PIPE_API_KEY` with the copied API key of Database Architect Pipe.
LANGBASE_DATABASE_ARCHITECT_PIPE_API_KEY="PIPE_API_KEY"
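The API route we build next reads these keys on the server through a getPipeApiKeys helper. The actual implementation ships in the utils directory of the CodeAlchemist source; a minimal sketch of such a helper, which reads the keys from the environment and fails fast if any are missing, could look like this:

```typescript
// Hypothetical sketch of utils/get-pipe-api-keys.ts; the actual
// CodeAlchemist implementation may differ. Reads the Pipe API keys
// from process.env and throws if any of them is missing.
export function getPipeApiKeys() {
  const {
    LANGBASE_CODE_ALCHEMY_PIPE_API_KEY,
    LANGBASE_REACT_COPILOT_PIPE_API_KEY,
    LANGBASE_DATABASE_ARCHITECT_PIPE_API_KEY
  } = process.env;

  if (
    !LANGBASE_CODE_ALCHEMY_PIPE_API_KEY ||
    !LANGBASE_REACT_COPILOT_PIPE_API_KEY ||
    !LANGBASE_DATABASE_ARCHITECT_PIPE_API_KEY
  ) {
    throw new Error('Missing Langbase Pipe API keys. Check your .env.local file.');
  }

  return {
    CODE_ALCHEMY_PIPE_API_KEY: LANGBASE_CODE_ALCHEMY_PIPE_API_KEY,
    REACT_COPILOT_PIPE_API_KEY: LANGBASE_REACT_COPILOT_PIPE_API_KEY,
    DATABASE_ARCHITECT_PIPE_API_KEY: LANGBASE_DATABASE_ARCHITECT_PIPE_API_KEY
  };
}
```

Throwing early here surfaces a misconfigured environment at request time instead of letting a Pipe call fail with an opaque authentication error.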
Step #3: Create a Generate API Route
Create a new file app/api/generate/route.ts. This API route will generate the AI response for the user prompt. Please add the following code:
import { callPipes } from '@/utils/call-pipes';
import { getPipeApiKeys } from '@/utils/get-pipe-api-keys';
import { validateRequestBody } from '@/utils/validate-request-body';

export const runtime = 'edge';

export async function POST(req: Request) {
  try {
    const { prompt, error } = await validateRequestBody(req);
    if (error || !prompt) {
      return new Response(JSON.stringify(error), { status: 400 });
    }

    const keys = getPipeApiKeys();
    const { stream, pipe } = await callPipes({
      keys,
      prompt
    });

    if (stream) {
      return new Response(stream.toReadableStream(), {
        headers: {
          pipe
        }
      });
    }

    return new Response(JSON.stringify({ error: 'No stream' }), {
      status: 500
    });
  } catch (error: any) {
    console.error('Uncaught API Error:', error.message);
    return new Response(JSON.stringify(error.message), { status: 500 });
  }
}
When the /api/generate endpoint is called, the request is first validated using the validateRequestBody function. If the validation passes, the Pipe API keys are retrieved from the .env.local file, and the callPipes function is called with the user prompt and the Pipe API keys.

The callPipes function calls the AI Pipes and returns a stream with the final AI response, which is then sent back to the user. We will define the callPipes function in the next step.

You can find all these functions in the utils directory of the CodeAlchemist source code.
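For illustration, here is a hypothetical sketch of the validateRequestBody helper (the real implementation in the utils directory may differ): it parses the JSON body and rejects requests that don't carry a non-empty string prompt.

```typescript
// Hypothetical sketch of utils/validate-request-body.ts; the actual
// CodeAlchemist implementation may differ.
export async function validateRequestBody(req: Request) {
  try {
    const { prompt } = await req.json();

    // the prompt must be a non-empty string
    if (typeof prompt !== 'string' || !prompt.trim()) {
      return { prompt: null, error: { message: 'Prompt is required.' } };
    }

    return { prompt, error: null };
  } catch {
    // the body was not valid JSON
    return { prompt: null, error: { message: 'Invalid request body.' } };
  }
}
```

Returning an error object instead of throwing keeps the route handler's happy path flat: it only has to check `error || !prompt` before proceeding.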
Step #4: Decision Maker Pipe
The Code Alchemist Pipe is a decision-making Pipe. It contains two LLM functions, runPairProgrammer and runDatabaseArchitect, which are called depending on the user's query.

Create a new file app/utils/call-pipes.ts. This file will contain the logic to call all the AI Pipes and stream the final response back to the user. Please add the following code:
import 'server-only';
import { Pipe } from 'langbase';

type Params = {
  prompt: string;
  keys: {
    REACT_COPILOT_PIPE_API_KEY: string;
    CODE_ALCHEMY_PIPE_API_KEY: string;
    DATABASE_ARCHITECT_PIPE_API_KEY: string;
  };
};

type ToolCall = {
  index: number;
  id: string;
  type: string;
  function: {
    name: string;
    arguments: string;
  };
};
First, we imported server-only to ensure that this file is only executed on the server. Next, we imported Pipe from the Langbase SDK.

Since we are writing TypeScript, we defined the Params type for the incoming function arguments and the ToolCall type.

Now let's define the callPipes function in the same call-pipes.ts file. Add the following code:
/**
 * Asynchronously processes the given prompt using Langbase.
 *
 * @param {Params} params - The parameters for the Langbase function.
 * @param {Params['keys']} params.keys - The Pipe API keys required by Langbase.
 * @param {string} params.prompt - The prompt to be processed.
 * @returns {Promise<{ stream: Stream, pipe: string } | unknown>} - A promise that resolves to an object containing the processed stream and the Pipe used, or an unknown value if a tool is called.
 */
export async function callPipes({ keys, prompt }: Params) {
  const codeAlchemistPipe = new Pipe({
    apiKey: keys.CODE_ALCHEMY_PIPE_API_KEY
  });

  const { stream } = await codeAlchemistPipe.streamText({
    messages: [{ role: 'user', content: prompt }]
  });

  const [streamForCompletion, streamForReturn] = stream.tee();

  let completion = '';
  let toolCalls = '';

  for await (const chunk of streamForCompletion) {
    completion += chunk.choices[0]?.delta?.content || '';
    toolCalls += JSON.stringify(chunk.choices[0]?.delta?.tool_calls) || '';

    // if toolCalls is not empty, break the loop
    if (toolCalls.length) {
      break;
    }
  }

  // if the completion is not empty, return the stream
  if (completion) {
    return {
      pipe: 'code-alchemist',
      stream: streamForReturn
    };
  }

  // if toolCalls is not empty, call the tool
  if (toolCalls) {
    const calledTool = JSON.parse(toolCalls) as unknown as ToolCall[];
    const toolName: string = calledTool[0].function.name;

    return await AI_PIPES[toolName]({
      prompt,
      keys
    });
  }
}
type AI_PIPES_TYPE = Record<string, ({ prompt, keys }: Params) => Promise<any>>;
// Pipes map
export const AI_PIPES: AI_PIPES_TYPE = {
runPairProgrammer,
runDatabaseArchitect
};
async function runPairProgrammer({ prompt, keys }: Params) {}
async function runDatabaseArchitect({ prompt, keys }: Params) {}
Here is a quick explanation of what's happening in the code above:
- Initialized codeAlchemistPipe with CODE_ALCHEMY_PIPE_API_KEY using the Langbase SDK.
- Called the streamText method of codeAlchemistPipe with the prompt as a message, triggering the Langbase AI Pipe, which returned a stream.
- Used tee to split the stream into two.
- Iterated over streamForCompletion, appending chunks to completion.
- If completion is not empty, returned streamForReturn, since no LLM function was called, meaning the user prompt was not related to code or SQL.
- If toolCalls isn't empty, broke the loop, as an LLM function was triggered.
- Parsed toolCalls to get the tool's name.
- Called the AI_PIPES map with toolName, passing the prompt and keys.
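The tee call is standard Web Streams behavior rather than anything Langbase-specific: it splits one stream into two independent branches that each receive every chunk, which is what lets us inspect the completion while keeping an untouched copy to return. A small standalone illustration, using node:stream/web from Node 18+:

```typescript
import { ReadableStream } from 'node:stream/web';

// A source stream emitting three chunks.
const source = new ReadableStream<string>({
  start(controller) {
    ['a', 'b', 'c'].forEach(chunk => controller.enqueue(chunk));
    controller.close();
  }
});

// tee() splits the stream into two branches; each receives every chunk.
const [left, right] = source.tee();

// Drain a stream into an array of chunks.
async function collect(stream: ReadableStream<string>) {
  const reader = stream.getReader();
  const chunks: string[] = [];
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    if (value !== undefined) chunks.push(value);
  }
  return chunks;
}

// Both branches yield ['a', 'b', 'c'].
const leftChunks = await collect(left);
const rightChunks = await collect(right);
```

Note that tee buffers internally, so fully consuming one branch (as callPipes does when it breaks early) does not starve the other.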
Step #5: React Copilot Pipe
The React Copilot Pipe is a code generation Pipe. It takes the user prompt and generates a React component based on it. It also writes clear and concise comments and uses Tailwind CSS classes for styling.
We already defined runPairProgrammer in the previous step. Let's write its implementation. Add the following code to the call-pipes.ts file:
/**
 * Runs the pair programmer AI Pipe.
 *
 * @param {Params} params - The parameters for running the pair programmer.
 * @param {string} params.prompt - The prompt for the pair programmer.
 * @param {Params['keys']} params.keys - The API keys for the pair programmer.
 * @returns {Promise<{ stream: Stream, pipe: string }>} - A promise that resolves to the stream from the AI Pipe and the Pipe's name.
 */
async function runPairProgrammer({ prompt, keys }: Params) {
  const reactCopilotPipe = new Pipe({
    apiKey: keys.REACT_COPILOT_PIPE_API_KEY
  });

  const { stream } = await reactCopilotPipe.streamText({
    messages: [
      {
        role: 'user',
        content: `${prompt}\n\nFramework: React`
      }
    ],
    variables: [
      {
        name: 'framework',
        value: 'React'
      }
    ]
  });

  return {
    stream,
    pipe: 'react-copilot'
  };
}
Here is a quick explanation of what's happening in the code above:
- Initialized reactCopilotPipe with REACT_COPILOT_PIPE_API_KEY using the Langbase SDK.
- Called the streamText method of reactCopilotPipe with the prompt as a message, triggering the Langbase AI Pipe, which returned a stream.
- Returned the stream with pipe set to react-copilot.
Step #6: Database Architect Pipe
The Database Architect Pipe generates SQL queries. It takes the user prompt and generates either a SQL query or a whole database schema. It automatically incorporates partitioning strategies if necessary, as well as indexing options, to optimize query performance.
We already defined runDatabaseArchitect in Step #4. Let's write its implementation. Add the following code to the call-pipes.ts file:
/**
 * Runs the database architect Pipe to process a prompt and retrieve the result.
 *
 * @param {Params} params - The parameters for running the database architect Pipe.
 * @param {string} params.prompt - The prompt to be processed by the Pipe.
 * @param {Params['keys']} params.keys - The API keys required for the Pipe.
 * @returns {Promise<{ stream: Stream, pipe: string }>} - A promise that resolves to the stream from the AI Pipe and the Pipe's name.
 */
async function runDatabaseArchitect({ prompt, keys }: Params) {
  const databaseArchitectPipe = new Pipe({
    apiKey: keys.DATABASE_ARCHITECT_PIPE_API_KEY
  });

  const { stream } = await databaseArchitectPipe.streamText({
    messages: [
      {
        role: 'user',
        content: prompt
      }
    ]
  });

  return {
    stream,
    pipe: 'database-architect'
  };
}
Here is a quick explanation of what's happening in the code above:
- Initialized databaseArchitectPipe with DATABASE_ARCHITECT_PIPE_API_KEY using the Langbase SDK.
- Called the streamText method of databaseArchitectPipe with the prompt as a message, triggering the Langbase AI Pipe, which returned a stream.
- Returned the stream with pipe set to database-architect.
Step #7: Build the CodeAlchemist
Now that we have all the Pipes in place, we can call the /api/generate endpoint to generate either a React component or a SQL query based on the user prompt.
import { fromReadableStream } from 'langbase';
import { FormEvent } from 'react';

// Note: `toast`, `setCompletion`, and `setLoading` come from the surrounding
// component, i.e. a toast library and React state hooks, respectively.
async function callLLMs({
  e,
  prompt
}: {
  prompt: string;
  e: FormEvent<HTMLFormElement>;
}) {
  e.preventDefault();
  try {
    // make a POST request to the API endpoint to call the AI Pipes
    const response = await fetch('/api/generate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ prompt })
    });

    // if the response is not ok, show an error toast
    if (!response.ok) {
      const error = await response.json();
      toast.error(error);
      return;
    }

    // read the response body as a stream
    if (response.body) {
      const stream = fromReadableStream(response.body);
      for await (const chunk of stream) {
        const content = chunk?.choices[0]?.delta?.content || '';
        content && setCompletion(prev => prev + content);
      }
    }
  } catch (error) {
    toast.error('Something went wrong. Please try again later.');
  } finally {
    setLoading(false);
  }
}
- We make a POST request to the /api/generate endpoint with the user prompt.
- If the response is not ok, we display an error toast.
- If the response is ok, we read the response body as a stream.
- We use fromReadableStream to convert the response body into an iterable stream.
- We use for await to iterate over the stream and get the content of each chunk.
- We append each chunk's content to the completion state.
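One detail worth noting: the /api/generate route also sends the name of the Pipe that produced the response in a pipe response header. The client can read it to decide how to present the completion, for example rendering a component preview for react-copilot output versus a SQL block for database-architect output. A minimal, hypothetical helper:

```typescript
// Hypothetical helper: reads the `pipe` header set by the /api/generate
// route so the UI knows which Pipe produced the stream. Falls back to
// 'code-alchemist' when the header is absent.
export function getPipeName(response: Response): string {
  return response.headers.get('pipe') ?? 'code-alchemist';
}
```

You could call this right after the `response.ok` check in callLLMs and store the result in component state alongside the completion.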
Live demo
You can try out the live demo of CodeAlchemist here.