Workflow
Langbase Workflow helps you build multi-step agentic applications by breaking them into simple steps with built-in durable features like timeouts and retries.
Building AI applications often requires orchestrating multiple operations that depend on each other. Workflow enables you to:
- Orchestrate operations in both sequential and parallel execution patterns
- Create conditional execution paths based on previous step results
- Implement resilient processes with configurable retry strategies
- Set time boundaries to prevent operations from running indefinitely
- Track execution flow with detailed step-by-step logging
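For instance, a single step with a timeout and a retry policy could look like the sketch below. This is a minimal illustration only: the exact shape of the timeout and retries options is an assumption based on the features listed above, so check the SDK reference for the precise API.

Step with timeout and retries (sketch)
import {Workflow} from 'langbase';

async function main() {
	const workflow = new Workflow({debug: true});

	// NOTE: the timeout and retries option shapes below are illustrative assumptions.
	const data = await workflow.step({
		id: 'fetch_data', // hypothetical step id
		timeout: 5000, // abort the step if it runs longer than 5 seconds (assumed option)
		retries: {limit: 3, delay: 1000, backoff: 'exponential'}, // retry policy (assumed option)
		run: async () => {
			const res = await fetch('https://example.com/api/data'); // hypothetical endpoint
			if (!res.ok) throw new Error(`Request failed: ${res.status}`);
			return res.json();
		},
	});

	console.log(data);
}

main();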
Quickstart
This quickstart builds a simple email processing workflow that summarizes an email, analyzes its sentiment, and determines whether a response is needed.
Step #1: Generate Langbase API key
Every request you send to Langbase needs an API key. This guide assumes you already have one. If not, please check the instructions below.
Step #2: Set up your project
Create a new directory for your project and navigate to it.
Project setup
mkdir langbase-workflow
cd langbase-workflow
Initialize the project
Create a new Node.js project.
Initialize project
npm init -y
Install Langbase SDK
Install the Langbase SDK:
Install the SDK
npm i langbase
Install dotenv
Install the dotenv package for environment variables and the @types/node package:
Install other dependencies
npm i dotenv -D @types/node
Create an env file
Create a new file called .env and paste the following code:
- LANGBASE_API_KEY: Your Langbase API key
- LLM_API_KEY: Your LLM API key
Set your API keys
LANGBASE_API_KEY=<USER/ORG_API_KEY>
LLM_API_KEY=<LLM_API_KEY>
Step #3: Create a new workflow
Create a new file called workflow.ts and paste the following code.
In this example, we create an email processing workflow that summarizes the email, analyzes the sentiment, and determines if a response is needed.
Each step in the workflow requires at least:
- An id to identify the step
- A run function that contains the logic for the step

The run function can be an async function that returns a value or a promise. The returned value will be available to the next step in the workflow.
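For example, here is a minimal sketch of two chained steps where the first step's return value is used by the second. The step ids are hypothetical and unrelated to the email example that follows.

Minimal two-step sketch
import {Workflow} from 'langbase';

async function main() {
	const workflow = new Workflow({debug: true});

	// Step 1 returns a value...
	const greeting = await workflow.step({
		id: 'make_greeting', // hypothetical step id
		run: async () => 'Hello, world',
	});

	// ...which the next step can use directly.
	const shouted = await workflow.step({
		id: 'shout_greeting',
		run: async () => greeting.toUpperCase(),
	});

	console.log(shouted); // HELLO, WORLD
}

main();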
workflow.ts
import 'dotenv/config';
import {Langbase, Workflow} from 'langbase';

async function processEmail({emailContent}: {emailContent: string}) {
	// Initialize Langbase
	const langbase = new Langbase({
		apiKey: process.env.LANGBASE_API_KEY!,
	});

	// Create a new workflow
	const workflow = new Workflow({
		debug: true,
	});

	try {
		// Steps 1 & 2: Run summary and sentiment analysis in parallel
		const [summary, sentiment] = await Promise.all([
			workflow.step({
				id: 'summarize_email', // The id for the step
				run: async () => {
					const response = await langbase.agent.run({
						model: 'openai:gpt-4.1-mini',
						instructions: `Create a concise summary of this email. Focus on the main points,
						requests, and any action items mentioned.`,
						apiKey: process.env.LLM_API_KEY!,
						input: [{role: 'user', content: emailContent}],
						stream: false,
					});
					return response.output;
				},
			}),
			workflow.step({
				id: 'analyze_sentiment',
				run: async () => {
					const response = await langbase.agent.run({
						model: 'openai:gpt-4.1-mini',
						instructions: `Analyze the sentiment of this email. Provide a brief analysis
						that includes the overall tone (positive, neutral, or negative) and any notable
						emotional elements.`,
						apiKey: process.env.LLM_API_KEY!,
						input: [{role: 'user', content: emailContent}],
						stream: false,
					});
					return response.output;
				},
			}),
		]);

		// Step 3: Determine if a response is needed (using the results from previous steps)
		const responseNeeded = await workflow.step({
			id: 'determine_response_needed',
			run: async () => {
				const response = await langbase.agent.run({
					model: 'openai:gpt-4.1-mini',
					instructions: `Based on the email summary and sentiment analysis, determine if a
					response is needed. Answer with 'yes' if a response is required, or 'no' if no
					response is needed. Consider factors like: Does the email contain a question?
					Is there an explicit request? Is it urgent?`,
					apiKey: process.env.LLM_API_KEY!,
					input: [
						{
							role: 'user',
							content: `Email: ${emailContent}\n\nSummary: ${summary}\n\nSentiment: ${sentiment}\n\nDoes this email require a response?`,
						},
					],
					stream: false,
				});
				return response.output.toLowerCase().includes('yes');
			},
		});

		// Step 4: Generate a response if needed
		let response: string | null = null;
		if (responseNeeded) {
			response = await workflow.step({
				id: 'generate_response',
				run: async () => {
					const result = await langbase.agent.run({
						model: 'openai:gpt-4.1-mini',
						instructions: `Generate a professional email response. Address all questions
						and requests from the original email. Be helpful, clear, and maintain a
						professional tone that matches the original email sentiment.`,
						apiKey: process.env.LLM_API_KEY!,
						input: [
							{
								role: 'user',
								content: `Original Email: ${emailContent}\n\nSummary: ${summary}\n\nSentiment Analysis: ${sentiment}\n\nPlease draft a response email.`,
							},
						],
						stream: false,
					});
					return result.output;
				},
			});
		}

		// Return the results
		return {
			summary,
			sentiment,
			responseNeeded,
			response,
		};
	} catch (error) {
		console.error('Email processing workflow failed:', error);
		throw error;
	}
}

async function main() {
	const sampleEmail = `
Subject: Pricing Information and Demo Request

Hello,

I came across your platform and I'm interested in learning more about your product
for our growing company. Could you please send me some information on your pricing tiers?
We're particularly interested in the enterprise tier as we now have a team of about
50 people who would need access. Would it be possible to schedule a demo sometime next week?

Thanks in advance for your help!

Best regards,
Jamie
`;

	const results = await processEmail({emailContent: sampleEmail});
	console.log(JSON.stringify(results, null, 2));
}

main();
Step #4: Run the workflow
Run the example
npx tsx workflow.ts
This workflow efficiently processes an email through multiple steps:
- Email Summarization and Sentiment Analysis are run in parallel.
- After the summary and sentiment analysis are complete, the response determination step is run.
- If a response is needed, the response generation step is run.
By running email summarization and sentiment analysis in parallel, we optimize the workflow's execution time without sacrificing functionality. Each step still has access to the data it needs from previous steps.
Workflow output
{
"summary": "The sender is interested in learning more about the product for their
company of about 50 people. They are requesting pricing information, particularly for
the enterprise tier, and would like to schedule a demo next week.",
"sentiment": "positive",
"responseNeeded": true,
"response": "Subject: RE: Pricing Information and Demo Request
Hello Jamie,
Thank you for your interest in our platform! I'd be happy to provide you with
information about our pricing tiers and arrange a demo for your team.
For an enterprise-level organization with 50 users, our Enterprise tier would indeed
be the most suitable option. This package includes:
- Unlimited access for all team members
- Advanced security features
- Dedicated account manager
- Custom integrations
- Priority support
I've attached our complete pricing brochure with detailed information about all our
plans. For your team size, we offer custom pricing with volume discounts.
Regarding the demo, I'd be delighted to schedule one for next week. Could you please
let me know what days and times work best for you and your team? Our demos typically
take about 45 minutes and include time for questions.
If you have any specific features or use cases you'd like us to focus on during the demo,
please let me know so I can tailor the presentation to your needs.
Looking forward to hearing from you and showing you how our platform can benefit your
growing company.
Best regards,
[Your Name]
[Your Position]
[Contact Information]"
}
Next Steps
- Build something cool with Langbase APIs and SDK.
- Join our Discord community for feedback, requests, and support.