Building Products with Generative AI
In this workshop, you will learn how to build a simple AI Assistant using Langbase Pipes and Memory tools. We'll then convert this AI Assistant into a product — an AI chatbot that can be shipped to users.
- Introduction to Generative AI
- What is Generative AI?
- Use cases and applications
- Langbase Pipes and Memory sets
- Building an AI Assistant with Pipes
- Creating and deploying a simple AI Assistant
- Few-shot training the AI Assistant (see the sketch after this outline)
- Converting the AI Assistant into a Product
- Designing and building an open-source AI chatbot using LangUI.dev
- Integrating the AI chatbot with Pipes and Memory
- Shipping the AI chatbot to users
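Few-shot training a Pipe means seeding the conversation with example question/answer pairs so the model mimics their tone and format. The snippet below is a minimal sketch of that idea as a plain chat messages array; the names and structure are illustrative, not the exact Pipe configuration you set on Langbase.

```ts
// Minimal sketch of few-shot priming: example Q/A pairs are sent as prior
// chat turns so the model copies their tone and format. This is a generic
// chat-messages array, not Langbase-specific configuration.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

const fewShotMessages: ChatMessage[] = [
  { role: 'system', content: 'You are a concise support assistant for our docs.' },
  // Example 1: shows the expected tone and length.
  { role: 'user', content: 'How do I reset my password?' },
  { role: 'assistant', content: 'Go to Settings > Security > Reset password, then follow the email link.' },
  // Example 2: shows how to decline out-of-scope questions.
  { role: 'user', content: 'Can you write my essay?' },
  { role: 'assistant', content: 'I can only help with questions about this product.' },
  // The real user question is appended after the examples.
  { role: 'user', content: 'How do I delete my account?' },
];
```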
Basic understanding of Web technologies (HTML, CSS, JavaScript, React, Next.js, APIs).
An AI Chatbot example to help you create chatbots for any use case. This chatbot is built with an AI Pipe on Langbase; it works with 30+ LLMs (OpenAI, Gemini, Mistral, Llama, Gemma, etc.), any data (10M+ context with Memory sets), and any framework (a standard web API you can call from any software).
Check out the live demo here.
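Because a Pipe is exposed over plain HTTP, you can call it from any language or framework with a standard `fetch`. Here is a minimal server-side sketch; the endpoint path and request/response shape are assumptions for illustration, so copy the exact request from the API tab of your forked Pipe.

```ts
// Minimal server-side sketch: call a Langbase Pipe over plain HTTP.
// NOTE: the endpoint path and body shape below are assumptions;
// copy the exact request from your Pipe's API tab on Langbase.
const PIPE_API_URL = 'https://api.langbase.com/v1/pipes/run'; // assumed endpoint

async function askPipe(question: string): Promise<string> {
  const res = await fetch(PIPE_API_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Keep the Pipe API key on the server only.
      Authorization: `Bearer ${process.env.NEXT_LB_PIPE_API_KEY}`,
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: question }],
      stream: false, // set to true for streamed responses
    }),
  });

  if (!res.ok) throw new Error(`Pipe request failed: ${res.status}`);

  // Response shape is assumed; adjust to what your Pipe actually returns.
  const data = await res.json();
  return data.completion ?? JSON.stringify(data);
}

askPipe('What can this chatbot do?').then(console.log).catch(console.error);
```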
- 💬 AI Chatbot — Built with an AI Pipe on ⌘ Langbase
- ⚡️ Streaming — Real-time chat experience with streamed responses (a client-side sketch follows this list)
- 🗣️ Q/A — Ask questions and get pre-defined answers with your preferred AI model and tone
- 🔋 Responsive and open source — Works on all devices and platforms
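On the client, streaming comes down to reading the chat route's response body incrementally instead of waiting for the full payload. Below is a minimal browser-side sketch, assuming the app exposes a `/api/chat` route that streams plain text chunks; the route name and payload are assumptions and may differ in the example repo.

```ts
// Minimal sketch of consuming a streamed chat response in the browser.
// Assumes an /api/chat route that streams plain text chunks; the actual
// route name and payload in the example repo may differ.
async function streamChat(prompt: string, onChunk: (text: string) => void) {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });

  if (!res.ok || !res.body) throw new Error(`Chat request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();

  // Read chunks as they arrive and hand them to the UI.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Usage: append each chunk to the chat window as it streams in.
streamChat('Hello!', (chunk) => console.log(chunk));
```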
- Check the AI Chatbot Pipe on ⌘ Langbase
- Read the source code on GitHub for this example
- Go through the documentation: Pipe Quick Start
- Learn more about Pipes & Memory features on ⌘ Langbase
Let's get started with the project:
To get started with Langbase, you'll need to create a free personal account on Langbase.com and verify your email address. Done? Cool, cool!
- Fork the AI Chatbot Pipe on ⌘ Langbase.
- Go to the API tab to copy the Pipe's API key (to be used on server-side only).
- Download the example project folder from here or clone the repository.
- `cd` into the project directory and open it in your code editor.
- Duplicate the `.env.example` file in this project and rename it to `.env.local`.
- Add the following environment variables:
```sh
# Replace `PIPE_API_KEY` with the copied API key.
NEXT_LB_PIPE_API_KEY="PIPE_API_KEY"
```

```sh
# Install the dependencies using the following command:
npm install

# Run the project using the following command:
npm run dev
```
Your app template should now be running on localhost:3000.
NOTE: This is a Next.js project, so you can build and deploy it to any platform of your choice, like Vercel, Netlify, Cloudflare, etc.
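The `NEXT_LB_PIPE_API_KEY` variable is read on the server so the key never reaches the browser. Below is a minimal sketch of a Next.js App Router route handler that proxies chat requests to the Pipe; the file path, endpoint URL, and body shape are assumptions, so treat this as an outline rather than the example repo's actual implementation.

```ts
// app/api/chat/route.ts (illustrative path; the example repo may differ)
// Sketch of a Next.js route handler that proxies chat requests to a
// Langbase Pipe, keeping NEXT_LB_PIPE_API_KEY on the server.
import { NextRequest, NextResponse } from 'next/server';

export async function POST(req: NextRequest) {
  const { messages } = await req.json();

  // Endpoint path and body shape are assumptions; copy the exact request
  // from your Pipe's API tab on Langbase.
  const upstream = await fetch('https://api.langbase.com/v1/pipes/run', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.NEXT_LB_PIPE_API_KEY}`,
    },
    body: JSON.stringify({ messages, stream: true }),
  });

  if (!upstream.ok || !upstream.body) {
    return NextResponse.json({ error: 'Pipe request failed' }, { status: 502 });
  }

  // Pass the streamed body straight through to the browser.
  return new NextResponse(upstream.body, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}
```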
This project was created by Langbase (@LangbaseInc) team members, with contributions from:
- Ahmad Awais (@MrAhmadAwais) - Founder & CEO, Langbase
- Ahmad Bilal (@AhmadBilalDev) - Founding Engineer, Langbase
Built by ⌘ Langbase.com — Ship composable AI with hyper-personalized memory!