

    RAG Chatbot by ⌘ Langbase

    License: MIT · Fork to ⌘ Langbase

    Build a RAG Chatbot with Pipes — ⌘ Langbase

    A RAG Chatbot example to help you build and deploy a chatbot that talks to your documents. This chatbot is built using an AI Pipe and Memory on Langbase. It works with 30+ LLMs (OpenAI, Gemini, Mistral, Llama, Gemma, etc.), any data (10M+ context with Memory sets), and any framework (a standard web API you can use with any software).
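    Because the Pipe is exposed as a standard web API, any backend that can make an HTTP request can use it. The TypeScript sketch below is illustrative only: the endpoint URL, request body, and response shape are assumptions rather than the documented Langbase contract, so check the Pipe Quick Start for the exact API.

    ```ts
    // Minimal server-side sketch of calling a Langbase Pipe over HTTP.
    // NOTE: the endpoint URL and payload/response shapes below are assumptions
    // for illustration only; consult the Langbase Pipe docs for the real contract.

    const PIPE_API_URL = "https://api.langbase.com/v1/pipes/run"; // assumed endpoint

    export async function askPipe(question: string): Promise<string> {
      const res = await fetch(PIPE_API_URL, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // The Pipe API key stays on the server (see "Get started" below).
          Authorization: `Bearer ${process.env.NEXT_LB_PIPE_API_KEY}`,
        },
        body: JSON.stringify({
          messages: [{ role: "user", content: question }], // assumed request shape
        }),
      });

      if (!res.ok) {
        throw new Error(`Pipe request failed: ${res.status} ${await res.text()}`);
      }

      const data = await res.json();
      // Assumed response shape: an OpenAI-style chat completion object.
      return data.choices?.[0]?.message?.content ?? "";
    }
    ```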

    Features

    • 💬 Built with a Memory and Pipe on ⌘ Langbase
    • ⚡️ Streaming — Real-time chat experience with streamed responses (see the client-side sketch after this list)
    • 🗣️ Q/A — Ask questions and get answers from the documents you upload
    • 🔋 Responsive and open source — Works on all devices and platforms
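    On the client, streaming usually amounts to reading the response body incrementally and rendering each chunk as it arrives. Here is a minimal browser-side sketch, assuming a hypothetical /api/chat route that streams plain-text chunks; the example project's actual route name and wire format may differ.

    ```ts
    // Browser-side sketch: consume a streamed chat response chunk by chunk.
    // The /api/chat route and plain-text chunk format are assumptions for illustration.

    export async function streamAnswer(
      question: string,
      onChunk: (text: string) => void,
    ): Promise<void> {
      const res = await fetch("/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ question }),
      });

      if (!res.ok || !res.body) {
        throw new Error(`Chat request failed: ${res.status}`);
      }

      // Read the body incrementally so the UI can show tokens as they arrive.
      const reader = res.body.getReader();
      const decoder = new TextDecoder();

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        onChunk(decoder.decode(value, { stream: true }));
      }
    }
    ```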

    Learn more

    1. Check the RAG Chatbot Pipe on ⌘ Langbase
    2. Read the source code on GitHub for this example
    3. Go through Documentation: Pipe Quick Start
    4. Go through Documentation: Memory Quick Start
    5. Learn more about Pipes & Memory features on ⌘ Langbase

    Get started

    Let's get started with the project:

    To get started with Langbase, you'll need to create a free personal account on Langbase.com and verify your email address. Done? Cool, cool!

    1. Fork the RAG Chatbot Pipe on ⌘ Langbase.
    2. Create a Memory on Langbase, upload the document you want to talk to, and attach it to the Pipe you just forked.
    3. Go to the API tab to copy the Pipe's API key (to be used on server-side only; see the route handler sketch after these steps).
    4. Download the example project folder from here or clone the repository.
    5. cd into the project directory and open it in your code editor.
    6. Duplicate the .env.example file in this project and rename it to .env.local.
    7. Add the following environment variables:
    ```sh
    # Replace `PIPE_API_KEY` with the copied API key.
    NEXT_LB_PIPE_API_KEY="PIPE_API_KEY"

    # Install the dependencies using the following command:
    npm install

    # Run the project using the following command:
    npm run dev
    ```

    Your app template should now be running on localhost:3000.
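    Because the Pipe API key must only be used server-side (step 3), chat requests from the browser should go through a server route that reads NEXT_LB_PIPE_API_KEY from the environment and forwards the request to the Pipe. Below is a hedged sketch of such a route; the file path app/api/chat/route.ts, the Pipe endpoint URL, and the payload fields are assumptions for illustration, not the example project's actual code.

    ```ts
    // Hypothetical app/api/chat/route.ts (Next.js App Router route handler).
    // Keeps NEXT_LB_PIPE_API_KEY on the server and proxies chat requests to the Pipe.
    // The Pipe endpoint URL and payload fields are assumptions for illustration.

    export async function POST(request: Request) {
      const { question } = await request.json();

      const upstream = await fetch("https://api.langbase.com/v1/pipes/run", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // Never exposed to the client; read from .env.local or the host's env settings.
          Authorization: `Bearer ${process.env.NEXT_LB_PIPE_API_KEY}`,
        },
        body: JSON.stringify({
          stream: true, // assumed flag to request a streamed completion
          messages: [{ role: "user", content: question }],
        }),
      });

      if (!upstream.ok || !upstream.body) {
        return new Response("Upstream Pipe request failed", { status: 502 });
      }

      // Pass the upstream stream straight through to the browser.
      return new Response(upstream.body, {
        headers: { "Content-Type": "text/plain; charset=utf-8" },
      });
    }
    ```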

    NOTE: This is a Next.js project, so you can build and deploy it to any platform of your choice, like Vercel, Netlify, Cloudflare, etc.


    Authors

    This project was created by Langbase team members and contributors.

    Built by ⌘ Langbase.com — Ship hyper-personalized AI assistants with memory!