AI-Powered Banking Chatbot: Build with LangChain, LangDB.ai & RAG (Part 1)


In the fast-paced world of AI innovation, crafting a Chat Assistant isn't just about coding: it's about engineering an intelligent ecosystem that delivers dynamic responses, integrates seamlessly with vector databases, and maintains conversational memory for enhanced user interactions. Today, we're diving into how you can build a RAG (Retrieval-Augmented Generation) Conversational AI using Streamlit, LangChain, LangDB.ai, and ChromaDB.
This is just the beginning! Stay tuned for this two-part series, where we will guide you step by step through building a robust AI-powered Chat Assistant. And yes, you get a free starter pack with all the source code ready to go!
What’s on Our Agenda?
Here’s a sneak peek at what we’ll cover today:
Installation and Setup: Get your environment ready with the necessary dependencies.
Building a Simple Chatbot: Integrate LangChain, LangDB.ai, and ChromaDB for intelligent responses.
Adding Memory & Enhancing User Experience: Implement conversation history for a more natural flow.
Deploying with Streamlit: Run your chatbot with an intuitive UI.
Alternatively, you can follow along with our YouTube tutorial.
What’s the Theme of Our AI?
To keep things practical, we are building a Banking FAQ Assistant chatbot that answers user queries about loan options, interest rates, and general banking FAQs.
🏦 Conversational AI Theme: Banking FAQ & Loan Inquiry Bot
Capabilities:
Answer frequently asked questions about banking services
Provide details on various loan types and interest rates
Retain conversational memory for personalized banking guidance
Step-by-Step Guide to Get Started
Installation and Setup
Before we dive in, let’s set up our development environment.
Install Dependencies
Ensure you have Python installed, then proceed with the following:
pip install streamlit langchain langchain-openai langchain-community openai requests
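If you prefer to keep dependencies isolated, you can optionally create a virtual environment first (standard Python tooling, not specific to this tutorial):
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
Then run the pip install command above inside the activated environment.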
Building the LangChain Conversational AI
Setting Up the Core Components
import os
from os import getenv
import requests
import streamlit as st
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory
Here, we import the necessary modules:
os and getenv for reading the API key from environment variables.
Streamlit for the assistant's UI.
LangChain components (LLMChain, PromptTemplate) to manage the AI model and prompts.
ChatOpenAI, the OpenAI-compatible client we point at the LangDB.ai Gateway.
ConversationBufferMemory for maintaining chat history.
Load Environment Variables
os.environ["LANGDB_API_KEY"] = "your-langdb-api-key"
Make sure to update your project ID as well.
Since we are routing requests through LangDB, we set LANGDB_API_KEY to LangDB's API key instead of OPENAI_API_KEY.
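Hardcoding the key is fine for a quick demo, but a safer pattern is loading it from a .env file, for example with the python-dotenv package (an extra dependency, not in the install list above):
# .env file (hypothetical contents): LANGDB_API_KEY=your-langdb-api-key
from dotenv import load_dotenv

load_dotenv()  # reads LANGDB_API_KEY from .env into the process environment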
Creating the Prompt Template
PROMPT_TEMPLATE = """
You are a banking assistant specializing in answering FAQs about loans, interest rates, and general banking services.
If the user greets, respond with a greeting. If the user asks a question, provide an answer.
Use the following context too for answering questions:
{context}
Conversation History:
{history}
---
Answer the question based on the above context: {query}
"""
The Prompt Template provides structure to the assistant’s responses:
It greets users when necessary.
It uses contextual memory to fetch relevant banking information.
It provides structured responses based on the query and available context.
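To see exactly what the model will receive, you can render the template with sample values (plain Python string formatting; the context, history, and question below are made up):
# Fill the raw template string with illustrative values
filled = PROMPT_TEMPLATE.format(
    context="Home loans start at 6.5% APR.",
    history="Human: Hi!\nAI: Hello! How can I help?",
    query="What is the current home loan rate?",
)
print(filled)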
Initializing the Core AI Components
LANGDB_API_URL = "https://api.us-east-1.langdb.ai/your-project-id/v1"
llm = ChatOpenAI(
    base_url=LANGDB_API_URL,
    api_key=getenv("LANGDB_API_KEY"),
    model="gpt-4o-mini",  # Replace with the specific model name you are using
    timeout=10,  # Add a timeout of 10 seconds
)
memory = ConversationBufferMemory(
    memory_key="history",
    return_messages=True,
    input_key="query",
)
Here's what each component does:
LLM (ChatOpenAI): our gateway client for calling LangDB.ai models.
Memory (ConversationBufferMemory): retains chat history for continuity.
Replace your-project-id in LANGDB_API_URL with your own project ID; the GIF below shows where to find it.
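Before building the full chain, it can help to verify the gateway connection with a single standalone call (a minimal sketch; the prompt string is just an example):
# Quick smoke test: send one message through the LangDB gateway
test_reply = llm.invoke("Hello! What banking services do you offer?")
print(test_reply.content)  # the model's text response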
Building the Chatbot Chain
prompt_template = PromptTemplate(
    input_variables=["context", "history", "query"],
    template=PROMPT_TEMPLATE,
)
chain = LLMChain(llm=llm, prompt=prompt_template, memory=memory)
Here we chain the model with the prompt template and memory, allowing it to generate responses dynamically.
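You can exercise the chain directly before adding any UI (the queries below are illustrative; context stays empty until we add retrieval in Part 2):
# One-off test outside Streamlit to confirm the chain is wired up
answer = chain.run(context="", query="What types of loans do you offer?")
print(answer)

# The follow-up now sees the first exchange in {history} via memory
follow_up = chain.run(context="", query="Which of those has the lowest rate?")
print(follow_up)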
Deploying with Streamlit
Setting Up the UI
st.set_page_config(page_title="Banking Assistant", layout="wide")
st.title("Banking FAQ Assistant")
st.write("Ask questions about banking services, loan options, and interest rates!")
The code above sets up our Streamlit UI with a title and a short description.
Handling User Queries
user_input = st.text_input("Enter your query:")
send_button = st.button("Send")
Users can input their banking questions, and responses are triggered by clicking the Send button.
Processing the Query
if send_button:
    if user_input:
        try:
            context = ""
            response = chain.run(context=context, query=user_input)
            st.session_state.messages.append({"role": "user", "content": user_input, "is_user": True})
            st.session_state.messages.append({"role": "assistant", "content": response, "is_user": False})
            st.rerun()
        except Exception as e:
            st.error(f"Error generating response: {e}")
    else:
        st.warning("Please enter a valid query.")
This snippet does two things:
Generates a response using the LangChain chain.
Appends both the user message and the assistant's reply to Streamlit's session state, so the chat history survives reruns.
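If you want to confirm that memory is actually accumulating turns, you can inspect it directly (a debugging sketch, not part of the app):
# Peek at what ConversationBufferMemory has stored so far
print(memory.load_memory_variables({})["history"])
# With return_messages=True, this is a list of HumanMessage/AIMessage objects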
Let's have a look at the complete code:
import os
from os import getenv
import requests
import streamlit as st
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.memory import ConversationBufferMemory

# Constants
PROMPT_TEMPLATE = """
You are a banking assistant specializing in answering FAQs about loans, interest rates, and general banking services.
If the user greets, respond with a greeting. If the user asks a question, provide an answer.
Use the following context too for answering questions:
{context}
Conversation History:
{history}
---
Answer the question based on the above context: {query}
"""
LANGDB_API_URL = "https://api.us-east-1.langdb.ai/your-project-id/v1"  # Replace with your LangDB project id
os.environ["LANGDB_API_KEY"] = "your-api-key"

st.set_page_config(page_title="Banking Assistant", layout="wide")
st.title("Banking FAQ Assistant")
st.write("Ask questions about banking services, loan options, and interest rates!")

# Initialize LangChain LLM
llm = ChatOpenAI(
    base_url=LANGDB_API_URL,
    api_key=getenv("LANGDB_API_KEY"),
    model="gpt-4o-mini",  # Replace with the specific model name you are using
    timeout=10,  # Add a timeout of 10 seconds
)

# Memory for conversation history
memory = ConversationBufferMemory(
    memory_key="history",
    return_messages=True,
    input_key="query",
)

# Prompt Template for LangChain
prompt_template = PromptTemplate(
    input_variables=["context", "history", "query"],
    template=PROMPT_TEMPLATE,
)

# LangChain LLM Chain
chain = LLMChain(llm=llm, prompt=prompt_template, memory=memory)

# Chatbox implementation
st.subheader("Chatbox")

# Container for chat messages
chat_container = st.container()

# Function to display chat messages (user on the right, assistant on the left)
def display_message(message, is_user=True):
    alignment = "right" if is_user else "left"
    chat_container.markdown(
        f"<div style='text-align: {alignment}; padding: 10px; border-radius: 10px; margin: 5px;'>{message}</div>",
        unsafe_allow_html=True,
    )

# Initialize chat history in session state
if "messages" not in st.session_state:
    st.session_state.messages = []

# Display chat history
with chat_container:
    for chat in st.session_state.messages:
        display_message(chat["content"], is_user=chat["is_user"])

# User Input Section
user_input = st.text_input("Enter your query:", key="user_input")
send_button = st.button("Send")

if send_button:
    user_input = st.session_state.user_input.strip()  # Ensure the input is not empty or just whitespace
    if user_input:
        try:
            context = ""  # to be used in the next tutorial
            response = chain.run(context=context, query=user_input)
            # Update conversation history shown in the UI
            st.session_state.messages.append({"role": "user", "content": user_input, "is_user": True})
            st.session_state.messages.append({"role": "assistant", "content": response, "is_user": False})
            st.rerun()
        except requests.exceptions.Timeout:
            st.error("The request to the LLM timed out. Please try again.")
        except Exception as e:
            st.error(f"Error generating response: {e}")
    else:
        st.warning("Please enter a valid query.")
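Save the complete script (the filename app.py below is just an assumption) and launch it with Streamlit:
streamlit run app.py
Streamlit will open the chat UI in your browser, typically at http://localhost:8501.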
Final Thoughts: Scale Your AI with RAG!
Building a Banking FAQ chatbot with LangChain, LangDB, and ChromaDB enables users to access essential banking information effortlessly. By integrating memory and contextual awareness, this Conversational AI delivers clear and helpful responses.
🚀 What's Next? In Part 2 of this series, we'll dive into building a RAG pipeline with ChromaDB for more refined banking FAQs.