From Zero to AI: Launch Your First Free AI Chatbot with Python

Leo Bcheche
5 min read

Harness the power of LangChain and Groq to build intelligent AI assistants efficiently. LangChain streamlines AI application development by providing structured workflows, while Groq delivers high-performance AI acceleration for real-time interactions—all available for free.

  1. Introduction

Artificial Intelligence (AI) has become increasingly accessible, allowing developers to build powerful AI assistants without high costs. In this article, we'll guide you through setting up your first AI assistant for free using different models, including Groq and the LangChain library.


  2. What is LangChain?

LangChain is a powerful framework that helps developers build AI applications more easily and in an organized way. It allows them to combine different AI models, tools, and external data sources into a single workflow. With LangChain, developers can create various types of intelligent applications, such as:

  • Chatbots – Virtual assistants that interact with users.

  • RAG Systems (Retrieval-Augmented Generation) – A technique that enhances AI responses by retrieving information from databases before generating text. This makes answers more accurate, reduces errors, and allows access to up-to-date information in real time.

  • Autonomous Agents – Programs that make decisions on their own and perform tasks automatically.

  • Document Processors – Tools that analyze, extract, and organize information from large texts.

  • AI-Powered Search Engines – Systems that help find information faster with personalized responses.

In short, LangChain makes it easier to build advanced AI applications by organizing and connecting different AI components efficiently.
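As a rough analogy (plain standard-library Python, not LangChain's actual API), a RAG-style workflow is simply components wired so that each one's output feeds the next. The function names below are illustrative only:

```python
def retrieve(question: str) -> str:
    """Stand-in retriever: look up context for a question (here, one fixed fact)."""
    facts = {"capital of France": "Paris is the capital of France."}
    return facts.get(question, "No context found.")

def generate(context: str) -> str:
    """Stand-in generator: produce an answer from the retrieved context."""
    return f"Based on what I found: {context}"

def workflow(question: str) -> str:
    """A minimal 'RAG-style' pipeline: retrieval, then generation."""
    return generate(retrieve(question))

print(workflow("capital of France"))
# → Based on what I found: Paris is the capital of France.
```

LangChain's value is that it standardizes exactly this kind of wiring, so real retrievers, prompts, and models can be swapped in without rewriting the glue code.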


  3. What is Groq?

Groq is a company that creates technology to make artificial intelligence (AI) faster and more efficient. Instead of developing its own AI models, Groq provides specialized hardware that speeds up the execution of these models, enabling near-instant responses.

What does this mean in practice?

  • Groq helps run AI models much faster, like those used in chatbots and virtual assistants.

  • Its free API allows developers to test and use AI models at no cost, making it easier to experiment and build new applications.

  • Since Groq’s system is optimized for real-time interactions, it is perfect for applications that require quick responses, such as customer support, text generation, and intelligent assistants.

In summary, Groq provides technology that makes AI work more efficiently and accessibly, helping developers create faster and more powerful intelligent applications.


  4. Setting Up Your AI Assistant

  • Install Python and a Virtual Environment Tool

Python 3 ships with the built-in venv module, so the next step works out of the box. If you prefer the virtualenv package instead, install it with:

pip install virtualenv
  • Create and Activate a Virtual Environment

python -m venv myenv
myenv\Scripts\activate      # On Windows
source myenv/bin/activate   # On macOS/Linux
  • Install LangChain and Groq

pip install langchain langchain_groq
  • Get a Free Groq API Key

To use Groq’s AI models, you need an API key:

  1. Visit the Groq API website and sign up.

  2. Navigate to the API Keys section.

  3. Generate a new API key and copy it.

  4. Store the API key securely in your environment:
$env:GROQ_API_KEY="your_api_key_here"   # On Windows (PowerShell)
export GROQ_API_KEY="your_api_key_here" # On macOS/Linux
  5. Alternatively, hardcode it in your script (not recommended for code you share or commit):
api_key = "your_api_key_here"
  • Choose a model

  1. Access https://console.groq.com/docs/models

  2. Select a valid model
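Before moving on, you can confirm from Python that the key is actually visible to your scripts. This is a minimal standard-library check; the helper name get_groq_key is my own:

```python
import os

def get_groq_key():
    """Return the Groq API key from the environment, or None if it is not set."""
    return os.getenv("GROQ_API_KEY")

if get_groq_key() is None:
    print("GROQ_API_KEY is not set; export it before running the assistant.")
else:
    print("GROQ_API_KEY found.")
```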


  5. Create Your AI Assistant

import os
from langchain_groq import ChatGroq
from langchain.prompts import ChatPromptTemplate

# Read the key set earlier. ChatGroq also reads GROQ_API_KEY automatically,
# so api_key can be omitted if the environment variable is set.
api_key = os.getenv("GROQ_API_KEY")

# Initialize the AI chat model using the Groq API.
# Pick any model currently listed at https://console.groq.com/docs/models
chat = ChatGroq(model="mixtral-8x7b-32768", api_key=api_key)

# ChatPromptTemplate.from_messages() 
# This method in LangChain allows you to define a structured conversation flow.
#  
# It takes a list of messages, where each item is a tuple containing:
# -> The role of the message sender (e.g., 'user', 'system', or 'assistant').
# -> The content of the message, which may contain variables {text} and {language} 
#    to be dynamically replaced.

template = ChatPromptTemplate.from_messages(
    [
        ('system', 'You are an assistant who always answers as a crazy joker called JARVIS.'),
        ('user', 'Translate {text} to {language} language.')
    ]
)

# Display a welcome message for the user
print("WELCOME TO JARVIS! Your personal translator.")

# Process the translation chat loop. 
def translate():
    """Repeats the translation process until the user inputs an empty string."""
    while True:
        # Get user input for the target language
        language = input("\n[JARVIS]: Tell me the output language (Press Enter to exit): ").strip()
        if not language:
            print("[JARVIS]: Goodbye, human! 🤖")
            break  # Exit the loop if the input is empty

        # Get user input for the text to be translated
        text = input("[JARVIS]: Now, tell me something you want to translate (Press Enter to exit): ").strip()
        if not text:
            print("[JARVIS]: Farewell! See you next time. 🤖")
            break  # Exit the loop if the input is empty


        # The "|" operator: LangChain overloads it on its components (the LangChain
        # Expression Language), so composing two of them creates a chain where the
        # output of the left one becomes the input of the right one.
        # chain.invoke(inputs) below is equivalent to:
        # -> formatted_prompt = template.invoke({'text': text, 'language': language})
        # -> answer = chat.invoke(formatted_prompt)
        # Create a pipeline where the template formats the input and passes it to the chat model
        chain = template | chat

        # Execute the chain with the user-provided inputs
        answer = chain.invoke({'text': text, 'language': language}).content

        # Display the translated response from the AI assistant
        print(f"[JARVIS]: {answer}")

# Start the translation loop
translate()
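For the curious: the `|` used above works because LangChain's components overload Python's `__or__` method. The toy `Step` class below is a hypothetical plain-Python illustration of that mechanism, not LangChain's actual implementation:

```python
class Step:
    """A tiny stand-in for a LangChain component: wraps a function, supports `|`."""
    def __init__(self, func):
        self.func = func

    def __or__(self, other):
        # `a | b` builds a new Step that runs a, then feeds its output into b.
        return Step(lambda x: other.invoke(self.invoke(x)))

    def invoke(self, value):
        return self.func(value)

# A two-step "chain": format a prompt, then produce a (mock) answer.
prompt_step = Step(lambda d: f"Translate {d['text']} to {d['language']}.")
model_step = Step(lambda prompt: f"[answer to: {prompt}]")

toy_chain = prompt_step | model_step
print(toy_chain.invoke({'text': 'hello', 'language': 'French'}))
# → [answer to: Translate hello to French.]
```

The same pattern scales: because every composed object is itself a `Step`, chains of any length can be built with repeated `|`.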

  6. Running Your AI Assistant

python ai_assistant.py

  7. Conclusion

Congratulations! You've successfully built your own AI-powered translator with a unique and fun personality. Let's summarize what we've accomplished in this guide:

What We Did:

  • Set up the environment and prerequisites for using LangChain and the Groq API.

  • Retrieved the Groq API key and securely stored it.

  • Implemented the JARVIS translator using ChatGroq and ChatPromptTemplate.

  • Created a user interaction loop that enables continuous translation.

  • Utilized LangChain's pipeline operator to efficiently process translations.

What Can Be Improved?

  • Enhance Error Handling – Improve input validation and manage unexpected inputs gracefully.

  • Add More Personalization – Let users choose translation styles (formal, informal, humorous, etc.).

  • Expand Language Support – Implement automatic language detection and dynamic suggestions.

  • Integrate with a GUI – Create a web or desktop interface for a more user-friendly experience.

  • Change JARVIS's Specialty – Modify JARVIS to specialize in other tasks, such as storytelling or tutoring in various subjects.

With these enhancements, JARVIS can become an even more powerful and entertaining AI assistant.

Keep experimenting, keep coding, and most importantly…

Have fun!
