Enough theory; let's get our hands dirty!


Alright, fellow Gemini explorers, buckle up! In my last blog, we scratched the surface of the amazing things you can do with Vertex AI.
Now let's dive into the good stuff: actually using the Gemini APIs with Python and the ever-so-handy Langchain.
What's in Your Toolkit?
Before we embark on this coding adventure, make sure you have a few essentials:
A Python Virtual Environment: This is like creating a neat little sandbox for our project, keeping all our specific tools (packages) in one place without messing with your main Python setup. If you haven't got one, setting it up is a breeze. Most Python installations come with venv. Just navigate to your project directory in your terminal and type:
python -m venv gemini_blog_env
And to activate it:
On macOS and Linux:
source gemini_blog_env/bin/activate
On Windows:
.\gemini_blog_env\Scripts\activate
You'll know it's active when you see your environment's name in the terminal prompt.
A Few Key Packages: We'll need to invite some friends to our coding party. The main guests are:
langchain: The core Langchain framework itself, which the integration package builds on.
langchain-google-genai: This is the star player, allowing Langchain to talk to Google's Gemini models.
google-genai: The official Google AI Python SDK.
python-dotenv (optional but recommended): Super useful for managing your precious API key without hardcoding it.
Your Gemini API Key: This is your golden ticket to access the Gemini models. You can grab one from Google AI Studio. Keep it secret, keep it safe! (If you'd rather skip the .env file, you can also set it as a plain environment variable, as shown just below.)
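Quick aside: if you don't want a .env file, you can export the key directly in your shell before running the scripts. The code later in this post reads it with os.getenv, so either approach works:
On macOS and Linux:
export GOOGLE_API_KEY="YOUR_SUPER_SECRET_API_KEY_HERE"
On Windows (PowerShell):
$env:GOOGLE_API_KEY="YOUR_SUPER_SECRET_API_KEY_HERE"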
Let's Get Installing!
Assuming your virtual environment is up and running (you'll see its name in your terminal prompt), let's install those packages. Open your terminal and type:
pip install langchain langchain-google-genai google-genai python-dotenv
Pip, Python's package installer, will fetch and install everything for you.
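Once that finishes, a quick sanity check never hurts. This one-liner just confirms everything imports cleanly (note the import names differ slightly from the package names: google-genai is imported as google.genai, and python-dotenv as dotenv):
python -c "import langchain, langchain_google_genai, dotenv; from google import genai; print('All good!')"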
Time to Write Some Actual Code! (The Exciting Part!)
Alright, the stage is set. Let's get Langchain and Gemini to chat.
First, if you're using python-dotenv (which I highly recommend for keeping your API key secure), create a file named .env in your project directory and add your API key like this:
GOOGLE_API_KEY="YOUR_SUPER_SECRET_API_KEY_HERE"
Now, for the Python magic. Create a Python file (e.g., gemini_chat.py) and let's get coding:
import os
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, SystemMessage

# Load environment variables from the .env file
load_dotenv()

# Securely get your API key (optional if you set it directly)
# Make sure your GOOGLE_API_KEY is set in your environment or .env file
google_api_key = os.getenv("GOOGLE_API_KEY")
if not google_api_key:
    raise ValueError("GOOGLE_API_KEY not found in environment variables.")

# Initialize the Gemini LLM with Langchain
# You can choose different models like "gemini-2.0-flash" etc.
# Check the Google AI documentation for the latest model names and capabilities.
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash-preview-04-17", google_api_key=google_api_key)

# Define our roles with System and Human messages
system_prompt_text = """I am writing a series on Learning Gemini in form of blogs.
I am writing these blogs while I am learning myself.
You are an expert in using Python, Langchain and Gemini APIs.
Help me write blogs on topics that I give."""

user_prompt_text = "Write a short blurb on Google Gemini."

# Create the messages
messages = [
    SystemMessage(content=system_prompt_text),
    HumanMessage(content=user_prompt_text),
]

# Let's get the response!
response = llm.invoke(messages)
print("Assistant's Response:")
print(response.content)
Run this script from your activated virtual environment: python gemini_chat.py
And voila! You should see Gemini, guided by your system prompt, generating a blurb about itself. Talk about meta!
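A quick bonus before we move on: if you plan to ask for blurbs on lots of topics, Langchain's ChatPromptTemplate lets you turn the prompt into a reusable template instead of hardcoding each message. Here's a minimal sketch that extends gemini_chat.py (it reuses the llm and system_prompt_text defined above; the {topic} variable name is just my choice for illustration):

from langchain_core.prompts import ChatPromptTemplate

# A reusable template: the system prompt stays fixed,
# while the human message gets a fill-in-the-blank slot
prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt_text),
    ("human", "Write a short blurb on {topic}."),
])

# Pipe the prompt into the model and fill in the slot at invoke time
chain = prompt | llm
print(chain.invoke({"topic": "Google Gemini"}).content)

Swap in any topic you like at invoke time, and the system prompt tags along for free.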
Hold on, isn't this recursion? Déjà vu!
You got me! Asking an AI that I'm learning about to help me write a blog about learning that AI... but don't you worry your human heads; I'm still the one typing these blogs out, adding my own (questionable) humour and insights. No infinite AI loops here... yet!
Let's Get Streamy: Implementing Streaming Responses
Sometimes, you don't want to wait for the whole answer to generate. You want it to flow, like a good conversation. Langchain and Gemini support streaming responses beautifully.
Here's how you can modify the code to get a streaming response:
import os
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.messages import HumanMessage, SystemMessage

# Load environment variables from the .env file
load_dotenv()

google_api_key = os.getenv("GOOGLE_API_KEY")
if not google_api_key:
    raise ValueError("GOOGLE_API_KEY not found in environment variables.")

# No special constructor flag needed here: calling .stream() below
# is what switches the response to streaming mode.
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash-preview-04-17", google_api_key=google_api_key)

system_prompt_text = """I am writing a series on Learning Gemini in form of blogs.
I am writing these blogs while I am learning myself.
You are an expert in using Python, Langchain and Gemini APIs.
Help me write blogs on topics that I give."""

user_prompt_text = "Write a short blurb on Google Gemini, and make it snappy!"

messages = [
    SystemMessage(content=system_prompt_text),
    HumanMessage(content=user_prompt_text),
]

print("Assistant's Streaming Response:")
# .stream() yields the response chunk by chunk as it's generated
for chunk in llm.stream(messages):
    print(chunk.content, end="", flush=True)
print()  # For a new line at the end
When you run this, you'll see the response appear chunk by chunk, which is pretty neat for more interactive applications.
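One more trick for the road: if you're wiring this into an async app (say, a web backend), the same chat model also exposes astream, the async counterpart of stream. Here's a minimal sketch, reusing the llm and messages from the script above:

import asyncio

async def stream_response():
    # astream yields chunks asynchronously, keeping the event loop responsive
    async for chunk in llm.astream(messages):
        print(chunk.content, end="", flush=True)
    print()

asyncio.run(stream_response())

Same streamed output, but now it plays nicely alongside other async work.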
And there you have it! Our first foray into coding with Gemini and Langchain. We've set up our environment, installed the necessary tools, had a (slightly recursive) chat with Gemini, and even made it stream its wisdom.
All the code from this post is available in the GitHub repo: https://github.com/Sirsho29/gemini_blog
Stay tuned for the next blog, where we'll dive deeper into more advanced features. Until then, happy coding, and don't let the AIs write all your content!