Create Your Own AI Travel Planner: A Step-by-Step Guide
Hey There! I Built Something Cool 🚀
I’m super excited to share this with you because, well, technology is fun! And trust me, learning new stuff is always better when you’re actually building something cool. So, let’s dive in!
🎉 What Did I Build?
I’ve been learning a ton about front-end, back-end, and everything in between. Naturally, I wanted to put all of it into one project. The result? A travel planner web app! 🏖️
Forget spending hours online making notes about “top places to visit” or “best things to do.” Let the AI (yep, those smart LLMs) do all the heavy lifting for you. They’re trained on all the data out there, so in just one click, you’ll have a full travel plan ready. (Yeah, they definitely know more than you, me, and all of us combined 😅).
🛠️ Tech Stack:
Here’s the tech that powered this project:
- React JS – My favorite front-end library 🥲
- Tailwind CSS – Once you get the hang of Tailwind, plain CSS feels ancient 🗿
- Shadcn UI – A cool new front-end UI library ❤️
- React Router Dom – For that smooth routing action
- React Markdown – Because markdown is life
- Supabase – The magic in the backend
- Gemini API – Generates those awesome travel plans with AI
How It Works:
Let me walk you through my app. It's actually pretty simple:
- Input: The user enters the city and the number of days they plan to stay.
- AI Processing: This data is sent to the Gemini AI API using a custom prompt.
- Response: The AI responds with a fully curated travel plan in markdown, which is then rendered in the app.
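In code, the first two steps boil down to building a prompt string from the two inputs. A minimal sketch (`buildPrompt` is a hypothetical helper name, not the app's actual code):

```javascript
// Hypothetical helper: turn the user's two inputs into the prompt text
// that gets sent to the Gemini API.
function buildPrompt(city, days) {
  return `Plan a ${days}-day trip itinerary to ${city}, formatted in markdown.`;
}

console.log(buildPrompt("Tokyo", 3));
// → Plan a 3-day trip itinerary to Tokyo, formatted in markdown.
```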
Sounds simple, right? I’ll break it down a bit more below, but first, let’s talk about some of the challenges I faced. 😅
🏃‍♂️ First Hurdle: API Hunting
Finding the right API for my travel planner was a bit of a journey. Here are some of the challenges I faced:
1. API Costs:
Most LLM APIs are quite expensive, and free versions often come with strict usage limits. This makes it crucial to find an API that offers a generous free tier.
2. Different Models, Different Responses:
Every LLM model is trained differently, so their responses vary widely. Testing several APIs was an eye-opener. Some offered fantastic results; others, not so much.
My API Journey: (Or, How I Survived the API Maze)
1. OpenAI API:
My initial choice was the OpenAI API. But their free tier’s limits were brutal—I couldn’t even complete a single request without hitting those limits. 😭
2. AWAN LLM API:
Next, I stumbled upon the AWAN API. It was promising—unlimited tokens and a generous number of API requests. Everything was working smoothly… until one morning, the API failed. Yep, their backend crashed. I guess I can take credit for pushing it to its limits 💪 (just kidding!).
Also, their responses suggested some absurdly overpriced travel options, which was both hilarious and bizarre.
3. Google Gemini API:
Finally, I landed on Google’s Gemini API, and I love it! ❤️ It’s simple, intuitive, and reliable. After testing multiple models and reading all that documentation, it finally felt like I found the right one.
Setting Up the Google Gemini API:
Let me walk you through how you can set up the Google Gemini API for your own projects. 🎉
Step 1: Get the API Key
First things first, grab an API key from Gemini API docs. And remember, store it securely—don’t hardcode it in your project! Use environment variables like a pro. 😎
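For example, with Vite you'd keep the key in a `.env` file at the project root (the variable name here is just an example, and anything prefixed with `VITE_` is exposed to the client code):

```
# .env — keep this file out of version control (add it to .gitignore)
VITE_GEMINI_API=your-api-key-here
```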
Step 2: Install the Client SDK
If you’re in a Node.js environment, you’ll need to install the SDK with:
npm install @google/generative-ai
Step 3: Import the Library
Here’s how you import the Gemini API library into your project:
const { GoogleGenerativeAI } = require("@google/generative-ai");
// or if you're using ES6 imports:
import { GoogleGenerativeAI } from "@google/generative-ai";
Step 4: Start Using the API
// import the GoogleGenerativeAI class
import { GoogleGenerativeAI } from "@google/generative-ai";
// Create a new instance of GoogleGenerativeAI with the API key
const genAI = new GoogleGenerativeAI(process.env.API_KEY);
// Specify which LLM model we want to use
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
// The prompt is what we’re sending to the AI
const prompt = "Write a story about a magic backpack.";
// Send the prompt to the model and wait for the response
const result = await model.generateContent(prompt);
// Get the actual text from the response and print it
console.log(result.response.text());
Breaking It Down:
- GoogleGenerativeAI class: We import this class to get access to the Gemini AI's functionality.
- Instance with API key: You need your API key to make authorized requests to Gemini; we pass it in when creating a new GoogleGenerativeAI instance.
- Choosing the model: We specify which AI model we want to use. Different models are optimized for different tasks, and here we use "gemini-1.5-flash".
- Sending the prompt: The generateContent() method takes our prompt and sends it to the model. We then wait for the AI's response.
- Getting the response: The .text() method extracts the actual text of the response. This is where you'll see the AI's output, in this case the story about the magic backpack.
Pro tip: You can log the entire result object to see more details like token usage. This helps you understand how much of your quota each request uses. Super handy when you're fine-tuning your API calls!
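For instance, the SDK's response carries a usageMetadata field with token counts (field names below mirror the @google/generative-ai SDK; verify them against the version you install). A small sketch using a mock response object instead of a live API call:

```javascript
// Pull token counts out of a Gemini response object. The usageMetadata
// shape mirrors the @google/generative-ai SDK; double-check the docs.
function summarizeUsage(response) {
  const { promptTokenCount, candidatesTokenCount, totalTokenCount } =
    response.usageMetadata;
  console.log(
    `prompt: ${promptTokenCount}, output: ${candidatesTokenCount}, total: ${totalTokenCount}`
  );
  return totalTokenCount;
}

// Mock object standing in for result.response:
const mockResponse = {
  usageMetadata: { promptTokenCount: 12, candidatesTokenCount: 200, totalTokenCount: 212 },
};
summarizeUsage(mockResponse); // logs the three counts
```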
💡 What I Learned (Things I Wish I Knew Sooner):
1. Rate Limits—LLMs Are Cool, But They’re Not Free 🤑
APIs have rate limits, so if you send too many requests, you’ll hit a cap. Make sure to read the documentation to understand these limits and avoid running into issues mid-project.
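One common way to cope with rate limits is to retry with exponential backoff. A minimal sketch (withRetry is a hypothetical helper of mine, not part of the Gemini SDK):

```javascript
// Retry an async call with exponential backoff: wait 1s, 2s, 4s, ...
// between attempts, and rethrow the error once retries are exhausted.
async function withRetry(fn, retries = 3, baseDelayMs = 1000) {
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt === retries - 1) throw error; // out of retries
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
    }
  }
}
```

You'd then wrap the API call, e.g. `withRetry(() => model.generateContent(prompt))`, so a transient 429 doesn't break the app.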
2. Tokens—Not the Fun Kind 🎮
LLMs break text into tokens. The more tokens, the longer (and more expensive) the response. Keep prompts concise to stay within token limits.
Here’s a helpful tokenizer tool to check how many tokens your text uses.
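If you just want a rough sense of prompt size without a tokenizer, a crude rule of thumb for English text is about four characters per token. This is only a heuristic, not the model's real tokenizer (the SDK also documents a countTokens method for exact counts):

```javascript
// Crude token estimate: ~4 characters per token for typical English text.
// Use the model's own tokenizer (e.g. the SDK's countTokens) for real numbers.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

console.log(estimateTokens("Plan a 5-day trip itinerary to Lisbon."));
```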
3. Temperature—Not About the Weather 🌡️
Temperature controls how “creative” or “predictable” the AI’s responses are. Higher temperatures (closer to 1) give more creative, sometimes unpredictable responses. Lower temperatures (closer to 0) give safe, reliable answers.
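As a quick illustration, here are two generationConfig presets showing the trade-off (the values are just examples, not the app's actual settings):

```javascript
// Two generationConfig presets (illustrative values, not from the app):
const factualConfig = { temperature: 0.1 };  // predictable, repeatable answers
const creativeConfig = { temperature: 0.9 }; // varied, more surprising answers

// You'd pass one of these when creating the model, e.g.:
// genAI.getGenerativeModel({ model: "gemini-1.5-flash", generationConfig: factualConfig });
```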
My Code Implementation:
Here's the function that handles sending user input to the Gemini API and receiving the AI-generated travel plan.
const prompt = `Plan a ${days}-day trip itinerary to ${city}.
Day Theme: [theme]
Morning: [activities, accommodations, food]
Afternoon: [activities, food]
Evening: [activities, accommodations, food]
General Tips: [travel, safety, culture]
Budget Breakdown: [accommodations, food, activities] in markdown.`;
// This prompt will be sent to the Gemini API
const search = async () => {
  // Validate the user input before doing any work
  if (!city) {
    alert("Please add a city");
    return;
  }

  // Initialize the API client with the environment variable storing the API key
  const geminiAI = new GoogleGenerativeAI(import.meta.env.VITE_GEMINI_API);

  // Select the generative model and configure response generation
  const model = geminiAI.getGenerativeModel({
    model: "gemini-1.5-flash",
    generationConfig: {
      candidateCount: 1, // Get only one response
      maxOutputTokens: 1500, // Limit the response length
      temperature: 0.7, // Adds a bit of creativity to AI responses
    },
  });

  try {
    // Show loading state during the API call
    setLoading(true);
    setProgress(0);
    showAlert(); // Custom function to notify users

    // Send the prompt to the AI model and wait for the result
    const result = await model.generateContent(prompt);

    // Set the travel plan with the AI's response text
    setTravelPlan(result.response.text());
  } catch (error) {
    console.error("Error occurred:", error); // Log errors for troubleshooting
  } finally {
    // Reset progress and form after the API call completes
    setProgress(100);
    resetForm();
    setLoading(false);
  }
};
Breaking It Down:
- Prompt customization: The prompt dynamically creates an itinerary from the user's input, using template literals (${days} and ${city}) to inject the relevant data. This allows highly customizable travel plans.
- API client initialization: The GoogleGenerativeAI client is initialized with the API key read from an environment variable, so the key stays secure and isn't hardcoded in the app.
- Model selection: The gemini-1.5-flash model is fast and well suited to generating high-quality, creative responses.
- Model configuration: The generationConfig controls how the model generates its response. The maxOutputTokens parameter limits the response length, while temperature tweaks the AI's creativity; higher values like 0.7 lead to more creative responses.
- Error handling: API calls can fail for many reasons (invalid keys, rate limits, etc.), so the try...catch block ensures errors are caught and logged properly. This keeps the app from breaking unexpectedly.
🎉 Final Thoughts:
Being a great engineer isn’t just about coding everything yourself; it’s about knowing how to leverage the tools available to you. Build upon what others have created, trust the process, and don’t hesitate to explore new technologies like Supabase and the Gemini API.
If you’re curious to see the code and dive deeper into the implementation, check out my GitHub repository. This way, you can explore the code directly and focus on the fascinating aspects of generative AI and its applications!
Let’s Connect! 🤝
I’d love to hear your thoughts or any questions you might have! Feel free to connect with me on LinkedIn. Let’s share our experiences and learn from each other in this exciting world of tech!
Thanks for reading, and go out there and build something amazing! 🌟
Written by Prashant Swaroop
I’m a curious coder and passionate storyteller, always eager to explore the magic behind technology. I find joy in the moments when everything clicks and understanding dawns. With a love for cinema and narratives, I aim to create projects that inspire change and spark imagination.