How I Built a SaaS in a Weekend with AI as my Co-founder

Lakshay Gupta

In this age of digital connectivity, you can be connected with people around the globe yet still feel all alone. Every night when the room went silent, the voices in my head became the loudest. I wanted to feel heard. Writing my thoughts down felt like dropping messages in bottles into a vast ocean: I kept sending them, but no one ever responded. This is the story of how I built JurnAI, a platform designed to help people feel heard using AI, driven by my passion for building products.

TL;DR: This blog covers how I built a SaaS product from ideation to production, how I incorporated AI into my workflows, and the honest limitations I encountered along the way.

Understanding the Why of Building

At times, I felt lonely dealing with problems in my personal life. One might suggest sharing them with friends or family, but what if I told you they all have their own problems to deal with? In that situation, daily journaling came like a burst of rain on the desert of my thoughts. Suddenly, I started feeling a lot lighter. The voices found an opening and slowly started to vacate my mind. But not for long. At some point, when shit hit the fan, the daily journaling started feeling pointless. I wanted to feel heard. Writing my thoughts in a diary felt like dropping messages in bottles into a vast ocean: I kept sending them, but no one ever responded. That's when it hit me: what if I could build something to solve this problem? A friend who would listen to my thoughts and offer me blunt, honest advice every morning. JurnAI was born from the idea of helping others like me feel heard, supported, and loved. Ironically, the solution to my digital loneliness was a digital assistant. A few years ago, building something like this would have been a dream requiring enormous effort. Thanks to AI, I developed the first prototype over a weekend.

In the following sections, I'll break down the exact 8-step process I followed, the AI tools I used at each stage, and the honest limitations I encountered along the way.

Step 1: Idea Refinement

I started with an end goal: to make people's mornings happier and make them feel loved by using their journal entries from the night before. I had little clarity about how I was going to do it. I'm a software engineer by profession, so I leaned towards a web app-based solution. To make sense of my idea, I turned to Perplexity, using its deep research mode to learn more about the space and explore possible solutions. To my surprise, it did a good job of articulating the steps I needed to follow to achieve my goal. Perplexity researched online journaling, how people use it in their day-to-day lives, and what makes it hard for them to keep doing it. It then suggested how I could structure my web app, down to the tech stack I could use. In corporate terms, this is the equivalent of a PRD, or Product Requirements Document: it captures the requirements and breaks them down into smaller tasks describing what the end product needs to do. These can be seen as functional requirements.

Screenshot of a webpage titled "Brainstorming Session with Perplexity" showing a recommended free tool stack for developers. It includes services like Vercel, Railway, Neon PostgreSQL, Supabase Auth, and Mailgun, with details on free tier benefits and integration. The background is a gradient of red to yellow.
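One item from that suggested stack stuck with me: Mailgun, for delivering the morning messages. As a flavor of how little code that piece needs, here is a minimal sketch against Mailgun's standard HTTP messages API (the domain, key, and function name are placeholders, not from the actual project):

```python
import os

import requests


def send_morning_message(to_addr: str, body: str) -> None:
    """Send one morning note via Mailgun's messages endpoint."""
    domain = os.environ["MAILGUN_DOMAIN"]    # placeholder env vars
    api_key = os.environ["MAILGUN_API_KEY"]
    resp = requests.post(
        f"https://api.mailgun.net/v3/{domain}/messages",
        auth=("api", api_key),  # Mailgun uses HTTP basic auth with key "api"
        data={
            "from": f"JurnAI <morning@{domain}>",
            "to": [to_addr],
            "subject": "Your morning note",
            "text": body,
        },
    )
    resp.raise_for_status()
```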

Pros: Deep research mode saved the time it would have taken to explore existing solutions, identify the gaps, and work out what exactly needed to be built.
Cons: The solutions it suggests may not match what you actually need, so expect a bit of back and forth.

Finally, since I was going to be working with AI, I asked Perplexity to document the requirements in a markdown file, which is very helpful when working with AI agents while building the actual product.

Step 2: Tech Exploration

For the technical build, I used the Gemini CLI tool. It works inside your terminal with the full context of your current project directory. I prefer it over Cursor because Gemini's free tier limits are very generous, and the quality of responses is decent, often on par with other leading LLMs. The goal was to first identify the various approaches I could take to build the project. In software terms, this is the "exploration phase", where an engineer explores multiple ways to build a solution that meets the requirements. While there are multiple ways to reach the same solution, you should pick the one that aligns with your skill set. To begin, I loaded the Requirements.MD file generated earlier into my Gemini agent's memory and started instructing it to plan a potential solution.

Screenshot of the Gemini CLI interface on a colorful gradient background. The header reads "Gemini CLI - LLM in your Terminal." The interface provides tips for getting started and a prompt for typing messages or file paths. An update notification is displayed.

Using AI without defining appropriate constraints is like throwing a dart blindfolded and hoping it hits the board. You may not know everything upfront, but telling the AI as much as you do know is still helpful. For example, when discussing the journaling tool, I specified my preferred tech stack for frontend and backend development, along with my preference for relational databases over non-relational ones. This makes the output more deterministic.

The entire conversation felt like a discussion with a fellow software engineer: what you will need to make the project deployable, which third-party dependencies to pull in, and so on. I deliberately spent longer discussing approaches instead of jumping straight into coding, to prevent issues at a later stage.

Another critical aspect is not trusting AI blindly. Whenever it suggested something I was not aware of, I first explored it myself before giving it the go-ahead. For example, there were two approaches I could take to build this tool:

  1. Build a self-hosted journaling platform and generate insights on top of journal entries created in my app, taking on the overhead of storing and maintaining user entries.

  2. Build an integration with a public docs tool (like Notion), allowing users to connect their existing Notion account and letting the app securely read their entries.

The second approach removes the overhead of maintaining journal entries on my end. When Gemini suggested the Notion integration, I was not sure it even existed, so I did a small POC to confirm the integration served my purpose.
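The POC amounted to very little code. Here is a minimal sketch of it, assuming the official notion-client Python SDK and an internal integration token (the variable names are placeholders, not from the actual project):

```python
# Minimal POC: can I read journal pages from a connected Notion workspace?
import os

from notion_client import Client

notion = Client(auth=os.environ["NOTION_TOKEN"])

# Find pages the integration has been granted access to.
pages = notion.search(filter={"property": "object", "value": "page"})["results"]

for page in pages[:3]:
    # Read the page body block by block and pull out the plain text.
    blocks = notion.blocks.children.list(block_id=page["id"])["results"]
    for block in blocks:
        block_type = block["type"]  # e.g. "paragraph", "heading_1"
        rich_text = block.get(block_type, {}).get("rich_text", [])
        text = "".join(part["plain_text"] for part in rich_text)
        if text:
            print(text)
```

A few minutes of running this against a test workspace was enough to confirm the integration could do what I needed.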

Once I was satisfied with the chosen approach, I used AI to break the work into subtasks, clearly defining the requirements and approach for each feature in a markdown file. I split this into two parts: frontend and backend.

A screenshot of a project structure for "emotional-support-app." It includes a "backend" directory with Python files for a FastAPI application and a "frontend" directory with directories and a JSON file for a React application. There is also a README file.

This way, I could spin up two agents, each working on a completely different set of tasks. It lets you pass domain-specific instructions to each agent, which improves the quality of the output. Tackling multiple things in the same session is generally not advised because it overloads the agent's context. Just like humans, they struggle with multitasking, I guess :)

Step 3: Building the prototype - backend

Since the backend interests me more than the frontend, I started with the backend part of the project. I cloned a template FastAPI project and loaded the Requirements.MD into my Gemini CLI agent. Then, like a senior engineer, I drafted a crisp, clear prompt explaining what needed to be done and how. Doing everything at once would have been disastrous, so I started with small tasks and gradually built up the entire project.

We started by integrating the Notion SDK and setting up a dummy route to test it. I sipped my coffee as the agent worked in the background. I am not a fan of running agents in dangerous mode and prefer reviewing every code change. Although it takes more time, it lets me supervise the agent's work in detail and ensure it does not go off track. Once the integration was done, we gradually moved on to setting up the database.
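The dummy route was roughly this shape (a sketch, not the project's actual code; the route path and token handling are illustrative):

```python
# A throwaway FastAPI route to sanity-check the Notion integration end to end.
import os

from fastapi import FastAPI, HTTPException
from notion_client import Client

app = FastAPI()
notion = Client(auth=os.environ["NOTION_TOKEN"])


@app.get("/notion/health")
def notion_health():
    try:
        me = notion.users.me()  # cheapest call that proves the token works
    except Exception as exc:
        raise HTTPException(status_code=502, detail=str(exc))
    return {"connected": True, "bot_name": me.get("name")}
```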

Vibe coding is nice, but prior experience makes the outcome more predictable. The AI doesn't always suggest the best solution. In my case, on the first try at setting up the database, Gemini used raw Python DB drivers, querying the database through cursors. That approach has been out of favor ever since the introduction of ORMs (Object Relational Mappers), which abstract away the query writing. My prior experience helped me notice the inconsistency and suggest using Tortoise ORM instead. Once corrected, Gemini migrated all the existing flows within a couple of minutes, something that would have taken me hours.
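For contrast, here is a minimal sketch of the Tortoise style Gemini migrated to. The model and fields below are invented for illustration, not taken from the real codebase:

```python
# Tortoise ORM replaces hand-written SQL and cursor management with model calls.
from tortoise import Tortoise, fields, run_async
from tortoise.models import Model


class JournalEntry(Model):
    id = fields.IntField(pk=True)
    notion_page_id = fields.CharField(max_length=64, unique=True)
    synced_at = fields.DatetimeField(auto_now=True)


async def main():
    await Tortoise.init(
        db_url="sqlite://db.sqlite3",  # placeholder; a real app would use Postgres
        modules={"models": ["__main__"]},
    )
    await Tortoise.generate_schemas()

    # No cursors, no raw SQL: the ORM handles querying and mapping.
    entry, created = await JournalEntry.get_or_create(notion_page_id="abc123")
    print(created, await JournalEntry.all().count())


run_async(main())
```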

Before I hit the daily quota limits for the Gemini model, we had set up half of the project and tested the critical flows: fetching data from Notion and generating AI insights from the fetched content.
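If you're curious what the insight-generation piece can look like, here is a minimal sketch. It assumes Google's google-generativeai SDK and a made-up prompt; the actual model and prompt in JurnAI differ from this illustration:

```python
# Sketch of a "journal entry in, morning insight out" flow.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")


def morning_insight(journal_text: str) -> str:
    """Turn last night's journal entry into blunt, caring morning advice."""
    prompt = (
        "You are a blunt but caring friend. Read last night's journal "
        "entry and reply with short, honest, encouraging advice.\n\n"
        + journal_text
    )
    return model.generate_content(prompt).text
```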

Step 4: Building the website

Back in college, I worked on frontend a lot, designing beautiful web pages and building them with all those animations and gradients. But over the past year, I had lost touch with it. I also always felt there was a lot of repetitive work involved in building UI components. The boom of AI tools in this space was promising enough to give them a try.

A screengrab showing the templates section on the v0 website

I asked a couple of experienced frontend friends, and v0 came up as the common suggestion for building landing pages. They were not wrong. v0 already had a large catalogue of community-contributed templates, which was enough for my use case. I selected one and refined it in the v0 online editor, prompting it with what I was trying to build along with the color theme I had in mind.

Mockup of a website with a purple gradient background, featuring a journaling app called "MindfulPages." The site highlights features like AI-powered journaling, motivational messages, and connectivity with Notion.

Within a few minutes, the first draft was ready. It looked promising, but it wasn't there yet; roughly a 60% match with what I had in mind. After a point, follow-up prompts to v0 stopped yielding improvements, so I cloned the website in its current state locally and continued building it with Gemini.

As far as I'm concerned, Gemini did a great job of understanding the project and implementing the changes I had in mind. I ran into an issue with a Node version mismatch while running the Next.js website locally, and Gemini aptly recognised the problem and suggested a fix. I was genuinely amazed that all my UI suggestions were just a prompt away.

Removing some sections, making others responsive on mobile, tinkering with background gradients: all done within a couple of minutes. This let me focus on the content I wanted to put in rather than spend my energy on designing the website. Beyond the components themselves, the Gemini agent also acted as my personal SEO expert, suggesting SEO-focused meta tags, helping me add OpenGraph details, and generating a sitemap file. It also helped refine the content, fixing grammatical mistakes and occasionally suggesting better words to convey the same thing. It was like having a frontend developer, a content writer, and an SEO expert by your side, closing important tasks with a couple of prompts.

Pros (of using v0): a good enough template to build upon and a much shorter turnaround time for a landing page.
Cons: saturated designs; it is difficult to stand out when everyone is using the same approach.

Step 5: Deployment - Where AI Reached Its Limits

After a couple of iterations, I had the frontend and backend integrated and working together locally. Now it was time to deploy everything online and test the entire flow end-to-end securely. Here, I could not utilize AI to its fullest. I had to hop between multiple service providers, create accounts, and configure them to read from my code repository. Every provider has its own steps, some of them time-consuming. One-click deployments are cool, but they can be expensive and may not offer every feature you need. Bare-metal server providers may be cheaper, but they come with a steep learning curve. I preferred a hybrid approach.

I deployed the frontend on Vercel, and the site was up and running within a few minutes. Deploying the backend was trickier. In my experience, deploying a Python application has always been a pain, mostly in figuring out a compatible startup command (for FastAPI, that usually ends up being some variant of gunicorn running uvicorn workers). I went with Azure App Service, since I had worked with it before and was familiar with the Azure dashboard.

The Copilot integrated into the Azure dashboard was of no use to me: it could not correctly identify the issues I ran into while deploying. Online tools like GPT were not up to the mark either. In one scenario, I needed to retain my Linux web app logs to analyze how the app was behaving. The Azure dashboard, by default, only shows a live log trail. When I asked GPT, it said retaining logs for a Linux app was not possible. A quick Google search, however, landed me on a blog by the Azure team explaining how to download a daily dump of my container logs, which was exactly what I needed. The takeaway: AI is still not great when you have to work with the internal tools of third-party service providers, because its knowledge of that context is limited.

After setting up the continuous deployment pipelines, I purchased a domain and configured the required DNS records. Here, I used AI as a knowledge partner to understand the significance of the different record types (for example, an A record points your domain to a server's IP address, while a CNAME aliases one hostname to another) and the role of name servers. If not for AI, I may never have put in the effort to search and learn more. AI makes such impromptu learning sessions easy.

Step 6: Marketing Your SaaS

Once I was confident the product worked as expected, it was time to market it to the world and onboard real users. If you have read this far, you already know how strong (or weak) my content game is. This time, I switched to GPT-4, explained my project, and drafted a marketing strategy. Here is how I executed it:

  1. Long Format posts

    1. Ideal for: LinkedIn, Reddit, Hacker News

    2. Tips: Draft an initial message and then refine it using AI. Don’t forget the human touch. It is very easy to identify AI slop, especially if you are posting on Reddit.

  2. Short AI-generated ads/Videos

    1. Ideal for: Instagram, X, Short Attention Span Platforms

    2. Tips: Veo3, along with basic video editing, does a pretty good job. Low effort, high output.

  3. Brand Story

    1. Ideal for: Product Hunt, Peerlist, Community Targeted Platforms

    2. Tips: Don’t use too much AI here. Take time to figure out what your product is about, then identify its USP. Use AI to deep-research your competitors or to back your claims with figures. These platforms are the best for getting genuine feedback on your project and onboarding early adopters.

  4. Shit Posting

    1. Ideal for: X, Reddit

    2. Tips: AI can never match humans at shitposting, so post as you feel like. Reddit helped me the most in driving initial traffic to my website, but the conversion rate was trash. However, blunt criticism in some Reddit communities actually helped me improve the product before scaling it further.

All in all, when marketing, use AI as a sidekick for brainstorming. Copy-pasting AI-generated content does more harm than good. I may be wrong, but AI-generated content is very easy to spot, and people don't interact much with that kind of polished content. Keep it raw instead.

Step 7: Maintaining the Momentum

As a solo developer, building a project is the fun part; keeping the momentum going takes more effort. It requires thinking from the user's perspective to keep adding new features, and an eye for good engineering practices to spot optimizations that make the system more resilient. And after doing all this, you watch the site visits tank and start doubting whether the effort is worth it, or whether you should focus on building distribution instead. I was in exactly this situation. To overcome it, I started maintaining a personal project tracker, dividing every requirement into three sections: Product, Engineering, and Marketing.

Every time something new popped into my head, I added it to the list. I also kept a section for the most important items to execute immediately; items from the three sections progressively moved into it. This ensured equal, balanced growth of the product in all directions.

With the help of AI, I only had to come up with the ideas, since executing them did not take much time. It was because of AI that I had enough time to segregate my requirements and work on them in an organised manner. Otherwise, I would have been so occupied fixing things that by the time I was free, I would have lost interest in adding any new features.

Step 8: The End

With this, I had built my first SaaS product, now used daily by people across the globe to make their mornings happier. This journey proved to me that AI isn't a replacement for the developer; it's a force multiplier. It's the tireless junior dev, the brainstorming partner, and the marketing assistant that lets a single person achieve what once took a team. Isn't it wild that you can test your crazy ideas in such a short span of time without depending on anyone else?

While I loved the overall experience of building with AI, the critical part is maintaining a balance and using it efficiently. I have seen criticism that vibe coding is only good for MVPs and not for serious projects. But in the hands of an experienced person, I think you can build wonders with AI at an exponentially faster speed.

What's your take? How are you using AI in your workflow? Let me know in the comments. If you are reading this, please don’t forget to drop a like. It gives me motivation to put in more effort and continue sharing stuff with you all.


Written by

Lakshay Gupta

My friends used to tease me for writing long messages while texting. I thought, why not benefit from it? I have led communities, run startups, built viral apps, and made YouTube vlogs. Yes, I am an engineer.