I Built 2 FastAPI Projects to Compare SQLAlchemy and JSON. Here’s What I Found

Harshita Sharma

As part of my journey to become a GenAI Engineer, I'm learning several key backend technologies, and one of them is FastAPI, a modern, fast (and yes, “fast” is in the name!) Python framework for building APIs. While exploring FastAPI, I quickly came across two more essential tools that work hand in hand with it: SQLAlchemy and Pydantic.

Every time an API sends or receives data, it has to handle that data properly: storing it in a database, and validating it on the way in and out. That’s where SQLAlchemy, a Python ORM (Object Relational Mapper), helps, by letting us interact with relational databases using Python classes instead of raw SQL. Pydantic, on the other hand, lets us define and validate the structure of data using Python models, making it perfect for handling API requests and responses cleanly.

To better understand how these tools work together in practice, I built two small FastAPI projects: one that stores data using SQLAlchemy, and another that stores it in a plain JSON file. Through this hands-on comparison, I noticed some interesting differences in how data is handled, updated, and validated.

In this article, I won’t dive deep into explaining each technology from scratch. Instead, I’ll focus on sharing my real-world observations and what I learned while building and comparing these two approaches. My goal is to highlight practical differences and hopefully make your learning curve smoother if you’re exploring the same stack.

Quick overview of both projects

Project 1: Grocery Stock Alert System

Small business owners often struggle to keep track of inventory manually. This simple system helps monitor stock levels and alert them when products run low. I used SQLAlchemy for structured data storage and defined two database models to represent the system. In a real-world implementation, this would include more complex tables and relationships but for this project, I kept it intentionally simple to focus on understanding how models and relationships work in FastAPI with SQLAlchemy.

from sqlalchemy import Column, Integer, String, Float, ForeignKey
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# stores details of all products
class Products(Base):
  __tablename__ = 'product_details'
  product_id = Column(Integer, primary_key=True)
  product_name = Column(String, nullable=False, unique=True)
  price_per_unit = Column(Float, nullable=False)

# stores stock information for each product
class Stocks(Base):
  __tablename__ = 'stock_details'
  product_id = Column(Integer, ForeignKey('product_details.product_id'), primary_key=True)
  available_in_stock = Column(Integer, nullable=False)
  threshold = Column(Integer, nullable=False)

The data format is validated using Pydantic schemas, and I created separate endpoints for different operations like adding products, checking stock, and updating quantities.

from typing import Annotated
from pydantic import BaseModel, Field

class Product(BaseModel):
  product_name: Annotated[str, Field(..., description='Product name')]
  price_per_unit: Annotated[float, Field(..., description='Price per unit of the product')]
  stock_amount: Annotated[int, Field(..., description='Available stock of the product', gt=0)]
  threshold: Annotated[int, Field(..., description='Available stock should always stay above this threshold', gt=0)]

# ---------------- GET ALL ITEMS
@app.get('/items')

# --------------- ADD A NEW PRODUCT
@app.post('/items')

# ------------ SHOW ITEMS WITH LOW THRESHOLD
# lists all items whose quantity is below the given threshold value
@app.get('/items/low-stock')

# -------------- UPDATE AN ITEM
@app.put('/items/{item_id}')

# --------------- DELETE AN ITEM
@app.delete('/items/{item_id}')

Project 2: Patient Management System

To simplify the process of maintaining patient records, I built this system using a basic local .json file for data storage. Each patient record is stored under a unique patient ID, and the overall structure is kept flat and easy to update. This approach is lightweight and quick to implement, making it ideal for quick tests or very small-scale usage. I designed endpoints to perform CRUD operations (adding, updating, viewing, and deleting patient records) using FastAPI, with Pydantic for validation.

from typing import Annotated, Literal
from pydantic import BaseModel, Field, computed_field

# ----------- EQUIVALENT PYDANTIC MODEL, for storing patient info
class Patient(BaseModel):
  id: Annotated[str, Field(..., description='ID of the patient', examples=['P001'])]
  name: Annotated[str, Field(..., description='Name of the patient', examples=['Harshita Sharma'])]
  city: Annotated[str, Field(..., description='Name of the city the patient lives in', examples=['Mumbai', 'Jaipur'])]
  age: Annotated[int, Field(..., description='Age of the patient', gt=0)]
  gender: Annotated[Literal['male', 'female', 'other'], Field(..., description='Gender of the patient')]

  height: Annotated[float, Field(..., description='Height of the patient', gt=0)]
  weight: Annotated[float, Field(..., description='Weight of the patient', gt=0)]

  # computed field -> bmi
  @computed_field
  @property
  def bmi(self) -> float:
    ...

  # computed field -> verdict
  @computed_field
  @property
  def verdict(self) -> str:
    ...
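The bodies of the two computed fields are elided above. As a rough sketch of what they could look like, here is a stand-in class (`BodyMetrics` is my name, not the project's) using the standard WHO BMI bands, which happen to match the sample data below (33.06 → Obese, 27.76 → Overweight); check the repo for the actual logic:

```python
from pydantic import BaseModel, computed_field

class BodyMetrics(BaseModel):
    height: float  # metres
    weight: float  # kilograms

    @computed_field
    @property
    def bmi(self) -> float:
        # weight (kg) divided by height (m) squared, rounded to 2 places
        return round(self.weight / self.height ** 2, 2)

    @computed_field
    @property
    def verdict(self) -> str:
        # standard WHO-style BMI bands (my assumption)
        if self.bmi < 18.5:
            return 'Underweight'
        if self.bmi < 25:
            return 'Normal'
        if self.bmi < 30:
            return 'Overweight'
        return 'Obese'

m = BodyMetrics(height=1.65, weight=90.0)
print(m.bmi, m.verdict)  # -> 33.06 Obese
```

Because both fields are decorated with `@computed_field`, they are recalculated from `height` and `weight` on every access and are included automatically in `model_dump()` output.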

The data stored in the JSON file looks something like this, where the key is the patient ID itself:

{
    "P001": {
        "name": "Ananya Sharma",
        "city": "Guwahati",
        "age": 28,
        "gender": "female",
        "height": 1.65,
        "weight": 90.0,
        "bmi": 33.06,
        "verdict": "Obese"
    },
    "P002": {
        "name": "Ravi Mehta",
        "city": "Mumbai",
        "age": 35,
        "gender": "male",
        "height": 1.75,
        "weight": 85.0,
        "bmi": 27.76,
        "verdict": "Overweight"
    }
}

The following endpoints have been developed:

# -------- to view all patients information
@app.get('/patient')

# -------- to view one patients data
@app.get('/patient/{patient_id}')

# --------- to add a new patient
@app.post('/patient')

# --------- view sorted patients list
@app.get('/sort')

# ---------- update patient
@app.put('/patient/{patient_id}')

# --------- delete patient
@app.delete('/patient/{patient_id}')

Key differences

  1. Adding a new entry is a little tricky in SQLAlchemy

    In the Grocery Stock Alert System, SQLAlchemy is used to manage data persistence. When adding a new product, its stock entry must also be created. This requires instantiating the SQLAlchemy model, setting all its fields manually, using session.add(), and then committing the session:

     def create_item(new_product: Product):
       # check whether the product we are trying to add already exists
       data = session.query(Products).filter_by(product_name=new_product.product_name).first()

       if data:
         raise HTTPException(status_code=409, detail='A product with this name already exists.')

       new_entry = Products(
         product_name=new_product.product_name,
         price_per_unit=new_product.price_per_unit
       )
       session.add(new_entry)
       session.commit()


    For a first-time user, this process feels more verbose and complex. However, it becomes straightforward with practice.

    In contrast, the Patient Management System uses a simple .json file for storage. Adding a new entry is as easy as reading the file, appending the new record to the data structure, and saving it back.

     def add_patient(patient: Patient):
       # load_data(): helper that loads the data from the JSON file
       data = load_data()

       if patient.id in data:
         raise HTTPException(status_code=409, detail='A patient with this id already exists.')

       # patient is a Pydantic object -> convert it into a dict before writing to JSON
       data[patient.id] = patient.model_dump(exclude={'id'})

       save_data(data)
       return JSONResponse(content='Patient added successfully.')
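The snippet above leans on two helpers, load_data() and save_data(), that aren't shown. A minimal sketch of what they might look like (the filename patients.json is my assumption):

```python
import json

DATA_FILE = 'patients.json'  # assumed filename

def load_data() -> dict:
    # return an empty store on the first run instead of crashing
    try:
        with open(DATA_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def save_data(data: dict) -> None:
    with open(DATA_FILE, 'w') as f:
        json.dump(data, f, indent=4)
```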
    
  2. Manual Update steps are required in JSON File Storage

    Updating records in SQLAlchemy is relatively streamlined. You query the existing object, use Python functions like setattr() to modify attributes, and commit the session. SQLAlchemy automatically maps and updates the underlying table.

    In the Patient Management System, updating records involves more manual work:

    • Load the JSON file

    • Locate the patient by ID

    • Modify the corresponding fields

    • Reformat the structure

    • Save the updated data back to the file

This manual flow introduces more room for error and is harder to scale.
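The manual flow above can be sketched as a small helper operating on the loaded dict (the function name and error handling here are illustrative, not the project's actual code):

```python
def apply_patient_update(data: dict, patient_id: str, updates: dict) -> dict:
    # mirror the manual steps: locate the record, modify it, recompute, return for saving
    if patient_id not in data:
        raise KeyError(f'No patient with id {patient_id}')
    record = {**data[patient_id], **updates}
    # derived fields must be recomputed by hand, or they silently go stale
    record['bmi'] = round(record['weight'] / record['height'] ** 2, 2)
    data[patient_id] = record
    return data

store = {'P001': {'name': 'Ananya Sharma', 'height': 1.65, 'weight': 90.0, 'bmi': 33.06}}
store = apply_patient_update(store, 'P001', {'weight': 70.0})
print(store['P001']['bmi'])  # -> 25.71
```

The stale-BMI hazard is exactly the kind of error the database-plus-ORM route makes harder to commit.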

  3. PUT operations should use separate Pydantic models with optional fields

    Whether using SQLAlchemy or JSON, PUT operations (full updates) require more flexibility in validation.

    By default, your primary Pydantic model likely marks all fields as required but that’s not ideal when a user wants to update only one or two fields.

    To fix this, define a secondary Pydantic model for updates, where all fields are optional. This allows partial updates without forcing users to provide the entire payload.

    This approach enhances UX and prevents unnecessary frustration, and it applies regardless of the storage backend. Example:

     from typing import Optional
     from pydantic import BaseModel

     # For POST
     class ProductCreate(BaseModel):
         name: str
         quantity: int

     # For PUT
     class ProductUpdate(BaseModel):
         name: Optional[str] = None
         quantity: Optional[int] = None
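On the receiving side, Pydantic's exclude_unset option makes partial updates easy to apply: it drops every field the client never sent. A small sketch, assuming Pydantic v2 and a stored record held as a plain dict:

```python
from typing import Optional
from pydantic import BaseModel

class ProductUpdate(BaseModel):
    name: Optional[str] = None
    quantity: Optional[int] = None

stored = {'name': 'Rice', 'quantity': 10}
update = ProductUpdate(quantity=25)
# exclude_unset keeps only the fields the client actually sent,
# so untouched values in the stored record survive
stored.update(update.model_dump(exclude_unset=True))
print(stored)  # -> {'name': 'Rice', 'quantity': 25}
```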
    
  4. When using SQLAlchemy, be mindful of table relationships

    In the Grocery Stock Alert System, one issue I faced was adding a new product and simultaneously inserting a stock entry linked to that product.

    Initially, I tried to add both entries in one go. But I encountered an error: the stock entry’s product_id pointed at a product that didn’t exist in the product table yet.

    This happens because the product’s ID isn’t created until the session is committed. So, SQLAlchemy doesn't recognize it as a valid reference when you're inserting the stock record. An example of this is shown below:

     def create_item(new_product: Product):
       data = session.query(Products).filter_by(product_name=new_product.product_name).first()

       if data:
         raise HTTPException(status_code=409, detail='A product with this name already exists.')

       new_entry = Products(
         product_name=new_product.product_name,
         price_per_unit=new_product.price_per_unit
       )
       session.add(new_entry)

       # fails here: the product has no committed id yet,
       # so this foreign key does not point at an existing row
       new_stock_entry = Stocks(
         product_id=new_product.product_id,
         available_in_stock=new_product.stock_amount,
         threshold=new_product.threshold
       )
       session.add(new_stock_entry)
       session.commit()

       return JSONResponse(status_code=201, content=f'New product {new_entry.product_name} added successfully.')

    

    Correct approach:

    • First, create and commit the product

    • Then use the returned object (with a valid id) to create the related stock entry

    def create_item(new_product: Product):
        ...........
        new_entry = Products(
            product_name=new_product.product_name,
            price_per_unit=new_product.price_per_unit
        )
        session.add(new_entry)
        # commit first so the product (and its generated id)
        # is permanently stored in the database
        session.commit()

        new_stock_entry = Stocks(
            product_id=new_entry.product_id,
            available_in_stock=new_product.stock_amount,
            threshold=new_product.threshold
        )
        session.add(new_stock_entry)
        session.commit()

Understanding this order of operations is crucial when dealing with foreign key constraints and relationships.
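A closely related alternative, in case two separate commits feel heavy-handed: session.flush() sends the INSERT and populates the primary key while keeping the transaction open, so both rows commit (or roll back) together. A self-contained sketch using an in-memory SQLite database (table names match the project; the sample values are mine):

```python
from sqlalchemy import Column, Integer, String, Float, ForeignKey, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Products(Base):
    __tablename__ = 'product_details'
    product_id = Column(Integer, primary_key=True)
    product_name = Column(String, nullable=False, unique=True)
    price_per_unit = Column(Float, nullable=False)

class Stocks(Base):
    __tablename__ = 'stock_details'
    product_id = Column(Integer, ForeignKey('product_details.product_id'), primary_key=True)
    available_in_stock = Column(Integer, nullable=False)
    threshold = Column(Integer, nullable=False)

engine = create_engine('sqlite://')  # throwaway in-memory database
Base.metadata.create_all(engine)

with Session(engine) as session:
    product = Products(product_name='Rice', price_per_unit=55.0)
    session.add(product)
    session.flush()  # INSERT runs, product_id is assigned, transaction stays open
    session.add(Stocks(product_id=product.product_id,
                       available_in_stock=40, threshold=10))
    session.commit()  # both rows persist together
    pid = product.product_id
print(pid)  # -> 1
```

With commit-then-insert, a failure while adding the stock row leaves an orphaned product behind; with flush, the whole pair rolls back as one unit.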

What I learned

  1. You can’t write backend code mindlessly - every small step matters.
    When building the update endpoint for the Patient Management System using JSON, I realized how critical it is to plan each step precisely. For example, updating nested or computed fields (like BMI) requires not just replacing values, but also carefully recalculating and validating them. (Have a look at the repo to understand the project) If I didn’t handle even one edge case, the entire data structure could break. Writing code like this is not just about syntax, it’s about having a clear mental checklist of what the logic needs to accomplish, step by step.

  2. A strong grasp of Python fundamentals is crucial - not optional.
    In one of the endpoints, I had to sort patient records by height, weight, or BMI. That required using sorted() and lambda functions effectively. These aren’t complex tools, but they’re essential, and having confidence with core Python really helped me focus on problem-solving rather than googling every second line.

  3. JSON is fast and simple, but it comes with long-term trade-offs.
    I loved how quickly I could get things working with .json file storage. It’s great for prototypes or learning. But I also saw how fragile and manual it becomes when the app grows, especially for updates, error handling, or data consistency. It made me appreciate why databases are still the default in production.

  4. SQLAlchemy and Pydantic are tough to learn together, but worth it.
    At first, the differences between a SQLAlchemy model and a Pydantic schema were confusing. They look similar, but behave differently, especially when validating input vs. saving to the database. But as I worked through more endpoints, I started understanding where each tool fits, and how they work together.
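For instance, the sorting mentioned in point 2 boils down to sorted() with a lambda picking the requested field (sample data taken from the JSON example earlier; the sort_by variable stands in for a query parameter):

```python
patients = {
    'P001': {'name': 'Ananya Sharma', 'height': 1.65, 'weight': 90.0, 'bmi': 33.06},
    'P002': {'name': 'Ravi Mehta', 'height': 1.75, 'weight': 85.0, 'bmi': 27.76},
}

sort_by = 'bmi'  # could also be 'height' or 'weight'
ranked = sorted(patients.values(), key=lambda p: p[sort_by], reverse=True)
print([p['name'] for p in ranked])  # -> ['Ananya Sharma', 'Ravi Mehta']
```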

Conclusion

Tutorials helped, but these mini-projects taught me way more. I now understand why things like data validation, schema design, and storage architecture actually matter, because I’ve had to debug and fix them myself.

If you're also exploring FastAPI, here’s the CampusX FastAPI tutorial that I am following. It’s one of the best beginner-friendly resources out there. I also recommend their video on Pydantic for a solid conceptual foundation.

Github repo of the project: Mini projects in FastAPI, Pydantic and SQLAlchemy

Feedbacks and suggestions are always welcome. Do let me know if you’ve faced the same problems when you first started learning FastAPI.
