Solving the N+1 Query Bottleneck: A Practical Guide with Go & SQL

As a backend engineer, my goal isn't just to build features that work, but to make them fast and scalable. One of the most frequent performance issues I've encountered, and one I see a lot of developers run into, is the N+1 query problem.
It's a subtle bug that can cripple an application's performance. In this post, I'll break down what this problem is, show you its real-world impact from my own experience, and walk you through the exact solution I use with Go and SQL.
The Problem: A Real-World Scenario
Let me set the scene with an example I ran into while working on an e-commerce application. I was building a feature to display a user's order history. Each order was tied to a specific product.
My initial code fetched the user's 50 most recent orders. Then, to get the product name for each order, my code made another database trip for every single one. I quickly realized my mistake:
1 query to fetch the 50 orders.
50 queries (one for each order) to get the product details.
I was making 51 total database queries just to render one page. This is the classic N+1 problem. While it didn't seem like a big deal on my local machine with a few test orders, it scaled terribly in the real world.
The Real-World Impact I've Seen
I learned the hard way that this isn't just a theoretical issue; it has tangible consequences.
High Latency & Poor User Experience: I've seen API endpoints with N+1 bugs that were noticeably slow. Whether it’s a social media feed fetching user info for 100 posts or a dashboard loading metadata for 50 items, the end-user feels that delay.
Increased Database Load: A database is a powerful tool, but it can get overwhelmed by thousands of tiny, repetitive queries. This N+1 pattern increases CPU load, eats up database connections, and can slow down the entire system.
Higher Operational Costs: A key lesson for me was that fixing the code is often far cheaper than throwing more powerful hardware at the problem. To compensate for the N+1 inefficiency, you might be forced to scale up your database instances, which directly increases infrastructure costs.
Identifying the Problem in My Go Code
Here’s how this problem typically showed up in my Go code. First, let's define the data structures:
type Post struct {
    ID       int
    Title    string
    AuthorID int
}

type Author struct {
    ID   int
    Name string
}
Now, here is the kind of inefficient function I used to write, which makes a new query inside the loop.
// The inefficient N+1 approach
// (assumes `database/sql` and `fmt` are imported; error handling omitted for brevity)
func getPostsAndAuthors_N_Plus_1(db *sql.DB) {
    // 1. The "1" query: Fetch all posts
    rows, _ := db.Query("SELECT id, title, author_id FROM posts")
    defer rows.Close()

    var posts []Post
    for rows.Next() {
        var p Post
        rows.Scan(&p.ID, &p.Title, &p.AuthorID)
        posts = append(posts, p)
    }

    // 2. The "N" queries: Fetch the author for each post
    for _, p := range posts {
        var authorName string
        // This query runs inside the loop for every single post!
        db.QueryRow("SELECT name FROM authors WHERE id = ?", p.AuthorID).Scan(&authorName)
        fmt.Printf("post: %s, author: %s\n", p.Title, authorName)
    }
}
When I checked my SQL logs, I'd see this inefficient and alarming pattern:
SELECT id, title, author_id FROM posts;
SELECT name FROM authors WHERE id = 101;
SELECT name FROM authors WHERE id = 102;
SELECT name FROM authors WHERE id = 101;
-- ... and so on for every post.
The Solution I Now Use: Eager Loading
The solution I now use to fix this is called eager loading. Instead of fetching related data in a loop, I fetch it all upfront in a more intelligent way: I get the posts, collect all the AuthorIDs I need, and then fetch all the authors in a single, second query using an IN clause.
Here’s what my refactored, more efficient code looks like:
// The efficient eager loading approach
func getPostsAndAuthors_Optimized(db *sql.DB) {
    // 1. Still fetch all posts
    rows, _ := db.Query("SELECT id, title, author_id FROM posts")
    defer rows.Close()

    var posts []Post
    // Collect all the unique author IDs we will need
    authorIDs := make(map[int]bool)
    for rows.Next() {
        var p Post
        rows.Scan(&p.ID, &p.Title, &p.AuthorID)
        posts = append(posts, p)
        authorIDs[p.AuthorID] = true
    }

    // Create a map to easily look up authors by their ID
    authorMap := make(map[int]string)

    // 2. Fetch all required authors in a SINGLE second query
    // ... code to build the IN clause and run the query (sketched out below) ...

    // 3. Now, combine the data in memory — no more database calls!
    for _, p := range posts {
        authorName := authorMap[p.AuthorID]
        fmt.Printf("post: %s, author: %s\n", p.Title, authorName)
    }
}
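The elided step 2 is where the single IN query happens. Here's a minimal sketch of how it could look, pulled out into a standalone helper. fetchAuthorNames is just a name I made up for illustration; the sketch assumes MySQL-style ? placeholders (Postgres uses $1, $2, ...), needs the standard strings package imported, and skips error handling to stay short.

// fetchAuthorNames is a hypothetical helper that fills in step 2 above.
// It builds the IN clause dynamically, assumes MySQL-style `?` placeholders,
// and skips error handling for brevity.
func fetchAuthorNames(db *sql.DB, authorIDs map[int]bool) map[int]string {
    authorMap := make(map[int]string)
    if len(authorIDs) == 0 {
        return authorMap // nothing to fetch
    }

    // Turn the set of IDs into query arguments and a matching list of placeholders
    ids := make([]interface{}, 0, len(authorIDs))
    placeholders := make([]string, 0, len(authorIDs))
    for id := range authorIDs {
        ids = append(ids, id)
        placeholders = append(placeholders, "?")
    }

    // One query for every author we need, e.g. ... WHERE id IN (?, ?, ?)
    query := "SELECT id, name FROM authors WHERE id IN (" + strings.Join(placeholders, ", ") + ")"
    rows, _ := db.Query(query, ids...)
    defer rows.Close()

    for rows.Next() {
        var id int
        var name string
        rows.Scan(&id, &name)
        authorMap[id] = name
    }
    return authorMap
}

With a helper like that, step 2 in getPostsAndAuthors_Optimized collapses to a single line, authorMap := fetchAuthorNames(db, authorIDs), replacing the authorMap declaration above.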
This revised code results in just two highly efficient queries:
-- Query 1
SELECT id, title, author_id FROM posts;
-- Query 2
SELECT id, name FROM authors WHERE id IN (101, 102, ...);
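If you want to run the example end to end, here's roughly how I'd wire it up locally. Treat the driver import and the DSN below as placeholder assumptions (I'm using github.com/go-sql-driver/mysql purely for illustration); point it at whatever database actually holds your posts and authors tables.

package main

import (
    "database/sql"
    "log"

    _ "github.com/go-sql-driver/mysql" // assumed driver, swap in your own
)

func main() {
    // Placeholder DSN: adjust the user, password, host, and database name.
    db, err := sql.Open("mysql", "user:password@tcp(localhost:3306)/blog")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    getPostsAndAuthors_Optimized(db)
}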
My Key Takeaway
When I fetch a list of items now, I always ask myself: "Will I need related data for each item in this list?" If the answer is yes, I know it's time to use eager loading. This simple shift in mindset has been fundamental for me in writing high-performance, professional-grade backend services. I hope this breakdown helps you too!