How to Implement In-Memory Caching in GoLang

Andrew David
9 min read

The Why?

I recently worked on Airtable OAuth authentication for a project and needed a place to store the state and code challenge for later request validation. The obvious solution is a third-party store like Redis (which I'm a big fan of), but I wanted to try out in-memory caching. Yes, I reviewed other tools like Memcached, but am I not an engineer?

The What?

When you start your Go app, Python app, or Node app, you can store certain data in the process, making it available throughout the program's lifetime. This data lives in the RAM (Random Access Memory), making it easily accessible by the program whenever it needs it. Most caching systems like Redis also store their data in RAM, but unlike Redis, an in-memory cache is not persistent.

A simple in-memory cache typically maintains a key-value store in memory, tracks when each item should expire (using TTL — Time to Live), and optionally uses an LRU (Least Recently Used) policy to remove older items when a maximum size is reached. The cache runs in the same process as your application, making reads and writes extremely fast. However, because everything is stored in memory, all data is lost when the application stops.

Less talk, more code!!!

Before we code, let’s go over a few things I think you should know:

  • LRU (Least Recently Used) → My phone has a bunch of apps that I do not use, apps that have been on my phone the longest, occupying good space. LRU is an eviction policy that removes the least-recently-used item in a cache. Think of it as removing the app you haven’t opened in months to make room for the new game you want to install: you evict whichever item has gone unused the longest, regardless of how often it was accessed in the past.

  • LFU (Least Frequently Used) → Imagine this: your bag can only hold four books. If you get a new one, you'll need to remove a book to make room, and usually, you'll remove the book you read least often. The same thing applies to an in-memory cache. LFU is an eviction policy that ranks items based on how often they’re used, and when space runs out, it removes the least-frequently-used item to make room for new ones.

  • TTL (Time To Live) → This defines how long we want an item to stay in our cache. It's like setting an expiry date — once the TTL has elapsed, the item is considered stale and no longer valid. Any future attempts to get that item will fail or trigger a refresh. TTL is useful for keeping data fresh and automatically clearing out old or irrelevant entries without needing manual intervention or cleanup.

  • Cache Hit, Cache Miss → Whenever we request an item from the cache, a cache hit occurs if the item exists and is still valid. A cache miss, on the other hand, happens when the item is not found or has expired.

So now we code, right?

Well, not just yet. There are still a few things we need to talk about first, sorry :)

The Design

To build our cache, we’re going to keep things simple with just three components:

  • A map to store the items

  • The cache itself

  • A Janitor to clean up our mess

Now we code

Let’s start by defining a few very simple structs to hold all the data. After that, I’ll walk you through what each part represents.

// Cache Items
type Item struct {
    value    interface{}
    duration int64
}

// Main cache struct
type cache struct {
    items   map[string]Item
    mu      sync.RWMutex
    janitor *Janitor
}

// handles clean up
type Janitor struct {
    interval time.Duration
    stop     chan bool
}

The Item struct contains two fields: value and duration. Normally, you might use time.Time for expiration, but here duration is stored as an int64 nanosecond timestamp, a design choice to keep comparisons simple and cheap.

As for the Cache struct, you might notice that it's unexported (made private). This allows us to expose a global constructor to create the cache instance. The real reason behind this design, though, is that I implemented it using the singleton pattern—meaning I wanted to ensure that only one instance of the cache exists throughout the entire program.

One thing I may have forgotten to mention is that our cache needs to be thread-safe. This means it should prevent race conditions, where two processes or goroutines try to modify the state of the cache at the same time.

Go provides a solution for this through the use of a mutex—a locking mechanism that ensures only one thread or goroutine can access a resource at a time. By locking the cache during operations, we prevent data corruption and maintain consistency.

Another important component is the Janitor. This is a lightweight scheduler that periodically runs in the background to clean up expired items from the cache. It helps free up memory and makes room for newer items, keeping the cache lean and efficient.

Expanding Item

// Check if Item is expired
func (i Item) Expired() bool {
    if i.duration == 0 {
        return false
    }
    return time.Now().UnixNano() > i.duration
}

This method returns a bool indicating whether an item has expired by checking if its timestamp (Duration, remember) is less than the current time. But before doing that, it first checks if the Duration is 0. If it is, the method returns false, meaning the item should not expire.

This is intentional. Sometimes, you may not want certain items in the cache to ever expire—you want them to persist for as long as the program is running. In this design, any item with a Duration value of 0 is treated as having no expiration, and therefore is never evicted by the cleanup process.

Set

Now we can begin working on our cache. The first method we’re going to add is Set—this allows us to populate the cache with data that we can retrieve and use later.

// Store a value in cache with a defined ttl
func (c *cache) Set(key string, value interface{}, ttl time.Duration) {
    var d int64
    if ttl == 0 {
        ttl = DefaultDuration
    }
    if ttl > 0 {
        d = time.Now().Add(ttl).UnixNano()
    }
    c.mu.Lock() // writes need the exclusive lock, not the read lock
    defer c.mu.Unlock()
    c.items[key] = Item{
        value:    value,
        duration: d,
    }
}

The Set method stores a value in the cache under a specific key, along with an optional TTL (Time To Live). If no TTL is provided, and the DefaultDuration is 0, the item will be stored without expiration, meaning it will stay in the cache indefinitely unless manually removed.

If a TTL is provided (i.e. greater than zero), it is converted to a Unix timestamp (int64) for efficient comparison during retrieval or cleanup.

Finally, the method locks the cache for writing to ensure thread safety, then inserts the item into the cache map.

Let’s do Get, Delete, Len, you get the gist now

// Get an Item from store using its key
func (c *cache) Get(key string) (interface{}, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock() // release the read lock on every return path
    item, ok := c.items[key]
    if !ok {
        return nil, false
    }
    if item.Expired() {
        return nil, false
    }
    return item.value, true
}

// Delete an item from store by its key
func (c *cache) Delete(key string) bool {
    c.mu.Lock() // deleting mutates the map, so take the write lock
    defer c.mu.Unlock()
    if _, ok := c.items[key]; !ok {
        return false
    }
    delete(c.items, key)
    return true
}

// Return the length of items in store
func (c *cache) Len() int {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return len(c.items)
}

Alright, we’re almost done—I promise. The Get function retrieves an item from the cache using its key. First, it locks the cache for reading and checks if the key exists in the items map. If it doesn’t, it returns nil and false.

If the item does exist, the function checks whether it has expired using the Expired() method. If it has, it unlocks the cache and returns nil and false. Otherwise, it returns the item’s value along with true to indicate a successful cache hit.

The next function is Delete. This function removes an item from the cache by its key. It first locks the cache and checks if the key exists in the items map. If it doesn’t, it returns false.

If the key exists, it deletes the entry from the map and then unlocks the cache. Finally, it returns true to indicate that the item was successfully removed.

And finally Len. This returns the length of items in the items map; nothing too fancy here.
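Putting those three methods together, a small hit/miss demo might look like the following. The cache is constructed directly here because the singleton constructor comes later in the article, and the sample key is invented:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type Item struct {
	value    interface{}
	duration int64
}

func (i Item) Expired() bool {
	return i.duration != 0 && time.Now().UnixNano() > i.duration
}

type cache struct {
	items map[string]Item
	mu    sync.RWMutex
}

// Get returns the value and true on a cache hit, nil and false on a miss.
func (c *cache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	item, ok := c.items[key]
	if !ok || item.Expired() {
		return nil, false
	}
	return item.value, true
}

// Delete removes a key, reporting whether anything was removed.
func (c *cache) Delete(key string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if _, ok := c.items[key]; !ok {
		return false
	}
	delete(c.items, key)
	return true
}

// Len reports how many items are currently stored.
func (c *cache) Len() int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return len(c.items)
}

func main() {
	c := &cache{items: make(map[string]Item)}
	c.items["session"] = Item{value: "user-42", duration: 0}

	v, ok := c.Get("session") // cache hit
	fmt.Println(v, ok)        // user-42 true

	_, ok = c.Get("missing") // cache miss
	fmt.Println(ok)          // false

	fmt.Println(c.Delete("session")) // true
	fmt.Println(c.Len())             // 0
}
```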

But wait, there is one more thing

// Delete all expired items from store
func (c *cache) DeleteExpired() {
    c.mu.Lock() // one write lock for the whole sweep
    defer c.mu.Unlock()
    for k, v := range c.items {
        if v.Expired() {
            // delete directly: calling c.Delete here would deadlock,
            // since it tries to take the lock we already hold
            delete(c.items, k)
        }
    }
}

Remember when I mentioned we have a Janitor? The DeleteExpired function is part of that cleanup process. It loops over all the items in the store and checks whether each one has expired; any expired item is removed from the cache.

This helps ensure the cache doesn't keep stale data and frees up memory for new entries, keeping the cache clean.

The Janitor

Now let’s talk about how the Janitor runs.

// Run janitor clean up after intervals
func (j *Janitor) Run(c *cache) {
    ticker := time.NewTicker(j.interval)
    for {
        select {
        case <-ticker.C:
            c.DeleteExpired()
        case <-j.stop:
            ticker.Stop()
            return // without this, the loop would keep spinning after stop
        }
    }
}

The Run method starts a ticker that fires at intervals defined by the user. On each tick, it calls DeleteExpired() on the cache — this is how we automatically clean up expired items without the user needing to call anything manually.

We also listen for a stop signal through the stop channel. Once that signal is received, we stop the ticker and exit the loop.

func stopExecution(c *cache) {
    c.janitor.stop <- true
}

func startJanitor(interval time.Duration, c *cache) {
    janitor := &Janitor{
        interval: interval,
        stop:     make(chan bool),
    }
    c.janitor = janitor
    go janitor.Run(c)
}

There’s also a stopExecution function that sends the stop signal to the janitor. This gives us a way to gracefully shut down the cleanup process when the program is closing or we no longer need the cache.

Finally, the startJanitor function wires everything together. It creates a new Janitor with a cleanup interval, assigns it to the cache, and runs it in a goroutine so it works in the background.

Putting it all together

func GetCache() *cache {
    once.Do(func() {
        defaultCache = &cache{
            items: make(map[string]Item),
        }
        startJanitor(DefaultCacheInterval, defaultCache)
        runtime.SetFinalizer(defaultCache, stopExecution)
    })
    return defaultCache
}

Now we can run it and see our little in-memory cache in action.

Conclusion

This is a very small and maybe not production-ready cache, but I still remember the excitement I felt when it finally worked—and how it helped me with my little project (which I promise I did not abandon).

In my next article, we’ll look at how to expand this further by adding LRU support to the cache.

Now remember: just because you can build something yourself doesn’t always mean you should. It’s often a smart move to use existing tools, especially when caching isn’t the core of your project.

That said... I do think you should try building your own multi-auth system at least once. Until next time ✌🏽
