Memory management in CPython

As we all know, Python is a garbage-collected language. However, have you ever wondered how memory is managed under the hood? It is quite different from what happens in other garbage-collected languages like Java, Go, JavaScript, etc. Today we are going to dive deep into how exactly memory is managed in CPython.

When it comes to managing memory, CPython relies on two main mechanisms that handle two different use cases:

  1. Reference counting

  2. Cycle detection using Mark and Sweep

Before proceeding further with these methods, it is useful to know that there are two kinds of memory regions involved - the stack and the heap.

The heap here is not the heap data structure that we commonly know about; rather, it is the region of memory used for dynamic allocation and deallocation, managed primarily by the Python runtime.

As for what is stored where: each function call gets a frame, and the local names inside it live on the call stack, whereas the objects those names refer to, including class instances and data structures, are allocated on the heap. In lower-level languages, the rule of thumb is that data whose exact size is known ahead of time can live on the stack; in CPython, however, every object lives on the heap, and the stack essentially holds references to those objects.
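
As a small illustration I'm adding here (a sketch, not from the original post): the name local below lives in the function's frame on the call stack, while the list object it refers to lives on the heap, which is why the list survives after the call returns.

>>> def make_list():
...     local = [1, 2, 3]   # the name 'local' lives in this frame (the stack)
...     return local        # the list object itself lives on the heap
...
>>> result = make_list()    # the frame is gone, but the heap object is still reachable
>>> result
[1, 2, 3]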

Reference counting

When a variable is created, the name becomes a reference to an object stored in memory, just like in other languages. You can check the memory location of an object by using the id function. CPython keeps track of these references by maintaining a count of them on every object.

>>> a = 'crunchcode'
>>> id(a)
140654166163184
>>> hex(id(a))
'0x7fec999c3ef0'

There are quite a few ways to get the reference count; I like to use the ctypes module because of its brevity.

>>> import ctypes
>>> ctypes.c_long.from_address(id(a)).value
1

This shows the number of references currently stored in memory that point to the object crunchcode. The trick works because, in CPython, id returns the object's memory address, and in standard builds the reference count is the first field of every object's header.
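
As an aside (my addition, not part of the original example), the standard library's sys.getrefcount reports the same counter; note that it reads one higher than the ctypes trick because the argument passed to the call is itself a temporary reference.

>>> import sys
>>> nums = [79]
>>> sys.getrefcount(nums)   # 1 reference from the name 'nums' + 1 temporary reference held by the call
2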

Let's see what happens when I assign a new variable b to a (an existing variable).

>>> import ctypes
>>> a = [79]
>>> ctypes.c_long.from_address(id(a)).value
1
>>> b = a
>>> ctypes.c_long.from_address(id(a)).value
2

As you might notice, our reference count increased by 1. This is because variables a and b are now referring to the same object at the same memory address.

To understand this better, we need to understand the conceptual difference between a shallow copy and a deep copy.

Usually, when we assign a variable to another variable in Python like in the above example, nothing is copied at all: Python binds the new name to the same memory location and increases the reference count by 1. This is often loosely referred to as a shallow copy, although strictly speaking it is aliasing - two names for one object.

This can have unintended effects as well in your code:

>>> a = [1, 2]
>>> b = a
>>> a
[1, 2]
>>> b
[1, 2]
>>> a[1] = 100
>>> b
[1, 100]
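
For contrast, a true shallow copy made with copy.copy (a sketch I'm adding, not from the original post) creates a new outer list, but the inner objects are still shared, so mutating a nested element is visible through both names:

>>> import copy
>>> a = [[1, 2], [3, 4]]
>>> b = copy.copy(a)   # new outer list
>>> b is a
False
>>> b[0] is a[0]       # ...but the inner lists are the same objects
True
>>> a[0].append(99)
>>> b
[[1, 2, 99], [3, 4]]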

At first glance, strings seem to behave differently:

>>> a = 'crunchcode'
>>> b = a
>>> a = 'aa'
>>> b
'crunchcode'

What the hell just happened? Strings are immutable, so a = 'aa' does not modify the existing string; it creates a brand-new string object and rebinds a to it, while b keeps pointing at the original 'crunchcode'. (The same thing would happen with the list if we rebound a to a new list instead of mutating it in place.) How strings are created and reused internally is a whole different topic that is out of the scope of this post, and one that I intend to cover in the future.
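
Before moving on, a quick identity check (my addition, not from the post) makes the rebinding visible without going into string internals:

>>> a = 'crunchcode'
>>> b = a
>>> a is b             # both names point at the same object
True
>>> a = 'aa'           # rebinds 'a' to a brand-new string object
>>> a is b
False
>>> b                  # 'b' still holds a reference to the original
'crunchcode'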

Now let's talk about deepcopy

>>> import copy
>>> import ctypes
>>> a = [1, 2]
>>> b = copy.deepcopy(a)
>>> id(a)
140478929984320
>>> id(b)
140478929986816
>>> a[1] = 100
>>> b
[1, 2]
>>> a
[1, 100]
>>> ctypes.c_long.from_address(id(a)).value
2
>>> ctypes.c_long.from_address(id(b)).value
1

When we use deepcopy, the Python runtime does not point the new variable to the same memory address; instead, it recursively copies the values into newly allocated objects, as the different id results above show. As a result, reference counts are also tracked separately for each object.

As soon as the runtime detects that an object's reference count has dropped to 0, its memory is freed up immediately.
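
Here is a small sketch of that (my addition), reusing the ctypes trick from earlier: deleting one of the two names drops the count back to 1, and deleting the last one lets CPython free the object on the spot.

>>> import ctypes
>>> a = [79]
>>> b = a
>>> ctypes.c_long.from_address(id(a)).value
2
>>> del b                                     # one reference gone
>>> ctypes.c_long.from_address(id(a)).value
1
>>> del a                                     # the count hits 0 and the list is deallocated immediately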

Cycle detection

Reference counting alone cannot reclaim objects that refer to each other in a cycle, since their counts never drop to 0. To handle this, Python's garbage collector also runs a cycle-detection pass, commonly described as a Mark and Sweep style algorithm, to deallocate memory held by cyclic references.

A full explanation of the algorithm is out of the scope of this article; however, you can learn more about it in Mark-and-Sweep: Garbage Collection Algorithm - GeeksforGeeks if you want to.
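
As a hedged sketch of why this pass is needed (my addition, not from the article): a list that contains itself keeps a non-zero reference count even after its name is deleted, so only the cycle detector - which you can trigger by hand with gc.collect - can reclaim it.

>>> import gc
>>> cycle = []
>>> cycle.append(cycle)       # the list now references itself
>>> del cycle                 # the name is gone, but the self-reference keeps the count above 0
>>> collected = gc.collect()  # run the cycle detector manually
>>> collected >= 1            # at least our orphaned list was found and freed
True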

Each time the collector runs, the small fragments of memory that were occupied by the collected objects are released. When this happens many times, it can lead to an issue known as fragmentation: free memory ends up scattered across small, non-contiguous chunks, so large allocations become harder and harder to satisfy even though the total free space looks sufficient, which can ultimately lead to memory errors. A defragmentation (compaction) pass can fix this, but it adds extra overhead.

Conclusion

While mark and sweep is used to reclaim memory held by cyclic references, reference counting frees memory as soon as an object's reference count reaches 0. Together, they free up memory for both short-lived objects and much more complex ones.

Python also implements a gc module for garbage collection. Let's learn about it in the next post!
