Traditional ways to scale Python


Some traditional methods to scale a Python program

Threading

Pros:

  • Lightweight compared to processes

  • Multiple threads can share memory (this can also be a downside)

Cons:

  • Python has a GIL (Global Interpreter Lock), so threads don't speed up CPU-heavy tasks

  • Sharing memory between threads requires synchronization, which is difficult and error-prone

Threading is still useful for mixed CPU and IO workloads because the GIL is released during blocking IO operations, which gives other threads room to run.

In practice: we can create multiple threads to handle heavy IO tasks. For example, FastAPI (via Starlette) runs regular `def` endpoints in a thread pool so that their blocking IO doesn't stall the server.
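
As a minimal sketch of this idea, here is IO-bound work fanned out over a thread pool. The URLs and the worker count are placeholders; any blocking IO call would behave the same way.

```python
# Fetch several URLs concurrently with a thread pool.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URLS = [
    "https://example.com",
    "https://example.org",
    "https://example.net",
]

def fetch(url: str) -> int:
    # Blocking IO: the GIL is released while waiting on the network,
    # so the other threads can make progress in the meantime.
    with urlopen(url) as response:
        return len(response.read())

with ThreadPoolExecutor(max_workers=8) as pool:
    for url, size in zip(URLS, pool.map(fetch, URLS)):
        print(url, size)
```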

Process

Pros:

  • Each process has its own GIL, so CPU-heavy tasks can use all CPU cores.

  • Memory is not shared by default, so there are no race conditions over shared state.

Cons:

  • Heavy; running too many processes (more than the number of CPU cores) isn't efficient.

Running multiple processes essentially means "copying" one process to multiple places. In practice, we can choose where this copying happens. We usually have two options:

  1. Create and manage processes ourselves using the multiprocessing or concurrent.futures modules (see the sketch after this list).

  2. Use a process manager to handle our processes. For example, in a Python web application, Gunicorn can create multiple worker processes to run our program (using the --workers <number_of_workers> option). In the Docker world, we use the same idea: we write our image as a single-process program and let Kubernetes or Swarm create multiple instances (processes) of it.
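
A minimal sketch of option 1, using concurrent.futures to manage the processes ourselves. The cpu_bound() function is just a placeholder for any CPU-heavy task.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def cpu_bound(n: int) -> int:
    # Pure-Python CPU work: each worker process has its own interpreter and GIL.
    return sum(i * i for i in range(n))

if __name__ == "__main__":  # required on platforms that spawn worker processes
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(cpu_bound, [10_000_000] * 4))
        print(results)
```

For option 2, the equivalent for a web app is a single command such as `gunicorn --workers 4 myapp:app`, where `myapp:app` is a placeholder for your own WSGI module and application object.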

AsyncIO

In my opinion, AsyncIO in Python is tricky. It only works well if everything in your project uses AsyncIO.

AsyncIO's event loop is built on system calls like select/epoll/kqueue to watch sockets for readiness, so a program written around regular blocking sockets won't cooperate with AsyncIO. For instance, if your program uses SQLAlchemy with a standard blocking database driver, you can't simply make it AsyncIO compatible, because the driver blocks on the socket instead of going through the event loop.
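
To see what "everything goes through the event loop" means, here is a small sketch using only the standard library; the hosts are placeholders. Both connections are registered with the event loop, so a single thread can keep both requests in flight, but one blocking call anywhere in the chain would stall everything.

```python
import asyncio

async def fetch(host: str) -> bytes:
    # open_connection hands the socket to the event loop (epoll/kqueue under the hood),
    # so awaiting on reader/writer never blocks the whole program.
    reader, writer = await asyncio.open_connection(host, 80)
    writer.write(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    await writer.drain()
    data = await reader.read(1024)
    writer.close()
    await writer.wait_closed()
    return data

async def main():
    # Both requests run concurrently on a single thread.
    results = await asyncio.gather(fetch("example.com"), fetch("example.org"))
    for body in results:
        print(body[:60])

asyncio.run(main())
```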

If you want a fully AsyncIO program, you should use NodeJS, because practically all of its libraries are built around the event loop.

For example, if your web app needs to work with PostgreSQL, Redis, Kafka, MongoDB, Cassandra, etc., you will need to find an AsyncIO-compatible library for each one. This is a lot of work, and you might find that the Python ecosystem lacks many mature AsyncIO libraries.

NodeJS is built for AsyncIO, so use it if there are no constraints on programming language or tech stack.

AsyncIO isn't new in the Python world. Twisted is an old Python framework built around an event loop based on select/epoll, but it seems to have faded away for the reasons mentioned above. Creating an AsyncIO ecosystem like NodeJS's is a huge task; for example, you would need to rewrite the entire SQLAlchemy library to make it AsyncIO compatible!

Be aware that many tutorials and frameworks run blocking IO calls in a separate thread (or thread pool) to mimic AsyncIO behavior and keep the event loop free.
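
This is roughly what that pattern looks like with the standard library (Python 3.9+). Here blocking_query() is a placeholder for any blocking call, such as a database driver that isn't AsyncIO-aware.

```python
import asyncio
import time

def blocking_query() -> str:
    # Stand-in for a blocking IO call (e.g. a driver with no AsyncIO support).
    time.sleep(1)
    return "result"

async def main():
    # asyncio.to_thread runs the blocking call in a worker thread,
    # so the event loop stays free to serve other coroutines.
    result = await asyncio.to_thread(blocking_query)
    print(result)

asyncio.run(main())
```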
