Deep Learning Frameworks

Retzam Tarle

Hello!

Welcome to another surprise mid-series special! 🤗

Some time back I contributed to a project called unify.ai. The vision of unify.ai is to create a platform that unites popular machine-learning frameworks and models through APIs. So I thought we should learn about the deep-learning ecosystem and the frameworks that drive it, since we will soon start building and deploying our own deep-learning models.

Disclaimer: This is not a sponsored post for unify.ai.

Let us first understand what a deep learning framework is. First of all, what is a framework? A framework is a pre-built collection of tools, libraries, and conventions that provides a structured foundation for developing software applications. In that same breath, we can say a deep learning framework is a collection of tools and libraries that helps developers build and train deep learning models.

Let me give a short illustration of a framework. In the construction industry, we can say one framework consists of bricks, another of blocks, and yet another of glass. A builder does not need to make their own bricks or blocks; they can buy them in the size and shape they want and construct the building they want. That is basically what frameworks in tech are: ready-made building blocks.

We have already used machine learning frameworks when working on supervised and unsupervised learning tasks. We used the scikit-learn (sklearn) library to get ready-made implementations of models like Naive Bayes and Random Forest, so we only had to train them. Deep learning frameworks are similar.
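To recap what that looked like, here is a minimal sketch of using a pre-built Random Forest from scikit-learn; synthetic data stands in for a real dataset, and we never write the algorithm itself, we just fit it:

```python
# A minimal sketch of using a ready-made model from scikit-learn.
# Synthetic data stands in for a real dataset here.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)  # pre-built implementation
model.fit(X_train, y_train)                                       # all we do is train it
print("Test accuracy:", model.score(X_test, y_test))
```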

There are a lot of deep-learning frameworks and more are being created as I type this. We'll learn about a few popular ones here:

  1. TensorFlow

    • Developed by: Google Brain.

    • Description: TensorFlow is one of the most widely used deep learning frameworks, offering flexibility and scalability for building machine learning models. It supports both high-level APIs like Keras for rapid prototyping and low-level APIs for fine-grained control (see the short sketch after this list).

    • Strengths: Highly scalable, excellent community support, strong visualization tools (TensorBoard), and broad compatibility with various platforms and hardware.

    • Website: www.tensorflow.org

  2. PyTorch

    • Developed by: Facebook AI Research (FAIR).

    • Description: PyTorch is known for its dynamic computational graph, which makes it easier to debug and more intuitive for researchers, and it has become very popular in academia and research. Tesla's Autopilot, for example, is powered by PyTorch.

    • Strengths: Dynamic computation graph, easy to learn and use, strong community support, and seamless integration with Python.

    • Website: https://pytorch.org/

  3. Keras

    • Developed by: François Chollet; initially an independent project, now integrated into TensorFlow as tf.keras.

    • Description: Keras is a high-level API that provides a user-friendly interface for designing and training deep learning models. It runs on top of lower-level frameworks, most notably TensorFlow (and, historically, Theano).

    • Strengths: Simple and easy to use, great for beginners, quick prototyping, and supports multiple backends.

    • Website: https://keras.io/

  4. JAX

    • Developed by: Google Research.

    • Description: JAX is a relatively new framework designed for high-performance numerical computing. It mirrors the NumPy API and adds automatic differentiation, allowing users to build and train neural networks (a tiny example follows this list).

    • Strengths: Optimized for TPUs and GPUs, supports just-in-time (JIT) compilation, and integrates seamlessly with NumPy.

    • Website: https://github.com/google/jax

  5. PaddlePaddle

    • Developed by: Baidu.

    • Description: PaddlePaddle (Parallel Distributed Deep Learning) is an open-source deep learning platform that offers flexibility and scalability for a variety of tasks.

    • Strengths: Strong support for distributed training, good for large-scale applications, and optimized for industrial applications in China.

    • Website: https://www.paddlepaddle.org.cn/en
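To make the different styles concrete, here is a minimal sketch of my own (with arbitrary layer sizes) that defines the same tiny two-layer classifier first with Keras, TensorFlow's high-level API, and then in PyTorch, where the forward pass runs eagerly and is easy to step through:

```python
# Keras / TensorFlow: declare the architecture, then compile and fit.
import tensorflow as tf

keras_model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
keras_model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
# keras_model.fit(X_train, y_train, epochs=5)  # train on your own data

# PyTorch: the model is a plain Python class; the graph is built as the
# code runs ("dynamic"), so you can debug it with ordinary Python tools.
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 32)
        self.fc2 = nn.Linear(32, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))   # you could print(x.shape) or set a breakpoint here
        return self.fc2(x)

torch_model = TinyNet()
logits = torch_model(torch.randn(4, 10))  # runs immediately, like normal Python
```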
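And here is a tiny, self-contained sketch of the JAX style: ordinary NumPy-looking code, with jax.grad providing the derivative and jax.jit compiling the function. The toy loss and data are just placeholders:

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    pred = jnp.dot(x, w)             # looks like ordinary NumPy
    return jnp.mean((pred - y) ** 2)

grad_loss = jax.grad(loss)           # gradient of the loss w.r.t. w (the first argument)
fast_grad = jax.jit(grad_loss)       # JIT-compiled for CPU/GPU/TPU

w = jnp.ones(3)
x = jnp.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
y = jnp.array([1.0, 2.0])
print(fast_grad(w, x, y))            # the gradient, computed by the compiled function
```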

Finally, how does unify.ai come into this? It builds tools and libraries that let developers write a model in one framework and convert it to another framework easily. Without such tools, a developer trying to port a model from one framework to another would have to rewrite it manually.

This means that if I train a model with TensorFlow, I can use unify.ai to convert it to a PyTorch or PaddlePaddle model in a very simple way. That is awesome because it makes it easy to switch between frameworks and find the one that works best for your model.
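To appreciate what "rewriting it manually" involves, here is a rough sketch of my own (not unify.ai code) that ports a single Dense layer from Keras to PyTorch by hand. Even for one layer you have to re-declare the architecture and transpose the weight matrix, because the two frameworks store it differently:

```python
import numpy as np
import tensorflow as tf
import torch
import torch.nn as nn

# A single Keras Dense layer whose weights we want to move to PyTorch.
dense = tf.keras.layers.Dense(4)
_ = dense(tf.zeros((1, 8)))               # call it once so the weights are created
w, b = dense.get_weights()                # Keras kernel shape: (in_features, out_features)

# The "manual rewrite": re-declare the same layer in PyTorch and copy the weights.
linear = nn.Linear(8, 4)                  # PyTorch weight shape: (out_features, in_features)
with torch.no_grad():
    linear.weight.copy_(torch.from_numpy(w.T))
    linear.bias.copy_(torch.from_numpy(b))

# Sanity check: both layers now compute (almost) the same outputs.
x = np.random.randn(2, 8).astype("float32")
print(np.allclose(dense(tf.constant(x)).numpy(),
                  linear(torch.from_numpy(x)).detach().numpy(),
                  atol=1e-5))
```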

I hope we now have a clear picture of what deep learning frameworks are and how their implementation philosophies differ. We will still learn about the ideas behind different models, but we won't need to implement them ourselves; our deep-learning framework always gives us a standard implementation to use, so we can focus on training our models.

How was this mid-series special? I think I had fun writing this one!

Keep safe and I'll see you in the next one! 👽

⬅️ Previous Chapter
