Basics Of Parallel Computing

Garvit Singh

Introduction

This article is part of my series on System Design topics. In this article, I explain the fundamental concepts behind parallel computing, the hardware architectures involved, and the approaches adopted for parallel systems.

What is Parallel Computing?

  1. Parallel Systems refer to tightly coupled systems.
  2. The term 'Parallel Computing' refers to a model where the computation is divided among several processors sharing the same memory.
  3. The architecture of a parallel computing system is characterised by homogeneity of components: each processor is of the same type and possesses the same capabilities.
  4. The shared memory has a single address space, which is accessible to all the processors.
  5. Parallel programs are broken down into several units of execution that can be allocated to different processors and can communicate with each other through the shared memory.
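Under stated assumptions (Python threads standing in for the units of execution; the GIL limits true CPU parallelism, so this only illustrates the model), the shared-memory idea above can be sketched like this. The array and lock names are my own:

```python
# Sketch of the shared-memory model: several units of execution write into
# one array that lives in a single shared address space.
import threading

shared = [0] * 4          # one array visible to every worker
lock = threading.Lock()   # coordinates access to the shared memory

def worker(idx, value):
    # Each unit of execution writes its result into the shared array.
    with lock:
        shared[idx] = value * value

threads = [threading.Thread(target=worker, args=(i, i + 1)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared)  # [1, 4, 9, 16]
```

Because every thread sees the same `shared` list, no explicit data transfer is needed; the lock is what prevents two writers from interfering.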

What is Parallel Processing?

  1. Processing of multiple tasks simultaneously on multiple processors is called parallel processing.
  2. The parallel program consists of multiple active processes(tasks) simultaneously solving a given problem.
  3. A given task is divided into multiple subtasks using the divide-and-conquer technique, and each subtask is processed on a different CPU.
  4. Programming on a multiprocessor system using the divide-and-conquer technique is called parallel programming.
  5. Parallel processing provides a cost-effective solution by increasing the number of CPUs in a computer and by adding an efficient communication system between them.
  6. The workload can now be shared between different processors. This results in higher computing power and performance than a single processor system.

Hardware Architectures for Parallel Processing

The core elements of parallel processing are CPUs. Based on the number of instruction streams and data streams that can be processed simultaneously, computing systems are classified into four categories:

  • Single Instruction Single Data (SISD)
  • Single Instruction Multiple Data (SIMD)
  • Multiple Instruction Single Data (MISD)
  • Multiple Instruction Multiple Data (MIMD)

1. Single Instruction Single Data (SISD)

  • A SISD Computing system is a uniprocessor machine capable of executing a single instruction, which operates on a single data stream.
  • In SISD, machine instructions are processed sequentially, and hence computers adopting this model are popularly called sequential computers.
  • All the instructions and data to be processed have to be stored in the primary memory.

2. Single Instruction Multiple Data (SIMD)

  • A SIMD computing system is a multiprocessor machine capable of executing the same instruction on all CPUs, with each CPU operating on a different data stream.
  • Machines based on the SIMD model are well suited for scientific computing, which involves lots of vector and matrix operations.
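Real SIMD execution happens in hardware vector units; the following pure-Python sketch only mimics the idea of one instruction applied across many data elements at once:

```python
# SIMD in spirit: one logical "vector add" instruction applied to every
# lane of two data vectors. Hardware SIMD would do all lanes in one step.
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]

# The same ADD operation over four data lanes.
c = [x + y for x, y in zip(a, b)]
print(c)  # [11.0, 22.0, 33.0, 44.0]
```

Libraries such as NumPy express exactly this pattern and let the hardware's vector units execute it for real.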

3. Multiple Instruction Single Data (MISD)

  • A MISD computing system is a multiprocessor machine capable of executing different instructions on different processors, but all of them operate on the same data set.
  • Machines built using the MISD model are not useful for most applications; they have few practical uses.

4. Multiple Instruction Multiple Data (MIMD)

  • A MIMD computing system is a multiprocessor machine capable of executing multiple instructions on multiple data sets.
  • Each processor in MIMD has separate instruction and data streams, and hence machines built using this model are well suited for all kinds of applications.
  • Processors in MIMD work asynchronously.
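A rough illustration of the MIMD idea, assuming Python threads stand in for processors (the function names and data are my own, and the GIL means this is concurrency rather than true hardware parallelism): two workers run different instruction streams on different data, asynchronously.

```python
# MIMD sketch: worker 1 and worker 2 execute *different* instructions on
# *different* data at the same time.
import threading

results = {}

def count_words(text):          # instruction stream 1
    results["words"] = len(text.split())

def sum_numbers(numbers):       # instruction stream 2
    results["total"] = sum(numbers)

t1 = threading.Thread(target=count_words, args=("parallel systems scale",))
t2 = threading.Thread(target=sum_numbers, args=([1, 2, 3, 4],))
t1.start(); t2.start()
t1.join(); t2.join()
print(results["words"], results["total"])  # 3 10
```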

Types Of MIMD Machines

1. Shared Memory MIMD Machine

  • All processors are connected to a single global memory and they all have access to it.
  • Systems based on this model are also called tightly-coupled multiprocessor systems.
  • The communication between processors takes place through the shared memory.
  • Easier to program, but less tolerant of failures and harder to extend.

2. Distributed Memory MIMD Machine

  • All processors have a local memory.
  • Also called loosely coupled multiprocessor systems.
  • Communication between processors takes place through an interconnection network. The network can be configured as a tree, mesh, cube, etc.
  • Each processor operates asynchronously.
  • More tolerant to failures and easier to extend.
  • Distributed-memory MIMD architectures scale and tolerate failures better than shared-memory MIMD, but they are harder to program because all communication between processors must be made explicit.
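A small sketch of the message-passing style this model implies, with a `queue.Queue` standing in for the interconnection network (the threads and names are illustrative; real systems would use sockets or a library like MPI):

```python
# Distributed-memory sketch: each worker touches only its own local
# variables and cooperates by sending messages over the "network".
import queue
import threading

network = queue.Queue()   # stand-in for the interconnection network

def sender():
    local = [3, 1, 2]            # sender's local memory
    network.put(sorted(local))   # send a message; no shared variables

def receiver(out):
    out.append(network.get())    # receive the message into local memory

received = []
t1 = threading.Thread(target=sender)
t2 = threading.Thread(target=receiver, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # [[1, 2, 3]]
```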

Approaches To Parallel Programming

  1. Data Parallelism

    • The divide-and-conquer technique is used to split data into multiple sets, and each data set is processed by a different processor using the same instruction.
  2. Process Parallelism

    • A given operation has multiple distinct activities, which can be processed on multiple processors.
  3. Farmer & Worker Model

    • A job distribution approach is used.
    • One processor is configured as the master and all others are designated as the slaves.
    • The master processor assigns jobs to the slave processors, and they inform the master processor upon completion. Master collects the results.
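The farmer-and-worker model described above can be sketched with a job queue and a result queue; the worker count and the squaring job are arbitrary choices for illustration:

```python
# Farmer-and-worker sketch: the master puts jobs on a queue, workers pull
# jobs and push results back, and the master collects the results.
import queue
import threading

jobs = queue.Queue()
done = queue.Queue()

def worker():
    while True:
        n = jobs.get()
        if n is None:          # sentinel: no more work for this worker
            break
        done.put(n * n)        # report the result back to the master

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

for n in range(1, 6):          # master assigns five jobs
    jobs.put(n)
for _ in workers:              # one sentinel per worker to shut it down
    jobs.put(None)
for w in workers:
    w.join()

results = sorted(done.get() for _ in range(5))  # master collects results
print(results)  # [1, 4, 9, 16, 25]
```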

Conclusion

You can read other articles written by me through these links.

System Design Series
Introduction To Parallel Computing
Deep Dive Into Virtualization
Insights Into Distributed Computing

Cloud Computing Series
1. Cloud Service Models
2. Cloud Deployment Models
3. Cloud Security
4. Cloud Architecture
5. Cloud Storage
6. Networking In The Cloud
7. Cloud Cost Management
8. DevOps In Cloud & CI/CD
9. Serverless Computing
10. Container Orchestration
11. Cloud Migration
12. Cloud Monitoring & Management
13. Edge Computing In Cloud
14. Machine Learning In Cloud

Computer Networking Series
1. Computer Networking Fundamentals
2. OSI Model
3. TCP/IP Model : Application Layer
4. TCP/IP Model : Transport Layer
5. TCP/IP Model : Network Layer
6. TCP/IP Model : Data Link Layer

Version Control Series
1. Complete Guide to Git Commands
2. Create & Merge Pull Requests
3. Making Open Source Contributions

Linux
Complete Guide to Linux Commands

Thanks For Reading! 💙
Garvit Singh
