Getting Started with Distributed Processing and MapReduce 🗯️
What is Distributed Processing? ⛷️
Distributed processing refers to a variety of computer systems that use more than one computer (or processor) to run an application. It includes parallel processing, in which a single computer uses more than one CPU to execute programs.
More often, however, distributed processing refers to local-area networks (LANs) designed so that a single program can run simultaneously at various sites. Most distributed processing systems contain sophisticated software that detects idle CPUs on the network and parcels out programs to utilize them.
Another form of distributed processing involves distributed databases: databases in which the data is stored across two or more computer systems. The database system keeps track of where the data lives, so the distributed nature of the database is not apparent to users.
What is MapReduce? ⛷️
MapReduce is a Java-based, distributed execution framework within the Apache Hadoop ecosystem. It takes away the complexity of distributed programming by exposing two processing steps that developers implement: 1) Map and 2) Reduce. In the Map step, input data is split across parallel processing tasks and transformed into intermediate results; in the Reduce step, those intermediate results are aggregated into the final output.
How MapReduce Works ⛷️
At the heart of MapReduce are two functions, Map and Reduce, which run one after the other.
The Map function takes input from the disk as <key,value> pairs, processes them, and produces another set of intermediate <key,value> pairs as output.
The Reduce function also takes inputs as <key,value> pairs: it receives each intermediate key together with all the values associated with that key, aggregates them, and produces <key,value> pairs as its final output.
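To make this concrete, here is a minimal sketch of the classic word-count job written against the Hadoop MapReduce Java API. The class names, input path, and output path are illustrative choices, not part of the original post: the Mapper emits a <word, 1> pair for every token it sees, and the Reducer sums those counts for each word.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map step: split each input line into words and emit <word, 1>
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);          // intermediate <key,value> pair
      }
    }
  }

  // Reduce step: receive each word with all its counts and sum them
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);           // final <word, totalCount> pair
    }
  }

  // Driver: configure the job and point it at the input/output paths
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Packaged into a JAR, a job like this could be submitted with something along the lines of `hadoop jar wordcount.jar WordCount /input /output`, where the JAR name and paths are placeholders for your own.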