Exploratory Data Analysis (EDA) with Python: Unveiling the Hidden Stories in Data
Introduction to Exploratory Data Analysis
Exploratory Data Analysis (EDA) is an essential step in the data analysis process, in which data scientists and analysts explore datasets to gain insights, discover patterns, and uncover relationships between variables. EDA helps in understanding the data's structure and in identifying missing values, outliers, and other anomalies. Python, with its rich ecosystem of data analysis libraries such as Pandas, NumPy, Matplotlib, and Seaborn, is an ideal tool for performing EDA and generating visualizations that support data-driven decisions.
Getting Started with EDA - Loading and Summarizing Data
To begin our EDA journey, we need a dataset to work with. We'll use the popular "Iris" dataset, which contains measurements of iris flowers, including sepal length, sepal width, petal length, petal width, and species. Let's load the data and perform a preliminary summary of the dataset.
import pandas as pd
# Load the Iris dataset from a local CSV file
# (if you don't have the file, seaborn's sns.load_dataset('iris')
# provides the same data with the same column names)
iris_df = pd.read_csv('iris_dataset.csv')
# Display the first few rows of the dataset
print(iris_df.head())
# Generate a summary of the dataset
print(iris_df.info())
print(iris_df.describe())
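Beyond head(), info(), and describe(), two quick checks worth running at this stage are class balance and duplicate rows. The sketch below is self-contained: since the local CSV used above may not be available, it loads Iris from scikit-learn (an assumption, not part of the original workflow) and renames the columns to the snake_case names used throughout this post.

```python
import pandas as pd
from sklearn.datasets import load_iris

# Load Iris from scikit-learn (bundled offline, unlike a local CSV)
# and rename the columns to match the names used in this post
iris = load_iris(as_frame=True)
iris_df = iris.frame.rename(columns={
    'sepal length (cm)': 'sepal_length',
    'sepal width (cm)': 'sepal_width',
    'petal length (cm)': 'petal_length',
    'petal width (cm)': 'petal_width',
})
iris_df['species'] = iris.target_names[iris.target]
iris_df = iris_df.drop(columns='target')

# Class balance: Iris has exactly 50 rows per species
print(iris_df['species'].value_counts())

# Exact duplicate rows are worth flagging early
print("Duplicate rows:", iris_df.duplicated().sum())
```

A balanced target and a duplicate count near zero mean the summaries above can be taken at face value; a heavy imbalance or many duplicates would change how you read every later plot.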
Visualizing Data - Uncovering Patterns and Trends
Data visualization is a crucial aspect of EDA. It allows us to understand the data distribution, relationships between variables, and any anomalies that might exist. Let's create various visualizations using Matplotlib and Seaborn.
Scatter Plots - Visualizing Relationships
import matplotlib.pyplot as plt
import seaborn as sns
# Scatter plot of sepal length vs. sepal width
plt.figure(figsize=(8, 6))
sns.scatterplot(x='sepal_length', y='sepal_width', hue='species', data=iris_df)
plt.title('Scatter Plot of Sepal Length vs. Sepal Width')
plt.xlabel('Sepal Length (cm)')
plt.ylabel('Sepal Width (cm)')
plt.show()
Histograms - Understanding Data Distribution
# Histogram of petal length for each species
plt.figure(figsize=(8, 6))
sns.histplot(data=iris_df, x='petal_length', hue='species', kde=True)
plt.title('Histogram of Petal Length for Each Species')
plt.xlabel('Petal Length (cm)')
plt.show()
Handling Missing Values and Outliers
In EDA, it's essential to identify and handle missing values and outliers that might affect the analysis and modeling process.
# Check for missing values
print(iris_df.isnull().sum())
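Iris itself has no missing values, so as a hedged illustration the sketch below uses a small toy DataFrame with deliberate gaps (the column names mirror this post but the values are invented) and shows two common handling strategies: dropping incomplete rows and filling numeric gaps with the column median.

```python
import numpy as np
import pandas as pd

# A toy frame with deliberate gaps (Iris itself has none)
df = pd.DataFrame({
    'petal_length': [1.4, np.nan, 5.1, 4.5],
    'species': ['setosa', 'setosa', None, 'versicolor'],
})

# Option 1: drop any row containing a missing value
dropped = df.dropna()

# Option 2: fill numeric gaps with the column median
filled = df.copy()
filled['petal_length'] = filled['petal_length'].fillna(
    filled['petal_length'].median())

print(dropped.shape)
print(filled['petal_length'].tolist())
```

Which strategy is right depends on how much data you can afford to lose and whether the gaps are random; imputation keeps every row but slightly distorts the distribution.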
# Boxplot to visualize outliers
plt.figure(figsize=(8, 6))
sns.boxplot(data=iris_df, x='species', y='petal_length')
plt.title('Boxplot of Petal Length by Species')
plt.ylabel('Petal Length (cm)')
plt.show()
You can also use the interquartile range (IQR) method to identify and remove outliers:
import numpy as np

def find_outliers(data):
    # Calculate the first quartile (Q1) and third quartile (Q3)
    Q1 = np.quantile(data, 0.25)
    Q3 = np.quantile(data, 0.75)
    # Calculate the interquartile range (IQR)
    IQR = Q3 - Q1
    # Define the lower and upper bounds for outliers
    lower_bound = Q1 - 1.5 * IQR
    upper_bound = Q3 + 1.5 * IQR
    # Identify values outside the bounds
    outliers = [x for x in data if x < lower_bound or x > upper_bound]
    return IQR, outliers
# Example usage:
data = [10, 15, 20, 25, 30, 35, 40, 45, 50, 200]
iqr, outliers = find_outliers(data)
print("IQR:", iqr)
print("Outliers:", outliers)
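The function above only reports outliers; to actually remove them from a DataFrame, you can apply the same IQR fences as a boolean filter. The sketch below reuses the toy numbers from the example above in a single-column DataFrame (the column name 'value' is just an illustration).

```python
import pandas as pd

# Same toy data as above, in DataFrame form
df = pd.DataFrame({'value': [10, 15, 20, 25, 30, 35, 40, 45, 50, 200]})

# Compute the IQR fences directly on the column
Q1 = df['value'].quantile(0.25)
Q3 = df['value'].quantile(0.75)
IQR = Q3 - Q1
lower, upper = Q1 - 1.5 * IQR, Q3 + 1.5 * IQR

# Keep only the rows inside the fences
trimmed = df[df['value'].between(lower, upper)]
print(trimmed['value'].tolist())
```

Only the extreme value 200 falls outside the fences, so the filter drops exactly the outlier the function above reported.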
Correlation Analysis - Identifying Relationships
Correlation analysis helps us identify the strength and direction of relationships between numerical variables in the dataset.
# Calculate the correlation matrix over the numeric columns only
# (passing the non-numeric 'species' column to corr() raises an
# error in recent pandas versions)
correlation_matrix = iris_df.select_dtypes(include='number').corr()
# Create a heatmap of correlations
plt.figure(figsize=(8, 6))
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm', fmt='.2f', linewidths=0.5)
plt.title('Correlation Heatmap')
plt.show()
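Besides eyeballing the heatmap, you can pull the strongest relationship out of the matrix programmatically. A minimal sketch, assuming Iris is loaded via scikit-learn (so the column names carry scikit-learn's "(cm)" suffixes rather than the snake_case names used elsewhere in this post):

```python
from sklearn.datasets import load_iris

# Numeric measurement columns only
iris = load_iris(as_frame=True)
corr = iris.data.corr()

# Flatten the matrix, drop the self-correlations (exactly 1.0 on the
# diagonal), and sort by absolute strength
pairs = corr.abs().unstack().sort_values(ascending=False)
pairs = pairs[pairs < 1.0]

# The strongest pair in Iris is petal length vs. petal width
print(pairs.index[0], round(pairs.iloc[0], 2))
```

This confirms what the heatmap shows visually: petal length and petal width are almost perfectly correlated, which matters later if you feed both into a model.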
Data Preprocessing - Data Transformation and Feature Engineering
Data preprocessing involves transforming data to make it suitable for analysis and modeling. Let's perform feature engineering to extract additional information from the data.
# Feature engineering - extract the first letter of the species name as a new feature
iris_df['species_initial'] = iris_df['species'].str[0]
# Grouping by species initial and calculating the mean petal length
species_initial_grouped = iris_df.groupby('species_initial')['petal_length'].mean()
# Bar plot of mean petal length by species initial
plt.figure(figsize=(8, 6))
sns.barplot(x=species_initial_grouped.index, y=species_initial_grouped.values)
plt.title('Mean Petal Length by Species Initial')
plt.xlabel('Species Initial')
plt.ylabel('Mean Petal Length (cm)')
plt.show()
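The species initial is a simple categorical feature; numeric ratios between existing columns are another common feature-engineering move. As a hedged sketch (the feature name petal_ratio is invented for illustration, and the data is loaded via scikit-learn rather than the CSV used above):

```python
import pandas as pd
from sklearn.datasets import load_iris

# Load Iris and rename the two columns this sketch needs
iris = load_iris(as_frame=True)
df = iris.data.rename(columns={
    'petal length (cm)': 'petal_length',
    'petal width (cm)': 'petal_width',
})
df['species'] = iris.target_names[iris.target]

# A ratio feature: petal shape (length relative to width)
df['petal_ratio'] = df['petal_length'] / df['petal_width']

# Setosa petals are long and narrow relative to the other species,
# so the ratio separates it cleanly
print(df.groupby('species')['petal_ratio'].mean().round(2))
```

A single derived column like this can separate classes that the raw measurements only separate jointly, which is exactly the kind of insight feature engineering during EDA is meant to surface.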
Conclusion
Exploratory Data Analysis is a powerful technique that enables data scientists and analysts to gain valuable insights, understand data distributions, and identify relationships between variables. Python, with its versatile libraries like Pandas, NumPy, Matplotlib, and Seaborn, provides a robust environment for conducting EDA and generating insightful visualizations. Through this blog, you have learned the essential steps of EDA: loading and summarizing data, visualizing it, handling missing values and outliers, and performing data preprocessing. Armed with these skills, you can confidently explore and analyze datasets, making data-driven decisions that benefit your business and research projects.
Written by
Ayesha Irshad