Guide to Transforming Netflix Raw Data into Valuable Insights for Data Wranglers

Table of contents
- Introduction: The Hidden Stories in Netflix’s Data
- Step 1: Discovery — Understanding Our Dataset
- Step 2: Structuring — Creating an Analysis-Ready Framework
- Converting Dates and Splitting Duration
- Cleaning IDs and Standardizing Data Types
- Step 3: Cleaning — Handling Missing Values and Inconsistencies
- Step 4: Enriching — Adding Analytical Value
- Step 5: Validation — Ensuring Data Quality
- Step 6: Publishing — Delivering the Final Dataset
- Key Takeaways and Practical Applications
- Conclusion: The Art of Data Wrangling

Introduction: The Hidden Stories in Netflix’s Data
As data professionals, our first challenge is transforming Netflix's raw catalogue data into a clean, structured dataset ready for analysis.
As I dive deeper into my second week of the Cyber Shujaa program, I've been applying what I'm learning in real time, and this project is a perfect example. It's not just about cleaning data; it's about unlocking value from it.
In this comprehensive walkthrough, we’ll explore how to prepare Netflix’s content data for meaningful analysis using Python and pandas. We’ll cover everything from initial exploration to final dataset publication, with practical techniques you can apply to your own data projects.
Step 1: Discovery — Understanding Our Dataset
We begin by importing our dataset and conducting a thorough examination:
import pandas as pd
#Import the data to a pandas DataFrame
df = pd.read_csv('/kaggle/input/netflix-shows/netflix_titles.csv')
#Have a quick overview of the data
df.info()
print("\n")
#number of rows and columns
print(f"Rows and columns (rows = {df.shape[0]} ,columns = {df.shape[1]})")
print("\n")
#list all column names
column_names = df.columns.tolist()
column_number = 0
for column_name in column_names:
    column_number = column_number + 1
    print(f"column {column_number} : {column_name}")
print("\n")
#Data types in each column
print("Data Types: \n", df.dtypes)
print("\n")
#Group and count of null values of each column
print("Null values per column: \n", df.isnull().sum())
print("\n")
#Group and count duplicated rows
print("Number of duplicated rows: \n", df.duplicated().sum())
print("\n")
Key Findings:
- 8,807 titles (movies and TV shows)
- 12 columns of metadata
- Significant missing values in the director (2,634), cast (825), and country (831) fields
- Mixed date formats in the date_added column
- Duration values combining numbers and units (e.g., “90 min”, “2 Seasons”)
Step 2: Structuring — Creating an Analysis-Ready Framework
Converting Dates and Splitting Duration
# Standardize date format
df['date_added'] = pd.to_datetime(df['date_added'], format='mixed')
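The same structuring pass also splits the combined duration field into a numeric value and a unit, producing the duration_value and duration_unit columns referenced later. The exact split code isn't shown above, so here is a minimal sketch of one way to do it:
# Split e.g. “90 min” / “2 Seasons” into a numeric part and a unit part (sketch; the notebook's exact code may differ)
split_duration = df['duration'].str.extract(r'(?P<duration_value>\d+)\s*(?P<duration_unit>\D+)')
df['duration_value'] = pd.to_numeric(split_duration['duration_value'], errors='coerce').astype('Int64')
df['duration_unit'] = split_duration['duration_unit'].str.strip()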
Cleaning IDs and Standardizing Data Types
# Clean show_id by removing 's' prefix
df['show_id'] = df['show_id'].str.strip('s').astype('Int64')
# Convert appropriate columns to string type
string_cols = ['type', 'title', 'director', 'cast', 'country','rating', 'duration', 'listed_in', 'description', 'duration_unit']
df[string_cols] = df[string_cols].astype('string')
Step 3: Cleaning — Handling Missing Values and Inconsistencies
Dropping duplicates if any
#finding and dropping duplicates if any
print("Duplicate rows \n", df.duplicated().sum())
Strategic Imputation for Directors
#create director cast pairs
df['dir_cast'] = df['director'] + '---' + df['cast']
#frequent collaborations
counts = df['dir_cast'].value_counts()
filtered_counts = counts[counts >= 3]
filtered_values = filtered_counts.index
list_dir_cast = list(filtered_values)
#building a lookup dictionary
dict_direcast = {}
for pair in list_dir_cast:
    director, cast = pair.split('---')
    dict_direcast[director] = cast
#fill missing directors
for i in range(len(dict_direcast)):
    df.loc[(df['director'].isna()) & (df['cast'] == list(dict_direcast.items())[i][1]), 'director'] = list(dict_direcast.items())[i][0]
#handle remaining gaps
df.loc[df['director'].isna(),'director'] = 'Not Given'
Geographic Imputation Using Director Information (direct unpacking vs. indexed loop)
The direct unpacking version is clearly more readable and efficient, which is why it’s generally preferred in Python programming. The indexed version was included in the notebook mainly to show the alternative approach and help those transitioning from other programming languages.
#Assign directors and countries to variables
directors = df['director']
countries = df['country']
#zip them
zipped_dir_country = zip(directors,countries)
#convert tuple to dictionary
zipped_to_dict = dict(list(zipped_dir_country))
#match director to country to fill null country values(indexed loop)
for n in range(len(zipped_to_dict)):
    df.loc[(df['country'].isna()) & (df['director'] == list(zipped_to_dict.items())[n][0]), 'country'] = list(zipped_to_dict.items())[n][1]
#match director to country to fill null country values(direct unpacking)
'''
for director, country in zipped_to_dict.items():
    df.loc[(df['country'].isna()) & (df['director'] == director), 'country'] = country
'''
#handle remaining gaps
df['country'] = df['country'].fillna('Not Given')
Final cleaning steps
# Handle remaining missing values
df['cast'] = df['cast'].fillna('Not Given')
Drop other null records
#drop other null records
df.drop(df[df['date_added'].isna()].index, axis = 0, inplace=True)
df.drop(df[df['rating'].isna()].index, axis = 0, inplace=True)
df.drop(df[df['duration'].isna()].index, axis = 0, inplace=True)
Further cleaning
#Errors
#check if there are any added dates that come before release year
import datetime as dt
#check the number of inconsistencies
sum(df['date_added'].dt.year < df['release_year'])
#replace any date_added that is less than release_year with the release_year
df.loc[(df['date_added'].dt.year < df['release_year']),'date_added'] = pd.to_datetime(df['release_year'].astype(str))
#check if the rows have been fixed
df.iloc[[1551,1696,2920,3168]]
#verify
sum(df['date_added'].dt.year < df['release_year'])
Step 4: Enriching — Adding Analytical Value
Creating a Recent Content Flag
import datetime as dt
current_year = dt.datetime.now().year
df['is_recent'] = df['release_year'] >= current_year - 5
df.iloc[[1,2,3,4,5]]
This simple binary feature enables easy analysis of trends in newer versus older content.
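As a quick illustration (my example, not from the original notebook), the flag turns a recent-versus-older breakdown into a one-liner:
# Share of titles released in the last five years, split by content type (illustrative)
print(df.groupby('type')['is_recent'].mean().round(2))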
Step 5: Validation — Ensuring Data Quality
Final Checks and Optimization
# Drop not-needed columns
columns_to_drop = ['dir_cast','duration']
for col in columns_to_drop:
if col in df.columns:
df.drop(columns=col, inplace=True)
#checking data types
print("Data types \n",df.dtypes)
#sampling a few rows
df.sample(5)
#reset the index
df = df.reset_index(drop=True)
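Sampling rows and eyeballing dtypes is useful, but a few explicit assertions make the validation step self-documenting. A minimal sketch (my addition, not in the original notebook):
# Lightweight quality gates - each assertion raises immediately if an expectation is broken
assert df['show_id'].is_unique, "show_id should uniquely identify each title"
assert df['date_added'].notna().all(), "date_added should have no remaining nulls"
assert (df['date_added'].dt.year >= df['release_year']).all(), "added date should not precede release year"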
Column Reordering for Usability
optimal_columns = [
    'show_id', 'type', 'title', 'is_recent', 'director', 'cast', 'country',
    'date_added', 'release_year', 'rating', 'duration_value',
    'duration_unit', 'listed_in'
]
df = df[optimal_columns]
Step 6: Publishing — Delivering the Final Dataset
df.to_csv('/kaggle/working/cleaned_netflix.csv', index=False)
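A quick read-back (optional, assuming the same Kaggle working path) confirms the export is intact:
# Reload the published file and confirm its shape matches the in-memory frame
check = pd.read_csv('/kaggle/working/cleaned_netflix.csv')
print(check.shape, df.shape)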
Key Takeaways and Practical Applications
Through this process, we’ve transformed the raw Netflix dataset into a clean, analysis-ready resource. Here’s what we’ve accomplished:
- Standardized Data Formats: Consistent types and structures across all fields
- Comprehensive Missing Value Handling: Smart imputation strategies for directors and countries
- Enhanced Analytical Potential: Added features like is_recent for deeper insights
- Optimized Structure: Logical column ordering and removed redundant fields
This cleaned dataset now enables powerful analyses (a quick sketch follows the list) such as:
- Content trends over time by country or genre
- Director and actor network analysis
- Duration patterns for movies vs. TV shows
- Recent versus classic content comparisons
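For example, a first pass at the content-trend question could look like this (illustrative only, using the cleaned frame above):
# Titles added to Netflix per year, split by content type
added_per_year = (
    df.assign(year_added=df['date_added'].dt.year)
      .groupby(['year_added', 'type'])
      .size()
      .unstack(fill_value=0)
)
print(added_per_year.tail())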
Conclusion: The Art of Data Wrangling
Data wrangling is both science and art: it takes technical skill to manipulate data and intuition to preserve its meaning and value. The Netflix dataset provides an excellent case study in transforming messy real-world data into a structured analytical asset.
By following this systematic approach of discovery, structuring, cleaning, enriching, validating, and publishing, you can tackle data preparation challenges across domains. The techniques demonstrated here, from strategic imputation to feature engineering, are widely applicable to many data science projects.
Remember: quality data preparation enables quality analysis. The time invested in proper data wrangling pays dividends throughout the entire analytical pipeline.