Facial Recognition for Fun (and Somehow It Worked 😅) using DeepFace 👩🔍


“Sometimes the most fun experiments are the ones you least expect to work.” 🤖✨
Introduction
So… I attended my first hackathon a couple of months back 🎉, and this was the project we worked on. I must admit, I was sooo lost 😭😭. The gurus took charge and tried to explain, and I just sat back and listened. The concept seemed simple enough: “Collect a database of faces and make the program recognise those faces in another picture.” I thought, hey, this would be fun to try on my own. So with a bit of Python knowledge, a rough idea of how the project worked, curiosity, lots of documentation, and ChatGPT as my right-hand man, I embarked on this facial recognition journey for fun.
What is DeepFace?
DeepFace is a deep learning-based facial recognition and analysis framework used to perform tasks related to facial detection, facial analysis, and facial recognition.
The Concept
So the idea is simple.
Collect known faces – I gathered sample images of people and labelled them based on their names.
Augment the images – To help the system recognise faces better, I used data augmentation to create more variations of each face.
Detect faces in a new image – I used DeepFace to extract individual faces from a test image.
Compare and match – Each detected face was then compared with the augmented database to find the closest match, and ideally, return the correct name.
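Step 4 is where the real work happens. Under the hood, frameworks like DeepFace turn each face into an embedding vector and compare vectors with a distance metric such as cosine distance. Here's a minimal numpy sketch of that matching idea, using made-up 4-number "embeddings" (real embeddings are hundreds of dimensions; the names and values here are purely illustrative):

```python
import numpy as np

def cosine_distance(a, b):
    # 0 means same direction (a perfect match), larger means less similar
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# hypothetical embeddings for the known-face database
database = {
    "ada":   np.array([0.9, 0.1, 0.3, 0.5]),
    "grace": np.array([0.2, 0.8, 0.7, 0.1]),
}

# hypothetical embedding of a face detected in the test image
detected = np.array([0.85, 0.15, 0.35, 0.45])

# the person whose embedding is closest wins
best = min(database, key=lambda name: cosine_distance(database[name], detected))
print(best)  # -> ada
```

Everything else in the pipeline (augmentation, detection, cropping) exists to make this one comparison as reliable as possible.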
The Process (How It Works)
-Download and import the necessary libraries
First, we start by downloading and importing the necessary Python libraries to make our program work.
import os
import cv2
import numpy as np
from imgaug import augmenters as iaa
from deepface import DeepFace
from matplotlib import pyplot as plt
-os: for handling file directories and paths.
-cv2: for image reading, writing, and colour conversion.
-numpy: for handling image arrays.
-imgaug: for augmentation, creating different variants of each image.
-deepface: handles detection, feature extraction, and comparison.
To install these libraries, use the code below in your terminal:
pip install deepface opencv-python-headless matplotlib imgaug
-Augmentation of Sample Data
To improve accuracy and enable the system to detect faces better, you need to give it different variations of the same data (e.g., a picture a bit blurred, a picture tilted to the right, a picture in dark lighting, a picture in bright lighting, etc.). We can easily achieve this with data augmentation; we'll use imgaug to accomplish it.
# This script performs data augmentation on a set of images in a specified directory.
faces_db_path = "faces"  # folder containing known faces

# DATA AUGMENTATION
augmented_dir = "augmented_faces"
augmentations = iaa.Sequential([
    iaa.Fliplr(0.5),                                  # horizontal flip
    iaa.Flipud(0.5),                                  # vertical flip
    iaa.Affine(rotate=(-45, 45)),                     # rotate
    iaa.AdditiveGaussianNoise(scale=(0, 0.1 * 255)),  # add noise
    iaa.Multiply((0.5, 1.5)),                         # change brightness
    iaa.GammaContrast((0.5, 2.0)),                    # change contrast
    iaa.Crop(percent=(0, 0.1)),                       # crop
])
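If you're curious what these transforms actually do at the array level, here's a numpy-only sketch (no imgaug needed) of three of them on a dummy image: a horizontal flip, a brightness change, and additive Gaussian noise. imgaug does the same kind of array arithmetic, just with randomised parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# dummy 4x4 RGB "image" with values in [0, 255]
img = rng.integers(0, 256, size=(4, 4, 3)).astype(np.uint8)

# horizontal flip, like iaa.Fliplr: reverse the column axis
flipped = img[:, ::-1, :]

# brightness change, like iaa.Multiply: scale pixels and clip back to range
brighter = np.clip(img.astype(np.float32) * 1.3, 0, 255).astype(np.uint8)

# additive Gaussian noise, like iaa.AdditiveGaussianNoise
noise = rng.normal(0, 0.1 * 255, size=img.shape)
noisy = np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

print(flipped.shape, brighter.shape, noisy.shape)  # all (4, 4, 3)
```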
Next, let’s create a function that automatically performs augmentation and saves all the augmented images of a person into a folder named after them.
# function that automatically augments and saves
def augment_and_save(img_path, person_name, save_dir, num_augments=10):
    # derive the person's name from the filename, e.g. "ada_1.jpg" -> "ada"
    filename = os.path.basename(img_path)
    person_name = filename.split('_')[0]
    image = cv2.imread(img_path)
    if image is None:
        print(f"Error reading image {img_path}")
        return
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    person_folder = os.path.join(save_dir, person_name)
    os.makedirs(person_folder, exist_ok=True)
    # feed num_augments copies through the pipeline to get that many variants
    images = [image] * num_augments
    augmented_images = augmentations(images=images)
    for i, aug_img in enumerate(augmented_images):
        out_name = f"{person_name}_{i}.jpg"
        save_path = os.path.join(person_folder, out_name)
        aug_img_bgr = cv2.cvtColor(aug_img, cv2.COLOR_RGB2BGR)
        cv2.imwrite(save_path, aug_img_bgr)
    print(f"Augmented images saved in {person_folder}")
# Loop through all images in the face database and apply augmentation,
# saving results to the appropriate person-named folder
for filename in os.listdir(faces_db_path):
    img_path = os.path.join(faces_db_path, filename)
    if os.path.isfile(img_path):
        augment_and_save(img_path, filename, augmented_dir)
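Note that this whole pipeline leans on a naming convention: every file in the faces folder starts with the person's name followed by an underscore (e.g. ada_1.jpg), so the name can be recovered with a simple split. The file names here are just examples of the convention:

```python
import os

def name_from_file(path):
    # "faces/ada_1.jpg" -> "ada"
    return os.path.basename(path).split('_')[0]

print(name_from_file("faces/ada_1.jpg"))     # -> ada
print(name_from_file("faces/grace_02.jpg"))  # -> grace
```

One caveat: a file without an underscore (say bob.jpg) would keep its extension and become a folder called "bob.jpg", so it's worth keeping the naming consistent.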
-Face Detection and Extraction
Now we have a database full of augmented faces. The next step is to detect all the faces in the test image and save them individually for matching. We used DeepFace.extract_faces() to achieve this.
test_image_path = "test1.jpg"
faces = DeepFace.extract_faces(
    img_path=test_image_path,
    detector_backend="retinaface",
    enforce_detection=False
)
if len(faces) == 0:
    print("No faces detected in the test image.")
else:
    print(f"{len(faces)} face(s) detected in the test image.")
    image_bgr = cv2.imread(test_image_path)
    image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    # Iterate over each detected face
    for idx, face_info in enumerate(faces):
        face = face_info["face"]
        # face crops come back as floats in [0, 1]; convert to uint8 for saving
        if face.dtype == np.float32 or face.dtype == np.float64:
            face_uint8 = (face * 255).clip(0, 255).astype(np.uint8)
        else:
            face_uint8 = face
        # Save each detected face temporarily
        face_path = f"temp_face_{idx + 1}.jpg"
        cv2.imwrite(face_path, cv2.cvtColor(face_uint8, cv2.COLOR_RGB2BGR))
        # Display the face
        plt.imshow(face)
        plt.title(f"Face {idx + 1}")
        plt.axis('off')
        plt.show()
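That dtype check matters because extract_faces returns face crops as float arrays scaled to [0, 1], while cv2.imwrite expects 8-bit integers. The conversion on its own, with fake "pixels" (note how astype truncates and clip catches any value that drifts above 1.0):

```python
import numpy as np

# fake float "pixels"; the last one is slightly out of range on purpose
face = np.array([[0.0, 0.5], [0.999, 1.2]])

face_uint8 = (face * 255).clip(0, 255).astype(np.uint8)
print(face_uint8)  # [[  0 127]
                   #  [254 255]]
```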
-Comparing and Matching the Extracted Faces
Now comes the real test: matching! For each face we extracted, we compared it with the augmented faces stored in our folders. Whichever folder (a.k.a. person's name) has the most matches is declared the best match for that face. We used DeepFace.find() to achieve this.
# Perform face matching with the augmented images (runs once per extracted face)
try:
    results = DeepFace.find(
        img_path=face_path,  # the face saved in the previous step
        db_path=augmented_dir,
        model_name="ArcFace",
        distance_metric='cosine',
        enforce_detection=False
    )
    if results and not results[0].empty:
        print(f"Matches for Face {idx + 1}:")
        match_counts = {}  # Dictionary to store match frequency
        for _, row in results[0].iterrows():
            matched_path = row["identity"]
            # Extract the person's name from the matched filename
            person_name = os.path.basename(matched_path).split('_')[0]
            match_counts[person_name] = match_counts.get(person_name, 0) + 1
        # Print all matches grouped by person
        for name, count in match_counts.items():
            print(f" - {name}: {count} match(es)")
        # Find best match
        best_match = max(match_counts, key=match_counts.get)
        print(f"✅ Best match for Face {idx + 1}: {best_match}")
    else:
        print(f"No match found for Face {idx + 1}")
except Exception as e:
    print(f"Matching failed: {e}")
os.remove(face_path)
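Stripped of DeepFace, the "whichever folder wins" step is just a frequency count over matched file paths. A standalone sketch with a made-up list of match results (the paths are hypothetical):

```python
import os

# hypothetical matched paths, as DeepFace.find would report them
matches = [
    "augmented_faces/ada/ada_0.jpg",
    "augmented_faces/ada/ada_3.jpg",
    "augmented_faces/grace/grace_1.jpg",
    "augmented_faces/ada/ada_7.jpg",
]

match_counts = {}
for matched_path in matches:
    person = os.path.basename(matched_path).split('_')[0]
    match_counts[person] = match_counts.get(person, 0) + 1

best_match = max(match_counts, key=match_counts.get)
print(match_counts)  # {'ada': 3, 'grace': 1}
print(best_match)    # ada
```

collections.Counter would do the same job in one line (Counter(...).most_common(1)), but the dictionary version makes the voting explicit.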
Check out the full version on GitHub
Conclusion
And there you have it — my simple facial recognition program 💅.
Was it perfect? Not really. But did I learn a ton? Absolutely.
From hours spent wrestling with DeepFace to error messages that almost broke me 😭, to deep late-night convos with GPT about giving up — and finally watching my code make an accurate match 😆 — it’s been a roller coaster.
I’m still learning, still growing, and still building. I can’t wait to see what “hmm… this could be fun” project I jump into next.
Hopefully, this inspires someone to take a shot at their own “fun idea” too. Trust me, it’s worth it.
Written by Oluwatofunmi Otuneye