When AI Replaces Your Face: A Personal Account of Algorithmic Identity Loss


Introduction
Recently, I attempted what I thought was a simple task: generate a polished LinkedIn profile picture using AI. I provided a real photo of myself, along with detailed instructions not to alter my identity—no facial modifications, no skin tone smoothing, no guesswork. Just background replacement and lighting tweaks.
What I got back instead was something entirely different.
It was not me. It was a statistical approximation of someone like me.
This is what happens when AI systems prioritize demographic probability over personal identity.
What Went Wrong
Despite uploading a clear, high-resolution image, the generative engine repeatedly ignored the original photo. The system rendered a new face—a generic “professional Black man in a blue shirt”—rather than enhancing the real subject (me).
Here is where the breakdown occurred:
The uploaded image was not treated as an anchor.
The model relied almost entirely on prompt-based logic.
The output was inspired by demographic traits, not grounded in my actual photo.
My face was effectively replaced by a templated composite.
There was no respect for individuality—only conformity to an archetype.
That is not enhancement. It is erasure.
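The failure above points to a missing safeguard: a generation pipeline could verify that the face it outputs actually matches the face it was given, and reject results that drift. Here is a minimal sketch of such a check, assuming you already have face embeddings from some face-recognition model; the embedding vectors, the 0.6 threshold, and the function names are all illustrative assumptions, not any vendor's API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def preserves_identity(source_emb: np.ndarray,
                       output_emb: np.ndarray,
                       threshold: float = 0.6) -> bool:
    """Accept the generated image only if its face embedding stays close
    to the uploaded photo's embedding. The threshold is a tunable guess."""
    return cosine_similarity(source_emb, output_emb) >= threshold

# Toy example with made-up embeddings: a faithful edit keeps the vector
# close to the source; a "templated composite" does not.
source = np.array([0.9, 0.1, 0.4])
faithful_edit = np.array([0.88, 0.12, 0.41])
templated_face = np.array([-0.2, 0.95, 0.1])

print(preserves_identity(source, faithful_edit))   # expected: True
print(preserves_identity(source, templated_face))  # expected: False
```

A check like this would not fix the underlying bias, but it would at least make silent face replacement detectable instead of shipping it as a finished headshot.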
The Bigger Issue: High-Stakes Misuse
If an AI cannot preserve my identity for something as benign as a headshot, what happens when such systems are applied in:
Police surveillance?
Suspect image reconstruction?
Facial recognition at borders or public spaces?
The same behavior—substituting real people with statistically probable ones—could have catastrophic consequences:
Misidentification
Racial profiling
Wrongful arrests
Generative AI cannot be trusted to “fill in” identity-critical gaps.
What it creates is not neutral—it is encoded with the biases of its training data.
Lessons from a Simple Headshot
What started as a productivity hack turned into a cautionary tale. Here is what I took away:
AI models often do not understand the difference between demographic traits and personal identity.
These systems are trained on overgeneralized data and will often default to biased composites.
The consequences become dangerous the moment you apply these tools in serious contexts.
Visual Reference
Here is the comparison between the image I uploaded and the face the AI engine generated in its place.
Written by

Solomon M. Kamanga
I'm also just a boy sitting in front of a screen asking the errors to behave.