When AI Detectors Get It Wrong


Back on June 12th, 2009, I wrote a creative blog post titled “Dear God.” It was raw and personal, part of a blog where I explored thoughts and emotions through writing. That post later became the opening piece in my published book Art of Absolution, setting the tone for the collection that followed. You can still find it archived on the Wayback Machine.
Recently, I ran that exact piece through JustDone’s AI Detector. To my surprise, it flagged the post as 97% AI-generated. Using incognito mode to get around the one-time analysis limit, I ran it through several more times. The scores swung wildly, from 79% to 100% AI-written, on the same text, written by me in 2009, long before AI writing tools were mainstream or even accessible.
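If you want to see how unstable these scores are for yourself, here is a minimal sketch of the kind of repeat-testing I did, with one big caveat: `score_with_detector` is a hypothetical stand-in that simulates scores in the range I observed, because as far as I know JustDone only offers a web interface, not an API.

```python
import random
import statistics

def score_with_detector(text: str) -> float:
    """Stand-in for a real detector call (hypothetical -- JustDone only offers
    a web UI as far as I know). Simulates a score in the 79-100% range I saw."""
    return random.uniform(79, 100)

def measure_spread(text: str, runs: int = 5) -> None:
    """Score the same text several times and print how much the verdict moves."""
    scores = [score_with_detector(text) for _ in range(runs)]
    print(f"min {min(scores):.0f}%  max {max(scores):.0f}%  "
          f"mean {statistics.mean(scores):.0f}%  "
          f"spread {max(scores) - min(scores):.0f} points")

if __name__ == "__main__":
    measure_spread("Dear God...")  # same input every time, very different verdicts
```

Swap the stand-in for whatever detector you actually have access to and the point stands: the same input should not produce wildly different verdicts.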
This isn’t just a curiosity—it’s a warning. If detectors are misclassifying genuine human writing as machine-generated, the consequences could be serious. Writers risk being falsely accused of cheating or plagiarism. Their credibility can be undermined by nothing more than an algorithm’s flawed guess.
These detection tools may appear scientific, but they are not reliable or consistent. The same text can yield completely different results depending on how and when it's submitted. Despite this inconsistency, they’re already being used by educators, employers, and publishers as if they are accurate, objective, and final.
This isn’t just about my blog post. It’s about the erosion of trust in human creativity. When expressive, original writing is mistaken for artificial output, it’s clear the tools aren’t ready for the authority they’re being given. And that should concern anyone who writes, reads, or values truth.
What’s even worse: the lowest score I could get, 62%, was for this very article, which, except for this last sentence, is 100% AI.