Asking AI to Find AI Is the Dumbest Thing Humans Have Ever Done
AI is now ruling the writing world — empowering content platforms while unfairly punishing real writers who have never even touched AI. This madness MUST stop NOW!
I wrote this story on Medium and am sharing it with our community for FREE, to raise awareness and spark a healthy debate. I also blogged it on our community site. Enough is enough with these garbage tools!
Yesterday, an 81-year-old friend, just a few years my senior, told me his draft was rejected by a Medium publication because their so-called AI detection tool flagged it as 60% ‘AI-generated.’ This man earned his PhD in psychology and anthropology long before personal computers existed, let alone AI. I had invited this elderly scholar to illuminate the younger generation through platforms like Medium and Substack, which I discovered only a few years ago.
A British academic who migrated to Australia, he has taught English at Cambridge, Oxford, and the University of Melbourne, where I studied. Back when I was doing my postdoc, he was my senior, guiding PhD students through their theses. And now, after decades of scholarly work, he’s being told by an algorithm that his writing isn’t ‘human’ enough. The absurdity of it all is beyond belief.
Imagine this: a 19-year-old submits a health science article to my publication on Medium.com, and the junior editors excitedly report that it scored 0% AI-generated. A perfect human-written piece, right?
But when I, a seasoned health scientist, review it, I can’t make sense of a single sentence. It’s a garbled mess — Wikipedia paragraphs hastily copied, reworded by an AI generator, and then ‘humanized’ by another tool.
The result is a Frankenstein’s monster of words that somehow passes AI detection with flying colors, while real experts get flagged as fakes. What a joke.
For established writers, especially those of us with decades of knowledge, being silenced is more than just frustrating. It’s disorienting. One moment, we’re encouraged to share our wisdom; the next, we’re ignored, flagged, or labeled as irrelevant.
The Human Casualties of AI Detection
Every day, real writers, students, and professionals are falsely accused of using AI when they haven’t. Their reputations, careers, and academic records are put on the line — not because they cheated, but because an algorithm thinks they might have.
Universities and companies are blindly trusting these faulty tools, handing out penalties based on nothing more than flawed probability scores.
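A little arithmetic shows why punishing people on probability scores alone is so reckless. The numbers below are hypothetical, chosen only for illustration, but the logic (the base-rate effect) holds for any detector: when the overwhelming majority of submissions are human-written, even a detector with a seemingly small error rate ends up accusing mostly innocent writers.

```python
# Hypothetical numbers for illustration only; no real detector publishes
# rates this cleanly. The point is the base-rate arithmetic, not the values.
human_share = 0.95          # assumed: 95% of submissions are human-written
false_positive_rate = 0.05  # assumed: detector flags 5% of human text as AI
true_positive_rate = 0.90   # assumed: detector catches 90% of AI text

flagged_human = human_share * false_positive_rate        # 0.0475 of all texts
flagged_ai = (1 - human_share) * true_positive_rate      # 0.0450 of all texts

# Of everything the detector flags, what fraction is an innocent human?
innocent_share = flagged_human / (flagged_human + flagged_ai)
print(round(innocent_share, 3))  # → 0.514 (over half of all flags hit real writers)
```

In other words, under these assumed rates, a flag is closer to a coin toss than to evidence, yet institutions treat it as a verdict.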
Imagine spending years perfecting your craft, only to be told that your work is “too polished” to be human. That’s the world we’re living in now.
AI Detection Tools Are Inconsistent and Biased
If AI detection were reliable, it would at the very least give the same result every time. But it doesn’t. The same passage can be run through different tools and get completely different verdicts.
Here’s a perfect example of this madness: I submitted my carefully researched story on cancer cells, which had already been vetted by 30 senior editors and approved for a boost nomination after being given the green light by four medical doctors and two health scientists who served as subject matter experts.
Yet, Medium’s curators rejected it. I guess it was because Copyleaks flagged it as 61.6% AI-generated. Three other AI detection tools scanned the same piece and found it to be 100% human-written. So, which is it?
Please take a closer look at this garbage:
“Well, isn’t this a treat? This so-called ‘advanced’ software is now bragging that it’s pulling ‘dozens of signals’ from AI tools that were trained on 50 years of my hard work. Who would’ve imagined this breakthrough? Nothing like watching decades of sweat and brilliance get handed over to some shiny new garbage tool.
Is my work a product of AI sorcery or human expertise?
When even the so-called ‘experts’ in AI detection can’t agree, how can anyone take these tools seriously?
Even running the same text twice through the same tool can lead to contradictory results. Worse, these tools disproportionately flag well-written content, punishing people for knowing how to write well.
The irony? If you want to pass AI detection, you might have to deliberately write worse. That’s where we are — dumbing down our own intelligence to satisfy a broken system.
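One mechanical reason for the contradictory verdicts is worth spelling out. These tools don’t output “AI” or “human”; they output a probability score, and each vendor picks its own cut-off. The toy sketch below (with made-up thresholds, not any vendor’s real ones) shows how two detectors can agree on the underlying score of a passage and still hand down opposite verdicts:

```python
# Toy illustration with hypothetical thresholds: the same borderline score
# becomes "AI-generated" under one vendor's cut-off and "human-written"
# under another's. The score itself never changed.
def verdict(ai_probability: float, threshold: float) -> str:
    return "AI-generated" if ai_probability >= threshold else "human-written"

score = 0.60  # a borderline score, much like the 61.6% my story received

print(verdict(score, threshold=0.50))  # → AI-generated
print(verdict(score, threshold=0.70))  # → human-written
```

A passage sitting near any cut-off will flip verdicts from tool to tool, and even from run to run if the score estimate itself is noisy.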
AI Writing vs. Human Writing: The Great Irony
AI-generated text is vague, generic, and emotionless, yet these detection tools somehow mistake good human writing for AI.
If you write with clarity, coherence, and depth, you’re suddenly a suspect.
Meanwhile, if you produce bland, repetitive, soulless drivel, you’re safe.
What does that tell us?
AI detectors aren’t actually detecting AI. They’re detecting competence. And in this strange new world, competence is a red flag.
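To see how “detecting competence” can happen mechanically, consider one surface statistic detectors are widely believed to lean on: burstiness, the variation in sentence length. The sketch below is a deliberately simplified stand-in, not any vendor’s actual method. Disciplined, well-edited prose keeps sentence lengths even, which is exactly what machine text tends to do, so a metric like this cannot tell the two apart:

```python
import statistics

def burstiness(text: str) -> float:
    """Std-dev of sentence lengths in words: a crude proxy for one 'signal'
    detectors are believed to use. Low variation reads as 'machine-like'."""
    sentences = [s for s in text.replace("?", ".").replace("!", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)

# Polished, even-length sentences (the style editors train us to write):
polished = ("The committee reviewed the evidence. It weighed each claim "
            "carefully. It then issued a unanimous finding.")
# Rambling, uneven prose:
rambling = ("So I went. And then, honestly, the whole thing fell apart in a "
            "way nobody expected at all. Wild.")

print(burstiness(polished) < burstiness(rambling))  # → True
```

By this yardstick, the polished paragraph is the “suspicious” one, which is precisely the perversity the detectors keep demonstrating.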
The People Behind These Tools Are Clueless About Writing
Many AI detection tools are built by engineers, not writers, editors, or linguists. They approach language like a math problem — looking for patterns, not meaning.
But writing isn’t just about sentence structure and word frequency. It involves emotion, cognition, intent, intuition, and personal voice.
Machines can’t measure creativity. They can’t recognize wit, nuance, or subtext. And yet, we’re letting them decide what’s “real” writing and what isn’t.
The result?
A system where soulless algorithms sit in judgment over human expression without the slightest understanding of it, and where we punish real people for being too human, all because a machine says so. This isn’t just flawed logic; it’s a dystopian nightmare.
How did we get here? How did we let machines, blind to meaning, emotion, and creativity, become the gatekeepers of our words?
What an utterly insane world we’ve built!
The AI Detection-Humanizer Scam: Selling the Problem and the Fix
Here’s where things go from ridiculous to downright unethical.
AI detection companies don’t just flag innocent writers — they also conveniently partner with “humanizer” tools.
That’s right, the same people selling the AI detection solution are also selling the AI bypass solution. It’s like a security company breaking into your house and then charging you for a better lock.
These humanizers don’t actually make writing more human — they just reword AI-generated content to trick detection tools. And guess what?
Some of these AI detection companies own or invest in humanizing software. It’s a racket. They create the problem, then profit from the solution. And who suffers? Honest writers.
Here is what one humanizing tool, which is selling like hotcakes on the Internet, says:
They collaborate with other tools like this:
This means that if you use that humanizer, so-called leading tools like Originality.ai, Copyleaks, Scribbr, or GPTZero will not detect any AI-generated content.
So what a great opportunity for thousands of unemployed people: just sit in front of a PC with free ChatGPT, Gemini, or Copilot, produce 15 stories a day, and spam Medium. Since curators will see 0% AI, they might even boost those stories and send them to extended distribution.
Take this recent example — a sensational story went viral, and many readers immediately suspected it was AI-generated.
Meanwhile, I wrote a carefully researched piece on the same topic, drawing from my scientific expertise, only for it to be buried by the algorithm. It received just a handful of views from direct links. Why? Because Medium’s system, likely relying on Copyleaks, flagged it as AI-written and instantly treated it as spam.
If Medium Staff, followed by 101 million people, truly cares, here is my flagged story, one that received 0.0433% of the views of a piece written by someone with no real knowledge of the subject, riddled with insensitivity.
Yes, this very story — written with sincerity, from the heart and mind of an elderly, retired scientist — was published on ILLUMINATION, a platform with over 190,000 readers.
Yet, it received only 13 views. This is a story that deserves to be seen, shared, and read by the very people it speaks to: the elderly, who deserve to be uplifted, and the younger generation, who need to learn to respect those who came before them.
The Science Behind Aging Body Odor
Challenging Myths About Aging Body Odor with a Science-Backed Perspective
I truly hope the bosses and shareholders of Medium take a moment to listen, to review my story, and to reconsider the way their platform limits its reach. It’s a disservice to the audience, to the message, and to the very essence of what sharing knowledge should be about.
And yet, human curators chose to promote that dubious story to thousands of paying members. How does that happen? How can we justify elevating sensationalism while silencing expertise? If this is the future of online writing, we have a serious problem ahead.
The Inevitable Future: AI Detecting AI Detecting AI…
Here’s the real kicker — AI writing tools are getting smarter, and AI detection tools are struggling to keep up.
Eventually, we’ll need AI to detect AI that was designed to detect AI. It’s an endless, ridiculous feedback loop.
And where does that leave us?
In a world where content is judged by machines, not humans. Where creativity is shackled by algorithms. Where real writers are sidelined because they dared to write well and from the heart.
It’s time to stop worshipping bad tech.
AI detection tools are not the gatekeepers of truth. They’re just clumsy, unreliable guesswork wrapped in fancy branding, marketed to great communities like Medium as clients or sponsors.
If we don’t push back now, we risk handing over the future of writing to machines that don’t understand it.
Real writers deserve better. They deserve recognition, respect, and a platform that values their craft — not abuse from systems that fail to understand the depth and soul of their work.
When writers feel invisible on a platform they once trusted, it’s not just a minor annoyance; it can trigger genuine distress, affecting motivation and self-worth.
It’s time to stand up for genuine human expression and demand accountability from the tools we rely on. Writers, of all people, should never have to prove their humanity to a machine.
Check out this 15-minute interactive audio to better understand what I am trying to say but couldn’t articulate as well as these people, who drew on a 16-hour interview with a thought leader in the field.
You can also vote here. My vote is a big NO!
It is so sad that, because of these tools, some writers lost their MPPs, and some were even suspended with no explanation.
Thank you for reading my story, and I look forward to your comments.
To read my stories, you may subscribe to my content.
Lessons Learned from My Personal Stories
I am a retired healthcare scientist in my mid-70s, and I have several grandkids who keep me going and inspire me to write on this platform. I am also the chief editor of the Health and Science publication on Medium.com. As a giveback activity, I volunteered as an editor for ILLUMINATION publications, supporting many new writers. I will be happy to read, publish, and promote your stories. You may connect with me on LinkedIn, Twitter, and Quora, where I share stories I read. You may subscribe to my account to get my stories in your inbox when I post. You can also find my distilled content on Substack: Health Science Research By Dr Mike Broadly.
If you are interested in being a writer for my Health and Science publication on Medium, you can send your Medium ID via our writer registration portal here.
Thank you for writing this valuable and eye-opening story so clearly, Mike. Every word in this story deeply resonated with me, as I have experienced it firsthand and observed it clearly. We tested around 30 of these tools in 2022, and they were all wrong, so we stopped using them. I don’t understand why Medium still uses Copyleaks, which is the worst.
Your article hits the nail on the AI head. It's crazy when AI (a robot) asks you, a human, to confirm that you're not a robot. This world is becoming topsy-turvy.