The Internet Was Always Bad at Writing. Now It’s Just More Obvious.

Writing sucked long before LLMs showed up. Sure, today’s doomsayers love pointing at ChatGPT as the villain — but they’re missing the point. Amateur blogs were copy-pasting each other’s content for years. “Serious” newsrooms published outright lies without blinking. We were already knee-deep in mediocrity. We just didn’t want to say it out loud.

Here’s the part nobody wants to hear: people actually write better today. Yes, with AI’s help. And even ChatGPT’s worst hallucinations? They don’t come close to the confident garbage a conspiracy blogger would publish on a Tuesday afternoon without a second thought.

So voice matters now more than it ever has. It’s the one thing that cuts through.

Do LLMs Write Better Than the Average Writer?

Technically? Yes. Clean grammar, solid structure, no embarrassing agreement errors. ChatGPT would have outperformed most of the blogs clogging up the internet back in 2015, no contest.

But creativity is a whole different fight. A study in the Journal of Intelligence (2026) put human writers head-to-head with LLMs on creative tasks. The finding: LLMs win on technical execution, but humans hold the edge when the task demands real depth. The reason comes down to mechanism. LLMs recombine. Humans, at their best, transform.

One thing worth flagging: the study used students, not working writers. There’s a real difference between benchmarking ChatGPT against a first-year blogger and measuring it against someone who’s spent a decade finding their voice. That comparison hasn’t been done yet in academic research. Make of that what you will.

Now the uncomfortable bit: most readers can’t tell the difference. Put a well-prompted AI article next to something a mediocre writer turned in, and the average person won’t know which is which. That’s not a knock on human writing. It’s actually the best case for it — if AI already matches the mediocre writer, the only move is to stop being mediocre.

ChatGPT’s Worst Hallucination vs. Journalism’s Worst Failures

Humans lie. LLMs hallucinate. Both have receipts.

In 2023, a New York lawyer used ChatGPT to research a legal case. What came back: six court cases that had never existed. Convincing names, real-looking docket numbers, fabricated judicial opinions. The judge searched for them. Nothing. The lawyer got fined $5,000 and, in his own words, became “the poster child for the dangers of dabbling with new technology.” (Mata v. Avianca, Inc., S.D.N.Y. 2023)

In 1980, Janet Cooke wrote a front-page Washington Post story about Jimmy, an 8-year-old heroin addict. Heartbreaking detail, vivid prose, impossible to put down. It won the Pulitzer. Two days later, she handed it back. Jimmy was never real.

So which was worse? No clean answer. But the same thing shows up in both cases: nobody checked. The lawyer assumed ChatGPT couldn’t lie. The Post’s editors assumed their reporter wouldn’t. Publishing without verifying — that’s always been the real problem.

And it goes back further than you’d think. Jayson Blair made up stories at the New York Times for years. Stephen Glass invented entire companies to fool The New Republic. Der Spiegel’s star reporter Claas Relotius fabricated articles for years before anyone caught him. All credentialed. All edited. All failed the same way ChatGPT fails: stating false things with zero hesitation.

The difference is ChatGPT doesn’t have an ego to protect. No deadline panic. No career on the line. And it still gets it wrong. Because it doesn’t know it’s wrong. It’s just predicting what word comes next.

Which brings us to the only actual fix: expertise. You can’t catch what ChatGPT invents if you don’t already know more than it does about the topic. A sharp lawyer would have spotted those fake cases immediately. A sharper editor would have asked Cooke to take them to meet Jimmy.

Simple rule: use AI to write about things you actually know. It amplifies your judgment. It doesn’t replace it. Without that foundation, it doesn’t matter whether the error comes from a language model or a Pulitzer winner. The result is the same.

Why Right Now Is Actually a Great Time to Be a Good Writer

If there’s any hope here, it lives in three things: voice, judgment, and responsibility. But let’s not kid ourselves — the industry isn’t slowing down out of principle. For most companies, this is a dream scenario: more content, less cost, no headcount. That train isn’t stopping.

The trick just has a shelf life. Reader fatigue is real, and it’s building. Not because audiences are suddenly more sophisticated, but because sameness gets old fast. When everything sounds identical, nothing lands.

Here’s the contradiction I’m sitting with: I said most people can’t tell AI writing from human writing. That’s still true. But the ones flooding the internet with generated content aren’t writers using AI as a tool. They’re finance teams cutting budgets, students gaming deadlines, and writers who confused speed with skill. AI in the wrong hands isn’t a revolution. It’s a factory for mediocrity, running at industrial scale.

Voice is the only thing that doesn’t scale that way. It can’t be averaged. Judgment comes from knowing your subject well enough to catch the AI when it’s bluffing. Responsibility is that moment before you hit publish, when you decide if what you’re about to put out is genuinely yours, or just sounds like it could be.

For the mundane stuff — emails nobody will read twice, bureaucratic summaries, boilerplate product copy — use AI. Seriously, no guilt. That’s what it’s built for.

Everything else, the stuff that sticks, that someone screenshots and sends to a friend, that someone reads again six months later — that still needs a real person behind it.

The Question Nobody Wants to Answer

Not “will AI replace you?” That’s the wrong question.

The real one: did you have something to say before it showed up?

AI is going to displace a lot of people. Mostly in repetitive, process-heavy roles that were never really about thinking in the first place. Past that? We’re not there yet. Because AI imitates. It doesn’t originate. Everything it produces is a remix of something that already existed. World-class imitator. Still just an imitator.

In writing, the question gets personal fast: did you have a voice before ChatGPT? Something to actually say? If not, I don’t think AI changes that. The mediocre writer stays mediocre, just with cleaner sentences. The one who publishes without thinking keeps doing it, just faster.

AI didn’t create the problem.

It just gave it a megaphone.


Since 2019, I’ve been building País Lector, a Spanish-language literary platform that reached 50,000+ monthly readers through SEO and editorial judgment alone. If any of this landed, that’s where the rest of the experiment lives.
