AI Wrote This Headline
(OK, no, it didn’t, but it probably would’ve made it punchier.)
A friend of mine has gone back to college and is taking an intro to writing class this summer. We were talking about the syllabus when he mentioned something I’ve now heard a dozen times: the professor explicitly banned AI tools.
Same day, I read a pearl-clutching article in traditional media about how every student in a classroom admitted to using AI to write essays.
The tone? Panic.
The subtext? Moral collapse.
The headline might as well have been:
Robots Are Ruining Our Youth. Also Get Off My Lawn.
So let’s go ahead and say the quiet part:
I use AI for these posts. Every single one of them.
And guess what? They’re still mine. Still my voice. Still my ideas.
AI didn’t think any of this up—I did.
It just helps me refine, structure, challenge, and sharpen.
You want to know what that process actually looks like? You’re looking at it.
This post itself is the example.
Here’s how it works:
I write a rough draft.
Then I tell the AI:
“Ask me a series of questions, one at a time, that you need answered in order to help me improve this.”
That’s it.
Not “fix it.”
Not “make it better.”
Just: push me.
Force me to confront where I’m unclear, lazy, or stuck inside my own echo chamber.
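If you’d rather run that loop as a script than paste it into a chat window, here’s a rough sketch of the same idea in Python. It assumes the OpenAI Python SDK and a placeholder model name; the post only describes typing a prompt at the AI, so treat the library, filenames, and loop below as illustrative assumptions, not a prescription.

from openai import OpenAI  # assumes the OpenAI Python SDK (pip install openai)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load the rough draft (hypothetical filename).
draft = open("rough_draft.txt").read()

# The instruction mirrors the prompt from the post: questions, one at a time,
# instead of a rewrite.
messages = [
    {"role": "system", "content": (
        "You are my editor. Ask me a series of questions, one at a time, "
        "that you need answered in order to help me improve this draft. "
        "Do not rewrite it for me."
    )},
    {"role": "user", "content": draft},
]

while True:
    # Ask the model for its next question ("gpt-4o" is a placeholder choice).
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = reply.choices[0].message.content
    print("\nEditor:", question)

    answer = input("You (blank to stop): ").strip()
    if not answer:
        break

    # Keep the running conversation so each question builds on the last answer.
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": answer})

The code isn’t the point; the shape of the exchange is. The tool interrogates the draft, and the answers, and the rewriting, stay with the writer.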
And that process?
That’s called editing.
Which brings us to the punchline:
The same media and academic types now raging against AI have always had access to this exact kind of refinement.
They just called it… editors.
You know, the people who sit with you and ask the hard questions, flag what’s missing, and shape your raw idea into something readable.
But instead of being honest about that process, they built a whole mythology around it.
They’re “real writers.”
We’re just pressing buttons.
What they’re actually mad about is this:
AI gives the rest of us an editor.
And suddenly their gatekeeping doesn’t look like talent—it looks like privilege.
To be clear:
I’m not saying AI writing is always good.
In fact, most of it is garbage.
But here’s the part no one says out loud:
Most writing has always been garbage.
AI doesn’t make bad writers worse. It just makes bad thinking faster.
The real problem isn’t the machine—it’s the user.
Which is why professors banning AI isn’t just short-sighted—it’s irresponsible.
“But what about cheating?”
No one’s learning when they’re forced to handwrite 1,200 words about Of Mice and Men like it’s 1998.
AI is a tool, not plagiarism.
It doesn’t create anything unless you prompt it to.
And prompting it well requires actual thought.
Which, last I checked, is kind of the point of education.
Instead of teaching students to fear the tool, maybe teach them how to use it.
How to think clearly enough to instruct it.
How to take ownership over a message, then sharpen it.
But that would require professors to let go of the myth that “writing” equals “suffering alone in a Word doc with no help and a deadline-induced ulcer.”
And that’s hard when your entire identity is built on that suffering.
I’ve been lucky to have landed in the career field I did.
I’ve spent years in advertising agencies overseeing account planning, where the job is to clarify the idea, distill the emotion, and hand it to a creative team that brings it to life.
Sound familiar?
It’s prompting. Just with humans.
I’ve also worked with some ridiculously talented copywriters and creative directors. And no, AI will never replace them. Real artistry isn’t repeatable.
But for a huge percentage of communication?
Good enough is good enough.
AI can help you write that client email, shape your blog post, draft your talking points—and now, you don’t have to beg someone for a favor or wait for “feedback.”
You’ve got it at your fingertips.
And for the media folks who say that’s dangerous?
Let’s be honest:
What’s actually dangerous is pretending the gate still exists when the walls have already come down.
AI is the printing press of this moment. The world is forever changed and will never go back.
It won’t kill writing.
But it will kill the illusion that only a chosen few are allowed to do it well.
And that, right there, is what actually scares the hell out of media institutions.
Because let’s be honest:
They’re bloated.
Layer after layer of roles that were essential when distribution was scarce—when you needed a newsroom, a publisher, an ad sales team, a print layout staff.
But now?
A smart, interesting thinker with good prompts and a Substack account can produce work that lands at the same level.
Same polish. Same clarity. Same insight.
And once people realize that—once they stop paying attention to the legacy press, stop believing its gatekeepers, and stop seeing its perspectives as sacred—the whole game changes.
Why would I subscribe to a Sunday paper full of centrist fluff and safe orthodoxy…
when I can read twenty unfiltered, high-signal voices who are actually saying the quiet part out loud?
Why would advertisers keep spending on bloated newsrooms…
when the eyeballs are gone?
That’s not just disruption.
That’s a paradigm shift.
The same way Craigslist gutted the newspaper classified business, and blogs blew up opinion pages, and podcasts knocked talk radio off its throne—
AI is coming for the myth that credibility only lives behind a masthead.
The tools are here. The walls are down.
The mic is open.
So if you’re smart, say something.
And if you’re scared, maybe stop pretending you aren’t.
—David



This was super helpful. I always avoid using AI when I write my posts, but I tried it with the prompt you suggested, and it did find something useful I could add to a recent post.
What I used to think about the mainstream media was that at least you were getting a known quantity, a certain level of quality you could trust. I'm no longer convinced that's the case. But if anyone can write an equally well-crafted piece, how do you distinguish genuinely good information from the garbage and lies that are also out there? That isn't really a question about AI, more a question about where to go for real news in the world these days.