I was already writing a version of this column when the news of a massive 21% staff cut at Business Insider hit. The core of it was this nagging truth: the rapid decline of traditional search engine traffic exposes the fallacy that writers, editors, or any publishing/news staff ever had any real control over their own traffic successes or failures.
Business Insider's internal memo, as widely reported, said it all:
"We’re at the start of a major shift in how people find and consume information, which is driving ongoing volatility in traffic and distribution for all publishers...We must be structured to endure extreme traffic drops outside of our control."
That key phrase, "outside of our control," jumped out at me.
AI may have focused the lens for us, but it's always been true that the inner workings of driving internet traffic, through Google in particular, are a mysterious black box. This gave rise to SEO experts who claimed to be able to read those particular tea leaves.
My own clashes with this druidic SEO class go back decades.
The biggest lie the digital media generation was ever sold was that we could have a direct cause-and-effect influence on traffic (and by extension, the financial success of our publications). It’s a familiar refrain I've heard countless times: an article underperformed, not because of its intrinsic merit or lack thereof, but because we failed to:
Put a particular word in the headline in a specific spot
Add enough links to the first paragraph
Put a specific version of a product name in the headline
Put the year in the headline
Make the headline more exciting
Make the headline less exciting
Use the same headline structure as everyone else
Use a different headline structure than everyone else
If only we had published five minutes earlier...or five minutes later
The list goes on. One of my favorite examples is when Apple would put out a couple of nearly identical laptops at the same time. If I combined the two models into one initial article, the SEO team would tell us Google didn't pick it up (or give us a high search engine ranking) because we wrote one article about both laptops instead of two separate ones.
The next cycle, we'd break them out into two articles (for example, one for a new 13-inch MacBook Air and one for a new 13-inch MacBook Pro), only to have SEO tell us we should have combined them into one article, and *that* was why the article traffic wasn't great and Google wasn't giving it good placement.
It was a masterclass in unfalsifiable advice…and a perfect example of the Gell-Mann Amnesia effect. Whatever we did to maximize the success of any particular article in the hyper-competitive world of Google search, Google News, and Google Discover traffic-farming, the Monday morning quarterbacks would always say we should have done something different.
The reality, having worked in digital news publishing since the dotcom 1.0 era, is that it's always been mostly luck...and a good site structure that isn't full of ancient junk code.
So, when a publication like Business Insider hits a rough traffic patch and layoffs follow, it's depressingly predictable who bears the brunt. The frontline writers and editors carry most of the layoff burden, rather than the SEO experts who specialize in advice without accountability, or the executives who are forever pivoting from one get-click-quick scheme to another.
The Night Chicago Died
The other big death-of-media story happening at the same time was the AI-generated summer reading list that appeared in actual print papers like the Chicago Sun-Times.
If you missed the controversy, several regional papers ran a syndicated summer content supplement provided by King Features, a division of Hearst. Lots of big-name legacy media companies in the mix here, but apparently not a single copyeditor or fact-checker among them, because the full-page "Summer Reading List" contained numerous fake books invented by AI, often credited to real authors.
The chain of cascading failures here is almost impressive.
The author, a King/Hearst freelancer, used AI to write all or most of the article, without even bothering to check whether the recommended books existed.
[This reminds me of a years-ago interaction with an executive, who after seeing a longform video fireside chat I had with a tremendously famous, well-regarded author, said it was boring watching people talk about books and that we should do a 60-second lightning-round-style video with the author instead. My under-my-breath comment about this exec at the time: "Clearly not a reader."]
Then it appears no one at King Features or Hearst bothered to read the article. No producer, print layout designer, etc. noticed anything amiss. Even someone with a passing familiarity with current mainstream authors should have clocked at least a couple of these fakes...
Finally, no one at the Chicago Sun-Times (or the Philadelphia Inquirer or the handful of other print papers that ran the supplement) read or noticed what they were running in their own papers.
The Sun-Times' CEO offered a lot of excuses about how the print supplement was actually the territory of the Circulation group, but it certainly wasn't labeled as such, and deliberately so.
If you're going to use your legacy media cred to bundle cheap syndicated content -- and then charge print subscribers an extra $3 (!) for this zero-effort supplement -- what runs under your banner in your paper is ultimately your responsibility.
Who guards the guardrails?
I bring all this up because it contrasts so strongly with my early experiences in media, especially compared with how digital publishing mostly operates now.
Back in the mid-to-late 2000s, any article I wrote or edited would typically go through copyediting, then to professional web producers, followed by at least two rounds of previews circulated among an editorial mailing list for editorial feedback. Yes, it took longer, but the guardrails were there; very little unvetted material saw the light of day.
Fast forward to the 2010s and up to the present, and most writers and editors are a one-person band: writing, producing, and hitting 'publish' with minimal, if any, oversight. A post-publication 'backread' by a copyeditor is often the best one can hope for.
So why does inaccurate AI slop get published, accidentally or not? Because there are few-to-no eyes on it before it goes live. To paraphrase the essential problem with AI-generated written content: "Why should I bother to read what no one could be bothered to write?" Various versions of this takedown date back to at least 2023.
Despite all this, I remain cautiously net-positive on AI as a useful tool for all sorts of endeavors, even creative ones. I find AI useful as a note-taking, schedule-keeping personal assistant; for finding key moments in long transcripts or technical documents; for answering search queries delivered using natural language; for making suggestions on vegetarian food options along every step of a travel itinerary; or even for tightening up headlines or fitting longer thoughts into the limited word count of social media posts.
The only thing close to as annoying as pro-AI acolytes (many of whom were probably dishing similar hype about NFT art several years ago) is the never-AI purists, who insist any use of any public AI model is tantamount to theft because of the vast oceans of material used to train these models. That's a topic to deep-dive into another time, as it's a complex, nuanced one.
I'm more concerned right now with the loss of the eyeball pipeline from search engines and social media sites to news publishers. That'll kill more jobs than AI-powered robo-journalists in both the short and long run.
In truth, AI isn't going to directly take most of our jobs, because it's generally more expensive to run and does a worse job than the human it might replace — although I've seen several media and tech C-suite execs joke about AI taking over their jobs. Now that's an idea I might endorse...
I haven’t fired my AI assistant…yet
A few posts ago, I talked about my experiments with building a custom AI personal assistant…and how it wasn’t going great. The gang at CBS Mornings had me on to talk about the subject, and the slow progress I’m making in training my virtual assistant.
And yes, some new gadgets and games
I’ve been playing around with a handful of new devices lately, including a new vertical ergonomic mouse from Razer (which I used to play Doom: The Dark Ages, of course), and the new SteamOS version of the Lenovo Legion Go S, which might be the best PC-based handheld to date. But more on those next time!
> Get my book here: The Tetris Effect
> Threads: threads.net/@danackerman
> Bsky: danackerman.bsky.social
> LinkedIn: linkedin.com/in/danackerman
> TikTok: tiktok.com/@danacknyc
> YouTube: youtube.com/danackerman
> IG: instagram.com/danack
> MC News: microcenter.news