I've seen a subtle shift in the last few weeks, accelerated by what I'm calling the Ghibli Effect. For all the cheerleading, massive investment, and business plans built around artificial intelligence, we've hit a tipping point where the term "AI" has more negative connotations than positive ones.
To be sure, there have always been staunch anti-AI voices, and there will always be staunch pro-AI voices to counter them. We'll continue to have short-turnaround news cycles about trends like everyone making AI mint-on-card action figures of themselves, but the great mass of us in the middle -- the silent majority -- have experienced a vibe shift, to the point where it's now a potential faux pas to invoke AI in casual conversation.
And there's a good reason for that. The most obvious examples of AI that most people outside tech circles see are machine-made art, music, and videos. These might have been curious novelties a year ago, but as the technology has improved, it's generating a different kind of response.
We've seen wholesale outrage at OpenAI image generator examples that copy the feel and style of Studio Ghibli art, and social media folks dunking on posters who show off movie clips screenshotted and run through AI filters, with the ludicrous claim that they're recreating multi-million-dollar movies for pennies.
The mainstream audience is catching on to what enthusiast audiences have been doing for the past couple of years -- throwing massive shade on anyone using AI art in a video game, tabletop game, or RPG handbook.
While the news media has been going through its own AI armageddon over the past few years, the outcry against that has mostly come from the journalists themselves. It's hard to glance at words on a page and immediately get the ick from them, but AI art and music are much easier to spot on sight, and they trigger a kind of fight-or-flight response.
Maybe that's because AI art, especially generated photos and videos of humans, mostly lives in that uncanny valley where it looks pretty good...but there's still something about it that's off and that makes us viscerally uncomfortable. It makes me think about the slightly nutty theory that we're so affected by uncanny valley CGI and AI art because our primitive ancestors learned to fear things that looked almost human, but not quite.
I've personally been through the AI wringer a couple of times at my old publications, seeing poor-quality AI slop content being sold to the public with predictably poor outcomes. Now, once-thriving publications are being sold off for scrap, their URLs used to post AI junk, which is perhaps the only thing worse than the cheap human-produced "aggregation-style" news so many publications have been churning out for years.
Quartz, the once-buzzy biz publication I briefly shared a roof with during my G/O days, is the latest example. Almost its entire staff has been fired in yet another smaller-and-smaller-lifeboats acquisition, and the site will reportedly lean on AI content now.
Here are a couple of good reads from this past week, both related to Quartz.
The first is What was Quartz? from that site's founder, Zach Seward, about the long history of the site, from The Atlantic to the G/O meat grinder to its current zombie status.
The second is an interview with Seward about his current gig at the NY Times as its Editorial Director of Artificial Intelligence Initiatives, written by Mark Yarm (who was one of my editors back in the newsstand magazine freelancing days).
On a personal level, I think tinkering with AI, generative or otherwise, is fine. I had fun chasing the latest trend, uploading photos to make a boxed action figure version of myself. And I have my AI clone, which can appear in talking head videos for me, but that's mostly because I think of it as an amusingly unhinged nod to Max Headroom, and it's proudly labeled as "AI Dan." Or the AI action movie poster I made for my kid's 13th birthday, or an AI dungeon tile randomizer I made to test a new game concept I'm working on. The list goes on.
But personal use is far different from industry-wide replacement of humans: tossing out your entire creative department and hoping AI can fill their shoes, using AI art for commercial book covers, or replacing human support agents with AI bots.
AI search, however, is another subject entirely, and while it's vulnerable to the same SEO-stuffers and charlatans that have made nearly every Google search result an exercise in frustration, it really is quickly becoming the new search default for a lot of people -- not so much because it's better (although I'd argue in a lot of cases it is), but because it's a less tortuous user experience than the default Google one right now. Of course, maybe that wouldn't have happened if we hadn't let one company gain a monopolistic hold over web search, but that's a story for another time.
> Get my book here: The Tetris Effect
> Threads: threads.net/@danackerman
> Bsky: danackerman.bsky.social
> LinkedIn: linkedin.com/in/danackerman
> TikTok: tiktok.com/@danacknyc
> YouTube: youtube.com/danackerman
> IG: instagram.com/danack