B&W 35mm photo of a graffiti covered wall with a sign reading "DON'T EVEN THINK OF PARKING HERE"

I’m an “A.i.” abolitionist


No consumer-facing LLMs or generative “A.i.” in anything.

Sure, machine learning in science and a few other limited applications is fine.

But consumer-facing LLMs and generative “A.i.” that are based on theft, push disinfo, enable fraud, destroy the environment, jack up consumer prices, foster psychosis, encourage suicides, degrade cognitive function, and can literally go Nazi after being manipulated by billionaires?

No.

There are fuzzy lines here that I haven’t fully considered, I’m sure. Am I okay with using machine learning to remove powerlines and jet trails from video footage in period movies? I’m pretty sure!

Am I okay with using generative “A.i.” to build entire scenes based on stolen footage? No!

I’ll cross various bridges when I come to them and amend my stance as necessary. But for now, my automatic answer when seeing any new LLM or generative “A.i.” implementation in any consumer product or government service or educational setting is NO.

A HUGE part of the problem is that the “A.i.” industry markets this tech as a flawless solution for everything – in search, for example. The marketing implies that “A.i.” gives true answers when “truth” has nothing to do with its process and it consistently delivers bullshit. That’s INCREDIBLY dangerous.

If you make knives and market them as knives, that’s cool! If you make knives and market them as plush toys, we have a problem.

Another huge problem is that the massive companies pushing “A.i.” cannot be trusted on a moral and ethical level. They are run by billionaires who can manipulate the products to push disinfo or outright bigotry. Put their “A.i.” in everything, and everything can become their megaphone for garbage. Trump knows this – that’s why he’s not only trying to block states from regulating “A.i.,” he’s trying to push companies to make “A.i.” more racist.

Also please note that when “A.i.”-pushing billionaires talk about the need to regulate “A.i.,” they’re almost always trying to redirect people away from the actual, current harms of their evil products by raising the specter of some future, hostile, Skynet-style sentient “A.i.”

Despite writing comics and science fiction, I am not concerned about “A.i.” becoming sentient and destroying humanity. This technology remains a mechanical variation of fancy autocomplete, a stochastic parrot, to use the term of Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell; it’s not becoming sentient.

But I AM concerned about the morality and ethics of the people who control it.

Finally (for now), the unaccountable “move fast and break things” business model of the companies pushing this trash is morally and ethically bankrupt and enriches billionaires at the expense of everyday people.

“Break things” means “break people,” and I don’t think that’s acceptable.

I’ll try to update this post with more links and possibly more nuance over time. For now, I recommend everyone try to excise consumer-facing “A.i.” from their lives and organizations as much as possible. It’s not “inevitable” or “here to stay” unless we let it.

More links:

https://www.rollingstone.com/culture/culture-features/ai-chatbot-journal-research-fake-citations-1235485484

https://www.forbes.com/sites/zakdoffman/2025/12/15/how-your-private-chatgpt-and-gemini-chats-are-sold-for-profit

https://www.texastribune.org/2025/12/15/texas-universities-ai-course-audits/

https://reactormag.com/new-kindle-feature-ai-answer-questions-books-authors/

https://www.hcn.org/articles/the-big-data-center-buildup/

https://www.ru.nl/en/research/research-news/opposing-the-inevitability-of-ai-at-universities-is-possible-and-necessary

https://www.wsj.com/tech/personal-tech/why-every-family-needs-a-code-word-e077ab76?st=APjzvi&reflink=desktopwebshare_permalink

https://spitfirenews.com/p/ai-generated-influencer-scandals-are-here

https://www.theguardian.com/us-news/2025/oct/24/baltimore-student-ai-gun-detection-system-doritos

https://www.theguardian.com/technology/2025/oct/12/meta-ai-adviser-robby-starbuck

https://www.theverge.com/news/798388/openai-chatgpt-political-bias-eval

https://www.bloomberg.com/graphics/2025-ai-data-centers-electricity-prices

https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html

https://futurism.com/chatgpt-marriages-divorces

https://www.washingtonpost.com/technology/2025/09/16/character-ai-suicide-lawsuit-new-juliana

https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?st=Hp4Ajw&reflink=desktopwebshare_permalink

https://www.yahoo.com/news/articles/doctors-catch-cancer-diagnosing-ai-144500810.html

https://www.nature.com/articles/s41746-025-01746-4

https://abcnews.go.com/US/wireStory/boys-school-shared-ai-generated-nude-images-after-128611202

https://futurism.com/artificial-intelligence/openai-chatgpt-sponsored-ads

A version of this post originally appeared as a Bluesky thread on July 8, 2025.

B&W photo of the side of a garbage tip with a sign reading "Do not drop your DOG SHIT in the garbage cans. Take it home with you. THANK YOU."