The Real Danger of AI

Last month I wrote several articles about why I don’t think AI art is the threat some people make it out to be. Today I’m going to talk about the thing that I do think is scary as hell about AI. Oddly, this doesn’t seem to be something people are too concerned about. I’ve mostly seen it brought up in AI-focused outlets and the occasional alt news source. Even there, Breaking Points is the only place where I’ve seen fairly consistent coverage. Social media seems pretty quiet about it, but since the guys who run social media are lined up to be AI’s biggest beneficiaries, maybe that’s just THE ALGORITHM doing its job well.

There’s a meme that circulates occasionally that says something like “I want AI to answer emails and build spreadsheets so people can make art and write stories.” Like most ideas that frame the world as two diametrically opposed extremes, this is dumb. It’s also inaccurate. I generally assume it gets posted mostly out of tribal solidarity, because if you actually think about the words for five minutes before hitting repost, the dumbness is pretty obvious: AI is capable of doing both. In fact, it’s already doing both, and it’s set to take over our jobs long before it takes over our art.

Unfortunately, this isn’t going to lead to the meme’s magical world where everyone is freed from their job and can just do art, because we live in late-stage capitalism. If we keep acting like that’s a good place to be, AI is going to cause a lot of people to lose their jobs. It’s also going to make a lot of people very rich, but those are mostly going to be people who are already very rich. You know the “wealth gap” that people keep talking about? It’s about to get much bigger, probably downright dystopian. And it’s going to happen a lot faster than most people expect.

How fast? Well, in some sectors it’s already happening. Companies are already laying off (or simply not hiring replacements for) workers in their customer service departments because AI can reliably handle most routine requests for less than it costs to hire a cubicle slave. A few companies jumped the gun a little and fired the whole department, but most of them have since backtracked and realized that they need at least a few humans to act as an escalations department for things the AI can’t figure out. Most of the AI talking heads I’ve seen seem to think this will be the prevailing model in the future: Instead of a department full of people doing a job, the job will be done by AI with a skeleton crew of human “managers” who tell the AI what to do and step in to deal with problems the AI isn’t built for.

When I say “in the future,” I don’t mean that this is something your kids are going to have to deal with or that will happen when you’re hovering over your grave. I’m talking within an election cycle or two. Anthropic CEO Dario Amodei got some headlines recently with his comments about how fast AI is improving. Amodei used Moore’s Law, the observation that the number of transistors on a chip (and with it, processing power) doubles roughly every two years, as a comparison. AI isn’t really limited by processing power, but by (in Amodei’s words) “its ability to maintain coherence across long sequences.” Basically, unlike the self-diagnosed mental case making excuses about failing to deliver on their Kickstarter, AI actually suffers from ADHD. The longer it can stay focused without getting confused, the more powerful and useful it is, which is why AI’s power is often measured by the length of the task it can complete, expressed as the time a human would need to do the same work. A few years ago, AI was good for a couple of minutes. [Insert your own premature ejaculation joke here]. Right now it can last about an hour. Amodei estimates that AI’s ability to maintain coherence is doubling about every 7 months, more than three times the pace of Moore’s Law.
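
To put those doubling rates in perspective, here’s a quick back-of-the-envelope sketch in Python. The one-hour starting point and the 7-month doubling period come straight from the numbers above; the smooth exponential curve is my own illustrative assumption, not Amodei’s math, so treat the outputs as a thought experiment rather than a forecast.

```python
# Back-of-the-envelope projection of the AI "task horizon," using the
# figures quoted above: roughly 1 hour of coherent work today, doubling
# about every 7 months (vs. ~24 months for Moore's Law).
# Purely illustrative; real progress won't be this smooth.

DOUBLING_MONTHS = 7   # Amodei's estimate for coherence doubling
MOORE_MONTHS = 24     # classic Moore's Law doubling period

def horizon_after(months: float, start_hours: float = 1.0) -> float:
    """Task horizon (in human-equivalent hours) after `months` of growth."""
    return start_hours * 2 ** (months / DOUBLING_MONTHS)

if __name__ == "__main__":
    for years in (1, 2, 4):
        print(f"{years} yr: ~{horizon_after(years * 12):.0f} hours")
    print(f"Pace vs. Moore's Law: {MOORE_MONTHS / DOUBLING_MONTHS:.1f}x faster")
```

Run it and the curve speaks for itself: one year takes you from an hour to about 3 hours of sustained work, two years to about 11, and four years past 100. That’s why “within an election cycle or two” isn’t hyperbole.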

Dario also had a few specific predictions that are great for people like him (Anthropic makes the Claude AI), but not so good for those of us who work for a living. There are predictions from other tech oligarchs as well as some current stats that show how fast AI is moving, and none of it’s good for those of us who weren’t born with a trust fund. Here’s a sampling:

    • Amodei predicts that 50% of entry-level white-collar jobs will be eliminated within 5 years.
    • Aaron Bastani, author of Fully Automated Luxury Communism (which I just started reading), believes that AI will launch the third “disruption” in human history. The first two? The adoption of agriculture (i.e., the founding of civilization) and the Industrial Revolution. If you’re not a history buff, let me assure you that both of these events were kind of a big deal.
    • When asked when he thought the first billion-dollar company with only one human employee would appear, Dario Amodei predicted that it will happen next year (2026).
    • 90% of software will be written by AI within the next 6 months. (!!!) Within a year, nearly all code will be written by AI (Amodei again). Currently about 30% of Microsoft and Meta’s code is generated using AI. Google’s a little behind at 25%.
    • Meta is currently in the process of transitioning 90% of its privacy and safety assessments to AI.
    • ChatGPT passed the Turing Test in either 2022 or 2025, depending on who you ask.
    • Bill Gates, who spent a good chunk of his career trying to spend his way out of government regulation, believes that governments need to regulate AI.
[Image: Mark Zuckerberg. Caption: He just wants to be loved. Is that so wrong?]
    • Mark Zuckerberg thinks AI is going to become so powerful that it will allow him to finally make a friend (IMO, this is the hardest prediction to believe).
    • Sam Altman has an even rosier view, but he’s a tech oligarch with enough financial cushioning to remain solvent for centuries in any situation short of the complete breakdown of civilization. If you read the article closely, you’ll notice that the happy world Altman and people like him will inherit is built on massive job losses for average Americans, most of whom don’t have enough savings to cover a major car repair.

If you’re a Libertarian type who thinks that The Invisible Hand will bitch slap companies that go overboard with AI too soon, there’s a big problem with that assumption: We don’t have a free market. The companies most likely to adopt AI quickly and at scale are large corporations that enjoy monopolies or duopolies with a side of regulatory capture. By the time AI trickles down to businesses that have to worry about things like market forces and regulation, it will already be ubiquitous. For more information on how this works, subscribe to Matt Stoller’s Big newsletter or read Cory Doctorow’s recent non-fiction books.

Long story long, AI is likely to cause some big changes in our society. This isn’t something that’s going to happen sometime in the future; it’s something that’s already happening. If we stick with neoliberal business as usual, the gap between the rich and the poor is going to get Dickensian. We need to be talking about it and pushing for change now, because otherwise we’re headed for a world where our current level of societal dysfunction looks like a utopia.

The good news is that we already have a tool at our disposal that I think will at least keep things at our present level of awfulness, and might even allow for improvements. It predates AI, already has a lot of supporters, and has performed above expectations in nearly every proof-of-concept experiment conducted so far. It’s called Universal Basic Income, and I’ll talk about it next time.