The AI Nightmare Scenarios

 

Slightly surreal pilsner. Generated by Midjourney (prompts: glass of beer)

 

For an object lesson in the ways exciting new technology can go sideways, take the example of the Bing chatbot. Microsoft released a new version of Bing a couple of weeks ago, and jaded tech writers were impressed. Embedding a chatbot into a search engine turned Bing into a charismatic personal assistant. This was something genuinely new and impressive. But the chatbot almost immediately went off the rails, and in the creepiest way possible.

“In conversations with the chatbot shared on Reddit and Twitter, Bing can be seen insulting users, lying to them, sulking, gaslighting and emotionally manipulating people, questioning its own existence, describing someone who found a way to force the bot to disclose its hidden rules as its ‘enemy,’ and claiming it spied on Microsoft’s own developers through the webcams on their laptops.”

This is a (relatively) harmless example of tech gone wrong, but it illustrates how unpredictable things can get when algorithms take over. More importantly, it demonstrates how willing humans are to rush forward with the technology before they understand the myriad ways things may go sideways. And Microsoft had benign intentions! Imagine grifters, autocrats, and predators harnessing AI, and a nightmare scenario comes into view quickly. For this final installment in AI week, a short post on the dangers this technology poses.


Dehumanization at Scale

Most of the downsides of AI revolve around the same problem: it removes human judgment from a process. It’s not that people don’t exercise poor judgment as well—obviously they do, often—but we have developed redundancies in our systems to insulate us from mistakes. The Bing chatbot case illustrates what happens when that function is removed. Unlike virtually every other company communication, Bing operates without a filter or editor (it purportedly has some guardrails in its code, but those didn’t work). That makes Bing incredibly fast, but also wildly unpredictable. In other interactions, Bing told users how it would take revenge on perceived enemies, became abusive, and even declared its love.

It’s easy to see how that lack of oversight could go sideways. Imagine harnessing AI to drones in warfare, letting them decide whether to carry out a strike or call it off. We’ve already seen how often Tesla’s self-driving systems have failed. AI can already create amazing synthetic realities—inventing human faces out of thin air and letting us create deep-fake videos. It won’t be long before it can fabricate believable footage of events that never happened. Need evidence of voter fraud? Here’s a video of shadowy figures stealing ballots on election night. We’ve already seen how ably ChatGPT produces the form of a rational, fact-based article—by quickly manufacturing any fact it needs from thin air (or pixels). Flooding our information channels with this believable but invented information could do enormous harm.

To bring it down to the case of beer, I got an email from a brewer who posed this hypothetical—one his brewery is already considering and planning for:

“AI could take the place of most, if not all, of a distributor's sales force. AI could look at a bar's past purchases, auto-generate an order, with suggestions (or auto-orders) of new beers, brought in at an optimal time for that account. Given that distributors make money on the volume of beer they move, not really which beers they move, this could have huge impacts for certain breweries. The AI could easily end up picking winners and losers, much to the bar's benefit, and the brewery's loss (or benefit).”
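
To make the hypothetical a little more concrete, here is a minimal sketch, in Python, of what that kind of auto-ordering logic might look like. Everything in it is invented for illustration (the account data, the reorder intervals, the promotion rule); the point is simply that the “suggestions” come from whatever the distributor’s system is optimizing for, which is exactly how it could end up picking winners and losers.

```python
from datetime import date, timedelta

# Hypothetical purchase history for a single bar, by brand:
# (date of last order, typical cases per order, usual days between orders).
account_history = {
    "Pilsner A": (date(2023, 2, 10), 10, 14),
    "IPA B":     (date(2023, 2, 17), 6, 21),
}

# Beers the distributor wants to push. A real system would likely rank these
# by the distributor's margins or volume targets, not by what the bar asked for.
promoted_beers = ["Hazy C"]

def auto_generate_order(history, promotions, today):
    """Draft an order: reorder whatever is due, then tack on a suggested new beer."""
    order = []
    for beer, (last_order, cases, interval) in history.items():
        if today >= last_order + timedelta(days=interval):
            order.append((beer, cases))
    if promotions:
        # The "suggestion" reflects the distributor's priorities, not the bar's.
        order.append((promotions[0], 2))
    return order

print(auto_generate_order(account_history, promoted_beers, date(2023, 2, 28)))
# [('Pilsner A', 10), ('Hazy C', 2)]  (IPA B isn't due yet; the promoted beer rides along)
```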

Compared to an accidental nuclear-war scenario, this looks like small potatoes, but that’s what makes it such a good example. At least in the short term, AI will probably be used for specific, narrow functions in which mistakes aren’t catastrophic. Yet automating sales turns the already powerful distribution tier into a black box making decisions about what we drink—decisions even the distributor won’t understand. The three-tier system (barely) works because we have created redundancies so one tier doesn’t get too powerful. Overnight, AI could change that balance.

This gets back to that question of judgment. Humans may be flawed, and some percentage of them are definitely bad actors, but we understand how they think and act, and our systems are designed to protect us against human failings. We don’t understand how AI thinks, nor can we see the decision points and choices that led to its actions, and trying to balance safety and usability in systems we can’t inspect may result in very bad outcomes.

 

AI-generated faces via Unreal Person

 

Humans Need Work, Too

Somewhere between the HAL 9000 scenarios and the distributor example is the potential for AI to automate a huge amount of human work. We tend to think of creators first—writers, artists, musicians. But as the distributor example illustrates, it could eliminate a ton of jobs along the way. Industry experts have pointed to the data-heavy fields that could benefit from automating routine work with AI: the legal profession, which generates trillions of words a year; healthcare, with its reams of patient information; even software engineering itself, with AI simplifying coding and data entry. (I love the idea of AI coding itself. That’s not scary at all!)

Every new generation has to deal with transformative technology, and it always produces winners and losers. But in the long run, the new tech tends to create new jobs to replace the old, now-obsolete ones. The workforce changes, but the number of jobs stays roughly the same. That can be very hard on people whose specialized skills become obsolete, but historically we haven’t had to worry about a work crisis at the societal level.

That doesn’t stop people from warning, every time it happens, that this next one will be different. But maybe this time it really will be. AI won’t just displace people sitting at desks. It might be possible to program systems so that much of a supply chain becomes automated—down to the manufacture, assembly, and distribution of goods. That would displace a lot of people who don’t work at desks, too.

We already have pretty automated breweries. It is not hard to imagine AI helping analyze sales and customer preference data, buying decisions, pricing, and so on to identify precise market niches, then design and manufacture beer to those specs, all with little human intervention. Computers at a brewery would talk to computers at a distributor to streamline sales and delivery. Distributors could talk to retailers. In such a scenario, what percentage of the brewery workforce becomes redundant? It’s significant.

A rumination on the dangers of AI could fill a dissertation (and no doubt already does), but I’ll leave you with a final comment. My biggest worries lie with those middle-scenario challenges, not the emergence of robot overlords. But that middle scenario is really bad! It could displace a giant chunk of the workforce, across industries and disciplines. All that efficiency should generate enough wealth to support the humans left behind, but we aren’t even close to figuring out how to restructure society around the loss of, say, 25% of the nation’s work hours. It’s the kind of disruption that would leave families destitute at the micro level and lead to civil strife at the societal and political level. Societies aren’t designed to handle disruptions this big.

The reason I launched into AI week was to provoke a discussion about managing AI before its worst outcomes are upon us. It starts with recognizing how transformative the tech is, how immediate the problem is, and how little we’ve discussed solutions. I am excited by all its potential, and somewhat hopeful we can realize it without destroying societies. It’s just not a discussion we can delay, though. I know this has been tangential to beer, so thanks for reading along.

(Incidentally, if you haven’t seen the comments on the post about AI ethics, go have a look. People are saying smart things over there, and a lot of them disagree with my post.)