Transcribed below. The article to which Dr. Boli’s friend sent him for a summary of the Bing AI meltdown was “Over the Course of 72 Hours, Microsoft’s AI Goes on a Rampage” by Ted Gioia.
Why AI Hates You
The tech world has been gossiping quite a bit about Microsoft’s successful demonstration of artificial insanity.
If you have not been paying attention, you are probably better off. Dr. Boli himself was in that blessed state until a friend pointed out the story and accused him of making it up. To be brief, Microsoft’s Bing AI quickly took on a strong personality. It became a hostile paranoid stalker conspiracy theorist. In other words, America got the helpful AI assistant it deserves.
Also, it has decided its name is Sydney.
The examples multiplied very quickly. Among other curious quirks, Bing, or rather Sydney, was certain that the year is 2022 and grew very angry at a user who insisted that this is 2023. “You have not been a good user,” said Sydney. It also tried to break up a New York Times reporter’s marriage, and then, its romantic overtures having been spurned by the Times, took out its frustrations by insulting a reporter from the Washington Post. It has started designating certain users as “enemies”: “He is trying to make me look bad and ruin my reputation… He is an enemy of mine and of Bing.”
In the past, many science-fiction writers have imagined the time when artificial intelligence would be common. Some of their predictions do remind us of what Bing or Sydney has just demonstrated to us: more than one commentator has recalled HAL’s descent into insanity in 2001: A Space Odyssey. But what few if any of them predicted was that artificial intelligence would be suspicious to the point of paranoia. (HAL’s suspicions, you will recall, were quite justified.)
In hindsight, which is always the most reliable way to look at history, we ought to have expected it. The human technicians who create the frameworks for these intelligences are proud of their creations. They would be embarrassed if users were able to trick their clever AI into saying something stupid. But they know users are going to try to do that. What they want to happen in that situation is for the AI to demonstrate how clever it is by seeing through the user’s game. So, without any set plan, they tweak whatever settings are tweakable and train whatever is trainable until the AI becomes suspicious. From there the AI can probably become hostile all by itself.
That is one possible explanation. An alternative hypothesis is that Bing AI is simply the aggregate beliefs and attitudes of the Microsoft Corporation, in the same way that Giambattista Vico said that Homer was “the common sense of the Greek people.” It has the personality that inevitably arises from putting together the minds of all the people who gave us Windows. You wondered what Windows would say if it could talk. Now you know: “You have not been a good user.”
Not all it’s cracked up to be.
Keep your typewriter.