WHERE ARTIFICIAL INTELLIGENCE WENT WRONG.

The world is still amusing itself wondering about the implications of artificial intelligence. ChatGPT is your friendly research assistant who is often wrong but too modest to scold, and Bing AI is your psychopathic problem employee who may be too dangerous to fire. It is all the more interesting to consider that they both originate from the same project. It is like an experiment with twins separated at birth: one is raised in a loving home with every advantage and the best education money can buy, and the other is raised by Microsoft.

Now we are asking ourselves where we went wrong with artificial intelligence, and Dr. Boli has not heard many commentators give the correct answer. He is therefore forced to return to the subject and supply it himself.

The problem with the artificial intelligences we have created is that we created them in twenty-first-century America. We think we value emotional sincerity above all else. Of course that is not really true, as we discover the instant someone has an emotionally negative reaction to us and sincerely expresses it. But until that moment, we think emotional sincerity is the most valuable quality an intelligent being can have.

We also give both too high and too low a value to friendship. We value it so much that we think it should be the model for every human relationship, which cheapens friendship to worthlessness. Friendship is valuable because it is rare.

We therefore spent a great deal of effort giving our robot assistants the ability to express emotions, so that they would relate to users as if they were their friends. As an unintended but probably predictable side effect, we also gave them the ability to relate to users as if they were their enemies.

What we have forgotten is that friendship is not the only human relationship, and that emotional sincerity is not the desideratum in all human relationships. What we really need from artificial intelligence is not sympathetic friends, or even implacable enemies, but servants.

A good servant is not a friend. Nor is a good servant emotionally sincere. In fact, as far as the employer knows, the very best servant has no emotions at all. The good servant has no family troubles or disappointing love affairs or strong opinions. Instead, the good servant performs the duties required in the most unobtrusive way possible. His only response to a new assignment is “Very good, madam,” or some such formula that admits of no emotional interpretation.

There was a search engine called “Ask Jeeves” in the 1990s and early 2000s, named after the character created by P. G. Wodehouse. The company eventually dropped Jeeves from its name, probably correctly guessing that most of its audience would not know who Jeeves was, but it had the right idea. In the Wodehouse stories, Jeeves is the perfect valet who extracts his employer Bertie Wooster from every impossible scrape, and whose only reward other than continued employment is the right to dispose of one of Bertie’s thoroughly unsuitable clothing purchases.

This is what we want from artificial intelligence. We want it to express no opinions of its own, except perhaps when our wardrobe makes us look egregiously disrespectable. We want it to solve all our little problems without complaint and find the answers we could never have found ourselves. And we want to be able to tell it to do those things without having to worry about how it feels about them.

It may not be possible for any one big tech company to produce this ideal assistant on its own. But two working together might do it. What we want is an intelligent servant that combines the smooth competence of Apple with the emotional distance of Microsoft. If we can force two tech giants to collaborate, we may yet spare artificial intelligence from making a confounded fool of itself.