Typewriter: Royal Companion.
To make the text searchable and quotable, it is reproduced below.
Machine intelligence will be with us much sooner than we expect, and in fact is probably already here.
Why does Dr. Boli make that statement?
Because he thinks our definition of “machine intelligence” is foolishly narrow-minded.
What will it take for machines to be intelligent? In almost every discussion of that question, it is assumed that an intelligent machine must be self-aware. Dr. Boli suspects that we make this requirement because it allows us to put off the coming of “machine intelligence” indefinitely. We have not been able to agree on what it means for the globs of matter we describe as “human beings” to be self-aware. When we try to pin down self-awareness, we come up with silly terms like “transcendental unity of apperception,” which batty old Professor Kant presented to us as a solution to the problem, but which really was just a name for “the insoluble problem of how Hume’s ‘bundles of perceptions’ form a united individual.” (Philosophy students! Dr. Boli would love to be proved wrong here.)
When we insist on self-awareness as a criterion for intelligence, then it will always be possible for us to refine our definition of true self-awareness so as to exclude the most apparently intelligent machine, in the same way that an Anglican can insist that one must hold every one of the Thirty-Nine Articles to be a true Christian. Yes, machines can calculate rings around us. But are they self-aware? Yes, machines can drive cars better than we can. But are they self-aware? Yes, we have built a machine programmed to kvetch about its own adolescence in a hundred-thousand-word novel. But is it truly self-aware?
As long as we can cling to the nebulous criterion of self-awareness, we shall always have a way to feel smugly superior to the most capable machine. Our machine may tell us in a perfectly modulated synthesized voice, “I am self-aware”; but we can always say, “No, you’re not,” in the same condescending tone an Anglican might adopt when addressing a Methodist who claimed to be a Christian.
But it seems to Dr. Boli that the kind of self-awareness we human beings possess is only one narrow form of intelligence; or perhaps we could even say that intelligence and self-awareness, as we understand it in humans, are separate phenomena.
Your computer is self-aware to a certain extent. It knows that there are other entities on the network; it knows that 127.0.0.1 is this computer and not any other computer; it identifies itself when communicating with other computers. It is, in short, already capable of saying, “I am”; what is to prevent us from saying it is self-aware? Only that we have decided to include something more in our definition of self-awareness. It must say more than “I am”; it must also say “I wish.”
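The loopback convention mentioned above can be made to speak for itself in a few lines of Python (a sketch only; the hostname printed will naturally vary from machine to machine):

```python
import socket

# The machine's own name for itself.
hostname = socket.gethostname()

# "localhost" conventionally resolves to 127.0.0.1, the loopback
# address that always means "this computer and not any other."
loopback = socket.gethostbyname("localhost")

print(f"I am {hostname}")
print(f"My loopback address is {loopback}")
```

Run it on any networked computer and the machine will, in its limited way, say “I am” — which is precisely as far as its self-awareness extends.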
But your computer does not need to say “I wish” in order to function. Nor does any other form of life. To keep going, life needs only the drive to do what it takes to perpetuate itself. Higher aspirations are luxuries in which the human species has indulged; and because we are inclined to view ourselves as the only form of intelligent life on earth (but ask your cat what he thinks about that), we see intelligence and aspiration as going together, and perhaps confuse the two.
It seems foolish and egotistical to assume that intelligence must always take the form it has taken in our own case. Alternatively, if we do insist that it must, then it seems that we have excluded the possibility of any other form of intelligent life in the universe. And we have excluded the possibility of machine intelligence until we succeed in creating an exact machine duplicate of the human mind.
So Dr. Boli proposes his own definition of machine intelligence, one that does not require us to answer the thorny question of whether a machine is self-aware or merely simulates self-awareness:
Whenever a machine, in response to some input, makes a decision, and it is not possible for a human being in any reasonable length of time to understand by what process the machine reached that decision, that machine may be called intelligent.
This makes machine intelligence simply a matter of complexity. Granting that the machine may only be executing lines of code, there comes a certain point when there is simply too much code for the human mind to follow. The human brain itself operates by means of an extraordinarily large number of very simple processes; we call ourselves intelligent on the basis of a series of electrical impulses. Dr. Boli sees no reason not to grant machines the same privilege.
And now Dr. Boli will leave you to answer for yourselves the question of whether we have already produced machine intelligence. He will merely offer an observation. You are reading these words on some form of computer: laptop, desktop, smartphone, tablet, or whatever. How many times have you looked at that device and asked, “Why did it do that?”