
Typewriter: Royal Companion.

To make the text searchable and quotable, it is reproduced below.

Machine intelligence will be with us much sooner than we expect, and in fact is probably already here.

Why does Dr. Boli make that statement?

Because he thinks our definition of “machine intelligence” is foolishly narrow-minded.

What will it take for machines to be intelligent? In almost every discussion of that question, it is assumed that an intelligent machine must be self-aware. Dr. Boli suspects that we make this requirement because it allows us to put off the coming of “machine intelligence” indefinitely. We have not been able to agree on what it means for the globs of matter we describe as “human beings” to be self-aware. When we try to pin down self-awareness, we come up with silly terms like “transcendental unity of apperception,” which batty old Professor Kant presented to us as a solution to the problem, but which really was just a name for “the insoluble problem of how Hume’s ‘bundles of perceptions’ form a united individual.” (Philosophy students! Dr. Boli would love to be proved wrong here.)

When we insist on self-awareness as a criterion for intelligence, then it will always be possible for us to refine our definition of true self-awareness so as to exclude the most apparently intelligent machine, in the same way that an Anglican can insist that one must hold every one of the Thirty-Nine Articles to be a true Christian. Yes, machines can calculate rings around us. But are they self-aware? Yes, machines can drive cars better than we can. But are they self-aware? Yes, we have built a machine programmed to kvetch about its own adolescence in a hundred-thousand-word novel. But is it truly self-aware?

As long as we can cling to the nebulous criterion of self-awareness, we shall always have a way to feel smugly superior to the most capable machine. Our machine may tell us in a perfectly modulated synthesized voice, I am self-aware: but we can always say, “No, you’re not,” in the same condescending tone an Anglican might adopt when addressing a Methodist who claimed to be a Christian.

But it seems to Dr. Boli that the kind of self-awareness we human beings possess is only one narrow form of intelligence; or perhaps we could even say that intelligence and self-awareness, as we understand it in humans, are separate phenomena.

Your computer is self-aware to a certain extent. It knows that there are other entities on the network; it knows that it is this computer and not any other computer; it identifies itself when communicating with other computers. It is, in short, already capable of saying, “I am”; what is to prevent us from saying it is self-aware? Only that we have decided to include something more in our definition of self-awareness. It must say more than “I am”; it must also say “I wish.”
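The limited sort of “I am” described above is trivially demonstrable. A minimal sketch, using only the Python standard library (the particular calls are illustrative, not the only way a machine announces itself):

```python
import socket
import platform

# The machine can name itself: "I am this computer and not any other."
hostname = socket.gethostname()
print(f"I am {hostname}")

# It can even describe what it is running.
print(f"I run {platform.system()} {platform.release()}")
```

When it communicates with other computers, it identifies itself in the same spirit: a browser, for instance, announces its identity in every request it makes. None of this, of course, amounts to saying “I wish.”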

But your computer does not need to say “I wish” in order to function. Nor does any other form of life. To keep going, life needs the desire to do what it takes to perpetuate itself. Higher aspirations are luxuries in which the human species has indulged; and because we are inclined to view ourselves as the only form of intelligent life on earth (but ask your cat what he thinks about that), we see intelligence and aspiration as going together, and perhaps confuse the two.
It seems foolish and egotistical to assume that intelligence must always take the form it has taken in our own case. Alternatively, if we do insist that it must, then it seems that we have excluded the possibility of any other form of intelligent life in the universe. And we have excluded the possibility of machine intelligence until we succeed in creating an exact machine duplicate of the human mind.

So Dr. Boli proposes his own definition of machine intelligence, one that does not require us to answer the thorny question of whether a machine is self-aware or merely simulates self-awareness:

Whenever a machine, in response to some input, makes a decision, and it is not possible for a human being in any reasonable length of time to understand by what process the machine reached that decision, that machine may be called intelligent.

This makes machine intelligence simply a matter of complexity. Granting that the machine may only be executing lines of code, there comes a certain point when there is simply too much code for the human mind to follow. The human brain itself operates by means of an extraordinarily large number of very simple processes; we call ourselves intelligent on the basis of a series of electrical impulses. Dr. Boli sees no reason not to grant machines the same privilege.

And now Dr. Boli will leave you to answer for yourselves the question of whether we have already produced machine intelligence. He will merely suggest an observation. You are reading these words on some form of computer: laptop, desktop, smartphone, tablet, or whatever. How many times have you looked at that device and asked, “Why did it do that?”


  1. Jason Gilbert says:

    I don’t think that Dr. Boli has addressed the issue of a machine being able to draw inferences, create stories, and act in a way that cannot be, somehow, traced to its programming (or at least corrected by a restart). In the example of your post-office clerk: if that machine could have interfaced with time-punch software and the administration database to determine that someone with sufficient privileges was not in the building, viewed emails to note that the manager was unexpectedly out, researched forums for suggestions on a solution, and then developed your described work-around *even though that sequence of steps had never been given to it,* then we can talk about intelligence without self-awareness.

    (You need a forum where we can discuss these ideas without hijacking your comments.)

    Self-driving cars currently can only drive on roads that have been laser-mapped, so that they can compare what they currently “see” to the image already in their database and then follow an algorithm to avoid what needs avoiding. In order for a self-driving car to drive on an entirely new road, it would need intuition to develop a narrative about what to do in an entirely unexpected situation. Intuition and story-telling may simply be a matter of complexity, but they are also the Big Flaw. Humans get through the day inferring stories about everything they see, and they get it wrong as often as they get it right. Machine inference may develop to be wrong less often, but I suspect that the very nature of inferring stories results in error. So, my measure of an “intelligent” machine would be one that could err, and the only cause of the error would be: It seemed like the right thing to do at the time.

  2. Jason Gilbert says:

    At the end of my previous comment, I bracketed the last clause with made-up “shrug” tags <!–("”,””)–> (I am hoping a copious amount of punctuation will allow them to stay visible this time.)

    WordPress was unable to look at those tags and think, “Those are not legit; what could be the reason? Perhaps humor,” then research humor on the internet, decide that that is the correct cause, and leave them in. Instead it just removed them.


    Or it did do that thinking and just wrongly concluded that they were somehow accidental.


    Or it correctly inferred that they were humor, but did not find them funny.

  3. Jason Gilbert says:


  4. RepubAnon says:

    Most machines greatly surpass the level of intelligence displayed by news anchors these days.

  5. Captain DaFt says:

    Now to a certain extent, my computer is already self-aware; it just doesn’t have the means to act on it.

    Hardware information, network information, process manager, what files it possesses and where, and interface information are all just a mouse click away.

    The information is available, the computer uses it every time it boots, or logs on to the net, but it lacks two features that would make it truly self aware.

    Persistence of memory:
    It might seem odd, but this machine with gigabytes of memory suffers total amnesia every time it is shut down. It stores no state of its former activation beyond a few display parameters.
    (I’ve deliberately set it to delete all browser information on shutdown, but even if I’d not, that’s memory of other, not of itself.)

    On restart, it blithely accepts what it finds with no recollection of the last time. If something is different, it just accepts it blindly, adopts the proper new drivers, and runs without comment.

    With persistence of memory and curiosity, it would ask “Why has this changed?”, and want an answer.

    That would be the defining trait of a rudimentary self-aware system, in my opinion: true curiosity.

    A side note: Windows systems actually do retain a memory of their former state, but blindly follow instructions to request help from the user to obtain the drivers needed, or, if certain components are different, to refuse to run until they have contacted Microsoft and been given permission to continue functioning.

    So, a more advanced self-awareness than Linux systems have, but still blind adherence to programming, and still missing the curiosity needed for true self-awareness.

  6. This issue of self-awareness – or consciousness, assuming that’s the same thing – suggests that some sort of feedback mechanism is the basis of the conscious mind.

    But that suggestion would imply that even simple feedback mechanisms like thermostats are conscious. I find that hard to believe, but if a simple feedback mechanism does not imbue a device with consciousness, why should the multiplication of such mechanisms do so?

    We are aware of our own consciousness because we have privileged access to it, but I suggest that we really have no idea where consciousness comes from. Suppose that one constructed a robot whose computer ‘brain’ was completely modeled upon one’s own brain down to the interlinkings of neurons, such that its workings perfectly paralleled one’s own. Would that robot be conscious?

    I must now reveal that I am such a robot. Prove that I have consciousness.

    Jeffery Hodges

    * * *

  7. Jose Zepeda says:

    Self-Awareness from the Perfection of the Void is the Ring which sounds so beautiful.
