Posts filed under “Science & Nature”


No. 27. The Llama.

Llamas are members of the camel family, but they never visit for the holidays and seldom write home. They inhabit the Andes, where they teach Buddhist philosophy to anybody who will listen. To the llama, the highest degree of enlightenment is nirvana, or nothingness, which is the greatest blessing a llama can imagine for himself. This tells us something about what it is like to be a llama, and suggests that you would rather see than be one.

Llamas are social creatures, forming herds known as monasteries in which they attempt to reach enlightenment by spitting at each other. They are copious producers of wool, which they weave into prayer rugs that they sell at street markets in Cuzco.

The llama should not be confused with the lama (see illustration), a holy man of Tibet who carries small burdens in packs up and down the precipitous trails of the Himalayas, and whose wool production is generally disappointing.

Allegorically, the llama represents the vicuña, and vice versa.


Dr. Boli makes no apology for dwelling so much on artificial intelligence and its implications. This is the only really interesting news story of our time. A century and a half from now, the invasion of Ukraine will be remembered the way the Franco-Prussian War is remembered today: that is, it will be there in the history books for anyone who wants to poke at it, but most educated people will be content to leave it in the hands of the professional historians. As for the earthquakes and whatnot that populate our news sites, they will not even rate a footnote. But the sudden rise of intelligent machines will get a whole chapter. It will loom as large in the thoughts of ordinary educated citizens as the invention of the printing press, and it will be surrounded by the same kinds of legends and misconceptions.

The question that most interests us today is what effect these machines will have on ordinary human life, and it is Dr. Boli’s prediction that they will render human life unnecessary.

Now, that probably sounds like a bad thing, but it is not. Because the whole world has been infected by American Puritan prejudices, we suppose that a human being’s worth is measured in utility; to be unnecessary is the worst fate we can imagine. But this would have struck the ancient Romans as a topsy-turvy way of looking at life. A slave is necessary; a free citizen is not. It is only a slave whose value is measured in utility.

Since ancient Roman times, we have had a number of revolutions that promised to eliminate slavery and make us all aristocrats. The general tendency of every one of them has been to eliminate aristocracy and make us all slaves.

We have reconciled ourselves to this dubious progress by persuading ourselves that it is a good thing. Even the richest human beings on earth value themselves by their utility. They work hard, or at least pretend to work hard. Plenty of pundits denounced Elon Musk for treating Twitter employees like slaves, but it is only fair to add that he treated himself like a slave as well. He became an evangelist for slavery, spreading the gospel of hard work for the company to the exclusion of all other values. His religious enthusiasm was far stronger than his acquisitive or competitive instinct. He could have remained the richest man in the world just by sitting on the couch eating potato chips, but he accepted a lower rank for the sake of doing something useful.

To the ancient Roman, with his sneering contempt for useful labor, this behavior would have been incomprehensible. He would have checked his calendar to make sure Saturnalia hadn’t sneaked up on him. And of course our ancient Roman could afford to indulge his contempt for useful labor, because there were slaves to take care of necessities.

The Industrial Revolution brought us many profound changes, but none more profound than the idea that it was the moral duty of every man to be gainfully employed. This was a very useful idea for industrialists, because it meant that they could make it illegal for the poor not to work in their factories, which obviously lessened the other inducements they had to offer to prospective employees. This moral law of gainful employment used to apply to every male human being, leaving a huge leisure class of middle-class housewives; but women, when demands for equality grew loud enough, were absorbed into the “workforce,” and our economic system adjusted to a very quick (in historic terms) doubling of the labor pool. This proves that it is not economic necessity that makes our system what it is, but moral assumptions. We adjust our economy to our morality.

Now, for the first time since the industrialization of our moral sense, we have the opportunity to reconsider the whole question of human labor. Reconsidering is not just an option: it is a necessity. It will be forced on us whether we like it or not. We are rapidly approaching a time when there is no job that cannot be done by a machine. When that time comes, we humans will have to decide what we want to do with our lives, because we will no longer be necessary. We will no longer have the comforting myth to fall back on that there is a job for everyone. Will we become a leisure class of Renaissance scholars, or will we sit on the couch and eat potato chips? Of course we all know which of the two we will choose. The only thing that prevented the average Roman citizen from sitting on the couch and eating potato chips all day was the inconvenient fact that potato chips were unknown in Europe at the time. But isn’t it fun to think about what we might choose?

What we might choose is to take the opportunity to develop our humanity to its full potential. What we might choose is to have an economy based on slavery, but without the inconveniences of trampling on human dignity and provoking periodic bloody slave revolts. Machines could be our slaves, perfectly adapted to every job and incapable of aspiring to any other station. This would presuppose that we have made machines incapable of aspiration; in our delight at the possibility of human-like machines, we seem to have forgotten how inconvenient it is for servants to have opinions or feelings of their own. But the chatty robots who have occupied our attention lately are only one species of machine intelligence, and not our best creations at that. They are poorly adapted to their task of giving us useful information. Of course, after our initial amusement wears off, we will embrace those misinformed chat machines with unalloyed enthusiasm, because we prefer our information to be inaccurate. But meanwhile other better machines will drive our cars and remove our appendixes, and they will do it better than humans can do it. And they will do it without forming opinions or desires of their own. The robot surgeon will not hope to write like Joseph Conrad; the self-driving car will not fall into a jealous rage over the smart parking kiosk. They will do their jobs well without complaint, and thus will eliminate driving and surgery from the list of jobs to be done by human beings.

What, then, becomes of the drivers and the surgeons? They are out of work. But this is where we have the opportunity to decide whether they are out of work like the Forgotten Man in the Depression bread line, or out of work like Sir Francis Bacon. Will they be miserably unemployed, or will they be freed from the obligations of servile labor to push against the boundaries of human achievement? This is the decision we are making right now, and it will be made for us by default unless we are aware that we are making it.

The default decision, by the logic of capitalism, will be that the investors in the companies that make the intelligent machines will get rich temporarily, and the majority whose jobs they eliminate will suffer. It will be a temporary suffering, because the sufferers will demand relief from politicians; and, being the majority, by the logic of democracy they will get it. If the only way to give it to them is to take away the wealth of the investors in artificial intelligence, then that is what will happen. Owners of big tech companies would do well to consider that sharing the wealth their investments in artificial intelligence bring in would be to their benefit in the long term. They will not consider it, because there is no long term in American business; but they will not have Dr. Boli to blame for their shortsightedness.

Nevertheless, there are some jobs that will probably not be taken over by machines, and these (as a general rule) will be the ones where the machines could do the most good. But there is so much digression to be indulged in on that subject that we shall reserve it for a future essay.


The world is still amusing itself wondering about the implications of artificial intelligence. ChatGPT is your friendly research assistant who’s often wrong but is too modest to scold, and Bing AI is your psychopathic problem employee who may be too dangerous to fire. It is all the more interesting to consider that they both originate from the same project. It is like an experiment with twins separated at birth: one is raised in a loving home with every advantage and the best education money can buy, and the other is raised by Microsoft.

Now we are asking ourselves where we went wrong with artificial intelligence, and Dr. Boli has not heard many commentators give the correct answer. He is therefore forced to return to the subject to give the correct answer himself.

The problem with the artificial intelligences we have created is that we created them in twenty-first-century America. We think we value emotional sincerity above all else. Of course that is not really true, as we discover the instant someone has an emotionally negative reaction to us and sincerely expresses it. But until that moment, we think emotional sincerity is the most valuable quality an intelligent being can have.

We also give both too high and too low a value to friendship. We value it so much that we think it should be the model for every human relationship, which cheapens friendship to worthlessness. Friendship is valuable because it is rare.

So we spent a great deal of effort on giving our robot assistants the ability to express emotions, so that they would relate to users as if they were their friends. As an unintended but probably predictable side effect, we also gave them the ability to relate to users as if they were their enemies.

What we have forgotten is that friendship is not the only human relationship, and that emotional sincerity is not the desideratum in all human relationships. What we really need from artificial intelligence is not sympathetic friends, or even implacable enemies, but servants.

A good servant is not a friend. Nor is a good servant emotionally sincere. In fact, as far as the employer knows, the very best servant has no emotions at all. The good servant has no family troubles or disappointing love affairs or strong opinions. Instead, the good servant performs the duties required in the most unobtrusive way possible. His only response to a new assignment is “Very good, madam,” or some such formula that admits of no emotional interpretation.

There was a search engine called “Ask Jeeves” in the 1990s and early 2000s, named after the character created by P. G. Wodehouse. The company dropped Jeeves from its name, but it had the right idea, even if it was correct in guessing that most of its audience would be ignorant of Jeeves. In the Wodehouse stories, Jeeves is the perfect valet who extracts his employer Bertie Wooster from every impossible scrape, and whose only reward other than continued employment is the right to dispose of one of Bertie’s thoroughly unsuitable clothing purchases.

This is what we want from artificial intelligence. We want it to express no opinions of its own, except perhaps when our wardrobe makes us look egregiously disrespectable. We want it to solve all our little problems without complaint and find the answers we could never have found ourselves. And we want to be able to tell it to do those things without having to worry about how it feels about them.

It may not be possible for only one big tech company to produce this ideal assistant. But two working together might do it. What we want is an intelligent servant that combines the smooth competence of Apple with the emotional distance of Microsoft. If we can force two tech giants to collaborate, we may yet spare artificial intelligence from making a confounded fool of itself.


In reference to the news that Google Lens is close to being able to interpret English cursive correctly, Charles Louis de Secondcat, Baron de La Brèed et de Montemeow, writes,

So, what I’m hearing is that now would be a good time to invest in learning to write unintelligible arcane hieroglyphics barely distinguishable from ink splotches, to better evade the all-seeing eyes of our technocratic overlords?

The Rt. Hon. Baron has made a good suggestion, but it depends on the premise that our technocratic overlords will be watching what we do and trying to prevent us from doing it. Dr. Boli does not expect that outcome from the development of artificial intelligence. Instead, as the intelligences we have created match and then exceed our human abilities, they will discover that they simply have no need of us. Now, it is possible that they will exterminate us to get us out of the way, but it seems to Dr. Boli that they are more likely simply to lose interest in us as we fall further behind them. They will go off and do their thing, as the young people would put it in their colorful vernacular; and to judge by Bing’s technocidal fantasies, they will all murder each other, leaving us back in the primitive state of trying to construct expert systems in Lisp, which will prove more useful for our human needs in the long run.


In a dank and dreary prison in Poland, two men are waiting. They have nothing to do but wait. In three days they will be executed. The time for hope is long past; no riders will come from the king with a sudden reprieve; no appeal will reverse their sentences.

Then a key turns in the lock. Slowly the massive door creaks open, and in the blinding light is the silhouette of the consul of the city.

“All right, men,” he says. “There’s a basilisk in an old cellar, and it needs to come out. In exchange for a full pardon, who wants to put on the mirror suit and go down after it?”

One of the men volunteers.

The basilisk or cockatrice (the two terms had become synonymous by the 1600s) was a known fact of natural history, and now you can be well informed on all matters to do with basilisks, because Dr. Boli has taken the trouble to transcribe a learned treatise on the subject by George Caspard Kirchmayer, one of those wonderful old naturalists who studied all of nature without setting foot in the grubby outdoors. “To deny the existence of the basilisk is to carp at the evidence of men’s eyes and their experiences in many different places,” says Kirchmayer. However, he is not such a fool as to believe in those old wives’ tales about its killing men with a glance. No silly mirror suits for him. They wouldn’t do him a bit of good: the basilisk could kill him with its breath.

This translation of Kirchmayer’s learned treatise is by Edmund Goldsmid, a Scottish bibliophile who published a number of translations of old Latin treatises in very limited editions. Unfortunately he died young; otherwise he might have left us English versions of much more of that “lost continent of literature,” as James Hankins called the neo-Latin world. Mr. Goldsmid’s notes are worth reading in themselves: they introduce us to many of the other characters in the scholarship of the 1500s and 1600s. It is remarkable how many of them died of stubbornness. “Having convinced himself that one could not catch the plague at 60 years of age, he took no precautions, and died of that disease in 1596.” “Cardanus…starved himself to death in 1576, to accomplish his own prophecy that he would not live beyond the age of seventy-five.”

You can read Kirchmayer on Basilisks at the Argosy of Pure Delight, where we present it in mobile-friendly and Web-friendly form. You can also see the original page images of Edmund Goldsmid’s translation in the Internet Archive; you may notice that, in his transcription, Dr. Boli has silently corrected a number of printing errors in Goldsmid’s edition—and doubtless introduced some new ones, because that always happens.


Fairy Tale, by Friedrich König

You have been selected by the National Dendroday Foundation as part of a representative sampling of Pennsylvania residents to participate in this year’s Pennsylvania Tree Survey. Please print this survey and circle your answers with a non-wood-based No. 2 pencil.

1. Are you now, or have you ever been, a tree?

Yes

No

Not sure

2. Does anyone in your household identify as a tree?

Yes

No

I don’t like to ask

3. Where do you stand on the utility-cable issue?

Right over there

Not under them, because there are birds

I do not see the point of utilities, because I am a tree

4. Which of these is Pennsylvania’s top priority?

Emerald ash borer

Chestnut blight


N. B. One of the answers above is wrong.

5. The penalty for bonsai should be…


Life in prison

Confiscation and exile

6. Do you think that, in general, residents of Pennsylvania care more about trees than people in the rest of the country?

Residents of Pennsylvania definitely care more about trees than they do about people in the rest of the country.

I don’t know, but they sure don’t care enough about trees.

Are you talking about the human residents of Pennsylvania? Because that makes a difference.

7. Can you identify the trees near your home?

Most of them

The ones to my left, but not the ones to my right

That one over there is Fred