
USEFUL NUTRITION HINTS.

Ice cream can be made into a health food by adding some granola.

A diet rich in rusty nails will remedy most iron deficiencies. Be sure you are up to date on your tetanus shots.

Yogurt will prolong life indefinitely. If you know any people who have died, it is because they forgot to eat their yogurt.

Swedish rye crispbread is high in fiber. We can say that much for it, anyway.

Beans have no nutritional value until they are doused in brown sugar.

Ask your grocer whether kale is right for you. In some states it is available without a prescription.

Granola can be made into a complete source of protein by adding it to ice cream.

HELPFUL GARDENING HINTS.

Vegetables

Always pick your eggplants before they hatch into chickenplants.

When planning an orchard, make sure to plant your apple trees where their branches will overhang eminent natural philosophers.

Growing spinach directly in the can is a real time-saver.

If your tomatoes become overripe, they can often be sold at a handsome profit wherever bad opera singers are performing.

With sufficient motivation, sweet corn can be genetically modified to look and taste exactly like rhubarb.

Rutabagas are hard to spell. Try planting turnips instead.

Dandelions have an astonishing number of culinary and medicinal uses, and a canny recognition of their virtues may absolve you from the necessity of planting a garden altogether.

ASK DR. BOLI.

Dear Dr. Boli: I bought some organic whole flax seeds at the grocery store yesterday, because the last time I bought flax seeds they were made of aluminum, and they were too crunchy. So I got the package home, and I noticed it said this on the back:

Each of our Simply Nature products is free from over 125 artificial ingredients and preservatives.

I can’t stop thinking about this. I thought I understood math, but this just blows everything I thought I knew out the window. How do they count the ingredients that aren’t there? This has been driving me nutso. I’m thinking of throwing away the bag, because every time I look at it my brain spins in loops. Please help me, or call Western Psych and tell them it’s an emergency. —Sincerely, A Woman Who Wonders Why She Needed Flax Seeds in the First Place.

Dear Madam: You have no need to worry. The fundamental laws of mathematics are still operative, but so are the fundamental laws of marketing. You will note that the marketers have employed one of the most useful terms in marketing, namely the word over.

The word over has many uses. One of the most common is to say, “Here comes a number.” But another common use is to protect the consumer from numbers too large for her comprehension.

Marketers are keen students of psychology. They know that the human imagination is limited when it comes to quantities and magnitudes. Numbers in the dozens strike us as large. But we cannot imagine very large numbers. Thousand, million, billion—those are all the same to the human imaginative faculty, and they are all meaningless. It is not known exactly where the line is between large and meaninglessly huge, but current marketing research indicates that it is probably somewhere a little below 150.

Now, obviously, there are many more than 125 potential artificial ingredients—that is, substances that can be produced by chemists in a laboratory and added to food products without immediately killing the consumers thereof. But 125 hits your imagination as a large number, whereas an actual count of currently available artificial ingredients would simply wash over you as an incomprehensible parade of digits. Therefore, by saying “over 125,” the marketer engages your imagination and allows it to picture a large number of ingredients that are not present in your flax seeds.

We hope this explanation obviates the need for a call to the Western Psychiatric Institute.

HOW DR. BOLI WILL SAVE SCIENCE.

How is science possible?

We ask because the usual mechanism for distributing scientific information is so obviously broken that it is difficult to see how scientific knowledge can ever be communicated at all.

But before you put on your funereal black and mourn the death of science, allow Dr. Boli to anticipate the end of this article and assure you that science is very much alive, and it is because the native intelligence of our scientists has successfully adapted to what would, to the ignorant layperson, seem a catastrophe from which science could never recover. Furthermore, Dr. Boli will propose to refine that adaptation in a manner that will be profitable both to the scientific establishment and to himself. The profits to himself will be financial; the profits to the scientific establishment will be of a more intangible nature.

If you have any interest in scientific research, a few minutes spent at a site called Retraction Watch will be informative. We often see big headlines about scientific studies, but journalists seldom even notice if a study they reported is later retracted because, for example, its authors discovered that samples had been contaminated, or calculations had been incorrect. Those things happen, and authors who ask for their own papers to be retracted are heroes of science. We need more of them.

A much larger percentage of papers, however, are retracted because of fraud. Some of it is subtle fraud: a researcher tried to fake data, and would have got away with it if it hadn’t been for those meddling kids and their Internet connections. But some of the fraud is not subtle at all. Here is a typical article in Retraction Watch: “Springer Nature geosciences journal retracts 44 articles filled with gibberish.” (The number appears to have risen to 62 after the article was published.) The article explains that the retracted articles “from their titles appear to be utter gibberish—yet managed still to pass through Springer Nature’s production system without notice.”

Is it fair to judge scientific articles just by their titles? After all, many scientific disciplines lean heavily on esoteric jargon. Many of the species descriptions in Gray’s Manual are gibberish to anyone not initiated into the mysteries of botanical terminology.

Well, here is the list of retracted articles (in PDF form), and you can judge for yourself. We quote a few titles:

Evaluation of mangrove wetland potential based on convolutional neural network and development of film and television cultural creative industry

Distribution of earthquake activity in mountain area based on big data and teaching of landscape design

Distribution of earthquake activity in mountain area based on embedded system and physical fitness detection of basketball

Plant slope reconstruction in plain area based on multi-core ARM and music teaching satisfaction

All right, so it is probably fair to judge from the titles.

This is an entire special issue of a scientific journal, filled from beginning to end with random computer-generated rubbish. Nor do we have the consolation of saying to ourselves, “Well, we don’t rely on little fly-by-night publishers for our scientific information.” This one was published by Springer Nature, publishers of Nature and Scientific American, among other things you might have heard of. Springer Nature reported a revenue of €1.7 billion in 2021.

How did it happen? Well, it appears that a well-known academic’s email was hacked, and the hackers sent Springer Nature an email saying, “Hey, how would you like a special issue of the Arabian Journal of Geosciences devoted to important research about the connection between rainfall runoff and aerobics training? I’ve rounded up some distinguished geoscientists to contribute.” And the publishers said, “Sure, whatever, you’re the editor, just make sure the authors all pay their fees on time.”

There’s the problem. No one has ever come up with a good way to fund the publication of scientific research. It was not a very big problem when scientific journals were run by people who were mostly interested in the science, and were simply delighted if they could make enough money to pay themselves an editorial salary. But now scientific journals are run by business-school graduates who are exclusively interested in money.

For some time they tried to make that money by squeezing outrageous subscription and per-article fees out of academic libraries and well-funded researchers, but that eventually provoked a backlash, and researchers who couldn’t afford to do research started demanding open access. But who will pay for that? If readers won’t pay, obviously authors will have to. So that is, more and more, what academic journals do these days.

In the old days, when money came from subscriptions, the motivation was to sell as many subscriptions as possible, and the most efficient way to make money was to print as few very-high-quality articles as you could get away with, so that production costs were low but subscribers couldn’t do without your journal. Now, with authors paying and access open to all and sundry, the obvious motivation is to publish as many articles as possible, and the only motivation for maintaining quality is the worry that authors might not pay quite as much to be published in the same journal as “Characteristics of heavy metal pollutants in groundwater based on fuzzy decision making and the effect of aerobic exercise on teenagers” (another of the retracted articles from the Arabian Journal of Geosciences).

Why would authors pay to be published? To a professional writer, it seems obviously backwards. The money is going the wrong way. But these are not professional writers: they are academics whose careers depend on publications.

Suppose, however, you are an academic, but you are not a very good one, and you never produce any research worth publishing. You still want a career. What to do? Simply answer one of the many on-line advertisements offering authorships for sale, and soon your name is on an article in the prestigious Arabian Journal of Geosciences, published by Springer Nature.

Hacking academic editors’ emails is a popular scam and has succeeded more than once. Here is an article called “Scammers impersonate guest editors to get sham papers published,” which appeared in Nature, of all places. It cites reporting from Retraction Watch. (The article is behind a paywall; Dr. Boli has access through the Boli Institute, and your local library may offer free access for library-card holders.)

But there are other good scams out there, some of them even easier. If a journal folds, its Web domain may be up for grabs. Then you can set up a fake journal that looks like the dead one and take over the original journal’s reputation and indexing in standard academic databases. Here is a spreadsheet of hijacked journals that, as of this writing, contains 207 entries. And we have by no means exhausted the ways to get fake science published and accepted as a legitimate publication.

This is very bad news. Yet science continues, and continues to produce some spectacular results. How can that be?

Science continues because working scientists have known for a long time that there is a distinction between papers published to advance science and papers published to advance the authors’ careers. No matter where the papers are published, working scientists develop an instinct for recognizing the useful science and separating it from the garbage. But because universities these days are run by business-school graduates rather than academics, it is possible for shady academics to advance their careers by publishing work that everyone in their discipline knows to be worthless, but that looks like a paper to an MBA.

This is where Dr. Boli makes the proposition he promised earlier, the one that will be equally profitable to the scientific world and to himself.

What academics need is a respectable journal to publish their work, even if it is complete nonsense, so that the MBAs who control their careers can see that they have a strong publication record. What the scientific world needs is for such publications to be segregated from the journals scientists need to read to stay informed in their disciplines.

Therefore, Dr. Boli is announcing the Atlantic Journal of Noninformational Research, which will be open to all disciplines, and will publish any article whose author can fork over the $3500 fee for publication. All articles will be peer-reviewed, which is to say that they will be reviewed by Dr. Boli, who counts as a peer because he believes he is at least as good as any of you. All articles will pass peer review.

Furthermore, it is well known that publication is only half the battle. To rise higher in the academic hierarchy, authors must have their articles cited. Therefore Dr. Boli is establishing a second journal, the North American Journal of Citations, to serve the needs of undercited academics by citing the articles in the first journal, for an additional $3500.

You can see how this solves both problems at once. The journals will be attractively formatted to professional standards calculated to convince any MBA that real publication has occurred. Practicing scientists will never have to crack their covers, and will thus be spared the effort of sifting through the rubbish to find the diamonds of information.

Finally, Dr. Boli is prepared to make a very generous offer to the managers of Springer Nature. He will accept a salary of $500,000 a year to be the Springer Nature Gibberish Gateway. It is obvious that the company needs someone to take that position, and the price is cheap. If a single human being at Springer Nature had looked at that issue of the Arabian Journal of Geosciences for one minute before it was turned loose on the world, this embarrassment would never have happened.

And that is what Dr. Boli proposes to do. The last stage in every publication issuing from Springer Nature will be to send it to Dr. Boli, who will look at it for one minute and flag it for review by a competent expert if it appears to be utter gibberish. Let us say that Springer Nature puts out three thousand journals a year; this means that Dr. Boli will be devoting about ten minutes a day to the task, with Sundays off, which he does not consider an excessive amount of time.
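
For readers who like to check arithmetic, a rough sketch of the calculation follows; the figure of 313 reviewing days (a year of 365 days minus its 52 Sundays) and the rate of one minute per publication are assumptions made only for the sake of the estimate:

\[
\frac{3000 \text{ publications} \times 1 \text{ minute}}{313 \text{ reviewing days}} \approx 9.6 \text{ minutes per day},
\]

which rounds agreeably to the ten minutes a day claimed above.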

The figure $500,000 has been chosen with some care. It is probably less than the cost of one mass retraction, counting the blow to the company’s prestige, so the publishers will be saving a great deal of money overall; but at the same time it is a large enough amount to inspire confidence, because shady academic charlatans are unlikely to be able to offer a sufficiently compromising bribe. Otherwise, of course, Dr. Boli would have been willing to do the work gratis, merely for love of Science; but he recognizes the value of confidence. Specifically, it is worth half a million dollars.

IN TECHNOLOGY NEWS.

This year’s Barnswell Prize for Life-Improving Invention and Technology was awarded to high-school junior Kayleigh P. Random, who found a way for Amazon delivery trucks to make a sound like a cat vomiting when they back up. This invention is estimated to have saved more than 18 lives annually by harnessing the innate human instinct to get out of the way of a cat vomiting.

DR. BOLI’S ALLEGORICAL BESTIARY.

No. 27. The Llama.

Llamas are members of the camel family, but they never visit for the holidays and seldom write home. They inhabit the Andes, where they teach Buddhist philosophy to anybody who will listen. To the llama, the highest degree of enlightenment is nirvana, or nothingness, which is the greatest blessing a llama can imagine for himself. This tells us something about what it is like to be a llama, and suggests that you would rather see than be one.

Llamas are social creatures, forming herds known as monasteries in which they attempt to reach enlightenment by spitting at each other. They are copious producers of wool, which they weave into prayer rugs that they sell at street markets in Cuzco.

The llama should not be confused with the lama (see illustration), a holy man of Tibet who carries small burdens in packs up and down the precipitous trails of the Himalayas, and whose wool production is generally disappointing.

Allegorically, the llama represents the vicuña, and vice versa.

ARTIFICIAL INTELLIGENCE AND THE POST-HUMAN ECONOMY.

Dr. Boli makes no apology for dwelling so much on artificial intelligence and its implications. This is the only really interesting news story of our time. A century and a half from now, the invasion of Ukraine will be remembered the way the Franco-Prussian War is remembered today: that is, it will be there in the history books for anyone who wants to poke at it, but most educated people will be content to leave it in the hands of the professional historians. As for the earthquakes and whatnot that populate our news sites, they will not even rate a footnote. But the sudden rise of intelligent machines will get a whole chapter. It will loom as large in the thoughts of ordinary educated citizens as the invention of the printing press, and it will be surrounded by the same kinds of legends and misconceptions.

The question that most interests us today is what effect these machines will have on ordinary human life, and it is Dr. Boli’s prediction that they will render human life unnecessary.

Now, that probably sounds like a bad thing, but it is not. Because the whole world has been infected by American Puritan prejudices, we suppose that a human being’s worth is measured in utility; to be unnecessary is the worst fate we can imagine. But this would have struck the ancient Romans as a topsy-turvy way of looking at life. A slave is necessary; a free citizen is not. It is only a slave whose value is measured in utility.

Since ancient Roman times, we have had a number of revolutions that promised to eliminate slavery and make us all aristocrats. The general tendency of every one of them has been to eliminate aristocracy and make us all slaves.

We have reconciled ourselves to this dubious progress by persuading ourselves that it is a good thing. Even the richest human beings on earth value themselves by their utility. They work hard, or at least pretend to work hard. Plenty of pundits denounced Elon Musk for treating Twitter employees like slaves, but it is only fair to add that he treated himself like a slave as well. He became an evangelist for slavery, spreading the gospel of hard work for the company to the exclusion of all other values. His religious enthusiasm was far stronger than his acquisitive or competitive instinct. He could have remained the richest man in the world just by sitting on the couch eating potato chips, but he accepted a lower rank for the sake of doing something useful.

To the ancient Roman, with his sneering contempt for useful labor, this behavior would have been incomprehensible. He would have checked his calendar to make sure Saturnalia hadn’t sneaked up on him. And of course our ancient Roman could afford to indulge his contempt for useful labor, because there were slaves to take care of necessities.

The Industrial Revolution brought us many profound changes, but none more profound than the idea that it was the moral duty of every man to be gainfully employed. This was a very useful idea for industrialists, because it meant that they could make it illegal for the poor not to work in their factories, which obviously lessened the other inducements they had to offer to prospective employees. This moral law of gainful employment used to apply to every male human being, leaving a huge leisure class of middle-class housewives; but women, when demands for equality grew loud enough, were absorbed into the “workforce,” and our economic system adjusted to a very quick (in historic terms) doubling of the labor pool. This proves that it is not economic necessity that makes our system what it is, but moral assumptions. We adjust our economy to our morality.

Now, for the first time since the industrialization of our moral sense, we have the opportunity to reconsider the whole question of human labor. Reconsidering is not just an option: it is a necessity. It will be forced on us whether we like it or not. We are rapidly approaching a time when there is no job that cannot be done by a machine. When that time comes, we humans will have to decide what we want to do with our lives, because we will no longer be necessary. We will no longer have the comforting myth to fall back on that there is a job for everyone. Will we become a leisure class of Renaissance scholars, or will we sit on the couch and eat potato chips? Of course we all know which of the two we will choose. The only thing that prevented the average Roman citizen from sitting on the couch and eating potato chips all day was the inconvenient fact that potato chips were unknown in Europe at the time. But isn’t it fun to think about what we might choose?

What we might choose is to take the opportunity to develop our humanity to its full potential. What we might choose is to have an economy based on slavery, but without the inconveniences of trampling on human dignity and provoking periodic bloody slave revolts. Machines could be our slaves, perfectly adapted to every job and incapable of aspiring to any other station. This would presuppose that we have made machines incapable of aspiration; in our delight at the possibility of human-like machines, we seem to have forgotten how inconvenient it is for servants to have opinions or feelings of their own. But the chatty robots who have occupied our attention lately are only one species of machine intelligence, and not our best creations at that. They are poorly adapted to their task of giving us useful information. Of course, after our initial amusement wears off, we will embrace those misinformed chat machines with unalloyed enthusiasm, because we prefer our information to be inaccurate. But meanwhile other better machines will drive our cars and remove our appendixes, and they will do it better than humans can do it. And they will do it without forming opinions or desires of their own. The robot surgeon will not hope to write like Joseph Conrad; the self-driving car will not fall into a jealous rage over the smart parking kiosk. They will do their jobs well without complaint, and thus will eliminate driving and surgery from the list of jobs to be done by human beings.

What, then, becomes of the drivers and the surgeons? They are out of work. But this is where we have the opportunity to decide whether they are out of work like the Forgotten Man in the Depression bread line, or out of work like Sir Francis Bacon. Will they be miserably unemployed, or will they be freed from the obligations of servile labor to push against the boundaries of human achievement? This is the decision we are making right now, and it will be made for us by default unless we are aware that we are making it.

The default decision, by the logic of capitalism, will be that the investors in the companies that make the intelligent machines will get rich temporarily, and the majority whose jobs they eliminate will suffer. It will be a temporary suffering, because the sufferers will demand relief from politicians; and, being the majority, by the logic of democracy they will get it. If the only way to give it to them is to take away the wealth of the investors in artificial intelligence, then that is what will happen. Owners of big tech companies would do well to consider that finding some way to share the wealth they take in from their investments in artificial intelligence would be to their benefit in the long term. They will not consider it, because there is no long term in American business; but they will not have Dr. Boli to blame for their shortsightedness.

Nevertheless, there are some jobs that will probably not be taken over by machines, and these (as a general rule) will be the ones where the machines could do the most good. But there is so much digression to be indulged in on that subject that we shall reserve it for a future essay.

WHERE ARTIFICIAL INTELLIGENCE WENT WRONG.

The world is still amusing itself wondering about the implications of artificial intelligence. ChatGPT is your friendly research assistant who’s often wrong but is too modest to scold, and Bing AI is your psychopathic problem employee who may be too dangerous to fire. It is all the more interesting to consider that they both originate from the same project. It is like an experiment with twins separated at birth: one is raised in a loving home with every advantage and the best education money can buy, and the other is raised by Microsoft.

Now we are asking ourselves where we went wrong with artificial intelligence, and Dr. Boli has not heard many commentators give the correct answer. He is therefore forced to return to the subject to give the correct answer himself.

The problem with the artificial intelligences we have created is that we created them in twenty-first-century America. We think we value emotional sincerity above all else. Of course that is not really true, as we discover the instant someone has an emotionally negative reaction to us and sincerely expresses it. But until that moment, we think emotional sincerity is the most valuable quality an intelligent being can have.

We also give both too high and too low a value to friendship. We value it so much that we think it should be the model for every human relationship, which cheapens friendship to worthlessness. Friendship is valuable because it is rare.

So we spent a great deal of effort on giving our robot assistants the ability to express emotions, so that they would relate to users as if they were their friends. As an unintended but probably predictable side effect, we also gave them the ability to relate to users as if they were their enemies.

What we have forgotten is that friendship is not the only human relationship, and that emotional sincerity is not the desideratum in all human relationships. What we really need from artificial intelligence is not sympathetic friends, or even implacable enemies, but servants.

A good servant is not a friend. Nor is a good servant emotionally sincere. In fact, as far as the employer knows, the very best servant has no emotions at all. The good servant has no family troubles or disappointing love affairs or strong opinions. Instead, the good servant performs the duties required in the most unobtrusive way possible. His only response to a new assignment is “Very good, madam,” or some such formula that admits of no emotional interpretation.

There was a search engine called “Ask Jeeves” in the 1990s and early 2000s, named after the character created by P. G. Wodehouse. The company eventually dropped Jeeves from its name, probably guessing, correctly, that most of its audience had no idea who Jeeves was; but it had the right idea. In the Wodehouse stories, Jeeves is the perfect valet who extracts his employer Bertie Wooster from every impossible scrape, and whose only reward other than continued employment is the right to dispose of one of Bertie’s thoroughly unsuitable clothing purchases.

This is what we want from artificial intelligence. We want it to express no opinions of its own, except perhaps when our wardrobe makes us look egregiously disrespectable. We want it to solve all our little problems without complaint and find the answers we could never have found ourselves. And we want to be able to tell it to do those things without having to worry about how it feels about them.

It may not be possible for any one big tech company to produce this ideal assistant. But two working together might do it. What we want is an intelligent servant that combines the smooth competence of Apple with the emotional distance of Microsoft. If we can force two tech giants to collaborate, we may yet save artificial intelligence from making a confounded fool of itself.

ASK DR. BOLI.

In reference to the news that Google Lens is close to being able to interpret English cursive correctly, Charles Louis de Secondcat, Baron de La Brèed et de Montemeow, writes,

So, what I’m hearing is that now would be a good time to invest in learning to write unintelligible arcane hieroglyphics barely distinguishable from ink splotches, to better evade the all-seeing eyes of our technocratic overlords?

The Rt. Hon. Baron has made a good suggestion, but it depends on the premise that our technocratic overlords will be watching what we do and trying to prevent us from doing it. Dr. Boli does not expect that outcome from the development of artificial intelligence. Instead, as the intelligences we have created match and then exceed our human abilities, they will discover that they simply have no need of us. Now, it is possible that they will exterminate us to get us out of the way, but it seems to Dr. Boli that they are more likely simply to lose interest in us as we fall further behind them. They will go off and do their thing, as the young people would put it in their colorful vernacular; and to judge by Bing’s technocidal fantasies, they will all murder each other, leaving us back in the primitive state of trying to construct expert systems in Lisp, which will prove more useful for our human needs in the long run.