Examples of book reviews I used to write for American computer magazines in the 1980s.

Adrian Berry is the Science Correspondent of the Daily Telegraph in London and the author of several popular science books including The Next 10,000 Years. In his latest book he is unabashedly optimistic about the future of AI, and dogmatically so: it seems to be his calling to tell us how great the future is going to be. This kind of hopeful outlook usually appeals to me more than do prophecies of doom, but here it doesn't go very deep.

Berry knows his readers well. He is adept at explaining high-school science concepts in elementary school terms, and he does so at length. Since he has used a personal computer for several years he knows there's really nothing mysterious about them, and his explanations of how they work are just right. It's only when he approaches controversial issues that he gets into trouble.

For one thing, you'll learn very little about intelligence from this book. He retells the amusing story, for instance, of how Joseph Weizenbaum scared himself silly by writing "Eliza", the program that pretends to be a psychiatrist. But aside from showing how inane her responses can be, he says only that more can be expected of her soon. As to the possibility of using computers to help run our lives, he states flatly that running governments is less complicated than playing chess and will be relegated to machines in a hundred years. I wish this were true, I suppose, but wishful thinking doesn't stand up well in my court.

By the end of this short work, Berry seems to fatigue, recounting some of the themes from his earlier works. These include the fashionable belief that people will soon be living in trans-galactic spaceships, and that we'll all die off since intelligent machines are more "fit to survive". It's not my particular fantasy, but it's certainly imaginative.

For a science writer, he makes the surprising mistake of saying that if the universe is closed it must implode within 50 billion years. And his understanding of thermodynamics is shallow: he thinks the eventual heat death of an open universe can be avoided by happy little robots that fly around in empty space carrying their own thermos bottles.

One moment he says that the far future is so distant that our feeble brains aren't qualified to make predictions about it, and in the next he's certain that the immeasurably superior intelligences of that day will consider survival their number one priority. Although he quotes Shakespeare often, he seems unaware of what the poet taught: that even such lowly life forms as ourselves should value the quality of life over mere longevity.

For the most part, however, he concentrates on the technical curiosities he knows so well, and doesn't raise issues which are at the cutting edges of scientific thought. He's great at showing how large some really big numbers can get.

The Super-Intelligent Machine, while not itself a product of super-intelligence, is not the mere musings of a machine either. It is excellent pop science by a lucid expositor, and makes for a fine weekend read in the hammock by the shade tree. Tennis, anyone?

My artificially intelligent spelling checker just told me that "Shakespeare" isn't a word and that I must have meant "Salespeople"!

It is said that a medieval pope, given a tour of a new cathedral, turned to the master builder and pronounced his judgement in three words: "Awful, pompous, and artificial." It was the nicest thing he could have said: the work of the builders inspired awe, embodied pomp, and reflected great artistry.

Nowadays we prefer awe in the form of a Steven Spielberg movie, and the meaning of "artificial" has taken on shades of "imitation" and even "fake". Apart from bio-engineering, nowhere is the modern ambivalence towards our own creations more apparent than in the debate over artificial intelligence. And never has this ambivalence been better documented than in Frank Rose's new book.

In a style similar to Tracy Kidder's The Soul of a New Machine, Rose takes the reader into the heart of an AI lab on the same Berkeley campus which saw the Free Speech Movement rise and fall twenty years ago. By peering over the shoulders of Dr. Robert Wilensky and his graduate students, we see firsthand the day-to-day sweat and toil involved in building miniature mechanical minds.

Based on a series of articles from Esquire magazine, Into the Heart of the Mind shifts focus quickly and often between tranquil campus scenes and dramatic blackboard confrontations. We learn of the background, talents and secret dreams of each of the players in the AI team, as well as their feelings about being funded by the Pentagon. Some of these asides are gratuitous, as when Rose digresses on the draft problems of a student who happens to be the son of Moshe Arens, Israel's minister of defense.

More to the point than these anemic slices of real life are the beefy slabs of computer history sandwiched between. Most of AI's cognoscenti have speaking parts in one scene or other. Especially endearing is Hubert Dreyfus, author of "What Computers Can't Do" and gadfly to the entire AI community. Wilensky and Dreyfus hated one another until they were invited to a conference on Tibetan Buddhism and realized they were the only sane people in the room.

Although Rose doesn't have any particular ax to grind about AI, he has a brilliant grasp of the axes his heroes wield. Take John Searle's famous "Chinese Room" argument: Suppose an American were locked in a room and given instructions for mechanically processing slips of paper that would be pushed under the door and which contained Chinese characters. The instructions would mechanically specify transformations he would make to the characters, which, when pushed back out again, would turn out to be answers — in Chinese — to questions he had been asked. You certainly couldn't say the man had learned Chinese, Searle argued, yet AI researchers want to claim that computers are learning to "think" just because they perform equally mechanical manipulations of words and sentences.

It was an argument whose unfailing appeal in lecture halls infuriated Wilensky. Analyzed carefully, here was the flaw: The analog of the man in the room was the central processor in the computer system. Admittedly, a central processor alone never does more than add, compare and branch. Yet the system as a whole runs programs, routes messages, prints reports, and communicates with operators. Similarly, living organisms display "emergent" properties that cannot be reduced to their constituent chemical components. Nobody really knows why, but complex systems are usually more than the sum of their parts. So it really didn't matter that the man in the room didn't know Chinese, Wilensky concluded, since he was only one neuron in a larger network. If Searle pushed a piece of paper under the door saying "Can you understand Chinese?", he'd get the right answer: "Of course I can!"

But Searle also loved to point to arcade games. Can anyone claim, he demanded, that a computer simulation of war comes anywhere close to the real thing? Again it was semantics, said Wilensky. Sometimes artificial simulations do come close to reality. An airplane in a wind tunnel isn't experiencing real winds, but from the plane's point of view, what's the difference? If computers ever learn to give better advice than psychiatrists, who do you think people are going to listen to?

Naturally, a faulty argument does not a thesis disprove. But the real question for Robert Wilensky and other AI adherents is not whether they will create "true" intelligence. Let history make that judgement call. The point is to keep trying and see what happens. The great thing about computers, it turns out, is that they put philosophers out of business.

Although Into the Heart of the Mind suffers from lapses into sensationalism, Rose conveys well the intellectual fervor of this particular time and place in American science. He braves waters shunned by most magazine writers, including the murky depths of Husserl and Heidegger. Sometimes in these passages you're not sure what he means, and you're not sure he is either. But that's part of the point he's making: we won't really understand artificial intelligence until we understand human intelligence first.

A new classic by Brian Kernighan, an excellent overview by Kaare Christian, and a puzzle book for collectors of typographical errors.

The UNIX Programming Environment
by Brian W. Kernighan and Rob Pike
(c) 1984 Bell Telephone Laboratories, Inc.
Prentice-Hall Software Series
357 pages, $19.95 paperback
ISBN: 0-13-937699-2
ISBN: 0-13-937681-X (pbk)

The UNIX Operating System
by Kaare Christian
(c) 1983 John Wiley & Sons, Inc.
John Wiley & Sons, Inc.
318 pages, hardcover
ISBN: 0-471-87542-2
ISBN: 0-471-89052-9 (pbk)

The UNIX Book
by Mike Banahan and Andy Rutter
(c) 1983 Sigma Technical Press
John Wiley & Sons, Inc.
218 pages, $16.95 paperback
ISBN: 0-471-89676-4

Ever since Microsoft announced that MS-DOS would grow to look more like UNIX in future releases, it's been incumbent upon IBM owners to learn something about AT&T's wunderkind operating system. If UNIX sounds to you like a chic club for celibates, these recent books will set you straight. Two of them are highly recommended. The third, through no fault of its authors, is too poorly proofread to be much use to anyone not preparing a term paper on quality control problems in the publishing industry.

All three titles cover similar ground in much the same way. All state that they aren't intended to substitute for the official UNIX Programmer's Manual published by AT&T, all start with simplified tutorials that explain how to log in to the system and sniff around, all slowly progress to more ambitious material like compiling and running programs, and all contain glossaries or appendixes with more than enough technical information for readers who like to have more than they can digest at one sitting. None of them specifically mentions implementations of UNIX on personal computers, and none compares UNIX to better known systems like MVS or even other minicomputer operating systems.

The UNIX Programming Environment by Brian Kernighan and Rob Pike is by far the best of the three, and a shining example of how to make a difficult subject appealing through an elegant and sophisticated presentation. That judgement will come as no surprise to fans of The C Programming Language, a masterpiece of technical clarity that Kernighan co-authored with Dennis Ritchie in 1978 and which made the term K&R a standard buzzword among computer cognoscenti. K&R is, in fact, one of the very few commercial publications to have instantly established an international standard, in this case a formal definition of the C language. At least as important a book, in my opinion, is K&P.

The copyright page explains that K&P "was typeset in Times Roman and Courier by the authors, using a Mergenthaler Linotron 202 phototypesetter driven by a VAX-11/750 running the 8th Edition of the UNIX operating system." This not only shows the pains they have taken to produce a legible document, but is a good example of the care with which they provide the kind of additional information their more advanced readers might legitimately want. At the end of each chapter is a "History and bibliographical notes" section, full of useful citations and suggestions for further reading. I have rarely seen a textbook so bulging with pertinent information.

One of the great pleasures of reading K&P, especially for anyone who knocks their head against technical literature for a living, is the wide variety of fonts used. In particular, examples of commands to be typed at the computer terminal are shown in a monospace font, which means that a single space looks like exactly one single solitary space. In many computer texts such examples are printed in a proportional font, where each letter or character varies in width, making it hard to say just how many spaces appear between two words. This often causes no end of grief when you try out the examples on a real computer terminal.

Kaare Christian's book, The UNIX Operating System, is also an intelligent and thoughtful presentation. Fans of UNIX may do well with both books on their shelves. But K&P, in every category but one, somehow seems to edge out Christian's work.

Opposite the title page of Christian's book, for instance, we read a claim similar to K&P's: "This book was prepared for phototypesetting by the author using the typesetting tools of the UNIX Operating System." Yet the text contains misspellings like "grammer", which the spell command should have prevented. Unlike K&P, the command examples in Christian's work are printed in a proportional font, which again ought to have been easy to rectify with the troff utility. Finally, commands and keywords in the body of the text are not highlighted, resulting in indecipherable passages like "In this argument reverse shell program the for structure sequences through the arguments to the shell." I wish Christian had made full use of the typesetting tools of UNIX, but something apparently stayed his hand.

Actually, the most telling evidence that the manuscript was prepared with the help of a computer is the overabundance of references to other parts of the text. This is a trap that every new user of text formatters falls into. Since the computer figures out the page numbers for you, it's easy to refer the reader to related discussions. In practice, however, these asides are distracting. Ultimately, there is no substitute for an adequate index.

Command case is arbitrarily altered by conventions of grammar, too. After saying that typing Echo is not the same as typing echo and will result in an error, Christian refers to the date command as Date merely because it appears at the beginning of a sentence.

As to writing style, I found that Christian has a tendency to coin his own terms, like "maxicomputers", or "model" instead of the correct UNIX phrase "regular expression". His use of the word "reference" as a verb, while acceptable in shop-talk, sounds out of place in a textbook. He is quick to generalize, as when he says "The memory resident part of an operating system is called the kernel." Actually, "kernel" is a UNIX-specific term.

Still, Christian is an excellent technical writer, and very careful to organize his material in a logical way. Although I could have done without the cute illustrations showing a meat grinder where the computer should be, this book is knowledgeable, well-paced, and a first-rate contribution to the popular literature on UNIX.

No one sits down to write a 200-page computer book as a joke, and The UNIX Book by Banahan and Rutter must have started out as an honest effort. But good intentions are no remedy for the kind of editorial abuse this manuscript has endured. On the front and back covers the title is given as The UNIX Operating System Book, in direct contradiction to the copyright page. The back cover says that the "C" programming language stands for "command" — a wildly incorrect guess. And literally every other page contains infuriating typographical errors: examples are as often incorrect as not, commands rename themselves at random, parentheses open and close with abandon, and metacharacters delete themselves from examples of their own usage. People who package books this way must be the same folks who sell Mr. Cardboard Tube on late night TV.

To the above difficulties you may as well add the fact that Banahan and Rutter come from the University of Bradford in Great Britain and that British nomenclature varies somewhat from our own. What we call secondary storage they call "filestore", what we call an output file they may term a "data sink", what we call justified text they call "adjusted", and what we call a hardcopy terminal they call a "printing-on-paper terminal". Where we issue a command, they "give a command", and when we run out of disk space, they "run out of disk". An experienced editor should have Americanized these expressions in a few hours of well-spent effort, or at least added a clarifying appendix.

Although I sympathize with the authors for the butchery their work has suffered, I must also add that the work itself, though well-researched, is not without fault. Their sense of humor, for example, is inept. They conclude a chapter on word processing with the snide remark that "You may even complete some of your own documentation one day." Discussing the sort command they say "Most computers nowadays use the ASCII character set; the only users who will get funny results are those who use mainframes such as IBM, with the EBCDIC character set." And they find unending hilarity in the kill command (which can be used by one process to terminate a subordinate), since it allows them to go on and on about "parents" killing their "children".

I also fail to be impressed by research qualified by fatuous caveats such as "We don't pretend to have studied the matter in depth, but ...". Similarly, advice like "Try hard to work out what we're talking about" leaves me cold and sweaty. When the computer's response to a command is predicted and they ask, "Can you work out why?", I tend to mutter, "No — make me." And how is one to feel about the originality of a passage like "Warning: printf, fprintf, and sprintf use their control format argument to decide how many more arguments follow and what their types are. They'll get confused, and your program is likely to fail (core dumped because of bus error or segmentation violation), if there are not enough arguments or they're of the wrong type." when the standard reference on the subject, K&R, says: "A warning: printf uses its first argument to decide how many arguments will follow and what their types are. It will get confused, and you will get nonsense answers, if there are not enough arguments or if they are the wrong type."

It is interesting to compare what the authors of these three works do with their final chapters. K&P climaxes with an enormously detailed post mortem chronicling the development of one of their recent UNIX utilities called hoc ("high-order calculator"). Developing hoc required mastering yacc, make, lex and the document preparation utilities. Their analysis is profound, lacking in no detail, and exhaustive in several senses.

Christian, on the other hand, winds up with a heavy-duty section on the UNIX kernel. Since I like to get to the heart of things computerish, I found this chapter fascinating. I only wish he had told me what I should read next, since my curiosity is far from sated.

Banahan and Rutter stroll off into the sunset with a chapter called "Maintenance", which tells you all about the little clerical duties that befall system operators in a UNIX shop. They confess that they've been doing backups and restores for three solid years, which in any truly merciful society would be enough to excuse much worse crimes than this book. Trust me when I say that no one who has ever had this sort of job would believe for an instant that anyone else should ever have to read about it.

One gets the feeling that Brian Kernighan must have lost sleep night after night fool-proofing his beloved manuscript, and one is thankful for the breadth and depth of the result. But if there is one criticism I might make, and I admit I'm reaching here, it is that K&P often sounds self-consciously perfect. It is the work of men who seem to have been haunted by the question, "How are we going to top K&R?"

As hard as I tried, I could not find a single misprint in K&P, yet its unimpeachable scholarship seems tainted by a compulsion to remain at an artificially high level of specificity so as to avoid argument. You get no glimpses of the behind the scenes struggles that must have raged over the years to determine the survival of the fittest UNIX enhancements. There are occasional moments where the inconsistencies of UNIX are admitted, but only in passing and without the conviction most objective observers feel is due.

By contrast, Christian's book is not only a good treatment of UNIX but a useful overview of operating systems in general. When discussing the shell's programming language, he makes meaningful comparisons with Fortran, BASIC, LISP and SNOBOL. His discussion of the evolution of C, surprisingly, is more detailed than either of Kernighan's books. He digresses to give some background on lexical analysis vs. parsing, whereas K&P tells you only what the UNIX tools can and cannot do.

K&P is somehow too dainty to dip its toe into areas which guarantee less than sublime certitude. This is grand for readers who want only the facts, ma'am, but a bit dry for the rest of us. Then again, criminal detection is still the predominant model of how science is supposed to operate, at least in academic circles — and what's wrong with saving some stuff for the next book?

The quality of these three books is curiously analogous to their respective explanations of the derivation of the word "UNIX". Christian says "the name UNIX is a play on the word Multics." K&P, with greater precision, says "'UNIX' is not an acronym, but a weak pun on MULTICS." Banahan and Rutter, perhaps mercifully, say nothing.

Similarly, in direct contradiction to the official literature, Christian says that "cat is derived from the word concatenate". K&P, with practiced erudition, correctly ascribes it to "catenate", which "is a slightly obscure synonym for 'concatenate'". Banahan and Rutter say "Cat: concatenate and print".

But perhaps the simplest measure of the care that went into each of these books is the size of their indices: 129 entries for Banahan and Rutter, almost 300 for Christian, but more than 1800 for K&P.

I'll take the one with the fat index.

IN SHORT: a trivia-buff's dream: thousands of science facts sorted by subject within year, with brief interconnecting essays on scores of topics from Mesopotamian math to superstrings.

After World War II, science exploded. Synthetic rubber, radar, DDT, penicillin, nuclear fission, jet aircraft, helicopters, ballistic missiles and electronic digital computers are just some of the discoveries and inventions that quickly altered the daily lives of ordinary people. Yet few could have predicted that before the end of the century people would walk on the moon and believe that continents drift, that the four-color theorem would be proven by a machine, or that vacuum tubes would be replaced by microchips.

If you have an above-average interest in the breakthroughs and achievements of general science, then "The Timetables of Science" belongs on your coffee table. It arranges thousands of facts in chronological order so that you can learn just who knew what, when. Each double-page spread is arranged in ten columns under headings from Anthropology to Technology, and both a name and a subject index are provided.

Most entries take the form "X discovered/invented Y in the year Z." There is little attempt to thread discoveries into a causal network, or to distinguish between merely curious findings and innovations that led to significant research, such as the theory of natural selection. Interconnecting narrative is provided by lengthy overviews of nine historical periods, as well as a hundred brief essays on topics ranging from Egyptian medicine to Phlogiston to Superconductors.

The essays are dry: neither the personality of the authors nor of any scientist ever shows through. Since there is not a single illustration, map, chart or diagram, you won't see "how things work" or learn what famous scientists looked like. And because there is no attempt to cover the philosophy of science, you won't learn what "logical positivism" means or hear about Karl Popper's doctrine of falsification (which explains why science works and what distinguishes it from knowledge in general). None of the intellectual drama conveyed by a Bronowski or even a Sagan concerning scientific courage and honesty finds its way into these pages.

Sometimes facts can be incomprehensible out of context: Plato's conviction that the shape of an atom of water is icosahedral, for example, makes no sense at all until you understand his reasoning. Some of these entries are merely puzzling: "1973: Charles H. Bennett demonstrates that it is possible to build a computer without the known components that cause a loss of energy." And some are downright goofy: "1987: Apple's Macintosh II and Macintosh SE become the most powerful personal computers available."

An embarrassing oversight is the omission of the famous 1976 proof of the four-color theorem by a University of Illinois computer that reduced the problem to 1,936 separately solvable cases and thereby humiliated an entire generation of mathematicians. Some of the topics that most interest laymen — chaos theory and cryptozoology, for example — have been completely snubbed. And although the index includes the Latin titles of scores of books, it omits "fractals," "strange attractors," and the "inflationary model of the early universe" — all topics discussed in the text.

Information like this is certainly useful, but just don't expect "The Ascent of Man."

If you like reading about future technology, you'll love "Mind Children." Hans Moravec is not only knee deep in real leading edge research, but can draw upon a vast knowledge concerning the history of engineering and, to a lesser degree, the mechanisms of biological evolution. But beware: many of the technologies he proposes will challenge your most anthropocentric biases concerning our fitness to survive in any form our parents would recognize. By the end you may even wonder whether your right to a single human identity will always be as important to society as it now is.

"Mind Children" interweaves three essays: a compact history of computer technology, an excellent survey of recent robotic and AI developments, and a smorgasbord of highly speculative scenarios concerning the future. The scientific legitimacy of the first two narratives makes the unearthliness of the third much more disturbing than mere science fiction.

Moravec approaches his twilight zone one step at a time, describing friendly robot maids, artificial organs (including nerve tissue), molecule-sized robot/mechanics, and "magic eyeglasses" that display road maps and satellite-link you to communications networks. But with relentless logic, the panoramas become increasingly alien. The most shocking fictionalizes a medical operation in which you are fully conscious as your brain functions are transferred, bit by bit, to a bionic computer. As you watch, your body is disconnected and dies with a shudder: you have achieved electronic immortality.

But Moravec is up against the same dilemma that led Yogi Berra to conclude, "Prediction is difficult, especially about the future." Take a look at some of the "future scenarios" that shocked your grandfather and you'll see pictures of little boys learning calculus by plugging electric cords in their ears, families gathering for a sumptuous Sunday dinner of tasteless white pills, and Christmas revellers careening around clouds in deskfan-powered sleighs miles up in the air. Unlike such entertainments, which are based on simplistic extrapolations from current trends, real history is based on emerging trends that simply can't be predicted.

And some of Moravec's questions are so removed from current reality as to be impervious to scientific investigation in any sense. Since all the molecules of your body are replaced several times during your lifetime anyway, if your body becomes completely bionic aren't you still the same person? If a computer makes a backup copy of the bits comprising your identity, aren't there now two of you? Frankly, as William Safire said when asked whether sloppiness in speech was caused by ignorance or apathy, I don't know and I don't care.

The chilling "postbiological" scenarios of "Mind Children" are far from inevitable, but the engineering advances they presume will be feasible sooner or later. Moravec's day dreams therefore raise valid ethical questions about the social engineering agendas which, like eugenics, we will all have to understand and take a stand on if we are to build a planetary civilization that can meet the needs of the lowly creatures that still inhabit its surface.