Notes for Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation
Key concepts: autonomous machine, compulsive programmer, effective procedure, logicality, protocol taking, tool, unconscious.
Related theorists: Jerome Bruner, Noam Chomsky, John Dewey, Erik Erikson, J. W. Forrester, Erich Fromm, Max Horkheimer, Aldous Huxley, Steven Marcus, George Miller, Marvin Minsky, Philip Morrison, Lewis Mumford, Allen Newell, Michael Polanyi, Marc J. Roberts, Roger C. Schank, Herbert Simon, Studs Terkel, A. N. Whitehead, Norbert Wiener, Terry Winograd.
Computer as metaphor to understand the human.
But a major point of this book is precisely that we, all of us, have
made the world too much into a computer, and that this remaking of
the world in the image of the computer started long before there were
any electronic computers. Now that we have computers, it becomes
somewhat easier to see this imaginative transformation we have worked
on the world. Now we can use the computer itself—that is the idea
of the computer—as a metaphor to help us understand what we have
done and are doing.
(x) The rest of the book contains the major arguments, which are in essence, first, that there is a difference between man and machine, and, second, that there are certain tasks which computers ought not be made to do, independent of whether computers can be made to do them.
(x) I am thinking primarily of Lewis Mumford, that grand old man, of Noam Chomsky, and of Steven Marcus, the literary critic. . . . And, as Lewis Mumford often remarked, it sometimes matters that a member of the scientific establishment say some things that humanists have been shouting for ages.
(4) DOCTOR, as ELIZA playing psychiatrist came to be known, soon became famous around the Massachusetts Institute of Technology, where it first came into existence, mainly because it was an easy program to demonstrate.
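DOCTOR's conversational trick rests on keyword matching and canned reassembly templates. A minimal sketch in that spirit (the rules and reflection table here are invented for illustration, not Weizenbaum's actual script):

```python
import re

# Illustrative keyword rules: a regex and a response template per keyword.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"my": "your", "i": "you", "am": "are", "me": "you"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        m = re.match(pattern, sentence.lower())
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I am unhappy"))  # "How long have you been unhappy?"
```

The shallowness is the point: the program reassembles the user's own words without any model of their meaning, which is what made the reactions Weizenbaum describes so alarming to him.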
(5) The shocks I experienced as DOCTOR became widely known and “played” were due principally to three distinct events.
In his shock that some practicing psychiatrists believed his DOCTOR
program could become part of an automatic form of psychotherapy,
Weizenbaum foreshadows what Turkle calls the robotic moment, when humans
accept machine interaction as an adequate substitute for human response.
(5-6) 1. A number of practicing psychiatrists seriously believed the DOCTOR computer program could grow into a nearly completely automatic form of psychotherapy. . . . What must a psychiatrist who makes such a suggestion think he is doing while treating a patient, that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?
Awakened to what Polanyi called a scientific outlook that produced a mechanical conception of man.
Such questions were my awakening to what [Michael] Polanyi
earlier called a “scientific outlook that appeared to have produced
a mechanical conception of man.”
(6-7) 2. I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it. . . . What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.
(7) 3. Another widespread, and to me surprising, reaction to the ELIZA program was the spread of a belief that it demonstrated a general solution to the problem of computer understanding of natural language. . . . The subsequent, much more elegant, and surely more important work of [Terry] Winograd in computer comprehension of English is currently being misinterpreted just as ELIZA was.
(8) I shall thus concern myself with the following kinds of questions:
(8) 1. What is it about the computer that has brought the view of man as a machine to a new level of plausibility?
(9) 2. . . . But then, such an explanation would also suggest that the computing machine represents merely an extreme extrapolation of a much more general technological usurpation of man's capacity to act as an autonomous agent in giving meaning to his world. It is therefore important to inquire into the wider sense in which man has come to yield his own autonomy to a world viewed as machine.
(9-10) 3. . . . If his reliance on such machines is to be based on something other than unmitigated despair or blind faith, he must explain to himself what these machines do and even how they do what they do. This requires him to build some conception of their internal “realities.” Yet most men don't understand computers to even the slightest degree. . . . For today when we speak of, for example, bureaucracy, or the university, or almost any social or political construct, the image we generate is all too often that of an autonomous machine-like process.
(11) It is therefore important that I construct my discussion of the impact of the computer on man and his society so that it can be seen as a particular kind of encoding of a much larger impact, namely, that on man's role in the face of technologies and techniques he may not be able to understand and control.
(11) Certain individuals . . . expressed grave concern about the conditions created by the unfettered march of science and technology; among them are Mumford, Arendt, Ellul, Roszak, Comfort, and Boulding.
From juridical to logical basis of spiritual cosmology and rationality.
But at bottom, no matter how it may be disguised by technological
jargon, the question is whether or not every aspect of human thought
is reducible to a logical formalism, or, to put it into the modern
idiom, whether or not human thought is entirely computable. That
question has, in one form or another, engaged thinkers in all ages.
Man has always striven for principles that could organize and give
sense and meaning to his existence. But before modern science
fathered the technologies that reified and concretized its otherwise
abstract systems, the systems of thought that defined man's place in
the universe were fundamentally juridical. . . . The spiritual
cosmologies engendered by modern science, on the other hand, are
infected with the germ of logical necessity.
(13) I would argue that, however intelligent machines may be made to be, there are some acts of thought that ought to be attempted only by humans. One socially significant question I thus intend to raise is over the proper place of computers in the social order. But, as we shall see, the issue transcends computers in that it must ultimately deal with logicality itself—quite apart from whether logicality is encoded in computer programs or not.
(13-14) Beginning perhaps with Francis Bacon's misreading of the genuine promise of science, man has been seduced into wishing and working for the establishment of an age of rationality, but with his vision of rationality tragically twisted so as to equate it with logicality. Thus have we very nearly come to the point where almost every genuine human dilemma is seen as a mere paradox, as a merely apparent contradiction that could be untangled by judicious applications of cold logic derived from a higher standpoint. . . . And so the rationality-is-logicality equation, which the very success of science has drugged us into adopting as virtually an axiom, has led us to deny the very existence of human conflict, hence the very possibility of the collision of genuinely incommensurable human interests and of disparate human values, hence the existence of human values themselves.
(14) For the only certain knowledge science can give us is knowledge of the behavior of formal systems, that is, systems that are games invented by man himself and in which to assert truth is nothing more or less than to assert that, as in a chess game, a particular board position was arrived at by a sequence of legal moves.
(16) When I say that science has been gradually converted into a slow-acting poison, I mean that the attribution of certainty to scientific knowledge by the common wisdom, an attribution now made so nearly universally that it has become a commonsense dogma, has virtually delegitimatized all other ways of understanding. People viewed the arts, especially literature, as sources of intellectual nourishment and understanding, but today the arts are perceived largely as entertainments. . . . They seek to satiate themselves at such scientific cafeterias as Psychology Today, or on popularized versions of the works of Masters and Johnson, or on scientology as revealed by L. Ron Hubbard. Belief in the rationality-logicality equation has corroded the prophetic power of language itself.
Tools also have pedagogical function; symbol of activity, model for reproduction, script for reenactment of skill.
(17-18) His tools, whatever their primary practical function, are
necessarily also pedagogical instruments. They are then part of the
stuff out of which man fashions his imaginative reconstruction of the world.
(18) They symbolize the activities they enable, i.e., their own use. . . . A tool is also a model for its own reproduction and a script for the reenactment of the skill it symbolizes.
(19) But devices and machines, perhaps known to (and certainly owned and operated by) only a relatively few members of society, have often influenced the self-image of its individual members and the world-image of the society as a whole quite as profoundly as have widely used hand tools.
(20) Many machines are functional additions to the human body, virtually prostheses.
(21) For if victory over nature has been achieved in this age, then the nature over which modern man reigns is a very different nature from that in which man lived before the scientific revolution. Indeed, the trick that man turned and that enabled the rise of modern science was nothing less than the transformation of nature and of man's perception of reality.
(21) The paramount change that took place in the mental life of man, beginning during roughly the fourteenth century, was in man's perception of time and consequently of space. . . . The idea that nature behaves systematically in the sense we understand it—i.e., that every part and aspect of nature may be isolated as a subsystem governed by laws describable as functions of time—this idea could not have been even understood by people who perceived time, not as a collection of abstract units (i.e., hours, minutes, and seconds), but as a sequence of constantly recurring events.
(22) Cosmological time, as well as the time perceived in daily life, was therefore a sort of complex beating, a repeating and echoing of events.
(23) Lewis Mumford calls the clock, not the steam engine, “the key machine of the modern industrial age.”
(23) Mumford goes on to make the crucial observation that the clock “disassociated time from human events and helped create the belief in an independent world of mathematically measurable sequences: the special world of science.”
Clock as autonomous machine rather than prosthesis (Mumford).
The clock is clearly not a prosthetic machine; it extends neither
man's muscle nor his senses. It is an autonomous machine.
(24) An autonomous machine is one that, once started, runs by itself on the basis of an internalized model of some aspect of the real world.
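Weizenbaum's definition can be made concrete with a toy: a clock that, once started, runs only on its internalized model of time as a count of abstract, uniform units, never consulting the world it purports to measure. The tick count and `ticks_per_hour` parameter are my own illustrative choices:

```python
import itertools

def clock(ticks_per_hour=60):
    # The internalized model: time is nothing but an ever-growing count
    # of identical ticks, partitioned into hours and minutes.
    for tick in itertools.count():
        hour, minute = divmod(tick, ticks_per_hour)
        yield f"{hour:02d}:{minute:02d}"

c = clock()
print([next(c) for _ in range(3)])  # ['00:00', '00:01', '00:02']
```

Nothing in the generator refers to sunrise, tides, or recurring events; it reifies the abstract units that, on Weizenbaum's account, displaced the older experienced time.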
(25) The various states of this model were given names and thus reified. And the whole collection of them superimposed itself on the existing world and changed it, just as much as a cataclysmic rearrangement of its geography or climate might have changed it. . . . It is important to realize that this newly created reality was and remains an impoverished version of the older one, for it rests on a rejection of those direct experiences that formed the basis for, and indeed constituted, the old reality.
(25) The rejection of direct experience was to become one of the principal characteristics of modern science.
(26) “Every thinker,” John Dewey wrote, “puts some portion of an apparently stable world in peril and no one can predict what will emerge in its place.” So too does everyone who invents a new tool or, what amounts to the same thing, finds a new use for an old one. . . . We have little choice but to project the lessons yielded by our understanding of the past, our plausible hypotheses, onto the present and the future. And the difficulty of that task is vastly increased by the fact that modern tools impact on society far more critically in a much shorter time than earlier ones did.
(28) There are corresponding beliefs about the need for computers in the management of large corporations and of the military, about the indispensability of computers in modern scientific computations, and, indeed, about the impossibility of pursuing modern science and modern commerce at all without the aid of computers.
(28-29) The computer was not a prerequisite to the survival of modern society in the postwar period and beyond; its enthusiastic, uncritical embrace by the most “progressive” elements of American government, business, and industry quickly made it a resource essential to society's survival in the form that the computer itself had been instrumental in shaping.
In telling the history of technological advance, we ignore the possibility of responses other than computerization that might have occurred; Forrester focused on the inability to act as the impetus to build SAGE.
The “inability to act” which, as [J. W.] Forrester pointed
out, “provided the incentive” to augment or replace the
low-internal-speed human organizations with computers, might in some
other historical situation have been an incentive for modifying the
task to be accomplished, perhaps doing away with it altogether, or
for restructuring the human organizations whose inherent limitations
were, after all, seen as the root of the trouble. . . . But the
computer was used to build, in the words of one air force colonel, “a
servomechanism spread out over an area comparable to the whole
American continent,” that is, the SAGE air-defense system.
(31) An enormous acceleration of social invention, had it begun then, would now seem to us as natural a consequence of man's predicament in that time as does the flood of technological invention and innovation that was actually stimulated.
(31) The computer, then, was used to conserve America's social and political institutions. It buttressed them and immunized them, at least temporarily, against enormous pressures for change.
Most fateful social change was eschewing all deliberate thought of substantive change.
(31-32) But of the many paths to social innovation it opened to man,
the most fateful was to make it possible for him to eschew all
deliberate thought of substantive change. . . . But if the triumph of
a revolution is to be measured in terms of the profundity of the
social revisions it entrained, then there has been no computer revolution.
(32-33) It is noteworthy that Thomas Savery, the builder of the first steam engine that was applied practically in industry (circa 1700), was also the first to use the term “horsepower” in approximately its modern sense. . . . It is to be expected that some potent symbols will survive the passage nearly intact, and will exert their influence on even the new framework.
(33) Computers had horses of another color to replace. . . . Tab rooms were the horse-tramways of business data processing, tab machines the horses.
Automation of tab rooms with computers compared to mere substitution of horses by steam engines; transformation awaited the projection of computer power onto operations research and systems analysis.
(33-34) Still, business used the early computers to simply “automate”
its tab rooms, i.e., to perform exactly the earlier operations, only
now automatically and, presumably, more efficiently. The crucial
transition, from the business computer as a mere substitute for
work-horse tab machines to its present status as a versatile
information engine, began when the power of the computer was
projected onto the framework already established by operations
research and systems analysis.
(34-35) It is important to understand very clearly that strengthening a particular technique—putting muscles on it—contributes nothing to its validity. . . . If astrology is nonsense, then computerized astrology is just as surely nonsense.
(35) It may seem odd, even paradoxical, that the enhancement of a technique may expose its weaknesses and limitations, but it should not surprise us.
(37) What is less often said, however, is that the society's newly created ways to act often eliminate the very possibility of acting in older ways. An analogous thing happens in ordinary language. . . . Terms like “free” (as in “the free world”), “final solution,” “defense,” and “aggression” have been so thoroughly debased by corrupt usage that they have become essentially useless for ordinary discourse.
(38) But the widely believed picture of managers typing questions of the form “What shall we do now?” into their computers and then waiting for their computers to “decide” is largely wrong. What is happening instead is that people have turned the processing of information on which decisions must be based over to enormously complex computer systems. They have, with few exceptions, reserved for themselves the right to make decisions based on the outcome of such computing processes. People are thus able to maintain the illusion, and it is often just that, that they are after all the decisionmakers. But, as we shall argue, a computing system that permits the asking of only certain kinds of questions, that accepts only certain kinds of “data,” and that cannot even in principle be understood by those who rely on it, such a computing system has effectively closed many doors that were open before it was installed.
WHERE THE POWER OF THE COMPUTER COMES FROM
(40) Machines, when they operate properly, are not merely law abiding; they are embodiments of law.
(41) The machines that populate our world are no longer exclusively, or even mainly, clanking monsters, the noisy motion of whose parts defines them as machines.
New paradigm of machines as information transmitters rather than motion transmitters.
(41) The arrival of all sorts of electronic machines, especially of the electronic computer, has changed our image of the machine from that of a transducer and transmitter of power to that of a transformer of information.
Effective procedure: a set of state-transition rules telling the player precisely how to behave from moment to moment, allowing treatment of a formal language as a game.
The laws embodied by a machine that interacts with the real world
must perforce be a subset of the laws governing the real world.
(44) A crucial property that the set of rules of any game must have is that they be complete and consistent.
(45) Using this terminology, we may characterize the rules of an abstract game as state-transition rules.
(46) Such a set of rules—that is, a set of rules which tells a player precisely how to behave from one moment to the next—is called an effective procedure.
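The definition invites a small illustration: an effective procedure as a state-transition table that dictates exactly one move for every state and input, so that play is fully mechanical. The even/odd parity game below is my own example, not one from the book:

```python
# State-transition rules: for every (state, symbol) pair, exactly one
# next state. Complete and consistent, in Weizenbaum's sense.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run(bits, state="even"):
    # The "player" never deliberates: each move is dictated by the table.
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state

print(run("1101"))  # "odd": an odd number of 1s has been seen
```

Because the table is complete (a move for every situation) and consistent (never more than one move), anyone, or anything, following it will behave identically, which is exactly what qualifies it as an effective procedure.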
(48) The problem that thus arises would be solved if there were a single inherently unambiguous language in which we could and would write all effective procedures.
(50) A formal language is a game. That is not a mere metaphor but a statement asserting a formal correspondence. But if that statement is true, we should, when talking about a language, be able to easily move back and forth between a game-like vocabulary and a corresponding language-like vocabulary.
Importance of constructing universal machines; example of a Turing-machine-like game using toilet paper, white and black stones, and a die.
Turing proved that a universal Turing machine exists by showing how
to construct one.
(63) Turing answered that question as well: a Turing machine can be built to realize any process that could naturally be called an effective procedure.
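One way to see the claim is to sketch a tiny Turing-machine simulator. The simulator is generic; the binary-increment table fed to it is one illustrative effective procedure of my own devising, not an example from the book:

```python
def simulate(table, tape, state="start", blank="_", limit=1000):
    # Tape as a dict from position to symbol; unwritten cells read blank.
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(limit):
        if state == "halt":
            break
        symbol = tape.get(pos, blank)
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Increment a binary number: scan right past the last digit, then walk
# left turning trailing 1s to 0s until a 0 (or the left edge) absorbs
# the carry.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(simulate(INCREMENT, "1011"))  # "1100" (11 + 1 = 12)
```

Swapping in a different table changes what the machine computes without touching the simulator, which is the essence of Turing's universality result.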
(64) Such a way of knowing is very weak. We do not say we know a city, let alone that we understand it, solely on the basis of having a detailed map of it. Apart from that, if we understand the language in which a procedure is written well enough to be able to explicate its transformation rules, we probably understand what rules stated in that language tell us to do.
(67) Leaving to one side everything having to do with formally undecidable questions, interminable procedures, and defective procedures, the unavoidable question confronts us: “Are all decisionmaking processes that humans employ reducible to effective procedures and hence amenable to machine computation?”
(68-69) If we wish to continue to identify languages with machines even when discussing natural language, then we must recognize that, whatever machines correspond to natural languages, they are more like machines that transform energy and deliver power than like the abstract machines we have been considering; i.e., their laws must take cognizance of the real world. Indeed, the demands placed on them are, if anything, more stringent than those placed on mere engines. For although the laws of engines are merely subsets of the laws of physics, the laws of a natural-language machine must somehow correspond to the inner realities manifest and latent in the person of each speaker of the language at the time of his speaking.
(70-71) What is so remarkable is how incredibly few things we must know in order to have access, in principle, to all of mathematics. In ordinary life we give each other directions, i.e., describe procedures to one another, that, although perhaps technically ambiguous in that they are potentially subject to various interpretations, are, for all practical purposes, effective procedures. They rest at bottom on extremely widely shared vocabularies whose elements, when they appear in highly conventionalized contexts, have effectively unique interpretations.
(71-72) To now assert that there are things we know but cannot tell is not to answer the question but to shift our attention from the concept of telling, where until now we have tried to anchor it, to that of knowing. We shall see that this is a very proper and crucially important shift, that the question of what we can get a computer to do is, in the final analysis, the question of what we can bring a computer to know.
HOW COMPUTERS WORK
(88) In this we followed a quite universally accepted programming practice: whereas many public washrooms display a sign urging users to leave the room as they found it, we adopt just the opposite convention. We say “Put the room in the condition you wish it to be in before you begin serious work.”
Importance of conditional branching for autonomous behavior.
(96) The ability of computers to execute conditional-branch instructions—i.e., to modify the flow of control of their programs as a function of the outcome of tests on intermediate results of their own computations—is one of their most crucial properties, for every effective procedure can be reduced to a series of nothing but commands (i.e., statements of the form “do this” and “do that”) interlaced with conditional-branch instructions.
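Weizenbaum's reduction claim can be sketched by writing the same procedure twice: once idiomatically, and once as numbered commands interlaced with nothing but conditional branches. Euclid's gcd is my own choice of example:

```python
def gcd(a, b):
    # Idiomatic form, with structured control flow.
    while b:
        a, b = b, a % b
    return a

def gcd_branching(a, b):
    # Same procedure flattened into "do this" commands plus
    # conditional-branch instructions on a program counter.
    pc = 0
    while True:
        if pc == 0:
            pc = 3 if b == 0 else 1   # conditional branch: test b
        elif pc == 1:
            a, b = b, a % b           # command: "do this"
            pc = 2
        elif pc == 2:
            pc = 0                    # branch back to the test
        else:
            return a                  # halt

print(gcd_branching(48, 18))  # 6
```

The flattened version is what actually runs on hardware: loops and if-statements compile down to exactly this interlacing of commands and conditional jumps.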
Programmers sense power of computer by their ability to program it to do things, even if they do not know how it works.
What is our position today, when we are so much farther removed from understanding how computer technologies work than when Weizenbaum wrote? We simply believe in their universal power because they do many things.
(103) If today's programmers are largely unaware of the detailed
structures of the physical machines they are using, of their
languages, and of the translators that manipulate their programs,
then they must also be largely ignorant of many of the arguments I
have made here, particularly of those arguments concerning the
universality of computers and the nature of effective procedures. How
then do these programmers come to sense the power of the computer?
(103-104) Their conviction that, so to say, the computer can do anything—i.e., their correct intuition that the languages available to them are, in some nontrivial sense, universal—comes largely from their impression that they can program any procedure they thoroughly understand. That impression, in turn, is based on their experience of the power of subroutines and of the reducibility of complex decision processes to hierarchies of binary (i.e., two-way branching) choices.
(105) The computer programmer's sense of power derives largely from his conviction that his instructions will be obeyed unconditionally and that, given his ability to build arbitrarily large program structures, there is no limit to at least the size of the problems he can solve.
(107) There are many ways to “tell” a computer something.
Only partial understanding is needed to program, being experimental like writing; myth of depth (consider against Turkle).
(107-108) The idea that a person can write a program that embodies anything he “thoroughly understands” is at least equally problematical. . . . In effect, we all constantly use subroutines whose input-output behavior we believe we know, but whose details we need not and rarely do think about. To understand something sufficiently well to be able to program it for a computer does not mean to understand it to its ultimate depth.
Programming in order to come to understand; compare to Kittler saying the solution only comes when all internal speech dissipates.
The other side of the coin is the belief that one cannot program
anything unless one thoroughly understands it. This misses the truth
that programming is, again like any form of writing, more often than
not experimental. One
programs, just as one writes, not because one understands, but in
order to come to understand.
(109) It is in fact very hard to explain anything in terms of a primitive vocabulary that has nothing whatever to do with that which has to be explained. Yet that is precisely what most programs attempt to do.
(110) The relationship between understanding and writing thus remains as problematical for computer programming as it has always been for writing in any other form.
SCIENCE AND THE COMPULSIVE PROGRAMMER
(115) Moreover, and this is a crucial point, systems so formulated and elaborated act out their programmed scripts. They compliantly obey their laws and vividly exhibit their obedient behavior. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or a field of battle and to command such unswervingly dutiful actors or troops.
Power corrupts; any surprise that there is a typographic error in this key part of the book?
(115) One would have to be astonished if Lord Acton's observation that power corrupts were not to apply in an environment in which omnipotence is so easily achievable. It does apply. And the corruption evoked by the computer programmer's omnipotence manifests itself in a form that is instructive in a domain far larger [(sic)] that the immediate environment of the computer. To understand it, we will have to take a look at a mental disorder that, while actually very old, appears to have been transformed by the computer into a new genus: the compulsion to program.
The compulsive programmer, the computer bum, has since been replaced and quantitatively outnumbered by the compulsive gamer, social networker, and other consumer practices; time wasted in front of the screen and behind the wheel.
Oblivious to their bodies joins dynamic creation of philosopheme PHI Diogenes Laertius.
Weizenbaum, in pointing out the growing zombie hordes of compulsive computer users decades before MMORPGs, documents the early effects of the human-computer symbiosis gone bad; but in emphasizing extreme cases he draws our attention away from the mundane, long-term effects of using particular technologies, just as writers who analyze geek cultures shift focus away from what has happened to everyday America.
Wherever computer centers have become established, that is to say, in
countless places in the United States, as well as in virtually all
other industrial regions of the world, bright young men of disheveled
appearance, often with sunken glowing eyes, can be seen sitting at
computer consoles, their arms tensed and waiting to fire their
fingers, already poised to strike, at the buttons and keys on which
their attention seems to be as riveted as a gambler's on the rolling
dice. . . . Their rumpled clothes, their unwashed and unshaven faces,
and their uncombed hair all testify that they are oblivious
to their bodies and
to the world in which they move. They exist, at least when so
engaged, only through and for the computers. These are computer
bums, compulsive programmers.
(117) The professional regards programming as a means toward an end, not as an end in itself. His satisfaction comes from having solved a substantive problem, not from having bent a computer to his will.
Hacker viewed as possessing technique but not knowledge, pleasurelessly driven like a compulsive gambler; compare to Turkle bricoleur versus hard mastery programming styles.
I have already said that the compulsive programmer, or hacker as he
calls himself, is usually a superb technician. It seems therefore
that he is not “without skill” as the definition would have it.
But the definition fits in the deeper sense that the hacker is
“without definite purpose”: he cannot set before himself a
clearly defined long-term goal and a plan for achieving it, for he
has only technique, not knowledge.
(119) But since there is no general theory of the whole system, the system itself can be only a more or less chaotic aggregate of subsystems whose influence on one another's behavior is discoverable only piecemeal and by experiment.
(120) His apparently devoted efforts to improve and promote his own creation are really an assault on it, an assault whose only consequence can be to renew his struggle with the computer.
(121) The compulsive programmer is driven; there is little spontaneity in how he behaves; and he finds no pleasure in the fulfillment of his nominal wishes. He seeks reassurance from the computer, not pleasure. The closest parallel we can find to this sort of psychopathology is in the relentless, pleasureless drive for reassurance that characterizes the life of the compulsive gambler.
(122) The gambler, according to the psychoanalyst Edmund Bergler, has three principal convictions: first, he is subjectively certain that he will win; second, he has an unbounded faith in his own cleverness; third, he knows that life itself is nothing but a gamble.
(125) These three mechanisms, called by Polanyi circularity, self-expansion, and suppressed nucleation, constitute the main defensive armamentarium of the true adherent of magical systems of thought, and particularly of the compulsive programmer. Psychiatric literature informs us that this pathology deeply involves fantasies of omnipotence.
(126) The compulsive programmer is merely the proverbial mad scientist who has been given a theater, the computer, in which he can, and does, play out his fantasies.
(127) Hence we can make out a continuum. At one of its extremes stand scientists and technologists who much resemble the compulsive programmer. At the other extreme are those scientists, humanists, philosophers, artists, and religionists who seek understanding as whole persons and from all possible perspectives.
Huxley's drunk looking for keys under the lamplight, applied to computational cognitive science.
Science can proceed only by simplifying reality. The first step in
its process of simplification is abstraction. And abstraction means
leaving out of account all those empirical data which do not fit the
particular conceptual framework within which science at the moment
happens to be working, which, in other words, are not illuminated by
the light of the particular lamp under which science happens to be
looking for keys. Aldous Huxley [wrote] on this matter with considerable clarity.
(129) Simon also provides us with an exceptionally clear and explicit description of how, and how thoroughly, the scientist prevents himself from crossing the boundary between the circle of light cast by his own presuppositions and the darkness beyond.
(130-131) Science and technology are sustained by their translations into power and control. . . . But that power of the computer is merely an extreme version of a power that is inherent in all self-validating systems of thought. . . . We must also learn that the same danger is inherent in other magical systems that are equally detached from authentic human experience, and particularly in those sciences that insist they can capture the whole man in their abstract skeletal frameworks.
THEORIES AND MODELS
(135-136) The position of a human being observing another human being is not so very different from that of the explorers who wish to understand the computers they have encountered. We too have extremely limited access to the neurophysiological material that appears to determine how we think. . . . A microanalysis of brain functions is, moreover, no more useful for understanding anything about thinking than a corresponding analysis of the pulses flowing through a computer would be for understanding what program the computer is running. Such analyses would simply be at the wrong conceptual level. They might help to decide crucial experiments, but only after such experiments had been designed on the basis of much higher-level (for example, linguistic) theories.
Chomsky's hypothesis: human degrees of freedom are imposed by genetic endowment; universal grammar is a projective description of the mind.
(136-137) In fact, Chomsky's most profoundly significant working
hypothesis is that man's genetic endowment gives him a set of highly
specialized abilities and imposes on him a corresponding set of
restrictions which, taken together, determine the number and kinds of
degrees of freedom that govern and delimit all human language
(137) Chomsky's hypothesis is, to put it another way, that the rules of such a universal grammar would constitute a kind of projective description of important aspects of the human mind.
(139) Clearly, Simon's and Newell's ambition is taken seriously both by powerful U.S. government agencies and by a significant sector of the scientific community.
(140) A theory is first of all a text, hence a concatenation of the symbols of some alphabet. But it is a symbolic construction in a deeper sense as well; the very terms that a theory employs are symbols which, to paraphrase Abraham Kaplan, grope for their denotation in the real world or else cease to be symbolic.
(142) One use of a theory, then, is that it prepares the conceptual categories within which the theoretician and the practitioner will ask his questions and design his experiments.
Models based on theories can figure things out, giving agency to texts; a computer program can be both theory and model, giving preferred status to writing programs to investigate even humanities questions.
Of course, a theory cannot “figure out” anything. It is, after
all, merely a text. But we can often build a model on the basis of a
theory. And there are models which can, in an entirely nontrivial
sense, figure things out.
(144-145) Computers make possible an entirely new relationship between theories and models. . . . The point is precisely that computers do interpret texts given to them, in other words, that texts determine computers' behavior. . . . A theory written in the form of a computer program is thus both a theory and, when placed on a computer and run, a model to which the theory applies.
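To make this theory/model distinction concrete, here is a deliberately trivial sketch of my own (not from Weizenbaum): as a text, the program below states a "theory" of free fall; interpreted by a computer, the same text becomes a running model.

```python
# A "theory" of free fall written as a program. As a text it merely
# asserts that distance fallen grows as one half g t squared; run on a
# computer, the same text becomes a model that "figures out" trajectories.

G = 9.81  # gravitational acceleration in m/s^2

def position(t: float) -> float:
    """Theory-as-text: distance fallen after t seconds, starting from rest."""
    return 0.5 * G * t ** 2

# The theory, interpreted by the machine, now behaves as a model.
trajectory = [position(t) for t in (0.0, 1.0, 2.0)]
print(trajectory)
```

The point of the sketch is only that the identical string of symbols plays both roles, which is what Weizenbaum says is new about computers.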
(149-150) We select, for inclusion in our model, those features of reality that we consider to be essential to our purpose. . . . But again, judgment must be exercised to decide what the something might be, and whether it is “essential” for the purpose the model is intended to serve. The ultimate criteria, being based on intentions and purposes as they must be, are finally determined by the individual, that is, human, modeler.
Models also have properties of their own not shared by what they model.
(150) The problem associated with the question of what is and what is
not “essential” cuts the other way as well. A model is, after
all, a different object from what it models. It therefore has
properties not shared by its counterpart.
(152) Computer models have, as we have seen, some advantages over theories stated in natural language. But the latter have the advantage that patching is hard to conceal. If a theory written in natural language is, in fact, a set of patches and patches on patches, its lack of structure will be evident in its very composition. Although a computer program similarly constructed may reveal its impoverished structure to a trained reader, this kind of fault cannot be so easily seen in the program's performance. A program's performance, therefore, does not alone constitute an adequate validation of it as theory.
(152) Computer programs tend to reveal their errors, especially their lack of consistency, quickly and sharply. And, in skilled hands, computer modeling provides a quick feedback that can have a truly therapeutic effect precisely because of its immediacy.
COMPUTER MODELS IN PSYCHOLOGY
(157) The computer has become a source of truly powerful and often useful metaphors. . . . The public vaguely understands—but is nonetheless firmly convinced—that any effective procedure can, in principle, be carried out by a computer. Since man, nature, and even society carry out procedures that are surely “effective” in one way or another, it follows that a computer can at least imitate man, nature, and society in all their procedural aspects. Hence everything (that word again!) is at least potentially understandable in terms of computer models and metaphors. Indeed, on the basis of this unwarranted generalization of the words “effective” and “procedure” the word “understanding” is also redefined. To those fully in the grip of the computer metaphor, to understand X is to be able to write a computer program that realizes X.
Computer as number cruncher valorizes analytic techniques over the ideas those techniques enable one to explore (George Miller).
Yet the folk wisdom that perceives the computer as a basically
trivial instrument rests on an accurate insight: the computer, used
as a “number-cruncher” (that is, merely as a fast numerical
calculator, and it is so used especially in the behavioral sciences),
has often, as George Miller
also pointed out, put muscles on analytic techniques that are more
powerful than the ideas those techniques enable one to explore.
(160) We can say in anticipation that the power of a metaphor to yield new insights depends largely on the richness of the contextual frameworks it fuses, on their potential mutual resonance.
Performance, simulation, and theory modes of AI work are often conflated, for example in Newell and Simon's General Problem Solver.
Workers in AI tend to think of themselves as working in one of two modes, often called performance mode and simulation mode.
(165) A third mode of operation should perhaps be mentioned in this context: theory mode.
(167-168) The modern literature on problem solving is punctuated by two important books, George Polya's How to Solve It and Newell's and Simon's Human Problem Solving. . . . Heuristics are thus not algorithms, not effective procedures; they are plausible ways of attacking specific problems.
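The algorithm/heuristic contrast here can be illustrated with a toy example of my own (not code from Polya or Newell and Simon): exhaustive search is an effective procedure guaranteed to find the global maximum of a function on a finite domain, while hill climbing is a merely plausible attack that can stall on a local peak.

```python
# Toy contrast between an effective procedure and a heuristic
# (my own illustration; the score function f is invented).

def f(x: int) -> int:
    """A bumpy score function on 0..20: local peak at x=4, global peak at x=15."""
    peaks = {4: 10, 15: 20}
    return max(v - 3 * abs(x - p) for p, v in peaks.items())

def exhaustive_max(lo: int, hi: int) -> int:
    """Algorithm (effective procedure): examine every candidate; success guaranteed."""
    return max(range(lo, hi + 1), key=f)

def hill_climb(start: int, lo: int, hi: int) -> int:
    """Heuristic: move to a better neighbor until stuck; may stall on a local peak."""
    x = start
    while True:
        neighbors = [n for n in (x - 1, x + 1) if lo <= n <= hi]
        best = max(neighbors, key=f)
        if f(best) <= f(x):
            return x  # no better neighbor: possibly only a local peak
        x = best

print(exhaustive_max(0, 20))  # finds the global peak at 15
print(hill_climb(3, 0, 20))   # started near the local peak, it stalls at 4
```

The heuristic is cheaper but carries no guarantee, which is exactly the distinction Weizenbaum draws from Polya.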
Protocol taking, the basis of Newell and Simon's method, exemplifies information-processing psychology but not neurophysiology.
that is, watching other people solve problems, became virtually a
hallmark of Newell and Simon's procedure.
(170) Information-processing psychology is, however, not information-processing neurophysiology.
(171) The most ambitious information-processing system that has been built for the purpose of studying human problem-solving behavior is Newell and Simon's General Problem Solver (GPS).
(174) It is the information-processing theory of man which concerns us here, not GPS as such. And we are concerned with that theory precisely because it, in one variation or another, sometimes explicitly and sometimes implicitly, underlies almost all the new information-processing psychology and constitutes virtually a dogma for the artificial-intelligence community.
(176) It is precisely this unwarranted claim to universality that demotes their use of the computer, computing systems, programs, etc., from the status of a scientific theory to that of a metaphor.
(178) To say that GPS is, in any sense at all, an embodiment of a theory of human problem solving is equivalent to saying that high school algebra is also such an embodiment.
Access to the external world and acculturation of a general vocabulary are key.
(178-179) [quoting Newell and Simon] “Due account must be taken of
the limitations of GPS's access to the external world. The initial
part of the explicit instructions to GPS have been acquired long ago
by the human in building up his general vocabulary. This
[information] has to be spelled out to GPS.” There, precisely, is
where the question is begged. For the real question is, what happens
to the whole man as he builds his general vocabulary? How is his
perception of what a “problem” is shaped by the experiences that
are an integral part of his acquisition of his vocabulary? How do
these experiences shape his perception of what “objects,”
“operators,” “differences,” “goals,” etc., are relevant
to any problems he may be facing? And so on. No theory that sidesteps
such questions can possibly be a theory of human problem solving.
(180) But the point is precisely that the perversion of everyday thought by the computer metaphor has turned every problem into a technical problem to which the methods here discussed are thought to be appropriate.
THE COMPUTER AND NATURAL LANGUAGE
(183) If people from outside the computer fields are to be able to interact significantly with computers, then either they must learn the computer's languages or it must learn theirs.
(184) Man's capacity to manipulate symbols, his very ability to think, is inextricably interwoven with his linguistic abilities. Any re-creation of man in the form of machine must therefore capture this most essential of his identifying characteristics.
Example of “The house blew it” versus “The house blue it” reflects internalized English grammar.
(185) We all have some criteria, an internalized grammar of the
English language, that allow us to tell that the string of words “The
house blue it” is ungrammatical. That is a purely syntactic
judgment. On the other hand, we recognize that the sentence “The
house blew it” is grammatical, even though we may have some
difficulty deciding what it means, that is, how to understand it. We
say we understand it only when we have been able to construct a story
within which it makes sense, that is, when we can point to some
contextual framework within which the sentence has a meaning, perhaps
even an “obvious” meaning.
(185-186) It is, of course, far easier to get a grip on the problem of machine understanding of natural language than on the corresponding problem for vision, first of all because language can be represented in written form, that is, as a string of symbols chosen from a very small alphabet. . . . A worker on machine understanding of English text makes no important intellectual commitment to any particular research hypothesis or strategy when he adopts certain symbols as primitive, that is, as not themselves analyzable. But the worker on vision problems will have virtually determined major components of his research strategy the moment he decides on, say, edges and corners as elements of his primitive vocabulary.
(186) It has happened many times in the history of modern computation that some technological advance in computer hardware or programming (software) has triggered a virtually euphoric mania.
(187-188) The recognition that a contextual framework is essential to understanding natural text was first exploited by so-called question-answering systems. . . . The specification of a very highly constrained universe of discourse enormously simplifies the task of understanding—and that is, of course, true for human communication as well.
(188) The first program that illuminated this other side of the man-machine communication problem was my own ELIZA.
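ELIZA worked by matching input against keyword decomposition patterns and reassembling the fragments into canned reply templates. A minimal sketch in that spirit (illustrative only; the patterns and pronoun reflections below are invented, not Weizenbaum's actual DOCTOR script):

```python
import re

# Tiny ELIZA-style responder: spot a keyword pattern, reflect pronouns,
# and reassemble the user's own words into a canned template.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"my (.*)", re.I), "Your {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # content-free default when no keyword matches

print(respond("I am unhappy about my job"))
```

Even this toy shows why the illusion of understanding arises: the machine contributes no knowledge at all, only a transformation of the speaker's own words.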
(190) On a much higher level, each participant brings to the conversation an image of who the other is. . . . We are, in other words, all of us prejudiced—in the sense of pre-judging—about each other.
Schank's theory proposes specific underlying mechanisms for analyzing natural-language utterances.
What I wish to emphasize here is that [Roger C.] Schank's
theory proposes a formal structure for the conceptual bases
underlying linguistic utterances, that it proposes specific
mechanisms (algorithms) for basing predictions on such conceptual
structures, and that it proposes formal rules for analyzing
natural-language utterances and for converting them into the
(196) Newell, Simon, Schank, and Winograd simply mistake the nature of the problems they believe themselves to be “solving.” As if they were benighted artisans of the seventeenth century, they present “general theories” that are really only virtually empty heuristic slogans, and then claim to have verified these “theories” by constructing models that do perform some tasks, but in a way that fails to give insight into general principles. . . . The most important and far-reaching effect of this failure is that researchers in artificial intelligence constantly delude themselves into believing that the reason any particular system has not come close to realizing AI's grand vision is always to be found in the limitations of the specific system's program.
Philosophical reduction to two questions for AI: whether the conceptual bases underlying linguistic understanding are formalizable, and whether some ideas relate to objectives inappropriate for machines to understand.
(197) There are then, two questions that must ultimately be
confronted. First, are the conceptual bases that underlie linguistic
understanding entirely formalizable, even in principle, as Schank
suggests and as most workers in AI believe? Second, are there ideas
that, as I suggested, “no machines will ever understand because
they relate to objectives that are inappropriate for machines?”
(198) The fact that these questions have become important at all is indicative of the depth to which the information-processing metaphor has penetrated both the academic and the popular mind.
(200) But what is most important in both instances is that the theories be convertible to computer programs.
(200) At best, what we see here is another example of the drunkard's search. A theory purports to describe the conceptual structures that underlie all human language understanding. But the only conceptual structures it admits as legitimate are those that can be represented in the form of computer-manipulatable data structures. These are then simply pronounced to constitute all the conceptual structures that underlie all of human thought.
Alternate reasonable grand goal for AI: individual life extension via machinery, in line with media convergence and the virtual-reality dystopia of The Matrix.
(202) Both Simon and Schank have thus given expression to the deepest
and most grandiose fantasy that motivates work on artificial
intelligence, which is nothing less than to build a machine on the
model of man, a robot that is to have its childhood, to learn
language as a child does, to gain its knowledge of the world by
sensing the world through its own organs, and ultimately to
contemplate the whole domain of human thought.
(203) I shall argue that an organism is defined, in large part, by the problems it faces. Man faces problems no machine could possibly be made to face.
Equivocation of success with the intellectual abilities measurable by IQ is large-scale social prejudice.
The trouble with I.Q. testing is not that it is entirely spurious,
but that it is incomplete. It measures certain intellectual abilities
that large, politically dominant segments of western European
societies have elevated to the very stuff of human worth and hence to
the sine qua non of . . .
(205) Yet forms of the idea that intelligence is measurable along an absolute scale, hence that intelligences are comparable, have deeply penetrated current thought.
Does it matter whether future states of the art track or deviate from this putative empirical fact?
(208) First (and least important), the ability of even the most advanced of currently existing computer systems to acquire information by means other than what Schank called “being spoon-fed” is still extremely limited.
Importance of embodiment for humans as the ground of experience that grounds interests, as well as for interpersonal communication.
(208-209) Second, it is not obvious that all human knowledge is
encodable in “information structures,” however complex. . . .
There are, in other words, some things humans know by virtue of
having a human body.
(209) Third . . . there are some things people come to know only as a consequence of having been treated as human beings by other human beings.
(209) The human use of language manifests human memory. And that is a quite different thing than the store of the computer, which has been anthropomorphized into “memory.” The former gives rise to hopes and fears, for example. It is hard to see what it could mean to say that a computer hopes.
Socializability of both humans and machines seems to entail fundamental differences, as between any set of organic species, for example losing the paradise of infancy; although Berry's ethic of being a good stream seems to instantiate the machine perspective (Erikson's catastrophe).
If both machines and humans are socializable, then we must ask in
what way the socialization of the human must necessarily be different
from that of the machine.
(210) Every organism is socialized by the process of dealing with problems that confront it. The very biological properties that differentiate one species from another also determine that each species will confront problems different from those faced by any other. Every species will, if only for that reason, be socialized differently.
(211) A catastrophe, to use Erik Erikson's expression for it, that every human being must experience is his personal recapitulation of the biblical story of paradise.
Unless human intelligence is transferred and then takes on new problems, machine intelligence will always be alien.
(213) We, however, conclude that however much intelligence computers may attain, now or in the future, theirs must always be an intelligence alien to genuine human problems and concerns.
Logic is only a small component of ordinary human thinking, which extends by intuition into embodiment beyond the monolithic CPU paradigm, an argument supported by brain-hemisphere studies.
There is, however, still another assumption that
information-processing modelers of man make that may be false, and
whose denial severely undermines their program: that there exists one
and only one class of information processes, and that every member of
that class is reducible to the kind of information processes
exemplified by such systems as GPS and Schank-like
language-understanding formalisms. Yet every human being has the
impression that he thinks at least as much by intuition, hunch, and
other such informal means as he does “systematically,” that is by
means such as logic.
(214) Within the last decade or so, however, neurological evidence has begun to accumulate that suggests there may be a scientific basis of the folk wisdom.
(218-219) We learn from the testimony of hundreds of creative people, as well as from our own introspection, that the human creative act always involves the conscious interpretation of messages coming from the unconscious, the shifting of ideas from the left hand to the right, in [Jerome] Bruner's phrase.
Whole man, whole ant: poking fun at Simon's ant, whose intelligence comes from the complexity of its environment, a point that also applies to humans; the mysterious spectacle is much richer than its reduced equivalence in computable logic. Weizenbaum dares to invoke the unconscious and infant socialization as examples of human abilities computers cannot simulate, and admits we default to Whitehead's fallacy of misplaced concreteness.
Even calculating reason compels the belief that we must stand in awe
of the mysterious spectacle that is the whole man—I would even add,
that is the whole ant.
(222) The lesson here is rather that the part of the human mind which communicates to us in rational and scientific terms is itself an instrument that disturbs what it observes, particularly its voiceless partner, the unconscious, between which and our conscious selves it mediates.
(222) We are capable of listening with the third ear, of sensing living truth that is truth beyond any standards of provability. It is that kind of understanding, and the kind of intelligence that is derived from it, which I claim is beyond the abilities of computers to simulate.
(222-223) But gradually, even slyly, our own minds become infected with what A. N. Whitehead called the fallacy of misplaced concreteness. We come to believe that these theoretical terms are ultimately interpretable as observations, that in the “visible future” we will have ingenious instruments capable of measuring the “objects” to which these terms refer.
(225) This sort of knowledge is acquired with the mother's milk and through the whole process of socialization that is itself so intimately tied to the individual's acquisition of his mother tongue. It cannot be learned from books; it cannot be explicated in any form but life itself.
(226) What could be more obvious than the fact that, whatever intelligence a computer can muster, however it may be acquired, it must always and necessarily be absolutely alien to any and all authentic human concerns?
Shifts to ethical stance against giving computers tasks demanding wisdom.
(227) There have been many debates on “Computers and Mind.” What I conclude here is that the relevant issues are neither technological nor even mathematical; they are ethical. They cannot be settled by asking questions beginning with “can.” The limits of the applicability of computers are ultimately statable only in terms of oughts. What emerges as the most elementary insight is that, since we do not now have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom.
Likely to disagree with his statement that there are no marketable AI results today; the examples of DENDRAL and MACSYMA are the best he can muster.
(229) With few exceptions, there have been no results, from over
twenty years of artificial-intelligence research, that have found
their way into industry generally or into the computer industry in particular.
(229) Two exceptions are the remarkable programs DENDRAL and MACSYMA that exist at Stanford University and at M.I.T., respectively.
(229-230) DENDRAL interprets outputs of mass spectrometers, instruments used for analyses of chemical molecules. In ordinary practice, chemists in postdoctoral training are employed to deduce the chemical structures of molecules given to this instrument from the so-called mass spectra it produces. . . . Stated in general terms, then, DENDRAL is a program that analyzes mass spectra and produces descriptions of the structures of molecules that, with very high probability, gave rise to these spectra. The program's competence equals or exceeds that of human chemists in analyzing certain classes of organic molecules.
(230-231) MACSYMA is, by current standards, an enormously large program for doing symbolic mathematical manipulations. . . . What is important here is that, just as for DENDRAL, there exist strong theories about how the required transformations are to be made. . . . And just as for DENDRAL, MACSYMA's task is one that is normally accomplished only by highly trained specialists.
(231) These two programs are distinguished from most other artificial intelligence programs precisely in that they rest solidly on deep theories.
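The "deep theories" behind MACSYMA's symbolic manipulation are formal transformation rules such as the rules of differentiation. A toy differentiator of my own (nowhere near MACSYMA's scale, with an invented tuple representation) shows the flavor of such theory-based transformation:

```python
# Toy symbolic differentiation over nested tuples, in the spirit of the
# rule-based transformations a symbolic-math system applies.
# Expressions are numbers, the symbol "x", or ("+", a, b) and ("*", a, b).

def d(expr):
    """Return the (unsimplified) derivative of expr with respect to x."""
    if expr == "x":
        return 1
    if isinstance(expr, (int, float)):
        return 0  # derivative of a constant
    op, a, b = expr
    if op == "+":
        return ("+", d(a), d(b))                       # sum rule
    if op == "*":
        return ("+", ("*", d(a), b), ("*", a, d(b)))   # product rule
    raise ValueError(f"unknown operator: {op}")

# d/dx (x * x + 3) -> (1*x + x*1) + 0, i.e. 2x before simplification
print(d(("+", ("*", "x", "x"), 3)))
```

The point is that each transformation is licensed by a theorem of the calculus, which is what distinguishes such programs from the ad hoc, heuristic systems Weizenbaum discusses next.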
Heuristic basis of AI and other programs appeals to ad hoc construction by groups of individuals over long periods; compare to software products like automation systems.
(232) But most existing programs, and especially the largest and
most important ones, are not theory-based in this way. They are
heuristic, not necessarily in the sense that they employ heuristic
methods internally, but in that their construction is based on rules
of thumb, stratagems that appear to “work” under most foreseen
circumstances, and on other ad hoc mechanisms that are added to them
from time to time.
(232) What is much more important, however, is that almost all the very large computer programs in daily use in industry, in government, and in the universities are of this type as well. These gigantic computer systems have usually been put together (one cannot always use the word “designed”) by teams of programmers, whose work is often spread over many years. By the time these systems come into use, most of the original programmers have left or turned their attention to other pursuits. It is precisely when such systems begin to be used that their inner workings can no longer be understood by any single person or by a small team of individuals.
Wiener realized the misattribution that the programmer understands every detail of the processes embodied by programs.
(232) Norbert Wiener, the father of cybernetics, foretold this
phenomenon in a remarkably prescient article published almost fifteen years ago.
(233) What Norbert Wiener described as a possibility has long since become reality. The reasons for this appear to be almost impossible for the layman to understand or accept. His misconception of what computers are, of what they do, and of how they do what they do is attributable in part to the pervasiveness of the mechanistic metaphor and the depth to which it has penetrated the unconscious of our entire culture. . . . To him [Minsky] computers and computer programs are “mechanical” in the same simple sense as steam engines and automobile transmissions.
(234) The layman, having heard the slogan in question, believes that the very fact that a program runs on a computer guarantees that some programmer has formulated and understands every detail of the process which it embodies.
Legal/bureaucratic view of program formulation appeals to the vicissitudes of execution; although the layperson believes programmers know every detail and the theoretical bases, such knowledge is much more sparse and brittle (MacKenzie).
(234) Program formulation is thus rather more like the creation of a
bureaucracy than like the construction of a machine of the kind Lord
Kelvin may have understood.
(235) It is undoubtedly this kind of trust that Minsky urges us to invest in complex artificial-intelligence programs that grow in effectiveness but which come to be beyond our understanding.
Legitimation of knowledge base of programs that are not understood by their users; fallacy of misplaced concreteness?
(236-237) Our society's growing reliance on computer systems that
were initially intended to “help” people make analyses and
decisions, but which have long since both surpassed the understanding
of their users and become indispensable to them, is a very serious
development. . . . And their growth and the increasing reliance
placed on them is then accompanied by an increasing legitimation of
their “knowledge base.”
(237) Professor Philip Morrison of M.I.T. wrote a poignant parable on this theme [of seismological world map based only on data collected after 1961 that was digitized].
Annihilation of historical memory by elimination of data that is not already digitized in standard formats: compare to Ong on destruction of oral cultures.
(238) The computer has thus begun to be an instrument for the
destruction of history. For when society legitimates only those
“data” that are “in one standard format” and that “can
easily be told to the machine,” then history, memory itself, is annihilated.
(239) Modern technological rationalizations of war, diplomacy, politics, and commerce (such as computer games) have an even more insidious effect on the making of policy. . . . The enormous computer systems in the Pentagon and their counterparts elsewhere in our culture have, in a very real sense, no authors. Thus they do not admit of any questions of right or wrong, of justice, or of any theory with which one can agree or disagree. They provide no basis on which “what the machine says” can be challenged.
Despite tranquilizing myths of inevitability and Fromm's escape from freedom, there are actors who are obliged to master the programming and control of computers; good evidence that a philosophy of computing and programming occurred in the focus on debugging, yet it couches human intentions as a problem of technique.
(240) One would expect that large numbers of individuals, living in a
society in which anonymous, hence irresponsible, forces formulate the
large questions of the day and circumscribe the range of possible
answers, would experience a kind of impotence and fall victim to
mindless rage. . . . Yet an alternative response is also very
pervasive; as seen from one perspective, it appears to be
resignation, but from another perspective it is what Erich Fromm
long ago called “escape from freedom.”
(241) Today even the most highly placed managers represent themselves as innocent victims of a technology for which they accept no responsibility and which they do not even pretend to understand. . . . The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.
(241) But, in fact, there are actors!
Allusion to goal of automatic programming, ease of use, and trustworthiness in unnamed university planning paper.
(242-244) [quoting unnamed planning paper by director of major
university computer laboratory] “The importance of the role stems,
as has been noted, from the fact that the computer has been
incorporating itself, and will surely continue to incorporate
itself, into most of the functions that are fundamental to the
support, protection, and development of our society. Even now, there
is no turning back, and in a few years it will be clear that we
are as vitally dependent upon the informational processing of our
computers as upon the growth of grain in the field and the flow of
fuel from the well. . . . Nevertheless, debugging should be in the
focus of the research effort undertaken to master programming. The
reason is that research on debugging will yield insight into many
problems in the formulation and expression of human intention.” . .
. Once we understand “human intentions,” itself a technical
problem, all else is technique. . . . “Eventually, if the effort is
successful, the model becomes the automatic programmer. . . .
The convergence of direction . . . involves making computers not only
easy to use but, as has been stressed here, trustworthy.”
What Simon says counts; therefore, his philosophy should be scrutinized.
(245 note) Professor Simon is one of the most influential statesmen
of science in America today. What he says really counts.
(248) The various systems and programs we have been discussing share some very significant characteristics: they are all, in a certain sense, simple; they all distort and abuse language; and they all, while disclaiming normative content, advocate an authoritarianism based on expertise.
(248) But the philosophical differences between the two attitudes [Skinner and Forrester] are slight. Forrester sees literally the whole world in terms of feedback loops.
Reason reduced to domination of things, man, and nature; links to Nietzsche, Heidegger and Kittler.
(249) But these systems are simple in a deeper, and more important
sense as well. They have reduced reason itself to only its role in
the domination of things, man, and, finally, nature.
(250) In the process of adapting ourselves to these systems, we, even the admirals among us, have castrated not only ourselves (that is, resigned ourselves to impotence), but our very language as well. For now language has become merely another tool, all concepts, ideas, images that artists and writers cannot paraphrase into computer-comprehensible language have lost their function and their potency.
All social problems treated as technical problems, exemplified by Vietnam war; link to Golumbia, Edwards, procedural rhetoric and videogame criticism.
(251-252) When every problem on the international scene is seen by the “best and brightest” problem solvers as being a mere technical problem, wars like the Viet Nam war become truly inevitable. The recognition of genuinely conflicting but legitimate interests of coexisting societies—and such recognition is surely a precondition of conflict resolution or accommodation—is rendered impossible from the outset. Instead, the simplest criteria are used to detect differences, to search for means to reduce these differences, and finally to apply operators to “present objects” in order to transform them into “desired objects.” It is, in fact, entirely reasonable, if “reason” means instrumental reason, to apply American military force, B-52's, napalm, and all the rest, to “communist-dominated” Viet Nam (clearly an “undesirable object”), as the “operator” to transform it into a “desirable object,” namely, a country serving American interests.
Computers as fetish and concrete form of Horkheimer eclipse of reason.
(252) Horkheimer, long before computers became a fetish and gave concrete form to the eclipse of reason, gave us the needed . . .
(255) The alternative to the kind of rationality that sees the solution to world problems in psychotechnology is not mindlessness. It is reason restored to human dignity, to authenticity, to self-esteem, and to individual autonomy.
(257) On the other hand, it may be that religion was not addictive at all. Had it been, perhaps God would not have died and the new rationality would not have won out over grace. But instrumental reason, triumphant technique, and unbridled science are addictive. They create a concrete reality, a self-fulfilling nightmare.
AGAINST THE IMPERIALISM OF INSTRUMENTAL REASON
(258) There is a parable in that, too: the power man has acquired through his science and technology has itself been converted into impotence.
Studs Terkel common people believe power exercised by leaders, yet American Secretary of State believes events befall us, and Chief of Staff a slave to computers; see Edwards.
(259) Perhaps the common people [as portrayed by Studs Terkel] believe that, although they are powerless, there is power, namely, that exercised by their leaders. But we have seen that the American Secretary of State believes that events simply “befall” us, and that the American Chief of the Joint Chiefs of Staff confesses to having become a slave of computers. Our leaders cannot find the power either.
Biofeedback movement as proto-bioengineering, stripping power of choice.
(259) The now ascendant biofeedback movement may be the penultimate
act in the drama separating man from nature; man no longer even
senses himself, his body, directly, but only through pointer
readings, flashing lights, and buzzing sounds produced by instruments
attached to him as speedometers are attached to automobiles.
(259) Power is nothing if it is not the power to choose. Instrumental reason can make decisions, but there is all the difference between deciding and choosing.
Routinely do things with computer technology like morally questionable experiments, such as violent video games and pornography; treating everything as an object puts our souls at peril.
(260) Is not the overriding obligation on men, including men of
science, to exempt life itself from the madness of treating
everything as an object, a sufficient reason, and one that does not
even have to be spoken?
(261) Our time prides itself on having finally achieved the freedom from censorship for which libertarians in all ages have struggled. . . . But, on closer examination, this victory too can be seen as an Orwellian triumph of an even higher ignorance: what we have gained is a new conformism, which permits us to say anything that can be said in the functional languages of instrumental reason, but forbids us to allude to what Ionesco called the living truth.
(261) If that is so, then those who censor their own speech do so, to use an outmoded expression, at the peril of their souls.
Attack on human spirit by reduction to functional language, and making decisions that lock future generations into particular technological forms (Stallman on cloud computing).
(262) These responsibilities are
especially grave since future generations cannot advocate their own
cause now. We are all their trustees.
(263) [Marc J.] Roberts chose to illustrate that scientific hypotheses are not “value free” by citing how values enter into the scientist's choice to tolerate or not to tolerate the potential cost of being wrong.
(265) There simply is a responsibility—it cannot be wished away—to decide which problems are more important or interesting or whatever than others. Every specific society must constantly find ways to meet that responsibility. The question here is how, in an open society, these ways are to be found; are they to be dictated by, say, the military establishment, or are they to be open to debate among citizens and scientists?
(265) A central question of knowledge, once won, is its validation; but what we now see in almost all fields, especially in the branches of computer science we have been discussing, is that the validation of scientific knowledge has been reduced to the display of technological wonders.
Counter dehumanization by social engineering by appealing to personal judgment intrinsic worth; try to get a machine to do this.
(266) The individual is dehumanized whenever he is treated as less than a whole person. The various forms of human and social engineering we have discussed here do just that, in that they circumvent all human contexts, especially those that give real meaning to human . . .
(267) This is not an argument for solipsism, nor is it a counsel for every man to live only for himself. But it does argue that every man must live for himself first. For only by experiencing his own intrinsic worth, a worth utterly independent of his “use” as an instrument, can he come to know those self-transcendent ends that ultimately confer on him his identity and that are the only ultimate validators of human knowledge.
Strong philosophy of computing and programming positions: no morally repugnant projects, but obscene and irreversible applications should be avoided; the animal experiments and robotic moment have happened.
(267) This spirit dictates that I
must exhibit some of my own decisions about what I may and may not do
in computer science.
(268) There is, in my view, no project in computer science as such that is morally repugnant and that I would advise students or colleagues to avoid.
(268) There are, however, two kinds of computer applications that either ought not be undertaken at all, or, if they are contemplated, should be approached with utmost caution.
(268-269) The first kind I would call simply obscene. . . . The proposal I have mentioned, that an animal's visual system and brain be coupled to computers, is an example.
(269-270) I would put all projects that propose to substitute a computer system for a human function that involves interpersonal respect, understanding, and love in the same category. . . . The point is (Simon and Colby to the contrary notwithstanding) that there are some human functions for which computers ought not to be substituted. It has nothing to do with what computers can or cannot be made to do. Respect, understanding, and love are not technical problems.
(270) The second kind of computer application that ought to be avoided, or at least not undertaken without very careful forethought, is that which can easily be seen to have irreversible and not entirely foreseeable side effects. If, in addition, such an application cannot be shown to meet a pressing human need that cannot readily be met in any other way, then it ought not to be pursued.
Interesting choice of speech recognition as an application to avoid (contrary to Licklider); either too expensive or will lead to surveillance state.
(270) The example I wish to cite here is that of the automatic
recognition of human speech.
(271) But here we have to remember that the problem is so enormous that only the largest possible computers will ever be able to manage it.
(271) This project then represents, in the eyes of its chief sponsor, a long step toward a fully automated battlefield.
(271-272) But such listening machines, could they be made, will make monitoring of voice communication very much easier than it now is. Perhaps the only reason that there is very little government surveillance of telephone conversations in many countries of the world is that such surveillance takes so much manpower. . . . As a citizen I ask, why should my government spend approximately 2.5 million dollars a year (as it now does) on this project?
(273) That so many people so often ask what they must do is a sign that the order of being and doing has become inverted.
(273) In a world in which man increasingly meets only himself, and then only in the form of the products he has made, the makers and designers of these products—the buildings, airplanes, foodstuffs, bombs, and so on—need to have the most profound awareness that their products are, after all, the results of human choices.
(273) It is hard, when one sees a particularly offensive television commercial, to imagine that adult human beings sometime and somewhere sat around a table and decided to construct exactly that commercial and to have it broadcast hundreds of times. But that is what happens. These things are not products of anonymous forces.
(274-275) The intention of most of these men was not to invent or recommend a new technology that would make warfare more terrible and, by the way, less costly to highly industrialized nations at the expense of “underdeveloped” ones. Their intention was to stop the bombing. . . . Yet, who can tell what effect it would have had if forty of America's leading scientists had, in the summer of 1966, joined the peace groups in coming out flatly against the war on moral grounds?
(275) The first is that it was not technological inevitability that invented the electronic battlefield, nor was it a set of anonymous forces. . . . This kind of intellectual self-mutilation, precisely because it is largely unconscious, is a principal source of the feeling of powerlessness experienced by so many people who appear, superficially at least, to occupy seats of power.
Do computer games and simulations distance or rather permit empathy?
(275-276) A second lesson is this. These men were able to give the
counsel they gave because they were operating at an enormous
psychological distance from the people who would be maimed and killed
by the weapons systems that would result from the ideas they
communicated to their sponsors. The lesson, therefore, is that the
scientist and technologist must, by acts of will and of the
imagination, actively strive to reduce such psychological distances,
to counter the forces that tend to remove him from the consequences
of his actions.
(276) When instrumental reason is the sole guide to action, the acts it justifies are robbed of their inherent meanings and thus exist in an ethical vacuum.
Civil courage in small contexts of governmentality, especially teachers of computer science; good entry point for critical programming, obligation of the university to do more than train.
(276) It is a widely held but a grievously mistaken belief that civil
courage finds exercise only in the context of world-shaking events.
To the contrary, its most arduous exercise is often in those small
contexts in which the challenge is to overcome the fears induced by
petty concerns over career, over our relationships to those who
appear to have power over us, over whatever may disturb the
tranquility of our mundane existence.
(276) And, because this book is, after all, about computers, let that call be heard mainly by teachers of computer science.
(277) He must teach the limitations of his tools as well as their power.
(277) Almost anyone with a reasonably orderly mind can become a fairly good programmer with just a little instruction and practice. . . . Immature students are therefore easily misled into believing that they have truly mastered a craft of immense power and of great importance when, in fact, they have learned only its rudiments and nothing substantive at all.
(278) When such students have completed their studies, they are rather like people who have somehow become eloquent in some foreign language, but who, when they attempt to write something in that language, find they have literally nothing of their own to say.
(278) The function of a university cannot be to simply offer prospective students a catalog of “skills” from which to choose. . . . Surely the university should look upon each of its citizens, students and faculty alike, first of all as human beings in search of—what else to call it?—truth, and hence in search of themselves.
(278) Just because so much of a computer-science curriculum is concerned with the craft of computation, it is perhaps easy for the teacher of computer science to fall into the habit of merely training.
(279) Finally, the teacher of computer science is himself subject to the enormous temptation to be arrogant because his knowledge is somehow “harder” than that of his humanist colleagues.
(280) Without the courage to confront one's inner as well as one's outer worlds, such wholeness is impossible to achieve. Instrumental reason alone cannot lead to it. And there precisely is a crucial difference between man and machine: Man, in order to become whole, must be forever an explorer of both his inner and his outer realities.
Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W. H. Freeman and Company, 1976. Print.