Notes for Terry Winograd “Thinking Machines: Can There Be? Are We?”
Key concepts: bureaucracy of mind, connectionism, emergent intelligence, hermeneutic constructivism, heuristically adequate, mathematization of experience, mechanization of rationality, micro-truths, patchwork rationality, satisficing.
AI based on the bureaucratic subject. Mathematical modeling replaced with a symbolic emphasis, but still a simplification. Intelligence identified with a rule-governed symbol-manipulating device, with representation as the essential link. Minsky's belief in emergent intelligence from computational interactions; agents as subroutines slips to a society of homunculi. Heidegger and the phenomenologists' challenge: readiness-to-hand versus present-to-hand. Great comparison between patchwork-rationality AI techniques and bureaucracy. Turkle's connectionist emergent intelligence differs from Minsky's emergent intelligence. The computer is a language machine, not a thinking machine. AI assumes mind is linguistic down to the microscopic level. Drawing on hermeneutic and phenomenological philosophies of language and on speech act philosophy leads to an emphasis on embodiment, situatedness, context, and the social aspects of world creation through language. Suggests Heim's computer-as-component objectives, including medical reference, detection of regularities in language use, and tracking associations (compare cookies and other web tracking technologies).
Related theorists: Austin, Descartes, Gadamer, Gee, Habermas, Heidegger, Heim, Hobbes, Lee, Leibniz, Lenat, Minsky, Newell, Searle, Simon, Suchman, Turkle, Wittgenstein.
AI based on bureaucratic subject, guided by shallow research drawn from rationalism and logical empiricism.
(198) But its shortcomings are far more mundane: we have not yet been able to construct a machine with even a modicum of common sense or one that can converse on everyday topics in ordinary language.
(199) The basic philosophy that has guided the research is shallow and inadequate and has not received sufficient scrutiny. It is drawn from the traditions of rationalism and logical empiricism but has taken a novel turn away from its predecessors.
(199) I will argue that “artificial intelligence” as now conceived is limited to a very particular kind of intelligence: one that can usefully be likened to bureaucracy in its rigidity, obtuseness, and inability to adapt to changing circumstances.
THE MECHANIZATION OF RATIONALITY
Mathematical modeling replaced with a symbolic emphasis, but still a simplification.
(199) Although Descartes himself did not believe that reason could be achieved through mechanical devices, his understanding laid the groundwork for the symbol-processing machines of the modern age.
(200) The first decades of computing emphasized the application of numerical techniques. . . . The “mathematization” of experience required simplifications that made the computer results—accurate as they might be with respect to the models—meaningless in the world.
(200) The developers of artificial intelligence have rejected traditional mathematical modeling in favor of an emphasis on symbolic, rather than numerical, formalisms.
THE PROMISE OF ARTIFICIAL INTELLIGENCE
AI goals of explaining human mental processes as mechanical devices and creating intelligent tools.
(201) In building models of mind, there are two distinct but complementary goals. On the one hand is the quest to explain human mental processes as ordinary mechanical devices. On the other hand is the drive to create intelligent tools – machines that apply intelligence to serve some purpose, regardless of how closely they mimic the details of human . . .
(201) Researchers such as Newell and Simon (two other founding fathers of artificial intelligence) have sought precise and scientifically testable theories of more modest scope than Minsky suggests.
THE FOUNDATIONS OF ARTIFICIAL INTELLIGENCE
AI abandoned certainty and truth, building patchwork of micro-truths and employing methodologies that are merely heuristically adequate.
Artificial intelligence has abandoned the quest for certainty and truth. The new patchwork rationalism is built on mounds of “micro-truths” gleaned through commonsense introspection, ad hoc programming, and so-called knowledge acquisition techniques for . . .
(203) The artificial intelligence methodology does not demand a logically correct answer but one that works sufficiently often to be “heuristically adequate.”
(203) Minsky places the blame for lack of success in explaining ordinary reasoning on the rigidity of logic and does not raise the more fundamental questions about the nature of all symbolic representations and of formal (though possibly “nonlogical”) systems of rules for manipulating them.
The Physical Symbol System Hypothesis
Intelligence identified with rule-governed symbol-manipulating device, with representation as the essential link.
The fundamental principle is the identification of intelligence with the functioning of a rule-governed symbol-manipulating device.
(204) The essential link is representation – the encoding of the relevant aspect of the world. . . . Complete and systematic symbolic representation is crucial to the paradigm. The rules followed by the machine can deal only with the symbols, not their interpretations.
Problem Solving, Inference, and Search
Simon's satisficing supplanted optimizing decision theories, aiming at adequate rather than optimal plans of action.
He [Simon] supplanted decision theories based on optimization with a theory of “satisficing” – effectively using finite decision-making resources to come up with adequate, but not necessarily optimal, plans of action.
(206) The cognitive modeler does not build an overall model of the system's performance on a task but designs the individual rules in the hope that appropriate behavior will emerge from their interaction.
(206) Minsky makes explicit this assumption that intelligence will emerge from computational interactions among a plethora of small pieces.
Belief in emergent intelligence from computational interactions; agents as subroutines slips to society of homunculi.
(207) With a simple “might indeed become versatile,” we have slipped from a technically feasible but limited notion of agents as subroutines to an impressionistic description of a society of homunculi, conversing with one another in ordinary language.
Knowledge as a Commodity
Lenat's task of encoding all knowledge reflects the idea of knowledge as a commodity.
(208) Lenat has embarked on this task of “encoding all the world's knowledge down to some level of detail.”
THE FUNDAMENTAL PROBLEMS
Expert systems for managing processes too complex or rapid for unassisted humans are brittle.
(209) Applied AI is widely seen as a means of managing processes that have grown too complex or too rapid for unassisted humans.
(209) It is a commonplace in the field to describe expert systems as “brittle” - able to operate only within a narrow range of situations.
Gaps of Anticipation
Heidegger and phenomenologist challenge to patchwork rationalism: readiness-to-hand versus present-to-hand, decontextualized representation is blind; compare to Suchman and Gee.
The hope of patchwork rationalism is that with a sufficiently large body of rules, the thought-through spots will successfully interpolate to the wastelands in between.
(210) To say that “all of the world's knowledge” could be explicitly articulated in any symbolic form (computational or not), we must assume the possibility of reducing all forms of tacit knowledge (skills, intuition, etc.) to explicit facts and rules. Heidegger and other phenomenologists have challenged this, and many of the strongest criticisms of artificial intelligence are based on the phenomenological analysis of human understanding as a “readiness-to-hand” of action in the world, rather than as the manipulation of “present-to-hand” representations.
(211) The problem is one of human understanding – the ability of a person to understand how a new situation experienced in the world is related to an existing set of representations and to possible modifications of those representations.
The Blindness of Representation
(211) To build a successful symbol system, decontextualized meaning is necessary: terms must be stripped of open-ended ambiguities and shadings.
Restriction of the Domain
Restricted domain required for successful AI; explicit facts always fit within cultural orientation.
(212) The most successful artificial intelligence programs have operated in the detached puzzlelike domains of board games and technical analysis, not those demanding understanding of human lives, motivations, and social interaction.
(213) Every explicit representation of knowledge bears within it a background of cultural orientation that does not appear as explicit claims but is manifest in the very terms in which the “facts” are expressed and in the judgment of what constitutes a fact.
THE BUREAUCRACY OF MIND
Comparison between AI techniques and bureaucracy by Lee is consonant with other theorists, notably Foucault, for whom the dominant organizational characteristic of the epoch is bureaucratization, extending even to conceptions of mind, subjectivity, and consciousness.
(213) [Quoting Lee “A Bureaucracy of Intelligence”] Stated simply, the techniques of artificial intelligence are to the mind what bureaucracy is to human social interaction.
Problem of client satisfaction: a mismatch between the decontextualized application of rules and the human interpretation of the symbols appearing in them.
(214) Indeed, systems based on symbol manipulation exhibit the rigidities of bureaucracies and are most problematic in dealing with “client satisfaction” - the mismatch between the decontextualized application of rules and the human interpretation of the symbols that appear in them.
(214) The “I just follow the rules” of the bureaucratic clerk has its direct analogue in “That's what the knowledge base says.”
Patchwork rationality of bureaucratic intelligence results in forgetfulness of individual commitment.
(214) This forgetfulness of individual commitment is perhaps the most subtle and dangerous consequence of patchwork rationality.
Turkle's connectionist emergent intelligence differs from Minsky's emergent intelligence.
In this work, each computing element (analogous to a neuron) operates on simple general principles, and intelligence emerges from the evolving patterns of interaction.
(215-216) Connectionism is one manifestation of what Turkle calls “emergent AI.” The fundamental intuition guiding this work is that cognitive structure in organisms emerges through learning and experience, not through explicit representation and programming. The problems of blindness and domain limitation described above need not apply to a system that has developed through situated experience.
(216) Connectionism, like its parent cognitive theory, must be placed in the category of brash unproved hypotheses, which have not really begun to deal with the complexities of mind and whose current explanatory power is extremely limited.
AI assumes mind is linguistic down to the microscopic level: drawing on hermeneutic and phenomenological philosophies of language and on speech act philosophy leads to an emphasis on embodiment, situatedness, context, and the social aspects of world creation through language.
The computer is a language machine, not a thinking machine: precisely the point so many find defective in the OHCO hypothesis about textuality is the one Hayles argues overdetermines early posthuman conceptions as inherently discursive and symbolic.
The computer is a physical embodiment of the symbolic calculations envisaged by Hobbes and Leibniz. As such, it is really not a thinking machine, but a language machine.
The very notion of “symbol system” is inherently linguistic, and what we duplicate in our programs with their rules and propositions is really a form of verbal argument, not the workings of mind. . . . intelligence has operated with the faith that mind is linguistic down to the microscopic level.
(217) We begin with some fundamental questions about what language is and how it works. In this, we draw on work in hermeneutics (the study of interpretation) and phenomenology, as developed by Heidegger and Gadamer, along with the concepts of language action developed from the later works of Wittgenstein through the speech act philosophy of Austin, Searle, and Habermas.
Insights of hermeneutic constructivism: people create their world through language, which is always interpreted in a tacitly understood background.
(217) Two guiding principles emerge: (1) people create their world through language; and (2) language is always interpreted in a tacitly understood background.
This situatedness of natural language is perhaps why Ong dismisses programming languages.
(218) The unavoidable dependence of interpretation on unspoken background is the fundamental insight of the hermeneutic phenomenologists, such as Gadamer. It applies not just to ordinary language but to every symbolic representation as well.
Suggests Heim's computer-as-component objectives, including medical reference, detection of regularities in language use, and tracking associations (compare cookies and other web tracking technologies).
(218-219) We are already beginning to see a movement away from the early vision of computers replacing human experts. . . . The rules can be thought of as constituting an automated textbook, which can access and logically combine entries that are relevant to a particular case. The goal is to suggest and justify possibilities a doctor might not otherwise have considered.
(219) Another opportunity for design is in the regularities of the structure of language use. . . . The theory of such conversations has been developed as the basis for a computer program called The Coordinator, which is used for facilitating and organizing computer-message conversations in an organization.
(219-220) Rather than seeing the computer as working with objectified refined knowledge, it can serve as a way of keeping track of how the representations emerge from interpretations: who created them in what context and where to look for clarification.
Questioning engages projection of human image onto machine then back onto human; in AI tradition, language activity onto symbolic manipulations of machine, then back into human mind as language of thought.
(220) In asking this kind of question, we engage in a kind of projection – understanding humanity by projecting an image of ourselves onto the machine and the image of the machine back onto ourselves. In the tradition of artificial intelligence, we project an image of our language activity onto the symbolic manipulations of the machine, then project that back onto the full human mind.
Winograd, Terry. “Thinking Machines: Can There Be? Are We?” The Boundaries of Humanity: Humans, Animals, Machines. Ed. James J. Sheehan. Berkeley: University of California Press, 1991. 198-220. Print.