Notes for David Golumbia The Cultural Logic of Computation

Key concepts: analog, computational linguistics, computationalism, cultural politics, culture of cool, formal linguistics, governmentality, language of thought, language organ, Leviathan principle, mastery, neoliberalism, OHCO thesis, oligarchical capitalism, philosophical functionalism, possessive individualism, presentism, striation.


Related theorists: Alison Adam, Sven Birkerts, Nicholas Carr, Noam Chomsky, Wendy Chun, Andy Clark, Gilles Deleuze, Daniel Dennett, Jacques Derrida, Jerry Fodor, Michel Foucault, Thomas Friedman, Alexander Galloway, Felix Guattari, N. Katherine Hayles, Thomas Hobbes, Wilhelm von Humboldt, Alan Liu, Adrian Mackenzie, C.B. Macpherson, George Miller, Nicholas Negroponte, Frederick Newmeyer, Hilary Putnam, W.V.O. Quine, Allen Renear, G.C. Spivak, Sherry Turkle, McKenzie Wark, Warren Weaver, Joseph Weizenbaum, Terry Winograd, Slavoj Zizek.

CHAPTER ONE
The Cultural Functions of Computation

(1) To a greater degree than do some of those earlier concepts, computing overlaps with one of the most influential lines in the history of modern thought, namely the rationalist theory of mind.

Computationalism, mind itself as computer, surprisingly underwrites traditional conceptions of humanity, society, and politics.

(1-2) This book foregrounds the roles played by the rhetoric of computation in our culture. I mean thereby to question not the development of computers themselves but the emphasis on computers and computation that is widespread throughout almost every part of the social fabric. . . . my concern is that belief in the power of computation—a set of beliefs I call here computationalism—underwrites and reinforces a surprisingly traditionalist conception of human being, society, and politics.
(2) The primary goal is to understand our own culture, in which computers play a significant but not decisive role.

Rejects historical rupture associated with rise of electronic computing machinery.

(2-3) Too often the rhetoric of computation, especially that associated with so-called new media, suggests that we are in the process of experiencing a radical historical break of just this millennial sort. . . . Networks, distributed communication, personal involvement in politics, and the geographically widespread sharing of information about the self and communities have been characteristic of human societies in every time and every place: a burden of this book is to resist the suggestion that they have emerged only with the rise of computers.

The Circulation of Computational Discourse
(3) This book focuses primarily on the ways in which the rhetoric of computation, and the belief-system associated with it, benefits and fits into established structures of institutional power.
(3-4) I focus on the institutional effects of computing not merely to ensure that cultural criticism fully addresses our moment; rather, I am convinced both intellectually and experientially that computers have different effects and meanings when seen from the nodes of institutional power than from the ones they have when seen from other perspectives. . . . I am convinced that from the perspective of the individual, and maybe even from the perspective of informal social groups, the empowering effects of computerization appear (and may even be) largely salutary. But from the perspective of institutions, computerization has effects that we as citizens and individuals may find far more troubling.
(4-5) Inside our existing institutions of power, computerization tends to be aligned with relatively authority-seeking, hierarchical, and often politically conservative forces—the forces that justify existing forms of power. This is true in academic disciplines (where it is especially visible in analytic philosophy, the subject of Chapter 3, and in linguistics, the subject of Chapter 4); it is true in corporations and the corporate monitoring and control of everyday life, including the worldwide spread of capital and accompanying surveillance known as globalization (Chapter 6); and it is true even in politics, despite the obvious utility of computers for communicating and political organizing (Chapters 8 and 9). . . . Following a line of criticism that extends at least as far back as Kant (at least on one interpretation of Kant's views), and that has recent avatars in figures as diverse as established scholars like Lewis Mumford (1934, 1964), Harold Innis (1950, 1951), Jacques Ellul (1964, 1980, 1990), Joseph Weizenbaum, Martin Heidegger, Norbert Wiener (1954, 1964), Terry Winograd, and Theodore Roszak (1986), and more recent writers like Langdon Winner (1977, 1988), Mark Poster (1990, 2001, 2006), Michael Adas, Philip Agre (1997), Christopher May (2002), Kevin Robins and Frank Webster (1999), Alison Adam, McKenzie Wark, Scott Lash (2002), Vincent Mosco, Dan Schiller, Lisa Nakamura, and others discussed below, I argue that computationalism meshes all too easily with the project of instrumental reason.
(5) In addition, therefore, to championing practices such as hacking . . . I argue that we must also keep in mind the possibility of de-emphasizing computerization.

Birkerts and Turkle on danger of pleasurable lure away from physical forms of social interaction.

(6) But Birkerts also points to a line of critique that must be taken more seriously, which goes something like this: how do we guarantee that computers and other cultural products are not so pleasurable that they discourage us from engaging in absolutely necessary forms of social interaction?

Constructivist philosophical form; interpretive method of cultural politics.

(6-7) This book is philosophical in form, but interpretive in method. . . . We are always talking about cultural politics, even when we appear not to be doing so. . . . My goal is not to articulate an alternative to computationalist presumptions about language, mind, and culture. It is to show the functions of that discourse in our society, to think about how and why it is able to rule out viable alternative views, and to argue that it is legitimate and even necessary to operate as if it is possible that computationalism will eventually fail to bear the philosophical conceptual burden that we today put on it.

Computationalism
(7) In its received (sometimes called its “classical”) form, computationalism is the view that not just human minds are computers but that mind itself must be a computer.

Computationalism related to Hayles regime of computation, neoliberalism, Deleuze and Guattari war machine.

Differentiating the influence of computational rhetoric from general mass computerization; mechanist views link with conservatism, particularly American neoliberalism, whose historical fact Foucault and Deleuze and Guattari recognize; computation generates subjectivity, including the potential to learn the know-how (savoir) of state sovereignty, illustrated by civil disobedience and hacking.

(8-9) While philosophers use the term computationalism to refer to others of their kind who believe that the human mind is ultimately characterizable as a kind of computer, here I deploy the term more expansively as a commitment to the view that a great deal, perhaps all, of human and social experience can be explained via computational processes. . . . In this sense, by computationalism I mean something close—but not identical—to what Hayles (2005) rightly calls the “regime of computation”; I believe it is accurate to say that the regime of computation targets the combined effects of computational rhetoric and mass computerization; here, at least in great part, my effort is to separate these two phenomena, even if we often want to examine how they work in tandem.
(9) In the most explicit accounts of Western intellectual history, mechanist views cluster on the side of political history that we typically think of as the right, or conservatism, or Tory politics, or in our day, and perhaps more specifically relevant to this inquiry, neoliberalism. . . . Resistance to the view that the mind is computational is often found in philosophers we associate with liberal or radical (usually, but not always, left) views, despite a significant amount of variety in their views—for example, Locke, Hume, Nietzsche, Marx.
(10) Just in order to take advantage of what Deleuze and Guattari (1982, 1987) call the “war machine,” and then subsequently as a method of social organization in general, the State uses computation and promotes computationalism. . . . Interiority qua universal subjectivity emerges from numerical rationality applied as an understanding of human subjectivity, and not vice versa. . . . Because each citizen has the power to reason (to calculate ratios, or in our terms to compute) for himself, each citizen has access to the know-how (Foucault's savoir) of State sovereignty.
(11) In this sense, computers wrap the “legacy data” of the social world in formal markup, whose purpose is to provide the sovereign with access for post-hoc analysis, and secondarily to provide filter-style control. Computation can then be used, at sovereign discretion, as part of instruction, as a way of conditioning subjects to respond well to the computational model.
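To make the markup image concrete, a minimal sketch (my own hypothetical illustration in Python, not Golumbia's): an unstructured social fact wrapped in formal markup becomes a fielded record available for post-hoc analysis and filter-style control.

```python
# Minimal sketch (my illustration, not Golumbia's): wrapping "legacy data"
# of the social world in formal markup, so a free-form record becomes a
# fielded, query-able object.

legacy = "Sold 3 sheep to the miller on market day"  # unstructured social fact

record = {
    "action": "sale",
    "object": "sheep",
    "quantity": 3,
    "counterparty": "miller",
    "date": "market_day",
}

# Render the record as XML-style markup; every field is now addressable.
markup = "<transaction>" + "".join(
    f"<{key}>{value}</{key}>" for key, value in record.items()
) + "</transaction>"
print(markup)

# A filter can now act on structure rather than prose:
flagged = [r for r in [record] if r["quantity"] > 2]
print(flagged)
```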
(11-12) The closing words of [Nicholas] Negroponte's best-selling book completely lack exemplary support, and with good reason. Their staging of an artificial, precomputerization past where things like collaboration as opposed to competition existed seems purely ideological, and the observation that computers alone teach a kind of perspectivalism instanced in the ability to read code as poetry is nothing short of bizarre.
(12) Just as importantly, it is critical not to accept a priori the idea that computation as such refers only to the operations of the particular physical objects we understand as computers.
(12-13) it must look to what computers are doing in our world, from the implementation of widely distributed identification and surveillance to the reduction of all the world's cultures for profit.
(13) We need to find a way to generate critical praxis even of what appears as an inarguable good.

Key point: adopting hyper-rationalism shunts aside alternative discourses; this follows the same argument by which computing aligns with rationality better than other sciences do.

(13) For at least one hundred years and probably much longer, modern societies have been built on the assumption that more rationality and more technē (and more capital) are precisely the solutions to the extremely serious problems that beset our world and our human societies. Yet the evidence that this is not the right solution can be found everywhere. . . . To some extent this is a perpetual tension in all societies, not just in ours or in so-called modern ones; what is distinctive about our society, historically, is its emphasis on rationalism and its terrific adeptness at ruling out any discourse that stands against rationalism.
(14) The computer, despite its claims to fluidity, is largely a proxy for an idealized form of rationalism. This book shows how the rationalist vision could be mutated into something like a full articulation of human society, despite the obvious, repeated, a priori and a posteriori reasons that this could never and will never be the case. On this view, the main reason figures like Kant, Hegel, Plato, Hume, the late Wittgenstein, and even Derrida and Spivak look odd at all to us is precisely because of the sheer power held by the rationalist vision over so much of society.

The Deconstruction of Computation

Definition of computation as mathematical calculation that can stand for nonmathematical propositions, invoking Spivak, Landow and Derrida.

Computationalism is a discourse, since humans are intimately involved alongside the machinic others it creates.

(14) Despite its rigid formal characteristics, in part because of them, then, computationalism is in every sense what Foucault calls a discourse, one that we are actively creating and enabling, and among whose fundamental principles is the elaboration of centralized power. . . . There is little more to understanding computation than comprehending this simple principle: mathematical calculation can be made to stand for propositions that are themselves not mathematical, but must still conform to mathematical rules.
(15) Few writers have doubted the importance of rational calculation in the operation of human thinking. What is in question is the degree to which that sort of calculation explains all the facts of cognition, all the effects of culture, and all that is possible in the realm of culture and cognition.
(16) It is no accident that [G. C.] Spivak uses the term programmed to describe the kind of thought that Kant did not think encompasses all of human reason, precisely the kind of cognitive practice that would eliminate the ambiguity that so troubled Leibniz and others.
(17) Despite the efforts of pro-computer writers like George Landow to make hypertext sound like the realization of Derridean dreams of a language without binding or hierarchical structures (Landow 1992), in fact from his earliest writing Derrida has been concerned precisely with the difference between human language and something like computer code.
(17-18) Derrida is no Luddite. . . . But the computer in particular is a technology that caused him great concern, for precisely the reason that it offers to substitute for the flux of experience an appearance of certainty that cannot, in fact, adequately represent our experience.

Languages are not codes because languages rarely have a single correct interpretation; calling code a “language” is thus a deliberate utilitarian metaphor whose artificiality has been forgotten.

(19) Programming languages, as Derrida knew, are codes: they have one and only one correct interpretation (or, at the absolute limit, a determinate number of discrete interpretations). Human language practice almost never has a single correct interpretation. Languages are not codes; programming “languages” like Basic and FORTRAN and scripting “languages” like HTML and JavaScript do not serve the functions that human languages do. The use of the term language to describe them is a deliberate metaphor, one that is meant to help us interact with machines, but we must not let ourselves lose sight of its metaphorical status, and yet this forgetting has been at stake from the first application of the name.
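To make the contrast concrete, a minimal sketch (my own hypothetical example, not from the book): a string of program code receives exactly one parse fixed by the language definition, while an ordinary English string supports incompatible readings that nothing in the string itself decides.

```python
import ast

code = "rate * hours + bonus"
# Python's grammar assigns this string one and only one parse tree.
print(ast.dump(ast.parse(code, mode="eval")))

sentence = "I saw the man with the telescope"
# Both readings are grammatical; no rule internal to the string decides.
readings = [
    "saw(I, the_man, instrument=telescope)",  # I used the telescope
    "saw(I, the_man_with_the_telescope)",     # the man carried it
]
for reading in readings:
    print(reading)
```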
(20) it is instead because we are material beings embedded in the physical and historical contexts of our experience, and it turns out that what we refer to as “thought” and “language” and “self” emerge from those physical materialities.

Poststructuralism hinges on denial of substantive human nature.

Iterate poststructuralist and neoliberal.

(20) We need, then, to distinguish between the concept human prior to poststructuralism and after it—to keep in mind the object of criticism Derrida and Foucault meant to target, while not jettisoning a robust enough conception of human life to sustain political and cultural reflection.
(20-21) It would be inaccurate to say that we have passed beyond the notion of a substantive human nature in our own society; such a concept functions powerfully in popular discourse around gender, race, and sexuality, among other places. . . . Whatever our particular characteristics, we are all human, and we accept the fact that this term has little substantive content.
(21) There are nevertheless a set of capacities and concerns that characterize what we mean by human being: human beings typically have the capacity to think; they have the capacity to use one or more (human) languages; they define themselves in social relationships to each other; and they engage in political behavior. These concerns correspond roughly to the chapters that follow. In each case, a rough approximation of my thesis might be that most of the phenomena in each sphere, even if in part characterizable in computational terms, are nevertheless analog in nature.

Importance of the analog for rejecting computationalism, even for a poststructuralist human nature; however, Clark's extended cognition, countering the digital representation hypothesis, seems to leave an opening for a mixed analog and digital computationalism, especially if highlighting the involutions and convolutions of distributed agency and the vicissitudes of execution (Mackenzie and Chun).

(22) While enough frames-per-second can make digital animations appear as smooth as analog ones, there is still a translation occurring inside the computer that the animal body does not need to make. . . . Lawn mowers, toasters, drills, typewriters, elbow joints, pianos, and jaws may be mechanical, but there is no reason to suspect them of being digital (Derrida [1993] offers an excellent account of the machinic qualities of the organic world that nevertheless remain different from digital representation).

Striation is a key concept from Deleuze and Guattari, although the focus in the cultural study of computers is typically on the virtual.

(23) Nevertheless, the emphasis on the virtual as Deleuze and Guattari's chief contribution to the cultural study of computers has helped to obscure their much more sustained and meaningful ideas that center on the term striation, and that clearly have computation as an historical abstraction, and not just material computers, as their object of analysis.

We do not want to admit the overwhelming forces of striation in the hegemonies of governmentality afforded by computationalism; liberal political analysis favors two positions: democratizing technological determinism, or resistance through protocol.

(23-24) Schematized in this way, it is clear that most of the thought and practice surrounding computers promotes striated over smooth space. It is remarkable, then, how much of the cultural-political discussion of computers uses the rhetoric of smooth space while simply not addressing issues of striation—of territorialization rather than deterritorialization. . . . While the rhetoric of computation looks for those places in which the network allows for smooth practices, arguably this is not because the computational infrastructure is itself hospitable to such practices. Rather, it is because we simply do not want to admit how overwhelming are the forces of striation within computers and computation, and we grasp at precisely those thin (but of course real) networks of smoothness that remain as computers grow ever more global in power.
(24) In today's left, political analysis of computation largely focuses on one of two political possibilities. The first, expressed in liberal writings like those of Joseph Trippi and Markos Moulitsas, comes close to a kind of technological determinism: it suggests that the Internet is inherently democratizing, and we simply need to have faith that global computerization will produce democracy as a necessary side-effect.
(25) A second view, more prominent within academic and creative thought about computing, suggests that there actually are problems inside of the contemporary computing infrastructure, but that it is “through protocol that one must guide one's efforts, not against it” (Galloway 2004, 17).

Computing is our governmentality, not just an industry or a communications medium; we must resist both through protocol and, in the more sophisticated manner of Derrida (who is no Luddite), against it.

Golumbia names computing governmentality for its expansiveness, critically interrogating not just networking but the entire milieu.

(25-26) my point is to raise the question whether the shape, function, and ubiquity of the computing network is something that should be brought under democratic control in a way that it is not today. I do not think computing is an industry like any other, or even a communications-medium like any other; rather, it is a name for the administrative control and concentration powers of our society—in a sense, precisely what Foucault would call our governmentality. . . . Thus to [Alexander] Galloway's dictum I offer this simple emendation: resistance “through protocol, and against it.”
(26) Trying to broaden the space from which informed leftist thought can insist that the question of how much computer technology is used, and how and where it is used, raises questions that must be open to the polis and not simply decided by technocrats.
(26) No doubt there are a whole range of technical questions that can be left to specialists. The ubiquity of computer technology is not one of them.

Not enough evidence that computers bring the democratic actions liberal discourse proclaims, beyond social media effects, while there is plenty of evidence suggesting increased authoritarianism, especially through surveillance, and corporate fascism.

Golumbia supports Winner's notion of mythinformation.

(26-27) We don't see people who use computers extensively (modern Americans and others around the world) breaking out everywhere in new forms of democratic action that disrupt effectively the institutional power of capital (see Dahlberg and Siapera 2007, Jenkins and Thorburn 2003, and Simon, Corrales, and Wolfensberger 2002 for close analysis of some of the more radical claims about democratization), yet our discourse says this is what computers bring. Our own society has displayed strong tendencies toward authoritarianism and perhaps even corporate fascism, two ideologies strongly associated with rationalism, and yet we continue to endorse even further tilts in the rationalist direction. . . . Perhaps, despite appearances, there is a possible future in which computers are more powerful, more widespread, cheaper, and easier to use—and at the same time have much less influence over our lives and our thoughts.


PART ONE
COMPUTATIONALISM AND COGNITION
CHAPTER TWO
Chomsky's Computationalism

Chomsky was in the right place at the right time, filling the author-function for the nascent regime of computation by appealing to traditional Cartesian rationalism.

Golumbia connects Chomsky to Foucault author function for promoting rationalist objectivity.

(31) In this sense, despite Chomsky's immense personal charisma and intellectual acumen, it is both accurate and necessary to see the Chomskyan revolution as a discourse that needed not so much an author as an author-function “tied to the legal and institutional systems that circumscribe, determine, and articulate the realm of discourses” (Foucault 1969, 130).
(32) In a deliberate and also largely covert effort to resist the possibility of communist/Marxist encroachment on the U.S. conceptual establishment (which points at something far broader than institutional philosophy), individuals, government entities including the military and intelligence bodies (De Landa 1991), and private foundations like the RAND Corporation, promoted values like objectivity and rationalism over against subjectivity, collectivity, and shared social responsibility.
(32) Chomsky offered the academy at least two attractive sets of theses that, while framed in terms of a profoundly new way of understanding the world, in fact harkened back to some of the most deeply entrenched views in the Western intellectual apparatus.
(32) there is a natural ease of fit between the computationalist view and the rationalist one, and this fit is what proves so profoundly attractive to the neoliberal academy.

Must emphasize that Chomsky's computationalist stance supports governmentality and conservative power over more popular leftist politics, just as computer technology enforces entrenched power structures more than it encourages democratic gestures.

(33) We begin with Chomsky because his views combine and serve as a discursive source for the perspective that the most fundamental of human phenomena—cognition and language—can be effectively reduced to computation.

Chomsky partisans opposed to socially embedded interpretive perspectives.

Could the individualist, neoliberal stance be representative of, constituted by, or typical for the personal computing paradigm before mass internetworking instantiated socially embedded, collective paradigms, in which the individuality of personal mobile devices is permeated by a core commonality recognized as access?

(33) Thus it is almost always the case that partisans of the Chomskyan revolution are opposed to anti-individualist, socially embedded, and/or interpretive perspectives.

Famous example of Chomsky disparaging Foucault and Lacan on Usenet.

While defending Chomsky against fomenting community derision of interpretive, socially embedded perspectives and author function representatives like Foucault and Lacan, Golumbia shows how the computationalist position bootstrapped itself on the Chomsky Hierarchy.

Whether Chomsky would consider this passage about peasant experience “simple and familiar ideas dressed up in complicated and pretentious rhetoric” (per the well-known Usenet text attributed to him) does not dispel the fact that evolving ontological paradigms affect governmentality, especially now the governmentality of software, real, actually existing software embodying cyberspace in its broad sense. We must adjust our thoughts to this offset despite its depravities of style; this attunement I call human bioprogramming, whether it works evaluated iteratively or determined all at once, revealing, reflecting, and symptomatic of my fundamental, personal, idiosyncratic, lifetime-of-experience programming style.

(33n1) In a widely circulated Usenet text whose authorship Chomsky has never disputed (Chomsky 1996), and which strongly resembles many of his other writings in tone and subject matter, Chomsky explains that Foucault offers “simple and familiar ideas . . . dressed up in complicated and pretentious rhetoric” and that Lacan, whom Chomsky “met several times,” was “an amusing and perfectly self-conscious charlatan.”

Computationalism and the Chomsky Hierarchy
(34) In Syntactic Structures, Chomsky wants especially to argue for the mental reality of an entity (though not one that can be locally identified) called the language organ that exists in a material sense somewhere (the term will later become the “faculty of language” to emphasize that Chomsky does not mean there will be found a physically separate part of the brain devoted to processing language).
(34) What device did Chomsky have in mind? It has long been noted that Chomsky accepted funding for about a decade from several parts of the U.S. defense establishment.
(34-35) But the elementary linguistic theory is no arbitrary simple candidate: it is the logical structure that forms the foundation of contemporary computer science: the finite-state automata developed by Markov and von Neumann (1966) but introduced into U.S. computer science exactly via Chomsky's research.
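A minimal sketch of a finite-state automaton (a toy of my own, not drawn from the book), the weakest machine class in what became the Chomsky Hierarchy; this one accepts the regular language (ab)+.

```python
# (state, symbol) -> next state; any missing pair means immediate rejection.
TRANSITIONS = {
    ("start", "a"): "saw_a",
    ("saw_a", "b"): "accept",
    ("accept", "a"): "saw_a",
}

def accepts(string: str) -> bool:
    """Run the automaton over the string; accept only in the accept state."""
    state = "start"
    for symbol in string:
        state = TRANSITIONS.get((state, symbol))
        if state is None:
            return False
    return state == "accept"

for s in ["ab", "abab", "aab", "abb"]:
    print(s, accepts(s))   # True, True, False, False
```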
(36) Chomsky distances himself from these views—those of figures like McCarthy, Newell and Simon, who might think of themselves as inspired by Chomskyan theory—to put his system outside the lineage of which it is exemplary.
(37) The goal of the work now known as the Chomsky Hierarchy is to establish a system in which the term language, here identified exactly with syntax, can be applied both to logical formalisms like those used by computers and also to so-called natural languages.
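The dual application can be made concrete: in the sketch below (my own hypothetical illustration), one and the same rewriting formalism, a context-free grammar, defines both a fragment of English and a toy logical “language.”

```python
import random

ENGLISH = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "N":  [["linguist"], ["machine"]],
    "VP": [["V", "NP"]],
    "V":  [["sees"], ["models"]],
}

LOGIC = {  # a toy propositional "language" in the very same format
    "S": [["p"], ["q"], ["(", "S", "and", "S", ")"], ["not", "S"]],
}

def generate(grammar, symbol="S"):
    """Expand a symbol by randomly chosen rewrite rules until only terminals remain."""
    if symbol not in grammar:          # terminal symbol: emit as-is
        return [symbol]
    expansion = random.choice(grammar[symbol])
    return [word for part in expansion for word in generate(grammar, part)]

print(" ".join(generate(ENGLISH)))   # e.g. "the linguist sees the machine"
print(" ".join(generate(LOGIC)))     # e.g. "( p and not q )"
```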
(38) If throughout his intellectual life Chomsky has endured others seeing what they want to in his work, it seems clear that what happens in this case is not just computer scientists but an entire community of technologically-minded intellectuals seeing in Chomsky's work precisely the potential to do what Chomsky disclaims—to bring human language under computational control. Surely it is no leap to think that this is exactly what the defense-industrial establishment sees in Chomsky's program, which attracts the attention of precisely the logicians, computer scientists, and technicians who are looking for someone to lead them down the glory road to “machines speaking.”

Legitimating equivocation of language and logical systems.

(38) Taken together, the ideological burden of the CFG [Context-Free Grammar] essays is to legitimate the application of the word language to logical systems, despite the obvious fact that logical systems have historically been understood as quite different from human languages.

The Nature of Form in Formal Linguistics
(39) The kinds of transformations Chomsky documents from his earliest work onward (Chomsky 1957), ones which “invert” sentence structure, derive wh-questions, or make active sentences into passive ones, are all logical formulae that do not simply resemble computer programs: they are algorithms, the stuff of the computer that Alan Turing, just prior to Chomsky and in some ways coterminous with him, had identified as the building blocks for an entire exploded mechanism of calculation.
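As a concrete gloss (a toy of my own, not Chomsky's formal notation), a passivization transformation really is an algorithm: a structural rewrite applied to an analyzed clause.

```python
def passivize(subject: str, verb_past_participle: str, obj: str) -> str:
    """Structural change: [NP1 V NP2] -> [NP2 be V-en by NP1]."""
    return f"{obj} was {verb_past_participle} by {subject}"

print(passivize("the committee", "approved", "the proposal"))
# -> "the proposal was approved by the committee"
```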
(40) The emphasis on finitude, operability, precise definition, and guaranteed completion are all hallmarks of both contemporary computing science and of Chomskyan linguistics, just as they were for Turing's discovery and implementation of the computer itself.

The language organ is a mechanism that generates infinite permutations of sentences, like a theoretical Turing machine.

(40) Somewhere inside the human brain there must be a physical or logical engine, call it the language organ, whose job is to produce mathematical infinity, and the advent of this ability—also happens to be the crucial disjunction that separates humans from nonhumans and perhaps even intelligence from nonintelligence.

Attraction of Chomskyan approaches to white men and imperial cultures.

(41) Although there are many exceptions, it is still true that Chomskyan approaches have tended to attract white men (and also men from notably imperial cultures, such as those of Korea and Japan), and that women and minority linguists have tended to favor non-Chomskyan approaches.
(42) syntax refers to the restricted set of logical rules that are necessary to generate all the sentences of a given language. . . . They are largely unconscious and automatic [algorithms], and reflect the operation of the computer inside the human mind that performs all linguistic operations—the Language Organ.
(42) Thus, at the same time that Jerry Fodor, Hilary Putnam, and other analytic philosophers were working out the theory of philosophical functionalism, Chomsky and a different (if at times overlapping) set of followers were working out in rapid succession a series of theories of human language that saw a specialized kind of computer inside the human brain.
(42) They considered the Boas-Sapir-Bloomfield tradition of linguistic investigation to be old hat and outmoded . . . [whereas] Chomsky's work carried with it the penumbra of technological newness.
(43) Despite Chomsky's overt leftist politics, Chomsky's effect on linguistics was to take a field that had been especially aware of cultural difference and the political situations of disempowered groups and, in some ways, to simply dismiss out of hand the question of whether their practices might have much to offer intellectual investigation.

Compare bias for English in Chomskyan linguistics, and dismissal of cultural differences, to prevalence of English form in programming languages.

(43) The early generativists did not merely display a bias toward English; they presumed (no doubt to some degree out of lack of exposure to other languages) that its basic structures must reflect the important ones out of which the language organ operated.
(45) [Frederick] Newmeyer is one of the few writers to have attempted to explicate the use of the term formal linguistics to describe the Chomskyan tradition.
(45-46) Another way of understanding this belief in syntax is as a belief in pure, autonomous form that stands apart from the human world of performance. . . . That Chomsky is deeply interested in the operations of a supreme and transcendent authority is evident not merely from his political writings, but no less from his conduct in linguistics itself, where he is understood as a supremely dominant authority figure, accepting acolytes and excommunicating them with equal ease.

Followers of Chomsky characterized as predominantly white male computer geeks.

(46) The scholars who pursue Chomskyanism and Chomsky himself with near-religious fervor are, almost without exception, straight white men who might be taken by nonlinguists to be “computer geeks.”
(47) In the broad pursuit of CL [Computational Linguistics], which is almost indistinguishable from Chomskyan generativism but nevertheless gives itself a different name, the computer and its own logical functions are taken as a model for human language to begin with, so that computer scientists and Artificial Intelligence (AI) researchers use what they have learned to demonstrate the formal nature of human language.

Idiomaticity and iterability are paralinguistic operations outside the core operations of the faculty of language.

(48) For a purely formalist account, both idiomaticity and iterability must be seen as paralinguistic operations, outside of the core operations of the syntax that is the internal computer called the faculty of language.
(49) A plausible alternative to the Chomskyan view, then, and one held by something like the majority of working linguists . . . is that there is no engine of linguistic structure. . . . Instead, they all evolved together, as a package of cognitive and linguistic capabilities whose computational features, to the degree they are at all constitutive, make up only a small and likely indistinguishable fraction of the whole.

Cartesianism and Computationalism
(49-50) In the first chapter of Aspects [of the Theory of Syntax] and even more explicitly in Cartesian Linguistics, Chomsky outlines in the most explicit way in his career so far his identification with a particular strand of Western thought and the particular way in which it characterizes the idea of an automaton—an abstract machine, or a linguistic computer in which the human mind forms. . . . What Chomsky finds in his resurrection of Descartes is precisely a way to use the computational mind to forestall the assault on rationality that he sees expressed in thinkers like Quine.
(50) By ascribing the keyword generative to [Wilhelm von] Humboldt—despite arguing in Syntactic Structures and elsewhere that this concept is essentially new to Chomsky because of its specific formal mechanisms—Chomsky wants to show that he is part of a great intellectual tradition that should not be easily gainsaid by contemporary critics.
(51) Humboldt was the most famous advocate of the view that the races and the languages they spoke were part of an historic progression, so that some cultures and peoples were seen to be primitive forms.
(51) The notion of a “doctrine of natural rights” espoused by Humboldt also exposes the fundamentally libertarian bent of Chomsky's intellectual views.

Hearkens back to Humboldt as originator of generative linguistics, ignoring anthropologists who focused on non-Western languages.

(51-52) Thus, rather than identifying with the intellectual traditions that have for a hundred years or more in the West been associated with left-leaning politics, Chomsky reaches back to a more primal conservatism and, by omission, annihilates all that has come in between. These omissions may be more telling than Chomsky's inclusions, for what they primarily overlook is the contribution of anthropology and politics to the study of non-Western languages. The names one does not read in Chomsky are not just the post-Humboldt, anti-Hegelian 19th-century linguists William Dwight Whitney and Max Muller, but perhaps even tellingly, the early 20th-century linguist-anthropologists Franz Boas and Edward Sapir and the linguists associated with them. For in these writers the study of language was implicated, as it is today in the work of cultural studies, with particular systems of global political domination and intellectual hegemony.
(52) This strong ambivalence in Chomsky's thought—one that has been noticed, at least on occasion, by commentators including Joseph (2002)—itself deserves thorough attention, in that it embodies exactly the deepest and most explicit political tension in our own society. In his overt politics, Chomsky opposes the necessity of hierarchies in the strongest way, while his intellectual work is predicated on the intuitive notion that language and cognition require hierarchies.
(52-53) Even if what we do with our rationality is to pursue goals based on irrational beliefs and desires, which may be said to emerge from unconscious motivations, the means we use to pursue those goals are thought to be rational.

Connection from Chomskyan computationalism into philosophical functionalism.

(53) George Miller himself perhaps best embodies the view that was about to sprout from Chomskyan computationalism into philosophical functionalism. In the early 1960s, in no small part due to the reception of Chomsky's writings, the Center for Cognitive Studies (CCS) was established at Harvard by Miller and Jerome Bruner.

Cultural structures of subjectivity at the heart of computationalism, rather than belief in technological progress.

(53-54) It is precisely politics and ideology, rather than intellectual reason, that gives computationalism its power. So some of the world's leading intellects have been attracted to it, precisely because of its cultural power; and so many of them have defected from it (just as so many have defected from Chomskyan generativism) because there exists a structure of belief at its core, one implicated not in technological progress but in the cultural structures of subjectivity.

Orthodox functionalism expounded by Putnam and Fodor in terms of machine states and generative meaning.

(55) This same moment (the one that also spawned the so-called linguistics wars; Harris 1993) coincides with important periods in the work of both Putnam and Fodor. Putnam, in the late 1950s and early 1960s, expounded the views we can call orthodox functionalism or, in philosophy-internal terms, machine-state functionalism. . . . [Jerry] Fodor's work begins with the specific investigation of meaning in a generative frame.

Linguistic turn in philosophy declared by the Rorty collection, emphasizing precise formalization of mental contents via formal languages; the main issue is attaching concepts to words, which are meaningless labels.

(55-56) We know this moment in philosophy as “the linguistic turn,” in part due to a collection edited by Richard Rorty by that name (Rorty 1967), which stressed the ordinary-language tradition of Wittgenstein and Austin that Fodor and Katz exclude (although they are careful not to disparage it as well). . . . It meant a conception of language that could allow mental contents to be formalized precisely because they use formal mechanisms for expression, what Fodor would formulate as the Language of Thought (Fodor 1975).
(56) It is a language for which the all-important issue is the attaching of concepts—mental contents—to the meaningless labels we call words.

Hardware and software model of brain and mind.

(56-57) By turning these methods and their concomitant linguistic analyses to the terms of philosophical theory, these thinkers could mount a new assault on the traditional mind/body problem armed with a clear and yet paradoxical tool out of computer science. The brain was not exactly a computer and the mind was not exactly software, but the hardware/software model is still the best metaphor we have for describing the brain, so much so that thinking and computation are almost indistinguishable.

Quine's holism seems to offend humanist intuitions of individual creativity in Chomsky.

(57) Despite a lack of overt acrimony, it is clear that Quine represents exactly an instance of a discourse that Chomsky needed to displace in Western philosophy. . . . Chomsky's conception of language exists in interactive tension with a rationalist conception of the mind that takes its inspiration from a notion of individual human creativity. Quine's largely holistic thought risks coming close to an anti-individualism that offends Chomsky's most basic humanist intuitions.


CHAPTER THREE
Genealogies of Philosophical Functionalism

(59) as proposed by Hilary Putnam and subsequently adopted by other writers, functionalism is a “model of the mind” according to which “psychological states ('believing that p,' 'desiring that p,' 'considering whether p', etc.) are simply 'computational states' of the brain. The proper way to think of the brain is as a digital computer.”
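A hedged sketch (my own toy, not Putnam's actual formalism): machine-state functionalism individuates a psychological state purely by its place in a state table, its transitions on inputs and its behavioral outputs, exactly like a state of a Turing machine.

```python
# (current_state, input) -> (next_state, behavioral_output)
STATE_TABLE = {
    ("desiring_coffee", "sees_coffee"): ("believing_coffee_near", "reach"),
    ("believing_coffee_near", "grasps_cup"): ("satisfied", "drink"),
}

def step(state: str, stimulus: str):
    """Transition on a stimulus; unknown pairs leave the state unchanged."""
    return STATE_TABLE.get((state, stimulus), (state, "no_response"))

state = "desiring_coffee"
for stimulus in ["sees_coffee", "grasps_cup"]:
    state, behavior = step(state, stimulus)
    print(f"{stimulus} -> {state} / {behavior}")
```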

Messianic understanding of computing at base of functionalism.

(60) Functionalism has its roots in explicit political doctrine, a series of cultural beliefs whose connection to the philosophical doctrine per se is articulated not as individual beliefs but as connected discourse networks, ranging from the history of philosophy and linguistics to the history of military technology and funding. . . . Put most clearly: in the 1950s both the military and U.S. industry explicitly advocated a messianic understanding of computing, in which computation was the underlying matter of everything in the social world, and could therefore be brought under state-capitalist-military control—centralized, hierarchical control. The intellectuals who saw the promise of computational views did not understand that they were tapping into a vibrant cultural current, an ideological pathway that had at its end something we have never seen: computers that really could speak, write, and think like human beings, and therefore would provide governmental-commercial-military access to these operations for surveillance and control.

Language and Analytic Functionalism
(60) The critical nexus of philosophers and linguists who emerge in Quine's wake—and in ambivalent relationship to him—all studied with Chomsky at MIT and many completed their PhDs there; several then migrated to Harvard.
(61) The explicit connections made by philosophers between functionalism and other cultural views situate this intellectual practice with particular strength within a political sphere that is often thought to be not a context for but the subject of philosophy.
(61) The figures most associated with functionalism in an historical sense are the Harvard philosopher Hilary Putnam and his student and later MIT philosopher Jerry Fodor.
(62) In “Minds and Machines,” Putnam puts forth not merely the intuition that brains and digital computers are the same thing, but also the idea that “the various issues and puzzles that make up the traditional mind-body problem are wholly linguistic and logical in character” (Putnam 1960, 362)—which coincides in many ways with the grammatical intuition that Chomsky has put forward in the CFG and early generative grammar work.
(62) Putnam's “Minds and Machines” proceeds along lines inspired at least in part by Wittgenstein and by Turing. . . . Putnam seems to be endorsing Turing's method in “Computing Machinery and Intelligence,” namely, rejecting the question “can machines think?” by replacing it with a question about the way machines and human beings each use language.
(63) But the difference between [Paul] Ziff's approach and Chomsky's is that Ziff is not trying to prove that machine grammars and linguistic grammar are isomorphic. In almost all of his later work, it is this central thesis that Putnam comes to doubt in the functionalist program.

Fodor's Mind
(63) As a student of both Chomsky and Putnam, Jerry Fodor articulated the most elaborated version of functionalism in the analytic literature.
(64) What Fodor and Chomsky characteristically share, and what helps to expose the cultural investments of the rest of their views, is their deep belief in a rationalist psychology and their disdain for views that put too much weight on meanings that are produced outside the observable human mechanism.

Atomic versus holist view of meaning-mind relations.

(66) What is important to LOT is what has always been important to Fodor: “LOT claims that mental states—and not just their propositional objects—typically have constituent structure” (Fodor 1987, 136). In later work Fodor comes to call this view a compositional or atomic (as opposed to holist) view of meaning-mind relations (see especially Fodor 2000 and Fodor and Lepore 1992).
(69) The atomic story is the story of command and control—manage the units by identifying them and labeling them. What emerges from their combined product is Statist and individualist, because there is an individual who combines the state and the person in one—namely, the familiar and authoritarian figure of the King, God as King, or what Derrida reminds us to think of as the transcendental signifier.
(69) Inside the field this appears innocuous enough, but in a larger context the forces of relativism would seem to align exactly with the philosophical and literary enemies Fodor obliquely names, and that Chomsky shares: deconstruction, cultural studies, feminism and race-oriented studies, historicism, identity politics, postcolonialism—because within the larger cultural frame this is how these theories line up.
(70) By the mid-1990s, Fodor was starting to have doubts about the functionalist enterprise (and, a fortiori, mainstream cognitive science), and his writings begin to show these doubts in rhetorically interesting ways.
(70-71) Even more characteristic of Fodor's recent thought, though, is the pinpoint focus on “Turing's syntactic account of mental process.” . . . This doctrine is based not in any specific observation but rather, much as the behaviorism Chomsky attacked, in philosophical intuition.
(71) In a festschrift for Fodor (Loewer and Ray 1991) that features contributions from many leading analytic philosophers, another leading philosopher of mind, Daniel Dennett, matches Fodor's typical rhetoric in an essay called “Granny's Campaign for Safe Science” (Dennett 1991).

State appeal for striation, innate capitalism.

(72) What Granny endorses are exactly those tenets of Intentional Realism which Fodor sees from God's perspective in the “Creation Myth.” . . . The strength not of Fodor's commitment to this view, but to the philosophical community's interest in Fodor's commitment, is precisely the ideological strength of the State's investment in this view—in the view that the mind itself can be subjected to order, or in Deleuze and Guattari's terminology, striated. . . . Fodor's view is that in particular the term “belief has a functional essence . . . Ditto, mutatis mutandis, 'capitalism,' 'carburetor,' and the like” (Fodor 1998a, 8).

The Cultural Politics of the Computer Model of the Mind
(73) No aspect of this story is more revealing than the way in which Fodor's PhD advisor and Chomsky's high school classmate Hilary Putnam rises to prominence in part on a wave of functionalist doctrine, and then to a second prominence by developing extremely thorough arguments to pull that doctrine apart.
(76-77) Computationalism, when harnessed by the power of State philosophy, underwrites an extremely specific and contemporary notion of the self that is tied closely to the kind of subjectivity displayed by powerful leaders in our society. Instrumentalist reason is now the highest form of reason; means are everything, because the ends will be taken care of by themselves (by market-driven Providence, manifest in a technological nature). The self is not part of the world, but must attain mastery over it; subjects who cannot attain mastery have earned their almost-inevitable fate. To attain mastery is to understand that not all discourses are equally privileged; discourses are hierarchical, and at the top of the hierarchy is Science.
(77-78) What there is instead of reference is exactly what all the thinkers that Fodor and Chomsky and others conceptualize as the enemy posit. It is also that what Putnam refuses to see is the point of Derrida's writings, just as it is for Putnam in at least some of his phases, and what it is for Heidegger in “The Question Concerning Technology.” As Putnam paraphrases Wittgenstein, “we have no other place to stand but within our own language game” (Putnam 1992, 172). . . . The logical “languages” that Chomsky wants to put on a hierarchy with English and other natural languages do not play the game; they do not have “play,” in Derrida's famous way of putting it. . . . But while the ideal is Mr. Spock, the reality is Kirk: blind obedience, hierarchical order, the State as discovering techno-militarists, putting itself in the God's-eye seat of judgment over all cultures, even if its best (Kantian) hope is to never violate the needs of those beneath it. Kirk's gender, his relation to gender, and his race, the State, and the state of being the colonizer, are inextricably bound to these definitions, despite the wish to make them autonomous. The sense that we are the primitive being, that the Id (or, in more accurate translations of Freud, as suggested by Deleuze and Guattari 1983, “it”) is part of us, that our language might be outside the idealized realm of the rational, that our category of the animal might include precisely some of the most important features we use to define our selves—all these ideas are present in Wittgenstein and anathema to computationalism.

Instead of reference, the natural language games; Kirk instead of Spock; compare to entrenched software systems (Mackenzie).

(79) While in some ways there are strange bedfellows among the constellation of views collected under the headings of mentalism, instrumentalism, functionalism, objectivism, computationalism, and fundamentalist religious belief-cum-repressed capital consumerism, it is also remarkable how closely aligned are these forces in contemporary culture. . . . Putnam is right to see that what ties these views together is ultimately a doctrine that can only be called religious in nature: what he calls a “God's-Eye View.” . . . This is exactly what Derrida means when he says that “a 'madness' must watch over thinking.”
(80) We do not know how to formalize this “madness,” this “irreason” that nevertheless has a long tradition even within Western Enlightenment thought, as something like counter-Enlightenment. Most importantly, we do not want to know how: it is critical not to our self-understanding but to our social practice itself that we as social beings can escape whatever formalization we manufacture. . . . There is nothing mysterious about the fact that computers have trouble following such conflicting tendencies. More remarkable is our persistent expectation that they can or should—which is to say, our persistent belief that we can “ultimately” isolate mind away from its particular embodiment (see Lakoff and Johnson 1999), and no less its social matrix.


PART TWO
COMPUTATIONALISM AND LANGUAGE
CHAPTER FOUR
Computational Linguistics

(83) Rather than surveying these discourses, then, and in a fashion that comes to be characteristic of computationalist practice, the figures we associate with both the mechanical and intellectual creation of computers—including Turing, von Neumann (1944), Shannon, and Konrad Zuse (1993)—simply fashioned assertions about these intellectual territories that meshed with their intuitions. These intuitions in turn reveal a great deal about computationalism as an ideology—not merely its shape, but the functions it serves for us psychologically and ideologically.
(83) Perhaps the easiest and most obvious way to show that a computer was functioning like a human brain would be to get the computer to produce its results the same way a human being (apparently) does: by producing language.
(84) Computers invite us to view languages on their terms: on the terms by which computers use formal systems that we have recently decided to call languages—that is, programming languages. . . . Inevitably, a strong intuition of computationalists is that human language itself must be code-like and that ambiguity and polysemy are, in some critical sense, imperfections.

Universal translator from Star Trek reveals cultural beliefs about languages and future hopes of computer abilities.

(85) Like the Star Trek computer (especially in the original series; see Gresh and Weinberg 1999) or the HAL 9000 of 2001: A Space Odyssey, which easily pass the Turing Test and quickly analyze context-sensitive questions of knowledge via a remarkable ability to synthesize theories over disparate domains, the project of computerizing language itself has a representational avatar in popular culture. The Star Trek “Universal Translator” represents our Utopian hopes even more pointedly than does the Star Trek computer, both for what computers will one day do and what some of us hope will be revealed about the nature of language. Through the discovery of some kind of formal principles underlying any linguistic practice (not at all just human linguistic practice), the Universal Translator can instantly analyze an entire language through just a few sample sentences (sometimes as little as a single brief conversation) and instantly produce flawless equivalents across what appear to be highly divergent languages. Such an innovation would depend not just on a conceptually unlikely if not impossible technological production; it would require something that seems both empirically and conceptually inconceivable—a discovery of some kind of formal engine, precisely a computer, that is dictating all of what we call language far outside of our apparent conscious knowledge of language production. In this way the question whether a computer will ever use language like humans do is not at all a new or technological one, but rather one of the oldest constitutive questions of culture and philosophy.

Cryptography and the History of Computational Linguistics
(85) Chomsky's 1950s work was funded by DARPA specifically for the purposes of Machine Translation (MT), without regard for Chomsky's own repeated insistence that such projects are not tenable.

Linguistic theories of early computer engineers stem from their experience with computers rather than study of linguistics.

(86) But the early computer engineers—Turing, Shannon, Warren Weaver, even the more skeptical von Neumann (1958)—had virtually no educational background in language or linguistics, and their work shows no signs of engaging at all with linguistic work of their day. Instead, their ideas stem from their observations about the computer, in a pattern that continues to the present day.

Weaver's Machine Translation of Languages ignores prior linguistics and begins with his own private memorandum on translation; compare to the Burks, Goldstine, and von Neumann claim that it would take us too far afield to start from first principles.

(86-87) In a pathbreaking 1955 volume, Machine Translation of Languages (Locke and Booth 1955), Weaver and the editors completely avoid all discussion of prior analysis of language and formal systems, as if these fields had simply appeared ex nihilo with the development of computers. . . . Like some computationalists today, Weaver locates himself in a specifically Christian eschatological tradition, and posits computers as a redemptive technology that can put human beings back into the prelapsarian harmony from which we have fallen.
(87) In an “historical introduction” provided by the editors, the history of MT begins abruptly in 1946, as if questions of the formal nature of language had never been addressed before. . . . The book itself begins with Weaver's famous, (until then) privately circulated “memorandum” of 1949, here published as “Translation,” which was circulated among many computer scientists of the time who dissented from its conclusions even then.
(88-89) The most famous part of Weaver's memorandum suggests that MT is a project similar to cryptanalysis, one of the other primary uses for wartime computing. . . . Neither Enigma nor the Bombe could translate; instead, they performed properly algorithmic operations on strings of codes, so that human interpreters could have access to the underlying natural language.
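The disanalogy can be shown in a few lines (my hypothetical sketch, not Weaver's or Golumbia's): a cipher is a reversible algorithm over symbols, so a single function inverts it exactly; no analogous inverse recovers a sentence from its rendering in another natural language.

```python
def caesar(text: str, shift: int) -> str:
    """Shift each lowercase letter by `shift`; leave other characters untouched."""
    return "".join(
        chr((ord(c) - 97 + shift) % 26 + 97) if c.islower() else c
        for c in text
    )

ciphertext = caesar("attack at dawn", 3)
print(ciphertext)              # "dwwdfn dw gdzq"
print(caesar(ciphertext, -3))  # "attack at dawn", recovered exactly
# caesar(x, -k) perfectly inverts caesar(x, k); nothing comparable maps
# "attack at dawn" back from a French or German translation of it.
```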

Illegitimate analogy between code and language in Weaver memorandum.

(89) Weaver's intuition, along with those of his co-researchers at the time, therefore begins from what might be thought an entirely illegitimate analogy, between code and language, that resembles Chomsky's creation of a language hierarchy, according to which codes are not at all dissimilar from the kind of formal logic systems Chomsky proves are not like human language.
(90) Weaver's combinatoric argument fails to address Wiener's chief point, namely that human language is able to manage ambiguity and approximation in a way quite different from the way that computers handle symbols.
(91-92) This strange view [positing an
Ursprache], motivated by no facts about language or even a real situation that can be understood physically, nonetheless continues to inform computationalist judgments about language.
(92) The crux of Wiener's and Weaver's disagreement can be said to center just on this question of univocality of any parts of language other than directly referential nouns.
(92-93) Weaver's principal intuition, that translation is an operation similar to decoding, was almost immediately dismissed even by the advocates of MT as a research program. . . . yet at some level, the intuition that language is code-like underwrites not just MT but its successor (and, strangely, more ambitious) programs of CL and NLP.

From MT to CL and NLP
(93) It is often possible to hear computationalists without firm knowledge of the subject asserting that Google's translator is “bad” or “flawed” and that it could be easily “fixed,” when in fact, Google has devoted more resources to this problem than perhaps any other institution in history, and openly admits that it represents the absolute limit of what is possible in MT.
(94) The visible success in some of these programs (especially ones centered around speech synthesis and statistical analysis, neither of which has much to do with what is usually understood as
comprehension; Manning and Schutze [1999] provides a thorough survey) leads to a popular misconception that other programs are on the verge of success.
(94-95) Two of the most successful CL programs are the related projects of TTS and voice recognition: one using computers to synthesize a human-like voice, and one using computers to substitute spoken input for written input. . . . There is no question of “comprehension” in these systems, despite the appearance that the computer does what the speaker says—any more than the computer understands what one types on a keyboard or clicks with a mouse.
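A hedged sketch of that point (the command strings are mine, hypothetical): once a recognizer emits text, the machine dispatches on the token exactly as it would on keystrokes, with no semantics anywhere in the loop.

```python
# Toy dispatch table: recognized speech is just another input string.
COMMANDS = {
    "open file": lambda: print("opening file dialog"),
    "save": lambda: print("writing buffer to disk"),
}

def handle(utterance: str) -> None:
    """Dispatch a recognized string; identical logic would serve a keyboard."""
    action = COMMANDS.get(utterance.lower().strip())
    if action:
        action()
    else:
        print(f"unrecognized input: {utterance!r}")

handle("Save")         # writing buffer to disk
handle("save us all")  # unrecognized input: 'save us all'
```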

Problems of intonation and suprasegmental melodies in text-to-speech synthesis and recognition cross into the territory of Barthes's “grain of the voice,” although most research focuses on written text as its basis.

(96) Among the best-studied and most interesting issues in TTS is the question of intonation. . . . Human speakers and hearers use these tonal cues constantly in both production and reception, but the largely analog nature of this intonation and its largely unconscious nature have made it extremely difficult to address, let alone manage, in computational systems.
(97) The correspondence between apparently linguistic tokens and suprasegmental melodies is something that computational researchers have hardly learned how to manage for purposes of recognition, to say nothing of natural language generation.
(97-98) Instead, putting aside the difficulties in this aspect of speech production, most CL and NLP research from the 1960s until recently has bypassed these issues, and taken the processing of written text in and of itself as a main project. . . . By focusing on written exemplars, CL and NLP have pursued a program that has much in common with the “Strong AI” programs of the 1960s and 1970s that Hubert Dreyfus (1992), John Haugeland (1985), John Searle (1984, 1992), and others have so effectively critiqued.

SHRDLU and the State of the Art in Computational Linguistics
(98-99) [Terry
Winograd's] SHRDLU's model world persists as one of the most famous constructions in the history of computer science. Its world can be depicted in both physical space and simulated in the computer, such that the various programs contributing to SHRDLU can “perceive” objects in the model world in the same way they might be metaphorically said to perceive the elements of any running computer simulation.
(100) Like many CL and NLP projects, SHRDLU gestures in two directions at once: it hints at building more complicated systems that could handle more human language; and it also presents a model of what might be going on in the human brain. Notably, however, this model is much less subtle than Chomsky's view of linguistic computation; instead, the system relies exclusively on expressions from formal logic.
(101) SHRDLU resembles the human linguistic facility so closely that many AI researchers took it to be a clear proof of concept for much more robust and usable CL systems. Underlying this view, though, is what can only be called a computationalist view of language itself.
(101) Phenomena that do not appear in SHRDLU's explicit representations can't be added to the system via conversation (as nonce words can be); instead, they, too, must be programmed into the computer.
(102) The human child, unlike the SHRDLU program, must create or realize its linguistic facility solely through conversation and introspection: positing a programmer to write routines comes dangerously close to endorsing a kind of intelligent-design view of the development of language. The child's grasp of logic is not simply an imperfectly matured system, but a part of a complex interaction between individual and environment whose configuration is not at all clear.
(102) In fact, like all completely computer-based languages, or what should better be called formal systems, terms defined in SHRDLU's language are entirely contained within the world of form; they do not make reference to the larger world and its web of holistically defined and contextually malleable language uses.
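A minimal blocks-world sketch in the spirit of SHRDLU (my toy, not Winograd's code): every “meaning” is exhausted by an entry in the model, which is what it means to be contained entirely within the world of form.

```python
# Each object is only its dictionary entry; "pyramid" denotes nothing
# beyond the model itself.
WORLD = {
    "block1":   {"shape": "cube",    "color": "red",   "on": "table"},
    "pyramid1": {"shape": "pyramid", "color": "green", "on": "block1"},
}

def what_is_on(support: str) -> list:
    """Answerable entirely by scanning the model; no reference outside it."""
    return [name for name, obj in WORLD.items() if obj["on"] == support]

print(what_is_on("block1"))  # ['pyramid1']

# A new concept ("fragile," metaphor, irony...) cannot enter by
# conversation: someone must edit WORLD or its predicates -- which is
# exactly the intelligent-design worry raised above.
```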

Winograd abandoned SHRDLU after realizing it is a closed formal system requiring programming to extend meaning.

(103) For these reasons and other similar ones, Winograd himself abandoned the SHRDLU project and, under the influence of Heideggerian thought such as that found in Dreyfus's writings, began to think about the ways in which computers as machines interact with humans and function in the human world, and to see language as a practice that can only be understood in context.


CHAPTER FIVE
Linguistic Computationalism

(105) Somewhat remarkably, but in a strong testament to the power of computationalist ideologies, outside the sphere of professional CL researchers, we find a much less skeptical group of researchers who do, in fact, believe that much more significant strides in computer processing of language are just around the corner.

Computationalism and Digital Textuality

OHCO thesis in literary studies; Renear typologies of Platonism, Pluralism, Antirealism.

(105) During the last fifteen years, a small body of writing has emerged that is concerned with an idea called in the literature the OHCO thesis, spelled out to mean that texts are Ordered Hierarchies of Content Objects.
(106) But the particular shape taken by the initial OHCO thesis is highly revealing about a computational bias: a gut feeling or intuition that computation as a process must be at the bottom of human and sometimes cultural affairs,
prior to the discovery of compelling evidence that such a thesis might be correct.
(106) Recent work on the OHCO thesis suggests that its most restrictive versions are untenable just because of deep conceptual issues that emerged only through prolonged contact with real-world examples.
(106) [Allen]
Renear develops a philosophical typology via three seemingly familiar positions, which he calls Platonism, Pluralism, and Antirealism.
(107) Despite the scantiness of the discussion in Renear (1995), it remains one of the only serious examinations in the OHCO literature of the thesis that text in the abstract is fundamentally hierarchical.
(109) Rather than the Platonism/Pluralism/Antirealism scheme that Renear offers, a more useful typology in this context can be found in the longstanding schism between Rationalism on the one hand, and something we might variously see as Empiricism, Pragmatism, or Antirealism on the other, which we will here call anti-rationalism. One place to put the stake in between these two camps is precisely language itself. According to the first camp, language is very precise and orderly, ultimately based on one-to-one correspondences with things in the real world, which account for our ability to know—a means of cognition (rationality) that helps to explain our presence in the world.
(109) According to the second camp, language is often very “imprecise” and “disorderly” (if these terms have any precise meaning), based in social and cultural accommodation and history, and more about people acting together (communication) than about cognition. To the degree that language and cognition are intertwined—and this degree is quite large—it becomes difficult to see whether language is “deformative” or “constitutive” of cognitive practices themselves.
(110) Each of these groups has exerted a fair amount of influence over the discussion of textuality, but even more broadly, each has exerted much influence over theories of language. More forcefully, neither one of these groups would implicitly endorse an off-hand distinction between
text and language, in the sense that we might expect text to have structuring properties different from those of language per se.
(111) This drives us back to that original intuition that formed OHCO in the first place: that it is possible, desirable, reasonable, or useful to try to
describe the structure of texts in markup based on SGML. . . . But it is instructive to think about what Microsoft Word does in this regard: it does mark every document as a hierarchical object with XML-like metadata, but it does not try to say much about the content of the document in that structure.
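A small illustration of that contrast (the attribute tags in the second example are hypothetical): structural markup records only where things sit in a hierarchy; semantic markup would have to assert what they are, and nothing in the formal system can validate those assertions.

```python
# Structural vs. "semantic" markup, sketched with the standard library.
import xml.etree.ElementTree as ET

structural = ET.fromstring(
    "<doc><sec><head>I.</head><p>Call me Ishmael.</p></sec></doc>"
)
print([el.tag for el in structural.iter()])  # ['doc', 'sec', 'head', 'p']
# The hierarchy above is fully computable -- this is all Word-style
# formats really commit to.

semantic = ET.fromstring(
    '<doc><sec topic="whaling"><p narrator="unreliable">'
    "Call me Ishmael.</p></sec></doc>"
)
# Nothing in the string can validate "topic" or "narrator"; those
# judgments live outside the formal system -- the OHCO difficulty.
```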

Language Ideologies of the Semantic Web
(113) The main beneficiaries of the implementation of structured data on the web would be commercial and financial, because commercial and financial institutions are the ones being harmed by their inability to mark up
data—not really text—and to use the ubiquitous Internet as a means to avoid the use of private networks.

Database model fitting for business and financial data but hardly for language and texts, so why the XML hype?

(113) For this purpose, XML and its variants are really profoundly effective tools, but that does not altogether explain why the web-user community and academics have been so taken with XML. . . . little about language to begin with, let alone “texts in particular,” let alone business documents, resembles a database.
(114) Related to this is the hard distinction between form and content that a database model of text implies.
(115) For a project like the Semantic Web to work, something more than the statistical emergence of properties from projects like del.icio.us and other community tagging projects is going to be needed.
(116) It has been quite astonishing to see the speed with which XML and its associated technologies have not merely spread throughout humanities computing, but have become a kind of conversion gospel that serves not merely as motivation but as outright goal for many projects. . . . Humanities researchers don't produce databases: they produce language and texts. Language and texts are largely intractable to algorithmic computation, even if they are tractable to simulated manipulation on the computer.

Distrust of having programmers in digital humanities may be based on the assumption that their sole goal is to create XML databases, which is certainly a position in the philosophy of programming; however, dismissal of XML becomes dismissal of programming the humanities, which delineates a philosophical position whose net effect is turning away from the activities I group together as machines and humans collectively working code. Imagine a twist to the day countback operation such that if the next one is not of the same day then the next sentence is substituted for the plus and minus next days PHI.

(116-117) Some humanists may benefit from extensive XML markup, but for the most part what archives do is limited, precisely because it turns out that best archiving practices show that minimal markup leads to longest life and maximum portability. . . . Surely the last thing we want is to say that digital humanists must be programmers. It is great to have some programmers among us, but it is vital to have nonprogrammers as well. Certain trends in computer history suggest a move toward making programmatic capabilities more available to the user, and less restrictively available to programmers alone, exactly through demotic tools like HTML, BASIC, and Perl. The way to do this with XML, I would suggest, is to incorporate it into relevant applications rather than to insist that humanities scholars, even digital humanists, must by definition be spending the majority of our time with it.
(117) There is a great deal of interest in language and what language can do for computers, on how programming languages can be more responsive to human language. What seems so strange about this is that it presents as settled a series of questions that are in fact the most live ones for investigation. We don't already know how language “works”; we are not even sure what it would mean to come to consensus about the question.
(117) Language: despite its claims to global utility, the web is particularly monolingual. The irony of being offered a remarkable tool at the price of sacrificing one's language is exactly that with which many in the world are presented today. . . . the W3C and digital text communities prefer to work on ways to theoretically expand the linguistic capabilities of the existing web, by wrapping largely English-only lexemes around existing, meaningful text.
(118) Google's power, simplicity, and success and its utility for us relies in part on the assumption that much web data will be largely unmarked—except, ideally, for library card like metadata that is already incorporated into web applications. This makes it difficult to see how searching might be
better if text were widely marked up, especially if plain-text versions of texts are not offered in addition to richly marked versions.
(118) I see nothing in particular in the W3C proposals suggesting that raw XML should become an authoring tool, and I do not see anything in Oxygen or other XML tools to suggest that they are meant to be used as primary document processors.
(119) Digital text, for the most part, is just text, and for the most part text is just language. Language, a fundamental human practice, does not have a good history of conforming to overarching categorization schemes. . . . where semantic markup is concerned, less is more.

Monolingualism of the World Wide Web
(119) Another tempting but inaccurate analogy between programming languages and natural languages can be found along the axis of
linguistic diversity. It is no accident, and no less remarkable, that insofar as something akin to natural language “runs” computers, that language would have to be identified with contemporary standard written English.

Predominance of English words, imperative statement forms, and standardization integral to most programming languages and system-level interfaces.

(120) Few English-only speakers realize the implications of the fact that almost all programming languages consist entirely of English words and phrases, and that most operating systems are structured around command-line interfaces that take English writing, and specifically imperative statements, as their input (Lawler 1999). . . . It seems no accident that computers rely on the availability of standardized text, and that persons who are fluent in computer engineering emerge from cultures where English-style standardization is produced and often enforced.
(120-121) In at least three, connected ways, the web looks like an instrument of multilingualism, but on closer examination seems largely to be organized around Westernized, English-based categories and language concepts. First, the HTML for most web documents, the markup which surrounds the document's content (along with JavaScript and several other noncompiled script languages), is fundamentally in English, so that it is necessary to understand the English meaning of certain words and abbreviations in order to read a document's source (and in a critical sense, to interpret the document). Secondly, web servers and web software are themselves usually confined solely to English. . . . Related to this is the fact that the entire operating system world is run by English products and with English directory structures.
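The point about English keywords can be made concrete (the code fragment below is mine): strip a routine to its reserved words and what remains is a string of English imperatives.

```python
# Extract the reserved words from a code fragment; every one of them
# is an English word or abbreviation.
import keyword

source_fragment = """
if not done:
    for item in queue:
        try:
            process(item)
        except Exception:
            continue
    return True
"""

tokens = [w.strip(":") for w in source_fragment.split()]
print([t for t in tokens if keyword.iskeyword(t)])
# ['if', 'not', 'for', 'in', 'try', 'except', 'continue', 'return', 'True']

# A reader with no English must memorize these as opaque glyphs; a
# reader of standard written English gets the mnemonics for free.
```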

Computer revolution as vehicle for spread of dominant standard written English.

(121) We have been taught to think of the computer revolution as a fundamental extension of human thinking power, but in a significant way mass computerization may be more accurately thought of as a vehicle for the accelerated spread of a dominant standard written language.
(121-122) The problem with this spread, which must always be presented as instrumental, and is therefore always profoundly ideological, is that it arises in suspicious proximity to the other phenomena of cultural domination toward which recent critical work has made us especially sensitive. I am thinking here of the strong tendency in the West (although not at all unique to us) to dismiss alternative forms of subjectivity, sexuality, racial identity, gender, kinship, family structure, and so on, in favor of a relatively singular model or small set of models.
(122) This has become so much the case that we have encased in terms like
globality and modernity the apparently inevitable spread of written English and other European languages (Spivak 1999).
(123) In an interesting sense, these [embedded] computers and languages are more language-neutral than are the great bulk of “proper” computers with which the general public is familiar, but their pervasiveness gives the lie to the view that our computing infrastructure is the only one possible; rather, the “English-only” characteristic of modern computers is clearly a layer that mainly serves social, and therefore ideological, functions.
(123-124) In this way computers serve one of the most disturbing and the least-considered powers of globalization: the economic power to tell people that their way of doing things is worth less than culture empowered with modern technology, that their ways of life are stagnant and uninteresting, and that to “make something of themselves” they must “get on board” with modernity, represented by the computer and the significant education computing requires, often (due to children being the ones who feel this pull most strongly) to the real disparagement of whatever they choose to leave behind.
(124) Networked computing has helped to inform people of the eradication of minority languages, but often in a frame that suggests these languages are already “lost,” and in any case so “backwards” and nontechnological that they are of little use to the modern age.

Criticism of OLPC leads to another key question for the philosophy of computing and programming: whether energy should be spent trying to do things differently, engineering a less majoritarian computing infrastructure, if that is possible; note this is a different question from the one Weizenbaum ponders.

(124-125) There are few more obviously disturbing applications of such thinking than in the One Laptop Per Child (OLPC) project spearheaded by Nicholas Negroponte. . . . There could be almost no more efficient means of eradicating the remaining non-Western cultures of the world than to give children seductive, easy-to-use tools that simply do not speak their languages. . . . Like the Semantic Web, the fact that such resources are profoundly majoritarian is considered entirely secondary to the power they give to people who have so little and to the power such projects create. Of course disadvantaged people deserve such access, and of course the access to computer power will help them economically. The question is whether we should be focusing much more of our intellectual energy making the computer infrastructure into an environment that is not majoritarian, rather than spending so much of our capacity on computerizing English, translating other languages into English, getting computers to speak, or, via projects like the Semantic Web, getting them to “understand” language for us.


PART THREE
CULTURAL COMPUTATIONALISM
CHAPTER SIX
Computation, Globalization, and Cultural Striation

(129-130) Among the most powerful tools for what I will call
oligarchical capitalism is the use of large-scale pricing power to manipulate human behavior and the actions of the working class so as to deprive them of real political choice and power, in the name of apparently laudable goals like efficiency, personalization, individual desire and need.

Cultural Striation
(130) Financial, legal, and health care information often exceeds the mere recordkeeping functions of which the public is aware; software applications in these industries enable so-called data mining and other forms of analysis that allow the creation of what can only be understood as new knowledge, which is completely inaccessible to the general public. Yet this information is used exactly and explicitly for the control of public behavior, for the manipulation of populations and individuals as well as for the small-scale provision or denial of services.

Proprietary software like Claritas PRIZM creates new knowledge about consumers, applying cultural striations with far-reaching effects that could be seen as disturbingly racialist, in order to sustain oligarchical capitalism.

(131) Claritas and its forerunners have used computers to develop a suite of marketing applications which striate the consuming population into statistical aggregates that allow pinpointed marketing, financing, advertising, and so forth.
(131) Typical among the products Claritas produces is one called Claritas PRIZM, which “divides the U.S. consumer into 14 different groups and 66 different segments” (Claritas 2007).
(133) The methods and categories employed by Claritas PRIZM exemplify the computationalist view of culture. Despite neoliberal claims to equal access democracy and to the cultural power of multiculturalism and antiracist discourse, the PRIZM categories could not be more explicitly racialist.
(133) Governmental oversight has not found a way to manage—if legislators are even aware of them—the complex and highly striated groupings created by packages like PRIZM, despite the fact that they are in some ways at least as culturally disturbing as straightforward racist policies.
(134) PRIZM is just a tool to striate the U.S. population at a level of sophistication beyond that which most human beings can conceptualize without computational assistance, and in this sense both its existence and its effects can be and have been essentially hidden in plain sight; like the complex operations of airline ticket pricing software, it has grown so effective that its functions can be exposed through their effects with little fear of public reprisal.
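A toy version of the striation PRIZM performs (the segment labels echo Claritas's published ones, but the thresholds and logic here are invented): reduce a household to a few quantities, then file it in a marketing bucket it never sees.

```python
# Hypothetical two-variable segmentation; the real product uses many
# more variables and 66 segments, but the operation is the same.
def segment(household: dict) -> str:
    income, urban = household["income"], household["urban"]
    if income > 150_000:
        return "Upper Crust" if urban else "Country Squires"
    if income > 60_000:
        return "Bohemian Mix" if urban else "Big Sky Families"
    return "City Roots" if urban else "Back Country Folks"

print(segment({"income": 45_000, "urban": True}))    # City Roots
print(segment({"income": 200_000, "urban": False}))  # Country Squires

# The categories are computed about you, sold to others, and invisible
# to you -- hidden in plain sight, as the excerpt above puts it.
```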
(134-135) the lack of awareness of the general public about such tools and their use in politics and public institutions suggests that they continue, today, to be used primarily for the enrichment of the power elite (Mills 1956).

Computationalist World History

Many real-time strategy computer games exhibit the same computationalist world view, especially Microsoft Age of Empires; related to procedural rhetoric.

(135) In fact, within a wide variety of computer games, one can see a process exactly isomorphic to such software applications, in which quantified resources are maximized so that the player can win the game.
(136) Playing an RTS game such as
Warcraft, Starcraft, Civilization, Alpha Centauri, Age of Empires, Empire Earth, or any of a number of others, one experiences the precise systemic quality that is exploited for inefficiency in ERP software and its particular implementations.
(137) Instead of a paradise of informatic exchange, the world of computer representations is a bitter and political one of greedy acquisitiveness, callous defeat, and ever-more-glorious realizations of the will-to-power. It seems inevitable that the result of such pursuits is most commonly boredom. . . . The game thus “covertly” satisfies the unconscious, but offers only traumatic repetition as satisfaction, more consumption as the solution to the problem of being human.
(137) Among the most paradigmatic of RTS games is Microsoft's
Age of Empires, which has been issued so far in three major iterations.
(137) There is no real ability to depict intergroup interactions, hybridization, or blending; the existence of racialized and national groupings is presented as a kind of natural fact that cannot be questioned or changed in the course of gameplay, as if this were also true of the global history the game will display.
(138) The main reason for players to choose an ethnicity or nation in the course of the game is to realize certain benefits and drawbacks that affect gameplay as a whole. . . . Much like the segmentation employed by Claritas PRIZM, these categorizations reduce the complexities of social life to measurable quantities; even further, they employ capability-based labels that define civilizations in terms of particular technologies and characteristics, so that, for example, the Britons can be characterized as a “foot archer civilization.”
(139) The whole idea of applying modifiers and bonuses to each civilization hangs on the notion that there is one central history of cultural development that is at least in part common to each nationality and race. . . . As such,
Age of Empires and similar RTS games do not merely argue but demonstrate that human history is unified, progressive, and linear, and that in particular the development of technology stands as the ultimate end of civilization. Culture per se is seen to be no more or less than decoration.
(140) Development ends with the Imperial Age, a triumphal goal toward which every civilization aims.
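A schematic sketch of the mechanic just described (numbers and costs are invented; the bonus entries only gesture at the real tables): each “civilization” is a static bundle of modifiers attached to one shared ladder of ages ending in the Imperial.

```python
# World history as a resource gate plus a multiplier table.
AGES = ["Dark Age", "Feudal Age", "Castle Age", "Imperial Age"]

CIV_BONUSES = {
    "Britons": {"archer_range": +1, "shepherd_speed": 1.25},
    "Teutons": {"monk_heal_range": 2.0, "tower_garrison": 2.0},
}

def advance(state: dict) -> dict:
    """Pay the cost, increment the epoch: progress as accumulation."""
    if state["food"] >= 800 and state["age"] < len(AGES) - 1:
        state["food"] -= 800
        state["age"] += 1
    return state

player = {"civ": "Britons", "food": 1000, "age": 0}
player = advance(player)
print(AGES[player["age"]])  # Feudal Age

# Culture enters only as the static table CIV_BONUSES; nothing a player
# does can change what "Britons" means -- the point made above about
# racialized groupings as unquestionable natural facts.
```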

The sense of computationalism has shifted from a reductive view of intelligence as logical rationality, Ursprache, and English-biased technological systems to oligarchical, Statist capitalism.

(140) While its developers would surely claim that such features are a consequence of gameplay necessities and the use of the tools available at hand, there can be little doubt that a game like Age of Empires II instances a computationalist perspective on world history. According to this view, history is a competition for resources among vying, bounded, objectified groups.
(140) the pursuit of historical change is the pursuit of striated power, realized as both an individual “leveling-up” and a societal achievement of historical epoch, themselves licensed by the accumulation of adequate resources, which are always channeled back into an even more intensive will-to-power.
(142) Civilizations are described exclusively in terms of economics (especially resource accumulation), military technique, and contribution to modern Statecraft.
(142) The only way to move out of a Claritas segment is to change where one lives, presumably by also changing economic situation as well; but there is little (if any) way to change the internal characteristics of the striated segments.
(142) In
Age of Empires the entire world is reconceptualized as fully capitalist and Statist from the outset, as if capital accumulation and Western-style technology are inevitable goals toward which all cultures have always been striving.
(143) Computation and striated analyses; essentialist understandings of race, gender, and nation; and politics that emphasize mastery and control do not merely walk hand-in-hand: they are aspects of the same conceptual force in our history.
(143) [McKenzie]
Wark comes closer than Galloway to seeing the inherent logics and politics of RTS games and the history they embody, but the proper name America is arguably both too specific and too general to capture what is at issue in the culture of computation. . . . In some sense, at least, the impact of computationalism on the world is much less like the historical development of America, and much more like the worldwide reemergence of principality-like domains of imperial control today associated with medieval and pre-modern political forms (Hardt and Negri 2000, 2004). A more apt name might be “neoliberalism.”

Empires and Computation
(145) Among the most influential accounts of such cultural transformation is the
New York Times columnist Thomas Friedman's (2005) The World Is Flat: A Brief History of the Twenty-First Century.
(145) The trope of empowered individuals, as if they are unconnected from the political and economic and cultural institutions in which they are embedded, is one of the most familiar of our recent times, and we see again and again that this trope does not indicate just how those who already have relatively less power will have any more political influence than they do now, when those with the most power already also have their power increased.
(146) Thus the first of Friedman's “forces” is the introduction of the personal PC with Microsoft Windows; the second is the global connectivity that is approximately coterminous with the rise of the world wide web, but for Friedman also entails the 1995 commercialization of the Internet and Netscape going public; and the third is the introduction of what he rightly calls “work flow software,” again almost exclusively a corporate innovation.
(147) Surprisingly, though, despite the ways in which Friedman's first three benefits replicate colonial logic and largely benefit large corporations, the rest of his ten “forces” are even more explicitly focused on corporations.
(147) what has been flattened via IT is not at all individual access to culture, economics, or political power, but rather the “playing field” for capitalist actors.
(149) There would seem to be no position at all from which to question whether it is desirable or even ethical to persistently map every square inch of global terrain and make it available for electronic processing; since the benefits of such a scheme are so apparently obvious, only cranks or luddites might stand in opposition to them.
(150) [Nicholas]
Carr's article [“IT Doesn't Matter”] and subsequent book have been roundly denounced by corporate computationalists, among them Bill Gates, arguably not merely because Carr notes the immediate falsehoods surrounding computers in particular but also because he begins to outline precisely the historical and ideological contexts that computers demand.

The differential benefits of IT to individuals favor the wealthy and powerful; critical discourse focuses on surveillance and intellectual property.

(150) it is specifically power elites and oligarchies who have access to the most powerful computers and the newest tools; to ignore this situation simply because those of us relatively low on social hierarchies also receive benefits is to miss the forest for the trees.
(151-152) Among the only discourses of critical thought about computers today has to do with surveillance. . . . Because computation does empower individuals of all stripes, including those of us who are already extremely powerful, we cannot hope that this sheer expansion of power will somehow liberate us from deep cultural-political problems; because computation sits so easily with traditional formations of imperialist control and authoritarianism, a more immediately plausible assumption would be that the powerful are made even more powerful via computational means than are the relatively powerless, even as everyone's cultural power expands.

Lest we forget, there were rhizomatic technologies before computerization, such as telephone networks; it is the nature of our computers to territorialize and striate biopower for State control.

(153-154) We want to imagine computers as deterritorializing our world, as establishing rhizomes, “flat,” nonhierarchical connections between people at every level—but doing so requires that we not examine the extent to which such connections existed prior to the advent of computers, even if that means ignoring the development of technologies like the telephone that clearly did allow exactly such rhizomatic networks to develop. What computers add to the telephonic connection, often enough overwriting exactly telephonic technology, is striation and control: the reestablishment of hierarchy in spaces that had so far not been subject to detailed, striated, precise control. . . . contrary to received opinion, it is the nature of our computers to territorialize, to striate, and to make available for State control and processing that which had previously escaped notice or direct control.


CHAPTER SEVEN
Computationalism, Striation, and Cultural Authority

The classic critical position leaves open whether direct participation is a legitimate role of the scholar, whether in the form of managing or engineering.

(155) only by detailing exactly the nature of these forces do we hold out any hope of managing them.
(156) Yet perhaps even more than communication itself, what computerized networking entails is the pinpoint location of each object and individual in a worldwide grid. . . . But at the same time the cellphone itself, precisely demarcated via a numeric identity akin to the Internet's IP number, becomes an inescapable marker of personal location, so much so that with much more frequency than land-line phones, it is routine for cellphone users to be asked why, at any hour of the day or night, they failed to answer their phone—as if the responsibility for such communications lies with the recipient and not with the maker of the call.

Striation resulting from the expectation of availability via every connected cellular phone interferes with the cultural importance of smooth spaces and times, for example evenings and weekends; at the same time, the enclosure of the workday is interrupted by local, personal communications devices outside the corporate infrastructure, as well as by those riding upon it, for example workstation Internet access.

(157) In this example, a previously smooth space—the space from which one felt free not to answer the phone, to be away from the phone, perhaps even to use the inevitable periods of unavailability as a means to use space and time for one's own ends—now becomes striated, and in this sense visible only as its striation becomes visible. This is an important characteristic of striation, in that we do not simply have “smooth” or “free” spaces that we resist letting be incorporated into striation—rather, we typically rely on the smoothness of space (and time) not even recognized as such for important parts of culture and personal practice.

Spreadsheets, Projects, and Material Striation
(157-158) Spreadsheets are a chief example of computational thinking and of striation because they existed before computers proper; they became nearly ubiquitous with the rise of business and personal computing and eventually came to redefine significant aspects of business and personal conduct; they also take extensive advantage of real computational resources.

Balance sheet as staple of business thinking before electronics now extended to general workforce as spreadsheets.

(158) With the advent of computers, the thinking behind balance sheets could be widely expanded and implemented at every level of corporations and other organizations, rather than existing as esoteric tools for only the initiated. Today, to be the manager of a segment of any size within a modern corporation means essentially, at least in part, to manage a spreadsheet.
(159) The spreadsheet reality is profoundly striated: it is arguably just the application of striation to what had previously been comparatively smooth operations and spaces. . . . One of the most interesting characteristics of spreadsheets is their division not just of space but of time into measurable and computable units.
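A small sketch of that division of time (the figures are invented): once a week of labor exists as cells, the only operations that matter are the ones cells support, such as summation.

```python
# A timesheet as pure grid: hours are the only residue of the work.
week = {
    "mon": {"proj_A": 6.5, "proj_B": 1.5},
    "tue": {"proj_A": 8.0},
}

totals = {}
for day in week.values():
    for proj, hrs in day.items():
        totals[proj] = totals.get(proj, 0.0) + hrs

print(totals)  # {'proj_A': 14.5, 'proj_B': 1.5}

# Whatever resists entry in a cell -- judgment, interruption, care --
# simply drops out of the record, which is the striation at issue.
```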
(160) Any sort of activity can be represented in a project management document, and this software is frequently used as a kind of large-scale to-do list for employees, but its main purpose is to provide project managers with computational, hierarchical, and striated authority over the human beings contributing to a given project and to the company as a whole.
(161) As such the facts of day-to-day life in the corporate entity become less and less the actual activities of human beings and more and more the facts (or actually symbols and figures) included in the project management document.
(161-162) “The definitive feature of the mainframe era,” [Alan]
Liu writes, “was precisely that it insisted that IT conform to organizational patterns optimized for the earlier automation paradigm.” . . . Those structures of control are no less effective for appearing less personalized and rigid on the surface than they did in the 1950s: to the contrary, I am arguing that they are more effective and perhaps more insidious precisely because they appear to be more individualized today (see Zuboff 1988 for a few hints along similar lines).

The dehumanizing perception of workers in time-tracking and project-management views is insidiously tied into the culture of cool associated with computer technologies.

(162) Subjectively, again, the response of most employees to the sight of their time so precisely scheduled, tied to project income and expense, “rolled-up” into global pictures of corporate finance and resource management, can only be understood as profoundly dehumanizing in just the way that the most stringent Fordist management of physical labor is.
(162) In this sense, it is a triumph of what Liu calls the “
culture of coolthat computational employees can accept, and in some cases help to implement, the tools and methods that can contribute to extremely high-stress work environments, severe productivity demands, and real-time, invasive surveillance of work activities (and personal activities conducted at or during work-time).

Tools for Authority

Critical study is still awaited for ERP and CRM systems, which define and control systems of social actions and actors, yet are treated as ideology-free tools.

(163) One area of computing that has so far remained outside of critical attention is the proliferation of engineering protocols which are used by corporations for the implementation of large-scale systems (large both quantitatively and geographically). The most well-known of these are called “Enterprise Resource Planning” (ERP) and “Customer Relationship Management” (CRM), though these are only two examples of a widespread approach in business computing. Some of the best-known software manufacturers in the history of commercial computing, including SAP, BAAN, Microsoft, Oracle, PeopleSoft, Computer Associates, and others, have derived much of their income from these protocols. They have been explicitly developed and implemented for the purpose of defining and controlling the system of social actions and actors; they constitute a sizable fraction of the work taken by the thousands of engineering students produced by today's institutions of higher education. They are the “skills” that computer scientists and engineers develop to “use” in the world at large, and they are always construed and described as such. They are “tools,” supposedly free of ideological weight, and so far largely free from critical scrutiny. Like other such phenomena, on reflection they turn out to be among the most precise and deliberate structures of political definition and control, and they remain so far largely outside of the purview of cultural interpretation and understanding.
(164) ERP thus refers to software designed to allow business executives and IT managers to subject every aspect of their organization to computerized control.
(164) ERP business process analyses look for so-called “inefficiencies” in the system, finding places, for example, where a resource is sitting idle when it could be doing work. We know that “profit” will never fail to be included as the primary value toward which the system is skewed. In some systems of discourse, cultural studies has uncovered what seem to be covert marks of orientation toward capital; in ERP systems these orientations are explicit.
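A minimal sketch of the “inefficiency scan” described in that passage (the resource names and rates are mine): walk a schedule, flag every hour a resource is not producing, and price the gap, with profit as the only value in view.

```python
# Idle-time audit: the whole analysis is a count and a multiplication.
SCHEDULE = {
    "lathe_3":   ["job"] * 6 + ["idle"] * 2,  # one 8-hour day
    "worker_17": ["job"] * 7 + ["idle"] * 1,
}
HOURLY_VALUE = {"lathe_3": 120.0, "worker_17": 35.0}

for resource, hours in SCHEDULE.items():
    idle = hours.count("idle")
    print(f"{resource}: {idle}h idle, ${idle * HOURLY_VALUE[resource]:.0f} forgone")
# lathe_3: 2h idle, $240 forgone
# worker_17: 1h idle, $35 forgone

# Note that a person and a machine are the same kind of entry here;
# anything not expressible as an hourly value cannot register at all,
# as the next excerpt observes about hard-to-quantify human values.
```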
(165) In many cases, it seems the ERP approach actually does attempt to answer these questions, by offering up enticing and increasingly abstract values for apparently quantifiable problems. Human values that are hard to quantify are, often as not, simply ignored.
(165) In recent models of enterprise-wide software, control and monitoring mechanisms move beyond the representable “resources” and toward a more loosely defined object known as the customer. The best-known post-ERP software paradigm is Customer Relationship Management (CRM), which is a marketing term for a suite of contact, maintenance, and knowledge software modules.
(166) Thus the entirety of the company-customer encounter is finally reduced to its quantitative equivalents, the human being reduced to virtually nothing but an actor reading knowledge-based scripts. One can imagine being unable to determine whether one's interlocutor is a live human being, or a set of phrases taped from a live encounter—an odd if dramatic example of an apparently spontaneous Turing machine.
(167) The development of elaborated systems of social control such as personal and small-business credit scoring represents the sacrifice of knowledge and control to quantitative formalizations. Who has given their consent to this control, and under what circumstances and with what understanding has this consent been given?

Managed care operates like an RTS game.

(167) Nowhere are the effects of CRM more paradigmatically evident than in health care, especially in so-called managed care. . . . Unless one is prepared to treat the health care system as an “AI opponent” in the “game,” one is likely not going to receive maximal care for one's state of bodily health, but rather a set number of resources dictated by the “AI opponent.”
(167-168) There is no doubt that adding CRM to health care represents an extension of insurance. . . . The whole infrastructure presses on what is beyond awareness, instead of true dialogue with a human doctor we trust, who understands the range of our health issues and can present us integrated information.
(168) CRM makes it possible for health care providers to focus on “bottom line benefits” by affecting the consumer's “propensity to use External Service Providers” (Gauthier 2001).
(169) Understood as both a business model and a specific kind of software application, CRM is most often implicated in the kinds of business practices that are typically understood by the term
globalization. CRM helps to implement a hard-and-fast division between the sovereign intelligence that runs organizations and the various intelligences that work for them. . . . As such, CRM software (and its affiliates) has contributed to the Hobbesian picture of a corporation as run by a small, even oligarchical, group of princely leaders and a large, undifferentiated group of workers (whom labor laws have typically not yet recognized in this fashion).
(169) While the rhetoric of CRM often focuses on “meeting customer needs,” the tools themselves are constructed so as to manage human behavior often against the customer's own interest and in favor of statistically-developed corporate goals that are implemented at a much higher level of abstraction than the individual.
(170-171) Thus nearly all interaction with contemporary corporations is mediated precisely and specifically by computerization, both internally and externally. . . . Their [outsourced call centers] goal is largely not to resolve customer problems, but instead to manage problematic human requests via computerized scripts that have been precisely engineered to limit the amount of time each customer spends on the phone and so to maximize the number of calls each worker can process.

Internal and External Striation
(171) Arguably, many large corporations today actually do not conduct the services they advertise and which they claim to provide in advertising and official documentation. Among the most telling scandals in the late 20th-century business culture involved the telecommunications companies that emerged in the wake of the court-ordered breakup of the AT&T monopoly.
(171-172) In each case, these companies were operated by small groups of owners with large financial stakes in the underlying investment transactions that created the companies, and with a significant interest in the fate of the various stock issues attached to the companies.
(172-173) The outside shell of Enron offered huge new efficiencies brought about through a variety of supply-chain management technologies; but inside the company, Enron was a profoundly competitive, masculinist, oligarchical structure in which power and the pursuit of power reigned supreme. Computers were vital to this structure, perhaps more for their ideological functioning than their abilities to manage information. . . . There can be little doubt that the Enron story is at least to some degree representative of exactly what is at stake in the digital transformation of society—promises of benefits to consumers that ultimately benefit most those at the top, and benefits which themselves prove evanescent on close scrutiny.
(173) By establishing a powerful hierarchical order that is coded into machine applications and business rules, computers thus help to radically increase the stratification of the world's population, even as they work hand-in-hand with capitalist development to raise countries, in succession, up the “ladder” of economic development.
(174) Wal-Mart prides itself on developing computer systems that are used today by many corporations, even if it usually takes advantage of these systems in proprietary systems that are used only metaphorically by others, or in systems resembling the ones created by Wal-Mart, perhaps patented by it, and then emulated by software companies and others.
(175) Now Wal-Mart's suppliers must either themselves implement supply-chain management systems, or risk the same warehousing issues that used to be the provenance of Wal-Mart itself; in turn, other companies are pressured both externally and internally to comply with Wal-Mart's system, which in the most usual case involves implementation of similar computer systems.

Examples of the lack of democratic control over RFID and EPC, which are likely to be pervasively deployed by private corporations and government.

(176-177) Like similar technologies in the past, RFID and EPC [Electronic Product Code] have raised a certain amount of consumer and democratic concern, such that Wal-Mart in particular has been compelled in the name of public relations to contribute to a global effort called EPCGlobal whose job is to promote public awareness of the uses of RFID and EPC and their benefits for consumers. Rather than demonstrating democratic control over this technology, though, the existence of EPCGlobal points more generally to the obvious privacy and surveillance issues raised by technologies like RFID that suggest the provision of worldwide, pinpoint surveillance to centralized authorities. The ethical standards promulgated by EPCGlobal are themselves far less worrying than are the inevitable more secretive applications of similar technologies that are sure to follow on the heels of RFID. Such technologies radically reformulate critical tenets on which concepts like personal privacy and private space, let alone public space, are themselves constituted in a democratic society, and while their explicit use by law enforcement and the government may at least at times be raised for policy discussion by lawmakers (though perhaps less often than some might wish), their widespread proliferation by multinational corporations is far less subject to democratic investigation and control, and at the same time perhaps more pervasive and constitutive of the contemporary lifeworld than are the surveillance technologies used by government.


PART FOUR
COMPUTATIONALIST POLITICS
CHAPTER EIGHT
Computationalism and Political Individualism

(181-182) To begin with, the notion of the “user” posited by the empowerment thesis takes for granted that this user is the one at the heart of modern liberal individualism (one might even say
neoliberal individualism). . . . there is something self-serving to simply presume that the liberal human subject—the subject of democratic politics—is the user most fully enabled by widespread computational power.
(182) To the degree that human beings exist in part in social tension with institutions, the fact that both are empowered by computerization must give some pause. . . . Our computers are built to enable exactly the kinds of bureaucratic and administrative functions that are by and large the provenance of institutions; it should be no surprise that these functions in particular are enabled by computers.
(182) Today, the main part of each person's computing time is no doubt spent in the office, where he or she may be surprised to find the degree to which every aspect of his or her work—supposedly freed and empowered by the computer—is also monitored and analyzed in a way that previous labor could not have been.

Individualism

Macpherson's possessive individualism as a result of the empowering effects of computers on subjectivity.

(183) With regard to individual human beings in particular, it is arguable that the empowering effects of computers tend in the direction of what the Canadian political theorist C.B. Macpherson (1962) so accurately analyzed as “possessive individualism.” By this phrase Macpherson meant not merely the acquisition of goods and capital that are the hallmark of the contemporary world, but a deeper sense that our individualism is something that each of us owns in a commercial sense and that this ownership is what makes us a kind of master of our selves.
(183) Arguably, the strong sense of mastery experienced by users as they sit at the computer terminal—the sense that the computer is a kind of quasi-cognitive entity that obeys the human user's orders to a high degree of precision—walks hand-in-hand with an exaggerated sense of individual importance.
(184) The computer seems easily to inspire dreams of individual domination and mastery, of a self so big that no other selves would be necessary, of a kind of possession of resources that could and would eclipse that of any other entity. . . . Arguably, the discourse of communitarianism exists in no small part as a compensation for the extravagant individualism we know is visible everywhere on computers.
(184) Emphasis on the mechanical, algorithmic, and exact functions of cognition and language closely parallels the view that what is remarkable about these phenomena is their instantiation in the individual, almost to the complete exclusion of their social realizations.
(185) Views based on the power of computation are therefore likely to privilege the individual, often viewed in isolation, over more distributed social functions. . . . To literary scholars familiar with the work of Lacan and others the controversy over individualism may seem surprising, because it hinges on the philosophical view that the system of language is effectively contained entirely in the brains of individuals—where Althusser, Lacan, and other poststructuralists would insist that language is something like a system into which the individual as such is introjected, and is therefore not even understandable as a purely individualistic object.

Mastery

Early Turkle studies of children learning to use computers.

(185) Psychologically, the signal experience of working with computers for the power elite is that of mastery.
(186) The programmer experiences mastery in a subject-object relationship with the computer, spending enough time in this relationship that it becomes in essence his or her decisive psychological orientation.

Structural identification between programmer and power elite sets up majoritarian, white male capitalist image thriving on mastery.

(186) For the power elite, the laboring body of the corporation is itself a kind of computer (and in many ways a literal computer) over which he or she exerts nearly the same pleasurable mastery the programmer experiences in front of the computer. In this sense, and this conclusion can be only speculative, there is a structural identification between programmer and power elite that largely serves the elite's ends.
(186-187) power is shared through the provision of a device, the computer, that provides a kind of illusion of social power in its structural proffering of relational power.

See Zizek on cyberspace.

(187) There are curious and unacknowledged correspondences between the use of the computer and Freud's specific concept of mastery as described in his early writings, particularly in the Three Essays on the Theory of Sexuality (1905), where mastery is referred to as both an apparatus and then as a drive (Trieb). . . . Intensely engaged in the use of both hand and voyeuristic vision, the computer programmer is thought to be outside of sexuality, but arguably there is an intense and sadistic economy of sublimation and identification at play as sexuality appears not to be invoked (Zizek 1997 touches on related issues).
(188) But the computer adds another element: it adds what appears to be a legitimate replication of the master-slave relationship out of which the United States was built and which now haunts it as not merely a psychological specter but as the specter of its relation to capital.
(188) In many ways, the computer is a more perfect slave than another human being could ever be; it also provides a template for seeing ourselves as slaves in a way that plantation owners never could have imagined.
(189) The trope of mastery provides one of the most powerful connections between the various spheres in which I am arguing that computationalism today is especially influential. Through the process of identification, including the desire to see oneself in the machine and also to be aligned with political and institutional leaders, the contemporary academy has endorsed a variety of efforts to find within the brain a computer that could be mastered.

Rationalism
(190) It is relatively familiar to historians of computers and of philosophy that there is a significant overlap between those figures who are generally thought of as exemplary rationalists, and those thinkers whose works are said to prefigure the science underlying modern computers. Chief among these is of course Gottfried Wilhelm Leibniz (b. 1646, d. 1716), the German philosopher whose work in mathematics and logic is often taken as the starting point for histories of computing.

Reason is syntax.

(191) Rationalism is not simply the operation of reasoned principles, with these defined in general (or, we might say, analog) terms; rather, it is specifically the application of the rules of formal logic—essentially, mathematical rules—to symbols whose meaning is, in an important sense, irrelevant to the question whether our reasoning is valid. Reason is syntax: it is the accurate application of principles like modus ponens to any substance whatsoever.
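The claim that reason is syntax can be shown in a few lines (my sketch): modus ponens implemented as pure pattern-matching fires on the shape of its premises, while the symbols themselves need no meaning at all.

```python
# Modus ponens by pattern alone: from P and ("implies", P, Q), derive Q.
def modus_ponens(premises: set) -> set:
    derived = set()
    for p in premises:
        for q in premises:
            if (isinstance(q, tuple) and len(q) == 3
                    and q[0] == "implies" and q[1] == p):
                derived.add(q[2])
    return derived

premises = {"glork", ("implies", "glork", "snarf")}
print(modus_ponens(premises))  # {'snarf'}

# "glork" and "snarf" denote nothing, yet the inference is valid.
# That, and only that, is what rationalism certifies as cognition.
```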
(192) Cognition includes that part of our minds that appears to have something like free will with regard to thinking; though we are certainly capable of following logical rules like
modus ponens, our thought also seems capable of going in many other directions, and even when we decide on an action that contradicts good logic, we nevertheless appear to be thinking even then.

Correlation between rationalism and conservatism among philosophers.

(192) Here Putnam and Kant help us to connect rationalism to that part of its doctrinal history we have not discussed yet at length: the philosophers most strongly associated with rationalism are the ones whose political (and even institutional-intellectual) views have most often been thought of as profoundly conservative, as favoring authoritarianism and as denying the notion that most of us are capable of thinking in the full human sense.
(193) This doctrine, as [Leo] Strauss's approving invocation of Hobbes shows, has played a curious and in some ways determinative role in 20th-century political and intellectual practice.
(194) In a somewhat paradoxical fashion, by restricting true cognition to rational calculation, rationalism can be said to diminish rather than exalt the role of cognition in human experience and judgment. There are at least two strong arguments that support this contention. The first is an argument found repeatedly in Derrida's writings (see especially Derrida 1990, 1992, 2005), according to which the problem in rationalist conceptions of thought is that they evacuate the notion of decision that is supposed to be their hallmark.
(195) [second] Perspectives that isolate rationality from the rest of human experience instrumentalize reason; they make it a kind of (supposedly neutral) tool, for the use of an unspecified authority (found either inside the person or in the sovereign).
(195) Because cognition itself is formal, syntactic, and thereby instrumental, we are extending the human cognitive apparatus by building out our scientific and technological instruments; because this is the only sort of knowledge worth the name, and knowledge solves social problems, we need only build out our technology sufficiently to address any problems that emerge.
(196) Yet the evidence that this [more rationality, techne, and capital] is not the right solution is everywhere. This is not to suggest that rationality is wrong or misguided or that we should eradicate it; it is to suggest that our societies function best when they are balanced between what we will call here rationalism and whatever lies outside of it. . . . In this way, one power of the computer and computational institutions is to make it increasingly hard to see that rationalism is just one mode of cognition, and not cognition itself. When we presume that the subject of politics just is the rational subject, stripped of her cognitive capacities outside and beyond rationality, we do not merely privilege one form of decision making above others: we work away at the fabric of the polis.


CHAPTER NINE
Computationalism and Political Authority

(197) Management practice today relies heavily on computer modeling of business systems; much of the software sold by major manufacturers like Siebel, Computer Associates, Oracle, and other major computer companies is devoted to Business Process Modeling (BPM), in which a variety of practices are reconfigured so as to be transportable across computing platforms or, more profitably, from the analog world to the digital world.
(198) Regardless, the basic operation of much business in the world today, especially the business associated with the keyword “globalization,” is conducted in the name of BPM and other solutions that are focused around providing management with simulative control of business practice. The general thrust of this work is to reduce social process to form, ideally to the CEO's preferred computational form, which often continues to be the spreadsheet.
(198) In this sense, computational practice reflects social imbalance: more pointedly, it is classed.

Styles of Computational Authority

Gates and Ballmer criticized as quintessential techno-egotist male exploiters instantiating the Leviathan principle.

(199) The computer instantiates for Gates the principle by which he orders his world: find a loophole that one can systematically exploit, whether or not the “intended purpose” of the task at hand is served by the solution.
(200) Despite its instrumental appearance, the computer is an especially effective model of power in our society, and especially provocative for those who see the possible applications of that model to the social sphere: to enact one's own will-to-power, in whatever form one obeys that impulse.
(200) Perhaps even more openly than Gates's, Ballmer's personality exemplifies the techno-egotist subject that has its own power as virtually its only focus. Ballmer and Gates, despite being two, in their relation to Microsoft as a whole, perfectly instantiated the Leviathan principle.
(201) We like to tell ourselves that computing is a universal ability, perhaps the cardinal ability, by which thought itself is defined, despite the overwhelming social evidence that computing is quite biased toward males and toward the way maleness is performed in our society.
(202) [Alison] Adam reminds us of the degree to which both Cyc and Soar, the two most fully articulated of the strong AI projects, were reliant on the particular rationalist models of human cognition found in any number of conservative intellectual traditions.

Computers may have savoir of many things that can be learned propositionally, but not connaissance of having that knowledge; and many things cannot be learned through propositional communication, especially embodied behaviors like midwifery.

(202) The computer knows how to add numbers; so do we. We might even say that the computer has savoir about addition. But it does not have connaissance; it does not know that it is adding, or even that it knows how to add. At the same time, a human being can have savoir about things that it seems inconceivable to teach to a computer. . . . The midwife learns her craft by practice and observation and only occasionally through what looks like propositional communication.
(203) The knowing subject posited by both Cyc and Soar is just that knowing subject posited by the most narrow and conservative of Western intellectual traditions.
(203) Culturally, then, it is imperative that the claims of computer “evangelists” about the technological direction of society be viewed within a clear historical frame.

Questionable position of holding back new media until we have figured out what to do about the old ones; noting its absence helps clarify a position that transcends conservative stereotypes.

(203-204) Postcolonial theory and gender studies, like other strands of recent cultural theory, draw our attention rightly to the overt cultural structures used in the past to support a distinctly hierarchical worldview. . . . In that sense, one attitude that we do not see yet displayed but which is notable for its absence is the one that says of digital media: look what other media have done to imbalance the world already; what right have we to start new media when we have so poorly figured out what to do about the old ones?

Computationalist Governance
(205) Resistance to the view that the mind is mechanical is often found in philosophers we associate with liberal or radical views—Locke, Hume, Nietzsche, Marx. These thinkers put both persons and social groups in the place of mechanical reason, and, as we all admit, tend to emphasize social and relational duties rather than “natural right” (see Kreml 1984 for an apposite characterization of these tendencies and their intellectual histories).
(206) We all know that this equation is constitutive of contemporary subjectivity: that we are literally constructed by the absence within ourselves of the egoism that makes true success possible. We all share in the rational knowledge of what is required of us, thanks to the abstract machine that is equivalent to thought.

Mastery over computers seen as poor compromise with lack of social skills; Kirk needs Spock and McCoy.

Using interpersonal language crucial developmental competency for which using technology should not be substituted.

(206) For most of us, mastery over computers is part of a poor compromise with the rest of society and most critically with other people. We allow ourselves the fantasy that our relation to the world is like our relation to the computer, and that we can order things in the world just so precisely. . . . Dr. McCoy, despite his apparent hysteria, is just as necessary for Kirk to make decisions as is Spock.
(207) Some of our most sophisticated and thoughtful perspectives on computers hover around the question of the computer's modeling of mastery and its relation to political power (also see Butler 1997 and Foucault 2000 on the relationship between political and personal power). . . . It is not necessary to the becoming-self that a child use a computer, watch television, or even read books; it is necessary that she or he use language.

Awkwardly stated criticism of Turkle, Weizenbaum, and Galloway for failing to analyze what happens to children when ready-to-hand computers become the basis of personality, a situation in which I certainly must include myself.

(207) The concern Turkle and Weizenbaum (and even as sophisticated a writer as Galloway 2004) do not seem to focus on is what happens when the computer is the right instrument for the particular child—when the computer itself, or even more profoundly, the social metaphor the computer offers, are ready-to-hand for the child set to adopt them.

Galloway and Chun, among others, mistakenly emphasize minor instances of decentralization in overall systemic authoritarian structures in open source projects.

(208) This kind of unification haunts even those web projects that appear, to us today, to be especially disorganized, loose, and distributed, such as Wikipedia, Linux, and other open source software, social “tagging” and other features of the so-called Web 2.0 along with the rest of the Semantic Web, and even the open-text search strategies of Google. While appearing somewhat chaotic and unstructured, they are both tightly structured “underneath” and in formal conception, and also part of a profound progression toward ever more centralized structures of authority. Thus even skeptical writers like Galloway (2004) and Chun (2006) would seem to have mistaken certain elements of distribution as characteristic of the whole system; what Galloway calls “decentralization” is less fundamental to the Internet than the tremendous process of centralization and absolute authority that is the base structure of computationalism, and that provides a highly authoritarian structure of power that truly drives the great majority of the worldwide digital revolution.

Computationalist Order
(209) One of the most visible loci of computational hierarchy is in the contemporary software paradigm known as Object-Oriented Programming (OOP). . . . In this sense, OOP is an especially apt example of computationalist discourse: it emerges from presumptions about and facts about computers, but is fundamentally an intellectual object not required but inspired by real computers (see [Brian Cantwell] Smith 1996 for some account of the importance of objects to the philosophy of computing).

OOP fits within computationalism by emphasizing hierarchy, speciation, categories.

(209-210) It is not simply the object/environment distinction that is so attractive to the computationalist mind-set; it is the hierarchies that OOP languages generally demand. . . . This is much like the “classical” model of speciation to which Western science has been attracted for hundreds of years, but even in that case scientists are aware that it is an idealization: that the material world does not fit so neatly into the categories our scientific programs prefer.
(210-211) Because these facts are so apparent in the conceptualizations underlying OOP, computer scientists have proposed alternate models that are less objectifying, including “Aspect-Oriented Programming” or AOP (see, e.g., Filman, Elrad, Clarke, and Aksit 2004) and “Subject-Oriented Programming” or SOP (see, e.g., Harrison and Ossher 1993); it is in no small part because of the lack of fit of these conceptualizations with computationalism that they have found so little traction in computing practice.
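
A small illustrative sketch (mine, not from the book) of the hierarchy at issue: OOP typically files every object under a single chain of ever-more-general categories, much like the classical speciation model, and borderline cases must be forced into the tree somewhere.

```python
# A classical OOP taxonomy: one strict chain of categories per object.
class Organism: pass
class Animal(Organism): pass
class Mammal(Animal): pass
class Bird(Animal): pass
class Dog(Mammal): pass

# The egg-laying platypus exposes the idealization: it must still occupy
# exactly one place in the tree, whatever the material facts.
class Platypus(Mammal):
    lays_eggs = True  # an attribute that sits uneasily under Mammal

print([c.__name__ for c in Platypus.__mro__])
# ['Platypus', 'Mammal', 'Animal', 'Organism', 'object'] -- one strict chain
```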
(211) When XML met the world of (human) languages, engineers became extremely frustrated with the “failure” of human languages to be characterizable in hierarchical terms.
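
The frustration is easy to reproduce. In this small sketch of my own, a sentence crosses a verse-line boundary, so the two real structures overlap and cannot both be captured by the single strict hierarchy that well-formed XML (per the OHCO thesis) demands:

```python
import xml.etree.ElementTree as ET

# Verse lines and sentences are both genuine structures in the text, but
# they overlap, so no single tree of nested elements can hold them both.
overlapping = ("<poem><line>It is a truth <sentence>universally"
               "</line><line>acknowledged.</sentence></line></poem>")
try:
    ET.fromstring(overlapping)
except ET.ParseError as err:
    print("Rejected as ill-formed:", err)  # mismatched tag
```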

Mania for classification.

(211) Every[where] in contemporary computing one sees a profound attention to categories—one might even call it a mania for classification. . . . categories that are ultimately meant for machine processing more than for human processing.

Emphasis on concentration, like the mania for classification, reminiscent of the panopticon.
(212) Another critical form of computational authority can be thought of as an emphasis on
concentration.
(212) Sitting at a single monitoring station, the archetypical panoptical observer about whom Foucault (1975) talks at great length today has access to a far more comprehensive view of the institution he or she was monitoring, through both video and any number of computational tools that in some cases work with video.

Presentist view focused on the latest tools fails to see the prevalence of information throughout history and of networked alongside centralized practices, leading to belief in historical rupture and revolution.

(213) It is presentist because it presumes that people in their social lives prior to the Internet did not exist as a “broad network of autonomous social actors,” a claim that does not seem credible on its face; and its focus on the individual does not address how computers are used in large institutions like universities, corporations, and nonprofits. Despite the formal decentralization of the network protocol used in these organizations, their structure seems to me to remain highly centralized and hierarchical—in some ways, in fact, more controlled and centralized than they could ever have been without computerization. This is a bias toward the screen-present: because we can see (or imagine we can see) the physical network, we believe it is more real than the evanescent social “networks” on which it is built and which it partly supplants.

Foucault's population thinking.
(214) In the [concentration] camps human beings were reduced to something less than the full status which we typically want to accord to each one of us; and what Foucault calls “population thinking,” which certainly is associated with a widespread reliance on computational techniques, was dramatically in evidence in German practice.
(214) In the early modern era, when lords really did have fiefdoms, what licenses the proclamation that communication was “centralized” and not “networked”? Why and how can we assume that networks are not centralized, when our world is full of networks, both physical and abstract, that precisely are centralized?
(215) It simply is not clear what a communication revolution would be, or perhaps more accurately, it is not clear whether communication revolutions are in any way as dramatic as the political revolutions from which the terminology derives. . . . we want to believe that we live today in something new called the “information age,” as if past generations had not been involved deeply in the exchange, storage, preservation, and use of information: but what else were they engaged in? . . . it is only by focusing almost exclusively on the tools we have in front of us that we can imagine the products we are using today are revolutionary.
(216) Turing can be said to have codified what was already well understood in theory and was even more clearly already implemented in practices throughout human society.

Can this heuristic against the revolutionary view mesh with Janz on African philosophy?

(216) As a heuristic, then, this study adopts the view that revolutionary change must be demonstrated before it should be taken as dogma; that we have not yet seen clear demonstrations of the fact of revolution in our society; and that it makes sense then to examine computers themselves and the discourses that support and surround them as if there has not been a revolution: not something dramatically new (even if there are of course many new things) but something like an increase in and increasing emphasis on something upon which society, and capitalist society in particular, has always relied.

Hobbes's Leviathan foreshadows computationalism.

(217) Conceptually there is a powerful tie between the theory and implementation of modern political authority and the figure of computation. In the single text that might be said to most clearly define the notion of political sovereignty in the West (one that also explicitly connects views of human understanding to political philosophy), Leviathan by Thomas Hobbes, computation figures in two precise and related ways. Both are quite well known in their own way, but they are generally not related as tropes in Hobbes's especially tropic writing. The first occurs famously on the first page of the Introduction to the volume: . . . “For by Art is created that great LEVIATHAN called a COMMON-WEALTH, or STATE, (in latine CIVITAS) which is but an Artificiall Man.”
(218) The parts of the body politic—in other words, individuals—the body that before was a “body without organs,” and that is now an artificial animal, to be made up in the new automaton called the State, cannot themselves escape computational state administration. . . . They must feel it is not just their interest but their nature to submit to the sovereign; they must have within them a simulacrum of the mechanism that constitutes the Leviathan itself. Thus in Leviathan, Chapter 5, Hobbes writes: . . . “REASON, in this sense, is nothing but Reckoning (Adding and Subtracting) of the Consequences of generall names agreed upon, for the marking and signifying of our thoughts.” While by no means its origin, this passage serves as an appropriate proxy for the association in the West of the view that the mind just is a computer with the pursuit of political absolutism.
(218-219) Contrary to the views of advocates and critics alike that the computer age should be characterized by concepts like “decentralizing, globalizing, harmonizing, and empowering” (Negroponte 1995, 229), it seems more plausible that the widespread striating effects of computerization walk hand-in-hand with other, familiar concepts from the histories of rationalism.
(219-220) Out of the preference for looking at the screen, rather than looking at the world, computer advocates suggest that authority is dissolving—that the world is becoming borderless, decentered, and governed by either everyone (in the best case) or by a strong majority (in the next-best case). . . . Arguably, the Bush-Cheney political administration comes as close to fascism as the U.S. has ever come, in at least 100 years, precisely because of the close ties between the government, the military, and corporations that grow increasingly large and increasingly resemble governments. Within corporations rule is absolute, and the computer is everywhere an instrument for this absolute rule. . . . political change and political action need to proceed first of all from people and their direct impact on social relations, and only secondarily from representations on the computer screen.


EPILOGUE
Computers without Computationalism

Method is ultimately to describe ideological phenomena.

(221) The main goal of this book has been to describe a set of ideological phenomena: the functions of the discourse of computationalism in contemporary social formations; the imbrication of that discourse in a politics that seems strikingly at odds with a discourse of liberation that has more and more come to characterize talk about computers; and the disconnect between the capabilities of physical computers and the ideology of computationalism that underwrites much of our own contemporary investment in being digital.
(222) While it is no doubt inevitable that forms of long-distance communication would develop in any plausible human history, I am not persuaded that the exact forms of the telephone, telegraph, etc., are metaphysically necessary.
(222) Computation hovers provocatively between invention and discovery. . . . we found digital computation because our society is already so oriented toward binarisms, hierarchy, and instrumental rationality. . . . Because we do not know how to answer these questions, we simply do not know whether digital computation is something we have invented out of whole cloth and that is wholly contingent on other facts of our social world, or whether we have discovered a process as fundamental to our physical makeup as is the oxidative process that can culminate in fire.
(222-223) When we turn to developments as important as the human genome and its concomitant technologies, stem cell therapy, and other biological research in particular, we have learned to understand that these technologies actually ignore non-technical input to their great detriment.

Chance to diverge from the conclusion that computationalism sustains closed expertise, and to consider how it can be critically addressed by a sort of Socratic default, alongside the turn away from learning programming noted by Turkle.

(223) I am suggesting that this [closed expertise] emerges because of the particular character of computerization—because of its close association with computationalism. . . . One can easily imagine technical experts themselves becoming much more self-critical than many of them appear to be today—both more critical of the very idea of closed computer architectures to which few nontechnicians have access, and even more strongly, critical of their own supposed mastery and importance. . . . If true, if our democracy is conditioned on the development of tools which only experts can understand and manipulate, it is hard to see how republican democracy itself can persist through such a condition; if it is not true, as I suspect, then computer evangelists and experts must learn to doubt themselves much more openly.
(224) One of the most compelling lines of research that is especially relevant to my own concerns is the one currently being conducted by the Canadian cognitive psychologist Adele Diamond and her collaborators, and at a variety of experimental preschool curricula, of which the best known is called Tools of the Mind. Within this strand of child development research the primary interest is the child's development of so-called executive functions.

Environments full of clearly indicated goals limit play and the development of self-regulation, which affects participation in democratic society.

(224) In an argument that may seem counterintuitive, these researchers demonstrate that an environment in which goals are clearly indicated prohibits children from developing their ability to regulate themselves. . . . It seems only a small stretch to take this lesson politically: a person with a fully developed sense of self-regulation will see him- or herself as an active, powerful member of the democratic body, a person with a limited but critical responsibility toward the general governance of society (see Zizek 1997).
(224) There is no room in this picture for exactly the kind of distributed sovereignty on which democracy itself would seem to be predicated.
(224-225) engagement with the computer deprives the user of exactly the internal creation not of an authoritarian master but instead of a reasonable governor with whom negotiation and compromise are possible. . . . computers may be in part the cause of the widespread (in, one notes, fully “modernized” or “developed” societies) phenomenon of what is today called Attention Deficit-Hyperactivity Disorder (ADHD).

Final questions for a philosophy of computing similar to those reached by Weizenbaum.

(225) they seem to stand in for a series of questions that are much more difficult to ask, and whose answers seem much more difficult to envision: should computers be used for everything of which they are capable? . . . Does the fact that computers provide us with a significant pleasure of mastery license their use for things we must master? . . . If we could show, as I have suggested here is plausible, that the relationship between individuals (and institutions) and computers produces problematic psychological and/or ideological formations, what correctives might we develop to these formations?
(225) we need to question with particular intensity whether that presumption [that everything is moving toward a Utopian future] is rooted in careful observation of social realities, or instead in ideological formations that propel us to overlook the material conditions of the world we hope to better.



Golumbia, David. The Cultural Logic of Computation. Cambridge: Harvard University Press, 2009. Print.