Notes for Wendy Hui Kyong Chun Programmed Visions: Software and Memory

Key concepts: archive, biopower, causal pleasure, code, codework, daemonic processes, data-driven programming, fetish, governmentality, interface, mapping, memory, metaphor, neoliberalism, produser, programmability, programmed visions, pseudocode, software as thing, storage, transcoding, undeadness of new media, vicissitudes of execution.


Related theorists: Gregory Bateson, Baudrillard, Geoffrey Bowker, Manfred Broy, Judith Butler, Leah Ceccarelli, Derrida, Julian Dibbell, Richard Doyle, Paul Edwards, Douglas Engelbart, Michel Foucault, Matthew Fuller, Alexander Galloway, David Golumbia, David Grier, David Harvey, Heidegger, Grace Murray Hopper, Fredric Jameson, Mark Johnson, Lily Kay, Thomas Keenan, Friedrich Kittler, George Lakoff, Brenda Laurel, Adrian Mackenzie, Michael Mahoney, Catherine Malabou, Warren McCulloch, Tara McPherson, David Mindell, John von Neumann, Walter Pitts, Sadie Plant, Margaret Jane Radin, Paul Ricoeur, Scott Rosenberg, Erwin Schrödinger, Ben Shneiderman, Alan Turing, Sherry Turkle, Joseph Weizenbaum, Frances A. Yates, Slavoj Zizek.

Series Foreword (Matthew Fuller)
(vii) While Programmed Visions operates as a sustained introduction to the idea of software, code, and programmability as they work in relation to computation, the book is also a meditation on how this model proliferates, by various means, into systems such as living materials that are in turn understood to be bearers of a form of code that instructs their growth and that can, by further convolution, be read as a print out of the truth of an organism.
(viii) Chun's claim, in an interlude text in this book, is that the computer, and software in particular, has gone one step further, becoming a metaphor for metaphor, a means by which other metaphors are filtered and arranged, becoming in turn a system of universal experiential machining.
(viii) The ability of numbers, statements, currencies, or other signs to stand in for all kinds of things gives systems of abstraction and generalization immense power, especially when they can be made to line up into larger-scale structures, producing veritable machines.
Programmed Visions gives us a means of understanding such processes, but also importantly understanding how software is the code that works to disintermediate these systems. Thus, to understand the contemporary situation, it is not enough solely to recognize the operations of the economy, or even to be able to interrogate the morphological expressivity of a genetic array, but also to understand the very mechanisms that conjoin them.


Preface: Programming the Bleeding Edge of Obsolescence

Undeadness of new media related to logic of programmability in which programmed visions create futures based on past data.

(xii) Although this cycle of the ever-returning and ever-receding new mirrors the economic cycle it facilitates, the undeadness of new media is not a simple consequence of economics; rather, this book argues, this cycle is also related to new media's (undead) logic of programmability. New media proliferates “programmed visions,” which seek to shape and to predict—indeed to embody—a future based on past data.

Materialization of software as thing, hardened programming, and memory hardened into storage.

(xii) This book addresses this concept of programmability through the surprising materialization of software as a thing in its own right. It argues that the hardening of programming into software and of memory into storage is key to understanding new media as a constantly inspiring yet disappointing medium of the future.
(xii) Specters haunt us through our interfaces—by working with them we can collectively negotiate the dangers and pleasures of the worlds they encapsulate and explode.


Acknowledgments


Introduction: Software, a Supersensible Sensible Thing
(1) Software seems to allow one to grasp the entire elephant because it is the invisible whole that generates the sensuous parts. . . . To know software has become a form of enlightenment: a Kantian release from self-incurred tutelage.
(2) Although technologies, such as clocks and steam engines, have historically been used metaphorically to conceptualize our bodies and culture, software is unique in its status as metaphor for metaphor itself.
(2) This paradox—this drive to grasp what we do not know through what we do not entirely understand—this book argues, does not undermine, but rather grounds software's appeal.
(2) Computers—understood as software and hardware machines—this book argues, are mediums of power. This is not only because they create empowered users, but also and most importantly, because software's vapory materialization and its ghostly interfaces embody—conceptually, metaphorically, virtually—a way to navigate our increasingly complex world.

How Soft is Software?
(3) Computer scientist Manfred Broy describes software as “almost intangible, generally invisible, complex, vast and difficult to comprehend.” . . . Historian Michael Mahoney describes software as “elusively intangible. In essence, it is the behavior of the machines when running. It is what converts their architecture to action, and it is constructed with action in mind; the programmer aims to make something happen.”
(3) To be apprehended, software's dynamic porousness is often conceptually transformed into well-defined layers. . . . Application on top of operating system, on top of device drivers, and so on all the way down to voltage changes in transistors. What, however, is the difference between an onion's layers and its core?
(4) At first, software encompassed everything that was not hardware, such as services. The term soft, as this book elaborates, is gendered. Grace Murray Hopper claims that the term software was introduced to describe compilers, which she initially called “layettes” for computers; J. Chuan Chu, one of the hardware engineers for the ENIAC, the first working electronic digital computer, called software the “daughter” of Frankenstein (hardware being the son).
(4) Legal battles over software copyrights and patents make clear the stakes of this transformation of software from a service, priced per instruction, to a thing. Not surprisingly, software initially was considered neither patentable nor copyrightable because of its functional, intangible, and “natural” status.
(5) As a physical process, however, software would seem uncopyrightable. . . . To address this contradiction, the U.S. Congress changed the law in 1975, so that expressions, as opposed to the actual processes or methods, adopted by the programmer became copyrightable.
(5) Since, as I have argued elsewhere, computer reading is a writing elsewhere, viewing the momentary arrangement of electrons in memory as a tangible copy technically makes all computer reading a copyright infringement.

Philosophy just beginning to note effects of software as thing on metaphysics, intellectual property, subject, information.

(5-6) These changes, brought about by the “hardening” of software as textual or machinic thing through memory, point toward a profound change in our understanding of what is internal and external, subject and object. . . . The notion of intellectual property, which seems to break this dichotomy, was initially a compromise, she [Margaret Jane Radin] contends, between the Enlightenment notion that the intellect was internal and property external. . . . Crucially, Radin argues that the information age has compromised the compromise that intellectual property represents, since, by breaking down the distinction between tangibility and intangibility, it conceives of information, whether internal or external, as always external to the self (hence the patentability of genes).

All information as thing, albeit neighborhood amalgamation rather than discrete unity, coinciding with governmentality and protocol periodization (Foucault and Galloway).

(6) Software as thing has led to all “information” as thing. . . . Treating software as a thing means treating it, again, as a neighborhood, as an amalgamation. . . . Indeed, this book argues that the remarkable process by which software was transformed from a service in time to a product, the hardening of relations into a thing, the externalization of information from the self, coincides with and embodies larger changes within what Michel Foucault has called governmentality. Software as thing is a response to and product of changing relations between subjects and objects, of challenges brought about by computing as a neoliberal governmental technology.

Soft Government
(7) The liberal market undermines the power of the monarch by undermining his or her knowledge: no one can have a totalizing view. It also consumes freedom: it both produces freedom and seeks to control it.
(7) Importantly, though, computers in the period this book focuses on (post-World War II) coincide with the emergence of neoliberalism. As well as control of “masses,” computers have been central to processes of individualization or personalization. Neoliberalism, according to David Harvey, is “a theory of political economic practices that proposes that human well-being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets, free trade.”
(8) In a neoliberal society, the market has become an ethics: it has spread everywhere so that all human interactions, from motherhood to education, are discussed as economic “transactions” that can be assessed in individual cost-benefit terms. The market, as Margaret Thatcher argued, “change[s] the soul” by becoming, Foucault argues, the “grid of intelligibility” for everything. . . . Thus, this changed man who has imbibed the market ethic is eminently governable, for homo oeconomicus is shaped through “rational” and empowering management techniques that make him “self-organized” and “self-controlling.”

Interfaces are another key component of neoliberal transformation certifying dream of programmability to nonexperts (Shneiderman).

Chun gives much attention to philosophical reflection on the dream of programmability as a return to a world of Laplaceian determinism, and to the Dehomag image, under which humans are encouraged to be overwhelmed by machines for the benefit of their aspirations, even as they systematically exterminate thousands of their fellow animals: respecting her attentiveness to fatalistic enslavement to interfaces, there is much potential for adjusting trajectories, provided attention is given to the schematism of perceptibility in media studies.

(8-9) Relatedly, “user-friendly” computer interfaces have been key to empowering and creating “productive individuals.” As Ben Shneiderman, whose work has been key to graphical user interfaces (GUIs), has argued, these interfaces succeed when they move their users from grudging acceptance to feelings of mastery and eagerness. Moreover, this book argues, interfaces—as mediators between the visible and invisible, as a means of navigation—have been key to creating “informed” individuals who can overcome the chaos of global capitalism by mapping their relation to the totality of the global capitalist system. . . . New media empowers individuals by informing them of the future, making new media the future. . . . This future—as something that can be bought and sold—is linked intimately to the past, to computers as capable of being the future because, based on past data, they shape and predict it. . . . As chapter 1 elaborates, computers, understood as software and hardware machines, have made possible a dream of programmability, a return to a world of Laplaceian determinism in which an all-knowing intelligence can comprehend the future by apprehending the past and present.

Programmed visions are always limited; compare to conclusion reached concerning hermeneutic phenomenology in second candidacy exam.

(9) This book, therefore, links computers to governmentality neither at the level of content nor in terms of the many government projects that they have enabled, but rather at the level of their architecture and their instrumentality. . . . By individuating us and also integrating us into a totality, their interfaces offer us a form of mapping, of storing files central to our seemingly sovereign—empowered—subjectivity. By interacting with these interfaces, we are also mapped: data-driven machine learning algorithms process our collective data traces in order to discover underlying patterns (this process reveals that our computers are now more profound programmers than their human counterparts). . . . Crucially, this knowledge is also based on a profound ignorance or ambiguity: our computers execute in unforeseen ways, the future opens to the unexpected. Because of this, any programmed vision will always be inadequate, will always give way to another future.

Complicate Turkle and others who link GUIs to postmodernism.

(10) Chapter 2 analyzes how this invisibly visible (or visibly invisible) logic works at the level of the interface, at the level of “personal computing.” . . . Looking both at the use of metaphor within the early history of human-computer interfaces and at the emergence of the computer as metaphor, it contends that real-time computer interfaces are a powerful response to, and not simply an enabler or consequence of, postmodernism and neoliberalism.
(10) Chapters 3 and 4 of part II examine the intertwining of computer technology and biology, specifically the emergence of memory and its importance to notions of programmability.
(10) Chapter 3 argues that software was not foreseen, because the drive for software—for an independent program that conflates legislation with execution—did not arise solely from within the field of computation, but also from early Mendelian genetics and eugenics.
(10) Revising the running hypothesis of the first three chapters, chapter 4 shows how digital hardware, which grounds software, is itself axiomatic. . . . Crucially, it argues that computer memory, as a constantly regenerating and degenerating archive, does not simply erase human agency, but rather makes possible new dreams of human intervention and responsibility.
(11) As this synopsis hopefully makes clear, understanding software as a thing does not mean denigrating software or dismissing it as an ideological construction that covers over the “truth” of hardware. It means engaging its odd materializations and visualizations closely and refusing to reduce software to codes and algorithms—readily readable objects—by grappling with its simultaneous ambiguity and specificity. . . . Crucially, this effort to rethink, and indeed theorize things, is intimately intertwined with media: Martin Heidegger begins “The Thing” by outlining the shrinking of time and space due to “instant information” (television being the peak of this abolition of every possibility or remoteness).
(11) Software as thing is inseparable from the externalization of memory, from the dream and nightmare of an all-encompassing archive that constantly regenerates and degenerates, that beckons us forward and disappears before our very eyes.


You
(13) You. Everywhere you turn, it's all about you—and the future. You, the produser.


[Part] I Invisibly Visible, Visibly Invisible
(15) Computers have fostered both a decline in and frenzy of visual knowledge.
(15) Digital images, in other words, challenge photorealism's conflation of truth and reality; the notion that what is true is what is real and what is real is what is true.
(16) This proliferation, paradoxically, has also fostered a growing belief that computers enable total transparency. Jean Baudrillard in The Ecstasy of Communication has argued “we no longer partake of the drama of alienation, but are in the ecstasy of communication. And this ecstasy is obscene,” because “in the raw and inexorable light of information,” everything is “immediately transparent, visible, exposed.”
(17) Crucially, this desire to bring together billions of data items was and is not limited to governmental organizations.
(17) In order to become transparent, the fact that computers always generate text and images rather than merely represent or reproduce what exists elsewhere must be forgotten. . . . Every use is also an act of faith: we believe these images and systems render us transparent not for technological, but rather for metaphorical, or more strongly ideological, reasons.

Computers as metaphors for all effective procedures, the invisible generating visible effects.

(17-18) Its combination of what can be seen and not seen, can be known and not known—its separation of interface from algorithm; software from hardware—makes it a powerful metaphor for everything we believe is invisible yet generates visible effects, from genetics to the invisible hand of the market; from ideology to culture. Joseph Weizenbaum has argued that computers have become metaphors for all “effective procedures,” that is, for anything that can be solved in a prescribed number of steps, such as gene expression and clerical work. . . . The linking of rationality with mysticism, knowability with what is unknown, makes it a powerful fetish that offers its programmers and users alike a sense of empowerment, of sovereign subjectivity, that covers over—barely—a sense of profound ignorance.
(18) Software, through programming languages that stem from a gendered system of command and control, creates an invisible system of visibility, a system of causal pleasure. This system renders our machine's normal processes demonic and makes our computer truly a medium: something in between, mystical, channeling, and not entirely trustworthy.


1 On Sourcery and Source Codes

Historical transformation of pseudocode into source code, program into noun.

(19) Software as logos turns program into a noun—it turns process in time into process in (text) space. In other words, Manfred Broy's software “pioneers,” by making software easier to visualize, not only sought to make the implicit explicit, they also created a system in which the intangible and implicit drives the explicit. They thus obfuscated the machine and the process of execution, making software the end all and be all of computation and putting in place a powerful logic of sourcery that makes source code—which tellingly was first called pseudocode—a fetish.
(20) The point is to make our computers more productively spectral by exploiting the unexpected possibilities of source code as fetish. . . . Rather than seeing technology as simply fulfilling or killing theory, this chapter outlines how the alleged “convergence” between theory and technology challenges what we thought we knew about logos. Relatedly, engaging source code as fetish does not mean condemning software as immaterial; rather, it means realizing the extent to which software, as an “immaterial” relation become thing, is linked to changes in the nature of subject-object relations more generally.

Source Code as Logos

Both Microsoft and free software hide the vicissitudes of execution.

(21) Knowing software, however, does not simply enable us to fight domination or rescue software from “evil-doers” such as Microsoft. Software, free or not, is embedded and participates in structures of knowledge-power. . . . More subtly, the free software movement, by linking freedom and freely accessible source code, amplifies the power of source code both politically and technically. It erases the vicissitudes of execution and the institutional and technical structures needed to ensure the coincidence of source code and its execution. This amplification of the power of source code also dominates critical analyses of code, and the valorization of software as a “driving layer” conceptually constructs software as neatly layered.

Software a logos related to ideal of kings speech in Phaedrus; theorists declare code performative.

(22) This view of software as “actually doing what it says” [Galloway] (emphasis added) both separates instruction from, and makes software substitute for, execution. . . . By doing what it “says,” code is surprisingly logos. Like the King's speech in Plato's Phaedrus, it does not pronounce knowledge or demonstrate it—it transparently pronounces itself.
(22) Not surprisingly, many scholars critically studying code have theorized code as performative. . . . The independence of machine action—this autonomy, or automatic executability of code—is, according to Galloway, its material essence.

Crafty Sources

Example of working PowerPC assembly code to add two numbers.

(23-24) The compilation or interpretation—this making executable of code—is not a trivial action; the compilation of code is not the same as translating a decimal number into a binary one. . . . The relationship between executable and higher-level code is not that of mathematical identity but rather logical equivalence, which can involve a leap of faith.
(24) Code does not always or automatically do what it says, but it does so in a crafty, speculative manner in which meaning and action are both created. It carries with it the possibility of deviousness: our belief that compilers simply expand higher-level commands—rather than alter or insert other behaviors—is simply that, a belief, one of the many that sustain computing as such. This belief glosses over the fact that source code only becomes a source after the fact. Execution, and a whole series of executions, belatedly makes some piece of code a source, which is again why source code, among other things, was initially called pseudocode.
(24-25) Source code is more accurately a re-source, rather than a source. Source code becomes the source of an action only after it—or more precisely its executable substitute—expands to include software libraries, after its executable version merges with code burned into silicon chips; and after all these signals are carefully monitored, timed, and rectified. . . . Source code as techne, as a generalized writing, is spectral.
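Chun's point that executable and higher-level code stand in a relation of logical equivalence rather than identity can be glimpsed even in an interpreted language. The following is a minimal Python sketch (my own stand-in for the book's PowerPC assembly example of adding two numbers): the one-line "source" and what the interpreter actually executes are written in entirely different notations.

```python
import dis

# A one-line "source" for adding two numbers, standing in for the
# book's PowerPC assembly example (this Python analogy is my own).
def add(a, b):
    return a + b

# What the interpreter actually runs is bytecode, not the source text:
# disassembly exposes a different notation (LOAD_FAST, BINARY_OP, ...),
# whose exact opcodes vary across interpreter versions.
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)
```

The bytecode is equivalent to `return a + b` only by convention of the compiler that produced it, which is exactly the "leap of faith" the passage names.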

Source Code, after the Fact
(25) Much disciplinary effort has been required to make source code readable as the source.

Diagram of hardware logic circuit, which is also an abstraction.

(25) Making code the source also entails reducing hardware to memory and thus erasing the existence and possibility of hardware algorithms.
(26) To be clear, I am not valorizing hardware over software, as though hardware naturally escapes this drive to make space signify time. Crucially, this schematic is itself an abstraction.
(27) The notion of source code as source coincides with the introduction of alphanumeric languages. With them, human-written, nonexecutable code becomes source code and the compiled code, the object code. Source code thus is arguably symptomatic of human language's tendency to attribute a sovereign source to an action, a subject to a verb.
(27) Code is executable because it embodies the power of the executive, the power of enforcement that has traditionally—even within classic neoliberal logic—been the provenance of government. . . . “Code is law,” in other words, automatically brings together disciplinary and sovereign power through the production of self-enforcing rules that, as von Neumann argues, “govern” a situation.
(27-28) David Golumbia—looking more generally at widespread beliefs about computers—has insightfully claimed: “The computer encourages a Hobbesian conception of this political relation: one is either the person who makes and gives orders (the sovereign), or one follows orders.” . . . This wish for a simpler map of power—indeed power as mappable—drives not only code as automatically executable, but also, as the next chapter contends, interfaces more generally. This wish is central to computers as machines that enable users/programmers to navigate neoliberal complexity.
(28) Against this nostalgia, [Judith] Butler, following Jacques Derrida, argues that iterability lies behind the effectiveness of performative utterances. For Butler, iterability is the process by which “the subject who 'cites' the performative is temporarily produced as the belated and fictive origin of the performative itself.” . . . The term res, as Heidegger notes, designates a “gathering,” any thing or relation that concerns man.

Yes, Sir!

Gendered, military history of computing: shift from commanding a female computer to commanding a machine.

(29) This conflation of instruction with result stems in part from software's and computing's gendered, military history: in the military there is supposed to be no difference between a command given and a command completed—especially to a computer that is a “girl.”
(29) One could say that programming became programming and software became software when the command structure shifted from commanding a “girl” to commanding a machine.
(29-30) Software languages draw from a series of imperatives that stem from World War II command and control structures. . . . The Wrens [Women's Royal Naval Service], also (perhaps ironically) called slaves by the mathematician and “founding” computer scientist Alan Turing (a term now embedded within computer systems), were clerks responsible for the mechanical operation of the cryptanalysis machines (the Bombe and then the Colossus), although at least one of the clerks, Joan Clarke (Turing's former fiancée), became an analyst. . . . This “interactive” system also seems evident in the ENIAC's operation: in figure 1.2, a male analyst issues commands to a female operator.
(30) Computation depends on “Yes, Sir” in response to short declarative sentences and imperatives that are in essence commands. . . . Commands lie at the core of the cybernetic conflation of human with machine.
(32) Programming the ENIAC—that is, wiring the components together in order to solve a problem—was difficult, especially since there were no manuals or exact precedents. . . . The unreliability of the hardware and the fact that engineers and custodians would unexpectedly change the switches and program cables compounded the difficulty.
(32) Because logic diagrams did not then exist, Holberton developed a four-color pencil system to visualize the workings of the master programmer. This drive to visualize also extended to the machine as a whole.
(33) Both software and feminine sexuality reveal the power that something that cannot be seen can have. . . . Responding to [Sadie] Plant's statement [cybernetics is feminisation], Alexander Galloway has argued, “the universality of [computer] protocol can give feminism something that it never had at its disposal, the obliteration of the masculine from beginning to end.” Protocol, Galloway asserts, is inherently antipatriarchy.

Points out Hopper dream of automatic programming that is also significant to Rosenberg.

Hopper story complicates feminist readings.

(34) To put Hopper and the “ENIAC girls” together is to erase the difference between Hopper, a singular hero who always defined herself as a mathematician, and nameless disappearing computer operators. It is also to deny personal history: Hopper, a social conservative from a privileged background, stated many times that she was not a feminist, and Hopper's stances could be perceived as antifeminist (while the highest-ranking female officer in the Navy, she argued that women were incapable of serving in combat duty). Not accidentally, Hopper's dream, her drive for automatic computing, was to put the programmer inside the computer and thus to rehumanize the mathematician: pseudocode was to free the mathematician and her brain from the shackles of programming.

Bureaucracies within the Machine
(34) The conflation of instruction with action, which makes computers understood as software and hardware machines such a compelling model of neoliberal governmentality and which resuscitates dreams of sovereign power, depends on incorporating historical programming hierarchies within the machine.
(34) Programming, even at what has belatedly been recognized as its origin, was a hierarchical affair.
(35) SAGE, however, not only taught people how to code but also inculcated a strict division of programming in which senior programmers (later systems analysts), who developed program specifications, were separated from programmers, who worked on coding specifications; they in turn were separated from the coders who turned coding specifications into documented machine code.
(36) Kraft targets structured programming as de-skilling: through it, programming was turned from a craft to an industrialized practice in which workers were reduced to interchangeable detail workers.
(36) Not surprisingly, having little to no contact with the actual machine enhances one's ability to think abstractly rather than numerically. . . . Gotos make difficult the conflation of instruction with its product—the reduction of process to command—that grounds the emergence of software as a concrete entity and commodity. That is, gotos make it difficult for the source program to act as a legible source.
(37) Indeed, structured programming, which emphasizes programming as a problem of flow, is giving way to data abstraction, which views programming as a problem of interrelated objects, and hides far more than the machine. Data abstraction depends on information hiding, on the nonreflection of changeable facts in software.
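The information hiding that data abstraction depends on can be sketched in a few lines of Python. The `Tally` class below is a hypothetical illustration of my own: clients program against a stable interface while the "changeable fact" of the representation stays unreflected in the code that uses it.

```python
# Data abstraction in miniature: the interface (increment/total) is all
# a client sees; the representation underneath is hidden and changeable.
# Tally is a hypothetical example, not one from Chun's text.
class Tally:
    def __init__(self):
        self._events = []          # hidden detail: could as easily be an int

    def increment(self):
        self._events.append(1)

    def total(self):
        return sum(self._events)

t = Tally()
t.increment()
t.increment()
print(t.total())  # -> 2
```

Swapping `self._events` for a plain integer counter would change nothing for the caller, which is precisely the "nonreflection of changeable facts" the passage describes.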

Data-driven programming as beginning of alternative to humans writing code suggested by Kittler.

(37-38) Thus abstraction both empowers the programmer and insists on his/her ignorance—the dream of a sovereign subject who knows and commands is constantly undone. . . . Abstraction is the computer's game, as is programming in the strictest and newest sense of the word: with “data-driven” programming, for instance, machine learning/artificial intelligence (computers as source code) has become mainstream.
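What "data-driven" programming means here, that the computer rather than the human authors the rule, can be shown in miniature. In this sketch (the samples and the midpoint rule are my own hypothetical illustration, not Chun's example), the decision threshold is computed from past data instead of being written by hand:

```python
# "Data-driven" programming in miniature: the rule is derived from
# labeled examples rather than authored. Data and method are
# hypothetical illustrations.
samples = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]

lows = [x for x, label in samples if label == "low"]
highs = [x for x, label in samples if label == "high"]

# The "program" (the threshold) is computed from past data: the midpoint
# between the largest "low" and the smallest "high" example.
threshold = (max(lows) + min(highs)) / 2

def classify(x):
    return "high" if x > threshold else "low"

print(threshold, classify(5.5))  # -> 5.0 high
```

The classifier embodies the book's larger claim about programmed visions: it can only predict a future that resembles the past data it was derived from.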
(38) Importantly, this stratification and disciplining of labor has a much longer history: human computing itself, as David Grier has documented, moved from an art to a routinized procedure through a separation of planners from calculators.
(40-41) As this example [WPA Math Tables Project] makes clear, such programming depended on mind-numbingly repetitive operations by the “dumb” and the downtrodden, whose inept or deceitful actions could disrupt the task at hand. Modern computing replaces these with vacuum tubes and transistors. As Alan Turing contended, “the class of problems capable of solutions by the machine can be defined fairly specifically . . . [namely] those problems which can be solved by human clerical labor, working to fixed rules, and without understanding.”
(41) Source code become “thing”—the erasure of execution—follows from the mechanization of these power relations, the reworking of subject-object relations through automation as both empowerment and enslavement and through repetition as both mastery and hell. Embedded within the notion of instruction as source and the drive to automate computing—relentlessly haunting them—is a constantly repeated narrative of liberation and empowerment, wizards and (ex-)slaves.

Automation and Sourcery
(41) Automatic programming is an abstraction that allows the production of computer-enabled human-readable code—key to the commodification and materialization of software and to the emergence of higher-level programming languages.

From pseudocode to source code as immaterial information, actualizing Turing short code, by separating imperative from action.

(41-42) Higher-level programming languages, unlike assembly language, explode one's instructions and enable one to forget the machine. . . . With programming languages, the product of programming would no longer be a running machine but rather this thing called software—something theoretically (if not practically) iterable, repeatable, reusable, no matter who wrote it or what machine it was destined for; something that inscribes the absence of both the programmer and the machine in its so-called writing. Programming languages enabled the separation of instruction from machine, of imperative from action, a move that fostered the change in the name of source code itself from “pseudo” to “source.” . . . Pseudocode, which enables one to move away from machine specificity, is called “information”—what later would become a ghostly immaterial substance—rather than code.
(42) This story of a “manly” struggle against automatic programming resonates with narratives of mechanical computing itself as “feminizing” numerical analysis.
(43) Automatic programming, seen as freeing oneself from both drudgery and knowledge, thus calls into question the simple narrative of it as dispersing a reluctant “priesthood” of machine programmers.
(45) This selling campaign not only pushed higher-level languages (by devaluing humanly produced programs), it also pushed new hardware: to run these programs, one needed more powerful machines.
(45) This “selling campaign” led to what many have heralded as the democratization of programming, the opening of the so-called priesthood of programmers.
(45) Higher-level programming languages—automatic programming—may have been sold as offering the programmer more and easier control, but they also necessitated even more blackboxing of the operations of the machine they supposedly instructed. Democratization did not displace professional programmers but rather buttressed their position as professionals by paradoxically decreasing their real power over their machines, by generalizing the engineering concept of information.
(46) The notion of the priesthood of programming erases this tension, making programming always already the object of jealous guardianship, and erasing programming's clerical underpinnings.

Causal Pleasure
(46-47) The distinction between programmers and users is gradually eroding. With higher-level languages, programmers are becoming more like simple users. Crucially, though, the gradual demotion of programmers has been offset by the power and pleasure of programming. . . . The progression from playwright to stage director to emperor is telling: programming languages, like neoliberal economics, model the world as a “game.” . . . Iterability produces both language and subject. Importantly, Weizenbaum views the making performative or automatically executable of words as the imposition of instrumental reason, inseparable from the process of “enlightenment” critiqued by the Frankfurt school.

Second code snippet is C++ hello world meant to be easily deciphered.

(47) Programming languages offer the lure of visibility, readability, logical if magical cause and effect.

Obvious tie to Phaedrus: writing as fetish, Weizenbaum compulsion to program, ignoring vicissitudes of execution like treating reading as knowing.

(48) The seeming ease of programming hides a greater difficulty—executability leads to unforeseen circumstances, unforeseen or buggy repetitions. Programming offers a power that, Weizenbaum argues, corrupts as any power does. What corrupts, Weizenbaum goes on to explain, however, is not simply ease, but also this combination of ease and difficulty. Weizenbaum argues that programming creates a new mental disorder: the compulsion to program, which he argues hackers, who “hack code” rather than “work,” suffer from (although he does note that not all hackers are compulsive programmers).
(48) According to Weizenbaum, because programming engages power rather than truth, it can induce a paranoid megalomania in the programmer. Because this knowledge is never enough, because a new bug always emerges, because an unforeseen wrinkle causes divergent unexpected behavior, the hacker can never stop.
(49) Although Weizenbaum is quick to pathologize hackers as pleasureless pitiful creatures, hackers themselves emphasize programming as pleasurable—and their lack of “usefulness” can actually be what is most productive and promising about programming. . . . Hacking reveals the extent to which source code can become a fetish: something endless that always leads us pleasurably, as well as anxiously, astray.

Source Code as Fetish
(49-50) Source code as source means that software functions as an axiom, as “a self-evident proposition requiring no formal demonstration to prove its truth, but received and assented to as soon as it is mentioned.” . . . As an axiomatic, it, as Gilles Deleuze and Felix Guattari argue, artificially limits decodings. It temporarily limits what can be decoded, put into motion, by setting up an artificial limit—the artificial limit of programmability—that seeks to separate information from entropy, by designating some entropy information and other “non-intentional” entropy noise. . . . Code, however, is a medium in the full sense of the word. As a medium, it channels the ghost that we imagine runs the machine—that we see as we don't see—when we gaze at our screen's ghostly images.
(50) Understood this way, source code is a fetish.
(50) A fetish allows one to visualize what is unknown—to substitute images for causes.
(51) The parallel to source code seems obvious: we “primitive folk” worship source code as a magical entity—as a source of causality—when in truth the power lies elsewhere, most importantly, in social and machinic relations. If code is performative, its effectiveness relies on human and machinic rituals. . . . Against this magical execution, source code supposedly enables an understanding and a freedom—the ability to map and know the workings of the machine, but, again, only through a magical erasure of the gap between source and execution, an erasure of execution itself.

Readability of source code includes embedded natural language in its essential syntax as well as comments.

(51) Source code's readability is not simply due to comments that are embedded in the source code, but also due to English-based commands and programming styles designed for comprehensibility.
(52) This notion of source code as readable—as creating some outcome regardless of its machinic execution—underlies “codework” and other creative projects.
(52-53) Source code as fetish, understood psychoanalytically, embraces this nonteleological potential of source code, for the fetish is a deviation that does not “end” where it should. . . . Fetishists, importantly, know what they are doing—knowledge, again, is not an answer to fetishism, but rather what sustains it.
(53) To make explicit the parallels, source code, like the fetish, is a conversion of event into location—time into space—that does affect things, although not necessarily in the manner prescribed.

Source code fetish creates virtual authorial subject, even leads to putative critical act of revealing sources and connections.

(53) Code as fetish means that computer execution deviates from the so-called source, as source program does from programmer. . . . This erasure of the vicissitudes of execution coincides with the conflation of data with information, of information with knowledge—the assumption that what is most difficult is the capture, rather than the analysis, of data. This erasure of execution through source code as source creates an intentional authorial subject: the computer, the program, or the user, and this source is treated as the source of meaning. . . . To know the code is to have a form of “X-ray vision” that makes the inside and outside coincide, and the act of revealing sources or connections becomes a critical act in and of itself.

Hints at potential surprises in unknowns that may arise despite programmed vision, like deformation discoveries by McGann.

(54) Embracing software as thing, in theory and in practice, opens us to the ways in which the fact that we cannot know software can be an enabling condition: a way for us to engage the surprises generated by a programmability that, try as it might, cannot entirely prepare us for the future.


Computers that Roar

Proliferation of metaphors.

(55) Metaphors proliferate not only in interfaces, but also in computer architecture: from memory to buses, from gates to the concept of architecture itself. Metaphors similarly structure software: viruses, UNIX daemons, monitors, back orifice attacks (in which a remote computer controls the actions of one's computer), and so on. John von Neumann deliberately called the major components of modern (inhuman) computers “organs,” after cybernetic understanding of the human nervous system.
(55-56) The role of metaphor, however, is not simply one way. Like metaphor itself, it moves back and forth. Computers have become metaphors for the mind, for culture, for society, for the body, affecting the ways in which we experience and conceive of “real” space. . . . This vaguest understanding—software as thing—is neither accidental to nor a contradiction of the computer as metaphor, but rather grounds its appeal.
(56) George
Lakoff and Mark Johnson argue, “The essence of metaphor is understanding and experiencing one kind of thing in terms of another.” . . . Crucially, metaphors do not simply conceptualize a preexisting reality; they also create reality.
(57) Paul
Ricoeur, focusing more on metaphor as a linguistic entity, similarly stresses the centrality and creative power of metaphor. . . . This movement from surprise to understanding is mirrored in metaphor itself, which is a mode of animation, of change—it makes things visible, alive, and actual by representing things in a state of activity.

Computers as metaphors for substitution itself.

(57) Computers, understood as universal machines, stand in for substitution itself. . . . Less obviously, computers—software in particular—also concretize Lakoff and Johnson's notion of metaphors as concepts that govern, that form consistent conceptual systems: software is an invisible program that governs, that makes possible certain actions. But if computers are metaphors for metaphors, they also (pleasurably) disorder, they animate the categorical archival system that grounds knowledge.
(57-58) Key to understanding the power of software—software as power—is its very ambiguous thingliness, for it grounds software's attractiveness as a way to map—to understand and conceptualize—how power operates in a world marked by complexity and ambiguity, in a world filled with things we cannot fully understand, even though these things are marked by, and driven by, rules that should be understandable, that are based on understandability.


2 Daemonic Interfaces, Empowering Obfuscations
(59) Interfaces have become functional analogs to ideology
and its critique—from ideology as false consciousness to ideology as fetishistic logic, interfaces seem to concretize our relation to invisible (or barely visible) “sources” and substructures.
(60) Indeed, the interface is “haunted” by processes hidden by our seemingly transparent GUIs that make us even more vulnerable online, from malicious “back doors” to mundane data gathering systems. Similar to chapter 1, this chapter thus does not argue we need to move beyond specters and the undead, but rather contends that we should make our interfaces more productively spectral—by reworking rather than simply shunning the usual modes of “user empowerment.”

Interface, Intrafaith
(62) Since exhaustive and unambiguous description was difficult, if not impossible, one needed to work “interactively”--not just automatically—with a computer. . . . The goal, then, was to develop artificial systems to combat human frailty by usurping the human.

Direct Manipulation

Direct manipulation leads to Malabou flexibility.

(63) Crucially, Shneiderman posits direct manipulation as a means to overcome users' resistance; as a way to dissipate hostility and grudging acceptance and instead to foster enthusiasm by developing feelings of mastery. . . . This new spirit of capitalism fosters commitment and enthusiasm—emotions not guaranteed by pay for working under duress—through management techniques that stress “versatility, job flexibility, and the ability to learn and adapt to new duties.” . . . In such a system, Malabou underscores, drawing from Boltanski and Chiapello, flexibility is capitulation and normative, and “everyone lives in a state of permanent anxiety about being disconnected, rejected, abandoned.”
(64) What does help, though, is direct engagement: an interface designed around plausible and clear actions.
(64-65)
Laurel's move to theater is both interesting and interested, and it resonates strongly both with Weizenbaum's parallel between programmer as lawgiver/playwright discussed previously and with Edwards's diagnosis of the computer as a metaphor of the closed world, a term also drawn from literary criticism.
(65) Because events happen so logically, users accept them as probable and then as certain. Consequently, this system ensures that users universally suspend their disbelief. This narrowing also creates pleasure.
(65-66) In Laurel's view, the constraints the designer produces do not restrict freedom; they ensure it. Complete freedom does not enhance creativity. . . . To buttress this feeling of mastery, disconcerting coincidences and irrelevant actions that can expose the inner workings of programs must be eliminated.
(66) Laurel's conception of freedom, however, is disturbingly banal: the true experience of freedom may indeed be closer to an existential nightmare than to a pleasant paranoid dream.

Interfaces as Ideology
(66) In a
formal sense computers understood as comprising software and hardware are ideology machines. They fulfill almost every formal definition of ideology we have.

Operating systems interpellate users actually and rhetorically; blind faith supplants knowledge that was never there.

(66-67) Interfaces and operating systems produce “users”--one and all. Without OS there would be no access to hardware; without OS there would be no actions, no practices, and thus no user. Each OS, in its extramedial advertisements, interpellates a “user”: it calls it a name, offering it a name or image with which to identify.
(67) Computer programs shamelessly use shifters—pronouns like “my” and “you”--that address you, and everyone else, as a subject. . . . Interfaces are based on a fetishistic logic. Users know very well that their folders and desktops are not really folders and desktops, but they treat them as if they were—by referring to them as folders and as desktops. This logic is, according to Slavoj
Zizek, crucial to ideology. . . . Through the illusion of meaning and causality—the idea of a law-driven system—do we not cover over the fact that we do not and cannot fully understand or control computation? That computers increasingly design each other and that our use is—to an extent—a supplication, a blind faith?
(67-68) As many historians have argued, the time-sharing operating systems developed in the 1970s spawned the “personal computer.” That is, as ideology creates subjects, interactive and seemingly real-time interfaces create users who believe they are the “source” of the computer's action.

Real-time Sourcery
(68) What is authentic or real is what transpires in real time, but real time is real not only because of this indexicality—this pointing to elsewhere—but also because of its quick reactions to users' inputs.
(69) This volitional mobility, [Tara]
McPherson argues, reveals that the “hype” surrounding the Internet does have some phenomenological backing. . . . Crucially, this fostering of a belief in true change—in the ability to change, in the direct causality between one's actions and a result—is programmed into the interface.
(69) Interactive pleasure does not simply derive from a representation of user actions in a causally plausible manner; it also comes from “user amplification.”
(69) Julian
Dibbell has argued eloquently that online spaces are themselves essentially maps, that is, diagrams that we seek to inhabit. Maps and mapping are also the means by which we “figure out” power and our relation to a larger social entity.
(71) UTOPIA seemingly enables what Fredric
Jameson has called a “cognitive map.” . . . The functioning of these smart interfaces parallels Marxist ideology critique. The veil of ideology is torn asunder by grasping the relations between the action of individual actors and the system as a whole.
(71) Through this process the invisible whole emerges as a thing, as something in its own right, and users emerge as mapping subjects.
(71-72) The fact that software, with its onion-like structure (a product of programming languages), acts both as ideology
and as ideology critique—as a concealing and as a means of revealing—also breaks the analogy between software and ideology, or perhaps reveals the fact that ideology always also contains within itself ideology critique.

Postmodern Confusion, Interface Clarity
(72) This drive to constantly map—and to understand through mapping—responds to postmodernist disorientation.
(73) Cognitive mapping combines the geographer Kevin Lynch's discussion of the ability of citizens to map the city around them with Althusser's definition of ideology. . . . Such a map, which Jameson in 1983 argues we did not yet have, is necessary in order to understand the totality that is capitalism; because the profit motive and the logic of capitalism set absolute barriers and limits to social changes and transformations, we need a way to comprehend its totality and our relation to it.

Interface produsers a response to postmodernist disorientation, but not the mapping envisioned by Jameson.

(73-74) Importantly, Jameson does argue that cyberpunk and other literature/art that deals with the thematics of mechanical reproduction, as well as paranoid conspiracy theories, offer “a degraded figure of the great multinational space that remains to be cognitively mapped.” . . . This chapter has been arguing that interfaces—with their constant emphasis on the act of making connections—would seem to instantiate an aesthetics of cognitive mapping. They provide a mapping—a “cognitive connectionism”--that respects the space of multinational capital and the ways in which that totality is not immediately experienceable or knowable, and yet also enables agents to act as sources. Indeed, many activists have argued that the Internet and text messaging offer effective ways of intervening on global capitalism. Rather than immobilized subjects, we have a surfeit of “produsers,” who diligently produce, post, and click, providing content for “free.”
(74) Interfaces are not the cognitive maps called for by Jameson because they do not engage the totality of class relations, but rather focus on totality differently figured (information networks, etc.).

Suggests immersion in networked flows alternatives to mapping as proposed by Jameson (Berry, Galloway)?

(75) Could it be that rather than resort to maps, we need to immerse ourselves in networked flows—time-based movements that both underlie and frustrate maps?

As We May Think
(75) Our digital interfaces are an analogy to an analogy. As David
Mindell argues, whenever we use a mouse or look at our screens, we are engaging in activities that precede our digital computers. The canonization of Vannevar Bush's article “As We May Think” within new media studies makes clear the nostalgia for an analog future.
(76-77) A companion to Jameson's postmodern individual is the bewildered scientist, who is incapable of making sense of—of mapping—the scientific archive. . . . Unlike Jameson, Bush's solution is mechanical rather than political. . . . The solution Bush envisions is mechanized access: the memex.
(77) Importantly, the code space of the memex did not render these items into abstracted, disembodied information, but rather linked them together within an invisible space of place markers.
(78) This insistence on the memex as mechanical was not simply a concession to cost, but also stemmed from an understanding of the mechanical as more intuitive, more personal, as more analog and more lasting.
(78) This permanence of the record—of microfilm—would not only grant once more the privilege of forgetting (as though any of us could ever be exempt from such a deprivation), it also would do so while saving us from repetition: repetitive thought and repetitions in thought.
(79) Reading against the grain of Bush's argument, I contend that this uncertainty stems not from the lack of devices such as the memex, but from the act of reading itself.

Act of reading assumed by Bush and others to automatically entail understanding, the same fallacy pointed out by Plato, a human parallel to ignoring vicissitudes of execution in computers; conflations of storage with access, memory with storage, word with action.

(79) In Bush's writing, and in prognoses of the information revolution more generally, there is no difference between access to and understanding of the record, between what would be called, perhaps symptomatically, “machine reading” and human reading and comprehension, between information and argument, between mapping and understanding. The difficulty supposedly lies in selecting the data, not in reading it, for it is assumed that reading is a trivial act, a simple comprehension of the record's content.
(80) A machine alone, however, cannot turn “an
information explosion into a knowledge explosion”; it cannot fulfill the promise of what Michel Foucault has called “traditional history.”
(80) The example of Mendel as source is also revealing because this belief in sources—Mendel as the source of genetics, memex as the source of the Internet, code as the source of our computers—ultimately is based on a conflation of storage with access, of memory with storage, of word with action.

Liberation by undead memories; compare to Derrida archive.

(80) Repetition, however, is not simply a sign of thought wasted, but also of thought disseminated. . . . Computers “liberate” us from memory through their undead memories, and their interfaces mimic the workings of simple analog systems in which there is some actual connection between what we see and do, between different systems being modeled.

Repeating Bush
(82-83) At the heart of this system of augmentation is a theory of practice, of training. According to
Engelbart, we are already augmented through our use of language, customs, and tools (symbols and processes). . . . This system was thus to augment the human by producing more human cut-and-try technologies. . . . At the base of Engelbart's system is a trainable exemplary “primitive” who can, through step-by-step (digital?) training, improve his or her skills.

Engelbart bulldozer metaphor for human augmentation versus moving masses en masse.

(83) Rather than a system designed to move masses en masse, these interfaces personalize mass movement and destruction. It is everyone and all in a bulldozer; everyone and all's actions amplified. Engelbart's system underscores the key neoliberal quality of personal empowerment—the individual's ability to see, steer, and creatively destroy—as vital to societal development.
(84) Key to “human augmentation” is the establishment of users who act and through their actions believe—all via a linear narrative that praises nonlinear processes of empowering.
(85) Supplementing this cinematic call to identity with Engelbart, however, is a televisual structure of technologically mediated liveness and interpersonal discourse. . . . The demo, in other words, has been so “life changing” not simply because of the technology it featured, but also because of the images and visions of interconnectivity it established through its visual presentation.
(85-87) Through our identification with Engelbart via his demo we emerge as sovereign subjects—subjects of files. Not accidentally, Engelbart's tasks are administrative: compiling lists, assigning ownership to files. . . . The personalizing of files—both virtually and legally through various “access to information” laws—individualizes and totalizes.
(87) At the alleged origins of interactive real-time interfaces, then, is a desire to control, to “govern,” based on a promise of transparent technologically mediated contact. It is a vision of permanence and flexibility: the files are permanently stored and the user's information tracked. Through this, this special interface has come to stand in for the machine itself, erasing the medium as it proliferates its specters, making our machines transparent producers of unreal visions.

Daemonic Media
(87) This spectrality makes our media daemonic: inhabited by invisible, orphaned processes that, perhaps like Socrates's
daimonion, help us in our times of need.
(88-89) The other famous daemon, more directly related to those spawning UNIX processes, is known as
Maxwell's daemon. . . . Daemonic processes, therefore, are slaves that work tirelessly and, like all slaves, define and challenge the position of the master.

Daemonic processes fit Derridean analysis of writing.

(89) Interactive operating systems, such as UNIX, transform the computer from a machine run by human operators in batch mode to “alive” personal machines, which respond to the user's commands. . . . Real-time processes, in other words, make the user the “source” of the action, but only by orphaning those writing processes without which there could be no user. . . . As a symptom of this desire for the transparency of knowledge, for the reigning of rationality, daemon is also a backronym. Since the first daemon automatically made tape backups for the file system, it has been widely and erroneously assumed that daemon initially stood for “Disk And Execution MONitor” (this alleged “source” phrase was later adopted).
(89 footnote 94) This resonates with Derrida's analysis of writing, in contrast to “living logs,” as “orphaned” in “Plato's Pharmacy,”
Dissemination.

Ghostly Interfaces, Confused Mappings
(89-90) This mapping makes us believe that the world, like the computer, really comprises invisible hands and rules that we can track via their visual manifestations. Hence the popularity of software as a metaphor for almost everything—culture, genetics, life—and the reduction of everything to transparent (and in Baudrillard's term “obscene”) communication.
(90-91) In this definition,
smart means tracking users' actions: like the iTunes and Amazon.com user-driven product recommendation systems, smart technologies automatically capture information about users' personal preferences and usage and send it to a central database to be processed and analyzed.

Beyond Manovich transcoding, invisible readings as way to think about software studies, closer to Kittler schematism of perceptibility, but seems to reject call for deep understanding of ECT in favor of interrogating interfaces.

(91) Manovich argues that in order to understand new media we need to engage both layers, for although the surface layer may seem like every other media, the hidden layer, computation, is where the true difference between new and old media—programmability—lies. He thus argues that we must move from media studies to software studies, and the principle of transcoding is one way to start to think about software studies.
(91) The problem with Manovich's notion of transcoding is that it focuses on static data and treats computation as a mere translation. Not only does programmability mean that images are manipulable in new ways; it also means that one's computer constantly acts in ways beyond one's control. To see software as merely transcoding erases the computation necessary for computers to run. The computer's duplicitous reading does not simply translate or transcode code into text/image/sound, or vice versa; its reading—which conflates reading and writing (for a computer, to read is to write elsewhere)--also partakes in other invisible readings.
(92) To be clear: this chapter is not a call to a return to an age when one could see and comprehend the actions of our computers. . . . Neither is this chapter an indictment of software programming. . . . It is, however, an argument against common-sense notions of software precisely because of their status as common sense (and in this sense they fulfill Antonio Gramsci's notion of ideology as hegemonic common sense); because of the histories and gazes such notions erase; and because of the future they point toward. Software has become a common sense shorthand for culture, and hardware shorthand for nature. . . . People may deny ideology, but they don't deny software—and they attribute to software, metaphorically, greater powers than have been attributed to ideology. Our interactions with software have disciplined us, created certain expectations about cause and effect, offered us pleasure and power—a way to navigate our neoliberal world—that we believe should be transferable elsewhere. It has also fostered our belief in the world as neoliberal: as an economic game that follows certain rules. The notion of software has crept into our critical vocabulary in mostly uninterrogated ways.

Goal of nontransparent data tracking as outcome of critically interrogating interfaces.

(92) To do this, we need, again, to understand the ways in which the drive to map and to promote transparency enables nontransparent data tracking that cuts across the governmental, the political, the commercial, and the personal.
(93) These projects [Chris Csikszentmihalyi's Government Information Awareness; Open Government Initiative] are critical not simply for the “transparency” or information they seem to offer, but also because they give users the collective power to transform these databases and these debates over what counts as evidence.
(94) Deliberately making databases dirty—by providing too much or erroneous information—may be the most effective way of preserving something like privacy.
(94) Furthermore, the more an interface is programmed—the more it tries to meet and anticipate users' needs and to direct their actions—the more confused and confusing it becomes. . . . What certain types of mapping and actions eradicate is the ways in which we are not the only agents: as Adrian
Mackenzie argues, software is a neighborhood, an amalgam that brings together many different modes of action.

Computers as metaphors for totality; stability from ephemerality.

(94) Understanding computers as metaphors for metaphor also means engaging the artificiality of metaphor—producing new metaphors—that make strange and estranging the world around us. This aesthetic creation would not seek to make a totality visible, so we can then navigate it, but rather, through deliberately odd and artificial means, to create what we consider totality to be.
(95) Last, this interrogation of transparency, mapping, and interfaces needs to address the ways in which the digital rather than simply offering a stable material for memory is also fundamentally ephemeral. . . . The question becomes: how did constant regeneration become stable transmission?


[Part II] Regenerating Archives
(97) Memory underlies the emergence of the computer as we now know it: the move from calculator to computer depended on “regenerative memory.” . . . Memory hardens information—turning it from a measure of possibility into a “thing,” while also erasing the difference between instruction and data (computer memory treats them indistinguishably).

Meaning of archive and source code only determined after the fact (Derrida again).

(99) The archive thus buttresses a certain definition of public as state authority through the transformation, as Derrida notes, of a private domicile into a public one. It is also based on a promise that links the past to the future: whatever it possessed at the beginning of an archon's term shall remain at the end; an archive conserves. This conservative promise is tied to another: the promise to respect precedent, that is, to follow past rules in order to guarantee a just future. . . . The meaning of an archive, like source code, can only be determined after the fact. It is a promise to the future.
(100) Traces do not simply degenerate at a faster pace, they also transform themselves. This transformation challenges the process of consignation—of indexing and organizing—that grounds the archive; it also fundamentally changes how archived materials are retrieved, or “reanimated” and thus experienced.


3 Order from Order, or Life According to Software
(101) To repeat, software is axiomatic. As a first principle, it fastens in place a certain logic of cause and effect, a causal pleasure that erases execution and reduces programming to an act of writing.
(101) The history of computing and the history of biology are littered with moments of deliberate connection and astonished revelation, from computer storage as “memory” to regulatory genes as “switches,” from genetic to evolutionary “programs.”
(103) Importantly, von Neumann's neurons are already cyberneticized: they are, as chapter 4 elaborates, the idealized neurons of Warren McCulloch and Walter Pitts, who used “neurons” to instantiate Turing's universal machine.
(103) In von Neumann's analogy, code or program was not the pivot between the machine and the biological.
(103) Rather, code as logos existed elsewhere and emanated from elsewhere—it was part of a larger epistemic field of biopolitical programmability. Part I of this book focuses on gendered military relations; this chapter examines the intertwined and intertwining importance of Mendelian genetics and biopower more generally.
(103-104) To make this point, this chapter perhaps perversely reads Erwin Schrödinger's What Is Life?—widely though controversially considered to be the ur-text of modern genetics—as What Is Software? . . . Untangling the ways that software and heredity intersect to create what François Jacob would call the “logic of life”—a logic that reduces action to word, life to a programmable code—this chapter argues that early to mid-twentieth-century genetics and eugenics prefigured the emergence of software.

Software: Not Always Already There
(104) Software has become such a powerful conceptual tool that it is hard to remember that it did not always exist. The privilege accorded to software as always already there is remarkable, especially in terms of theorizing biological phenomena.
(104-105) This emphasis on software and computers is also evident in the critical and historical literature. . . . Software becomes analogous to rhetoric itself.
(105) That [Lily] Kay's and [Richard] Doyle's texts are two of the most brilliant and insightful on the intersections between cybernetics and genetics is not irrelevant: in order to think genetics theoretically, it seems necessary to assume software, and to assume a line of influence moving from computers to biology.

Feedback, not software joins dead and alive in early cybernetics; parallel in process control models.

(106) In cybernetics as first conceived, there is no separation between software and hardware, an impossible distinction during the 1940s; Wiener's algorithms and instructions consist of relays. What links the dead and the alive—perhaps making them both undead—is feedback.
(107) In other words, what if the text lauded as launching modern genetics—because it postulated the existence of a genetic code-script and because it inspired physicists such as Francis Crick and James Watson to move into biology from physics (even though the text was widely considered to be scientifically inaccurate or repetitive even at the time of its writing)—also inadvertently “launched” modern stored-program computers? . . . Both computer as memory machine and genetics as program are thus part of what Richard Panek has provocatively called “the invisible century,” a move within the sciences in the twentieth century away from studying what is simply visible and experientially based—such as organization—toward speculation, toward theorizing “that there is more to the universe than meets the eye.”

What Is Software?
(110) Part of the “ideality” of Schrödinger's system stems from the fact that execution is inscribed within these instructions. For Schrödinger, the code-script was not only a plan but also execution.
(110) This notion of code as both architect's plan and builder's craft in one clearly draws from, even as it mutates, the seventeenth-century notion of nature as the Book of Life.
(111) By bringing biology to the attention of physicists, however, [Gunther] Stent contends, What Is Life? became “the 'Uncle Tom's Cabin' of the revolution in biology that, when the dust had cleared, left molecular biology as its legacy.” . . . According to [Leah] Ceccarelli's rhetorical reading, Schrödinger's text reveals the value that untrue, unoriginal—“vapory”—science can have.
(112) Stepping aside from debates over the value of What Is Life?, I want to use Doyle's argument to outline another debate: the relevance of this text, and modern conceptions of heredity in general, to the emergence of software, for Schrödinger's positing of a code that is both law and execution arguably foreshadows code as computer.
(112) The archive is what Foucault earlier calls the “positive unconscious of language” in The Order of Things: An Archaeology of the Human Sciences; that is, the well-defined regularity of empirical knowledge, which is part of scientific discourses, even as it eludes the consciousness of the scientist.
(113) Thus, regularity and subjectivity, for Foucault, are found in dispersion rather than in continuity or in stability.

Compare role of language defining what is visible as example of programmed vision to Hayles comment on discursivity defining postmodern subject.

(113) That is, the desire to map what is visible and what is articulable is key to understanding the impact of code and programmability—to the linking of the two “programmed visions.” Programmability is thus not only crucial to understanding the operation of language, but also to how language comes more and more to stand in for—becomes the essence or generator of—what is visible.
(114) This relation of what can be seen and what is not hidden yet driving—and which is not terminal—coincides with our perception of the relationship between a program and its interface. This is not to say that Foucault views statements as “source code”; this is the opposite of his approach. This is to say that this notion of an operational field of enunciative function resonates with von Neumann's notion of code as a dynamic “context,” as something that does not pin down a meaning, but rather guides—makes possible—certain calculations.

The Returns of Laplace
(115) Although hardware grounds universality (or something close to universality, that is, digitality), software, as an entity independent of hardware, is crucial to machines as “self-reproducing,” that is, key to Jacob's reading of Schrödinger's notion of heredity as memory.

Invisible Transmissions, Visible Results
(120) Johannsen argued that the genotype, rather than the phenotype, was transferred between parent and offspring: according to his argument, both shared the exact same genotype, making the genotype, like computer memory, a strangely ahistorical entity nonetheless key for any historical relationship.

Eugenics as Nature
(120) The understanding of genes as ahistorical also depends on the separation of transmission from development.
(120) The nature-nurture divide, however, also originated in the nineteenth century, with Galton's refutation of what Ernst Mayr—in language clearly resonating with computer technology—has called Darwin's theory of “soft inheritance.”
(121) The battle between nature and nurture was a battle over what type of control to use: eugenics or social welfare.
(121) Genetics separated cultural and biological transmission, but in doing so also made biological transmission a question of transmissible, cultural knowledge—a question of and for the archive.
(122-123) Eugenics, in other words, is a key component of what Michel Foucault in his History of Sexuality, volume 1, called biopower. . . . Sexuality knits the individual to the population and privileges the population—and its betterment—over the individual, bizarrely absolving him of both rights and responsibilities. Also, although moving responsibility from the individual to society in general, eugenics offers the individual the means to “map” an otherwise invisible system so that she can make the right marriage decisions.
(124) Indeed breeding encapsulates an early logic of programmability that inspired genetics and recognition of heredity as physical transmission. Eugenics, in other words, was not simply a factor driving the development of high-speed mass calculation at the level of content (the statistical demands of the biometricians helped foster mass calculation), but also at the level of logic or of operationality.

Breeding Programs
(125) The interrelationship of eugenics, genetics, breeding, and capital was made most explicit by Charles Davenport, who was also a founder of the American Breeders' Association. . . . Tracking everything from eye color to criminality, Davenport saw his work as essentially determining the independent unit characters and their impact on American society.

Breedability as proof of programmability based on repetition as evidence of inheritance.

(126) Breedability became a proof of programmability in a bizarre logic that assumed any repetition evidence of inheritance, that is, repetition with no difference. . . . This version of programmability also asserts a reverse-programmability, that is, the ability to determine an original algorithm—a strategy, or plan for action—based on interactions with unfolding events.

Programmability Continued
(127) In particular, the failure of the old eugenics movement created a space for a new, more rigorous physiochemically based study of human heredity and behavior still focused on social betterment.
(128) As information displaced older visions of chemical and biological specificity and DNA was articulated as a programmed text, “the material control of life would be now supplemented by the promise of controlling its form and logos, its information.” In Who Wrote the Book of Life?, as noted earlier, Kay elegantly and convincingly reveals the ways in which information was adopted as a metaphor into biology in the period from 1953 to 1967.
(128) While agreeing that the move toward information changed discussions of biological control, this chapter argues that this turn to information as source did not simply emerge from elsewhere. Rather, it stemmed in part from the early eugenic belief in programmability, in an invisible mechanistic causality. . . . Computer code-script conflates legislation with execution, but also reveals both the possibility and impossibility of a eugenic programmability: although “rules” may be followed, goals often are not. As well, code as logos resonates with the American eugenics movement's emphasis on individual decision making, rather than its overt message of social engineering.
(129) Liberalism needs both to produce freedom and to devise mechanisms to control it. Neoliberalism, Foucault goes on to argue, focuses on the production and control of this freedom through competition. Thus, biopower as a form of power focused on “yes” rather than “no,” fostering life rather than death, encourages and is encouraged by a certain drive for life and independence, albeit one that is also linked to a tightly prescribed logic of programmability.
(129) This combination of empowerment with restrictions, of feelings of power generated by systems initially designed to restrict, drives the seductiveness of computing as a metaphor and as a way of encapsulating and experiencing power.
(130) This move away from specific genes and their corresponding functions (the idea that the genome is a code that can be cracked, that it is analogous to a software program that simply drives protein expression) toward a more nuanced understanding of the cell as comprised of various networks and signals (cell as an ecosystem) poses important new questions to science and technology studies, including a serious challenge to a politics that endorses the importance of nurture over nature. . . . Both brain as network and neoliberal management techniques move away from the notion of a central program or central power toward a decentralized network of agents.

Plasticity as new metaphor for programmable visions, superseding flexibility; compare to Hayles discussion of Malabou.

(130-131) Computing as well has moved toward less strictly “programmable” systems—in theory if not yet in everyday practice. . . . Crucially, Malabou does not simply denounce neurobiology, but rather engages it closely to argue for the difference between flexibility—which is capitulation—and plasticity. Plasticity, situated between two extremes—“the taking on of form” and “the annihilation of form”—enables a double movement, an explosive self-creation, that offers resistance to global neoliberalism. How might we understand plasticity in relation to the ongoing transformation of programmable visions?


The Undead of Information
(133) Memory and storage are different. . . . By bringing memory and storage together, we bring together the past and the future; we also bring together the machinic and the biological into what we might call the archive.
(134) Intriguingly, the Mystic Writing Pad—or more properly its modern version, the Etch A Sketch®—returns as the model for the hard drive in a textbook on computer forensics.
(134) Data on a hard drive, Kruse and Heiser emphasize, leave a permanent trace, even as the drive makes room for new “impressions.” This description of the hard drive, written by information security experts, eerily repeats Freud's description of the unconscious. It also highlights the work that “memory” (in contrast to archiving) entails—to be retrieved, these traces must be submitted to a rigorous process of reading.
(135) Moreover [beyond primacy of RNA model and Derrida linking writing, cybernetics, memory], to understand information as undead, we need to understand its relation to that other undead thing—the commodity. If a commodity is, as Marx famously argued, a “sensible supersensible thing,” information would seem to be its complement: a supersensible sensible thing. . . . That is, if information is a commodity, it is not simply due to historical circumstances or to structural changes; it is also because commodities, like information, depend on a ghostly abstraction.
(135) That ghostly jelly, [Thomas] Keenan argues, is humanity—the common humanity that survives in the things exchanged and, like language, makes exchange possible.


4 Always Already There, or Software as Memory
(137) Digitization surprisingly emerged as a preservation method in the 1990s by becoming a major form of “reformatting,” a procedure designed to save intellectual content threatened by decaying materials—such as acidic wood-pulp paper and silver-nitrate film—by reproducing it. . . . This celebration of the digital as archives' salvation stems in part from how digital files address another key archival issue: access. . . . Digital archives are allegedly H. G. Wells's “World Brain” and André Malraux's museum without walls, among other dreams, come true.
(138) The material limits of materials not only cause the future to be volatile, but also, again, so do the ever-updating, ever-proliferating, and increasingly incompatible soft and hard technologies—the challenges to the historical preservation of software outlined in the introduction to this book.

Information is always embodied, and digitally it is messy albeit axiomatic.

(139) Rather than making everything universally equivalent, the digital has exploded differences among media formats. . . . The information traveling through computers is not 1s and 0s; beneath binary digits and logic lies a messy, noisy world of signals and interference. Information—if it exists—is always embodied, whether in a machine or an animal. To make information appear disembodied requires a lot of work, work that is glossed over if we just accept the digital as operating through 1s and 0s.
(139) Revising the working thesis of chapters 2 and 3—software as axiomatic—chapter 4 contends that the digital is axiomatic.

Biological Abstractions
(140) In fact, rather than a temporary omission, abstractness was von Neumann's modus operandi, central to the “axiomatic” (blackboxing) method of his general theory of natural and artificial automata and consonant with his game theory work.
(140-141) This fateful abstraction, this erasure of the vicissitudes of electricity and magnetism, surprisingly depends on an analogy to the human nervous system. . . . In accordance with McCulloch and Pitts, von Neumann expunges the messy materiality of these “neurons.” . . . This analogy thus depends on and enables a reduction of both technological and biological components to blackboxes.

Nothing but Analog, All the Way Down

Compare proposition that it is analog all the way down redefining analogy itself to Hayles analysis of cybernetics.

(142-143) Based on an analogy to computing elements, neurons, which themselves grounded computing elements as digital, are declared digital: an initial analogy is reversed and turned into ontology. At the base of this logic lies a redefinition of analogy itself as a complicated mechanism that operates on continuous quantities, rather than on discrete units.

Analog to What?
(143) Analog elements, even as they “ground” digital ones such as transistors and neurons, are not simple predecessors to digital computers.
(144) Not only were analog computers not viewed or accepted as stepping-stones toward digital ones, but also the division itself between analog and digital electronic computers was not clear.
(145) At the core of early analog analyzers lie ordinary differential equations. Similar ordinary differential equations describe seemingly disparate and unrelated electrical, electromechanical, mechanical, and chemical phenomena, all of which can be understood as closed “circuits.” Analog machines, in this sense, work because ordinary differential equations are universal at a large scale, and because Newton's laws describing force can also describe electrical charge and water capacity.
(147) Crucially, the differential analyzer employed “generative” functions—that is, the output could feed into itself. It could thus solve for variables on both sides of the equation.
(147) These generative functions mark a fundamental difference between digital machines, which solve problems step by step, and analog machines.
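The “generative” feedback described here, in which an integrator's output is wired back into its own input, can be sketched numerically. The toy below is a digital stand-in (not a model of any actual analyzer) that integrates the equation dy/dt = -y by small Euler steps; the unknown y appears on both sides, which is exactly the situation the differential analyzer's feedback loops handled mechanically.

```python
# Hypothetical digital sketch of "generative" feedback: the output y of
# each integration step is fed back in as the input to the next step.
def integrate(y0, dt, steps):
    """Euler-integrate dy/dt = -y, starting from y0."""
    y = y0
    for _ in range(steps):
        y += dt * (-y)  # output feeds into itself: a "generative" function
    return y

# After 1000 steps of size 0.001, y approximates e**-1 (about 0.368).
y_final = integrate(y0=1.0, dt=0.001, steps=1000)
```

The analyzer performed this loop continuously and in parallel; the step-by-step version here illustrates, by contrast, precisely the sequential character Chun attributes to digital machines.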

Digital computer becomes simulacrum, whereas analog was simulation.

(148-151) Because of this mechanical yet “live,” analogous relationship, analog machines have generally been conceptualized as more transparent and intuitive than digital ones. . . . The differential analyzer was not, as the digital computer would be, amenable to notions of “universal” disembodied information. The differential analyzer simulated other phenomena, whereas digital computers, by hiding mimesis, could simulate any other machine. That is, while both digital and analog computers depend on analogy, digital computers, through their analogy to the human nervous system (which we will see stemmed from a prior analogy between neurons and Turing machines), simulate other computing machines using numerical methods, rather than recreating specific mechanical/physical situations. . . . The first, the engineer's perspective, views computers as models and differential equations as approximations of real physical processes; the second, the mathematician's perspective, treats equations as predictors, rather than descriptors of physical systems—the computer becomes a simulacrum, rather than a simulation.
(151) The electronic machines of the 1950s and 1960s differed significantly from their mechanical predecessors. We thus need to be careful not to base arguments about analog machines as a whole on Vannevar Bush's early machines. . . . Op-amps as integrators, or even multipliers, were not “seeable” and graspable in the same fashion as wheels and disks.

Fordist logic and concealment of operation that reaches critical point according to Kittler when the last microchip is laid out on paper.

(151-152) The move to electronic not only deskilled operators, it also made computers mass producible. . . . Electronic analog and digital computers used mass-produced vacuum tubes and later transistors. Thus, both electronic analog and digital “machines” participate in Fordist logic: they automate calculation and production and make invisible the mathematics or calculations on which they rely.
(152) Digital machines, however, are more profoundly Fordist than analog ones. . . . It is in programming, or to be more precise, programming in opposition to coding, that analog and digital machines most differ. . . . However, whereas digital flow charts produce a sequence of individual operations, analog programming produces a “circuit” diagram of systematic relations.
(152) This complexity made it unlikely that analog computers could spawn or support code as logos. Code as logos—code as the machine—is intimately linked to digital design, which enables a strict step-by-step procedure that neatly translates time into space.

In the Beginning Was Logos (Again)
(153-154) In “A Logical Calculus of the Ideas Immanent in Nervous Activity,” McCulloch and Pitts seek to explain the operation of the brain in logical terms. . . . At its heart lies the equation of “the 'all-or-none' character of nervous activity” with propositional logic. It reduces neuronal action to a statement capable of being true or false, “to a proposition which proposed its adequate stimulus.” This equation once more conflates word with action: in this particular case, the firing of a neuron with the proposition that “made” it fire.
(154) Despite this, they argue that the all-or-none behavior of neurons makes them the fundamental psychic units or “psychons,” which can be compounded “to produce the equivalents of more complicated propositions” in a causal manner. . . . Even though they state that formal equivalence does not equal factual explanation, they also insist that the differences between actual and idealized action do not affect the conclusions that follow from their formal treatment, namely the discovery/generation of a logical calculus of neurons.
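The equation of “all-or-none” neural activity with propositional logic can be made concrete in a minimal sketch. The weighted-threshold form below is a simplification for illustration (McCulloch and Pitts worked with excitatory and inhibitory inputs against a threshold, not numeric weights), but it shows how firing/not-firing becomes true/false, and how compounding units yields “the equivalents of more complicated propositions.”

```python
# Illustrative sketch of an idealized all-or-none neuron: it either fires
# (1) or does not (0), so its output is a true/false proposition.
def mp_neuron(inputs, weights, threshold):
    """Fire iff the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Propositional connectives fall out as threshold settings:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)  # fires only if both fire

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)  # fires if either fires
```

Compounded, such units form the nets that, as the notes go on to say, “are inspired by and aspire to be Turing machines.”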
(155) In this schema, analog to digital conversion takes place at the level of data—the difference in machine technology is completely erased through a logic of equivalence.
(155) By calling the cortex a digital machine, McCulloch sought to displace the then popular theory of the mind as functioning mimetically. . . . This is most clearly seen in his 1959 “What the Frog's Eye Tells the Frog's Brain” (an article with J. Y. Lettvin, H. R. Maturana, and W. H. Pitts). . . . Neural nets are inspired by and aspire to be Turing machines. Von Neumann's use of McCulloch and Pitts's analysis is thus an odd and circular way of linking stored-memory digital computers to computing machines—once more, an over-determined discovery of a linkage between biology and computer technology, yet another turn of the double helix (before, of course, there was a double helix).
(155) This linkage not only establishes a common formal logic, it also enables the emergence of computer “memory.” Moving away from ideas of field-based, analogical notions of memory, McCulloch's neural nets produce transitory memories and ideas through circular loops. Drawing from Wiener's definition of information as order (negative entropy), McCulloch argues that ideas are information: they are regularities or invariants that conserve themselves as other things transform. . . . Causality runs only one way: one cannot decisively “reverse engineer” a neural net's prior state.
(156) This emergence of memory is thus, as [Geoffrey] Bowker notes, also a destruction of memory. . . . Memories are rendered into context-free circuits freed from memory, circuits that are necessary to the operation of the animal/machine.
(156-157) This notion of memory as circuit/signal underscores McCulloch's difference from cognitive psychology, which, following developments in computer technology, would consider the brain hardware and the mind software. . . . Notably, McCulloch in his later work did address software, or programs, but referred to them as instructions to be operated on by data in memory, rather than as stored themselves in memory. Instructions, in other words, did not drive the system—the logic, the logos, happened at the level of firing neurons.

The argument for memory, stored instructions, and the primacy of software is based on von Neumann's use of the McCulloch and Pitts neuron model rather than the Shannon communication model.

(157) Thus, by turning to McCulloch and Pitts rather than to Shannon, von Neumann gains a particular type of abstraction or logical calculus: an axiomatic abstraction and schematic design that greatly simplifies the behavior of its base components. Von Neumann also gains a parallel to the human nervous system, key to his later work on “general automata.” Last, he “gains” the concept of memory—a concept that he would fundamentally alter by asserting the existence of biological organs not known to exist. Through this hypothetical “memory organ,” and his discussion of the relationship between orders and data, his model would profoundly affect the development of cognitive science and artificial intelligence (AI) and life (AL). Through this memory organ, von Neumann would erase the difference between storage and memory, and also open up a different relationship between man and machine, one that would incorporate instructions—as a form of heredity—into the machine, making software fundamental.

Memories to Keep in Mind
(157) Von Neumann's work with natural and artificial automata in general reverses the arrow of the analogy established in “First Draft.” Rather than explaining computers in terms of the human nervous system, he elucidates the brain and its functioning in terms of computational processes.
(157-158) Computer storage devices as memory is no simple metaphor, since it asserts the existence of an undiscovered biological organ. . . . This guess regarding capacity assumes that the brain functions digitally, that it stores information as bits, which are then processed by the brain, rather than functioning more continuously in a “field-based” manner. Again, this assumption was by no means accepted whole-heartedly by biologists.
(159) The EDVAC was to increase the speed of calculation by putting some of those values inside the memory organ, making porous the boundaries of the machine. Memory instituted “a prosthesis of the inside.” Memory was not simply sequestered in the “organ”; it also bled into the central arithmetic unit, which, like every unit in the system, needed to store numbers in order to work.
(159) To contain or localize memory, von Neumann organized it hierarchically: there were to be many memory organs, defined by access time rather than content. . . . In this last step [input output devices], the borders of the organism and the machine explode. Rather than memory comprising an image of the world in the mind, memory comprises the whole world itself as it becomes “dead.”
(160) This conflation both relied on and extended neurophysiological notions of memory as a trace or inscription, like the grooves of a gramophone record.
(161) Memory as storage also allows von Neumann to describe genes as a form of human memory.

Descriptions that Can

Immortality through undead storage.

(162) Regardless, this substitution of Word with Deed sums up von Neumann's axiomatic approach to automata and his attraction to McCulloch and Pitts's work. It also leads him to conceive of memory as storage: as a full presence that does not fade, even though it can be misplaced. What is intriguing, again, is that this notion of a full presence stems from a bureaucratic metaphor: filing cabinets in the basement. This reconceptualization of human memory bizarrely offers immortality through “dead” storage: information as undead.
(162-163) This controversial axiomatization, which von Neumann would employ later in his theory of self-reproducing automata, reduces all neuronal activities to true/false statements. Neurons follow a propositional logic. . . . Words that describe objects, in other words, can be replaced by mechanisms that act, and all objects and concepts, according to von Neumann, can be placed in this chain of substitution.

Equivalence of describing and producing an object linked to equivalence of access and comprehension, setting up programmed visions, although the question remains who or what transforms descriptions into instructions, and makes humans and automata indistinguishable: knowledge management reflects this unconscious philosophy by focusing on systems making data ready at hand rather than organizing structure of data.

(163) This notion of an actual object is not outside of language, even if it is outside “literary description,” for, to von Neumann, producing an object and describing how to build it were equivalent.
(164) Thus, the memory of the system—here postulated as a more vibrant form of memory than “paper tape”--becomes the means by which the automaton can self-reproduce.
(164) What becomes crucial, in other words, and encapsulates the very being of the machine, are the instructions needed to construct it. Furthermore, and inseparable from the translation of event into instruction, this description—as a set of instructions itself—contains a bizarre, almost mystical, address. For, when von Neumann says, “Now form an instruction I_D, which describes automaton D, and insert I_D into A within D,” or “Combine the automata A and B with each other, and with a control mechanism C,” who will do this forming and combining; who will perform these crucial steps and how? What mystical force will respond to this call? Like Faust before Mephistopheles arrives, are we to incant spells to create spirits? The transformation of description into instruction leaves open the question: who will do this? Who will create the magical description that goes inside? Remarkably, this call makes clear the fact that humans are indistinguishable from automata, something that grounds von Neumann's game theory as well.

Games and Universes

Strategy replaces description to yield automatic production of instructions; program, not short code.

(165) Most importantly, von Neumann and [Oskar] Morgenstern introduce the notion of strategy to replace or simplify detailed description. . . . This replacement of a complete description with a strategy is not analogous to the replacement of machine code with a higher-level programming language, or what von Neumann calls “short code.” . . . This strategy, which game theory remarkably assumes every player possesses before the game, is analogous to a program—a list of instructions to be followed based on various conditions. . . . A strategy/program thus emphasizes the programming/economic agent as freely choosing between choices.
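A strategy in this game-theoretic sense, a complete contingency plan fixed before play begins, behaves exactly like a small program: a list of instructions keyed to conditions. The game and moves below are invented purely for illustration; what matters is the structure, where every possible condition already has a prescribed response and play is reduced to lookup.

```python
# Illustrative only: a "strategy" as a complete set of instructions,
# one response fixed in advance for every condition the player may face.
strategy = {
    "first_move": "cooperate",
    "opponent_cooperated": "cooperate",
    "opponent_defected": "defect",
}

def play(condition):
    """Execute the strategy: no deliberation during play, only lookup."""
    return strategy[condition]
```

Because the plan is exhaustive and mechanical, its holder could, as Bateson's assessment in the next excerpt suggests, just as well be an automaton.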
(166) [Gregory] Bateson is absolutely correct in his assessment: in outlining such a comprehensive version of a strategy, game theory assumes a player who could only be—or later would become—an automaton. . . . A strategy is something an automaton—or more properly a programmer—working non-“interactively” with a computer has.
(166) Von Neumann, in a rather historically dubious move, equates abstract or universal Turing machines with higher-level languages.

Complete and short codes.

(166-167) To make this argument, von Neumann separates codes into two types: complete and short. . . . Importantly, Turing himself did not refer to short or complete codes, but rather to instructions and tables to be mechanically—meaning faithfully—followed. . . . Thus, in a remarkably circular route, von Neumann establishes the possibilities of source code as logos: as something iterable and universal. Word becomes action becomes word becomes the alpha and omega of computation.

Enduring Ephemeral
(167) Crucially, memory is not a static but rather an active process. A memory must be held in order to keep it from moving or fading. Again, memory does not equal storage.
(167) As Frances A. Yates explains [concerning Rhetorica Ad Herennium], the rhetorician treated architecture as a writing substrate onto which images, correlating to objects to be remembered, were inscribed.
(168) Memory as active process is seen quite concretely in early forms of “regenerative memory,” from the mercury delay line to the Williams tube, the primary memory mentioned earlier. . . . One tube could usually store about a thousand binary bits at any given moment.
(169) Today's RAM is mostly volatile and based on flip-flop circuits and transistors and capacitors, which require a steady electrical current. . . . Memory traces, to repeat Derrida's formulation, “produce the space of their inscription only by acceding to the period of their erasure.”
(169-170) This degeneration, which engineers would like to divide into useful versus harmful (erasability versus signal decomposition, information versus noise), belies and buttresses the promise of digital computers as permanent memory machines. . . . That is to say that if memory is to approximate something so long lasting as storage, it can do so only through constant repetition, a repetition that, as Jacques Derrida notes, is indissociable from destruction (or in Bush's terminology, forgetting).
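Chun's point that digital "storage" is in fact constant regeneration — a trace that survives only through repeated rewriting — can be illustrated with a toy model. The sketch below is not from Chun; it loosely imitates delay-line/DRAM-style refresh, and the decay rate, threshold, and refresh interval are invented parameters for illustration only.

```python
# Illustrative sketch (not Chun's): a "regenerative memory" cell whose stored
# bit fades unless it is periodically read and rewritten. Decay rate,
# threshold, and refresh interval are hypothetical values.

class RegenerativeCell:
    """A bit that endures only through repetition: refresh must outpace decay."""

    def __init__(self, bit: int, decay: float = 0.1):
        self.charge = float(bit)   # 1.0 = stored "1", 0.0 = stored "0"
        self.decay = decay         # charge lost per time step

    def tick(self) -> None:
        # Degeneration: the inscription erodes with every passing step.
        self.charge = max(0.0, self.charge - self.decay)

    def refresh(self) -> None:
        # Regeneration: read the still-legible value and rewrite it in full.
        self.charge = 1.0 if self.charge >= 0.5 else 0.0

    def read(self) -> int:
        return 1 if self.charge >= 0.5 else 0


# A stored "1" persists only because refresh is repeated, not because the
# trace is stable: the "enduring ephemeral" in miniature.
cell = RegenerativeCell(1)
for step in range(20):
    cell.tick()
    if step % 4 == 0:          # periodic refresh, as in a DRAM controller
        cell.refresh()
print(cell.read())  # → 1
```

Run without the refresh calls, the same cell decays to 0 within twenty steps — "memory" here is nothing but the repetition itself.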
(170) This enduring ephemeral—a battle of diligence between the passing and the repetitive—also characterizes content today.

Threat of digital dark age in discussion of Internet Wayback Machine.

(170-171) The imperfect archives of the IWM [Internet Wayback Machine] are considered crucial to the ongoing relevance of libraries. . . . The IWM is necessary because the Internet, which is in so many ways about memory, has, as Ernst argues, no memory—at least not without the intervention of something like the IWM. Other media do not have a memory, but they do age and their degeneration is not linked to their regeneration. As well, this crisis is brought about because of this blinding belief in digital media as cultural memory. This belief, paradoxically, threatens to spread this lack of memory everywhere and plunge us negatively into a way-wayback machine: the so-called “digital dark age.”
(172) Repetition and regeneration open the future by creating a nonsimultaneous new that confounds the chronological time these processes also enable. . . . The nonsimultaneousness of the new, this layering of chronologies, means that the gap between illocutionary and perlocutionary in high-speed telecommunications may be dwindling, but—because everything is endlessly repeated—response is demanded over and over again.
(172) Digital media networks are not based on the regular obsolescence or disposability of information, but rather on the resuscitability or the undead of information. . . . Repetition becomes a way to measure scale in an almost inconceivably vast communications network.

Need to focus on enduring ephemerality rather than speed.

(172-173) Rather than getting caught up in speed then, what we must analyze, as we try to grasp a present that is always degenerating, are the ways in which ephemerality is made to endure. . . . The pressing questions are: why and how is it that the ephemeral endures? And what does the constant repetition and regeneration of information effect? What loops and what instabilities does the enduring ephemeral introduce to the logic of programmability?


Conclusion: In Medias Res
(177) Through our clicks, we perhaps always escape, but never leave, embroiled more strongly in an ideology that persists through our changes rather than our knowledge. . . . So, instead of being enlightened and free, we seem to be caught in a certain madness: constantly acting without knowing, moving from crisis to crisis. We seem to be free only within certain constraints, within a “mousetrap.”

Compare in media res conclusion and alternatives to neoconservatism to Berry being a good stream: beginning with things, data-driven programming, status of computer as medium.

(177) Crucially . . . software can only be understood in media res—in the middle of things. In media res is a style of narrative that starts in the middle, as the action unfolds. . . . Software in media res also means that we can only begin with things—things that we grasp and touch without fully grasping, things that unfold in time, things that can only be rendered “sources” or objects (if they can) after the fact. Further, it means addressing the move within programming toward data-driven programming—a form of programming that, because it starts with data and then seeks (through machine learning algorithms) to discover the pattern “driving” behavior, is programming in media res. Last, in media res means taking seriously the computer's peculiar status as medium. . . . Code (both biological and technological), in other words, is “undead” writing, a writing that—even when it repeats itself—is never simply a deadly or living repetition of the same.
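The inversion Chun names here — starting from data and deriving the "program," rather than authoring a rule and applying it to data — can be made concrete with a minimal sketch. This is not Chun's example; the data points and the linear-model assumption are invented for illustration, and ordinary least squares stands in for "machine learning" at its simplest.

```python
# Illustrative sketch (not from Chun): programming "in media res."
# We begin in the middle of things — with observed data — and a fitting
# procedure recovers the pattern "driving" it. The observations were
# secretly generated by y = 3x + 1, but the program never states that rule.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 4.0, 7.0, 10.0, 13.0]

# Ordinary least squares for a line y = a*x + b, via the closed-form
# normal equations: the "rule" (a, b) is derived from data, not authored.
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(a, b)  # → 3.0 1.0: the hidden pattern, discovered after the fact
```

The rule exists only retrospectively — rendered a "source" after the fact, exactly the temporal inversion the passage describes.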

Flor C freedom depends on neighborhood of relations and unfolding action as example alternative to neoconservatism superseding neoliberalism.

(178) Neoliberalism is being superseded by neoconservatism: a drive to compensate for neoliberal isolation and chaos through a return to communal conservative values. Neoconservatism, however, is not the only way in which networks are being reimagined. . . . For Flor C., freedom stems from a collective patience and giving way—a collective flow in which one is immersed and imperiled. This freedom does not offer a feeling of mastery; it neither relies on maps nor sovereign subjects nor strategies, but rather depends on a neighborhood of relations and on unfolding actions.


Epilogue: In Medias Race

Wants to connect software and race, exploring the latter through in media res conclusion regarding the former.

(179) This book initially was inspired by the striking parallels between software in medias res and race—that is, parallels between software and race as key terms in the current frenzy of and decline in visual knowledge. Linked together in the early twentieth century through the notion of a “genetic program,” software and race embody two important ways of conceptualizing a seductively causal relationship between order and vision, the visible and invisible, the imaginable and readable—a causal relationship that contradicted early twentieth-century visions of a dark entropic future.
(179) Like software, race was, and still is, a privileged way of understanding the relationship between the visible and invisible: it links visual cues to unseen forces.
(180) Race and software therefore mark the contours of our current understanding of visual knowledge as “programmed visions.” As human vision is increasingly devalued through technological mediation in the sciences and through ideals of “color-blindness,” images, graphics, and simulations proliferate. While writing this book, however, it became clear that the topic “race as archive” was too big to be included. It has become a project in its own right, but I conclude with software in medias race because it has haunted this book and its vision.


You, Again
(181) The question: how are we to imagine you? By tracing the moments of connection—the ways in which the local unfolds to the global, constituting the “glocal”? Or, by taking these tracings as the beginnings of a more powerful imagining, a more powerful hallucination?



Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. Cambridge, MA: The MIT Press, 2011. Print.