Notes for Lev Manovich Software Takes Command

Key concepts: affordances, cultural software, grey software, media software, metamedium, personal dynamic media, remediation, software studies, stretchtext.


Related theorists: Jay Bolter, Jerome Bruner, Douglas Engelbart, Matthew Fuller, Sigfried Giedion, Adele Goldberg, Richard Grusin, Stuart Hall, Alan Kay, Ted Nelson, Howard Rheingold, Claude Shannon, Alvy Ray Smith, Ivan Sutherland, Noah Wardrip-Fruin, Warren Weaver.

First encounter with text was free, online version.

ACKNOWLEDGMENTS


Introduction
Understanding media
(1-2) Welcome to the world of permanent change—the world that is now defined not by heavy industrial machines that change infrequently, but by software that is always in flux.
(2) Software has become our interface to the world, to others, to our memory and our imagination—a universal language which the world speaks, and a universal engine on which the world runs.
(2) This book is concerned with “media software”--programs such as Word, PowerPoint, Photoshop, Illustrator, After Effects, Final Cut, Firefox, Blogger, WordPress, Google Earth, Maya, and 3ds Max.
(4) More generally, how are interfaces and the tools of media authoring software shaping the contemporary aesthetics and visual languages of different media forms?
(4) What happens to the idea of a “medium” after previously media-specific tools have been simulated and extended in software?

What is media after software?

(4) In short: What is “media” after software?

Does “media” still exist?
(4-5) What is the intellectual history of media software? What was the thinking and motivation of the key people and research groups they were directing--J.C.R. Licklider, Ivan Sutherland, Ted Nelson, Douglas Engelbart, Alan Kay, Nicholas Negroponte—who between 1960 and the late 1970s created most of the concepts and practical techniques that underlie today's media applications?

Situates the work within software studies; the title pays homage to Giedion's Mechanization Takes Command.

(5) Its title pays homage to a seminal twentieth-century book Mechanization Takes Command: a Contribution to Anonymous History (1947) by architectural historian and critic Sigfried Giedion.
(5) My investigation is situated within a broader intellectual paradigm of “software studies.”

Software, or the engine of contemporary societies
(7) And this “cultural software”--cultural in a sense that it is directly used by hundreds of millions of people and that it carries “atoms” of culture—is only the visible part of a much larger software universe.
(9) Even today, ten years later, when people are constantly interacting with and updating dozens of apps on their mobile phones and other computer devices, software as a theoretical category is still invisible to most academics, artists, and cultural professionals interested in IT and its cultural and social effects.

What is software studies?
(10) But computer science is itself part of culture. Therefore, I think that Software Studies has to investigate the role of software in contemporary culture, and the cultural and social forces that are shaping the development of software itself.
(10-11) The publication of this groundbreaking anthology [New Media Reader] laid the framework for the historical study of software as it relates to the history of culture.
(11) In February 2006 Matthew Fuller, who had already published a pioneering book on software as culture (Behind the Blip: Essays on the Culture of Software, 2003), organized the very first Software Studies Workshop at the Piet Zwart Institute in Rotterdam.

Reference to peer-reviewed journal Computational Culture.

(11-12) To help bring this change, in 2008, Matthew Fuller, Noah Wardrip-Fruin and I established the Software Studies book series at MIT Press. . . . In 2011, Fuller together with a number of UK researchers established Computational Culture, an open-access peer-reviewed journal that provides a platform for more publications and discussions.
(12-13) Yet another relevant category of books comprises the historical studies of important labs and research groups central to the development of modern software, other key parts of information technology such as the internet, and professional practices of software engineering such as user testing.

Rheingold was the first to explicitly frame computers as a new medium, not just a new technology.

(13) My all-time favorite book, however, remains Tools for Thought published by Howard Rheingold in 1985, right at the moment when domestication of computers and software starts, eventually leading to their current ubiquity. This book is organized around the key insight that computers and software are not just “technology” but rather the new medium in which we can think and imagine differently.
(13) Beginning around 2000, a number of artists and writers started to develop the practice of software art which included exhibitions, festivals, publishing books, and organizing online repositories of relevant works.
(15) I think of software as a layer that permeates all areas of contemporary societies. Therefore, if we want to understand contemporary techniques of control, communication, representation, simulation, analysis, decision-making, memory, vision, writing, and interaction, our analysis cannot be complete until we consider this software layer.

Insists on new methodologies for software studies, including humanities scholars who program and have technical experience, to round out accounts of modern media and technology.

(15-16) At the same time, the existing work in software studies already demonstrates that if we are to focus on software itself, we need new methodologies. That is, it helps to practice what one writes about. It is not accidental that all the intellectuals who have most systematically written about software's roles in society and culture have either programmed themselves or have been involved in cultural projects and practices which include writing and teaching software—for instance, Ian Bogost, Jay Bolter, Florian Cramer, Wendy Chun, Matthew Fuller, Alexander Galloway, Katherine Hayles, Matthew Kirschenbaum, Geert Lovink, Peter Lunenfeld, Adrian Mackenzie, Paul D. Miller, William J. Mitchell, Nick Montfort, Janet Murray, Katie Salen, Bruce Sterling, Noah Wardrip-Fruin, and Eric Zimmerman. In contrast, the scholars without this technical experience or involvement—for example, Manuel Castells, Bruno Latour, Paul Virilio, and Siegfried Zielinski—have not included discussions of software in their otherwise theoretically precise and highly influential accounts of modern media and technology.
(16) These programming and scripting languages and APIs did not necessarily make programming easier. Rather, they made it much more efficient. . . . Yet another reason for more people writing software today is the emergence of a massive mobile apps marketplace that, unlike the desktop market, is not dominated by a few large companies.

Despite a preference for retaining the affordances of specific media over convergence in an invisible interface, endorses the hope that programming will become easy and lead to long-tail democratization.

(17) Clearly, today the consumer technologies for capturing and editing media are much easier to use than even the most friendly programming and scripting languages. . . . But I do not see any logical reasons why programming cannot one day become equally easy.
(17-18) Although we are far from a true “long tail” for software, software development is gradually getting more democratized. It is, therefore, the right moment to start thinking theoretically about how software is shaping our culture, and how it is shaped by culture in its turn.

Cultural software
(20) This book is determined by my own history of engagement with computers as a programmer, computer animator and designer, media artist, and as a teacher. . . . My first experience with computer graphics was in 1983-4 on Apple IIe.
(21) Thus, although I first learned to program in 1975 when I was in high school in Moscow, my take on software studies has been shaped by watching how during the 1980s GUI-based software quickly put the computer in the center of culture.

Grey software is that which is not directly used by most people, such as logistics and industrial automation software, although it regulates society.

Cultural software enables cultural actions: creating artifacts, accessing and remixing them, creating knowledge online, communicating, engaging in interactive cultural experiences, participating in the online information ecology by expressing preferences and adding metadata, and developing software tools and services.

(21) However, since I do not have personal experience writing logistics software, industrial automation software, and other “grey” software, I will not be writing about such topics. My concern is with a particular subset of software which I used and taught in my professional life. I call it cultural software.
(21-23) These cultural actions enabled by software can be divided into a number of categories.
1 Creating cultural artifacts and interactive services . . .
2 Accessing, appending, sharing, and remixing . . .
3 Creating and sharing information and knowledge online . . .
4 Communicating with other people . . .
5 Engaging in interactive cultural experiences . . .
6 Participating in the online information ecology by expressing preferences and adding metadata . . .
7 Developing software tools and services.

Media applications
(27) While I will focus on media applications for creating and accessing “content” (i.e. media artifacts), cultural software also includes tools and services that are specifically designed for communication and sharing of information and knowledge, i.e. “social software.”
(28-29) The challenge of software studies is to be able to use terms such as “content” and “software application” while always keeping in mind that the current social media/cloud computing paradigms are systematically reconfiguring the meaning of these terms.
(29) The interface category is particularly important for this book. I am interested in how software appears to users—i.e. what functions it offers to create, share, reuse, mix, create, manage, share and communicate content, the interfaces used to present these functions, and assumptions and models about a user, her/his needs, and society encoded in these functions and their interface design.
(31) Should I not put my energy into promoting programming rather than explaining applications?

Focuses on mainstream applications for creating and accessing cultural content over promoting programming, which is an exceptional category.

(31) The reason for my choices is my commitment to understand the mainstream cultural practices rather than to emphasize (as many cultural critics do) the exceptions, no matter how progressive they may be.
(32) at the end of the twentieth century humans have added a fundamentally new dimension to everything that counts as “culture.” This dimension is software in general, and application software for creating and accessing content in particular.

From documents to performances
(33-34) Instead of fixed documents that could be analyzed by examining their structure and content (a typical move of the twentieth-century cultural analysis and theory, from Russian Formalism to Literary Darwinism), we now interact with dynamic “software performances.” . . . In other words, in contrast to paintings, literary works, music scores, films, industrial designs, or buildings, a critic cannot simply consult a single “file” containing all of the work's content.
(34-35) This shift in the nature of what constitutes a media “document” also calls into question well-established cultural theories that depend on this concept. . . . Communication scholars have taken the model of information transmission formulated by Claude Shannon in his 1948 article A Mathematical Theory of Communication and his subsequent book published with Warren Weaver in 1949, and applied its basic model of communication to mass media.
(35) Classical communication theory and media industries considered such partial reception a problem; in contrast, in his influential 1980 article “Encoding/decoding” the founder of British Cultural Studies, Stuart Hall, argued that the same phenomenon is positive. . . . But both the classical communication studies and cultural studies implicitly took for granted that the message was something complete and definite.

An actively managed model of communication replaces both classical theory, in which partial reception was a problem, and Hall's encoding/decoding model.

(35-36) The interfaces of media access applications . . . encourage people to “browse,” quickly moving both horizontally between media (from one search result to the next, from one song to another, etc.) and vertically, through the media artifacts (e.g., from the contents listing of a music album to a particular track). . . . In other words, the “message” that the user “receives” is not just actively “constructed” by him/her (through a cognitive interpretation) but also actively managed (defining what information s/he is receiving and how).
(37) This shift from messages to platforms was in the center of the Web's transformation around 2004-6. The result was named Web 2.0.
(38) Continuously changing and growing content of web services and sites; a variety of mechanisms for navigation and interaction; the abilities to add one's own content and mashup content from various sources together; architectures for collaborative authoring and editing; mechanisms for monitoring the providers—all these mechanisms clearly separate interactive networked software-driven media from twentieth-century media documents. . . . And while “old media” (with the exception of twentieth-century broadcasting) also provided this random access, the interfaces of software-driven media players/viewers provide many additional ways for browsing media and selecting what and how to access.
(39) This media architecture enables easy addition of new navigation and management tools without any change to the documents themselves.

Why the history of cultural software does not exist

No reason to resurrect obsolete versions of most cultural software, in contrast to reissue of early video games.

Suggestion that cultural interest would be catalyzed if early software was widely available.

(41) It does not derive any profits from the old software—and therefore it does nothing to promote its history. . . . in contrast to the video games from the 1980s, these early software versions are not treated as separate products which can be re-issued today. . . . Although I am not necessarily advocating the creation of yet another category of commercial products, if early software was widely available in simulation, it would catalyze cultural interest in software similar to the way in which wide availability of early computer games, recreated for contemporary mobile platforms, fuels the field of video game studies.

With no preservation of obsolete versions of cultural software to study, no conceptual history or investigation of roles played by software in media production; compare to Campbell-Kelly and other software historians.

(41) we lack not only a conceptual history of media editing software but also systematic investigations of the roles of software in media production. For instance, how did the adoption of the popular animation and compositing application After Effects in the 1990s reshape the language of moving images?
(42) In summary, a systematic examination of the connections between the workings of contemporary media software and the new communication languages in design and media (including graphic design, web design, product design, motion graphics, animation, and cinema) has not yet been undertaken.
(42) By focusing on the theory of software for media design, this book aims to complement the work of a few other theorists that have already examined software responsible for game platforms and design (Ian Bogost, Nick Montfort), and electronic literature (Noah Wardrip-Fruin, Matthew Kirschenbaum).
(42) In this respect, the related fields of code studies and platform studies being developed by Mark Marino, Nick Montfort, Ian Bogost and others are playing a very important role.

Summary of the book's narrative
(43) What was the thinking and motivation of people who between 1960 and the late 1970s created the concepts and practical techniques that underlie today's cultural software? How does the shift to software-based production methods in the 1990s change our concepts of “media”? How have interfaces and the tools of content development software reshaped and continued to shape the aesthetics and visual languages we see in contemporary design and media? These are the key questions that I take up in this book.
(43) I will trace a particular path through this history that will take us from 1960 to today and which will pass through some of its most crucial points.

Kay called the computer the first metamedium; foundations established in the 1960s through the 1970s so that by the mid-1990s media hybridization, evolution, and deep remix are the dominant concepts.

(44) Accordingly, [Alan] Kay calls computers the first metamedium whose content is “a wide range of already-existing and not-yet-invented media.”
(44) The foundations necessary for the existence of such metamedium were established between the 1960s and the late 1970s.
(45) I use three different concepts to describe these developments and the new aesthetics of visual media which developed in the second part of the 1990s after the processes of adoption reached sufficient speed. These three concepts are media hybridization, evolution, and deep remix.

New historical stage of softwarization first affecting professional creatives, then the rest of us; would Manovich's progression be orality, literacy, hybridity?

(45-46) Once they were simulated in a computer, previously incompatible techniques of different media begin to be combined in endless new ways, leading to new media hybrids, or, to use a biological metaphor, new “media species.” . . . In my view, this ability to combine previously separate media techniques represents a fundamentally new stage in the history of human media, human semiosis, and human communication, enabled by its “softwarization.”
(46) Deep remixability is central to the aesthetics of motion graphics.
(46-47) The next major wave of computerization of culture has to do with different types of software—social networks, social media, services, and apps for mobile platforms. . . . The 1990s' media revolution impacted professional creatives; the 2000s' media revolution affected the rest of us.
(47) I decided that offering the detailed theoretical analysis of this new wave would be premature. . . . Instead, I am focusing on tracing the fundamental developments which made possible and shaped “digital media” before its social explosion: the ideas about the computer as a machine for media generation and editing of the 1960s-1970s, their implementation in the media applications in the 1980s-1990s, and the transformation of visual media languages which quickly followed.

Sutherland's Sketchpad (1961) was the first computer design system presented publicly.

(47) To be more precise, we can frame this history between 1961 and 1999. In 1961, Ivan Sutherland at MIT designed Sketchpad, which became the first computer design system shown to the public. In 1999, After Effects 4.0 introduced Premiere import, Photoshop 5.5 added vector shapes, and Apple showed the first version of Final Cut Pro—in short, the current paradigm of interoperable media authoring and editing tools capable of creating professional media without special hardware beyond the off-the-shelf computer was finalized. And while professional media tools continued to evolve after this period, the changes so far have been incremental. Similarly, the languages of professional visual media created with this software did not change significantly after their radical transformation in the second part of the 1990s.
(48) I have chosen to focus on the desktop applications for media authoring most widely used today—Photoshop, Illustrator, InDesign, Dreamweaver, After Effects, Final Cut, Maya, 3ds Max, Word, PowerPoint, etc. . . . I will also be making references to popular web browsers, media sharing services, email services and clients, web-based office suites, and consumer geographic information systems. Since I am interested in how users interact with media, another key software category for this book is media players and document viewing applications.
(50) Because I do not expect a typical reader of this book to have a working experience with these expensive systems, I will not be referring to them further in this book.
(50) I love and support open source and free access, and use it for all my work.

Foregrounds the most commonly used applications, which are likely commercial, regardless of personal ideological preference for free, open source options.

(50-51) The reason this book focuses on commercial media authoring and editing software rather than its open source equivalents is simple. In almost all areas of software culture, people use free applications and web services. . . . However, in the case of professional tools for media authoring and editing, commercial software dominates. It is not necessarily better, but it is simply used by many more people. . . . Since I am interested in describing the common user experiences, and the features of media aesthetics common to millions of works created with the most common authoring tools that are all commercial products, these are the products I choose to analyze. And when I analyze tools for media access and collaboration, I similarly choose the most popular products—which in this case includes both free software and services provided by commercial companies (Safari, Google Earth), and free open source software (Firefox).


PART ONE
Inventing media software
CHAPTER ONE
Alan Kay's universal media machine

Appearance versus function
(55) Between its invention in the mid-1940s and the arrival of PCs in the early 1980s, the digital computer was mostly used for military, scientific, and business calculations and data processing. It was not interactive. It was not designed to be used by a single person. In short, it was hardly suited for cultural creation.
(56) In short, it appears that the revolution in the means of production, distribution, and access of media has not been accompanied by a similar revolution in the syntax and semantics of media.
(56) Building on the already accomplished work of the pioneers of cultural computing, the Learning Research Group at Xerox PARC, headed by Kay, systematically articulated the paradigm and the technologies of vernacular media computing, as it exists today.
(57) It is well known that most of the key ingredients of personal computers as they exist today came out of Xerox PARC: the Graphical User Interface with overlapping windows and icons, bitmapped display, color graphics, networking via Ethernet, mouse, laser printer, and WYSIWYG printing. But what is equally important is that Kay and his colleagues also developed a range of applications for media manipulation and creation that also all used a graphical interface. They included a word processor, a file system, a drawing and painting program, an animation program, a music editing program, etc. Both the general user interface and the media manipulation programs were written in the same programming language, Smalltalk.
(57-58) When Apple introduced the first Macintosh computer in 1984, it brought the vision developed at Xerox PARC to consumers (the new computer was priced at USD $2,495). The original Macintosh 128K included a word processing and a drawing application (MacWrite and MacPaint, respectively). Within a few years these were joined by other software for creating and editing different media: Word, PageMaker and VideoWorks (1985), SoundEdit (1986), Freehand and Illustrator (1987), Photoshop (1990), Premiere (1991), After Effects (1993), and so on. In the early 1990s, similar functionality became available on PCs running Microsoft Windows.
(58) By around 1991, the new identity of a computer as a personal media editor was firmly established.

GUI software turned the computer into a remediation machine representing earlier media.

(58-59) By developing easy-to-use GUI-based software to create and edit familiar media types, Kay and others appear to have locked the computer into being a simulation machine for “old media.” Or, to put this in terms of Jay Bolter and Richard Grusin's influential book Remediation: Understanding New Media (2000), we can say that GUI-based software turned a digital computer into a “remediation machine,” a machine that expertly represents a range of earlier media.
(59) There was definitely nothing in the original theoretical formulations of digital computers by Turing or Von Neumann about computers imitating other media such as books, photography, or film.
(60) While media theorists have spent considerable efforts in trying to understand the relationships between digital media and older physical and electronic media in the 1990s and 2000s, the important sources—the writings and projects by Ivan Sutherland, Douglas Engelbart, Ted Nelson, Alan Kay, and other pioneers working in the 1960s and the 1970s—remained largely unexamined.

What is media after software becomes the new question; Kay's personal dynamic media has historically unprecedented affordances.

(60) In short, I want to understand what is “media after software”--that is, what happened to the techniques, languages, and the concepts of twentieth-century media as a result of their computerization.
(61) Kay conceived of “personal dynamic media” as a fundamentally new kind of media with a number of historically unprecedented properties such as the ability to hold all the user's information, simulate all types of media within a single machine, and “involve the learner in a two-way conversation.” These properties enable new relationships between the user and the media s/he may be creating, editing, or viewing on a computer. And this is essential if we want to understand the relationships between computers and earlier media. Briefly put, while visually, computational media may closely mimic other media, these media now function in different ways.
(62) To use a different term, we can say that a digital photograph offers its users many “affordances” that its non-digital predecessor did not.
(62-63) In summary, we can say that only some of the “new DNA” of a digital photograph is due to its particular place of birth, i.e., inside a digital camera. Many others are the result of the current paradigm of network computing in general.
(63-64) While Vannevar Bush, J.C.R. Licklider and Douglas Engelbart were primarily concerned with augmentation of intellectual and in particular scientific work, Kay was equally interested in computers as “a medium of expression through drawing, painting, animating pictures, and composing and generating music.”

“Simulation is the central notion of the Dynabook.”

Dynabook platform a metamedium, challenging prior understanding of media as separate from one another.

(64) In this article [“Personal Dynamic Media”] Kay and Goldberg describe the vision to create “a personal dynamic medium the size of a notebook (the Dynabook) which could be owned by everyone and could have the power to handle virtually all of its owner's information-related needs.”
(65) Rather, the goal was to establish a computer as an umbrella, a platform for all existing expressive artistic media. (At the end of the article Kay and Goldberg give a name for this platform, calling it a “metamedium.”) This paradigm changes our understanding of what media is. From Gotthold Ephraim Lessing's Laocoön; or, On the Limits of Painting and Poetry (1766) to Nelson Goodman's Languages of Art (1968), the modern discourse about media depends on the assumption that different mediums have distinct properties and in fact should be understood in opposition to each other. . . . Some of these new connections were already apparent to Kay and his colleagues; others became visible only decades later when the new logic of media set in place at PARC unfolded more fully; some may still not be visible to us today because they have not been given practical realization. . . . All in all, it is as though different media are actively trying to reach towards each other, exchanging properties and letting each other borrow their unique features.
(70) It was only Kay and his generation that extended the idea of simulation to media—thus turning Universal Turing Machine into a Universal Media Machine, so to speak.

Beyond remediation, creating magical paper: adding new properties and personal programming suggests another site for critical programming.

(70-71) Appropriately, when Kay and his colleagues created computer simulations of existing physical media—i.e., the tools for representing, creating, editing, and viewing these media—they “added” many new properties. . . . As Kay has referred to this in another article, his idea was not to simply imitate paper but rather to create “magical paper.”
(72) Studying the writings and public presentations of the people who invented interactive media computing—Sutherland, Engelbart, Nelson, Negroponte, Kay, and others—makes it clear that they did not produce the new properties of computational media as an afterthought. On the contrary, they knew that they were turning physical media into new media.

View control is an example of a new media property intentionally highlighted in Engelbart's demo, comparable to Nelson's idea of stretchtext.

(72-73) Paying attention to the sequence of the demo reveals that while Engelbart had to make sure that his audience would be able to relate the new computer system to what they already knew and used, his focus was on new features of simulated media never available previously. . . . As Engelbart points out, the new writing media could switch at the user's wish between many different views of the same information.
(75) (In 1967 Ted Nelson articulated and named a similar idea of a type of hypertext, which would allow a reader to “obtain a greater detail on a specific subject.” He named it “stretchtext.”)
(75) Since new media theory and criticism emerged in the early 1990s, endless texts have been written about interactivity, hypertext, virtual reality, cyberspace, cyberculture, cyborgs, and so on. But I have never seen anybody discuss “view control.” And yet this is one of the most fundamental and radical new techniques for working with information and media available to us today.
(78) While such historical precedents for hypertext are often proposed, they mistakenly equate Nelson's proposal with a very limited form in which hypertext is experienced by most people today—i.e., the World Wide Web.
(79) “What kind of structures are possible in hypertext?” asks Nelson in a research note from 1967. He answers his own question in a short but very suggestive manner: “Any.”
(80-81) The announcement for his [Nelson's] January 5, 1965 lecture at Vassar College talks about this in terms that are even more relevant today than they were then: “The philosophical consequences of all this are very grave. Our concepts of 'reading', 'writing', and 'book' fall apart, and we are challenged to design 'hyperfiles' and write 'hypertext' that may have more teaching power than anything that could ever be printed on paper.”
(82) Although Nelson says that hypertext can support any information structure and that this information does not need to be limited to text, his examples and his style of writing show an unmistakable aesthetic sensibility—that of literary modernism.
(82) The early twentieth-century avant-garde artists were primarily interested in questioning conventions of established media such as photography, print, graphic design, cinema, and architecture. . . . In contrast, Nelson and Kay explicitly write about creating new media, not only changing the existing ones.
(83-84) Instead of a particular modernist “ism,” we get a file structure. . . . Instead, the new system would be capable of simulating all these media with all their remediation strategies—as well as supporting development of what Kay and Goldberg referred to as new “not-yet-invented media.” And of course, this was not all. Equally important was the role of interactivity. The new meta-systems proposed by Nelson, Kay and others were to be used interactively to support the processes of thinking, discovery, decision making, and creative expression. . . . Finally, at least in Kay's and Nelson's vision, the task of defining new information structures and media manipulation techniques—and, in fact, new media as a whole—was given to the user, rather than being the sole province of the designers. . . . Since the end of 2000, extending the computer metamedium by writing new software, plugins, programming libraries and other tools became the new cutting-edge type of cultural activity – giving a new meaning to McLuhan's famous formula “the medium is the message.”
(84) In either case, the need for new research is justified by a reference to already established or popular practices—academic paradigms which have been funded, large-scale industries, and mainstream social routines which do not threaten or question the existing social order.
(84) The invention of new mediums for its own sake is not something which anybody is likely to pursue, or get funded. From this perspective, the software industry and business in general is often more innovative than academic computer science.
(85-86) The newness lies not in the content but in the software tools used to create, edit, view, distribute and share this content. Therefore, rather than only looking at the “output” of software-based cultural practices, we need to consider software itself—since it allows people to work with media in a number of historically unprecedented ways.
(88) Rather than conceiving of Sketchpad as simply another medium, Sutherland presents it as something else—a communication system between two entities: a human and an intelligent machine.

Example of digital frame buffer as new creative medium.

(90-91) But even if we forget about SuperPaint's revolutionary ability to combine graphics and video, and discount its new tools such as resizing, moving, copying, etc., we are still dealing with a new creative medium ([Alvy Ray] Smith's term). As Smith pointed out, this medium is the digital frame buffer, a special kind of computer memory designed to hold images represented as an array of pixels (today a more common name is graphics card).
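To make the frame buffer idea concrete, here is a minimal illustrative sketch of my own (written in Processing, not an example from Manovich or Smith): the display is treated as an addressable array of pixels that a program writes into directly.

    // Illustrative sketch only: the canvas as a frame buffer,
    // i.e. an array of pixels addressed by x and y coordinates.
    void setup() {
      size(320, 240);
      loadPixels();                          // expose the display buffer as pixels[]
      for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
          // write a color value directly into the pixel array
          pixels[y * width + x] = color(x % 256, y % 256, 128);
        }
      }
      updatePixels();                        // push the modified buffer back to the screen
    }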

The permanent extendibility
(92) In short, “new media” is “new” because new properties (i.e., new software techniques) can always be easily added to it.
(92) What used to be separate moments of experimentation with media during the industrial era became the norm in a software society.
(93) But this process of continual invention of new algorithms does not just move in any direction. . . . As new techniques continue to be invented they are layered over the foundations that were gradually put in place by Sutherland, Engelbart, Kay, and others in the 1960s and 1970s.

Malleability of software compared to other industrially produced objects.

(93) New programs can be written and existing programs can be extended and modified (if the source code is available) by anybody who has programming skills and access to a computer, a programming language and a compiler. In other words, today software is fundamentally malleable in a way that twentieth-century industrially produced objects were not.

Kay, like Kemeny, does philosophy of programming.

(94) This democratization of software development was at the core of Kay's vision. Kay was particularly concerned with how to structure programming tools in such a way that would make development of media software possible for ordinary users.
(94) This means that the idea that a new medium gradually finds its own language cannot apply to computer media. If this were true it would go against the very definition of a modern digital computer. This theoretical argument is supported by practice. The history of computer media so far has not been about arriving at some standardized language—as, for instance, happened with cinema—but rather about the gradual expansion of uses, techniques, and possibilities.
(95) To rephrase this example in more general terms, we can say that rather than moving from an imitation of older media to finding its own language, computational media was from the very beginning speaking a new language.
(96) The inventors of computational media had to question many, if not most, already established concepts and techniques of how both software and hardware function, thus making important contributions to hardware and software engineering. A good example is Kay's development of Smalltalk, which for the first time systematically established a paradigm of object-oriented programming.
(96-97) Looking at the history of computer media and examining the thinking of its inventors makes it clear that we are dealing with the opposite of technological determinism. . . . Similar to Marx's analysis of capitalism in his works, here the analysis is used to create a plan for action for building a new world—in this case, enabling people to create new media.

Example of interactive interface as non-deterministic development not latent in theoretical computing concepts of Von Neumann architecture.

(97) But the most important example of such non-deterministic development is the invention of the modern interactive graphical human-computer interface itself by Sutherland, Engelbart, Kay and others. None of the key theoretical concepts of modern computing as developed by Turing and Von Neumann called for an interactive interface.

Media must be thought beyond symbols, even beyond the Platonic ideal of living writing, which implies an invisible interface akin to direct manipulation, especially for learning; Manovich is on the right track in correcting the ideology of direct manipulation, in which the medium disappears, with a position that leverages the material-specific affordances of media.

Kay goes beyond Kemeny and others, who focus on utilitarian uses interpellating adults of all ages, by including experimentation and artistic expression; he appeals to enactive, iconic, and symbolic mentalities via the mouse, icons and windows, and Smalltalk.

Kay's deliberate design was guided by thinking of computers as a medium for learning, experimentation, and artistic expression for children of all ages; the user interface should appeal to enactive, iconic, and symbolic mentalities as articulated by Bruner and Piaget.

If learning has enactive, iconic, and symbolic components, then removing the need to program from the interface unintentionally weakened human intelligence.

(97-98) According to Kay, the key step for him and his group was to start thinking about computers as a medium for learning, experimentation, and artistic expression which can be used not just by adults but also by “children of all ages.” Kay was strongly influenced by the theory of cognitive psychologist Jerome Bruner. . . . Bruner gave slightly different names to these different mentalities [of Piaget]: enactive, iconic, and symbolic. While each mentality has developed at different stages of human evolution, they continue to co-exist in an adult.
(98) Kay's interpretation of this theory was that a user interface should appeal to all these three mentalities. In contrast to a command-line interface, which is not accessible for children and forces the adult to use only symbolic mentality, the new interface should also make use of enactive and iconic mentalities.
(98-99) The mouse activates the enactive mentality (know where you are, manipulate); icons and windows activate the iconic mentality (recognize, compare, configure); finally, the Smalltalk programming language allows for the use of the symbolic mentality (tie together long chains of reasoning, abstract).
(99) In actual use, a contemporary GUI involves constant interplay between different mentalities.

PARC GUI designed as medium to facilitate learning, discovery and creativity.

(100) If we are to agree with Bruner's theory of multiple mentalities and Kay's interpretation of this theory, we should conclude that the new computational media that he helped to invent can do something no previous media can—activate our multiple mentalities which all play a role in learning and creativity, allowing a user to employ whatever works best at any given moment and to rapidly switch between them as necessary. . . . In short, while many HCI experts and designers continue to believe that the ideal human-computer interface should be invisible and get out of the way to let users do their work, looking at the theories of Kay and Goldberg that were behind GUI design gives a very different way of understanding an interface's identity. Kay and his colleagues at PARC have conceived GUI as a medium designed in its every detail to facilitate learning, discovery, and creativity.

Commercially successful GUI designed as intuitive mimicry of physical world workspace.

(101) Unfortunately, when GUI became the commercially successful paradigm following the success of Apple's Mac computers, introduced in 1984, the intellectual origins of GUI were forgotten. Instead, GUI was justified using a simplistic idea that since computers are unfamiliar to people, we should help them by making the interface intuitive by making it mimic something users are already well familiar with—the physical world outside of a computer (which in reality was an office environment with folders, desks, printers, etc.) Surprisingly, even in recent years—when “born digital” generations were already using computer devices even before they ever set foot in an office—this idea was still used to explain GUI.

The computer as a metamedium
(102-103) In other words, a computer can be used to create new tools for working with the media types it already provides as well as to develop new not-yet-invented media.

Literacy implies reading and writing abilities; media editing applications provided with computers should inspire users to write their own programs.

Development of Smalltalk and applications provided with computer to encourage user modification and development to test hypotheses: a prototype of critical programming.

(103) Using the analogy with print literacy, Kay motivates this property in this way: “The ability to 'read' a medium means you can access material and tools generated by others. The ability to write in a medium means you can generate materials and tools for others. You must have both to be literate.” Accordingly, Kay's key effort at PARC was the development of the Smalltalk programming language. . . . In other words, all media editing applications that would be provided with a computer, were to serve also as examples, inspiring users to modify them and to write their own applications.
(104) Accordingly, the large part of Kay and Goldberg's paper is devoted to description of software developed by the users of their system . . . professionals, high school students, and children—in order to show that everybody could develop new tools using the Smalltalk programming environment.
(104) Just as a scientist may use simulation to test different conditions and play different what/if scenarios, a designer, a writer, a musician, a filmmaker, or an architect working with computer media can quickly “test” different creative directions in which the project can be developed as well as see how modifications of various “parameters” affect the project.

Is it only a historical accident that the Macintosh did not ship with a user development environment, corrupting Kay's vision?

Processing is a model language for everyday users developing their own media tools.

(105) Unfortunately, when in 1984 Apple shipped Macintosh, which was to become the first commercially successful personal computer modeled after the PARC system, it did not have an easy-to-use programming environment. . . . Only more recently, as the general computer literacy has widened and many new high-level programming languages have become available—Perl, PHP, Python, JavaScript, etc.--have more people started to create their own tools by writing software. A good example of a contemporary programming environment, very popular among artists and designers and which, in my view, is close to Kay's vision, is Processing.
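As a small illustration of how close Processing comes to Kay's vision of users building their own media tools (my own sketch, not an example from the book), a few lines already produce a working paint program that a user can read, modify, and extend:

    // Illustrative Processing sketch: a tiny paint tool the user can extend.
    void setup() {
      size(640, 480);
      background(255);                 // start with a blank white "page"
    }

    void draw() {
      if (mousePressed) {
        stroke(0);
        strokeWeight(4);
        // connect the previous mouse position to the current one
        line(pmouseX, pmouseY, mouseX, mouseY);
      }
    }

    void keyPressed() {
      if (key == ' ') background(255); // spacebar clears the page
    }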


CHAPTER TWO
Understanding metamedia
The building blocks

(107) During the years I was writing and editing the book, many important developments made Alan Kay's vision of a computer as the “first metamedium” more real—and at the same time more distant.



Manovich, Lev. Software Takes Command. New York: Bloomsbury, 2013. Print.