Notes for Andy Clark Supersizing the Mind: Embodiment, Action, and Cognitive Extension

Key concepts: body, cognitive extension, embodiment, hazard function, mashup, subjectivity.

For Chalmers, the limited bandwidth between the brain and elements of the environment allows humans to maintain an internal conscious core; at the same time, accepting the weak functionalism of the Parity Principle supports mentality existing in the cyborg network. Body as adaptively potent mashup. The model of subjectivity shifts from a monadic pure agent to a manager actively dovetailing multiply layered systemic interactions, swirling dynamic structures embodying implicit metacognitive commitments exercised in complex skill hierarchies. The key transition for arguing that texts and technology influence subjectivity is whether cognitive extension is as meaningful a concept as empirically documented sensory extension. Compare Clark's conceptualization of embodiment to Hayles's eras of cybernetics to explore how computing paradigms interact with cognitive paradigms, from the Cartesian subject to the postmodern dividual. The hazard function, linked to the engineering intensity function and applied to the exercise of learning how particular computers work alongside other activities that increase mental capacity, offers a path into cyberculture, making it relevant to culture studies where it intersects digital culture and also, which is the goal of my argument, philosophy. The distinction between vehicles and contents that Clark invokes reveals biochauvinistic prejudice, amenable to Bogost's alien phenomenology and a philosophy of computing.

Related theorists: Chalmers, Deleuze, Hayles.

Foreword
David Chalmers
(ix) Clark is a connoisseur of the myriad ways in which the mind relies on the world to get its work done.
(x) This is the thesis of the extended mind: when parts of the environment are coupled to the brain in the right way, they become parts of the mind.
(xi) Still, I think that there is one potentially principled place where the opponent of the extended mind can resist. This is an appeal to the dual boundaries of perception and action.
(xii) I take the moral here to be that the classification of states can depend on our explanatory purposes.
(xiv) The deeper point is that extended states can function in explanation in very much the same way that beliefs function, and they should be regarded as sharing a deep and important explanatory kind with beliefs. This explanatory unification is the real underlying point of the extended mind thesis.
(xiv) But then, what about the big question: extended consciousness?
(xiv-xv) Still, I think it is unlikely that any everyday process akin to Otto's interaction with his notebook will yield extended consciousness, at least in our world. . . . Perhaps part of the reason is that the physical basis of consciousness requires direct access to information on an extremely high bandwidth. . . . But our low-bandwidth conscious connection to the environment seems to have the wrong form as it stands.
(xv) I tentatively conclude that the extension of the mind is compatible with retaining an internal conscious core.

Chalmers: the limited bandwidth between the brain and elements of the environment allows humans to maintain an internal conscious core; at the same time, accepting the weak functionalism of the Parity Principle supports mentality existing in the cyborg network.

(xv) All one needs is the very weak functionalism captured in the Parity Principle: roughly, if a state plays the same causal role in the cognitive network as a mental state, then there is a presumption of mentality, one that can only be defeated by displaying a relevant difference between the two (and not merely the brute difference between inner and outer). Combined with the observation that there are no relevant differences in the relevant cases—an observation that does not require functionalism for its support—the thesis follows.
(xvi) In case after case, in domain after domain, Andy Clark brings out the many ways in which the extended view of the mind can productively reconfigure our thinking about the relationship between mind and world.


Acknowledgments
(xvii-xviii) Shaun Gallagher organized a truly rewarding interdisciplinary conference called Cognition: Embodied, Embedded, Enactive, Extended, held in October 2007 at the University of Central Florida. . . . As ever, my greatest intellectual debt is to Daniel Dennett, whose work and views are without a doubt the major, if sometimes subterranean, influence on all that I write.


Introduction: BRAINBOUND Versus EXTENDED
(xxv-xxvi) The loop through pen and paper is part of the physical machinery responsible for the shape of the flow of thoughts and ideas that we take, nonetheless, to be distinctively those of Richard Feynman. . . . Such considerations of parity, once we put our bioprejudices aside, reveal the outward loop as a functional part of an extended cognitive machine. . . . Such cycles supersize the mind.
(xxvii) But the notion of “meshing” that [Esther] Thelen deploys should give us pause, suggesting as it does a kind of ongoing intermingling of cognitive activity with the perceptuomotor matrix from which it putatively emerges.
(xxvii) Rather, what is at issue is something to do with the separability of mind, body, and world, at least for the purposes of understanding mind as the “locus of intelligence.” What Haugeland is selling is a radical package deal aimed at undermining a simple, but arguably distortive, model of mind. This is the model of mind as essentially inner and, in our case, always and everywhere neurally realized.

Initial questions concerning the brainbound model: is internal, neurally realized model of mind indicative of the modernist perspective; how does instrumentality fit; veil of transduction perspective on perception.

(xxvii) According to BRAINBOUND, the (nonneural) body is just the sensor and effector system of the brain, and the rest of the world is just the arena in which adaptive problems get posed and in which the brain-body system must sense and act. If BRAINBOUND is correct, then all human cognition depends directly on neural activity alone.
(xxviii) Maximally opposed to BRAINBOUND is a view according to which thinking and cognizing may (at times) depend directly and noninstrumentally upon the ongoing work of the body and/or the extraorganismic environment. Call this model EXTENDED. . . . Cognition leaks out into body and world.

Entry for texts and technology studies admitting that our writing machines influence our subjectivity.

(xxviii) This matters because it drives home the degree to which environmental engineering is also self-engineering. In building our physical and social worlds, we build (or rather, we massively reconfigure) our minds and our capacities of thought and reason.


I
FROM EMBODIMENT TO COGNITIVE EXTENSION
1
The Active Body
1.1 A Walk on the Wild Side
(3-4) Whereas robots like Asimo walk by means of very precise, and energy-intensive, joint-angle control systems, biological walking agents make maximal use of the mass properties and biomechanical couplings present in the overall musculoskeletal system and walking apparatus itself.
(7) To capture such effects, Pfeifer and Bongard (2007) invoke the Principle of Ecological Balance.
(7-8) Nontrivial causal spread occurs whenever something we might have expected to be achieved by a certain well-demarcated system turns out to involve the exploitation of more far-flung factors and forces. . . . Robotics thus rediscovers many ideas explicit in the continuing tradition of J. J. Gibson and of “ecological psychology.”

1.2 Inhabited Interaction
(9) Nonetheless, the successful exploitation of passive-dynamic effects may well be a major contributing element to what Dourish (2001) nicely calls “inhabited interaction,” a way of being in the world that is contrasted with “disconnected control.”

Fit between morphology and control at the core of multipurposive synergetic systems.

(10-11) But one way in which evolved agents truly inhabit, rather than simply control, their bodies may be usefully understood in terms of a profound fit between morphology and control.

1.3 Active Sensing
(13) According to PEA [the Principle of Ecological Assembly promoted by Ballard et al.], the canny cognizer tends to recruit, on the spot, whatever mix of problem-solving resources will yield an acceptable result with a minimum of effort.
(13) It is important that, according to the PEA, the recruitment process marks no special distinction among neural, bodily, and environmental resources except insofar as these somehow affect the total effort involved.

1.4 Distributed Functional Decomposition
(13-14) Distributed functional decomposition is a way of understanding the capacities of supersized mechanisms (ones created by the interactions of biological brains with bodies and aspects of the local environment) in terms of the flow and transformation of energy, information, control, and when applicable, representation.

Vision is computational active sensing via deictic coding.

(14) As a result, a Ballard-style approach is able [quoting Wilson] to combine the concept that looking is a form of doing with the claim that vision is computation, introducing the idea that eye movements constitute a form of deictic coding.

One-third second temporal order of magnitude for embodied subjectivity as boundary of intellectual awareness according to Ballard.

(14) Ballard et al. (1997) suggest using the term “the embodiment level” to indicate the level at which functionally critical operations occur at timescales of around one-third second.

1.5 Sensing for Coupling
(15) Sensing here acts as a constantly available channel that productively couples agent and environment rather than as a kind of “veil of transduction” whereby world-originating signals must be converted into a persisting inner model of the external scene.
(16) Instead of using sensing to get enough information inside, past the visual bottleneck, so as to allow the reasoning system to “throw away the world” and solve the problem wholly internally, they use the sensor as an open conduit allowing environmental magnitudes to exert a constant influence on behavior. . . . What is created is thus a kind of new, task-specific agent-world circuit.

Cartesian, modernist subjectivity separating perception and cognition as manipulating internal models being supplanted by open conduit likely tied to inscription technologies of static, external marks.

(17) The embodied agent is empowered to use active sensing and perceptual coupling in ways that simplify neural problem solving by making the most of environmental opportunities and information freely available in the optic array.

1.6 Information Self-structuring

Importance of time-locked multimodal sensory stimulation in category learning and concept formation further details the model of subjectivity.

(17) In human infants, grasping, poking, pulling, sucking, and shoving create a rich flow of time-locked multimodal sensory stimulation. Such multimodal input streams have been shown to aid category learning and concept formation.
(19) Such work depicts intelligent response as grounded in processes of information extraction, transformation, and use, while recognizing the key roles, in those very processes, played by timing, action, and coupled unfolding.
(20) In the presence of this critical active structuring, the net can learn image-sound associations using “raw” visual and auditory data (an unsegmented sound stream and an un-preprocessed video stream) and without the benefit of any inbuilt “language model.”
(21) It matters because the presence of an active, self-controlled, sensing body allows an agent to create or elicit appropriate inputs, generating good data (for oneself and for others) by actively conjuring flows of multimodal, correlated, time-locked stimulation.

1.7 Perceptual Experience and Sensorimotor Dependencies
(22) The central claim is thus that differences in what we perceptually experience correspond to differences in sensorimotor signatures (patterns of association between movements and the sensory effects of movement).
(23) One upshot of all this, or so it is claimed, is that “what determines phenomenology is not neural activity set up by stimulation as such, but the way the neural activity is embedded in a sensorimotor dynamic” (Noe 2004, 227).

Noe's suggestion that phenomenology is determined by time-locked multimodal sensory stimulation, in the sense of structure, affordances, limits: connect to Bogost's unit operations and alien phenomenology, and to real-time computing, to be enriched by my opposite-of-deadlines concept.

(23) attention to the possibility that the substrate (the “vehicles”) of specific perceptual experiences may involve whole cycles of world-engaging activity.

1.8 Time and Mind

A polarization of colliding philosophies at the heart of the study of subjectivity and artificial intelligence: criticism of DST from computer science and process control engineering, and the traffic and routing problems of multiprocessing and distributed systems that were not imagined by early computing theorists grounded in Turing machines and single-processor von Neumann architectures.

(23) This polarization (among dynamical and computational and information-theoretic approaches) is, I think, one of the less happy fruits of recent attempts to put brain, body, and world together again.
(24) DST [Dynamical Systems Theory] is a powerful framework for describing and understanding the temporal evolution of complex systems.
(26) Total state explanations do not fare well as a means of understanding systems in which complex information flow plays a key role. . . . The real power of the device consists in its ability to rapidly and cheaply reconfigure the way these components interact. Information-based control systems thus tend to exhibit a kind of complex articulation in which what matters most is the extent to which component processes may be rapidly decoupled and reorganized. This kind of articulation has been depicted as a pervasive and powerful feature of real neural processing. The fundamental idea is that large amounts of neural machinery are devoted not to the direct control of action but to the trafficking and routing of information within the brain.

1.9 Dynamics and “Soft” Computation

Clark concludes that the outcome of dynamical, soft computationalism (continuous reciprocal causation) is a hybrid that I see as more informed by more recent models of computing than by earlier cybernetics, tying in Hayles's intermediation as well, perhaps a modernish breed of the Latour sort.

(27) Instead, what we seem to end up with is a very powerful and interesting hybrid: a kind of “dynamical computationalism” in which the details of the flow of information are every bit as important as the larger scale dynamics and in which some dynamical features lead a double life as elements in an information-processing economy. . . . Such work aims to display the specific contributions that embodiment and environmental embedding make by identifying what might be termed the dynamic functional role of specific bodily and worldly operations in the real-time performance of some task.
(27-28) From this [assumption of fully internal loops], it does not follow that we could not assign representational and (more broadly) information-processing roles either to the elements or to their coupled unfolding. It might be, for example, that the two elements are still best understood as trading in different kinds of encoding or information, kinds that nonetheless mutually and continuously modify each other in some useful manner. . . . In such cases, we need to understand both the distinctive individual contributions of the various coupled elements and the powerful effects that flow from their coupled unfolding.
(28) This will be so where the nature of the contributions being made by the “parts” is itself changing radically over time as a result of the multiple influences from elsewhere in the system. At the extreme limit, such variability may undermine attempts to gloss stable types of systemic events as the bearers or vehicles of specific contents.

The extreme limit of CRC may also confound Bogost's unit operations perspective: the point is not to reject either computational or dynamical approaches outright, but at the same time to recognize the potentially profound influence of each in particular situations; also a clear entry point for acknowledging the role of dynamical tools (including texts and technologies) in extended cognition.

(28) Short of this extreme limit, however, considerations concerning the importance of time and continuous reciprocal causation mandate not an outright rejection of the computational/representational vision but rather the addition of a potent and irreducibly dynamical dimension. Such a dimension may manifest itself in several ways, including the use of dynamical tools to recover potential information-bearing states and processes from highly complex (and sometimes bodily and environmentally extended) webs of causal exchange.

1.10 Out from the Bedrock
(29) The next three chapters ramp up the complexity, exploring first the surprising lability and negotiability of human sensing and embodiment, then the transformative potential of material artifacts, language, and symbolic culture, and leading finally to the suggestion that mind itself leaches into the body and world.


2
The Negotiable Body
2.1 Fear and Loathing
(30-31) I believe that human minds and bodies are essentially open to episodes of deep and transformative restructuring in which new equipment (both physical and “mental”) can become quite literally incorporated into the thinking and acting systems that we identify as our minds and bodies.

2.2 What's in an Interface?
(32-33) We discern interfaces at the points at which one machine can be easily disengaged and another engaged instead, allowing the first to join another grid or to operate in stand-alone fashion.

2.3 New Systemic Wholes

Compelling examples of new systemic wholes in monkey research with robot arm, Tactile-Visual Substitution System experiments with humans, and tactile flight suit.

(34) Creatures capable of this kind of deep incorporation of new bodily (and as we'll later see, also sensory and cognitive) structure are examples of what I shall call “profoundly embodied agents.” Such agents are able constantly to negotiate the agent-world boundary itself.

2.4 Substitutes
(35) As a second class of examples of recalibration and renegotiation, consider the plasticity revealed by work in sensory substitutions.
(36-37) Even without penetrating the existing surface of skin and skull, sensory enhancement and bodily extension are pervasive possibilities. One striking example is a U. S. Navy innovation known as a tactile flight suit. . . . What matters, in each case, is the provision of closed-loop signaling so that motor commands affect sensory input.

2.5 Incorporation Versus Use

Gallagher body image/body schema distinction.

(38-39) The plastic neural changes reported by Carmena et al., and now further emphasized by Maravita and Iriki and by Berti and Frassinetti, suggest a real (philosophically important and scientifically well-grounded) distinction between true incorporation into the body schema and mere use. . . . As I shall use the terms (see Gallagher 1998), the body image is a conscious construct able to inform thought and reasoning about the body. The body schema, by contrast, names a suite of neural settings that implicitly (and nonconsciously) define a body in terms of its capabilities for action, for example, by defining the extent of “near space” for action programs.

2.6 Toward Cognitive Extension
(39) Could human minds be genuinely extended and augmented by cultural and technological tweaks, or is it (as many evolutionary psychologists, such as Pinker 1997, would have us believe) just the same old mind with a shiny new tool?

Key transition for arguing that texts and technology influence subjectivity: is cognitive extension as meaningful a concept as empirically documented sensory extension?

(40) But whereas we can now begin to point, in the case of basic tool use, to the distinctive kinds of visible neural changes that accompany the genuine assimilation of tools or of new bodily structure, it is harder to know just what to look for in the case of mental and cognitive routines.
(40) To insist that such change requires the literal intelligibility of the operations of the new by the old, rather than simply the emergence of new wholes that are then themselves the determiners of what is and is not intelligible to the agent. It must thus be possible, at least in principle, for new nonbiological tools and structures to likewise become sufficiently well integrated into our problem-solving activity as to yield new agent-constituting wholes.

Change blindness experiments strengthen the idea that embodiment in the environment sustains reliance on the visual field in place of constant inspection; reliance on ready-at-handedness extended to wearable computing and ubiquitous information access.

(41) A quick (though frequently misused; see the critical discussion in sec. 7.3) illustration is provided by recent work on so-called change blindness. . . . That larger organization “assumes” the (ecologically normal) ability to retrieve, via saccades or head and body movements, more detailed information as and when needed.
(41) As we move toward an era of wearable computing and ubiquitous information access, the robust, reliable information fields to which our brains delicately adapt their inner cognitive routines will surely become increasingly dense and powerful, perhaps further blurring the boundaries between the cognitive agent and his or her best tools, props and artifacts.

2.7 Three Grades of Embodiment

Compare the conceptualization of grades of embodiment (mere, basic, profound) to Hayles's eras of cybernetics to explore how computing paradigms interact with cognitive paradigms, from the Cartesian subject to the postmodern dividual.

(42) We can now distinguish three grades of embodiment. Let's call them (simply if unimaginatively) mere embodiment, basic embodiment, and profound embodiment. . . . A profoundly embodied creature or robot is thus one that is highly engineered to be able to learn to make maximal problem-simplifying use of an open-ended variety of internal, bodily, or external sources of order.
(43) Fortunately for us, human minds are not old-fashioned CPUs trapped in immutable and increasingly feeble corporeal shells. Instead, they are the surprisingly plastic minds of profoundly embodied agents: agents whose boundaries and components are forever negotiable and for whom body, sensing, thinking, and reasoning are all woven flexibly and repeatedly from the accommodating weave of situated, intentional action.


3
Material Symbols
3.1 Language as Scaffolding

Compare this updated version of language as scaffolding to Ong on formation of modern subject through orality, literacy, and print.

(44) In this chapter, I examine three distinct but interlocking benefits of the linguistic scaffold. First, the simple act of labeling the world opens up a variety of new computational opportunities and supports the discovery of increasingly abstract patterns in nature. Second, encountering or recalling structured sentences supports the development of otherwise unattainable kinds of expertise. And third, linguistic structures contribute to some of the most important yet conceptually complex of all human capacities: our ability to reflect on our own thoughts and characters and our limited but genuine capacity to control and guide the shape and contents of our own thinking.

3.2 Augmenting Reality
(45) The material symbol here acts as a manipulable and, in some sense, a merely “shallowly interpreted” (Clowes 2007) stand-in, able to loosen the bonds between perception and action. Importantly, the presence of the material symbol impacts behavior not in virtue of being the key to a rich inner mental representation (though it may be this also) but rather by itself, qua material symbol, providing a new target for selective attention and a new fulcrum for the control of action. . . . But it is their ability to provide simple, affect-reduced, perceptual targets that (I want to suggest) explains much of their cognitive potency.
(46) Experience with tags and labels may be a cheap way of achieving a similar result. Spatial organization reduces descriptive complexity by means of physical groupings that channel perception and action toward functional or appearance-based equivalence classes.

3.3 Sculpting Attention

Carruthers's linguaform templates almost sound like a link to object-oriented philosophy.

(49) The linguaform templates of encoded sentences provide, according to Carruthers, special representational vehicles that allow information from otherwise encapsulated resources to interact.

3.4 Hybrid Thoughts?
(50-51) When we add the use of number words to the more basic biological nexus, Dehaene argues, we acquire an evolutionarily novel capacity to think about an unlimited set of exact quantities. We gain this capacity not because we now have a mental encoding of 98-ness just like our encoding of 2-ness. Rather, the new thoughts depend directly, but not exhaustively, on our tokening the numerical expressions themselves as symbol strings of our own public language. The actual numerical thought, on this model, occurs courtesy of the combination of this tokening (of the symbol string of a given language) and the appropriate activation of the more biologically basic resources mentioned earlier.
(52) What matters for present purposes is that there may be no need to posit (for the average agent), in addition to this coordinated medley, any further content-matching internal representation of, say, 98-ness. Instead, the presence of actual number words in a public code (and of internal representations of those very public items) is itself part of the coordinated representational medley that constitutes many kinds of arithmetical knowing.

3.5 From Translation to Coordination
(53) According to this conception, language works its magic not (or not solely) by means of translation into appropriate expressions of neuralese or the language of thought but also by something more like coordination dynamics. Encounters with words and with structured linguistic encodings act to anchor and discipline intrinsically fluid and context-sensitive modes of thought and reason.

Words are cues to meaning; hypomnesis is good after all.

(54) “Words,” Elman goes on to argue, “do not have meaning, they are cues to meaning” (306). Words and sentences act as artificial input signals, often (as in self-directed speech) entirely self-generated, that nudge fluid natural systems of encoding and representation along reliable and useful trajectories.
(55) By contrast, I am inclined to see the potential for representational hybridity as massively important to understanding the nature and power of much distinctively human cognition.
(55) First, Fodor has the LOT (Language of Thought) already in place, so the basic biological engine, on his account, comes factory primed with innovations favoring structure, generality, and compositionality.

Problem with concepts for Fodor is their entailing primordial, biological Platonic forms.

(55-56) Second, much of Fodor's insistence upon a deflationary reading of the hybrid option flows directly from his (in)famous views concerning concept learning. For given those views, the meaning of hybrid representational forms cannot be learned unless the learner already had the resources to represent that very meaning using more biologically basic (indeed, innate) resources.

3.6 Second-order Cognitive Dynamics
(58) All this “thinking about thinking” is a good candidate for a distinctively human capacity and one that may depend on language for its very existence. For as soon as we formulate a thought in words or on paper, it becomes an object for both ourselves and for others. As an object, it is the kind of thing we can have thoughts about. In creating the object, we need have no thoughts about thoughts, but once it is there, the opportunity immediately exists to attend to it as an object in its own right.
(59) Linguaform reason, if this is correct, is not just a tool for the novice (e.g., as suggested by Dreyfus and Dreyfus 2000). Instead, it emerges as a key cognitive tool by means of which we are able to objectify, reflect upon, and hence knowingly engage with our own thoughts, trains of reasoning, and cognitive and personal characters.

3.7 Self-made Minds

Engelbart Type C activity reflects massively self-engineered nature of mature mental routines.

(60) Our mature mental routines are not merely self-engineered: They are massively, overwhelmingly, almost unimaginably self-engineered. The linguistic scaffoldings that surround us, and that we ourselves create, are both cognition enhancing in their own right and help provide the tools we use to discover and build the myriad other props and scaffoldings whose cumulative effect is to press minds like ours from the biological flux.


4
World, Incorporated

4.1 Cognitive Niche Construction


The hazard function and active dovetailing (playing Tetris), linked to the engineering intensity function and applied to the exercise of learning how particular computers work alongside other activities that increase mental capacity, offer a path into cyberculture, making it relevant to culture studies where it intersects digital culture and also, which is the goal of my argument, philosophy.

(72) A natural way to think about epistemic actions is in terms of the Principle of Ecological Assembly (sec. 1.3). The costs (temporal and/or energetic) of adding nonpragmatic actions to the problem-solving mix are outweighed by the benefits conferred. . . . The goal is to quantify the net benefit of using epistemic actions by laying the time cost of the extra rotations against the resultant “increase in the player's mental capacity” (Maglio, Wenger, and Copeland 2003, 1). . . . work by Townsend and Ashby (1978), Townsend and Nozawa (1995), and Wenger and Townsend (2000) provides a promising tool in the form of a measure known as the hazard function of the response time (RT) distribution during problem solving. Very informally, this is a measure of the instantaneous probability of completing a process in the next move. In engineering, this is also known as intensity function.
(73) Previews were shown to produce a clear increase in capacity, as measured by the change in value of the hazard function, and these benefits increased when memory load was greatest (i.e., with greater lags between preview and decision).
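As a gloss (my notation, not Clark's): if f(t) is the probability density of the response-time distribution and F(t) its cumulative distribution, the hazard (or intensity) function is standardly defined as

h(t) = f(t) / (1 - F(t)),

that is, the instantaneous rate of completing the process at time t given that it has not yet been completed. Higher values of h(t) under preview conditions are what Maglio, Wenger, and Copeland read as an increase in processing capacity.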

Thinking about borking public radio through fossification toward a philosophy of computing as a title for the popular culture still needing one for the required sooner but doing the latter anyway, as a form of what Clark describes as the hazard function.

Tetris story illustrating hazard function and active dovetailing; agents as managers of their interaction (Kirsch).

(73) The Tetris story also illustrates the importance of what I'll call “active dovetailing.” . . . [quoting Kirsch] we are moving in a direction of seeing agents more as managers of their interaction, as coordinators locked in a system of action reaction, rather than as pure agents undertaking actions and awaiting consequences. (2004, 7)

Model of subjectivity shifts from monadic pure agent to manager actively dovetailing multiply layered systemic interactions, swirling dynamic structures embodying implicit metacognitive commitments exercised in complex skill hierarchies.

(73) In the most interesting class of cases, then, a tight yet promiscuous temporal dovetailing binds the inner and outer at multiple timescales and levels of processing and organization.

4.7 The Swirl of Organization

Subpersonally mediated calls mean operations in which, as Manovich puts it, software takes command.

(74) One key characteristic (first discussed in sec. 2.6) concerns the delicate temporal integration of multiple participating elements and processes (including, e.g., the emergence of automatic, subpersonally mediated calls to internal or external information stores).
(74) A useful way to think about the structuring of such resources may be in terms of what I shall call implicit metacognitive commitments.
(75) Deeply integrated, progressively automated, epistemic actions figure prominently in the construction of complex skill hierarchies.

4.8 Extending the Mind
(76) The considerations concerning efficient processing (chap. 1), organizational plasticity (chap. 2), the potential role of material symbols in hybrid organizations (chap. 3), and cognitive scaffolding and distributed functional decomposition (sec. 1.4 and 4.5) all come together in ongoing debates concerning “the extended mind” (Clark and Chalmers 1998).

Paying attention to the distinction between vehicles and contents reveals biochauvinistic prejudice, which is amenable to Bogost's alien phenomenology and a philosophy of computing.

(76) It is important, in considering these issues, to respect the distinction between vehicles and contents.
(77-78) The Parity Principle thus provided a “veil of ignorance”-style test meant to help avoid biochauvinistic prejudice. Applied to the case at hand, it invites us, or so we argued, to treat the standard players' epistemic use of the external rotate button [playing Tetris], the near future agent's use of a cyberpunk implant, and the Martian player's use of native endowment as all on a cognitive par.

The veil-of-ignorance behavioral truth test implicit in the Parity Principle helps avoid biochauvinistic prejudice concerning activities that may be meaningfully considered germane to humans and machines; it is a crucial leveling of the playing field when considering potential cognitive entities whose experiential realm is constituted by cyberspace, for example distributed machine operations and protocol-based communications phenomena, an example more realistic than the one involving futuristic cyberpunk implants and Martians. Contrast this active externalism and dovetailing to the passive, reference-based externalism of Putnam and Burge. Important for DH proposal.

(78-79) In the paper, we showed (in some detail; see the appendix) why all this was orthogonal to the more familiar Putnam-Burge style externalism. . . . Here, then, the causally active physical organization that yields the target behavior seems to be smeared across the biological organism and the world. Such active externalism was quite different, we claimed, from any form of passive, reference-based externalism.
(80) Applying the four criteria yielded, we claimed, a modestly intuitive set of results for putative individual cognitive extensions. A book in my home library would not count. The cyberpunk implant would.
(80-81) Overall, then, our claim was that Inga's biological memory systems, working together, govern her behaviors in the ways distinctive of believing and that Otto's smeared-out biotechnological matrix (the organism and the notebook) governs his behavior in the same sort of way. So the explanatory apparatus of mental state ascription gets an equal grip in each case, and what looks at first like Otto's action (looking up the notebook) emerges as part of Otto's thought. The gap between deeply integrated calls to epistemic action and true cognitive extension, if this is correct, is slim to vanishing.

4.9 BRAINBOUND Versus EXTENDED: The Case So Far
(82) This recruitment process looks to be systematically insensitive to the nature and location of the resources concerned, which may include just about any mix of calls to neural resources (including biological memory), external resources (including external encodings), and real-world actions and operations. Such heterogeneous mixes, actively dovetailed in time and space, together constitute (or so I have claimed) the physical underpinnings of many of the most characteristic cases of human cognizing.

Physical underpinnings are well explained by this model of subjectivity, raising questions: given its proximity to machine systems, does it follow that machine cognition independent of humans may exist; need it be disconnected to be genuine; can humans be parts of incomprehensible, alien swirls for it?

(82) Whereas BRAINBOUND locates all our mental machinery firmly in the head and central nervous system, EXTENDED allows at least some aspects of human cognition to be realized by the ongoing work of the body and/or the extraorganismic environment.


II
BOUNDARY DISPUTES
5
Mind Re-bound?
5.1 EXTENDED Anxiety
(85)



III
THE LIMITS OF EMBODIMENT
8
Painting, Planning, and Perceiving
8.1 Enacting Perceptual Experience
(169) Might this intimacy of brain, body, world, and action shed light on the nature and mechanisms of conscious perception? A positive answer is suggested by what I shall call “strongly sensorimotor” models of perception (O'Regan and Noe 2001; Noe 2004). According to such models, perceptual experience gains its content and character courtesy of an agent's implicit knowledge of the ways sensory stimulation will vary as a result of movement. Perceptual experience, on such accounts, is said to be enacted (Varela, Thompson, and Rosch 1991) via skilled sensorimotor activity.
(170) In particular, they threaten to obscure the computationally potent and functionally well-motivated insensitivity of key information-processing events to the full subtleties of embodied cycles of sensing and moving.

8.2 The Painter and the Perceiver
(172) This stress on knowledge of (or expectations concerning) sensorimotor dependencies is meant as an alternative to standard appeals to qualia conceived as intrinsic, “sensational,” properties of experience.
(172) Or to take a more recent formulation: “Perception is an activity that requires the exercise of knowledge of the ways action affects sensory stimulation” (Noe 2007, 532).

8.3 Three Virtues of the Strong Sensorimotor Model
(172) First and most important, there is the emphasis on skills rather than on qualia as traditionally conceived.
(174) For Noe, then, experience is “not caused by and realized in the brain, although it depends causally on the brain. Experience is realized in the active life of the skillful animal” (2004, 226).
(175) Prediction learning has shown itself to be a valuable tool for the extraction of a number of important regularities, such as those characteristic of grammatical sentences, of shape, and of object permanence.
(175) On Noe's account, however, a critically important subclass of cases is defined over consciously experienced perspectival properties of objects.
(175) But while agreeing that prediction learning is a powerful knowledge-extraction tool, especially in the perceptual arena, I am not convinced that mature perceptual experience is then constituted by the running of what might be thought of as the prediction software itself.
(177) But by focusing so much attention on the sensorimotor frontier, they deprive us of the resources needed to construct a more nuanced and multilayered model of perceptual experience and risk obscuring some of the true complexity of our own cognitive condition.

8.4 A Vice? Sensorimotor (Hyper)sensitivity
(178) Despite the superficially liberal appeal in these quotes to “functional multiplicity,” the required identity (for precise sameness of experience) thus reaches far down into the structure of the physical apparatus itself and demands very fine-grained similarities of body and gross sensory equipment.
(179) For the skills to which such deflationary accounts (among which I count the strong sensorimotor theory) appeal may themselves be coarse- or fine-grained and may thus involve activities and capacities that are systematically insensitive to some of the goings-on at the sensorimotor frontier.

8.5 What Reaching Teaches
(181) There is, in any case, another possibility here that has significant empirical support and that is ultimately, or so I shall argue, suggestive of an alternative to the Strong Sensorimotor Model itself. This is the possibility that contents of conscious perceptual experience are determined by the activation of a distinctive body of internal representations operating quasi-autonomously from the realm of direct sensorimotor engagement. Such representations are perceptual but are geared toward (and optimized for) the specific needs of reasoning and planning rather than those of fluent physical engagement.
(181) These “dual-stream” models appear to differ from strong sensorimotor models in at least two crucial respects.
(183) The best interpretation of all these bodies of data, according to Milner and Goodale, is that memory and conscious visual experience depend on a type of mechanism and coding that is different from, and largely independent of, the mechanisms and coding used to guide visuomotor action in real time.

8.6 “Tweaked” Tele-assistance
(187)


9
Disentangling Embodiment
9.1 Three Threads

Clark presents the clearest, most sensible, up-to-date basis for a philosophy of computing as it intersects mind, cognition, consciousness, subjectivity, and ultimately the human: the spreading-the-load, self-structuring-of-information, and supporting-extended-cognition threads, joined by the hypothesis of cognitive impartiality, the hypothesis of motor deference, and multiple functionality, are all highly relevant.

(198) In this final substantive chapter, I hope to show that (despite some recent publicity) these appeals to embodiment, action, and cognitive extension are best understood as fully continuous with computational, representational, and (broadly speaking) information-theoretic approaches to understanding mind and cognition. In so doing, I hope to display at least something of the likely shape of a mature science of the embodied mind.

9.2 The Separability Thesis
(198) Shapiro (2004) seems to suggest that it [specific details of how the brain and body embody the mind] does [matter for cognition]. He presents an argument against one version of the claim of platform independence that he dubs the Separability Thesis (ST).
(200) The common upshot of all these arguments, then, is a kind of principled body centrism, according to which the presence of humanlike minds depends quite directly on the possession of a humanlike body.

9.3 Beyond Flesh-eating Functionalism
(202) The increasingly popular image of functional, computational, and information-processing approaches to mind as flesh-eating demons is thus subtly misplaced. For rather than necessarily ignoring the body, such approaches may instead help target larger organizational wholes in ways that help reveal where, why, how, and even how much (see sec. 9.8) embodiment and environmental embedding really matter for the construction of mind and experience.

9.4 Ada, Adder, and Odder

9.5 A Tension Revealed

9.6 What Bodies Are
(207) At this point, it may seem as if the body is, just as it happens, the locus of willed action, the point of sensorimotor confluence, the gateway to intelligent offloading, and the stable (though not permanently fixed) platform whose features and relations can be relied upon (without being represented) in the computations underlying some intelligent performances. But I am inclined to go further and to assert not just that this is what the body does but that this (or something quite like it) is what, at least for all cognitive scientific purposes, the body is.

9.7 Participant Machinery and Morphological Computation

Multiple functionality of embodied, morphological computation distinguishes natural automata from engineered solutions; however, Kurzweil, Kittler, and plenty of others foresee software taking command of its own evolution, no doubt aided by affordances leveraging their own machinery.

(211) For the idea is that evolved biological intelligences, unlike the more neatly engineered solutions with which we are still most familiar as designers, are perfectly able to find and exploit unexpected forms of multiple functionality. That is to say, they may find and exploit solutions in which a single element (e.g., a bodily routine or motion) plays many roles, some of them merely practical and others more “epistemic” in nature. The clean division between mechanical (body) design and controller design that characterizes many humanly engineered solutions looks quite unimportant (indeed, often counterproductive) if what we seek is efficiency and maximal exploitation of resources. Paul's demonstration may be compared to Thompson, Harvey, and Husbands' (1996) and Thompson's (1998) work using genetic algorithms to evolve real electronic circuits. The evolved circuits turned out to exploit all manner of physical properties usually ignored or deliberately suppressed by human engineers.
(211) The case of gesture for thought (sec. 6.7) may be an example of just this kind, in which actual hand and arm motions look to implement encoding and processing operations that are, as McNeill suggests, holistic and analog rather than local, symbolic, and discrete.
(211) This possibility is also underscored by recent work on the computational role of the tendon network of the fingers.
(212) But in the present case, the authors suggest, it is not just that the load is spread but that the control function itself is distributed across the nervous system and tendon network, such that “part of the controller is embedded in the anatomy, contrary to current thinking that attributes the control of human anatomy exclusively to the nervous system” (Valero-Cuevas et al. 2007, 1165).
(212) As I read the authors, this is because the structure of the tendon network itself modifies, in a complex and systematic manner, what they describe (1165) as the interpretation of signals delivered by the nervous system.

An up-to-date description of what the body is: adaptively potent mashups, for example the tendon network case, which points to engineered possibilities in which cognition is imbricated in a larger control network, reaching discourse comprehension and therefore susceptible to changes in media systems (Hayles endorsing synergistic, intentional modifications where Kittler sees a fundamental type of technological determinism); however, multiple functionality runs counter to some design strategies, especially those centered on specific control capabilities without considering the affordances of instantaneous systems.

(213) For the step from latent to explicit morphological computation depends essentially on the agent's ability to sense its own bodily states. . . . Daily embodied activity may thus be playing many subtle, yet-to-be-understood cognitive roles. To take just one concrete example, there is a growing body of work on the possible role of eye movements in thought, reason, discourse comprehension, and recall.

9.8 Quantifying Embodiment
(215) Such increases in the information structure present in the sensory signal provide, the authors argue, a clear functional rationale for the evolution and use of coordinated sensorimotor behavior as a means of actively structuring our own sensory experience.
(215-216) In a neat inversion, these informational measures can also be used to drive the evolution of artificial agents. . . . Using a mixture of behavioral and information-theoretic cost functions, Sporns and Lungarella were able to evolve agents capable of coordinated visuomotor action. Before evolution, the accidental touch of the target object did not yield foveation, tracking, or prolonged object “capture.” After evolution, arm and eye worked together to acquire and scan the objects. Maximizing specific forms of information structure was thus seen to lead to the emergence of key adaptive strategies, including visual foveation, tracking, reaching, and tactile exploration of objects. In this way, actively maximizing key parameters relating to the self-structuring of information flows helps explain the emergence of coordinated sensorimotor activity in embodied agents and provides a new design tool for evolving artificial agents able to profit from various forms of embodied intervention and, hence, information self-structuring.
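To make the idea of an information-theoretic cost function concrete for my own purposes, here is a minimal sketch in Python (my illustration, not the authors' code or their actual measure) of one standard way to quantify information structure: the mutual information between two discretized sensor streams, which rises when active, coordinated behavior couples the channels. The function name and the toy "touch"/"vision" channels are my own assumptions.

import numpy as np

def mutual_information(x, y, bins=8):
    # Estimate mutual information (in bits) between two 1-D sensor streams
    # by discretizing each into equal-width bins. A toy stand-in for the
    # information-structure measures discussed above, not the cost function
    # actually used by Sporns and Lungarella.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                # joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal over x
    py = pxy.sum(axis=0, keepdims=True)      # marginal over y
    nz = pxy > 0                             # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy usage: a coupled "vision" channel shares information with "touch,"
# while an uncoupled one does not; this is the kind of structure that
# coordinated sensorimotor behavior is said to create in the sensory signal.
rng = np.random.default_rng(0)
touch = rng.normal(size=5000)
vision_coupled = touch + 0.3 * rng.normal(size=5000)
vision_random = rng.normal(size=5000)
print(mutual_information(touch, vision_coupled))   # relatively high
print(mutual_information(touch, vision_random))    # near zero

In an evolutionary setup of the kind the passage describes, a term like this could be added to a behavioral fitness score so that selection favors agents whose own movements structure their sensory input.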

9.9 Heideggerian Theater
(217) We will need to combine a dynamic sensibility to the importance of action, timing, and closely coupled unfolding with, I predict, the use of a variety of more familiar tools and constructs. . . . But despite the use of some familiar and some unfamiliar tools, the object of study here is not the same as before. Our target is not just a neural control system but a complex cognitive economy spanning brain, body, and world. . . . The body is—dare I say it?--the Heideggerian Theater: the place where it all comes together, or as together as it comes at all.


10
Conclusions: Mind as Mashup

Mind as mashup casts mind in sync with latest conceptions of new media.

(219) Confronted by the kaleidoscope of cases encountered in the previous chapters, the proper response is to see mind and intelligence themselves as mechanically realized by complex, shifting mixtures of energetic and dynamic coupling, internal and external forms of representation and computation, epistemically potent forms of bodily action, and the canny exploitation of a variety of extrabodily props, aids, and scaffolding. Minds like ours emerge from this colorful flux as surprisingly seamless wholes: adaptively potent mashups extruded from a dizzying motley of heterogeneous elements and processes.


Appendix: The Extended Mind
Andy Clark and David Chalmers


Clark, Andy. Supersizing the Mind: Embodiment, Action, and Cognitive Extension. New York: Oxford University Press, 2008. Print.