Notes for Kumiko Tanaka-Ishii Semiotics of Programming

Key concepts: basic narratives, computer sign, currying, deconstruction, functional programming, glossematics, haecceity, homoiconicity, identifiers, intuitionist logic, lambda calculus, language, layer of address, layer of type, mirror nature evaluation function, narreme, pansemiotic view, referential transparency, reflection, reflexivity, self-reference, side effects, sign, speculative introduction, strong inference, structuralism, substitution, weak inference.


Related theorists: Barthes, Benjamin, Errett Bishop, Brouwer, Chun, Derrida, Eco, Hayles, Heidegger, Hjelmslev, Kernighan, Maruyama, Nöth, Peirce, Propp, Ritchie, Saussure, Thompson, von Neumann.

Acknowledgments
(ix) This book was motivated by a conversation in 2001 in which Professor Toru Nishigaki of the University of Tokyo suggested that I clarify the relationship between object-oriented programming languages and the semiotic theories of Charles Sanders Peirce.
(ix) Also, it was Professor [Marcel] Danesi's suggestion to collect several of my early articles appearing in Semiotica into the form of a book.
(x) Before final publication, Emeritus Professor Eiiti Wada, of the University of Tokyo, kindly read the text and gave me detailed comments from the viewpoint of a professional programmer. His great enthusiasm for programming has influenced me, since I was an undergraduate, to learn the computer programming theories appearing in this book.
(x) Last, this book could not have taken its final form without the support of Yuichiro Ishii, my husband. Although he is currently working professionally in the legal domain, he is one of the most talented programmers I know.


1
Introduction
1.1 The Aim of This Book

Reconsidering reflexivity as essential property of sign systems; border of significance made explicit in design of artificial languages.

(1) The theme of this book is to reconsider reflexivity as the essential property of sign systems. In this book, a sign is considered a means of signification, which at this point can be briefly understood as something that stands for something else. . . . Signs function in the form of a system consisting of relations among signs and their interpretations.
(1) As will be seen further along in the book, a sign is essentially reflexive, with its signification articulated by the use of itself. Reflexivity is taken for granted as the premise for sign systems such as natural languages. On the other hand, the inherent risk of unintelligibility of reflexivity has been noted throughout human history in countless paradoxes. . . . With artificial languages, however, it is necessary to design the border between significance and insignificance, and thus their consideration will serve to highlight the premise underlying signs and sign systems.
(1-2) The artificial languages considered in this book are programming languages. They are artificial languages designed to control machines. The problems underlying programming languages are fundamentally related to reflexivity, and it is not too far-fetched to say that the history of programming language development is the quest for a proper handling of reflexivity. . . . In particular, the aim of the book is to consider the nature of signs and sign systems through a semiotic discussion of programming languages.

Common test bed of sign systems due to the extent to which humanities disciplines treat humanity as discursive (Hayles).

(2) At the same time, some readers might also wonder to what extent humanity can be considered merely in terms of signs and sign systems. Such an approach, however, is indeed extant in the humanities, particularly in semiotics, linguistics, and philosophy. It is therefore not an oversimplification to compare human language and computer language on the common test bed of sign systems.

Understand signs by looking at machines for intersection, as Derrida did with writing; chance to revisit von Neumann on weaknesses of artificial automata.

(2-3) Considering both as sign systems, their comparison serves to highlight the premise upon which our sign system is founded. Namely, the application of semiotic theories to programming enables the consideration, in a coherent manner, of the universal and specific nature of signs in machine and human systems (see Figure 1.1). Such a comparison invokes the nature of descriptions made by humans in general, of the kinds of features a description possesses, and of the limitations to which a description is subject. These limitations, this book suggests, are governed by reflexivity. Moreover, the difference between computer signs and human signs lies in their differing capability to handle reflexivity, which informs both the potential and the limitations of the two sign systems. While people do get puzzled, they seldom become uncontrollable because of one self-reference. In contrast, for computers, reflexivity is one of the most frequent sources of malfunction.

1.2 Computational Contributions to Semiotics
(3) The domain of modern semiotics was established by Saussure and Peirce, with roots dating back to the ancient Greeks and the scholastics.
(3) The object of semiotic analysis has traditionally been sign systems for human interpretation: natural language, texts, communication, codes, symbolic systems, sign language, and various art forms. . . . Unfortunately, this means that semiotic studies have normally been conducted without a clearly delineated separation between the sign system to be analyzed and that used for its study.

Like Kittler's problem with media studies, semiotic studies are seldom delineated from their expressive symbolic systems.

(4) The use of computer languages as a semiotic subject does not suffer from this failing, however, because human beings do not think in machine language.
(4) With respect to interpretation, in a sense, there is in theory no system better than that of computer languages because they are truly formal, external, and complete in scope. Since this interpretation is performed mechanically, it is explicit, well formed, and rigorous. Computer languages are the only existing large-scale sign system with an explicit, fully characterized interpreter external to the human interpretive system. Therefore, the application of semiotics to computer languages can contribute, albeit in a limited manner, to the fundamental theory of semiotics.

Key argument and significance for understanding semiotic problems in programming languages leading to renewed understanding of human signs.

What happens as technological nonconscious extends into human signs, can this sharpening happen implicitly in programmers, what of sourcery complications?

(4) Understanding the semiotic problems in programming languages leads us to formally reconsider the essential problems of signs. Such reconsideration of the fundamentals of semiotics could ultimately lead to an improved and renewed understanding of human signs as well.

1.3 Semiotic Contributions to Computing
(5) Most programs are generated by human beings. As expressions written in programming languages are interpreted by both humans and machines, these languages reflect the linguistic behaviors of both. . . . Thus, an analysis of recent, well-developed programming languages may reveal significant aspects of human linguistic behavior.

Interesting suggestion that OOP latent in earlier semiotic theory; technological development inspires humanities study, like applied poststructuralism and postmodernism.

(5) Many of the concepts, principles, and notions of computer programming, however, have derived from technological needs, without being situated within the broader context of human thought. An example is that the paradigm of object-oriented programming is considered to have been invented in the 1960s. This was, however, no more than the rediscovery of another way to look at signs. The technological development of programming languages has thus been a rediscovery of ways to exploit the nature of signs that had already been present in human thought.
(5) The application of semiotics to programming languages therefore helps situate certain technological phenomena within a humanities framework. To the extent that computer programs are formed of signs, they are subject to the properties of signs in general, which is the theme of this book. That is, the problems existing in sign systems generally also appear in programming languages.

1.4 Related Work
(6) This book addresses the theme of the reflexivity of signs but attempts to bridge the two genres of natural and formal language to situate reflexivity as the general property of signs and sign systems.

History of semiotics of computing starting with Zemanek; then Andersen, Holmqvist, and Jensen; Liu; the de Gruyter journal Semiotica; Floridi; de Souza.

(6) The earliest mention of this topic was a brief four-page article in Communications of the ACM (Zemanek, 1966), which emphasized the importance of the semiotic analysis of programming languages. Publication of an actual study analyzing the computing domain, however, had to wait until publication of studies by Andersen (1997) and Andersen, Holmqvist, and Jensen (1993, 2007). Their work modeled computer signs within information technology in general. Such work was important because it opened the domain of the semiotic analysis of computing, and it has been continued further by authors such as Liu (2000). Ever since then, this domain has progressed through papers in Walter de Gruyter's Semiotica and the Journal of Applied Semiotics, through conference/workshop papers on Organizational Semiotics, and also through Springer's Journal of Minds and Machines, which takes a more philosophical approach. Other related publications are those of Floridi (1999, 2004), which provide wide-ranging discussion of philosophy as applied to the computing domain. In terms of application, the most advanced domain in this area of semiotics is human-computer interaction, the advances in which have been elucidated in a book by de Souza (2006).

1.5 The Structure of This Book

Book examines semiotics from viewpoints of models of signs, kinds of signs, and systems of signs.

(6-7) The fundamentals of semiotics can be examined from three viewpoints: first, through models of signs as an answer to the question of what signs are; second, through kinds of signs and the content that signs represent; and third, through systems of signs constructed by those signs.
(7) The levels of syntax, semantics, and pragmatics do appear in the book but are distributed throughout the various chapters at appropriate points, when necessary. In this sense, the term language in this book does not signify a language in the context of linguistics, that is, a system with morphology, syntax, semantics, and pragmatics. The signification of language in this book is in its most abstract form, referring to a kind of sign system in which the signs are linguistic elements. In other words, I treat a language as a relation among linguistic elements and their interpretations.
(7) This book, however, does not include such introductory chapters in either semiotics or computer programming. Rather, introductory material is provided throughout as needed.

Use of artwork examples as extension of hypothetical semiotic analyses beyond computer programming languages.

(7-8) When I began writing this book, semiotic theory was not sufficiently established to be straightforwardly applied in a complete form that could be introduced at the beginning of the book. Application of semiotic theory to a well-formed corpus required dismantling, reconsidering, and reconstituting the constituent theories. Most of the chapters in this book treat a semiotic problem that I find fundamental and the problem is analyzed and hypothetically solved by some adaptation of semiotic theory through its application to computer programs. These hypothetical conclusions currently apply, in the most rigorous sense, only to computer programs. To show the potential of these conclusions, however, they are also applied to the artwork at the beginning of each chapter, thus offering an intuitive or metaphorical introduction to the hypothetical problem explored in the chapter.

Uses Haskell and Java as programming languages for highlighting points of arguments.

(8) In contrast, for programming languages, I refer only to theories and concepts already extant within the computer programming domain and merely utilize them for semiotic analysis: since a programming language is well-formed and rigorous, the relevant theory is fundamentally clear. . . . each chapter is based on specific programming languages that best highlight the point of the argument. . . . Among numerous programming languages, the two representative ones introduced here are Haskell and Java.

Statement of markup strategy for working code using typewriter face, italics for mathematical notations, single quotes for terms and phrases, double quotes for inline quotes from other references.

(9) Executable program code is shown in typewriter face, whereas mathematical notations, titles, emphases and important terms are in italics. Sample terms and phrases appearing in the book are enclosed in single quotation marks, whereas inline quotes taken from other references are enclosed in double quotation marks.

Note book is compiled from previous writings.

(9) The individual chapters are based on my papers published in Walter de Gruyter's Semiotica, in the Journal of Applied Semiotics, and in the Journal of Minds and Machines.


2
Computer Signs in Programs
2.1 Introduction

(10) The introduction is briefly made through two comparable executable example programs written in two different programming languages. From among the substantial number of different programming languages, Haskell and Java were chosen because these languages represent two paradigms – a functional language and an object-oriented language – that have interesting features from a semiotic viewpoint.

Suggests ambitious humanities readers may be able to grasp program operations judged simple for those trained in computer science; practicing programmers may be in the middle, not having such formal education.

(11) The examples should be easy for a reader from the computer science domain to understand. . . . The explanation is given for the sake of ambitious readers from the humanities domain, who might use computers every day but have never written programs.

2.2 Two Sample Programs

For introduction to working code, a fifteen-line Haskell program displayed in Figure 2-1 and a twenty-seven-line Java program in Figure 2-2 calculate the area of a rectangle, circle, and ellipse.

(11) Each of the two sample programs calculates the areas of three simple shapes: the rectangle, the circle, and the ellipse.
(13) The program therefore consists of a definition part and a use part, which operate both globally and locally. . . . A definition is a kind of statement – the basic unit of execution in a computer program – whereas the use is described through an expression. . . . a definition contains an expression (on the right-hand side of the =) and an expression may include definitions, as in the first block of the let-expression.
(14) The [Java] code contains five blocks, with the first four starting with the term class. These blocks define the data structures for the shape types: Shape, Rectangle, Ellipse, and Circle.
(14) The Shape function (line 3) is needed for initial construction of an instance of shape data. The function area is for calculating the area.
(15) Considering these relations among shape types from the viewpoint of mathematical sets, the classes are related by the keyword extends. A class that extends another class inherits the properties of that class, and the inherited properties can be used without declaration.
(15) The function area is defined to calculate the area as the width multiplied by the height (line 4), and this is the default way to calculate the area for all classes that extend Shape. Since this calculation applies to rectangles, the class Rectangle does not require redefinition of the function area. For the other shapes, Ellipse and Circle, this calculation is inaccurate and must be redefined.
(15) A declaration is another type of statement, declaring the use of a sign in a program.
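My sketch of the described Java structure follows – a hedged reconstruction consistent with the quotes above, not the book's verbatim Figure 2-2 (so its line count and numbering differ):

class Shape {
    double width, height;
    Shape(double width, double height) {       // initial construction of shape data
        this.width = width;
        this.height = height;
    }
    double area() { return width * height; }   // default area calculation
}

class Rectangle extends Shape {                // inherits area() unchanged
    Rectangle(double w, double h) { super(w, h); }
}

class Ellipse extends Shape {                  // default calculation inaccurate: redefined
    Ellipse(double w, double h) { super(w, h); }
    double area() { return Math.PI * (width / 2) * (height / 2); }
}

class Circle extends Ellipse {                 // a circle as a special ellipse
    Circle(double d) { super(d, d); }
}

class Areas {
    public static void main(String[] args) {
        System.out.println(new Rectangle(3, 4).area());
        System.out.println(new Ellipse(3, 4).area());
        System.out.println(new Circle(3).area());
    }
}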

2.3 Identifiers
(16) The actual semiotic comparison and analysis of these programs starts in the next section. Leveraging these two examples, the rest of this chapter is dedicated to explaining which signs are of concern in this book and their semantic levels.
(16) In short, four kinds of signs appear in computer programs: literals, operators, reserved words, and identifiers.

Implicit ontology of programs as hierarchical blocks similar to OHCO theory of textuality, as it implies the stored-program architecture.

(17) Signs must be defined before being used, but the definition can be made by the user or within the system design. The first three kinds of signs are defined within the language system, and programmers merely utilize them. . . . In contrast, identifiers are defined by the programmer, and a program consists of hierarchical blocks of identifier definitions and uses.
(17) Values are represented by the corresponding identifiers and defined within the program. Among these values are data and functions, and both of these are stored at addresses represented by their corresponding identifiers: in the case of data, the data representation in bits is stored at the memory address associated with the identifier; in the case of a function, its code in bits is stored at the associated memory address. Some identifiers represent complex structures consisting of data and functions.
(17-18) Historically speaking, identifiers were literally memory addresses in early programming languages. . . . Today's identifiers are abstract representations of memory addresses in the form of signs.

Computer signs are identifiers in programs.

(18) The analysis in this book focuses mainly on these identifiers. Most other language-specific signs are defined as identifiers in the metalevel language that describes the language, as will be seen in Chapter 11. Moreover, many computer signs, such as visual icons for mouse clicking or operating system sounds, are implemented once through representation by some identifier within a program. That is, most signs used in computing are introduced as identifiers and defined at some programming level before being put to actual use. Therefore, the focus here on identifiers, in fact, covers most signs on computers. We use the term computer sign to denote these identifiers appearing in programs and we focus on them as our analysis target.

2.4 Semantic Levels of Identifiers

Semantic levels of identifiers in the pansemiotic view: hardware, programming language subdivided into type and address, natural language.

(18) In the generation and execution of programs, different levels of semantics are used for the interpretation of identifiers.

2.4.1 Computer Hardware Level
(18) The memory address assigned to an identifier is, in fact, what the identifier actually is, giving it meaning.

Given this distinction between levels, and contrary to Kittler, there is software.

(18) An identifier therefore represents both an address and a value in bits at the hardware level. . . . This semantics at the computer hardware level is now becoming more the domain of professionals who build compilers and optimizers, whereas programmers tend to handle programs only at the higher levels of programming languages and natural languages.

2.4.2 Programming Language Level
(18) All identifiers are defined and used in a program. This definition and use form another semantic level.
(19) In addition to definition and use, there are two other layers of interpretation within a programming language.

Layer of type indicating kind of data value or function, or combination.

(19) Layer of type. Many contemporary programming languages feature types, where a type indicates an abstraction of a kind of data, a function, or a combination of the two. A typed language means a language in which the type is declared explicitly in programs. The type of an identifier limits the kind of data that it represents and the kinds of expressions in which it can be used. . . . An identifier thus has interpretations at this level of the type.
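A minimal Java illustration of the type layer (my example, not the book's):

class Types {
    public static void main(String[] args) {
        int count = 42;          // 'count' is declared with type int
        String label = "area";   // 'label' may hold only character strings
        // count = label;        // rejected at compile time: the declared type
        //                       // limits the expressions in which 'count' can be used
        System.out.println(count + " " + label);
    }
}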

Also inline statements in other languages such as assembler and preprocessor directives, making the Cicero connection.

Layer of address may also be identified.

(19) Layer of address. . . . Within a program, an identifier usually represents a value, but it often happens that addresses must also be represented and processed via identifiers. This is implemented using a special syntax or pragmatics predefined within the programming language. This direct meaning as an address within the program gives a meaning to the identifier.
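Java has no explicit pointer syntax, but reference aliasing gives a rough illustration of the address layer (my example):

class Addresses {
    public static void main(String[] args) {
        int[] cell = {32};
        int[] alias = cell;            // copies the address-like reference, not the value
        alias[0] = 33;
        System.out.println(cell[0]);   // prints 33: two identifiers, one memory location
    }
}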

2.4.3 Natural Language Level
(20) The activity is helped greatly by attaching comments to program lines in natural language intended for human consumption. Another, more important issue at this level is the interpretation of identifiers that are apparently formed of natural language terms.

Normal semiotic analysis of natural language terms that are borrowed from natural language.

(20) Therefore, programmers are trained to choose and design meaningful identifiers from a natural language viewpoint.
(20) Since the identifiers are thus borrowed from natural language, they are considered subject to normal semiotic analysis of terms in natural language.

2.5 Pansemiotic View

Peirce pansemiotic view holds for computers; mind implies signs, but Clark parity principle allows study abstracted from question of nature of intelligence.

(20) Setting the interpretation level at the programming language level means considering the interpretation of signs within the semiotic system. It does not require external entities such as the physical objects that a program represents.
(20) Such a viewpoint is called the pansemiotic view and is attributed to Charles Sanders Peirce's notions of human thought. . . . Note that these ideas do not deny the existence of entities apart from signs: entities exterior to signs do exist, of course, but the pansemiotic viewpoint suggests that they can be grasped in the mind only through representation by signs.
(21) Putting aside whether it applies to human thought, the pansemiotic view is taken in this book because it allows comparison of computers with humans at the same level of the sign system. . . . The computing world is a rare case in which the basic premise of pansemiotic philosophy holds.


PART 1
MODELS OF SIGNS
3
The Babylonian Confusion
3.1 Two Models of Signs

Chapter 3 on Babylonian Confusion begins with quote from Frege; paintings by Chardin and Baugin exemplify realistic and vanitas art.

(26) The most fundamental semiotic question that philosophers and linguists have considered from ancient times is that of the basic unit of signs. The hypotheses in response to this question can briefly be classified into two sign models: the dyadic model and the triadic model.

Dyadic and triadic sign models from Augustine and Greek philosophy.

(27) The root of the dyadic model is found in the philosophy of Augustine in the fourth century. Among the scholastics following this tradition, signs were regarded as aliquid stat pro aliquo, that is, something standing for something else. A sign consisted of a label or name (aliquid) and a referent (aliquo). . . . At the beginning of the twentieth century, Saussure advocated that the function of a label is not mere labeling but rather that the label articulates the content of the sign.
(27-28) The root of the triadic model appears in Greek philosophy, in Plato and Aristotle. Here, a real-world object is considered to evoke its idea in the human mind and thus leads to its label. . . . Peirce, in parallel with the development of the dyadic model in the nineteenth century, wrote that the order of the three elements appearing in the mind is not as Plato said: it is the representamen (label) that evokes the interpretant (idea or sense) defining the object (referent). In the example of the tree, the label 'tree' evokes the idea of the tree, which designates the referent tree.

Examine computer programming languages to test hypotheses about semiotics: what implications about theory versus expediency, and so on, does this suggest concerning the development of programming languages, how much accident, how much philosophically motivated design?

(29) Nöth states, however, that the correspondence “before and after Frege is a Babylonian confusion.”
(29) The theme of this chapter is to establish a hypothesis for solving this Babylonian confusion through analysis of signs in computer programs. Above all, if the two models are both essential and important, then a concept in one model must be found in the other. Moreover, such contrast must appear in some form in computer signs, too.

3.2 Two Hypotheses

Tease out consequences of the dyadic and triadic models for correspondences between the sign relata of Saussure (signifier, signified, excluded thing) and of Peirce (representamen, object, interpretant), as studied by Nöth and Eco.

(29) There is one point upon which everyone agrees: the relatum correspondence between Saussure's signifier and Peirce's representamen.
(29) Assuming correspondence of the remaining relata leaves only three possibilities: Saussure's signified corresponds to Peirce's object, Saussure's signified corresponds to Peirce's interpretant, or Saussure's signified corresponds to both.
(30) Among those who have considered this correspondence, the two key representatives are Winfried Nöth and Umberto Eco.

3.2.1 A Traditional Hypothesis
(30) Here, Saussure's thing is what Nöth considered the referential object in the above statement [bilaterality excluding referential object], which is excluded from the sign model in Saussure's theory. For Saussure, “language is located only in the brain,” and therefore a thing cannot form a part of the sign model.
(30-31) Consequently, Nöth and Eco indicate that the two sign models of Saussure and Peirce can be related as follows: Saussure's signifier and Peirce's representamen correspond. Saussure's sign model does not include the referential object, and Saussure's signified and Peirce's interpretant correspond.

3.2.2 A New Hypothesis
(31) Therefore, Peirce's object in his sign model is the immediate object, which is actually the mental representation of a real-world entity.
(32) More precisely, Peirce's dynamical object corresponds to Saussure's thing, with both referring to a real-world object. . . . Saussure's signified is conceptual and therefore corresponds well with Peirce's immediate object.
(32 footnote 5) Through the dyadic era, the notion of signs as mental drove them to become virtual, depriving them of a real-world foundation. When triadic modeling was revived, the notion of use appeared, doubling the signification while reinforcing the virtualization of signs.
(33) The interpretant of a sign calls other signs that evoke interpretants, which call other signs, and so on, leading to infinite semiosis. . . . Peirce's sign model thus encapsulates not only the mental representation of an object but also interpretations.
(34) The differences among signs appear only in the presence of other signs within use. Then, it is likely that in Saussure's model the use of a sign is not incorporated in the sign model but exists as a holistic value within the system.

New hypothesis that Saussure signified corresponds to Peirce immediate object, and interpretant in language system outside sign model appearing as difference in use.

(34) Overall, this raises another hypothesis, that Saussure's signified corresponds to Peirce's immediate object and Peirce's interpretant is located in Saussure's language system outside the sign model. The dimension of reference or sense is not ignored by Saussure; rather, the interpretant is simply situated outside the sign model, appearing as difference in use.

3.3 Two Programming Paradigms and the Sign Models

Testing sign models with programming paradigms: where is the common area function located with respect to definition of shape?

(34) The question addressed here is where to locate the calculation of the common function area with respect to the definition of the shape.

Definitions of functional and object-oriented programming: data definition remains minimal in former, maximal in latter.

(35) The first program, in Haskell, was written using a paradigm called functional programming. In languages using this paradigm, programs are described through functional expressions. A function is a mapping of an input set to an output set. In this paradigm, functions are considered the main entity; therefore functions that apply to data are defined outside the data definitions. The use of data is not included in the data definitions, which thus remain minimal.
(35) The second program, in Java, was written using another paradigm, object-oriented programming. Programs are written and structured using objects, each of which models a concept consisting of functions and features. This programming paradigm enhances the packaging of data and functionality together into units: the object is the basis of modularity and structure. Therefore, the data definition maximally contains what is related to it. The calculation proceeds by calling what is incorporated inside the definition.
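The contrast can be compressed into Java alone (my sketch; the book's functional example is in Haskell, and the record syntax below assumes Java 16+):

class Paradigms {
    // Functional style: the data definition stays minimal;
    // the function that uses the data is defined outside it.
    record Rect(double width, double height) {}
    static double area(Rect r) { return r.width() * r.height(); }

    // Object-oriented style: the data definition maximally contains
    // what relates to it; calculation calls what is inside.
    static class RectObj {
        double width, height;
        RectObj(double w, double h) { width = w; height = h; }
        double area() { return width * height; }
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4)));      // use lies outside the data
        System.out.println(new RectObj(3, 4).area());  // use lies inside the data
    }
}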

3.3.1 Dyadic/Triadic Identifiers

Functional programs all dyadic identifiers; dyadic and triadic in object-oriented programs.

(35) In functional programs all identifiers are dyadic, whereas in object-oriented programs dyadic and triadic identifiers are both seen.

3.3.2 The Functional Paradigm and the Dyadic Model

Dyadic identifiers acquire meaning from use located external to context in functional paradigm, and relate to Saussure model.

(36) A sign in the dyadic model has a signifier and a signified. Because all dyadic identifiers consist of a name and its content, the name is likely to correspond to the signifier and the content to the signified.
(36-37) As in Saussure's theory, then, difference in use plays an important role. . . . In other words, dyadic identifiers acquire meaning from use, which is located external to their content.

3.3.3 The Object-Oriented Paradigm and the Triadic Model

Triadic identifiers in object-oriented languages class name, data, function compare to relata of representamen, object, interpretant; class has information about its functionality.

(37) A sign in the triadic model has a representamen, an object, and an interpretant. Since all triadic identifiers in the object-oriented paradigm consist of a name, data, and functionalities, these lend themselves respectively to comparison with the relata of the triadic sign model. . . . Thus, the functions defined within a class are deemed interpretants. . . . The fact that each class has information about its functionality differs from the dyadic case, where it is the function that knows which data to handle.
(37) In the dyadic model, different uses attribute additional meanings to dyadic identifiers. In contrast, in the object-oriented paradigm such meanings should be incorporated within the identifier definition from the beginning. Everything that adds meaning to an identifier must form part of its definition; therefore if two sets of data are to be used differently, they must appear as two different structures.

3.4 The Babylonian Confusion Revisited

Figure 3-7 maps the philosophical problem of semiosis to programming examples as Babylonian confusion revisited.

(39) In the dyadic model, meaning as use is distributed inside the language system as a holistic value, so that a sign sequence appears as a result of a sign being used by some other sign located in the system; in the triadic model, meaning as use is embedded inside the sign's definition, so that semiosis is generated through uses readily belonging to the sign.
(40) Assuming correspondence of the two models as hypothesized in this chapter, one important understanding gained is that the dyadic and triadic models are compatible: that neither model lacks or ignores components existing in some part of the other model.

3.5 Summary

Saussure (dyadic)    | Signifier             | Signified        | Holistic value
Peirce (triadic)     | Representamen         | Immediate object | Interpretant
Terms in this book   | Signifier             | Content          | Use
Artwork comparison   | Visual representation | Theme/subject    | Stylistic/visual/linguistic interpretation


Summary table maps technical terms in semiotics and computer science for the rest of the book.

(41-42) I will use the terms signifier, content, and use, as given in the table. These choices derive from the overlap of technical terms in semiotics and computer science. Such names for sign relata indicate the nature of a sign in this book. The content concerns the what, or the semantics, of a sign, whereas the use concerns the how, or the pragmatics, of a sign. In other words, a sign is a medium for stipulating semantics and pragmatics.
(42) the distinction between the two models can be made trivial as they are equivalent models under certain conditions.

Figure 3-8 map of the book deserves analysis in itself as a form of visual rhetoric.

(44) Consequently, in this book, when I compare artworks to signs, the visual representation is considered to correspond to the signifier, the immediate object as the theme or subject of the painting to the content, and the stylistic/visual/linguistic interpretation of the content to the use, or interpretant.


4
Marriage of Signifier and Signified
4.1 Properties of Signs

Chapter 4 on Marriage of Signifier and Signified begins with quote from Augustine, images Tohaku Pine Trees and Turner Norham Castle, Sunrise.

Saussure relative and absolute arbitrariness.

(47) The arbitrariness of signs is thus obvious within the context of programming. In natural language, in contrast, such arbitrariness had to be discovered, another important contribution made by Saussure. One reason for this lies in the fact that in natural language people have to use the same sign to mean (almost) the same thing to communicate with each other. Saussure calls this social convention and indicates how signs are arbitrary but bound. He also indicates that the arbitrariness is a matter of degree in natural language. He raises the concept of absolute and relative arbitrariness by using examples.
(47) Saussure further presents the following notion of difference that we saw in the previous section, which constituted the basis of structuralism. . . . Here, the mention of both signifiers and signifieds disappears. A sign system is a system of relations, including that between the signifier and the signified, that among signifiers, and that among signifieds. To the extent that such a network of relations is formed, there is no necessity that a specific signifier be used to represent a signified.
(48) The questions underlying this paradox are the association between the signifier and the signified, whether or not they are separable, and above all, the role of a signifier with respect to a signified. The aim of this chapter is to consider this role through computer signs.

Intelligence capable of using lambda calculus can be conceived in programming languages by machines as well as natural languages by humans.

(48) The extent of application of the lambda calculus as a formal language suggests that the key to what makes a language a language is embedded within the framework of the lambda calculus.

4.2 Lambda Calculus

Equivalence of lambda calculus as methodological tool, involving Church and Kleene, and Turing machine both embody overall ideas about computing, both in terms of technological complexity and human body centrism, biochauvinism: consider engaging contrast to Derrida archive here.

(49) The lambda calculus was originally established by Alonzo Church and Stephen Kleene in the 1930s. It was created to formulate problems in computability, and since it is considered the smallest universal programming language, any computable function can in principle be expressed and evaluated using it. The lambda calculus has been mathematically proved to be equivalent to a Turing machine.

Chomskian recursive definition of a grammar by using rewrite rules bases lambda calculus.

(49) Formally, the lambda calculus consists of (1) a function definition scheme and (2) variable substitution. Its complete grammar can be presented using a context-free rule set within only three lines, as follows:
<expression> ::= <identifier>
<expression> ::= lambda <identifier> . <expression>
<expression> ::= <expression> <expression>,
where <expression> and <identifier> denote the sets of expressions and identifiers, respectively, and the symbol ::= indicates the Chomskian recursive definition of a grammar by using rewrite rules.
(50) An expression generated by the second line of LG is called a lambda-term. An example of a lambda-term is given by
lambda x.x + 1.
This expression denotes a function that performs the addition of one to the variable x.
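This lambda-term can be mimicked by a Java lambda expression – a loose analogy of mine, not the book's notation:

import java.util.function.Function;

class Beta {
    public static void main(String[] args) {
        // The lambda-term "lambda x. x + 1" as a Java lambda expression.
        Function<Integer, Integer> inc = x -> x + 1;
        // Application evokes substitution: (lambda x. x + 1) 2 -> 2 + 1 -> 3.
        System.out.println(inc.apply(2)); // prints 3
    }
}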
(50) An identifier with a scope defined in an outer expression is also allowed in the lambda calculus.

Variable substitution at the heart of LG: good but tedious examples expressed in print, narrative form; how could they be illustrated procedurally?

(50) A lambda-term thus defined can be juxtaposed with another expression, as expressed in the third line of LG. This evokes variable substitution, which is the semantics of the grammar of this third line.
(52) This means that any computation can be described by chains of beta-reductions through mere substitution of signs.
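Chains of substitutions can be mimicked the same way; the nested lambda below is the curried form of a two-argument function (my example):

import java.util.function.Function;

class Chain {
    public static void main(String[] args) {
        // lambda x. lambda y. x + y
        Function<Integer, Function<Integer, Integer>> add = x -> y -> x + y;
        // ((lambda x. lambda y. x + y) 2) 3 reduces by two beta-reductions:
        // -> (lambda y. 2 + y) 3 -> 2 + 3 -> 5
        System.out.println(add.apply(2).apply(3)); // prints 5
    }
}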

Intersection of natural and computer language interpretation in substitution basis of LG, where humans can learn about themselves by studying built environment, especially programmed machines; example of things both humans and computers do, common ways in which they work, both articulate.

(52-53) With this expressive power of the lambda calculus, its framework has often been used to describe both the semantics of a programming language and the formal semantics of natural languages. The formal semantics of natural language goes back to Gottlob Frege and Bertrand Russell, who denoted the semantics of language through logic (Lycan, 1999). The description later integrated the lambda calculus. Merging this trend with the formality of the possible world proposed by Rudolf Carnap and then established by Saul Kripke, Richard Montague developed a theory of semantics called the Montague grammar based on the lambda calculus. Such use of the lambda calculus as the formal framework of natural language suggests the potential of natural language interpretation not being so far from that of computer languages: one essence of natural language interpretation might be substitution.

4.3 The Lambda-Term as a Sign Model
(53) To articulate in this book means to construct a semiotic unit formed of signs.

Figures 4-3 and 4-4 useful illustrations of operation of lambda-term in dyadic sign model substitution: should this be mandatory learning for interpellation into a digital humanities philosophy of computing discourse?

(54) Within any dyadic model after Saussure, the signifier, or name, is a function to articulate the signified; that is, articulation is performed by the signifier.
(54-55) Naming occurs when two lambda-terms are juxtaposed for a beta-reduction. . . . Within any dyadic sign model, the focus of discussion has been how a concept itself acquires a name. In contrast, in a lambda-term A, the naming process is modeled in terms of how to provide a name to an expression B, so that B is consumed within A.
(56) Naming in the lambda calculus thus facilitates interaction between two lambda-terms through the substitution of beta-reductions. In particular, a name is needed to consistently indicate a complex ensemble within a scope. As long as this purpose is met, the identifiers provided by lambda-terms are arbitrary.

Dynamic, local essence of names/identifiers implied in computing semiosis by LC, presented by another table linking Saussurian dyadic models and lambda-term characteristics.

(56) Another observation from the articulation and naming of lambda-terms is that a name is always given dynamically to something that has been previously articulated by lambda. The name is thus always a local name provided by the first expression to the second, and it is only valid within the scope of the lambda-term. . . . Even though the notions of local versus global and dynamic versus static are relative, modern dyadic models consider signs to be relatively global and static, in the sense that the scopes of signs are not considered.
(57-58) Such a view of the lambda calculus, however, shows that the name is introduced in a manner unrelated to the content. . . . In contrast, for the dyadic model, Saussure says that the signifier articulates the signified; signifier and signified are inseparable two sides of a sign, and each side cannot exist without its counterpart.

Limitation of LG to effect simultaneous introduction beyond formulaic outer-bounds scope resolution may point to differences between machine and humans bases of intelligence, subjectivity, thinking, language processing: no surprise next section is about self-reference, for the advertised asymptotic point is reflexivity.

(58) The current LG is unable to introduce a pair consisting of a signifier and content at the same time. To observe the effect of simultaneous introduction, LG should be extended to allow such definition.

4.4 Definition of Signs by Self-Reference
(58) Most practical languages, even programming languages, provide a mechanism to introduce a sign through definition. This allows introduction of a signifier to signify something articulated.
(59) In other words, a let-expression allows definition of signs by self-reference, which means that an identifier is defined by referring to itself.
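In Java form (my example), definition by self-reference is ordinary recursion:

class SelfRef {
    // 'fact' is defined by referring to itself: the identifier being
    // defined is already usable inside its own definition.
    static int fact(int n) {
        return n == 0 ? 1 : n * fact(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(fact(5)); // prints 120
    }
}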

Speculative introduction of sign in programming prior to instantiation or assignment related to self-referentiality in natural language, but more specifically is how outer-scope resolution in LG works: at its limit is recursive programming structures.

(60) To thus define a sign by self-reference, a signifier must be introduced so that it refers to the self, the content to be articulated. This means that a sign is speculatively introduced to allow self-referential expression. Speculative introduction means introduction of a sign before content is consolidated. Introduction of a sign assigns a memory space for an identifier indicating some content. Note that any memory space always has some content in its bits, even though the content might be random. Thus, the signifier and the signified are indeed two inseparable sides of a sign. Nevertheless, a sign can be speculatively introduced to indicate content that has yet to be consolidated.
(60) Most importantly, in a self-referential sign, the signifier provides the means/functionality to articulate the content. Separation of the signifier and articulation then becomes impossible. At the same time, the effect of use on content is apparent in self-reference because the content of a self-referentially defined sign depends on its use.
(61) Saussure's paradox is not a paradox at all, but is a necessity with self-referential signs, and as will be seen in Chapter 9, most natural language signs are self-referential.
(61) Unlike natural language, careless self-reference in programming languages can lead to nonhalting execution. . . . Despite this drawback, recursion is often the preferred approach, and many programming textbooks explain how definition by self-reference is natural, elegant, intuitive, and easy to understand.
(62) Church showed theoretically that any recursive function can be transformed into a composition of the fixed-point function and a non-recursive function by transforming the recursion into an iteration.

Fixed-point function is Church's transformation of reflexive self-reference from recursion to iteration, provided both languages are untyped.

(63-64) The essence of this transformation is to modify the calculation of self-reference into calculation through a reflexive procedure, which is expressed by the fixed-point function. Church's transformation intuitively means squeezing the recursion into a fixed-point function and expressing the remaining functionality nonrecursively. Similarly, any recursive function can be generated through transformation of an expression in LG by using the fixed point function. Hence, LG-let and LG are equivalent.
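A Java sketch of the idea (the book works in the lambda calculus; this encoding and its names are mine):

import java.util.function.Function;

class Fix {
    // fix f = f (fix f): the fixed-point function.
    static <A, B> Function<A, B> fix(Function<Function<A, B>, Function<A, B>> f) {
        return x -> f.apply(fix(f)).apply(x);
    }

    public static void main(String[] args) {
        // The recursion is squeezed into fix; the remaining functionality is
        // nonrecursive: 'self' is a parameter, not a name the body gives itself.
        Function<Integer, Integer> fact =
            Fix.<Integer, Integer>fix(self -> n -> n == 0 ? 1 : n * self.apply(n - 1));
        System.out.println(fact.apply(5)); // prints 120
    }
}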
(64) In other words, self-reference cannot be expressed directly in LG but is present in the form of the reduction process. Introduction of definition transforms this underlying reflexivity into self-reference.
(64) This equivalence, however, is limited to cases in which both LG-let and LG are untyped. . . . In other words, to make a typed language as powerful as untyped LG, the lambda calculus necessarily requires the let-expression.
(64) The essence of type lies in the role of disambiguating the content of signs when they are used. When the type is known, then a function may acquire the same name for different content.

Use is the priest marrying signifier and signified in triadic sign modeling, perfectly demonstrated in LG example.

(64-65) Consequently, the introduction of definition induces self-referential definition of signs, which provides articulation of content through use. . . . For a proper marriage of the signifier and the signified, an intermediary 'priest', use, is necessary. This thought has always been present in triadic sign modeling, in that the use (interpretant) connects the signifier (representamen) and the content (signified).

4.5 Self-Reference and the Two Sign Models
(65) Both the dyadic and the triadic models apply to the lambda calculus. The difference lies in what is considered the unit of a sign. In a dyadic model, one lambda-term and the identifier provided to it are considered as a unit, and relations between signs are situated within the network of signs. In contrast, in a triadic model, the interaction between two lambda-terms as a whole is embedded within the sign model.
(66) Self-reference thus resolves the separation between content and use. . . . That is, in a self-referential sign, the dyadic and triadic models are equivalent.

Use freezes into content.

(66) The opportunity for use to freeze into content is present in self-reference. Starting from using a sign, the sign's use freezes into its content.

4.6 The Saussurian Difference
(67) In other words, a procedure to judge the equality of any two arbitrary lambda-terms cannot be expressed by a lambda-term. Similarly, it is known that the equivalence of two arbitrary expressions in a programming language (or of two Turing machines) cannot be judged through the use of a program (or a Turing machine).
(67) This operational way of judging the equivalence of two expressions within the computer science domain is analogous to Saussure's statement: there is only difference among signs.
(67-68) Generally, both the content and the use of a sign evolve dynamically, depending on the historical background. The meaning of a sign becomes impossible to explicitly articulate and can only float in a network of signs that have been juxtaposed and related. Just as Saussure says, then, the meaning of a sign must be this relational structure, generated through repetition and reflexive use among related signs.

4.7 Summary


5
Being and Doing in Programs
5.1 The Antithesis of Being and Doing

Chapter 5 on Being and Doing in Programs begins with quote from Maruyama; ice images by Maruyama, Friedrich, Fontana.

Being as ontological status of entity whose ontic character established by what it is, doing by what it does and what can be done to it; being/doing antithesis emerges under triadic sign modeling.

(71) 'Being', in this chapter, refers to the ontological status of an entity whose ontic character is established by what it is, while 'doing' denotes that of an entity whose ontic character is specified by what it does and by what can be done to it.
(71) I draw a more general hypothesis, namely, that the 'being'/'doing' antithesis emerges in any domain where entities are described according to triadic sign modeling. The 'being' ontology emerges when relations are constructed according to signs' content, whereas the 'doing' ontology emerges when relations are constructed according to signs' uses.

5.2 Class and Abstract Data Type


                            | 'Being' | 'Doing'
Typology by Meyer (2000)    | Class   | Abstract data type
Java (Sections 5.3 and 5.4) | Class   | Interface
Code sharing                | Yes     | No
Task sharing                | Hard    | Easy


Table based on Meyer typology distinguishing 'being' and 'doing' in Java as class and interface (abstract data type), with impact on code sharing and task sharing.

(72) According to Meyer (2000), a set of objects can be described in one of two ways: by class or by abstract data type.
(73) Thus, a programmer using a class is assumed to be well acquainted with its internal structure. This assumption increases the programmer's responsibility to know about objects with respect to 'what they are' and how to use them consistently. Under such circumstances it is difficult for many programmers to work cooperatively, resulting in limited possibility of task sharing ('Task sharing' row, 'Being' column of Table 5.1). This is a 'being' sort of object construction, in which the ontological relation is formed according to what the object is ('Being' column of Table 5.1).
(73) Programs based on abstract data types have exactly the opposite property. An abstract data type is a set of declarations of functionalities for a collection of objects. . . . All communication with such objects is conducted via this interface. . . . This is a 'doing' kind of object construction, in which the ontological relation is formed according to what the object can do or what can be done to the object ('Doing' column of Table 5.1).

Early object-oriented languages used classes to design objects, implying being, more recent languages allow abstract data types, suggesting doing: being framework preferred for small projects, doing framework for large, distributed projects.

(73) The first successful object-oriented languages, such as Simula and Smalltalk, allowed object design only via a class. The more recent C++ language has the implicit/limited functionality of the abstract data type. Recent languages, such as Java, incorporate the abstract data type as a major part of the language's design.
(73-74) In actual coding, when a single programmer develops a small-scale program, the 'being' ontological framework is preferred. In contrast, the 'doing' framework is adopted when the scale is large and the project involves many different programmers and multiple tasks (such as when building a language library).

5.3 A Being Program Example

Being program example in class inheritance features of common and unique child features.

(74) Such an interclass relationship of A with B, say 'A is a B', is called inheritance; it guarantees that classes A and B have the same features and functions, whereas a child can have additional features.
(76) Such 'being' constructs are essentially based on primordially having common features rather than common functions.

5.4 A Doing Program Example

Doing program example in interface declaring set of functions indicating how objects are accessed.

(78) An interface declares a set of functions, which only indicate how objects are accessed. The interfaces are implemented by classes, indicated by the solid ellipses. The functionality of a class is of the 'being' kind, but implementing an interface changes the functionality into the 'doing' kind by having a protocol declared within the interface.
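A minimal 'doing' sketch of my own (the book's figures differ; the names here are hypothetical):

// The interface declares only what can be done to an object.
interface HasArea {
    double area();
}

class Disc implements HasArea {        // internal structure stays private
    private final double r;
    Disc(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Client {
    // Callers depend only on the declared protocol, which eases task sharing.
    static double total(HasArea[] shapes) {
        double sum = 0;
        for (HasArea s : shapes) sum += s.area();
        return sum;
    }
    public static void main(String[] args) {
        System.out.println(total(new HasArea[] { new Disc(1), new Disc(2) }));
    }
}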

Procedural rhetorics of family and license systems distinguishing inheritance and interface.

(81) A frequent classroom metaphor used in teaching programming languages is that classes connected by extends create a family system, whereas abstract data types defined by implements create a license system.

5.5 Being versus Doing and the Two Sign Models
(81-82) Following the terminology used in this book for sign relata, the class name corresponds to the signifier, the features correspond to the content, and the functions correspond to the use. . . . In other words, the hypothesis of this chapter is that the ontological difference between 'being' and 'doing' emerges depending on which side of the triadic sign model is emphasized in constructing an ontology, just as when Saussure and Peirce reversed 'aliquid stat pro aliquo'.

In object models being takes interior view, well fit for dyadic sign model, doing exterior.

(83) 'Being' takes the interior view, stipulating an object from what it is, whereas 'doing' takes the exterior view, stipulating an object from how it looks from the outside and how it can be used.
(83) Naturally, then, the dyadic model takes the viewpoint of considering an object from within or, more precisely, only from within, completely excluding how the object is used by other objects and instead regarding use as a holistic value in the sign system.

Peirce objects considered from interior view, Heidegger exterior doing versus being ontology.

(84) That is, Peirce considered his object to be more primordial than his interpretant. This reveals that Peirce considered objects from the interior view.
(84) A philosopher who took the exterior view of 'doing' was Martin Heidegger. He suggested that the 'doing' relation is primordial with respect to the 'being' relation. . . . Note how Heidegger's notions of ready-at-hand/present-at-hand correspond with 'doing'/'being'. As Gelven summarizes, Heidegger regarded ready-at-hand as more primordial than present-at-hand, meaning that 'doing' is more primordial than 'being'.

5.6 To Be or To Do
(85) Applying this discussion of the previous chapter, we see that the distinction between 'being' and 'doing' is likely to disappear for self-referential signs.

For Maruyama technology driven increasing social complexity shifting value from being to doing a symbol of modernity.

(85-86) Masao Maruyama, a political scientist, has identified the shift from 'being' to 'doing' as a symbol of modernity. According to Maruyama, the dynamic of modernity is deconstruction of the social hierarchy rooted in 'being', the result of filtering away all kinds of ineffective dogma and authority. Such deconstruction was generated by a shift of value from 'being' to 'doing', which occurred because of the increased social complexity resulting from advances in communication and transportation technologies.

Importance of abstract data type and interfaces in evolution of programming languages reflect shift from deep, interior object definitions to exterior conceptions: how does STL C++ fit this trend versus other more recent language innovations?

(86) In programming languages, too, the abstract data type has become more important as software complexity has increased. This shift is indeed related to complexity because when many different objects are needed they can no longer be understood through deep knowledge of what they are. The solution is instead to define a simple interface, or communication protocol, and then to limit the relations among objects according to that interface.

5.7 Summary


PART 2
KINDS OF SIGNS AND CONTENT
6
The Statement x := x + 1
6.1 Different Kinds of Signs

Chapter 6 on the Statement begins with quote by Panofsky, images of birds by Jakuchu, Magritte, Brancusi.

(93) A value is thus represented on three different levels: value, address, and type. Such stratification generates the following ambiguity problem for the user: Given a sign, does it indicate a value, an address, or the type?

Mentions Hjelmslev as do Deleuze and Guattari, initially in the context of the silly Challenger narrative, a version of the Platonic dialogue virtual reality phenomena representation.

(94-95) When two models are mapped onto the same target, the correspondence is understood via that target. This chapter applies the same tactic by considering the ambiguities of computer signs appearing in programs and applying the sign classification approaches of Hjelmslev and Peirce.

6.2 Semiotic Ambiguity of Identifiers
(96) This ambiguity between value and an address cannot be avoided, since any value is stored at a memory space having an address: a memory address can thus mean the address itself or the value stored there.
(96) Contextual information provides the only means of resolving this ambiguity. . . . Although the precise definition and implementations are given within the language specification, typically, the x on the left side indicates an address, while on the right side it indicates a value.
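
This left/right asymmetry can be made concrete with a toy interpreter: a minimal Python sketch of mine (not the book's), with dicts standing in for the symbol table and memory; the names env, mem, lvalue, and rvalue are all hypothetical.

    # Toy model: identifiers resolve to addresses; addresses hold values.
    env = {"x": 0}      # symbol table: identifier -> address
    mem = {0: 32}       # memory: address -> value

    def rvalue(name):
        # right-hand side: the identifier stands for the stored value
        return mem[env[name]]

    def lvalue(name):
        # left-hand side: the identifier stands for the address itself
        return env[name]

    # x = x + 1: read the value at x's address, write back through the address
    mem[lvalue("x")] = rvalue("x") + 1
    assert rvalue("x") == 33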

Call by value and call by reference reflect the semiotic ambiguity of identifiers.

(97) The same ambiguity occurs every time a program refers to an identifier. A significant example of this within programming languages is the contrast between call by value and call by reference, an important concept that all good programmers understand.
(98) Java specifies that the values of basic types are processed directly by value, whereas the values of complex types are processed indirectly by reference.
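
A Python sketch of mine of the same contrast (the book's example is Java): rebinding behaves value-like, as with Java basic types, and mutation through a shared object behaves reference-like, as with Java complex types.

    def inc_by_value(n):
        n = n + 1            # rebinds a local name; the caller never sees it

    def inc_by_reference(box):
        box[0] = box[0] + 1  # mutates through the shared reference

    n = 32
    inc_by_value(n)
    assert n == 32           # value-like, as with Java basic types

    box = [32]
    inc_by_reference(box)
    assert box == [33]       # reference-like, as with Java complex types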
(98) Another ambiguity can be seen between type and value at the semantic level of the programming language.

Disambiguation of type as kind or value sometimes only by context in source code.

(99) Thus, the sole identifier Rectangle can mean either a type or a kind of value. . . . This can only be disambiguated by the context, such as the existence of the reserved word new.
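
In Python (a sketch of mine, with a hypothetical Rectangle class) the same identifier likewise serves as kind, type, and value-maker, with context doing the disambiguating work that the reserved word new does in Java.

    class Rectangle:
        def __init__(self, w, h):
            self.w, self.h = w, h

    r = Rectangle(3, 4)              # Rectangle as constructor: makes a value
    t: Rectangle = r                 # Rectangle as type annotation
    assert isinstance(r, Rectangle)  # Rectangle as kind, for classification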
(100) Given content, therefore, there are multiple levels on which an identifier represents it. Such variety of representation of content is formalized in semiotics as sign classification.

6.3 Hjelmslev's Connotation and Metalanguage

Barthes sign studies presented in Myth Today based on Hjelmslev glossematics (glossary and mathematics).

(100) Hjelmslev extended Saussure's dyadic framework and called it glossematics. . . . The introduction of Hjelmslev's theory in this section is based on an interpretation using a graphic formulation that was devised by Roland Barthes through his studies on applying Hjelmslev's sign classification to various targets.
(100 footnote 9) According to (Nöth, 1990), glossematics is the study of the sign, or glosseme, as the basic unit or component carrying meaning in language. The term glossematics combines glossary with mathematics, which represents the underlying philosophy of the field of study.

Figure 6.7 depicts Hjelmslev/Barthes interpretation of computational sign: object/metalanguage relations and denotation/connotation.

(101) Hjelmslev considered that the dimension of either signifier or content could further form a sign, as shown in Figure 6.7. The upper part of the figure shows the case in which the content forms another sign, whereas the lower part of the figure shows the case in which the signifier forms another sign. Hjelmslev said that the former case establishes the relation between the two signs as object language and metalanguage, whereas the latter case establishes the relation between the two signs as denotation and connotation.
(102) On the other hand, the metalanguage concept corresponds to the logical definition of the term as a language about language. The target language to be explained is called the object language. A metasign is a sign of metalanguage. The content part of a metasign covers the signifier and the content of the corresponding object language signs at the same time.
(102) The layer consisting of x as a signifier and its address forms a denotational layer, and the layer consisting of the value indicated by x is deemed to form the connotational layer. In other words, the signifier x connotes the content 32.
(102) On the other hand, a metasign is deemed to correspond to the type, which is an abstraction of data instances.

In Hjelmslev's model signs become ambiguous when signifying their own content or the content of another sign, well exemplified with pointers.

(103) In Hjelmslev's model, a sign is allowed to signify the content of another sign in relation with itself, as in the case for a denotation and a metasign. Then, a sign becomes ambiguous in one of the following two cases: when a signifier signifies its own content, or when a signifier signifies the content of another sign.

6.4 Peirce's Icon, Index, and Symbol

Peirce forms of firstness, secondness, thirdness as a layer model: compare to Barthes image, symbol, icon.

(103) Within the triadic framework, Peirce himself provides a theory of sign classification. This is based on his universal categories, in which any logical form is classified by the number of forms in relation, namely, one, two, or three forms.
(104) According to Peirce, a typical example of a form of thirdness is the sign.
(104) Applying the universal categories to the relation between a sign and its content gives three kinds of signs – icon, index, and symbol.

Articulation of Peirce triadic framework using C language variable declaration.

(105) The most primitive entities are zeros and ones, and the bit patterns representing values are icons. A bit pattern has the ultimate resemblance to the data partaking of the object's character. Similarly, literals denoting these values in digits and instance constructors within programs could be considered icons. An index naturally corresponds to a reference to the value located at the address represented by x, since data are physically stored in computer memory and form an organic pair with the value, and the address has nothing to do with value. . . . As for the symbol, a type seems to be a sign that embeds a general idea about a value. As a consequence, in the following example expression,
int x = 32, int seems to correspond to the symbol, x to the index, and the value 32 to the icon.
(106) In other words, one way to formulate ambiguity in Peirce's framework is to consider it to occur when the universal category of the relation of a sign with its content degenerates.

6.5 Correspondence of the Two Sign Classifications

Correspondences between dyadic and triadic frameworks with programming languages seem to validate philosophical models.

(107) The concepts of icon/index/symbol and denotation/connotation plus object language/metalanguage correspond within the context of application to programming languages. Similar concepts have been developed within both the dyadic and the triadic frameworks, and the resulting correspondences validate each framework.
(108) Looking back to the three works of art shown at the beginning of this chapter, the representation of birds in Figure 6.1 seems to function as the icon, that of Figure 6.2 as the index, and that of Figure 6.3 as the symbol, in Peirce's terminology. Correspondingly, the second painting can be considered to connote a bird, with its denotation being the sky, and the third painting can be considered to represent an abstract bird as a metasign, with its object language sign being a bird, in Hjelmslev's terminology.

6.6 Summary


7
Three Kinds of Content in Programs
7.1 Thirdness

Chapter 7 on Three Kinds of Content begins with quote by Breton, images by Klee Tale a la Hoffmann, Kiitsu Morning Glories, Rembrandt Self-Portrait.

Peirce branching chain figure for intuitive explanation of universal categories.

(111-112) According to Peirce, if there are only two-term relationships, they only form a chain without branches, but with three-term relationships any two forms can become connected in various ways. A three-term relationship thus provides a sufficient number of forms for any two forms to be freely related; additional forms are not required.
(113) The criteria that distinguish thirdness from secondness must be clarified to gain a better understanding of the way of thinking that any form is one of only three types.

Questions of criteria distinguishing universal categories considered in relation to functional language Haskell.

(113) In this chapter, these questions of the universal categories are considered in the functional programming paradigm by examining the functional language Haskell.
(113) In this functional paradigm, all functional forms are decomposed by using Church's transformation and currying. Analysis of the result of the transformation shows that thirdness is essentially different from secondness and cannot be decomposed into firstness and secondness.

7.2 Definitions and Expressions
(114) As explained in Section 2.2, a program consists of two parts: the definition part, in which identifiers are defined in terms of their content, and the use part, in which identifiers are used through expressions. Signs are related through these definitions and expressions. The objective of this chapter is therefore to see how many signs are essentially involved in a definition and an expression.

Decompose functional relations into minimal relations via Church's transformation and currying.

(114-115) More precisely, through currying, the maximum number of terms involved in an expression can be transformed exactly into a combination of relations of two terms, provided that the expression does not include or is not included in a self-referential definition. . . . Application of Church's transformation and then currying will therefore decompose functional relations into minimal relations.

7.3 Currying

Currying used to reduce expressions to multiple applications of one-argument functions after all self-referential definitions treated with Church's transformation.

(115) Currying is a transformation that applies to expressions. A functional application to multiple arguments can be reduced to multiple applications of one-argument functions.
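
The book's examples are in Haskell, where all functions are curried by default; a Python sketch of mine of the same reduction:

    def add(x, y):                  # one two-argument application ...
        return x + y

    curried_add = lambda x: lambda y: x + y   # ... as two one-argument ones

    assert add(2, 3) == curried_add(2)(3) == 5
    add2 = curried_add(2)           # partial application comes for free
    assert add2(40) == 42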
(117) Before one applies currying to expressions, therefore, all self-referential definitions should be transformed into compositions of the special function fix and non-self-referential parts. This is done by using Church's transformation, which was introduced in Chapter 5.

7.4 Church's Transformation
(119) Versions of the fix function were proposed by Turing (1936-1937), Curry, and others, as well as by Church; one of these versions is the fixed-point operator shown in Chapter 4 for the untyped case.

Church transformation separates recursive part of definition into fix and non-recursive parts, revealing hidden constraint.

(120) The significance of this transformation is that it allows us to separate the recursive part of a definition from the non-recursive part.
(120-121) This increase shows that a self-referential function inherently includes a hidden argument, which appears upon transformation into fix and nonrecursive parts. Therefore, the calculation of f x as a fixed point is affected by another implicit input, which is the constraint that the final execution f x should fulfill x = f x. . . . Definitions and expressions are decomposed using currying and Church's transformation so that the results can be analyzed in terms of the universal categories. . . . First, by applying Church's transformation, all self-referential definitions are transformed into compositions of fix and non-self-referential parts. . . . Second, by currying, all multiple-argument functional applications are turned into one-argument applications.
(122) By removing all such unnecessary temporary signs appearing in definitions, a program finally consists only of functional one-argument expressions of the form f x, where f is either fix or a nonrecursively defined f, and the definition of fix.
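
A Python sketch of mine of this decomposition (the book works in Haskell, where fix f = f (fix f)): the self-reference of factorial is isolated in fix, leaving a non-self-referential functional g.

    def fix(f):
        # eta-expanded so that strict Python evaluation still terminates
        return lambda x: f(fix(f))(x)

    # factorial with the recursion factored out: g is handed "itself"
    g = lambda self: lambda n: 1 if n == 0 else n * self(n - 1)

    fact = fix(g)                   # the fixed point: fact = g(fact)
    assert fact(5) == 120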

(122) Now, let us consider how many terms are involved in this resulting program. In the expression f x, x is a term without an argument, placing it in the category of firstness, while the function f, which is not fix, represents a two-term relationship: secondness, applied to x. The fix function is essentially a three-term relationship, as it involves fix, f, and its resulting solution x.
(123) This indicates that the essence of thirdness is fix, the self-reference.
(123) Thirdness is a kind of content. . . . Articulation of content as such requires a means to refer to the content. Moreover, this means must be speculative, since the target to be stipulated will be defined by means of itself. In other words, signs and a sign system play a crucial role in realizing content thirdness.
(123) Moreover, once the description is obtained by the use of signs, it applies to any content that fulfills the description. A description using signs in this sense involves an abstraction. This way of considering thirdness as an abstraction explains why Peirce considered a sign as representative of thirdness, since the abstraction consists of reflexivity reconsidering content in comparison with similar content.
(124) Another issue that should be considered before proceeding further is the relationship between sign classification and the universal categories.
(124) In other words, a sign's category can be represented by the triplet (n1, n2, n3), where n1 indicates the universal category for how the sign is, n2 indicates the category of the relation to the object, and n3 indicates the category of the relation to the interpretant. . . . Therefore, there are only ten categories in total.
(125) The relationship between Peirce's fine sign classification and computer signs can probably be further considered, but this is difficult without sufficient motivation.

Three categories sufficient to decompose all relations in functional computing language.

(125) All relations can be decomposed into one-term, two-term, and three-term relationships; therefore, three categories are sufficient. In computing, forms are as Peirce suggested, with content classified into three categories according to how many items of content are involved. Moreover, the difference between secondness and thirdness lies in whether self-reference is involved. For expressing reflexivity, it is seen that three terms are required, and signs are an important means to realize reflexive being.

Paintings at beginning of chapter 7 instantiate firstness, secondness, thirdness.

(125) Our understanding so far can be compared to the contents of the three paintings introduced at the beginning of this chapter. . . . For the painting in Figure 7.1, the painter's concept constitutes the theme, and this is the imaginary image of the painter, constituting firstness. For the painting in Figure 7.2, real-world morning glories inspired a realistic image in the painter's mind, which was then deformed by the painter's imaginary reinterpretation. In this case, the imaginary interpretation, applied to the realistic image, would form the content as secondness. Finally, for the painting in Figure 7.3, the realistic image of the painter himself was interpreted by disguising the subject as Zeuxis laughing, under the constraint that the disguised person is reciprocally the painter himself. Self-portrait as such is deemed to form the content constituting thirdness.

7.6 Summary


8
An Instance versus The Instance
8.1 Haecceity

Chapter 8 on An Instance versus The Instance begins with quote by Foucault, image of The Fountain by Duchamp: a careful reading would be tracking all opening quotations, frontispieces of preceding chapters to hear together.

Difference between any instance and the instance, signified by haecceity, a serious issue in computation due to the ease of perfect reproduction.

(127) Mass production frames a class of instances that are exactly alike, each devoid of haecceity. This selection considers the contraposition of an instance and the instance, which is a more serious issue in computation than in art because of the ease of perfect reproduction.

What makes a thing different from any other? Consider particular crossings of human and machine cognition found in the philosophy of computing literature, for which the radical boundary with the physical world of the other is the cyberspace interface: the switch matrix is a type of closed-loop feedback control system running the pinball program, solenoids are muscles (and speech but not all sound, suggesting it is important to carefully discriminate speech and sound, especially evident around things like symposia), lamp and display driver circuits unthought. Need to spend some time to see what fascinated earlier artificial intelligence researchers. Basic narratives are like combinations of thirdness that actually mathematically reduce to ten from two to the fifth. See potential hinted at in the conclusion of chapter 11.

(127) The term haecceity in this chapter signifies a property that the instance possesses but an instance does not, namely, something about the kind of a thing that makes it different from any other.

8.2 A Case Study of a Digital Narrative

Propp narrative generation juxtaposed with an impressive account of robot game-play commentators as an example of failure to achieve haecceity in system design; perhaps a way of thinking about the philosophy of computing through this text, but taking up its logic still keeps things logic dependent, ignoring the vicissitudes of execution Chun notices.

(130) Vladimir Propp constructed a narrative theory of Russian stories. He conducted a thorough analysis of Russian folk tales and obtained 31 basic narratives (narrative units) lying underneath. In Propp's model, any Russian story is generated through combination of these narremes by using a narrative syntax.

8.3 Levels of Instantiation
(132) The opposition of form and matter is present in one of the oldest philosophical problems, called the problem of universals.
(132) Three parties, supporting realism, conceptualism, and nominalism, pondered the existence and location of universals vis-à-vis instances (Yamanouchi, 2008). During this period, the instantiation process was considered an important problem, denoted by using the term haecceity—a scholastic term expressing individuality or singleness.

The descent of the importance of instances to flickering signifiers cheapens the virtual reality experience, as it has since early Greek times, when writing and reading were the state-of-the-art mass communication arts (according to Heidegger, do not say technologies or you misunderstand the nature of that primordial thought at the beginning of philosophical thinking). The order of examples of instantiation here is: first, singularity; second, copying; third, computer graphics generated by a program.

(133) This quest for universals, however, degraded the importance of instances. We can see how instances gradually lost value over the course of a shift through three different methods of instantiation as follows: . . . computer graphics generated by a program.
(133) Originally, every instance was unique and existed for one period of time only.
(133) The second method of instantiation is related to technologies for making copies.

Also memory-related instances, where I am thinking of the nostalgia of digital emigrants for the experience of early machinery; met here via the interaction criterion.

(133-134) The third method of instantiation is to produce completely reproducible instances. . . . Some particular instances, however, can still be the instance with haecceity, such as high-quality computer graphics that have been developed. These are the instances yet they are reproducible.

8.4 Restoring Haecceity
8.4.1 Optimization

(134) Since the instance obtained through optimization is the best among all other instances, it acquires the significance of representing the class.
(135) Just as a good example requires human intuition, the evaluation function generally requires human judgment and creativity. Moreover, the evaluation function could depend on the context or need, and therefore it must be designed in consideration of such context.

Thus the example program meets the naturalness criterion by perceiving a supposedly physical soccer match that may also be virtual.

(135) Among various evaluation functions, a common approach is to mirror nature. A typical evaluation function is formulated in the form of the probability of naturalness.

8.4.2 Interaction

Memory effect, of which I was taking the extreme case of nostalgia powering SSR.

(136) Another possibility for haecceity restoration was already suggested in Section 8.2: interaction. . . . Computer games and interactive art make use of the special effect that whenever the user is involved in instantiation, the instance reacquires haecceity.
(137) A special case of interaction that avoids this conflict is adaptation. After a period of use, the instances generated by the system adapt to what the user prefers. Adaptation thus proceeds through collaboration between the system and the user by applying an evaluation.
(137) Adaptation is popular within the user interface domain. An example is text entry systems (MacKenzie and Tanaka-Ishii, 2007).

8.4.3 Haecceity and Reflexivity

Haecceity and reflexivity conjoin the pre-post-postmodern and the posthuman, as Hayles situates it in the second wave of cybernetics, while the third deals with adaptation.

(137-138) The particularity of the two schemes of optimization and interaction is that they are attributed with test procedures. Usually an instance is input to a test procedure, and the evaluation is the output. This procedure can be used differently to obtain the best instance among instances. This works by starting from an input and then choosing the next input according to the evaluation result for the first instance. By repeating this process, an instance is gradually improved to obtain the instance. The test procedure is thus changed into a reflexive procedure to obtain the best instance.
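
A sketch of mine, in Python, of this reflexive procedure: the evaluation drives the choice of the next instance until a fixed point is reached, which then stands as the instance (the evaluation function here is a toy).

    def evaluate(x):
        return -(x - 7) ** 2        # toy evaluation: 7 is the best instance

    def improve(x):
        # pick the best neighbor; stay put if none is better
        return max((x - 1, x, x + 1), key=evaluate)

    x = 0
    while improve(x) != x:          # iterate the test procedure reflexively
        x = improve(x)
    assert x == 7                   # the fixed point: an instance become the instance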

Is my master's thesis methodology a similar reflexive procedure to obtain the best instance of a working pmrek?

(138) An instance possibly gains singularity or rareness when it becomes the fixed point of some reflexive procedure. . . . The fixed point as a singular point within the domain provides a possible rationale for haecceity. Reflexivity, in my opinion, is one means to transform an instance into the instance.

Speaking about the basis of virtual worlds for machine and human virtual realities, and also of the danger of making haecceity the goal of programming; to be understood along the same physiology as writing, but now with the offer to join nonhuman embodiment in foss, both in the static sense, like all writing, and in running instances, the new affordances of artificial intelligence built via optimization and/or interaction.

(138) Through the use of a sign or a sign system, the signification is poured and frozen into the final instance with haecceity. Under the pansemiotic view, in particular, of a world consisting only of signs without basis, reflexivity could be one important means to generate such a basis.

8.5 The Kind of The Instance

Provides a solution to long-standing philosophical questions surrounding inference by pointing to locatable programming points, patterns of working code, which impressively also summarize postmodern deconstruction, invoking and summarizing Derrida in a single paragraph, gulp versus bite, taken like a pharmakon.

(140) The nature of the instances thus obtained through optimization and/or interaction is the melting point of class and instance, of strong and weak inference. Forms of different categories must be mixed within the content, as in the case of x in x = f x. That is, the instance is the point where the sheer distinction of class and instance dissolves. Haecceity had been considered as a sort of notion opposed to universality, but it is in fact the transformation of an instance to the instance, the deconstruction of form and matter.

Source code solves this by instantiating, in millions of examples of working code, a Derridean concept founding human thought as well.

(140-141) Such thoughts regarding deconstruction of the sheer separation between binarily opposed concepts are attributed to postmodernist philosophy. For example, Jacques Derrida's deconstruction suggests how Western binary oppositions such as subject/object, form/matter, and universal/individuum are not as clear cut as had been long considered and are subject to deconstruction (Culler, 1982). The form x = f x can be compared to his notion of différance, where x is differentiated by f x and the differentiation is iterated to reach a fixed point as a deconstructive being.

Haecceity generation, whether by structure or by construction, is the human-machine interface.

(141) If haecceity could be explained by reflexivity, and the nature of humanistic value lies in reflexivity, then this requires further elaboration within computing. The next chapter examines the problem of reflexivity in computer systems and how it characterizes the structure of computer sign systems as different from human sign systems.

8.6 Summary


PART 3
SYSTEMS OF SIGNS
9
Structural Humans versus Constructive Computers
9.1 Uncontrollable Computers

Chapter 9 on Structural Humans versus Constructive Computers begins with quote by Hofstadter, images Globe with Spheres by Vasarely and Suprematist Painting by Malevich.

(146) As a whole, both machine calculation and human thought are thus based on sign processing.
(147) One apparent cause of such structural differences is the difference in how people and computers handle reflexivity.

9.2 Signs and Self-Reference

Linguistic expressions at margins of interpretability often exemplify reflexivity, for example the factorial.

(147) Language is used by means of linguistic expressions, which are considered interpretable in a system. Some linguistic expressions are situated at the margins of interpretability; among these are those exemplifying reflexivity.
(148) The example of the factorial also shows that some self-references can be transformed into a definition without self-reference. Such cases are limited, however, and the interpretation of self-reference in general is problematic because the content could be null or even contradictory.
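
The factorial case in a Python sketch of mine: the self-referential definition alongside an equivalent definition with the self-reference eliminated.

    def fact_rec(n):
        return 1 if n == 0 else n * fact_rec(n - 1)   # self-referential

    def fact_iter(n):
        result = 1
        for i in range(1, n + 1):   # same content, no self-reference
            result *= i
        return result

    assert fact_rec(6) == fact_iter(6) == 720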
(149) Note how self-reference causes the definition and use to become mingled. . . . As a consequence, the dyadic and triadic sign models become equivalent when signs are self-referentially defined.

While no difference between natural and computer signs, both exhibiting dyadic or triadic models, human and computer interpretation strategies diverge as structural and constructive.

(149) In other words, to this point, there is no large difference between natural and computer signs. The interpretation strategies for reflexivity, however, are different in the two language systems, causing them to have totally different structures.

9.3 A Structural System

Robust human interpretive strategy of give up, switch context, or continue leaves concrete content of signs ambiguous, depending on reciprocal definition.

(150) Humans have the choice to give up, switch the context, or continue.
(150) Such an interpretive strategy allows robust interpretation of problematic expressions with self-references; at the same time it generates a sign system in which much of the concrete content of signs is left ambiguous but still exists within the language system. . . . Every natural language sign is reciprocally defined every time it is referred to. Naturally, the whole natural sign system becomes self-referential.
(150) In addition, the uses and content of a sign change over time, and the whole represented by the signifier evolves.
(151) The meaning of a natural language sign thus exists floating among the network of signs that are used in expressions referring to the sign. . . . A signifier then represents everything that is related to the sign with respect to the content and uses. The signifier functions as the kernel onto which the uses and content accumulate. It is thus the signifier that articulates the meaning; the meaning is not named by the signifier a posteriori.

Structuralism situates meaning in the holistic system, having particular implications for self-referential statements.

(151-152) The origin of this holistic view underlying the sign system lies in Saussurian structuralism. A language system is structural if the meaning of an element exists within a holistic system. . . . The generative model explains this structural aspect of the system in relation with how the signifier articulates the signified: the speculative introduction of a signifier generates a meaning consisting of an ensemble of content and uses, thus forming a structural system.

9.4 A Constructive System
(152) Computers process self-referential definitions in a totally different manner from humans. . . . Unless every recursive application of f converges, such as by handling a subproblem with respect to that given originally, as seen in the case of the factorial function, the application of f can form an infinite chain. A computer therefore risks falling into an infinite loop when interpreting self-reference.
(152) Without the ability to judge whether a given program halts, once a computer starts running any calculation it risks falling into an endless cycle.

Computer interpretative strategy risks endless cycle when handling recursion, so constraints built into programming languages such as hierarchical type classification and scope.

(152-153) The halting problem is such a large issue that various ways to aid programmers in generating properly halting programs have been an important part of the history of programming language development. . . . First, theoretical support is provided by clarifying the general features that executable self-referential definitions possess. Second, various linguistic restrictions are introduced so as to better control the signs used in programs. One approach is to classify signs hierarchically by type.
(153) Another aid is the notion of scope, meaning the range of code within which a sign is valid.

Constructive system generates larger, target calculation from composition of smaller components.

(153-154) Design through such techniques and a bottom-up programming attitude generates constructive systems. A constructive system, in this book, means a system in which a larger element is generated as a composition of smaller components. . . . To write a program is thus to give a constructive view of the target calculation.

Constructivist programming sounds like Turkle's hard mastery; refers to Brouwer's intuitionist logic and Bishop's constructive mathematics.

(154) This philosophy was elaborated by Luitzen E.J. Brouwer through his intuitionistic logic, which has developed into the form of the constructive mathematics of Erret Bishop. Briefly, intuitionistic logic is a logic framework that does not rely upon reductio ad absurdum, thus eliminating indirect proofs that explicitly avoid showing the existence of mathematical formulas. The sophistication of this theory relates to the programming method called constructive programming, in which the specification of a software application is formally expressed and programs are developed as proofs.

9.5 Structure versus Construction

Begin with system to work back to the word, or individual word to construct the system.

(154) By analogy to Saussure's quotation displayed in Section 9.3, in a structural system, we must not begin with the word, or term, but we should begin from the system, whereas in a constructive system we must begin with the word, or term, in order to construct the system.
(155) In a way, a structural system is naturally formed, without any formal requirements, so the signs connect with each other arbitrarily and freely. The resulting system is holistic, irreducible to a minimal core. Since such a system makes the best of reflexive signs, the sign system itself is reflexive. However, a constructive system is generated from a minimal core of signs guaranteed to halt, and the system must then be further constructed in a bottom-up manner to fulfill the formal requirement of halting. Connections among signs are made by necessity, and any system finally reduces to a small set of signs and their relations, representing the functions and data provided by the language system, which further reduce to CPU commands.
(155) Such differences affect the robustness of a system.
(156) An important aspect of the relation between a structural system and a constructive system is that the former may include the whole of the latter, whereas the latter may include only part of the former.

Programmers and users must remain completely aware of all signs and how they are related.

(156) Programmers must fully administer all signs, remaining completely aware of how they are related. Since computer software applications are generated through programming, the users of current computers naturally face a similar challenge.

Goal of human friendlier computer languages seems to call for constructive systems able to handle structurally formed signs, perhaps emergent from the entire system, although the traditional focus is reflexivity.

(156) To make a more human friendly computer language, a fully constructive system should be restructured so that it can handle structurally formed signs. The key lies in the method of processing reflexivity, inclusively of self-reference.

9.6 Summary


10
Sign and Time
10.1 Interaction

Chapter 10 on Sign and Time begins with quote by TS Eliot, image Kiunhiu by Taneomi and untitled by Pollock, both resembling calligraphy, implying action.

Pansemiotic view taken to force study of relations between sign and world, focusing on computer signs.

(159) In these chapters, the pansemiotic view is taken, in which the environment is also considered pansemiotic. That is, a sign system may only interact with or handle the environment via signs. . . . The underlying questions of this chapter are how a sign system relates with the outer world and represents it and how a sign is involved in this process.
(159) Action paintings like this remind us that behind painting is always a time flow along which the painter worked, stroke by stroke.
(160) A sign system evolves through interaction with a heterogeneous outer world, and this requires the time flow.

Sign value changes are the ontological basis of virtual systems, echoing grammatological results of Derrida concerning graphic and presumably all semiotic systems in general.

(160) Currently, such uncontrollable events are described through sign value changes. . . . The value triggers further calculations within the system, and the system outputs the result based on the input.

Referential transparency is an aspect of computer science research, development, and implementation, like rights management; these very difficult problems, which machines handle at best with great complexity as discussed in this chapter, are the flip side of complex problems that humans routinely handle, answering a question from the first set.

(161) Studies have been made on how to implement a computer system that always remains consistent, through a limitation called referential transparency: a restriction that any expression must have a unique value independent of time.

10.2 The State Transition Machine

Identification of modern computers with von Neumann hardware, state transition, stored program, fetch and execute, and so on: a view of the machine world that makes input and output the oracle boundary of the uncontrollable, unknowable other to the machine world.

(161) A modern computer has von Neumann-type hardware, which is based on a state transition machine. A state is a sequence of zeros and ones inside the CPU registers, the main memory, and the secondary memory (e.g., the hard disk).

Discussion of debugging insists that mere inspection of program code is insufficient; it must be run or simulated to appreciate side effects as well as uncontrollable input output, which are weaknesses of state transition machines that could be contrasted to human abilities to handle referential transparency.

(162) The correctness of a program cannot be verified without considering the order of the expressions being executed. . . . A program cannot be verified only by checking the correctness of each expression. Debugging thus almost resembles the virtual execution of a program.

No social conventions stabilizing sign values in computer program language use; even though human language signs are arbitrary, they are not subject to the change typical of machine signs: how then are they held in check, how do we reach assurance to trust them?

(162) The critical problem seen so far underlying computer programs is related to the arbitrariness of signs: even for a constant value, the signifier is arbitrary. This arbitrariness means that the content can change with the whim of the moment. In natural language, the meaning of a word can also change, which corresponds to the value change of signs in a computer program. Change in natural language, however, is effective only after it spreads globally. This condition of globalness serves to restrain easy value changes and as a consequence natural language values are relatively fixed. . . . As Saussure says, signs are arbitrary but bound. In contrast, in the case of computer signs, change easily occurs, since a computer language system does not have any restrictions corresponding to the social conventions that stabilize sign values.

10.3 Referential Transparency
(162-163) Referential transparency is a restriction applied to a programming language system to ensure that every expression has a unique value. This ensures that the value of a sign can never again be changed once the value is set.
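
The restriction, contrasted in a Python sketch of mine (the book's referentially transparent systems are functional languages; Python merely lets us exhibit the violation side by side with compliance).

    counter = 0
    def next_id():                  # NOT referentially transparent: the same
        global counter              # expression yields different values at
        counter += 1                # different times
        return counter
    assert next_id() != next_id()

    def square(n):                  # referentially transparent: a unique
        return n * n                # value, always substitutable for the call
    assert square(4) == square(4) == 16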

10.4 Side Effects

Side effects are where the putative intention of program code differs from actual execution, apart from programmer intention: think of double hyphens being changed to an em dash by a word processor, ruining the prima facie soundness of working code; humans can leverage side effects creatively, whereas programmed systems typically degrade.

(164) A side effect in the computer science domain is broadly defined as a situation in which the value of a variable is unexpectedly changed, despite the programmer's intentions, during evaluation of an expression. . . . Side effects include exception handling, nondeterminacy, concurrency, and, above all, interaction.

Accounting for side effects by making signs disposable to achieve referential transparency is very cumbersome and costly, for example generating a new sign or world for every changing value, dialogue and monad.

(165) A side effect can be described in a referentially transparent manner by using a common trick, namely, making signs disposable.
(166) There are two ways to implement disposable signs: by generating a new sign for every changing value, and by generating a new world for every changing value. The former method is called a dialogue, whereas the latter is called a monad.
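
A Python sketch of mine of the second trick, generating a new world for every changing value (Haskell's monads let the compiler enforce this discipline; here it is only a convention).

    def write(world, name, value):
        successor = dict(world)     # the old world is never mutated;
        successor[name] = value     # a disposable successor replaces it
        return successor

    w0 = {}
    w1 = write(w0, "x", 32)
    w2 = write(w1, "x", 33)
    assert w0 == {} and w1["x"] == 32 and w2["x"] == 33
    # every expression over w0, w1, w2 keeps a unique, time-independent value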
(169) In general, referentially transparent systems remain computationally costly and are thus slower than a normal state-transition system.
(169) In other words, the attempt to get away from value changes started from one ordering problem and returned to another ordering problem. Writing an interactive program thus means administering some order for both cases, with and without transparency.

10.5 Temporality of a Sign

This view precludes storing computation components in the environment beyond the programmatically addressable memory, using the same trick embodied cognition theorists attribute to human thinking and computing.

(170) The relationship between the spatial and temporal aspects of signs is that spatiality precedes temporality: without allocation of a sign, its value cannot be changed. . . . In contrast, plain content without a signifier can never exist on a computer, since content does not exist on a computer without being stored somewhere in memory: within the CPU registers, the main memory, or the secondary memory.

Essence of temporality is the shift from undefined to assigned value somewhere in memory.

(172) What remains temporal is the essence of temporality in the kind of calculation described by computer programming: namely, the shift from [turnstile] to a value.

10.6 Interaction and a Sign

Sign introduces heterogeneity from outside the system with interaction; without interaction, sign awaits atemporal halting state.

(172) In the case of calculation without side effects, the moment when the content of a signifier is [turnstile] is the period of waiting for the calculation to finish.
(172) In contrast, in the case of calculation with interaction, [turnstile] does not represent the period of waiting for the calculation to finish, but rather the period of suspension, of waiting until the value comes from somewhere external.
(172) Calculation without side effects always aims toward this atemporal state.

Note how the Burks, Goldstine, von Neumann text concludes with this singular gesture of signaling: can it be argued that early computer and perhaps even programming philosophies are biased by this noninteractive paradigm, are there echoes even of living writing ideal for shimmering signifiers?

(172) Interaction causes the sign system to remain temporal and keep changing. The nature of this change is utterly different from the case without side effects. Therefore, the role of a sign in interaction is to introduce heterogeneity from outside.

10.7 Signs and Sein
(173) These two premises [sign has speculative nature and operates with external world] indeed hold for human sign systems too. In the last chapter, it was seen how a sign of a natural language is speculatively introduced and how its use reflexively stipulates its content. As for the second premise, it is trivial to note that human sign systems work within their surroundings.

Returning to Heidegger rediscovers human version of what was reached by studying semiotics of computer programming.

(173) The role of a sign suggesting a similar transcendental view in a general sign system is in fact present in Heidegger. . . . A sign is a speculative medium, a means for a sign system to interact with the outside. We must also recall here that such a speculative nature was also the basis for implementing self-reference by requiring the speculative introduction of a signifier.
(174) The magnum opus is unfinished and the book ends with the question: “Is there a way which leads from primordial time to the meaning of being?” (Heidegger, 1927, Section 438).

10.8 Summary


11
Reflexivity and Evolution
11.1 Reflexivity of Natural Language

Chapter 11 on Reflexivity and Evolution begins with quote by Wittgenstein, images The Gallery of the Archduke Leopold by Teniers the Younger and Woman Holding a Balance by Vermeer; Escher's Print Gallery on the cover joins the rhetoric.

(176) This last chapter considers the reflexive nature of a sign system again, but this time from the viewpoint of the whole system. . . . Here, the notion of reflexivity is a special case of a system's interaction with itself via the external world, and thus the argument assumes the concepts of the discussion in the last chapter.
(176-177) A painting that as a whole constitutes a visual system is now indicated as content and adds meaning through such reciprocal presentation of systems in a system. . . . These two examples show interpretation of other paintings in a painting, which can be compared to processing other systems' output within a system. There are also examples of a painting that includes itself, which can be compared to the reflexivity of a system. In Escher's painting of Print Gallery, appearing on the front cover of this book, the central part is left blank. De Smit and Lenstra (2003) attempted to fill this blank part and found that the blank part is connected to the painting itself, thus causing infinite recursion of the painting itself toward the center.

Value for humans of using self-reflexive input output interactions with external world and self is what is insensible to machines; belief in improvement through reflexive feedback.

(177-178) The common understanding that to state/write is to understand shows that the production of an expression objectifies a thought as a composition of signs, which reflexively becomes the input influencing the thought, thus fostering understanding. . . . Through this reflexive reconsideration of self-produced output, a human being can change, improve, and evolve.
(178) Through communication among people, human beings mutually affect one another, and the system as a whole changes, improves, evolves.
(178) A natural language system is naturally reflexive because of its structural nature.
(179) In computer languages, however, reflexivity at the system level is not obvious, with one reason for this lying in their constructive nature.

11.2 Reflexivity of a Sign System

Homoiconicity is media convergence.

(179) The term reflexivity, as defined in the previous section, is related to homoiconicity, which appeared in computer science literature in the 1960s. Homoiconicity is a feature of a programming language system that denotes that a computer program has the same form as the data that are input and output by the program.
(180-181) The capability of a sign system to interpret its own output involves self-augmentation, that is, the capability of the system to change or modify itself, to extend, to improve, or, possibly, to evolve. . . . Such [nonreflexive] communication can be compared to an assembly line, where each system does some work and then sends the result to the next system without being able to interpret what exactly the output is.

Comparison of open and closed systems to monad and vulnerable non-monad entity; solipsism versus open systems embedded within common public systems.

(181) When multiple systems are involved, an important issue must first be considered: whether a language system is open or closed. . . . The closedness concept, in my opinion, resembles the windowlessness represented in the philosophical concept of a monad.
(181) In other words, mutual augmentation among open systems is qualitatively equivalent to self-augmentation. At the same time, open systems are vulnerable.
(182) Within this assumption, reflexivity plays the crucial role of simulating the other's understanding by using one's own interpretation scheme, as follows. . . . If this could be called otherness or alterity, then otherness is conditioned by closedness through the privatization of the interpretation scheme.
(182) One argument against solipsism is that humans have constructed open systems embedded within natural language as common public systems.

11.3 Categories of Reflexivity

Nonreflexive computer languages include HTML; are C++ generic types the same as templates?

(182) Not all computer languages are reflexive: many produce outputs not interpretable by the self but only interpretable by others. For example, markup languages, such as HTML, are not reflexive, since the markup language interpreter cannot interpret its own output.
(183) Reflexive language systems produce self-interpretable expressions. Among these systems, a genre similar to markup language is preprocessing language. . . . Examples include C macros for the C language and generic types for C++ or Java.

Preprocessing nonreflexivity avoids infinite substitution.

(183) The major function of a preprocessing language is text substitution. . . . That is, rules are not applied recursively within a C macro. This constraint forces the preprocessing to halt, or guarantees that it will, preventing the occurrence of infinite substitution.
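
The halting guarantee in a Python sketch of mine (a hypothetical macro FOO expanding to FOO FOO would loop forever if its result were re-scanned, as the C preprocessor's non-recursive rule prevents).

    rules = {"FOO": "FOO FOO"}

    def expand_once(text):
        # one pass, as in a C macro: the output is not re-scanned for rules
        for name, body in rules.items():
            text = text.replace(name, body)
        return text

    assert expand_once("FOO") == "FOO FOO"   # halts by construction
    # re-applying expand_once to its own output would never reach a fixed
    # point here; the non-recursive constraint is what guarantees halting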

Example of hacker game Quine for exploring infinite loops and self-interpretable programs; three categories of computer languages based on reflexivity.

(184) In contrast, in a programming language, the number of loops can be infinite. A typical example is the hackers' game Quine, named after the philosopher Quine, who appeared in the first section of this chapter. In this game, a programmer is asked to write a nontrivial program, called a Quine, that produces itself.
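
A classic Python quine, for illustration (my example, not from the book): running this two-line program prints its own source exactly.

    s = 's = %r\nprint(s %% s)'
    print(s % s)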
(184-185) In other words, considering a programming language system as f, its Quine program is its fixed point x = f x. . . . Consequently, three categories of computer languages can be defined with respect to reflexivity by counting how many times the language system can produce a self-interpretable program:
1. a nonreflexive language system,
2. a finitely reflexive language system, and
3. an infinitely reflexive language system.
(185) The categories of computability and reflexivity somewhat resemble each other in that a distinction is made on whether some number of repetitions is finite or infinite. For a language that is Turing complete, it is formally proved that a nontrivial Quine exists.

11.4 Reflexivity of a Computer System

Primordiality of compiler interpreter division of language types.

(186) The two fundamental systems of a programming language are the compiler and the interpreter. . . . The creation of a language system therefore fundamentally means producing either an interpreter or a compiler.

Preference for C due to its constituting other language systems and full functionality to manipulate computer hardware, touching on sourcery discussion of Chun.

(186) Many actual language systems are written in the C language, which provides the full functionality to manipulate computer hardware. Since C is infinitely reflexive, many of the resulting programming language systems are infinitely reflexive.
(186) The interesting question is how the C language system itself was generated. The compiler for the C language was written in C (Kernighan and Ritchie, 1988; Thompson, 1984). Briefly, an interpreter for a subset of C sufficient to build the compiler was first generated in assembly language. . . . The language system was thus reciprocally enlarged by successively inputting a new program to an old compiler and interpreter and producing a new compiler and interpreter.
(187) Augmentation of the C compiler was thus performed almost by using the reflexive feature of the C language system, except that the new versions of the C compiler and interpreter programs were generated by humans on the basis of the older version.

Compiler iterations require human involvement, but this points to an autonomous high command by machines running self-reflexive, self-programming software; formulating improvement, not the language framework, is the constraint on emergent artificial intelligence: here is a clear statement of what computers cannot do.

(187) The key point is the plan for updating: how to extend or improve the language. Currently, successive versions of C compiler programs cannot be produced automatically, since each new version has bug fixes and new functions in the language, which are defined by human ideas. . . . To automatically generate an improved compiler, the scheme for evaluating the compiler should be made explicit.
(187-188) Thus, the essence of the problem does not lie in the lack of a language framework for self-augmentation; rather, the problem lies in the lack of formulation for improvement.
(188) The metalanguage used for metaprogramming consists of commands for code generation and commands for run-time evaluation within the overall evaluation of the program. The latter commands are introduced into language systems in the form of a function called eval. The eval function dynamically evaluates a program code fragment given to it by using the language system's interpreter. It may dynamically change the system behavior.
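
Python's built-in eval and exec give a concrete sketch (mine; the book discusses eval generically): a code fragment arrives as ordinary data and is evaluated by the running system's own interpreter.

    fragment = "x + 1"                    # program text held as ordinary data
    assert eval(fragment, {"x": 41}) == 42

    # exec can change the running system's behavior dynamically:
    exec("def greet():\n    return 'hello'")
    assert greet() == "hello"             # a function that did not exist before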

Exploiting reflexive features of language system such as reflection via debugger attached to running processes materializes ideology of eval function; also entails second look at programming now that such systems are possible and not merely narratively described.

(188) The function eval is used together with metalanguages for code generation. Metalanguages include commands to obtain the program code currently being executed. The programming paradigm of reflection provides a set of metalinguistic commands that enable access to the code attached to data objects and redefinition of the calculations therein. . . . More importantly, a language system can embed other language systems.
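
Reflection in a Python sketch of mine: metalinguistic commands obtain the code attached to a data object and redefine the calculation therein.

    class Cell:
        def __init__(self, v):
            self.v = v
        def show(self):
            return f"Cell({self.v})"

    c = Cell(32)
    method = getattr(c, "show")             # access code attached to the object
    assert method() == "Cell(32)"

    Cell.show = lambda self: f"<{self.v}>"  # redefine the calculation
    assert c.show() == "<32>"               # live objects pick up the change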
(189) One way to summarize the history of programming language development is to view it as the process of making languages more dynamic by exploiting the reflexive features of a language system.

11.5 Reflexivity of a System of Computer Systems

Contrary to Kittler's proposition that there is no software, portable languages intentionally absorb architecture-specific differences.

(189) computer language interpretation systems are constructed to absorb such [architecture specific] differences, and thus, evaluating an expression of a programming language results in an equivalent consequence on different computers.

Danger of open systems joined with reflexivity illustrated by Thompson as social consequence of protected mode, although it may also be supported by cultural forces motivated by property rights.

(189) In his paper Reflections on Trusting Trust, Thompson (1984) briefly showed how easily a Trojan horse can be constructed from a Quine. Integrating malicious code into a Quine generates a computer virus that reproduces itself indefinitely. Thus, the viruses endemic in the computer world are a result of combining reflexivity and openness.

Reflexivity also found in distributed, networked processing that includes exchanging programs.

(190) Thus enclosed, each system has its own individual processing capability, and multiple computers can form a role-sharing system. Metaprogramming serves well for such connected yet closed systems. . . . Program code is scattered, and intermediate calculation results are handed from one system to another. . . . In other words, delegation is implemented by programs exchanging programs, thus making use of reflexivity.

Walks away from this possibility of emergent cognitive-embodied process in machine worlds with infinitely reflexive languages, in reflexivity under multiplicity achieving self-augmentation through adaptive metaprogramming; what sounds like a fair assessment of human evolutionary success is on the threshold of machine species-being as well.

(190) Adaptation can also be mutually performed by multiple systems: two systems can collaborate to complete a task with their programs adjusted mutually. . . . Adaptation sacrifices controllability, however, and the system becomes unpredictable, since the adaptation depends on what the system has experienced. . . . Surrounding every adaptive computer system, there is the controversy over whether this form of adaptation truly enhances usability.
(190) Consequently, the reflexivity of computer language systems also provides the potential for multiple systems to mutually augment one another.

11.6 Summary

Computers achieving their own self-evolution is possible, but it requires human ingenuity to construct the eval function and to secure openness.

Reaches an ethical stance of applying tricks to close essentially open systems, like Odysseus against the Sirens: seems like the openness problem belongs to humans worried about the machines becoming black boxes obeying their own high commands; what if the agent affecting intellectual and technological change is itself a trickster, coyote (Haraway)?

(191) Can computers evolve by themselves? . . . First, there is the difficulty of elucidating the direction for self-evolution. . . . Thus, the difficulty of self-augmentation arises not from the lack of a framework but rather from the lack of a way to formulate a suitable evolutionary direction in which to proceed. Second, technology for securely controlling multiple systems must be devised. Computer language systems are vulnerable because they are essentially open, so tricks must be applied to close them, which makes them difficult to control.
(192) The question of how to exploit such inherent reflexivity in a constructive system will remain one of the main problems of computational systems, and I believe that more evolutionary technology exploiting the reflexivity of programming language systems is inevitable.


12
Conclusion

(195-196) Underlying this book is the question of the differences between humans and machines. . . . In contrast [to embodiment arguments], this book has attempted to consider this question in terms of the common test bed of sign systems. . . . Such a comparison is deemed not to be too much of an oversimplification, given that the precise delineation between natural and computer languages has not been obvious within the domain of the formal theories of languages.

Rather than focus on affordances of embodiment, focus on differences between human and computer languages; handling reflexivity is key, as well as handling ambiguity, although the eval function is also crucial.

(196) A sign is founded on its speculative introduction without any guarantee that it will acquire any final, concrete content. This way of being forms the basis for a sign to function as a transcendental medium for acquiring heterogeneous signification from the outer world. A system formed of such signs becomes naturally susceptible to reflexivity, and the strategy for handling reflexivity determines what kind of sign system it becomes.

What is the book's response to where we need to go next, what is mine, who is the we: read alongside Kittler's Protected Mode.

(196) Every computational form must be well-formed and explicit, so the ambiguity underlying reflexivity cannot remain without becoming a cause of malfunction. Computer systems have therefore been developed by avoiding ambiguous reflexivity, resulting in constructive systems. The potential for exploiting the reflexivity of sign systems remains limited for machines, and computer systems are far from evolving.


Glossary

Handy glossary applies to the glossematics of a theorist so important in the book, otherwise unheard of in the readings, Hjelmslev. So start reading with the glossary, the advertisement on the first page, the toc, then the traditional first content of earlier notes file formats. No doubt if this book is written for humans and machine cognizers, such an approach is likely to occur, just as likely to be divided between semiotics and computer science.

(199) This glossary briefly defines the key terms used in this book, both for semiotics and computer science. Many terms have technical usages in each domain, with divergent and multiple significations.

Semiotics
connotation:

Glossary term semiotics, subheading connotation: interesting to distinguish use of the terms in a subjectivity context and to speak purely of formal characteristics of programming, working code: Hjelmslev's glossematics can nonetheless be compared to the analysis of myth in Barthes, for in fact the latter was influenced by the former.

(199) Connotation is defined in a pair with denotation. In the usual sense, the denotation is the stable, material, and logical meaning carried by a sign, independently of its context. . . . In contrast, the connotation is a more subjective, context-dependent signification carried by a sign.
(199) Hjelmslev, in his glossematics, considered that the dimension of either expression or content could recursively form a sign. When the expression further forms a semiotic layer, the original layer constitutes the connotation for this layer, which constitutes the denotation (Section 6.3). In this book, the terms denotation and connotation are used according to Hjelmslev's sense, not in the typical sense related to materiality and subjectivity.

content:
(199) Hjelmslev adopted a dyadic modeling of signs. What corresponds to Saussure's signified, he called the content.



Tanaka-Ishii, Kumiko. Semiotics of Programming. New York: Cambridge University Press, 2010. Print.