Formalization and the Openness of the Human Project
April 7, 2020

The advent of science and of modern political organization in 16th- and 17th-century Europe mutually benefited from analogizing the concept of law across each other’s domains. When Hobbes speaks of natural law in Leviathan in order to arrive at the relative equality of all humans, he means something similar to Galileo’s ‘all bodies fall at the same rate’, but in the domain of human activity. That is to say, law connotes something discovered rather than constructed, as legal statutes are. This notion of law, however, despite harking back at least as far as Aristotle, takes its cue originally from the notion of explicit social rules. This older notion of law denotes the social rules, constitutions, dicta, maxims, and edicts used to constrain and organize human collectives. Understanding how human collectives escaped the “state of nature”, so to speak, requires tracing the origins of how they began to codify their behaviour and the world around them. My aim in this article is to trace the process whereby humans begin to eclipse the behavioural limitations of proximate species through their cognitive prowess. I will argue that, whatever the causal preconditions, humans differ from other species because of their ability to formalize their own behaviour and the world around them. How did humans end up exercising dominion over their environment? Much has been made of this shift, and of the factors, whether anatomical or cognitive, to which it is owed. Hypotheses ascribe causal responsibility variously to the development of language, the evolution of groups, religions, and co-evolutions between humans and flora or fauna, all of which culminate one way or another in the Neolithic age, when humans on a large scale abandon their nomadic existence for a sedentary one. Agricultural society, consequently, becomes the social scaffolding for a diversified division of labour and thereby for large-scale collective action projects enabling unprecedented technological advancement.
We know that certain anatomical advantages evolved in humans prior to the onset of the Neolithic age. The current phylogenetic tree shows the hominin lineage splitting from the line leading to chimpanzees some 8 million years ago, and Australopithecus, the branch that leads to the genus Homo, splitting off some 4 million years ago. Bipedal capabilities are found in Australopithecus some 4.2-3.9 million years ago, though not to the full human extent. Australopithecus had a brain not much bigger than that of modern apes, so bipedalism preceded the evolution of larger brains in humans by several million years, though the former could have influenced the latter in indirect ways: bipedalism necessitated pelvic modifications to the birth canal, which favoured openings in the skull, called fontanelles, that allow the brain to keep growing until about two years after birth. Beginning roughly 3 million years ago, just before the branching of the genus Homo, and continuing to the present, the human brain increases two to three times relative to body mass, though the most recent increases do not correlate with changes in body mass. Only with Homo heidelbergensis, some 700,000 to 200,000 years ago, does brain volume in the genus Homo begin to approximate that of current humans. Interestingly, Neanderthals, which were active 400,000 to 40,000 years ago, had brains equal to or larger than those of modern humans. However, brain mass alone does not indicate intelligence, nor does the estimated number of neurons, though both can be taken into account along with other variables. With somewhat smaller brains, humans have more neurons than Neanderthals did, and larger mammals like elephants have more than twice the number of neurons that humans have.
These anatomical innovations bear some clues as to how human behaviour begins to diverge qualitatively from that of other proximate species. Bipedalism freed the forelimbs for more sophisticated manipulation of tools and may have fostered a reorganization of motor and sensory brain areas. The neocortex, the part of the cerebral cortex that accounts for about 76% of brain mass, is the most recently evolved part of the brain, where the more sophisticated functions of future planning and reasoning reside. Evidence suggests that the neocortex in proto-humans evolved under the selective pressure of group living: the ability to fend for oneself within a group dynamic that required a combination of cooperative and competitive behaviours. In-group/out-group dynamics play a significant role in human sociability today, though they do not explain how we have managed to overcome these limitations to construct polities, states, and corporate entities that do not rely on physical proximity to a kinship group. Hunter-gatherers functioned largely in tribes of no more than about 100 members, organized largely, though not exclusively, around kinship and matrilineal descent.
The missing piece of the puzzle here, language, is hypothesized to have evolved roughly 200,000 years ago, when several hominin species, including Neanderthals, were still extant. Much of our cognitive prowess is pre-linguistic, so it would be erroneous to invest the weight of our humanity entirely in this innovation. The evolution of speech, however, marked a fantastic shift in the relation of fit between a species and its environment. Speech multiplies manyfold the granularity of in-group communication and of representations of the outer world. Details about new and more hospitable environments can be conveyed in ways that were not possible before, and threats can be articulated with greater efficiency; in other words, a mapping of the world becomes possible through language that exceeds the constraints of direct experience. The world of Homo sapiens thereby begins to outstrip the world of other species by a large margin. The concept of the ecological niche can best hammer this point home. An ecological niche refers to the degree of fit between a species and its environment (which often includes other species), whereby the environment drives genetic adaptations in the species, and the species, in turn, changes the environment through its activity. If you consider an isolated species, in many respects this relationship is highly asymmetrical. Radical environmental changes could lead to extinction or to habitat tracking, i.e. the species shifts to a habitat that better fits its adaptations. Only humans escaped this dependence, via a set of adaptations that gave them a greater degree of control over their environments. Humans are habitat-independent on account of their ability to fashion tools and imagine solutions, abilities amplified by the capacity to refine them through communication.
Modern linguists identify two properties of human languages as distinctive: productivity and displacement. Productivity refers to the capacity to deploy a finite set of grammatical regularities to generate an unbounded number of novel sentences, whereas displacement refers to the ability of language to communicate about things not immediately present spatially or temporally. The mechanics of these features require greater scrutiny, but displacement appears to be an emergent feature of grammatical regularity, which enables greater morphological variation. Whether language evolved in response to the need for displacement, or whether displacement emerged as a byproduct of structural advancements, is not known. A feature related to displacement is the ability to generate a lexicon that analogizes perceptual and familiar objects with invisible and abstract posits (not posits in the scientific sense, but constructions), thereby expanding the individual’s as well as the collective’s picture of the world. The effects were very likely manifold and wide-ranging, but on the whole language begets a subset of truths that hold in virtue of collective adherence (which forms the condition of possibility for religions and belief-systems), though this metafact would not have been known to our early ancestors. It is not until the Greeks that a debate between nomos and physis emerges: the distinction of human convention, laws and norms, from natural principles, namely principles native to nature, like the rotations of the heavens.
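To make productivity concrete, here is a toy sketch of my own (nothing in the essay depends on it, and the grammar and names such as GRAMMAR and generate are invented purely for illustration): a handful of recursive rewrite rules that can generate an unbounded number of distinct sentences from finite means.

```python
# Toy illustration of linguistic "productivity": a tiny recursive grammar
# whose few rules can generate arbitrarily many distinct sentences.
import random

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["the hunter"], ["the river"], ["NP", "REL"]],
    "REL": [["that saw", "NP"]],
    "VP":  [["crossed", "NP"], ["said that", "S"]],
}

def generate(symbol="S", depth=0, max_depth=4):
    """Expand a symbol by picking one of its rules at random; cap recursion depth."""
    if symbol not in GRAMMAR:
        return symbol  # terminal word(s)
    rules = GRAMMAR[symbol]
    # Near the depth cap, prefer the shortest (least recursive) expansion.
    rule = min(rules, key=len) if depth >= max_depth else random.choice(rules)
    return " ".join(generate(s, depth + 1, max_depth) for s in rule)

for _ in range(3):
    print(generate())
```

The point is only that a finite rule set, because some rules refer back to the categories they help define, yields an open-ended space of sentences.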
It is here that I wish to propose a new causal mechanism in the generation of cultural novelty. That mechanism is formalization. We can reasonably surmise that, prior to the articulation of verbal and written rules, tribal dynamics relied on moral economies enforced through forms of negative sanctioning, i.e. marks of disapproval and punishment. Certain behaviours would have been collectively discouraged because they jeopardized the tribe’s subsistence, such as hoarding food instead of sharing it with members. Speech would have enabled the articulation of undesired behaviours into specific injunctions, which in turn likely increased the moral robustness of the group. The general thesis that I want to develop here is that the codification of a system’s rules increases that system’s robustness. That is to say, certain systems develop the ability to represent to themselves the rules by which they behave, thereby creating a feedback loop that allows them to change their behaviour indefinitely. In particular, and I hope to delve into this later, systems that codify normative rules, namely rules that prescribe rather than describe, admit of indefinite modification because they can recodify their normative rules in light of refinements of descriptions or environmental changes.
Writing, for example, emerged with the first states, such as Sumer in Mesopotamia, and enabled the codification of the informal rules that governed tribes and agrarian communities. Of course, that codification would not have been a one-to-one correspondence with tribal morality, as the law would have been adapted to the specific needs of the state, but sufficient isomorphism between the two was likely preserved. By isomorphism I mean a “structure-preserving mapping.” Writing itself, moreover, is a formalization of speech because, by creating physical analogues of speech structure, it promotes the stability of speech’s rules, eases its dissemination, and enables the codification of the oral tradition. The effects of writing are too many to enumerate, but in a nutshell, it created a mechanism of intergenerational information transmission that, in effect, kickstarted cultural evolution: the acceleration of cultural change through the continuity of an accumulating intellectual and technological inheritance.
The connection between formalization and systemic robustness needs further elaboration. Aristotle’s laws of thought, which systematized prior insights, are a case in point. The “laws” were implicitly operant in discourse prior to their explicit formulation. However, their explicit formulation as rules created the conditions for a domain of discourse with more stringent constraints. The law of identity states that a = a. The second and third laws are closely tied to the first: if a is identical to itself, then a and ~a cannot both hold, which is the law of non-contradiction (not both a and ~a); and the law of excluded middle, that either a or ~a is true (a v ~a), codifies the principle of bivalence, i.e. every proposition is either true or false. Explicit formulation of these laws ensured that subsequent philosophical discourse adhered more closely to the constraints they impose, at the very least providing a mechanism for weeding out invalid arguments. An inference rule like modus ponens is a generalization from particular instances of use, which, when formulated symbolically as a rule schema, can be applied more systematically. The codification of the laws of thought culminates with modern logics, which formalize aspects of natural language into formal systems that can in turn be adapted to construct serial processing machines like computers. A human can perform calculations, but if you codify the known rules of mathematics into an electronic contraption that follows explicit and unambiguous instructions by converting fluctuating voltage input into binary notation, you increase calculating power and efficiency by many orders of magnitude.
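A minimal sketch in Python (my own illustration, not anything referenced above) makes the point about mechanical application concrete: once the three laws and modus ponens are stated explicitly, a machine can check them exhaustively over the Boolean truth values, without interpretation or ambiguity.

```python
# Exhaustive check of the classical "laws of thought" and modus ponens
# over the two Boolean truth values.
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

for a in (True, False):
    assert a == a              # law of identity: a = a
    assert not (a and not a)   # law of non-contradiction: not (a & ~a)
    assert a or not a          # law of excluded middle: a v ~a

# Modus ponens as a rule schema: from p and p -> q, infer q.
# Verify that the inference is truth-preserving for every assignment.
for p, q in product((True, False), repeat=2):
    if p and implies(p, q):
        assert q

print("Identity, non-contradiction, excluded middle, and modus ponens all hold.")
```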
The increased acuity of rational discourse through cultural transmission, though not without significant interruptions, e.g. the fall of the Greco-Roman world that bred it, led to increasingly robust codifications of the outer world. Just as logical theorems are derivations from axiomatic formulations of the broadest human-scale environmental invariances (events being stretched in time and spatio-temporally distinct), physical theories codify narrower patterns in the natural world into mathematical formalizations. Logic dictates the contours that a representational system must abide by in order to be truth-preserving, i.e. to preserve general isomorphism with empirical reality by warding off contradictions. And just as the formalization of rational discourse through logic advances the knowledge enterprise, i.e. sets it on systematically sounder footing, sufficiently high-fidelity codifications of natural phenomena open up nature’s amenability to human control. Roman engineering relied on basic arithmetic, geometry, and rules of thumb about the properties of materials, which limited the Romans’ ability to construct more efficient and less laborious structures. Newton’s development of inertia into a general theory of motion, and its subsequent refinements, unlocked unprecedented engineering capacity by quantifying the vector forces acting on a system of masses, thereby rendering predictable vast ranges of observable phenomena, i.e. macro-scale bodies moving at far-from-light speeds in more or less idealized conditions.
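As a minimal illustration of this kind of codification (the formulas below are standard textbook statements, offered only as an example), Newton's second law, once written down, turns prediction into calculation: for a constant force, the entire trajectory of a body follows from its initial position and velocity.

$$
\vec{F} = m\,\vec{a} = m\,\frac{d^{2}\vec{x}}{dt^{2}},
\qquad
\vec{x}(t) = \vec{x}_{0} + \vec{v}_{0}\,t + \tfrac{1}{2}\,\frac{\vec{F}}{m}\,t^{2}
\quad (\text{constant } \vec{F}).
$$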
The structure and variety of the formalisms referenced demand some clarification, which harks back to the debate between nomos and physis that preoccupied the ancient Greeks. This debate still rages on today. The question can be formulated as follows: can we neatly separate institutional facts from brute, mind-independent facts about the world? Much confusion arises from the fact that “objective, third-person” facts cannot be expressed outside of representational systems, which owe their existence to social agreement. We express mathematical truths relative to a numbering system, which is itself a product of social agreement, and therefore substitutable. Similarly, we express truths about the natural world through linguistic posits, which are often poor analogues for the phenomena they purport to describe. So, in a fashion, we codify veridical formalisms, or formalisms that express standard-independent truths, through a scaffolded infrastructure of social agreement that involves language and more domain-specific systems of representation. This is, in turn, greatly compounded by the observation that a great many truths are formalisms that do not correspond to anything in nature: they are purely facts of social agreement, such as fiat money, the legal system, states, corporations, contracts between individuals, etc. Language, in a fashion, compounds the possibilities for cooperation as it creates the conditions for inexhaustibly extending the ontology of the world: our modern world swells with standards and composite entities, from the level of political organization to signal transmission in telecommunications.
Whether rules are codified in the domain of social organization or in that of description and explanation, i.e. philosophy and science, and despite their distinct conditions of validity, their articulation has implications for the causal powers of the system. States and corporate entities, as formalisms of pure agreement, have causal powers that mere individuals do not have. Similarly, an individual human being can modify their behaviour by representing their behavioural patterns internally and subsequently making an effort to counteract them. It is possible, in fact probable, that we could achieve such introspective generalizations in the absence of language. But language enabled a much more flexible and higher-resolution mapping of the facts and more acute inference generation. The openness of human systems, or the undecidability of their future, is thus owed, in part, to the feedback loop between veridical and normative formalisms. The two are caught in a self-reinforcing causal cycle: the latter create the conditions for the former, which in turn change the latter, and so on. Veridical formalisms increase a system’s representation of the outer world, and thereby the latter’s degree of manipulability by humans, while normative and institutional formalisms create the conditions for improved veridical formalisms. Improved veridical formalisms, in turn, expand the realm of the possible and exert causal effects on normative formalisms. There are many knots here that I am not addressing, nor pretending to resolve, such as the logical relationship between is and ought, or the evolution of moral systems and their interaction with broader cultural evolution.
Can the escape velocity of human achievement from the orbit of mere evolutionary mechanisms be attributed to some distinct and primitive human power? Is it language, consciousness, or some elusive cognitive ability whose adaptive or happenstance emergence enabled our divergence from the rest of extant species (assuming rationality is a composite of something more basic)? Or was a combination of adaptations and/or random mutations responsible? Philosophers continue to debate this issue vigorously on matters of principle. The philosopher Daniel Dennett argues that there is no difference in principle between an organism designed by natural selection and an automaton designed by humans, because at bottom both submit to simpler algorithmic automata. The philosopher John Searle has argued emphatically that humans generate social facts and designs in virtue of “intrinsic intentionality”, the capability of the mind to be directed at objects or states of affairs. Searle thinks intentionality is an evolved capability, but that it is not something we can implement in computers unless we make them conscious. Intrinsic intentionality allows us to give observer-relative functional attributions to observer-independent facts, such as “the heart is for pumping blood”. In observer-independent reality, argues Searle, the heart has no such function. Therefore, humans can design things in virtue of their ability to assign what he calls agentive functions, such as: computers are for computing things. Outside this observer-relative function assigned by humans, the computer does not compute: it merely follows causal sequences. Algorithms, therefore, according to Searle, are observer-relative function attributions to natural phenomena that are not computing anything. As a result, computers and all products of human design possess merely derived intentionality. Dennett counters that, by Searle’s logic, airplane wings are for flying but an eagle’s wings are not. In short, Dennett wants to analogize human design with natural selection, while Searle insists on a chasm: human designs are not relevantly similar to natural designs because of the observer-relative impositions owed to our evolved intentionality.
Is intentionality a well-carved property, or will neuroscience reveal it to be a composite of more basic brain mappings? Moreover, is intentionality emergent? And what, technically, does emergent mean? In light of what I have said, at what threshold in human evolution does intentionality emerge?