Not all physical systems are created equal.
There is, in fact, a remarkable level of variation in the properties of physical systems, especially among those we label complex. In this digest, I will bring together a cornucopia of ideas that converge on certain regions of complexity: non-linear, far-from-equilibrium thermodynamic, organizationally closed yet open, and undecidable/indeterministic cybernetic systems.
The aim of this piece is to connect some disparate yet perhaps interrelated ideas in order to illuminate the dynamics of complex phenomena. Part of the aim, also, is to bring into relief the conceptual joints of a set of ideas whose mutual contact generates friction.
Temporal, causal, explanatory, statistical, and epistemic asymmetries abound in our everyday contact with the world. Time moves toward the future, causes precede effects, things that explain precede those that require explanation, knowledge more or less accumulates, and systems statistically evolve toward disorganization. And yet, our best physical theories that purport to describe the most fundamental constituents and interactions in the world (chiefly: General Relativity and the Standard Model of quantum field theory) bear no imprint of any such asymmetries. The fundamental laws of nature, inscribed in the equations of General Relativity and the Standard Model, do not distinguish between past and future.
Either, then, our physical theories lack some key ingredient necessary to account for the profuse asymmetries evident in all aspects of our acquaintance with the world, or these asymmetries are the imprint of a partial and parochial cognitive context: rooted in evolutionary exigencies that work well in our statistically skewed corner of the universe, they belie a truly time-symmetric universe, changing but displaying no statistical preference in its evolution.
Precisely because of these potential schisms, the mechanics of time beseech us to pursue the true meaning of complexity, whose dynamics are most conspicuously exhibited in the human brain. The site of the products of human endeavour, rooted in the variegated tenor of experience and the painstaking pursuit of the fruits of rationality, the brain’s dynamics and its instantiation of mind and consciousness as yet elude explanation to the standard set by the physical sciences.
In this article I seek potential explanations in non-linear dynamical systems, non-equilibrium statistical mechanics, and chaos and instability, as the range of phenomena within which brains and complex organisms are embedded or which they instantiate within their dynamical parameters. Emergent, stable, periodic parameters create space for variable evolutions of cognitive systems toward metastable attractors or open variables that converge in the protracted interface of cognition and world. The interactional outcomes at this cleavage of informational and organizational complexity, I will argue, are undecidable*.
This will be pursued thematically as dynamical and explanatory affinities between the concept of strange attractors, which are fractal mathematical structures, here invoked theoretically as islands of instability within wider, stable dynamical structures of physiological regulation, and the concept of strange loops, which are hierarchy-violating formal instantiations of self-reference in sufficiently high-fidelity formal systems like first-order logic and other axiomatic systems capable of expressing basic arithmetic. The relationship between formal hierarchies, which are traditionally understood as non-causal, and hierarchies of scale, which are organizational, will be explored.
Time-symmetry or Entropy?
A core puzzle within physics today is understanding how the thermodynamic time-asymmetry arises out of time-symmetric fundamental physical laws. What is the thermodynamic asymmetry, you might ask? It is, in brief, the law of entropy: the observation that any isolated system invariably evolves toward thermodynamic equilibrium and not vice versa.
As an observation about thermodynamics, this law states that, for any isolated system, if left to spontaneous evolution, heat invariably moves from higher temperatures to lower temperatures. In other words, the system tends toward disorganization, and eventually settles into a thermal equilibrium.
The accepted interpretation of the law of entropy comes from Ludwig Boltzmann, who explained the directionality toward thermodynamic equilibrium in terms of possible statistical configurations of microparticles. Because there are more ways the system can be disorganized than organized, the system statistically tends toward disorganization, encoded by Boltzmann as a simple logarithmic law:
S = k ln Ω
Where S stands for entropy, k for the Boltzmann constant, ln for the natural logarithm, and Ω for the number of microscopic configurations.
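The counting behind this law can be made concrete with a toy model (my own illustrative sketch, not Boltzmann's example): N two-state particles, where the macrostate with a given number of "up" states is realized by binomially many microstates. Disordered macrostates near the 50/50 split command vastly more configurations, hence higher entropy.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, by SI definition)

def boltzmann_entropy(omega):
    """S = k ln(omega): entropy from the number of microstates."""
    return K_B * math.log(omega)

# Toy system: N two-state particles ("up"/"down"); a macrostate with n_up
# particles up can be realized in omega = C(N, n_up) distinct microstates.
N = 100
for n_up in (0, 25, 50):
    omega = math.comb(N, n_up)
    print(f"n_up={n_up:3d}  omega={omega:.3e}  S={boltzmann_entropy(omega):.3e} J/K")
```

The fully ordered macrostate (all down) has a single microstate and therefore zero entropy; the balanced macrostate has the most microstates and the highest entropy, which is why spontaneous evolution statistically drifts toward it.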
On the other hand, the laws of physics as we understand them do not discriminate between the past and future: they remain the same regardless of the temporal direction! In other words, they are time-reversal invariant. If you imagine the change in time t of the universe as a movie reel, and you rewind that reel, the laws of physics themselves cannot distinguish forward from backward movement in time. Whether in our eyes the reel is playing or rewinding, certain variables are conserved, such as, minimally, linear and angular momentum, energy and electric charge.
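The "rewinding the reel" point can be demonstrated numerically. The sketch below (an illustration of the general claim, using an assumed harmonic force and the time-reversible velocity-Verlet integrator) runs a trajectory forward, flips the momentum, and runs it forward again: the system retraces its own path back to the initial state, because the dynamics cannot tell the two directions apart.

```python
def verlet_step(x, v, a_func, dt):
    """One velocity-Verlet step; this integrator is exactly time-reversible."""
    a = a_func(x)
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a + a_func(x_new)) * dt
    return x_new, v_new

def accel(x):
    return -x  # harmonic force F = -kx, with m = k = 1 (toy assumption)

# Integrate forward in time...
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = verlet_step(x, v, accel, 0.01)

v = -v  # "rewind the reel": reverse all momenta

# ...and the same forward dynamics now retraces the trajectory.
for _ in range(1000):
    x, v = verlet_step(x, v, accel, 0.01)

print(round(x, 6), round(v, 6))  # back at the initial state (1.0, 0.0)
```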
Think of it this way: despite the fact that it appears to us as though time is inexorably moving forward, borne out in part by the sole quantity of the net increase in entropy, fundamental physics does not play favourites. To the ‘rules’ that govern the behaviour of physical systems, from the quantum scale to macroscopic gravity, past and future are not meaningful categories. While time is defined as a variable that measures duration in a system, whether at the quantum or macro-gravitational scale, it is blind or symmetrical with respect to direction.
Yet many natural phenomena appear to violate this posited time-symmetry and appear irreversible. This irreversibility is apparent in familiar events like budding twigs in spring and the inevitability of decay and death, or in more abstract descriptions like the flow of electric currents, unrestrained fluid expansion, and, fundamentally, heat transfer. Squaring this apparent macro-property, the directional evolution of systems toward greater entropy, with fundamental physics is an open question.
Either entropic asymmetry is a fundamental description, that is to say, not emergent from some more basic and symmetrical state, or it is born of incomplete information at the macro-scale of complexity, a practice also called ‘coarse-graining’.
Are all clouds clocks or all clocks clouds?
The philosopher of science Karl Popper, who was sympathetic to an indeterministic picture of nature, framed this puzzle through the metaphor of clocks and clouds. Clocks stand for deterministic mechanisms; clouds stand for indeterministic systems. He famously quipped that, rather than all clouds being clocks, all clocks are fundamentally clouds: instead of determinism lurking behind probabilistic events, a fundamental probability underlies all events.
There are, therefore, roughly two camps: either macroscopic complexity with its apparent irreversibility is fundamental or the time-symmetric laws of physics are fundamental, and the time-asymmetric, irreversible complexity is some form of illusion manifested within our macroscopic scale of experience and observation.
The physical chemist and Nobel laureate Ilya Prigogine maintained the former: the time-symmetric laws are idealized models, and reality foils them. The theoretical physicist Carlo Rovelli, who works on quantum gravity, maintains the latter: complexity and the arrow of time are illusory appearances in a time-symmetric universe.
Below we will follow a description, propounded by Prigogine and others, that views macro-indeterminism as fundamental rather than as an artifact of incomplete information.
Statistical Mechanics & Dynamical Systems
Dynamical systems theory is a branch of mathematics that maps the evolution of systems in time. The states of a system are specified as variables whose evolution is computed by a function that takes positions and momenta as input. Those variables trace trajectories of points in time. Points can represent particles at some specifiable scale or more abstract variables like inflation.
Now, if systems behaved linearly, their evolution could be described at the level of the individual trajectories of their components, by analytically solving the system’s differential equations. However, most systems behave non-linearly, which is to say their output is not proportional to their input, and their differential equations cannot be solved by calculating individual trajectories.
Instead, one represents the system through something called a phase space, which describes all the possible states of a system, say an ensemble of particles, all at once.
This can be done for any model aiming to compute a system whose components are represented via initial states of positions q, momenta p, and the relevant laws of physics, say Newton’s for simpler particle systems. When a collection of systems is taken together, we call that an ensemble. Each system in the ensemble can be represented in phase space, which consists of all the possible states of that system given some initial conditions.
The aim of statistical mechanics, the application of statistical theory to assemblies of microscopic entities, is to describe how such ensembles (ensemble as technically defined earlier) evolve in time as a probability distribution over the possible configurations of the system. The ensemble itself can be described by a density function ρ(q, p, t), which gives the probability of finding a system near position q and momentum p at time t.
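As an illustration of the ensemble idea (a toy sketch using the exactly solvable harmonic oscillator, not any particular system from the literature), each phase-space point (q, p) rotates about the origin under the dynamics, and the statistical description tracks the whole cloud of points at once:

```python
import math
import random

random.seed(0)

def evolve(q, p, t):
    """Exact harmonic-oscillator flow in phase space (m = k = 1):
    each point (q, p) rotates about the origin with period 2*pi."""
    return (q * math.cos(t) + p * math.sin(t),
            p * math.cos(t) - q * math.sin(t))

# An ensemble: many copies of the "same" system with scattered initial states.
ensemble = [(random.gauss(1.0, 0.1), random.gauss(0.0, 0.1))
            for _ in range(10000)]

# Track the ensemble-average position as the distribution evolves:
means = {}
for t in (0.0, math.pi / 2, math.pi):
    states = [evolve(q, p, t) for q, p in ensemble]
    means[t] = sum(q for q, _ in states) / len(states)
    print(f"t={t:.2f}  <q>={means[t]:+.3f}")
```

The individual trajectories are deterministic, yet the object of interest is the distribution: the cloud's mean swings from +1 through 0 to −1 over half a period, exactly the kind of statement ρ(q, p, t) encodes.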
A visualization of a stochastic process: the evolution of asset prices in time.
Note that, while we spoke of physical variables, they need not be: they can also be abstract ones like demand, supply, inflation in economic models for example.
What are we to take from this?
According to Ilya Prigogine, the Nobel-prize-winning physical chemist whose work focused on irreversible systems, “when we go from dynamics to thermodynamics…probabilities acquire an intrinsical(sic) dynamical meaning [because] instability destroys the equivalence between the individual and statistical levels of description” (Prigogine, 35). What, in precise terms, does Prigogine mean by this?
As he puts it, the probability distribution of a phase space amounts to a complex description with additional information that is lacking at the scale of individual trajectories (Ibid, 37). Because this dynamical description is impossible at the level of individual trajectories, an inter-theoretic reduction from statistical distributions to single trajectories is also impossible (Ibid, 37). Therefore, for Prigogine, the statistical-dynamical laws are irreducible, and thereby fundamental.
Now, let’s pause a second. I’ve interpolated the term “inter-theoretic” as an adjectival modifier of reduction to interpret what Prigogine means when he says that statistical-mechanical laws are irreducible. I take it, paradigmatically, that talk about laws concerns talk about theory, and thereby, mathematical models. Therefore, by denying that inter-theoretic reduction obtains between the two descriptive levels, what is meant is that we cannot obtain the predictive granularity of the higher-level theory from the lower-level description, i.e. interactions between single trajectories.
Denying inter-theoretic reduction, however, is less controversial than taking an ontological position: a position about the mind-independent character of the phenomena in question. Does an inference about inter-theoretic reduction also justify inferences about mind-independent properties of the systems in question? A justification could perhaps be motivated if the reason behind the failure of inter-theoretic reduction is that new, macro-scale fundamental laws are needed to describe thermodynamic phenomena. For this to be the case, we have to suppose that the fundamental laws are inadequate to account for population dynamics.
It seems that Prigogine wants to make some such leap (or inferential step) by arguing that phenomena described by statistical mechanics are inherently indeterministic. Assuming that probabilistic is tantamount to indeterministic, the notion of probability invoked by Prigogine then is a type of physical probability. Physical probability is an interpretation of the concept of probability that is not epistemological, that is to say, not concerning objective or subjective evidentiary support relations between events/data/beliefs (e.g. Bayesian probability), but rather a property of mind-independent physical-world systems. Indeed, he states “no matter how precisely matched our initial conditions are, we obtain different trajectories from them” (Ibid, 37).
Of course, since we could never run an experiment that identically replicates the state of a closed system at the micro-scale, a chaos-theoretical interpretation that appeals to variations in initial conditions is possible, if not preferable.
Nevertheless, assuming we’ve correctly represented Prigogine’s views, let’s recapitulate what’s going on here: the idea that nature at the macro-scale may be intrinsically probabilistic has been proposed.
As such, the question of the relationship between genuine chance and determinism becomes relevant. Are the two compatible? (Arguments in favour of either position have been offered.) Further, is the chance that Prigogine appears to be smuggling into statistical mechanics interchangeable with the chance at the fundamental, quantum level? And, if so, is the former grounded in the latter, or are they distinct?
Aware of this problem, some philosophers have developed accounts of probabilistic causation that are noncommittal about either determinism or indeterminism. These theories formalize a cause as an event that raises the probability of an effect, given some stable background context and given sufficient conditions that rule out spurious correlations. These theories, unfortunately, have nothing to say about what’s actually going on: they merely dislodge the common conception of cause from sufficiency and necessity relations. Instead of causes necessitating effects by virtue of being sufficient, they predict them with higher frequency (or bear some intrinsic behavioural propensity to yield certain effects).
Here’s one formulation (among many and more sophisticated ones):
(Cartwright & Eells formulation) C causes E if and only if:
P(E|C&B)>P(E|∼C&B) for every background context B.
This says that C causes E iff the probability of E given that C has occurred is greater than the probability of E given that C has not occurred, for every stable or invariant set of background variables B obtaining in both cases.
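The probability-raising condition can be checked mechanically on synthetic data. The sketch below is a made-up toy world (the variables and probabilities are my own assumptions, not from the cited literature): it estimates the conditional probabilities and confirms that C raises the probability of E within each background context B separately.

```python
import random

random.seed(42)

# Synthetic world: background factor B, candidate cause C, effect E.
# By construction, E is more likely given C within *every* context B.
def sample():
    b = random.random() < 0.5    # background context present?
    c = random.random() < 0.4    # candidate cause present?
    p_e = 0.2 + (0.4 if c else 0.0) + (0.2 if b else 0.0)
    return b, c, random.random() < p_e

data = [sample() for _ in range(200000)]

def cond_p_e(c_val, b_val):
    """Estimate P(E | C=c_val, B=b_val) from the sampled data."""
    rows = [e for b, c, e in data if c == c_val and b == b_val]
    return sum(rows) / len(rows)

# Probability-raising holds in each background context separately:
for b_val in (False, True):
    print(f"B={b_val}: P(E|C,B)={cond_p_e(True, b_val):.3f}  "
          f"P(E|~C,B)={cond_p_e(False, b_val):.3f}")
```

Conditioning on B in both terms is what screens off the spurious route: B also raises P(E) here, but the comparison is always made within a fixed context.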
The question Prigogine and we are after is: what’s nature really like? In other words, we want to motivate a statement about what’s really going on independently of us, instead of merely affirming fundamental limits with respect to our ability to represent/yield approximate models of complex phenomena.
For Prigogine and some others, instability leads to non-linearity, which is marked by irreversibility and, consequently, the non-integrability of such systems. The mainstream, meanwhile, sees statistical mechanics as deterministic, and probabilities not as intrinsic but as missing information about micro-states, i.e. coarse-graining.
It should also be noted that population dynamics within ‘living systems’ pose further problems about population variables, such as asymmetric evolutions of the system toward one basin of attraction over another past critical thresholds. This is not to suggest that these variables are inexplicable, but they prima facie defy explanation in terms of the causal properties of the composing entities (at the extra- or intra-organismal scale) alone.
The surprising result, therefore, would be that the coarse-graining of statistical mechanics is not merely a feature of our models, but captures something fundamental about dissipative, non-equilibrium thermodynamic systems.
Attractors & Computational Neuroscience
Having brought into relief the tension between entropy and time-symmetrical physical laws and the role that statistical mechanics plays in either deflating or exacerbating this tension (depending on how you interpret the theory), we can now foray into strange attractors.
An attractor is a region in the phase space of a dynamical system towards which system variables evolve. An attractor can be illustrated through limit or invariant sets, which refer to the states a dynamical system approaches given an infinite amount of time.
The simplest type of attractor is a limit cycle, a closed trajectory in two dimensions into which at least one other trajectory falls as time approaches infinity. A real-world correlate is any system that exhibits some level of periodicity, like a pendulum or one’s resting heartbeat.
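A worked example of a limit cycle (illustrative; the Van der Pol oscillator is a textbook case I am supplying, not one discussed above) shows trajectories from very different starting points converging onto the same closed orbit:

```python
def vdp_step(x, y, mu=1.0, dt=0.001):
    """One Euler step of the Van der Pol oscillator:
    x' = y,  y' = mu * (1 - x**2) * y - x."""
    return x + dt * y, y + dt * (mu * (1 - x * x) * y - x)

def peak_amplitude(x, y, steps=200000):
    """Integrate, then record the largest |x| over the second half of the run,
    i.e. after transients have died away."""
    peak = 0.0
    for i in range(steps):
        x, y = vdp_step(x, y)
        if i > steps // 2:
            peak = max(peak, abs(x))
    return peak

# A tiny perturbation of the origin and a far-away start both settle onto
# the same closed orbit, whose amplitude for mu = 1 is close to 2.
print(peak_amplitude(0.01, 0.0), peak_amplitude(4.0, 0.0))
```

That both runs report nearly the same amplitude is the signature of an attractor: the long-run behaviour is a property of the cycle, not of the initial condition.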
An attractor is best described in the context of dissipative or far-from-equilibrium thermodynamic systems. These are systems that maintain a steady state that does not devolve into thermodynamic equilibrium given certain inputs and outputs. Such systems are common in nature, from whole ecologies to stars. Anything from bacteria to power grids must maintain an asymmetric distribution of thermal energy between their internal dynamics and their environments.
If ordinary attractors exhibit periodicity within a bound state, which is to say their system variables tend to remain localized, then a strange attractor is a type of attractor that additionally exhibits instability: trajectories on it never settle into a periodic orbit.
Strange attractors are extremely sensitive to initial conditions. Any two nearby points in a strange attractor will diverge exponentially given slight variations in their trajectories, even as both remain confined to the attractor’s bounded, fractal structure.
It is all well and good to talk about attractors in isolated instances, but more productive to understand how different regions of phase space interact together. For example, we can envisage systems that exhibit local instability within globally stable parameters. This kind of interplay between stable and unstable phases has been proposed for brain activity and the dynamics of neural ensembles.
I want, in particular, to pursue this line of thought with respect to computational neuroscience, and the presence of stability and instability within brain circuitry.
Attractors imply self-sustained oscillations, something that has been observed to occur across neural circuitry. Neural oscillations refer to synchronizations of firing patterns of groups of neurons as a result of enforced feedback patterns. Evidence suggests that neural synchronizations can occur spontaneously in the absence of an environmental stimulus (Schneider, 2014). Evidence also suggests that neural synchronizations do not converge at exact frequencies within ensembles or groups thereof, but rather at approximate frequency ranges (Kelso & Tognoli, 2014). Furthermore, oscillatory activity occurs at different levels of organization, namely: the microscopic scale involving single neurons, the mesoscopic scale involving ensembles of neurons, and the macroscopic scale involving disparate brain regions (Kelso & Tognoli, 2014).
To get a wider picture of the oscillatory activity described, some hypotheses, to whose generality I subscribe, suggest that the brain consists of a system of open loops, integrated or reinforced in the organism’s historical interaction with the environment, that close into full functional cycles through ongoing interaction with that environment (Aguilera et al., 2014).
That is to say, the formation and dissolution of synchronized activity engages disparate functional loops that bring to bear a whole set of cognitive functions, ranging from sensorimotor and predictive processing to time-scaled emotional rhythms, to respond to the demands of the situation. This view is in line with characterizing brain dynamics as metastable, that is to say, generally occupying states above the least-energy state as a means of integrating and coordinating different functional parts through neural synchronizations.
Metastable dynamics may illuminate the significance of neural noise, namely random, stochastic fluctuations within single neurons or networks that are typically below the electrical threshold of action potentials. Within a metastable regime, noise may play a role in transition states, in the flexibility of networks to adapt, or in regulating excitatory activity by either linearizing or inhibiting it depending on the frequency level (linearizing at lower frequencies and inhibiting at higher ones).
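A cartoon of noise-driven transitions between metastable states (an abstract sketch of my own, not a neural model): an overdamped particle in a double-well potential stays in one well forever without noise, but hops between the two wells once noise is added.

```python
import math
import random

random.seed(1)

def simulate(noise, steps=200000, dt=0.01):
    """Overdamped Langevin dynamics in the double well V(x) = x**4/4 - x**2/2:
    dx = (x - x**3) dt + noise * sqrt(dt) * N(0, 1).
    Returns the number of hops between the wells (minima at x = -1 and +1)."""
    x, side, hops = 1.0, 1, 0
    for _ in range(steps):
        x += (x - x ** 3) * dt + noise * math.sqrt(dt) * random.gauss(0, 1)
        if x * side < -0.5:  # crossed well into the opposite basin
            side, hops = -side, hops + 1
    return hops

# Without noise the particle sits in one metastable well indefinitely;
# with moderate noise it repeatedly transitions between the two.
print(simulate(0.0), simulate(0.6))
```

Each well is a locally stable state above the system's notional global minimum; the sub-threshold fluctuations are what make transitions between such states possible at all.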
Metastable brain dynamics are part of the broader theoretical framework of coordination dynamics, which entertains both attractor and non-attractor states as the overall governing regime of the brain. Kelso et al. (2014) suggest that synchronous behaviour on average may not converge to identical frequencies, potentially ruling out attractor states as essential to brain dynamics.
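The point about approximate rather than identical frequencies can be illustrated with the Kuramoto model of coupled oscillators (a standard toy model I am supplying, not one of Kelso's): with heterogeneous natural frequencies, coupling produces partial synchrony, never perfect phase-locking.

```python
import math
import random

random.seed(7)

N, K, dt = 100, 1.5, 0.01
# Heterogeneous natural frequencies: the oscillators can never all settle on
# one identical frequency, only pull toward a shared approximate rhythm.
omega = [random.gauss(1.0, 0.3) for _ in range(N)]
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def order_parameter(phases):
    """r in [0, 1]: 0 = fully incoherent, 1 = perfectly phase-locked."""
    re = sum(math.cos(t) for t in phases) / len(phases)
    im = sum(math.sin(t) for t in phases) / len(phases)
    return math.hypot(re, im)

r_initial = order_parameter(theta)
for _ in range(3000):
    re = sum(math.cos(t) for t in theta) / N
    im = sum(math.sin(t) for t in theta) / N
    # dtheta_i/dt = omega_i + K * r * sin(psi - theta_i), in Cartesian form:
    theta = [t + dt * (w + K * (im * math.cos(t) - re * math.sin(t)))
             for t, w in zip(theta, omega)]
r_final = order_parameter(theta)

print(f"order parameter: {r_initial:.2f} -> {r_final:.2f}")
```

Coherence rises sharply under coupling, yet the order parameter stays strictly below 1: the population locks into an approximate frequency range, not an exact common frequency.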
Is the view that neural computation consists of limit-cycle like attractor convergence and transitions among states compatible with the suggested dynamics of metastability?
Further, what kind of functional role do neural oscillations play across organizational levels in the context of the broader regulatory role of the central nervous system (CNS)? Do the oscillatory rhythms start from the level of individual neurons or at the population level? How sensitive to noise are such networks? Are some more stable than others? Is noise a feature of the system that contributes to its flexibility, namely its ability to transition to different types of engagement? Are some oscillatory activities dependent on other, more baseline oscillations?
To get a handle on these questions, it pays to understand the complex role of the CNS in the broader scope of the organism.
The brain, in the wider context of a multi-cellular organism, instantiates non-equilibrium thermodynamic state variables. Such state variables must be understood in the wider context of the interdependence of the entire organism. A multi-cellular organism consists of a ceaseless orchestra of processes occurring at varying levels of organization. Put simply, these include metabolic processes that convert food to energy and to building blocks like proteins, nucleic acids, and lipids, in order to maintain an internal state within the optimal homeostatic range. What is the optimal homeostatic range? It is precisely those values of the internal variables that allow the metabolic processes to carry on: at minimum, sufficient energy availability and a temperature threshold optimal for those processes. This homeostatic range, therefore, forms the condition of possibility for the continued integrity of the organism. Homeostasis, as such, is a global parameter that the integrative and cooperative behaviour of different bodily systems achieves.
Homeostatic demands maintain an internal milieu that forms the background or scaffolding for the complex behaviours an organism must perform in order to cope with its environment. Environment here is a broad designation that encapsulates not only physical conditions but also complex social dynamics and structures that impose additional constraints and conditions of fit on the organism. To illustrate this, the concept of the ecological niche is useful. Some species fashion or engineer their environments to render them more hospitable. Some, like humans and ants, do this through mass, collective action. Because human environments exhibit the greatest degree of engineering, the environmental constraints on behavioural output are, to a very large extent, social. The social environment alone comprises a massively distributed and diversified set of hierarchies and networks to whose functional demands a single individual must modulate their behaviour to fit. This latter condition of fit usually comes in the form of a vocation that places the individual in some relation to the social whole.
I bring this up primarily to foreground the interaction between the external milieu that is, in large part, ready-made or independent, and the personal milieu (the desired outcomes that have a world-to-mind direction of fit) each individual enacts across their life trajectory. To some extent, the individual seeks a region in their environmental milieu within which their idiosyncratic preferences and abilities are permitted or favoured (although those preferences are in part shaped by environmental feedback loops). This brings about a measure of integrability of individual and environment. But to characterize this as a static process of fit would be to mischaracterize it. Part of the requirements of fit mandate that individuals also transform the region of the external milieu where they serve as consequential actors, to accord with their personal milieu. There must be sufficient flexibility, therefore, in extant social structures and organized collectives to allow for the expression of the personal milieu of the individual. The reason is that the historical, endogenous activity of each individual evolves preferentially and, in part, idiosyncratically, and if those preferences (within normal limits) are denied expression, a convivial meeting of individual and broader social milieu cannot occur. This convivial dynamic must allow for the mutual transformation of each through an exchange of norms.
I have seemingly veered off-topic to set the groundwork for the informational demands that a CNS must cope with, before we delve into the dynamics that make that possible. Evidence shows that stimulus-response models of behaviour are insufficient to characterize most organisms, and that endogenous activity initiated in the nervous system under invariant environmental conditions is recurrent and ongoing even in the simplest organisms (Schneider, 2014).
As a bona fide complex system, the brain exhibits states and behaviour not unlike those of the complex, far-from-equilibrium thermodynamic systems we discussed earlier. The central nervous system consists of massively parallel and distributed processing that manages communication, information integration, and coordinated outputs across different networks. Its simplest units of organization are electrically excitable cells called neurons, which conduct an electrochemical signal called an action potential. At rest, the inside of the neuron’s membrane is negatively charged relative to the outside; when the membrane depolarizes past a threshold, positively charged sodium ions flood in and the signal propagates down the axon. A repolarization mechanism immediately follows, as potassium ions flow out of the cell to restore the resting-state polarization.
Due to its structure and size, the coordination dynamics of the CNS must balance spatial and temporal constraints to achieve certain outputs. This is evidenced by the fact that the brain instantiates point-to-point communication between neurons at the microscale, but also instantiates massive coordination at the meso and macroscales as evidenced by the varying and simultaneous levels of frequency-locking patterns (Tognoli & Kelso 2014). In other words, the CNS deploys two chief strategies to balance these constraints: synaptic information transfer and oscillatory synchronization of neural ensembles.
In accordance with the regulatory role of the CNS with regard to all parts of the body, it must instantiate certain recurrent and parallel dynamics and networks. In awake states, two conjectured recurrent networks are the so-called default-mode and task-positive networks. They refer, respectively, to the networks that are activated when a person is not engaged in a task or goal-directed behaviour, wherein attention is less focused or focused on internal states, and the networks that are activated when a person engages in attention-demanding tasks. They are not entirely mutually exclusive, but certain regions associated with the former are suppressed during the latter and vice versa.
A question worth asking, then, is: are functions realized by the brain localized or distributed? The answer is both. While areas in the brain display a high degree of specialization, such as the broad specialization of lobes within each hemisphere (the occipital lobe for the visual cortex, etc.), they never operate alone, but in tandem with other regions at the same time. In this sense, information-processing is both localized and distributed. Broadly speaking, there are more local connections than system-wide connections, in part because global connectivity is negatively correlated with the number of connections (Gazzaniga, 56). Therefore, the greater the number of connections, the greater the likelihood that those connections are localized. (A connection refers to the likelihood that single neurons or ensembles thereof will fire together.) It may be that this wasn’t always the case: in the past the brain may have been more globally integrated than current evidence indicates (McGilchrist, 2010).

If the brain is more distributed than globally connected, how do we then account for the unity of consciousness? This is the hypothesis that the diverse contents of conscious states are combined into a single experience, known in cognitive science and neuroscience as the binding problem(s). Perhaps coordination dynamics can account for the types of recurrent networks needed to realize experiential integration or binding. From the psychological level of analysis, pre-eminent researchers into hemispheric function like Michael Gazzaniga ascribe this apparent unity of variegated stimuli into a “unified field” to the left hemisphere, which “narrativizes” experiences and interprets events causally (Gazzaniga, 2011).
This would be to miss the point being developed, though. It’s not that the brain is a static structure of circuitry that passively processes information on cue from the right stimulus. Rather, it endogenously instantiates a set of recurrent states through ongoing, periodic networks as background conditions for awake states, which range from passive and restful to engaged and focused. The attentional networks referenced, namely the default-mode and task-positive circuitry, may be scaffolded from a more basic dynamic of self-organized criticality. This hypothesis suggests that biological neural networks work close to two broad-scale phase transitions: one phase accounting for the diminution of activity, and another for its escalation and amplification.
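The flavour of self-organized criticality can be conveyed with a minimal branching-process sketch (a toy model of my own, not a biophysical one): at the critical branching ratio of 1, cascade or "avalanche" sizes become heavy-tailed, spanning orders of magnitude, while subcritical cascades die out quickly.

```python
import random

random.seed(3)

def avalanche_size(branching, cap=10 ** 6):
    """One cascade: each firing unit triggers two downstream units with
    probability branching / 2, so the mean offspring count is `branching`.
    branching < 1 is subcritical; branching = 1 is the critical point."""
    active, size = 1, 0
    while active and size < cap:
        size += active
        active = sum(2 for _ in range(active)
                     if random.random() < branching / 2)
    return size

subcritical = [avalanche_size(0.5) for _ in range(2000)]  # small, short-lived
critical = [avalanche_size(1.0) for _ in range(2000)]     # heavy-tailed
print(max(subcritical), max(critical))
```

Sitting at the boundary between the dying-out phase and the runaway phase gives the system both quiescence and the capacity for occasional large-scale amplification, which is the appeal of the hypothesis for neural dynamics.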
Evolution toward near-attractor states may then work this way: the network fires rapidly when accompanied by arousal and an intensification of awareness and information-processing, with some attractor states having the potential to exhibit strangeness or instability, depending on the sensitivity of the system to the cognitive demands of the task (creativity perhaps being the most sensitive, because it ignites disparate regions of the network). The revving of the network into highly focused states may either be unsustainable beyond a certain temporal threshold or hard-wired to phase-transition into a more relaxed state, given the context of the evolutionary environment: the bottleneck of awareness is allotted limited bandwidth, and its attentional exhaustion can leave the organism vulnerable to external threats.
In light of the above, the brain ought to be characterized in terms of a scaffolded structure of periodic networks recurring with varying degrees of stability. Something like binding, the integration of sensory inputs into a single experience, could very well be achieved through recurring and stable networks, against whose background less stable networks account for more specific functions. Activity in the world must then recruit additional networks for different demands and tasks, with varying attentional resources.
Coordination dynamics (CD) accounts for the evolution of the system toward attractor states, some of which, circumscribed in the symbol systems of human culture, evolve differentially, with varying degrees of sensitivity, in unpredictable ways. Coordination dynamics may also account for certain large-scale, background global variables that the organism realizes, like globally aware states, and thereby shed light on the neural correlates of consciousness; but CD does not necessarily bear any implications for higher-order consciousness, such as meta-awareness.
I would conjecture that neural pathways exhibit sufficient structural flexibility to host a relatively novel infrastructure like language (whose apprehension may be inscribed in native neural circuitry, à la Chomsky’s universal grammar), which then realizes meta-awareness. How linguistic symbols come to mean anything at all, known in philosophy as the symbol grounding problem, remains unresolved; linguistic meaning may nonetheless bear some degree of isomorphism with biological neural networks. Connectionist approaches like parallel distributed processing (PDP) best account for the associationist way concepts are stored in the mind: not as files in a registry, but as networks connected through degrees of strength based on “meaning” and behavioural reinforcement. (I put meaning in quotation marks because it is a decomposable concept, nebulous at first glance; it further involves poorly understood integrations of iconic and propositional contents.)
On the other hand, natural language, a quasi-formal system in virtue of its adherence to syntax, appears independent of its neural substrate. That is to say, systems of memory may know nothing of grammar or syntax, which bear a closer isomorphism, a structure-preserving mapping, to the outer world than does the connectionist system in which they are hosted. The rules of grammar may then be constraints that emerged within a social matrix of interaction whose efficacy was constrained by the ontology of the experienced-perceived outer world: stable objects changing in time mapping onto the object-predicate structure of sentences.
Meta-awareness, then, may be the growth of internal linguistic representations within a single mind following the communal development of syntax.
Consider a Henkin sentence:
“This sentence is provable” or “This sentence is true”.
The mere assertion of the sentence makes it true. Provided that we can formalize every symbol in the sentence, such that the indexical “this” refers to the symbol set comprising the sentence within which it occurs, the truth-value of the sentence is determined neither truth-functionally (purely logically) nor by reference to a state of affairs in the world, but by its own self-referentiality.
Of course, a proper Henkin sentence would have to be asserted within some formal logical notation. And even then, it would not explicitly perform its own provability; it would merely assert it. A longer statement would be required to prove its own theoremhood.
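To make the self-reference precise (a formalization I am supplying, not in the original text): by the diagonal lemma, any theory S interpreting basic arithmetic yields a sentence H that asserts its own provability, and Löb’s theorem settles Henkin’s question affirmatively.

```latex
% Diagonal lemma: there is a sentence H such that
S \vdash H \leftrightarrow \mathrm{Prov}_S(\ulcorner H \urcorner)
% Löb's theorem: for any sentence \varphi,
% if  S \vdash \mathrm{Prov}_S(\ulcorner \varphi \urcorner) \rightarrow \varphi,
% then  S \vdash \varphi.
% Taking \varphi = H, the hypothesis holds by construction, so S \vdash H:
% the Henkin sentence is not merely true but provable.
```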
I bring up the Henkin sentence because it is the obverse of a Gödel sentence:
G is not formally provable within a system S.
Of course, this is a simplified restatement of G, which says of itself that it is not provable within a formal system with sufficient representational power to state this very sentence. So if a formal system S can state G, then S is incomplete: it cannot prove all the truths statable in its language from its axioms and rules of inference.
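In the standard arithmetized form (my notation, following the usual presentation), G is a fixed point of the negated provability predicate:

```latex
S \vdash G \leftrightarrow \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner)
% If S is consistent, then S \nvdash G; if S is \omega-consistent
% (or, via Rosser's refinement, merely consistent), then S \nvdash \neg G.
% G is thus undecidable in S, yet true in the standard model of arithmetic.
```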
Both of these assertions are versions of the Epimenides paradox, also known as the liar paradox.
Himself a Cretan, Epimenides stated: “All Cretans are liars.”
A simpler restatement in closer Gödelian vein would be: “This sentence is a lie.”
Now, let’s shift gears a little bit.
We are familiar with the concept of a hierarchy. A hierarchy implies a division of levels between higher and lower. In social affairs, hierarchies are a matter of convention, coercion, or some other type of social arrangement. The court system is an example of a formal social hierarchy; another is the corporate chain of command. But these hierarchies are intrinsically pliable. The conventions can change, coercive regimes can be overthrown, and social arrangements can be reconfigured according to circumstance.
Some hierarchies, meanwhile, are purely formal. Consider the following statement:
ℕ ⊂ ℤ ⊂ ℚ ⊂ ℝ
The symbol ⊂ stands for “is a subset of”; ℕ stands for the natural numbers, ℤ for the integers, ℚ for the rationals, and ℝ for the reals. The hierarchy says that each successive set contains all the members of the preceding one. Unlike the hierarchies mentioned earlier, this type of hierarchy is inviolable, or necessary. The notion of necessity used here is logical necessity, which means that its negation implies a contradiction.
Or so it would seem.
Now consider hierarchies in computer programs. One way we encounter hierarchies in computer programs is through recursion, in which a function is applied within its own definition. Recursive hierarchies are also purely formal: they are built from logical, mathematical, and conventional rules, all of which ultimately translate into machine-code instructions of zeros and ones.
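For instance (a minimal sketch in Python, my own example), the canonical recursive definition:

```python
def factorial(n: int) -> int:
    """Recursion: the function is applied within its own definition."""
    if n == 0:
        return 1                      # base case: the lowest level of the hierarchy
    return n * factorial(n - 1)       # the function invokes itself one level down

factorial(5)  # → 120
```

Each self-invocation descends one level of a strictly ordered hierarchy of calls, and the base case guarantees the descent terminates.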
Another way of conceptualizing computational hierarchy is through levels of description. The difference between machine code, assembly language, and higher-level languages is a matter of representing procedures. Higher-level languages coarse-grain the way the computer represents and executes a procedure into a form closer to how a human represents it. Higher-level languages, in this regard, bear a greater resemblance to natural languages than machine code does; machine code is executed, ultimately, as electrical signals moving through switches.
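The gap between levels of description can be made concrete in Python (my own illustration): the standard-library `dis` module exposes the bytecode instruction stream that sits one level beneath a one-line function.

```python
import dis

def add_one(x):
    return x + 1

# Two descriptions of the same procedure: the source line above, and the
# lower-level instruction stream the interpreter actually executes.
# (The exact instruction names vary by Python version.)
opnames = [instr.opname for instr in dis.get_instructions(add_one)]
print(opnames)
```

The point is not the specific opcodes but the relation: the high-level line and the instruction list denote the same procedure at different grains of description.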
Levels of abstraction in software bear parallels to levels of organization in natural systems like the human mind. Whether the metaphor amounts to a 1–1 mapping is debatable. By a 1–1 mapping I mean that the relationships between levels of abstraction exhibited by computer languages mirror exactly the relationships between levels of scale or organization in natural systems.
Does the analogy between computational abstraction and the human mind break down when the former is invoked as an explanation of the latter?
According to Douglas Hofstadter, the breakdown can be captured by his notion of a strange loop. The neural substrate, as an analogue of computer hardware, realizes on his view a kind of program that can be termed a strange loop or tangled hierarchy. The hallmark of this program, tantamount to the human mind and its various contents, is that it can self-reflexively represent itself. The program bears structural resemblance to a computer program and the attendant hierarchies we discussed earlier, but ultimately the hierarchy is pliable, and its levels violable. A heterarchy is precisely such a structure, namely one where the differentiation among levels is horizontal, so that their rank order is not rigid or inviolable. The heterarchy of consciousness is, furthermore, a strange loop because conscious states, coupled with an infinitely recursive system like language, and given human interaction with a social world, eventually realize representations of themselves. Purportedly, Gödelian undecidability enters the picture metaphorically as the level-breaking critical threshold: representational states achieve a global, level-violating threshold when they self-represent. It isn’t clear how exactly this could happen, and problems and possibilities are entertained below.
The authorship triangle, an example used by Hofstadter himself, may best illustrate the idea of a strange loop. Imagine three authors, A, B & C. A is a character in a novel by B, B is a character in a novel by C, and C is a character in a novel by A. Which one is real? Where is the source code? Well, they all mutually generate each other, but this would be impossible without some external layer or real text where they all exist as fictional characters. According to Hofstadter, the human mind is akin to the authorship triangle because awareness loops back on itself in such a way that no level or content takes precedence over another or escapes reflection. However, this can only be so in virtue of a neural substrate whose structure and some as yet unknown characteristics realize the level-violating mental loop. The reflective loop cannot reach and see its physical substrate from the inside, so to speak. We can do it from the outside through empirical methods, but from the inside the mind is limited to its own contents, which veil the microstructure of the vast cast of neural networks at play.
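The triangle can be sketched in a few lines (my own loose rendering, not Hofstadter’s): the cycle of authorship closes on itself, but only inside an enclosing structure that plays the role of the “real text”.

```python
# The enclosing dictionary is the "inviolate" outer layer within which the
# mutually generating authors exist at all; remove it and there is no cycle.
world = {"A": {"author_of": "B"},
         "B": {"author_of": "C"},
         "C": {"author_of": "A"}}

def follow_authorship(start: str, steps: int) -> str:
    """Follow the who-writes-whom chain around the triangle."""
    current = start
    for _ in range(steps):
        current = world[current]["author_of"]
    return current

follow_authorship("A", 3)  # the loop closes: A writes B, B writes C, C writes A
```

No entry in the cycle is more “real” than another, yet all of them depend on the container that hosts the references, which is the structural point of the analogy.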
You can imagine the relationship of the strange loop or tangled hierarchy to its “inviolate” physical level as pictured by the diagram below:
[Figure: Hofstadter’s characterization of the relationship between strange loops and their neural substrate.]
While Hofstadter aims to capture that elusive property of the mind which generates the appearance of independence from lower levels and of infinite mutability, and gestures at doing so by analogy to the codification of self-reference in Gödel’s incompleteness theorem, he ultimately fails to bring that analogy to fruition as a penetrating insight into the relationship between mind and body, neural substrate and symbol substrate, to use his terminology.
At best, Hofstadter characterizes the symbol substrate as a special kind of program marked by Gödelian undecidability, in virtue of which it realizes some global representation of “itself” that he terms the self-symbol. Logical undecidability concerns formal systems of sufficient representational power: no effective procedure exists that can determine, for an arbitrary statement, whether it is a theorem of the system. Gödelian undecidability is a special case whereby the sentence G, which asserts its own unprovability (within a formal system able to express at least basic arithmetic), cannot be proven within that system. The remarkable thing is that it cannot be proven precisely in virtue of what it asserts. Because G asserts its own unprovability and is indeed unprovable, it is true; but its truth-value cannot be established within the formal system in which it is asserted. Rather, it must be established by some meta-analysis outside the system.
Hofstadter analogizes this breakdown of formal completeness, the inability of the system to prove all its truths and the need for recourse to external systems to establish the truth of G, to the reflective capacities of consciousness. In other words, something similar must be going on at the computational scale of consciousness, whereby formal relations and hierarchies break down on account of a self-reflexive loop that places the lower levels on the same plane as the higher levels and vice versa. This reflexive loop realizes a global perspective by continually transcending its own limitations. With the caveat, of course, that all this “level-crossing” is confined to the symbol substrate and is unable to touch the neural one.
The problem with Hofstadter’s analogy is that the truth of Gödelian incompleteness cannot be established independently of some external meta-analysis. That external meta-analysis requires a human subject able to interpret Gödel’s Incompleteness Theorems in the first place. As such, it must invoke the apprehensive and reflective capacities of the human subject to make that interpretation. The question then arises: what kind of baseline mind or intelligence is required to understand or interpret the Incompleteness Theorems? It appears that Hofstadter is trying to explain the reflective capabilities of the human mind by recourse to one of its products. The Incompleteness Theorems are understood in virtue of the apprehensive capabilities of the mind, but they do not explain in virtue of what property the mind is capable of those apprehensive abilities (understanding, meaning, self-awareness, etc.). Note that I’m not saying that the Incompleteness Theorems depend on the human mind for their theoremhood; rather, they require some symbol-grounding subject (a source from which meaningless symbols acquire meaning) with the kind of global-reflective capacity the human mind possesses to apprehend their meaning. It is this property that Hofstadter endeavors to explain but, in my view, does not.
Another problem with Hofstadter’s hypothesis is the assumption that what he terms the “symbol substrate” can serve as a proxy for the mind. It may be the case that properties like “self-awareness” depend on, or even become possible in virtue of, symbolic scaffolding, which I will equate, for simplicity’s sake, with propositional contents. But there is also good reason to contest that propositional contents or intentional states exhaust or even ground the mental. Other candidates include phenomenal consciousness and affect, which could very well be more primordial. His views, at least as espoused in Gödel, Escher, Bach, seem myopically informed by a computational theory of mind: the mind is a kind of symbolic system that instantiates second-order representations (representations that represent themselves). It’s not clear that mental states about mental states will give you awareness, let alone self-awareness, for the latter presupposes a self. Perhaps Hofstadter means something akin to awareness about awareness, i.e. aware states about aware states (by aware states I mean conscious states), but these cannot give you self-awareness without there being some unified structure to reflect on itself in the first place. It seems dubious, then, that individual second-order aware states could instantiate awareness of a “self”; the self part needn’t come into the picture at all. All these views are contentious, and even though Hofstadter does not delve into this level of detail, it is worth exploring these possibilities as interpretations of his views. He says, for example:
“The self comes into being the moment it has the power to reflect itself” (Hofstadter 709).
This seems dubious given the arguments presented earlier. The statement seems blatantly contradictory, and not in a Gödelian sense: it already presupposes a pre-reflective self doing the reflecting. The plausibility of the assertion increases if we assume a pre-reflective self, for which I will argue further down.
Yet he concedes that strange loops abound in music, art, government, and formal systems. Perhaps Hofstadter holds that these are all parasitic on the mind’s original strange loop. Otherwise, if they are all on a par, it seems dubious that the loosely connected instances of self-reference he classes as strange loops, despite forming a special type of self-reference, could shed explanatory light on the nature of the mind, let alone on the relationship between the physical and the mental, if that is a viable conceptual schema at all.
Finally, Hofstadter’s views are clouded by a strict separation of the physical substrate and the mental substrate, predicated on his presumed equation of thought with computation. In his estimation, the mental is akin to a computer program that does some strange, loopy, self-reflexive things that realize the global self-symbol; yet the entire strange-loopy structure of this symbolic system is but a coarse-grained, higher-level representation of something realized by an inviolate, physical lower level. In other words, he reduces the relationship of mind to brain to the relationship of higher-level languages to their machine code. The difference is that the mind is a truly special piece of software: in virtue of its global self-representation, it can rewrite its own code by reaching back down to the lower levels, since these are inscribed in the same symbolic system (a higher-level language aware of itself), namely mental contents. But it cannot reach down to the true, lowest level, inscribed in the neural substrate.
While I think that Hofstadter’s views of the mind are inadequate and probably wrong, aspects of his notion of a strange loop gesture in the right direction. The notion, though it has yet to be made conceptually robust and technically rigorous, may be adapted to enactivist theories of cognition. Enactivists do not confine the global event or variable to some computational domain realized by the brain; they apply it to ongoing loops occurring at the level of the organism as a whole in its interaction with the environment. The symbol substrate, which Hofstadter identifies as the domain of validity for strange loops, is integrally embedded in a more primordial “coupling” of organism and environment, which evolves differentially in time. Reflective selfhood, realized or exacerbated in virtue of a symbol structure like language, sits atop a more basic, affectual selfhood that is pre-reflective. This pre-reflective awareness of self is hypothesized to be realized by a combination of bodily loops, involving continual interaction between the nervous and limbic systems, that globally integrate proprioceptive, interoceptive, and other forms of awareness within a global workspace of awareness equivalent to working memory.
The level-scrambling of strange loops, therefore, is parasitic on this more basic information integration, which forms the idiosyncratic milieu of the organism. That milieu is marked by motion toward attractor states that serve as proxies for individual salience or meaning-creation. Endogenous movement toward salient attractors, filtered through a stratified, complex social milieu and ecological niche, forms the basis for the kind of strange loop that Hofstadter propounds. Traditional, level-obeying feedback loops acting in concert realize the basis for the reflexive game of reason, enabled by the intellectual infrastructure of language and an external milieu that sanctions it with various practices and rules.
The mind, therefore, is not some abstract substrate reducible to a more basic physical substrate, but rather the ongoing maintenance of global variables that situate the organism toward a path of viability. The combinatorial infrastructure of language forms a sub-system of representation that catalyzes the integrated proprioceptive and interoceptive loops realized by the lower regions of the brain into complex and more formalized representations of self and environment (realized by the higher regions) in the global workspace of awareness. These global variables are identical to ongoing neural patterns wired to generate endogenous movement, but these neural patterns are indeed shaped by the internal experience and representations of the subject. Namely, the endogenous movement of the organism is its experience propelling it toward some futural horizon, and its neural patterns complexify in light of the organism’s internal view of itself and the world.
Neither separating the substrates nor eliminating one in favour of the other works: mental contents are a kind of view from the inside that presents an opening for the organism’s differential evolution. The category mistake arises because a subset of experience, closer to thought or propositional contents, is uniquely engaged in ‘formalization’, and so appears to float independently of affective and imagistic mental contents. Without denying degrees of independence, these systems are very much mutually dependent and evolve together against a backdrop of more basic global variables, e.g. minimal global awareness, the container of mental contents.
*The technical notion of undecidability, the idea that there exist decision problems for which no algorithm can determine a yes or no output, is applied here to dynamical systems broadly, and more narrowly to cognitive systems, a subset thereof, that engage in cooperative and competitive games. The cogency of this application is not sufficiently explored or determined; it rests on a background assumption of indeterminacy whereby the informational uncertainty of a cognitive agent, given some threshold of informational complexity, is both multiply encodable (underdetermined) and inexhaustible. The implication is that at least a subset of future contingents is neither decidable nor determined in the strong sense (though adequate determinism is assumed).
Balaguer-Ballester, Emili, et al. Metastable Dynamics of Neural Ensembles. Frontiers Media SA, 2018.
Callender, Craig. “Thermodynamic Asymmetry in Time.” The Stanford Encyclopedia of Philosophy, 2016, plato.stanford.edu/entries/time-thermo/.
Gazzaniga, Michael S. Who’s in Charge? Free Will and the Science of the Brain. London: Robinson, 2016.
Hájek, Alan. “Interpretations of Probability.” The Stanford Encyclopedia of Philosophy, 2011, plato.stanford.edu/entries/probability-interpret/.
Hitchcock, Christopher. “Probabilistic Causation.” The Stanford Encyclopedia of Philosophy, 2018, plato.stanford.edu/entries/causation-probabilistic/.
Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. New York: Basic Books, 1999.
Hofstadter, Douglas R. I Am a Strange Loop. New York: Basic Books, 2008.
McGilchrist, Iain. The Master and His Emissary: The Divided Brain and the Making of the Western World. Yale University Press, 2019.
Prigogine, Ilya, and Isabelle Stengers. The End of Certainty: Time, Chaos, and the New Laws of Nature. Free Press, 1997.
Rovelli, Carlo, et al. The Order of Time. Allen Lane, 2018.
Schneider, Gerald E. Brain Structure and Its Origins: In Development and in Evolution of Behavior and the Mind. Cambridge, MA: MIT Press, 2014.
Sklar, Lawrence. “Philosophy of Statistical Mechanics.” The Stanford Encyclopedia of Philosophy, 2015, plato.stanford.edu/entries/statphys-statmech/.
Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Belknap Press of Harvard University Press, 2010.