The Self, its Brain, and a Solution to the Body-Mind Problem 

By Dr. Peter C. Lugten

Selected as a poster presentation for The Science of Consciousness conference, Taormina, May 2023

Abstract: 

In “The Self and Its Brain”, Popper and Eccles explored, as dualists, the problem of how an immaterial consciousness can interact with and control a material brain. Rejecting as inadequate both monist theories and other forms of dualism, they insisted that consciousness must causally affect the brain, driving its activity, noting that if consciousness made no difference to the brain’s functionality, it wouldn’t have evolved. Eccles concluded the Self-Conscious Mind is an independent entity, without a neural correlate, that actively reads impressions from brain modules at “liaison areas” in the language-capable hemisphere, and controls activity by acting back through the same areas.  

While acknowledging that the Body-Mind problem remained unsolved, Popper made four predictions:

  1. If we could understand how objective knowledge is formed as an extension of our subjective mind, it might explain the interactions between the mind and the brain.
  2. Physics may need to be open to a new discovery affecting the first law of thermodynamics.
  3. If physical determinism is true, everything we think we are doing is an illusion.
  4. The ontology of consciousness appears as if it may be an “eternal mystery”.

In this paper, I propose a solution to the Body-Mind problem that satisfies Popper’s predictions, demonstrating that the mechanisms of emergent phenomena cannot be known, in principle. The reason is found in the seeming contradiction in the behaviour of information with respect to the first two laws of thermodynamics. Physicists claim that information, considered as the microstate of the particles within an isolated system’s macrostate, can, like First Law energy, be neither created nor destroyed, yet the information in the system, like Second Law entropy, will inevitably increase. To explain how information can increase without being created, it is supposed that a superintelligence, knowing the complete microstate of a system before the entropy increases, would be able to predict where each particle would go afterwards. While this can explain interactions within the microstate, it does not work for events emerging into the macrostate, such as consciousness. These events are, by definition of the word “emergent”, features of a system which cannot be predicted even from a complete understanding of its underlying level of composition. I believe that these events must be considered as irreversible computations, to which Landauer’s principle applies. Irreversible computations are cycles in which bits of information, temporarily stored, are then destroyed. The destruction represents work, results in heat loss, and increases entropy. Building on this, I propose that the increase in entropy in a time-irreversible, unpredictable (emergent) system requires the simultaneous deletion of information concerning the steps, or computations, involved. Thus, the steps being sought in the quest to understand consciousness are destroyed as a result of entropy, and will always remain a mystery. These ideas imply that, since the origin of the Universe, it has been entropy, and not some panpsychic principle, that lies behind the eventual emergence of consciousness; that our being conscious proves that we are not predetermined; and that entropy will never favour the emergence of conscious machines.

Paper:

At least since the philosopher René Descartes first mused “Cogito, ergo sum”, human beings have puzzled over the dilemma of how it is that a tissue inside the vaults of our craniums can create the wonders that we experience as consciousness. Descartes proposed what is known as a “dualist” solution: that there is a separation between the “res extensa” of our bodies and the “res cogitans” of our thoughts. The problem here is that although the two must interact, they are not connected by any comprehensible mechanism. Other philosophers, styled “monists”, have proposed that there is no separation: that our thoughts spring directly from the biochemistry of our neurons, and that the distinction between body and mind that we experience is illusory.

In “The Self and Its Brain”(1), as dualists, Sir Karl Popper partnered with neuroscientist Sir John Eccles to explore the problem that faced Descartes: how an immaterial consciousness can interact with and control a material brain. They rejected monist approaches such as radical materialism/behaviourism, panpsychism and identity theory, as well as the dualisms of epiphenomenalism and parallelism. These theories deny that consciousness can causally affect the material brain, driving its activity. For example, they cannot account for the mental effort that, at times, is needed for the conscious mind to direct the material brain to retrieve a specific memory, and then to evaluate the accuracy of the brain’s performance. Popper held that mental processes evolved under the pressure of natural selection, based on the need for purposeful behaviours to ensure survival and reproductive success. (Eccles noted that if consciousness hadn’t made a difference, it wouldn’t have evolved.) Individuals must subjectively analyse in their conscious minds (what Popper called World 2) the objective world around them (Popper’s World 1), not only to form a plan of action, but also to create structures in World 1 that embody aspects of conscious intelligence. These structures Popper called World 3, or objective knowledge. World 3 can interact with the conscious World 2 of the same individual in the future, or with the World 2 of other individuals, greatly facilitating the efficiency of purposeful planning. Crucial to the evolution of World 3 was the acquisition of language by our early ancestors. Popper considered a central conception of the book to be that studying World 3 could illuminate the mind-body problem: he predicted that if we could understand how objective knowledge is formed as an extension of World 2, creating a ground for the realisation of our personalities, it might explain the interactions between the World 2 mind and the World 1 brain.

Sir John Eccles, in describing our experimental understanding of the functional architecture of the brain, explained the division of the cerebral cortex into two complementary hemispheres. To oversimplify, a dominant hemisphere, almost always the left side (regardless of handedness), is associated with Wernicke’s speech area, while the non-dominant right side is associated with visual and spatial planning and music appreciation. All conscious voluntary motion is initiated by the awake dominant hemisphere. It incorporates certain time delays to ensure a unitary experience of external and internal sensory inputs, even to the extent of seeming to slow or speed up the sensation of time experienced in certain circumstances. Eccles postulated that the Self-Conscious Mind (SCM) is an independent entity that is responsible, without an underlying neural correlate, for the unitary character of our experiences. It actively engages in reading out impressions from a multitude of active modules in the dominant cortex, sites that he called “liaison areas”. As to where the SCM is located, said Eccles, the question “is unanswerable in principle”. (p. 376) It materialises in its interactions with the liaison areas in search of particular modules, existing in time but not in space. While being in the present, it can retrieve important past memories (often of World 3) and likewise imagine future goals.

In the Dialogues between Popper and Eccles, Popper made three further predictions that anticipated the solution to the Body-Mind problem that I will present in this paper. 

Physics may need to be open to a new discovery, affecting the First Law (of Thermodynamics), through which consciousness comes about. (p. 541)

“If physical determinism is true, then that is the end of all discussion or argument… Everything we think we are doing is an illusion, and that is that”. (p. 546)

On how the brain, in its evolutionary development, came to be associated with the conscious mind: “there are certain things which at least look now as if they are eternal mysteries”. (p. 563)

Consciousness is an emergent property of a neurological analytic facility that, in order to survive, must blend all our experienced perceptions with the signals of internal homeostasis. It must match these with expectations, apprehend the nature of problems, rank them for urgency, and solve them based on an analysis that has to include some sort of representation of the self in the world. In order to do this, the brain creates a metaphorical surround-sound movie screen that the philosopher Daniel Dennett called the “Cartesian theatre”. This must transpire without the benefit of a “little man”, or homunculus, to view the scene from within, since the homunculus, in turn, would need a homunculus of its own, and so on in an infinite regress. As Professor Antonio Damasio put it: “The sense of the self in the act of knowing emerges within the movie. Self awareness is actually part of the movie, thus it creates the ‘seen’ and the ‘seer’, the ‘thought’ and the ‘thinker’, with no separate spectator for the movie-in-the-brain”.(2) Our conscious experience is the homunculus. It is rapidly supplied with pertinent information in a format that is readily understood. So, in effect, we are left with the question not “Why do we have consciousness?” but “What possible alternative way of experiencing the world could there be?”

This opens a second question: why did the need to experience the world consciously arise – why did it evolve? We could imagine a world in which we had no more consciousness than a machine. In such a world, we could reason that it was time to seek food or shelter, and that we had to earn money in order to pay for these, but we could not reason or imagine, in the absence of the sensations of pleasure, love, reward or enjoyment, that it is time to seek romance, or to perform an act that will lead to the birth of children, or to take care of those children afterwards. Consciousness, therefore, is not only a result of Darwinian selection for qualities that reward self-preservation. It is also the result of sexual selection for the sights, smells and sounds of beauty, and all the rest of the emotional package that leads to reproduction. In this we see, as Popper predicted, that the World 3 environment of mating rituals, and the efforts required to make ourselves (and then our children) fit, educated and attractive, is the primary driver of much conscious programming.

Consciousness is, indeed, a very neat trick, but how does the brain pull it off?

David Chalmers, also a dualist, posited a psychological law of the Universe – panpsychism – to bridge the “explanatory gap” in our understanding of consciousness. Presumably, for a few billion years after the Big Bang, until solar systems formed that were capable of housing intelligent life, this psychological law tagged along with gravity and electromagnetism without doing anything. Christof Koch, in “The Feeling of Life Itself”(3), quoted Erwin Schrödinger asking whether, until the evolution of big brains, the Universe remained “a play before empty benches, not existing for anybody, thus quite properly not existing”. This is like asking whether the enormous space between solar systems, an environment with no ecosystems, not existing for anybody, thus does not exist. I will argue that it is all Entropy’s Playground, a potential field for the emergence of consciousness. I do not feel that the mystical principle of panpsychism is necessary. Unless one can demonstrate a clear reason why entropy should favour the participation of photons in a universal consciousness, I think it highly unlikely that they so participate. Rather, I believe that the law of entropy, plus the uncertainty inherent in emergence, can account for consciousness. Is entropy conscious? If you want, you can believe so; since it makes no difference, we will never know.

Living organisms are islands of reduced entropy within their environments. Consciousness further reduces entropy in the world, as can be seen by an examination of the structures conscious minds have built. The entropy of the consciousness emerging as output from our brain’s activity is reduced below that of the totality of the brain’s unprocessed sensory inputs, which is what makes the brain interesting. However, the processes themselves, the very uncertainty associated with them, and the seeming impossibility of our ever being able to understand them, guarantee that entropy, which is a measurement of this kind of change, will favour the evolution of consciousness.

According to the physicist Professor John A. Wheeler, information is fundamental to the physics of the Universe. He thought that if physical laws could be recast in informational terms, they might become congruent with Chalmers’s psychological law, giving rise to a grand theory of information. While this idea is purely speculative, there is a strong connection between information and entropy, and an exploration of it is needed to understand consciousness.

The amount of information in a message is, in most contexts, proportional to its length in characters or digits. Likewise, entropy, per the equation of Ludwig Boltzmann (S = k log W), is proportional to the number of digits of W, the number of possible microscopic configurations of a system – the combinations of activity that we are ignorant of. The more certain an event is, the less surprising it will be and the less information it will contain; therefore, a gain in information (by which I do not mean known information, which has zero entropy) is an increase in uncertainty, or entropy. An increasing entropy implies an increasing uncertainty, or number of possible outcomes, associated with an increased number of microstates within a macrostate. (Microstates are subunits of a system, or macrostate, which can be arranged differently within it.) If we are about to toss a coin, or roll a die, there is not yet any information about the outcome, and zero entropy. Having tossed the coin, but not observed the result, the surprise upon learning the result will be 50%, increasing the entropy of the information in proportion to ½. After rolling the die and obtaining the number 1, there is a greater increase in surprise, and in entropy, in proportion to ⅚, because the result was not a 2, 3, 4, 5 or a 6. An increase in the number of possible outcomes in “information space” is equivalent to an increase in disorder in the world: more facts to distinguish between. The information space can refer to the possible arrangements of sand grains on a beach, or of atoms in a jar – for practical purposes, impossible to apprehend. Entropy is the amount of “missing information” needed to determine which specific microstate your system (or information space) is in. While the thermodynamic entropy of a physical system is measured in physical units (Joules of energy divided by the absolute temperature), the informational entropy is measured in abstract mathematical units – bits.
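
As a concrete illustration of the coin and die comparison, the following minimal sketch uses the standard Shannon measure of surprise (the negative base-2 logarithm of an outcome’s probability), which is the informational counterpart of Boltzmann’s logarithm of W. It is offered only to show numerically that a die roll carries more surprise than a coin toss; it is not part of the argument above.

    import math

    def surprise_bits(p):
        # Shannon surprise (self-information) of an outcome with probability p, in bits.
        return -math.log2(p)

    coin_surprise = surprise_bits(1 / 2)   # a fair coin: 1.0 bit
    die_surprise = surprise_bits(1 / 6)    # a fair die: about 2.585 bits

    print(f"coin toss result: {coin_surprise:.3f} bits of surprise")
    print(f"die roll result:  {die_surprise:.3f} bits of surprise")
    # The die result is more surprising because five alternatives are excluded
    # rather than one, mirroring the comparison made in the text.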

In fact, once we learn the result of the coin toss, or the roll of the die, the information, along with its uncertainty and its entropy, drops to zero. Information here is defined as being the opposite of knowledge. The loss of entropy associated with gaining knowledge is made up for by the increase in entropy associated with our brain functions, especially that associated with gaining consciousness.

The term “information” has meanings at different levels which can be confused, especially with regard to the (imperfect) analogy between information and entropy. For instance, the First Law of Thermodynamics states that in an isolated system, energy can be neither created nor destroyed; it is believed that the same applies to information, with two caveats. What was known as the “Black Hole paradox” suggested that information may be destroyed by a black hole that subsequently radiates away; this paradox seems to have been resolved at the time of writing, and does not concern this discussion. Confusion is added by the Second Law of Thermodynamics, which states that the amount of entropy in an isolated system cannot decrease – it tends inevitably to increase until the system achieves equilibrium. This is a property which seems to “emerge” from quantum physics, in which all interactions are perfectly reversible. According to the Transactional Interpretation of Quantum Mechanics, this occurs with the loss of any “phase coherence” in quantum states, with the resultant “throwing out of information” to create “blurring” at the classical level.(4) In particular, it emerges when we are dealing with large numbers of particles in a statistical fashion. Also, while information is conserved, entropy is not. Therefore, if the amount of information were proportional to the entropy, the amount of information, too, would have to increase until the Universe reaches the equilibrium of heat death far off in the future. Professor Sean Carroll, in “The Big Picture”(5), explained that the information which is conserved is the microstate, made up of the positions and momenta of particles, not the information in the system’s macrostate, of which we might or might not have knowledge. The embedded, classical or macroscopic information, of which we can seek knowledge, is not conserved, and can be copied or deleted perfectly. Therefore a book, which is full of classical-level information, can be incinerated in a fire, and thus destroyed, which will increase the entropy of the macrostate, as well as (one might think) that of the scattered atoms in the microstate. However, the radiation and the atoms in the smoke and the ash could theoretically be traced back to their original positions in the book; information is thus conserved at the level of the microstate, even though we know that such a reversal, which would undo the increase in entropy, will never happen. Likewise, the question of what happens to all the information in our brains, if not our minds, after we die is exactly analogous to what happens to the information in the book when it is consumed by the fire. Someone who knew all the trajectories of all the particles after the classical information is destroyed could reconstruct all the information laid out in the neural pathways.

This explains how information escapes destruction, but not how it fails to be created as entropy is increased. This must mean that someone who knew the trajectories of all the relevant particles before the book was burnt would know, from its precise microscopic status, the information that was about to be increased as the printed paper transformed into smoke, radiation and ash. Since they would be able to predict what would happen next, the information of this system is thus increased without being “created”. 

The First and Second Laws can be reconciled by asserting that information in a system can be increased without being created. However, this requires that the future course of the atoms and particles be determined by knowledge of the system at present.

A different theory suggests that information is related to but not equivalent to entropy. What is conserved is some combination of the two, with one increasing as the other decreases. A deterministic knowledge of the future is no longer required. Given the quantum necessity of chaotic indeterminism in the Universe, for instance, as described by Ilya Prigogine in his “The End of Certainty”(6), I believe this interpretation will bear more fruit. 

Observing the result of any increase in classical information, and hence entropy, and committing that information, as knowledge, to memory, has the effect of decreasing one’s own personal entropy. It creates knowledge that can be used to do work. When we communicate a piece of information, information is converted to knowledge, and the overall entropy and uncertainty decrease at the level of people’s brains. The acquisition of information decreases entropy, and its loss increases it, and this can be described perfectly well by Boltzmann’s equation. However, the acquisition, subconscious and conscious processing, communication and memorization of information is the result of energy-intensive neuronal activity, which increases entropy. Even calculating what to do increases entropy.

Because of this, Yunus Cengel has suggested that “the notion of conservation of information should be limited to the physical universe governed by the laws and forces of physics, and it should be referred to as physical information … to clearly distinguish it from other forms of information or knowledge”.(7) Here we have to acknowledge that there are limits to the reach of the theory of the Conservation of Information, such as when it encounters an unpredictable, irreversible transaction like the leap into consciousness. Nonetheless, I will proceed to consider what might happen to the conservation of information when it makes that leap.

Computation requires a temporary storage of information, upon which the mind (or calculator) acts in order to perform the calculation. This stored information cannot be kept indefinitely, and its erasure, in order to proceed to the next calculation, increases entropy. In 1961, Rolf Landauer realised that any logically irreversible computation, i.e., one that erases a bit of information, results in a minimal but non-zero amount of work being done, dumping heat into the environment and hence increasing entropy.(8)
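
For orientation, Landauer’s bound can be stated numerically. The sketch below uses only the standard physical constants and an assumed room temperature of 300 K; the figures are the textbook minimum, not quantities derived in this paper.

    import math

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0               # assumed ambient temperature, kelvin

    # Landauer's principle: erasing one bit of information dissipates
    # at least k_B * T * ln 2 of heat into the environment.
    min_heat_per_bit = k_B * T * math.log(2)
    print(f"minimum heat per erased bit at {T:.0f} K: {min_heat_per_bit:.2e} J")   # ~2.9e-21 J

    # The corresponding entropy increase of the environment is k_B * ln 2 per bit,
    # independent of temperature.
    entropy_per_bit = k_B * math.log(2)
    print(f"entropy increase per erased bit: {entropy_per_bit:.2e} J/K")           # ~9.6e-24 J/K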

It is clear that there is a great decrease in entropy resulting from the creation of conscious knowledge, which is mirrored in the organisation we have imposed on the world around us. Considering the activity of each synapse involved in the generation of consciousness as an informational transaction, the decrease in our personal entropy must be at least balanced by the simultaneous increase in entropy associated with the possible microstates of consciousness. Because the entropy of the mysterious computations that engender consciousness, and the uncertainty surrounding them, is so high, the likelihood of our ever understanding those transactions is correspondingly small. Yunus Cengel describes consciousness as a purposive agency, endowed with a directed causality, as opposed to the laws of physics, which are non-purposive causative agencies, and simple emergent properties, which are non-causative.(9)

I propose that emergent systems, such as the emergence of classical from quantum physics, the emergence of life from chemistry, and the emergence of consciousness, involve the irretrievable destruction of microscopic information, and that the uncertainty about them is the result of entropy. Dr Ruth Kastner, a proponent of the Transactional Interpretation of Quantum Mechanics, has discussed how entropy itself may arise together with classical physics from a loss of quantum-level information during the absorption of photons, which can be understood as leading to a generalised form of Spontaneous Symmetry Breaking.(4) (Spontaneous Symmetry Breaking is better known for its role in separating out the four fundamental forces of Nature at the beginning of the Universe.) Entropy thus arises from the spontaneous breaking of the symmetry of the unitary time evolution of the quantum state according to Schrödinger’s equation for its wave function, which otherwise would result in the possible outcomes of particle interactions always summing to a predictable 100%. This occurs with the destruction of any “computation” involved in the symmetry breaking, in the form of, as we have seen, the loss of phase coherence in quantum states that “throws out information” to create the emergent, but blurred, classical level. The more the microscopic information about a process is erased, the less we can predict about that process macroscopically, and uncertainty increases. An emergent increase in macroscopic (conscious) information in this setting is simply not predictable, unlike the situation discussed earlier, before a book is destroyed by fire.

I propose that to avoid the “creation” of information during emergence, confounding the putative First Law, the increase in entropy “requires”, or occurs with, the simultaneous destruction of the computational pathways involved in the emergence. This, then, destroys an equivalent amount of information. Only in this way can the First and Second Laws be reconciled during the phenomenon of emergence. The destruction of the information at the level of microstates required by entropy may mean that information is related to but not equivalent to entropy, and that only some combination of the two is being conserved. (As a corollary, I think we can say that physical determinism is incompatible with the emergent phenomenon of consciousness, and that therefore, since we are conscious, we are not predetermined.)

Physicists David Layzer and Robert O. Doyle have shown that the creation and embodiment of information occurs with a local decrease in entropy, or a pocket of negative entropy.(10) Entropy greater than the information increase must be radiated away as heat or as pure information. In quantum mechanics, information is governed by a conservation law, which prohibits the exchange of heat for negative entropy. Doyle notes that quantum mechanics combines a deterministic wave aspect with an indeterministic particle aspect. An electron can end up randomly in any one of the physically possible states of a measuring apparatus plus the electron, with the probabilities of each state given by the wave function. This “collapse of the wave function”, reducing multiple probabilities into one actuality, drops the local entropy of the measuring device commensurate with the increased information, and there is a discharge of heat to carry away the positive entropy. This irreversibly creates information at a purposive level (the deliberate measurement) and negative entropy newly embodied in the apparatus. Adequate but imperfect determinism occurs, says Doyle, through the averaging of huge numbers of quantum interactions over large objects.

I propose the following solution for the enigma of how quantum information may increase while still being conserved.

This reconciliation between the First and Second Laws required by the quantum conservation of information can be stated as a new Law (or at least a principle) of Thermodynamics: The increase in entropy in a time-irreversible, unpredictable (emergent) isolated system requires the simultaneous permanent deletion of information concerning the steps, or computations, involved. The local increase in negative entropy is balanced by positive entropy radiated away as heat.

It says, in effect, that to increase information without creating it, the process of creating the information must be deleted simultaneously with creating it. 

This new law seems to me to be necessary to explain emergent phenomena. I will use the example of Maxwell’s demon to relate it to the emergence of consciousness. At the quantum level, information consists of binary bits related to the microstates of particles. This is believed to be conserved in a manner analogous to energy. Its destruction would be equivalent to the destruction of the “missing” information needed to determine what microstate your system is in. This would be equivalent to the destruction of entropy, and is therefore, at least in non-emergent systems, impossible. But at the classical level, information can take the form of ideas, and can be copied or deleted perfectly – it can be scrambled without being lost.

James Clerk Maxwell imagined a demon that could defeat entropy by effortlessly opening a trapdoor between two chambers in a box and allowing fast air molecules to collect on one side. This would create an engine perpetually capable of doing work. However, as Landauer showed, the demon was thwarted because each time it opened the door, it collected information which could not be stored indefinitely. Therefore, information ultimately had to be erased at the end of each cycle, as the demon prepared to open the door again. According to Landauer’s Principle, the irreversible loss of information would be associated with heat loss that would reduce the system’s ability to perform work. This erasure increased entropy enough to counter the decrease in entropy that accompanied the movement of fast molecules to one side.

Similarly, to increase the level of order in the world gained by our becoming conscious, entropy must simultaneously increase through the loss of certainty associated with the unpredictable process. This occurs through the destruction of a portion of the information space that could become known to us, specifically, that portion involved in the process of emergence. When Maxwell’s demon opens the gate between our subconscious and our consciousness, presumably sitting in Eccles’s liaison areas of the dominant hemisphere, the information that is erased is that describing the pathway by which consciousness emerges. Furthermore, whatever is going on in our liaison areas to generate consciousness must be accompanied by an irreversible, unpredictable process that in turn converts our thoughts into actual instructions to move muscles, instructions that neurons can follow. This could be called a process of “convergence”.
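
To make the bookkeeping of the demon argument explicit, here is a minimal, highly idealised sketch of my own, not a model taken from the literature: each molecule the demon inspects at the trapdoor is assumed to cost one recorded bit, sorting is assumed to remove at most k·ln 2 of entropy per molecule, and erasing the demon’s memory dissipates at least k·T·ln 2 of heat per bit, so the Second Law is never beaten.

    import math

    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 300.0               # assumed ambient temperature, K
    N = 1_000_000           # molecules inspected in one sorting cycle (arbitrary choice)

    # The demon records one bit per molecule: "fast" (let through) or "slow" (keep out).
    bits_recorded = N

    # Idealised upper bound on the entropy the demon removes from the gas by
    # sorting it into fast and slow halves: k_B * ln 2 per molecule.
    entropy_removed_from_gas = N * k_B * math.log(2)

    # Landauer cost of wiping the demon's memory to begin the next cycle:
    # at least k_B * T * ln 2 of heat per erased bit, i.e. k_B * ln 2 of entropy.
    heat_dissipated = bits_recorded * k_B * T * math.log(2)
    entropy_added_by_erasure = heat_dissipated / T

    print(f"entropy removed from the gas:   {entropy_removed_from_gas:.3e} J/K")
    print(f"entropy added back by erasure:  {entropy_added_by_erasure:.3e} J/K")
    # The erasure term is always at least as large as the sorting term, so the
    # demon achieves no net decrease in total entropy.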

I’ll note here a startling implication for Artificial General Intelligence. Since we can never learn for ourselves the computations that lead to consciousness, we’ll never be able to program them into a computer. Furthermore, a computer will never need to be conscious to perform its computations, any more than a toaster needs consciousness to make toast. I believe that the process of becoming conscious must be evolutionary, and was probably associated with the massive increase in complexity brought about when asexually reproducing organisms began to reproduce sexually. With this development arose a need for a primitive understanding of why and how to perform reproduction successfully, without which consciousness would have remained superfluous. It follows that ethicists need not worry about how to treat conscious computers fairly until two computers fall in love, and are prepared to die in order to defend their baby.

The emergence and convergence of consciousness are hidden deep within the computational workings of the brain by the inescapable tyranny of entropy and its irreversible tendency towards increasing diversity and disorder. However, we can say that the situation is not consistent with simple monism, but is effectively compatible with property dualism: specifically, a causally interactive dualism that is, at a hidden level, monist. There is no need for the explanatory gap to be bridged by a psychological law of panpsychism. We must return to the question “What possible alternative way of experiencing the world could there be?” as the best answer to the Body-Mind question. I propose that it will never be possible to characterise consciousness more descriptively than that. Popper was correct to predict that the mystery of consciousness evolved in tandem with drives propelled by World 3, that consciousness would require that we are not predetermined, that it would require a new law of physics, and that it would never be more than partially understood by science. Indeed, I’m obliged to conclude that the body-mind problem will never be solved until entropy can be defied, sometime after all the world’s broken eggs have reassembled themselves, and all the world’s toothpaste has squeezed itself back into the tubes.

References:

  1. Sir Karl Popper and Sir John Eccles. The Self and Its Brain. Springer-Verlag, 1977.
  2. Antonio Damasio. The Feeling of What Happens. Harcourt Brace, 1999.
  3. Christof Koch. The Feeling of Life Itself. MIT Press, 2019.
  4. Ruth E. Kastner. “On Quantum Non-Unitarity as a Basis for the Second Law of Thermodynamics”. Entropy, 2017, 19(3); arXiv:1612.08734.
  5. Sean Carroll. The Big Picture. Dutton, Random House, 2016.
  6. Ilya Prigogine. The End of Certainty: Time, Chaos and the New Laws of Physics. Free Press, 1997.
  7. Yunus A. Cengel. “On Entropy, Information and the Conservation of Information”. Entropy, 2021, 23(6), 779.
  8. Rolf Landauer. “Irreversibility and Heat Generation in the Computing Process”. IBM Journal of Research and Development, July 1961, pp. 183–191.
  9. Yunus A. Cengel. “The mind-brain problem from the perspective of agency”. J Neurobehav Sci, 2022; 9: 7-16.
  10. Robert O. Doyle. “How is information created?” The Information Philosopher, http://www.informationphilosopher.com.