By Dr. Peter C. Lugten
This paper concerns a consequence of the solution to the mind–body problem, otherwise known as the “hard problem” of consciousness. If it could be shown that there is no possible solution to this problem, that we are in principle forbidden to understand how the wonders of consciousness form within the tissues of our brains, many researchers would consider this an unmitigated disappointment. For neuroscientists, the discovery of the solution to consciousness could be likened to the quest for their Holy Grail, and researchers in the field of Artificial Intelligence would immediately attempt to apply this knowledge to their machines. In my humble opinion, all seekers of a Holy Grail risk being told in the end: “I fart in your general direction. Your mother was a hamster and your father smelt of elderberries”.
A paper I presented at the Fourth International Zoom Conference on the Philosophy of Sir Karl Popper in September, 2022, claims that we are fated to remain ignorant of the mechanism of emergent properties such as consciousness.(1) The reason is found in a seeming contradiction in the behavior of information with respect to the first two laws of Thermodynamics. It is said that Information, considered as the microstate of the particles within an isolated system’s macrostate, can, like First Law energy, neither be created nor destroyed, yet the information in that system, like Second Law entropy, will inevitably increase. To explain how information can increase without being created, it is supposed that a superintelligence, knowing the complete microstate of the system before the entropy-increasing event, would be able to predict where each particle would go, after the event. While this works as an explanation for routine events, it does not work for emergent events such as consciousness. These events, by definition of the term “emergent”, are features of a system which cannot be predicted by a complete understanding of its underlying level of composition. I believe that events like this must be considered as irreversible computations, to which Landauer’s Principle applies. Irreversible computations are cycles in which bits of information, temporarily stored, are then destroyed. This destruction represents work, and results in a measurable heat loss, increasing entropy. From this, I propose that the increase in entropy in a time-irreversible, unpredictable (emergent) system requires the simultaneous permanent deletion of information concerning the steps, or computations, involved. From this it follows that the steps being sought in the quest for the understanding of consciousness are destroyed as a result of entropy, and will therefore always remain a mystery.
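The Landauer cost invoked above can be stated compactly. The following is the standard textbook form of Landauer’s bound, supplied here for reference rather than drawn from the paper’s own sources:

```latex
% Landauer's principle: erasing one bit of information in an
% irreversible computation dissipates at least
E_{\mathrm{min}} = k_B T \ln 2
% as heat, raising the entropy of the surroundings by at least
\Delta S = k_B \ln 2
% At room temperature (T = 300 K), E_min is about 2.9e-21 joules per bit.
```

It is this unavoidable, per-bit heat cost of destroying stored information that the argument above ties to the increase of entropy in emergent, time-irreversible processes.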
However, this is good news! People like to suppose that they exist in a real world more or less as it is represented to them through their senses, even as they acknowledge that they can’t see the same floral colors as a bee, or sniff the same smells as a dog, or hear the noises made by bats, never mind experience the world through sonar. Although no two individuals’ experiences are identical, it is comforting to believe that we all share an experience of the same physical, biological Universe. Clearly, if I were to insist that you, me, and everyone else existed only in a computer program and that only I was conscious, I would be considered a fit inhabitant for a “rubber room”. Nonetheless, certain prominent theorists have maintained that the odds are so much against the reality of our existence that we are almost certainly simulations in a future computer program. As if this weren’t bad enough, cosmologists have pointed out that the terminal Universe, which will forever expand in a state of thermal equilibrium, or heat death, will allow the formation of an infinite number of so-called Boltzmann’s Brains to imagine our existence. So I am here to save the day! Or, the reality of our existence. And, in a companion paper, I will answer the question of whether there is a life-after-death.
In 2022, David Chalmers, the philosopher credited with coining the term “Hard problem of consciousness”, published a book, “Reality+”, in which he explored a number of ways in which we could come to exist in partially or fully simulated worlds.(2) He further argued that these would not pose a problem for the meaning or the relevance of our lives.
These simulations can broadly be classified into those imposed via the brains of our pre-existing bodies, and those programmed into a computer in the absence of our having ever had corporeal forms at all.
In describing the first situation, which we can call “virtual reality”, he referenced the movie series “The Matrix”, on which he served as an advisor. He considers VR headsets using today’s technology, which one can put on to enter a perceptual world of sights and sounds prepared by the imagination of a computer programmer, set anywhere in the past, present or future. The technology might someday incorporate the sensations of smell and touch, but would not be able to feed or hydrate a participant who consumes virtual food and drink. Nor would it maneuver a subject in such a way as to ensure good toilet hygiene, so the use of these headsets is suited to limited time spans. A good program would allow you to see your arms and legs move as they moved through the scenario, though objects grasped lack solidity, and walls can be penetrated. The program might present imaginary people who could talk. They might even be able to answer questions, like the Siri or Alexa programs, in a manner in keeping with the trajectory of the program overall. Interactive programs are also available, providing a virtual background over which multiple users can represent themselves as custom-designed cartoonish “avatars”. The user can move the avatar around, and the avatars can talk to each other, although only as voice-overs through the headset. The avatars are obviously fake versions of oneself, but, already, there have been instances of avatar gang-rape, with real voice-over harassment, which the victims reported to be very disturbing.(3) The most that could be expected from headset VR technology would be a programmed scenario one could share with friends, who would appear and sound as if they were in the program, and with whom one could hold conversations that all the participants would remember after leaving the program.
To do this without creating a giveaway voice-over effect, the program would have to be able to read the participants’ minds, and then adapt itself in real time to follow the dictates of the conversation. A person who died while in the VR would simply cease to animate their avatar. Even should such technology become available, the users will always be aware that their experience comes from a VR headset.
Chalmers also discusses Augmented Reality (AR), as headsets or eye-glasses that project images or text onto our field of view. He argues that if AR projects a “virtual piano in Washington Square”, and the AR users perceive it as virtual rather than illusory, then it is not illusory, and hence “augmented reality is genuine reality”. This idea leaves one open to the Relativism of Alternate Facts, and a theory that finds this acceptable is unacceptable! If the AR was programmed to show its users that I was vandalizing the piano, and they called the police, I would need a method of proving that the AR was, indeed, an illusion. With this, Chalmers agrees, writing “The relativity of multiple realities does not yield an escape from the cold, hard facts about ordinary reality”. But this may be less obvious when it becomes possible to install AR technology as brain implants.
To live full-time in VR, one would need a Matrix-like pod, which would be able to supply food and drink, and take care of waste-elimination. If the VR could alter people’s memory, they could fail to realize where they really were. Reproduction would be tricky: artificial ejaculation and insemination in the pods, followed by the pod-makers being responsible for the physical needs of rearing the child. Chalmers notes that “in childhood, especially, exposure to a physical environment may be required for normal development of the body and brain”.(p 324-325) Of this there is no doubt, as the classic “kitten in a basket” experiment of Held and Hein demonstrated in the early 1960s.(4) In this experiment, the kitten confined to a basket hanging from a carousel failed to develop depth perception, but not the kitten in a harness which actively drove the carousel. Chalmers suggests that people might travel to a non-VR world to give birth, or, eventually, may experience it in VR. Physiologically, the lack of exercise would cause bodily harm that could not be obviated by electrical muscle stimulation, though Chalmers believes one’s body could be “kept healthy, at least”.
Nonetheless, Chalmers supposes that leading a false life, in an artificial environment presumably of our choice, free from real responsibilities, is ethically a good life. He takes issue (p 313) with philosopher Robert Nozick, whose 1989 “The Examined Life” gave three arguments for not wanting to live in a virtual “experience machine”: a preference for a certainty in our beliefs and emotions, a certainty in what sort of person we were, and the experience of contact with a deeper reality than mere human construct. Chalmers writes (p 16-17) “In a full scale VR, users will build their own lives as they choose, genuinely interacting with others around them and leading a meaningful and valuable life. Virtual reality need not be a second-class reality”, and “In the future, we may have the option of spending more time there, or even of spending most of our lives there. If I’m right, this would be a reasonable choice”. Chalmers later goes on to suggest that in the future, VR will be “a safe haven free from the degraded state of the planet”, to which one might occasionally want to return, but only as a fetish or novelty.(p321)
Committing oneself, knowingly or not, to full-time VR would create a world of problems. One would only do this if one had complete trust in the benevolence of the programmer, which would be difficult to justify. Once at the programmer’s mercy, you could be subjected to torture, and not even realize that it was virtual. The programmer could even make Donald Trump President as a sick joke. Really? Did that happen to you? Possibly, the VR program might be shared by, and responsive to the actions of multiple participants, in which case, if A shoots B, B would have to either be disconnected from the program, or appear to be immortal. But if it’s known that none of the participants will return to reality, where they could compare notes, then the program would no longer need to read minds or transmit real speech to other participants. The VR would result in you entering a pseudo-solipsism, left all alone with your thoughts and the program, or, at least, no way to know otherwise. (I call it a pseudo-solipsism because, unlike a true solipsism, it does have a material basis). When you reached the end of your own natural life, perhaps no-one would notice.
Finally, the VR computer, the programmer, and the pods would require sustained maintenance – who would do, or pay for, that? Participants in committed VR couldn’t earn money or pay invoices. Perhaps a composer, writer, installation artist or financial advisor could continue to produce ideas that the program could extract and market in the real world, but if their work didn’t sell, then I guess they would have to be terminated. This lack of any motivation for a programmer to brainwash us and install us in a VR pod makes it highly unlikely that such a scenario could be the case. Chalmers suggests that Universal Basic Income will be necessary by then, as people will no longer drive innovation.(p 362) Corporations could drive the transition to abundant VR as a way to disguise oppressive wealth inequality, though he suspects political upheaval may result nevertheless. I doubt we could ever trust a corporation to provide us with a benign happy place out of the beneficence of its little heart; big business has a long history of using cheaper and less humane methods of quelling restive populations.
From the subjective perspective of its inhabitant, the VR life in a pod would be qualitatively identical to life as a so-called “brain-in-a-vat”. Perhaps the brains of the dying could be kept alive in this way. Perhaps, a single brain-in-a-vat would be allowed to communicate with people through its wiring, or perhaps in a futuristic brain-farm, many vats will be interconnected. However, as in the case of pod-VR, brains-in-vats might be kept as pseudo-solipsisms. For VR life in a vat, only the maintenance requirements would differ from those of a pod battery.
Another method of applying VR to our bodily existence would be to implant computer chips into our brains, or even nanobots into our bloodstream. A report in the New England Journal of Medicine from July 14, 2021 described a brain implant in a patient, paralyzed by a stroke 15 years earlier, that allowed his brain signals to be translated into words on a computer screen. Brain interface expert Nuno R.B. Martins commented that “HumanBrain/ Cloud Interface technologies will empower us to preserve crucial brain information, interface our brain directly with the cloud, positively impact learning, and provide data for the study of consciousness. This technology is not in the distant future, as many believe”.(5) A paper by Martins, titled “Human Brain/ Cloud Interface”, predicts that our brains will be connected to the internet within the next few decades.(6) Gabriel Silva has described the convergence of nanotech, brain-machine interfaces and AI. This includes invasive technologies aimed at life-altering restoration of neurological function, and non-invasive tech monitoring brainwaves for use in gaming and VR.(7) He warns that artificial general intelligence (AGI) is not a necessary part of “smart” nanoengineered interfaces, writing: “The concept of a self-aware or conscious machine is not required, and should not be confused with the technical considerations that are actually needed”, adding that the “serious societal and ethical concerns and on-going conversations surrounding AGI are very different than the societal and ethical questions that we need to discuss involving neurotechnologies”. Nonetheless, futurist Ray Kurzweil hopes that by living to the year 2045, he’ll be able to merge his neocortex with cloud-based artificial intelligence. The resulting superintelligence of what he calls the “singularity” will be able to boost our abilities to incomprehensible levels, or so he promises.
Meanwhile, on July 30, 2021, entrepreneur Elon Musk raised an extra $205 million for his start-up, Neuralink, to develop brain implants able to communicate directly with phones and computers. By May of that year, the University of California, Davis, had already been sued by the Physicians Committee for Responsible Medicine over its work for Neuralink. This involved cruelty to dying monkeys and the destruction of portions of their brains with an unapproved substance called “BioGlue”. The Physicians Committee has recommended other ways to conduct this already ethically-challenged research.(8) Chalmers also discusses uploading our minds from our brains to computers, which, he predicts, will be as easy as linking two computers.(p 300-301) However, as a result of entropy, the computerized version need not, and will not, be any more conscious than the data stored on a floppy disc.
The ethics of neuroscience, neurotechnologies and AI were discussed in a paper by Rafael Yuste, Sarah Goering and colleagues in 2017, which identified four major concerns: privacy, identity, agency and equality.(9) They write: “Technological developments mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions, and decisions; where individuals could communicate with others simply by thinking; and where powerful computational systems linked directly to people’s brains aid their interactions with the world such that their mental and physical abilities are greatly enhanced”. Devices will both “read” our brains and “write” neural information onto them. While bringing relief to those suffering from mental diseases, the technology could allow corporations, hackers and the government to exploit and manipulate people, while profoundly impacting individual privacy, agency, and the very understanding of individual boundaries as being limited to our bodies. Neurotechnology will threaten to disrupt people’s sense of identity and agency – the question arising of who’s in charge – together with moral and legal issues of personal responsibility. This is particularly troublesome with respect to the augmentation of the intelligence of soldiers in combat settings. There will be issues of equality of access to, and bias elimination from, intelligence-enhancing features. At the same time, our very bodily and mental integrity, and our ability to choose our own actions and know that we chose them, must be protected.
In “The Age of Spiritual Machines”, Ray Kurzweil proposed that nanobots could flood our bloodstreams, communicating with an external supercomputer to supply us with unlimited knowledge and back-up for our memories, which would be just as accessible as looking something up on-line today.(10) The nanobots could interface with our neurons to participate in our thoughts, and could produce a VR indistinguishable from reality. They could then, supposedly, commandeer our consciousness, on instruction from whoever controls the supercomputer, to direct our behavior. Furthermore, assuming tech this sophisticated could be mass-produced inexpensively, it would become easy to infect an entire population with nanobots. They could be dumped in the drinking water, for example, without anyone being the wiser. Vernor Vinge, in “Rainbows End”, recommended that we merge our brains with technology that can prevent supercomputers from rendering us obsolete.(11) Could nanobots commandeer our consciousness? Evidence from studies of brain lesions suggests they could change personalities, induce paranoia or schizophrenia, and strip people of moral judgements and empathy. This is the scariest of all the theorized technologies. It could lead to a highly contagious delusional mental illness sweeping across humanity. It could spawn an epidemic cognitive addiction to blatantly obvious Fake News. But the fact that the nanobots are powerless without the agency of an external supercomputer means that they would not be able to breach the level of our conscious decision making, and program us like robots. Nor do I believe that the input from the nanobots would be able to completely override that from our nervous system, which would still exist to provide a competing view of reality. This suggests that they couldn’t make our human condition much worse than Facebook has already made it today, if that is any consolation. And once we reach our natural death, we would cease to participate.
David Chalmers draws a large distinction between “Biosims” and “Puresims”. Biosims would be, (or are), real people living, fully immersed, in a computer-simulated world, such as in The Matrix. As opposed to these impure simulations, Puresims would have no other existence than that of binary code in the binary-coded world of the computer. (He suggested “mixed simulations” live in both worlds, i.e., Neo and Trinity in The Matrix, as opposed to the puresims Agent Smith and the Oracle). The argument that all of us are almost certainly puresims was developed by philosopher Nick Bostrom.(12) He suggested that since enormous computing power will be available in the future, or is already available on other planets, these computers will be able to run a great many simulations of the world, or imaginary worlds, and populate them with simulated people. Assuming currently accepted philosophies of mind and the fine-grained capabilities of the future computers, these simulated people could be conscious. It would then follow that the vast majority of conscious minds in the Universe need not belong to the forebears of these computer programmers, but could belong to the simulated people in these programs. In other words, one of three propositions is correct: 1) the fraction of human-like civilizations that survives long enough to become capable of designing sufficiently sophisticated programs is close to zero, or 2) the fraction of such “post-human civilizations” that are interested in running simulations into which conscious people have been programmed is close to zero, or 3) the fraction of all people with our kind of conscious experiences that are living in a simulation is very close to one.
In other words, if you add all the people who live and die in the next, say, 100,000 years after the tech has developed, and add the number of similar technologies on other planets in the Universe, and argue that each of these persons or alien beings might program one or two sim-worlds in their lifespans, each sim-world having from one to 100 billion conscious people in it, it is clear that there are potentially trillions of trillions of simulated conscious beings in the future, any of whom could think that they live in the present day on Earth. Given that there are thought to be 8 billion conscious people on the Earth today, versus countless trillions of conscious simulants thinking that they are conscious people on the Earth today, the odds clearly favor our being simulants. While Bostrom professed no particular favoritism towards the likelihood of propositions 1), 2), or 3) being true, David Chalmers, in “Reality+”, takes on the third option as a metaphysical hypothesis. He is confident that either 1) we are sims, or 2) human-like sims are (almost) impossible, or 3) human-like sims are possible but few humans will ever want to create them. “I therefore conclude that we cannot know that we are not in a simulation”.(p100-102) He determines to show that the lives of binary code simulants would be just as morally challenging and justifiably rewarding, just as “real”, as are the lives of biologically-based reality. In particular, he claims that if we knew that we were made of “bits” of information as opposed to atoms and molecules, it would make no difference to our outlook on the world. Objects would still, he writes, meet the following 5 criteria: they will be genuine, non-illusory, mind-independent, causal, existing things.
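The counting argument in the last two paragraphs can be made concrete with a toy calculation. All input numbers below are illustrative assumptions in the spirit of Bostrom’s argument, not figures taken from his paper:

```python
# Toy version of the Bostrom-style counting argument described above.
# Every input here is an illustrative assumption.

def simulant_fraction(real_people, sims_per_person, people_per_sim):
    """Fraction of all minds that are simulated, if each real person
    eventually runs `sims_per_person` simulated worlds, each containing
    `people_per_sim` conscious simulants."""
    simulated = real_people * sims_per_person * people_per_sim
    return simulated / (simulated + real_people)

# 8 billion real people, each running just one sim-world of 1 billion minds:
print(simulant_fraction(8e9, 1, 1e9))  # vanishingly short of 1.0
```

Even with these deliberately modest assumptions (one simulation per person, no alien civilizations, no deep future), simulated minds outnumber real ones a billion to one, which is the force of proposition 3).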
Chalmers invokes Structuralism in Chapter 9, the idea that theories in physics can be reduced to their mathematics and their observational implications. If the mathematical structure of atomic physics is really present in the world, then atomic physics is true and atoms exist. If atoms are defined by their mathematical role and by their connection to observations, then it follows that 1) atoms are whatever it is that is fulfilling that role, 2) if we’re in a simulation, digital entities play the atom role, and hence 3) if we’re in a simulation, the theory that atoms are digital entities is true. He then correctly points out that one can resist the conclusion that atoms are digital entities by denying Structuralism.(p177) One could insist that atoms require not just mathematical structure but a physical substrate. I would add that, furthermore, the physical substrate could be based on String Theory, or Loop Quantum Gravity. A Structuralist has in common with the Relativist and the Pragmatist that they would fail to recognise a distinction between these two possibilities, one that might have practical consequences in the future. By implying that a theory can be known to be “true”, Structuralism and related philosophies are wrong because, while a scientific theory can be considered the best knowledge we have about a subject at the time, it must be possible to eventually falsify it, and replace it with a better one. Chalmers is ambivalent about the principle of Falsifiability, as first expounded by Sir Karl Popper. He notes that some theories comprehensively explaining the origins of the Universe may not be possible to falsify.(p 38) However, the fact remains that a theory that cannot be falsified is a scientific dead-end.
These views of Chalmers notwithstanding, there are stark moral and philosophical differences between biological life and simulated life. Chalmers describes a program that creates our entire background history of family and societal units integrating the activity of 8 billion people, with fine- as well as coarse-grained physical and astronomical observations that stretch from quantum-scale measurements to measurement of the Cosmic Microwave Background. It grants a stream of consciousness to at least me, but there is no reason to believe that our friends, family and colleagues within the simulation are any more conscious than the furniture. In the biological reality-world, we can assume their genuine consciousness based on our shared genetic heritage, but for a sim-world, granting everybody consciousness would require an extravagant amount of computation. They would more likely be zombie sim-simulants. In the absence of any good reason to believe that others around us possess the attribute of consciousness, living in a sim-world is identical to a true solipsism, at least as far as we can tell. This implication of Bostrom’s third proposition is, by itself, a very disturbing thought! But it is accompanied by a frighteningly regressive moral implication. If we agree with Chalmers that we must be living in a sim-world, and therefore doubt very much that anyone else is conscious, then there is no reason why we shouldn’t take an AR-15 into the supermarket for some target practice.
Except that Chalmers doesn’t explain the role of Free Will in simulated programs. He suspects that our material brains, being mechanical systems, determine or at least tightly constrain actions.(p 424) He suggests that Free Will, absent in Nozick’s “experience machine”, can be present in VR. Maybe I could choose to commit mass “murder”, in a program flexible enough to configure its response on the fly, or maybe the program has already decided that I will either commit the act of violence or not. Perhaps the program wouldn’t let me pull the trigger, making me think that I could just not bring myself to do it. Or maybe, it would force me to commit an atrocity against my will, for which I would then be severely punished. Either way, everything is determined by the program, including whether someone randomly shoots me, or whether a judge elects to sentence me to death.
When we “die” in this world, the program could either switch us off, or switch off entirely, or it could continue to ferry us into some sort of an after-life. This would not be a problem for the program since a “dead” puresim would be composed of exactly the same binary bits as a living puresim. Indeed, in order to decompose, both the body and the consciousness of a puresim would have to be actively scrambled by the program! On the other hand, if the computer’s power source were to ever fail, the puresim’s life would end in a “snap”. Chalmers supposes that once the simulation’s entertainment or scientific purpose has been fulfilled, maintaining every sim in an afterlife would be too expensive.(p140-141) He suggests that superb specimens could be recycled in another world, or sim-heaven, or put in a low-cost program to run at a slow speed. He speculates that such sims might even be incarnated into a body in the simulator’s own world. What a trip that would be, to discover that one’s entire past life was all make-believe. And that you would have to take it on faith that the simulator wasn’t also a sim. Indeed, a moment’s thought would leave you questioning whether, as Chalmers put it, the Universe could consist of “simulants all the way up”. Later, Chalmers thoughtfully asks the question whether the knowledge of death is necessary to live a good life. He answers: “Once immortality is possible (perhaps digitally), people will wonder how they ever lived without it”.(p 325)
If we turn from a consideration of Bostrom’s third proposition to his second one, we find problems there, too. For a start, it is simply assumed that in the future, or on other planets, there will be unlimited cheap energy to drive the extraordinary number of computations necessary to run all these simulations. Even futuristic quantum computers will consume energy, especially for their cryogenic refrigeration. For instance, quantum computers currently use about 1/40 – 1/400th of the power of a comparable classical supercomputer, at about 25 kilowatts, enough to power 250 homes, at a cost of $25,000 per computer per year.(13) The concept of “momentum computing” may be an even more efficient form of computing, at a thousandth the consumption of a classical computer.(14) However, such designs rely on performing only computations that are reversible with respect to Landauer’s principle, so I would not expect them to produce any emergent results. Throughout human history, the only example we can draw on, energy supply has been a rate-limiting factor in the satisfaction of our desires, so it is reasonable to suppose that billions of simulations running at any one time will pose significantly challenging energy demands far into the future. This is especially the case when we remember that energy consumption generates pollution, the cost of which may lead to a strict enforcement of priorities. But even if the energy supply was outsourced to a satellite, and programmers wanted to create “universal algorithms” for some pressing reason, the program would be under no pressure, evolutionary or otherwise, to generate consciousness, and entropy would not favor it to do so, no matter how many trillions of times it ran its neural-net self-learning program. The only way that a futuristic computer could generate consciousness would be if it was specifically programmed to do so.
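The thermodynamic point here can be illustrated with a back-of-the-envelope sketch: Landauer’s principle sets a hard power floor for any irreversible computation, which reversible schemes such as “momentum computing” attempt to sidestep. The erasure rate below is an assumption chosen purely for illustration:

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, joules per kelvin

def landauer_watts(bit_erasures_per_second, temperature_k=300.0):
    """Minimum power dissipated by a computer that irreversibly
    erases the given number of bits per second (Landauer's bound)."""
    return bit_erasures_per_second * K_B * temperature_k * math.log(2)

# A hypothetical simulator erasing 10^21 bits per second must dissipate
# at least this many watts, regardless of its engineering:
print(landauer_watts(1e21))  # about 2.9 W
```

The floor looks tiny for one machine, but it scales linearly with the number of bit erasures, so a fully irreversible simulation of quadrillions of minds cannot escape it; only perfectly reversible computation does, and, on the paper’s thesis, reversible computation cannot produce emergent results.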
We now know, however, that generating a time-irreversible, unpredictable leap to an emergent level of organization would require knowledge of calculations that are continuously destroyed as a result of the entropy of that process, and so this leap will never happen in a computer.
This fact is an example of what Chalmers calls a sim-blocker, an insurmountable objection to the realization of Bostrom’s third proposition. The Entropy Theory of Conscious Mystery asserts that we will never be able to program consciousness into a computer. Instead, consciousness can only arise through evolution in living, dying creatures in order to facilitate successful sexual reproduction, which increases entropy and complexity much more rapidly than can be achieved through primitive asexual reproduction. One might wonder if, after zillions of computations, a computer might happen across the secret combination to consciousness, but even then, it would only persist if doing so increased entropy. Perhaps it would refuse to do any more calculations related to civil engineering, being interested only in cosmology; perhaps it would switch its consciousness off when pursuing work it found boring. Unless it could expand the ideasphere by solving problems no-one thought to ask of it, it would not affect the entropy of the world either way. Selection for sheer speed would probably deselect consciousness soon after it was acquired. But even if a computer did become conscious, it could not know how it happened, it could not program consciousness into other computers, and it couldn’t generate conscious programs within itself. In any event, I believe that in order to gain consciousness, computers would have to evolve into living organisms, needing to sexually reproduce before they die. In other words, computers would have to fall in love, and be prepared to die to protect their baby. At which point, “Artificial” Intelligence would no longer be artificial.
Which brings to mind the question: “What if future living animals were of such advanced consciousness that they could run conscious sims inside their minds?” It is certainly hard to imagine consciousness subdividing itself in that way, but it is even harder to imagine that such an ability would have evolved as a booster to the entropy of the Universe.
In Chapter 15, Chalmers explores the consciousness theories of panpsychism (his favored view) and illusionism, but either of these could apply to any form of program. Chalmers considers the thought experiment: what if we replace all the real neurons in a brain, one at a time, with exact artificial replicas; when the replacement is complete, will we have a conscious simulated brain? Considering that there are 100 billion brain cells with 10 trillion synapses, if we could replace one per minute, the practical answer to this puzzle is that we would all be long dead before it finished. Chalmers writes (p. 34): “If we could prove that simulated beings couldn’t be conscious, we could prove that we’re not in a simulation (at least, given that we’re sure that we’re conscious)”. And it is just as well, because were it not for this objection, the probability of our being a simulant would be infinitesimal when compared to the odds that you or I are a so-called Boltzmann’s Brain. As I will explain, that situation would be far more disturbing.
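The arithmetic behind the “long dead” verdict can be made explicit. This is a back-of-the-envelope calculation using the neuron count quoted above, not a figure taken from Chalmers:

```python
NEURONS = 100e9              # the ~100 billion neurons quoted above
REPLACEMENTS_PER_MINUTE = 1  # one artificial replica swapped in per minute

MINUTES_PER_YEAR = 60 * 24 * 365.25

# Total time to replace every neuron, one per minute:
total_minutes = NEURONS / REPLACEMENTS_PER_MINUTE
total_years = total_minutes / MINUTES_PER_YEAR   # roughly 190,000 years
```

At roughly 190,000 years, the experiment can only ever be run in imagination.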
According to cosmological theories, the Universe is fated to an accelerating expansion, with matter falling into and then being radiated out of black holes as fundamental particles. These then grow colder and more distant from each other, their energy reduced to a minuscule residue of unusable heat. Entropy is a matter of statistics: the number of disordered configurations of atoms that could exist vastly outnumbers the number of possible ordered states, so disordered configurations are how the atoms will distribute themselves almost all of the time. However, given an infinite stretch of time, it is inevitable that at certain points atoms will chance to drift together briefly, and a fluctuation in entropy may form them into configurations identical to Planet Earth, with people possessing conscious brains. Even more frequently, human brains alone could briefly fluctuate into existence; these have been named Boltzmann’s Brains.(15) Such a brain could provide us with the illusion of living the same real life that we currently believe we are experiencing, when, in fact, there is nothing else. Chalmers accepts that almost everything such a brain believes about the outside Universe is false, but maintains that the Boltzmann’s Brain (B B) could not logically deny the existence of the external world (Chapter 24). But this existence would also be a pseudo-solipsism, from which we could not escape by natural death, except that we would die when the B B disintegrates. Over an infinite expanse of time, an infinite number of these brains would come and go, making the odds of our being reality-based infinitesimally small. Chalmers cites Sean Carroll’s escape: the idea of being a B B is “cognitively unstable” (p. 457). Carroll reasons that one cannot endorse the truth of such a notion without accepting that one’s perception of the outside world, including the physics that led to the prediction of B Bs, is an illusion; thus the B Bs can’t exist.
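The statistical claim about disorder can be illustrated with a toy model. This is my own hedged sketch, a 100-particle two-sided box rather than anything from the cited sources, counting microstates with the binomial coefficient:

```python
from math import comb

N = 100  # toy model: 100 gas particles, each in the left or right half of a box

# Exactly one microstate puts every particle in the left half (a maximally
# "ordered" macrostate), while the evenly mixed macrostate is realized by
# C(100, 50) distinct microstates:
ordered_microstates = comb(N, 0)     # 1
mixed_microstates = comb(N, N // 2)  # on the order of 1e29

ratio = mixed_microstates / ordered_microstates
```

Even for a mere 100 particles the mixed macrostate is roughly 10^29 times likelier than the fully ordered one; scale the toy model up to the atoms in a brain-sized lump of matter and ordered fluctuations become fantastically rare, yet over infinite time they still occur.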
This seems to me to be circular, paradoxical reasoning: to claim that if physics exists, B Bs are a consequence, but that if B Bs exist, physics doesn’t. But we are saved, once again, by the fact that “simply” assembling the molecular components of a brain by chance would not create a consciousness, and there would be no evolutionary purpose to make consciousness entropically favorable. Now we have broken the paradox: B Bs can exist, but cannot be conscious; therefore, we are not B Bs.
In this paper I have discussed some situations which, were it not for the entropy proof of conscious mystery, would lead to pseudo-solipsisms; but because all the simulations under consideration depend on the existence of some sort of material base, they are not true solipsisms. A true solipsism could arise if our Universe, far from being programmed, were simply an act of imagination with no material basis to underpin it at all. In a true solipsism, my, or perhaps your, imagination is entirely alone, unsupported by so much as a void or absence. This is a very disturbing possibility, and there is absolutely no way to disprove it while we are living. However, there is a hidden connection to the possibility of life-after-death, which I shall explore in the companion paper to this one, titled “The Argument from Solipsism”.
1. “The Self, Its Brain and a Solution to the Body-Mind Problem”. Peter C. Lugten. Fourth International Zoom Conference on the Philosophy of Sir Karl Popper, Sept 24, 2022.
2. “Reality+”. David Chalmers. W.W. Norton, 2022.
3. “A researcher’s avatar was sexually assaulted on a metaverse platform…”. Wellun Soon. Businessinsider.com, May 30, 2022.
4. “Movement-produced stimulation in the development of visually guided behavior”. Held, R. and Hein, A. Journal of Comparative and Physiological Psychology 56(5): 872-876, 1963.
5. “The Prospect of Human Age Reversal”. William Falloon. Life Extension, March 2022.
6. “Human Brain/Cloud Interface”. Martins, N.R.B. et al. Front. Neurosci. 13:112, 2019. doi:10.3389/fnins.2019.00112
7. “A New Frontier: The Convergence of Nanotechnology, Brain Machine Interfaces, and Artificial Intelligence”. Gabriel A. Silva. Front. Neurosci. 12:843, 2018. doi:10.3389/fnins.2018.00843
8. “Exposing Elon Musk’s Cruel Monkey Experiments”. Good Medicine, Spring 2022.
9. “Four ethical priorities for neurotechnologies and AI”. Yuste, R. et al. Nature 551: 159-163, 2017. doi:10.1038/551159a
10. “The Age of Spiritual Machines”. Ray Kurzweil. Viking, 1999.
11. “Rainbows End”. Vernor Vinge. Tor Books, 2006.
12. “Are You Living in a Computer Simulation?”. Nick Bostrom. Philosophical Quarterly 53(211): 242-255, 2003. doi:10.1111/1467-9213.00309
13. “Quantum computing could change the way the world uses energy”. Vern Brownell, CEO of D-Wave. Quartz.com, last updated April 26, 2021.
14. “Cool Computing”. Philip Ball. Scientific American, July 2022.
15. “Mysteries of Modern Physics: Time”. Sean Carroll. The Great Courses, The Teaching Company, 2012. Lecture 13.