Reengineering Reality

Beyond the Metaverse


Digital physics

The idea that physics is essentially digital goes back to 1967. Konrad Zuse was a German engineer, computer scientist and inventor who is best known for the design and implementation of the first working programmable computer, the Z3, completed in 1941. Increasingly, Zuse is also regarded as the first person to put forward the simulation hypothesis, the view that all processes in the universe are computational and that life, earth, everything in the universe we know is a simulation. Zuse argued that the universe is computed by some sort of cellular automaton or other piece of discrete computing machinery.

This view was remarkable because physical laws such as Newton’s law of gravity had always been seen as continuous. Zuse described his theory in his book Rechnender Raum (Calculating Space), published in 1969. He predicted that there would be great potential for physicists and experts on data processing to learn from each other, and in the book he developed several basic ideas and attempted a comparison between classical physics, quantum physics and calculating space. He hoped that through cybernetics, the combination of computing disciplines, a true bridge could be built between physics and automaton theory. Stephen Wolfram, computer scientist and creator of the Mathematica software, built further on the cellular automaton work of Zuse in the 1980s; we return to his work a little later in this chapter.

A second influential person in the development of digital physics theory is the pioneering physicist David Bohm. In his book Wholeness and the Implicate Order, published in 1980, Bohm describes how the entire universe can be viewed as a gigantic, moving hologram or holomovement that includes a total order containing both an implicate (also referred to as “enfolded”) and an explicate (“unfolded”) order. The implicate order is viewed by Bohm as the deeper and more fundamental order of reality, which captures the strange behavior of quantum particles (e.g. non-locality, quantum superposition) discovered at the (sub)atomic level in the 20th century. The explicate order is the reality that we normally perceive. Through a continuous process of unfolding and enfolding, sub-atomic particles dissolve into and emerge from the deeper implicate order into the explicate order. Bohm recognized that the mathematics of quantum theory deals primarily with the implicate pre-space and how an explicate order of space and time emerges from it, whereas general relativity deals primarily with geometry and the movement of particles and fields. He went further, suggesting that the implicate order could itself be organized by a second implicate order, or even an infinite series of implicate orders, that affect each other through unfolding and enfolding. In the case of consciousness, Bohm characterised consciousness as a process in which content that was previously implicate becomes explicate and vice versa, supported by evidence that memory in the human brain may be enfolded within every region of the brain rather than being localized. Bohm saw a deep connection between non-locality in quantum theory and consciousness, and this led him to believe that consciousness could be contained deep in the implicate orders and exist in different degrees in all matter.

Less than a decade later, the theoretical physicist John Archibald Wheeler built on the views of Zuse, Wolfram and Bohm, suggesting that what constitutes reality is the result of countless binary decisions, true/false, yes/no, on/off, which result from interacting with or observing the universe. Wheeler presented his notion of “it from bit” at a conference in 1989: “Every it — every particle, every field of force, even the space-time continuum itself — derives its function, its meaning, its very existence entirely from binary choices, bits. What we call reality arises in the last analysis from the posing of yes/no questions.” [link needed]. Like Bohm, Wheeler also recognized the connection between physics, observer-participancy and information: physics gives rise to observer-participancy; observer-participancy gives rise to information; and information gives rise to physics.

To get a sense of what “it from bit” means, consider that your reading of this sentence is the result of a cosmic number of calculations made by the universe since the Big Bang, binary choices that eventually led to your brain interpreting this sentence and making the binary choice of whether or not to believe it. Everything physical, chemical and biological is calculated. The universe computes quantum fields, chemicals, bacteria, human beings, stars and galaxies. As it computes, it maps out its own spacetime geometry to the ultimate precision allowed by the laws of physics. Computation is existence.

Bits of information

How did the idea of the universe as a computer take hold, and how do computation and physics relate? The first premise of digital physics, the view that the world cannot be infinitely divided but that there exist smallest parts, atoms, that cannot be divided further, can be traced back to ancient Greece and the theory of Democritus [1]. Democritus probably lived between around 460 and 370 BC, but little is known of him directly. Aristotle attributes the atomic theory to Democritus, but he probably learned a lot from his teacher Leucippus. The theory of Democritus stated that everything consists of atoms and that atoms are indestructible and unchangeable. Democritus knew that if you cut a piece of wood in half you end up with two smaller pieces of wood, and reasoned that you cannot continue to cut wood in halves indefinitely; therefore there must be some smallest, indivisible unit, or atom. Democritus and other atomists further associated properties like hard or soft, sweet or sour, wet or dry with the atoms of substances. Iron atoms were hard and dry because iron has these characteristics (at low temperature); water, on the other hand, would consist of atoms that were wet and soft. According to Democritus there is an infinite number of atoms of different shapes and sizes in our universe, they are continuously in motion, and in between these atoms there is nothing but empty space. Ultimately, largely due to the influence of Aristotle and Plato, who rejected the atomic view, the theory of Democritus was forgotten for almost 2000 years before the atomic view was revived in modern science.

The bits Wheeler refers to can be seen as the digital version of the “atoms” of Democritus, except that these new digital atoms are the basis not only of matter, as the Greeks thought, but of energy, motion, mind and life. The dominant view in physics since Aristotle, continuing with the laws of Isaac Newton and the general and special relativity theories of Albert Einstein into the 20th century, had been that energy, space and time are continuous. In the 20th century this worldview changed when physicists discovered that the sub-atomic world behaves very differently, according to the rules of quantum physics.

The first hint that energy exists in discrete quanta goes back to the German physicist Max Planck, who suggested in 1900 that the energy carried by electromagnetic waves could only be released in packets of energy, an integer number of discrete, equal-sized parts. In 1905, Einstein published a paper proposing that light energy is carried in discrete quantized packets, to explain experimental data from the photoelectric effect (the emission of electrons when electromagnetic waves such as light hit a material). Einstein theorized that the energy in each quantum of light was equal to the frequency of the light multiplied by Planck’s constant. A photon (light particle) above a threshold frequency has the required energy to eject a single electron, creating the observed effect. Einstein was awarded the 1921 Nobel Prize for the discovery of the law of the photoelectric effect, after experimental validation by Robert Millikan, who was himself awarded a Nobel Prize in 1923.
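In formulas, this is the standard photoelectric relation (stated here for clarity, not quoted from the text above): h is Planck’s constant, ν the frequency of the light and φ the work function, the minimum energy needed to free an electron from the material.

    E = h\nu, \qquad K_{\max} = h\nu - \phi

An electron is only ejected when hν exceeds φ, which is why the effect depends on the frequency of the light and not on its intensity.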

The photoelectric effect helped to propel the then-emerging concept of the wave-particle duality of light: light simultaneously possesses the characteristics of both waves and particles, each being manifested according to the circumstances. The effect was impossible to understand in terms of the classical wave description of light, because the energy of the emitted electrons did not depend on the intensity of the incident radiation but only on its frequency.

Danish physicist Niels Bohr discovered that around each atomic nucleus there are several orbits of electrons and that the state of an atom changes because electrons move between orbits. Bohr’s model explained, at the (sub)atomic level, many important properties of the quantized absorption and emission of light that Planck and Einstein had postulated. But while Bohr and other scientists could have tried to apply the laws of Newton to the orbits of electrons around the atomic nucleus, Bohr and others such as Werner Heisenberg and Enrico Fermi demonstrated that electrons could be both waves and particles at the same time. Heisenberg showed that you cannot precisely know both the position of a particle and its momentum at the same time. The new quantum physics was so different and unfamiliar compared to the classical physics of the time that it led Einstein to his famous statement that God does not play dice.

Not only matter and energy but space itself may be discrete in nature. Loop quantum gravity is a theory that attempts to merge quantum mechanics and general relativity. Unlike string theory, which aims for a unified theory that incorporates the four fundamental forces (gravity, electromagnetism, the weak nuclear force and the strong nuclear force), loop quantum gravity focuses on quantum gravity only. According to Einstein’s general relativity theory, gravity is not a force but a fundamental property of spacetime itself. Loop quantum gravity is an attempt to develop a quantum theory of gravity based directly on this geometric formulation of spacetime. To do this, loop quantum gravity quantizes gravity in a way similar to the quantization of photons in the quantum theory of electromagnetism and the discrete energy levels of atoms. The crucial difference between photons (quanta of the electromagnetic field) and quanta of gravity is that photons exist in space, whereas the quanta of gravity constitute space themselves. The location of a single quantum of space is not defined with regard to something else but only by the links and relations it expresses. The curvature of this network of quanta, a spin network, is the curvature of spacetime in Einstein’s theory and hence the strength of the gravitational field. Just as quanta of light are constantly in flux, the quanta of space are also dynamic in the relations they express. If space is a spin network, spacetime is generated by processes in which these spin networks transform into one another, described by sums over spin foams. Underneath the calm appearance of the macroscopic reality surrounding us is the microscopic swarming of quanta, which generates space and time. We return to this topic of representation later in this chapter when we discuss hypergraphs as a way to represent reality.
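One often-quoted result of loop quantum gravity, included here for reference rather than taken from the sources above, is that the area of any surface comes in discrete steps; γ is the Barbero-Immirzi parameter, ℓ_P the Planck length, and the j_i are half-integer spins labelling the links of the spin network that puncture the surface.

    A = 8\pi\gamma\,\ell_P^2 \sum_i \sqrt{j_i(j_i+1)}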

Computing is universal

The recent attention of physicists to computational theories that model the behavior and state of matter, energy and spacetime is based on three insights that have shown the breadth and depth of computation in our everyday life.

The first is that computation can describe all things. The first modern analog computer was a tide-predicting machine invented by Sir William Thomson in 1872. With this special-purpose computer the ebb and flow of sea tides and the irregular variations in their heights could be calculated. These calculations had previously been done manually, required a great deal of labor and were very error prone, so Thomson’s invention was a real time saver. Early in the 20th century special-purpose computers were used for military purposes, such as solving the problem of firing a torpedo at a moving target. Arguably the most famous example is the work of Alan Turing, the British scientist and computer science pioneer who helped to develop a machine to break the German Enigma code. General Dwight D. Eisenhower told the British intelligence chief in July 1945 that Ultra, as the intelligence from the decrypted messages was called, “saved thousands of British and American lives and, in no small way, contributed to the speed with which the enemy was routed and eventually forced to surrender”. In the decades after World War 2 we discovered that we can describe almost anything with computation: books, music, video, 3D objects, human DNA, emotions, social networks, global supply chains and manufacturing, and even artificial intelligence.

The second insight was that all things can compute. The first computing devices designed by Charles Babbage in the early 19th century were mechanical computers built from components such as levers and gears; his Analytical Engine took input from punched cards. Early digital computers were electromechanical: electric switches drove mechanical relays to perform the calculations. The Z2, created by Konrad Zuse in 1939, was one of the earliest examples of an electromechanical computer. Electromechanical computers, however, had a low operating speed and were eventually superseded by much faster all-electronic computers, first built with vacuum tubes, such as Colossus, a set of computers developed by British code breakers.

The University of Manchester’s experimental Transistor Computer is widely believed to be the first transistor computer when it came into operation in November 1953, but it was IBM that shipped the IBM 608, the world’s first commercial transistor calculator, in December 1957. The first transistors were made of germanium, but they were superseded in the mid-1950s by transistors made of silicon, which could operate at higher temperatures and were less expensive due to the greater abundance of silicon. The cost of early digital computers had been high and the computers were not very reliable because they had many separate components. After the invention of the transistor in 1947, the expectation was that many transistors could soon be integrated onto small circuits to offer cheap, reliable computation in a fraction of the space, but this proved challenging. Robert Noyce invented the first monolithic integrated circuit chip at Fairchild Semiconductor in 1959, starting the global semiconductor industry that today powers billions of computers, smartphones, tablets and Internet of Things devices with silicon-based chips.

Instead of germanium, silicon or other inorganic substrates, organic material can also be used as a substrate for computation. In 1994, Leonard Adleman demonstrated a proof-of-concept use of DNA as a form of computation. The TT-100 was a test tube filled with 100 microliters of a DNA solution, with which Adleman managed to solve an instance of the directed Hamiltonian path problem, a close relative of the travelling salesman problem: given a network of cities connected by one-way roads, find a route that visits each city exactly once. For this purpose different DNA fragments were created, each representing a city that had to be visited. By combining the different DNA fragments in a test tube, different travel routes formed in the test tube, a kind of massively parallel computer that computes all possibilities at once. At the end, through chemical reactions, the invalid routes were eliminated and the valid routes remained. DNA computing can be faster and smaller in size for specialized problems due to its ability to perform a huge number of computations in parallel, but it typically has a slow processing speed: it takes minutes or hours, compared to milliseconds, before a DNA computer gives an answer.
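As an illustration of the search problem Adleman encoded in DNA, here is a minimal sketch in Python with a made-up seven-city road map (not Adleman’s actual instance): an electronic computer has to check candidate routes one after another, whereas the DNA computer forms all of them at once in the test tube.

    from itertools import permutations

    # A small, made-up directed graph: city -> set of cities reachable by a one-way road.
    roads = {
        "A": {"B", "C"},
        "B": {"C", "D"},
        "C": {"D", "E"},
        "D": {"E", "F"},
        "E": {"F", "G"},
        "F": {"G"},
        "G": set(),
    }

    def hamiltonian_paths(roads, start, end):
        """Yield every route from start to end that visits each city exactly once."""
        middle = [city for city in roads if city not in (start, end)]
        for order in permutations(middle):                       # try every ordering of the inner cities
            route = [start, *order, end]
            if all(b in roads[a] for a, b in zip(route, route[1:])):
                yield route

    for route in hamiltonian_paths(roads, "A", "G"):
        print(" -> ".join(route))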

Much faster than DNA computing and even conventional electronic computation would be optical computing. Optical computing uses photons, generated by lasers, for computation. As photons travel at the speed of light, optical computers would be faster than their electronic counterparts, but controlling and switching the light path is challenging. Unlike the fabrication of silicon-based integrated circuits, it remains difficult to fabricate optical elements that are very small: most research systems are bench-top-sized rather than chip-sized. Photonic computers are nevertheless getting increasing attention as the exponential advance of current technology becomes harder to sustain and power dissipation per silicon chip rises. Optical computing systems may be able to send data between components 100 times faster while using just 10% of the energy.

The third and most remarkable insight ties the first two together into a new view: not only can computation describe all things and all things compute, computation is also universal. Alan Turing proved that any computation executed by one finite-state machine writing on an infinite tape (later known as a Turing machine) can be done by any other finite-state machine on an infinite tape, no matter what its configuration.

Turing did not build a machine; he created an abstract mathematical model of all machines. Any algorithm ever written, from a simple multiplication to the latest computer game, can run on a Turing machine. While this may seem obvious to us today, the thinking in the 1930s was that computers could only be purpose-built for one task. Turing proved that computers could be general purpose, and to this day the Turing machine is the most widely used model of computation and the basis of all computers, tablets, smartphones and other computing devices.

The Turing machine has four main parts:

  • An infinite roll of tape. The tape can be written on, and written symbols can be erased or rewritten with different ones. The tape corresponds to the data in short-term temporary memory or long-term permanent storage in your computer.
  • A read/write head. The head can move left and right along the tape and read, write, erase or rewrite symbols on it, like the head of a hard disk or the memory addressing in a memory chip.
  • A state register. The state register stores the current state of the Turing machine; together with the symbol currently under the head it determines what the machine does next.
  • An action table. Given the current state and the symbol under the head, the action table describes the next symbol to be written, where the head should move and what the next state is. The action table is the program, the instructions that the computer runs.

Because the tape of a Turing Machine could contain the action table of another Turing Machine, Alan Turing had created the Universal Turing Machine and proved that computation is universal.
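As a minimal sketch (in Python rather than Turing’s original notation), the four parts map directly onto a few lines of code; the action table below is a hypothetical two-state example that simply inverts every bit on the tape.

    def run_turing_machine(tape, action_table, state="start", steps=100):
        """Simulate a Turing machine: a tape (dict position -> symbol), an action table,
        a state register and a read/write head."""
        head = 0
        for _ in range(steps):
            symbol = tape.get(head, "_")                 # "_" marks a blank cell of the infinite tape
            if (state, symbol) not in action_table:      # no matching rule: the machine halts
                break
            write, move, state = action_table[(state, symbol)]
            tape[head] = write                           # write a new symbol under the head
            head += 1 if move == "R" else -1             # move the head right or left
        return tape

    # Hypothetical action table: invert every bit on the tape, halt at the first blank.
    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
    }

    tape = dict(enumerate("1011"))                       # initial tape contents: 1 0 1 1
    print(run_turing_machine(tape, flip_bits))           # -> {0: '0', 1: '1', 2: '0', 3: '0'}

Because the action table is just data on a tape, one machine can read and execute the table of another, which is exactly the step that turns a Turing machine into a Universal Turing Machine.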

Emergent complexity

Could the universe be a gigantic Turing machine? In the 1940s, Stanislaw Ulam and John von Neumann discovered a model of computation, which they called cellular automata, that is a variation on the Turing machine. A cellular automaton has the following characteristics:

  • Cells exist on a grid and each cell can take on one of a finite number of states, e.g. ‘on’ or ‘off’. The grid can be viewed as a two-dimensional Turing tape.
  • The initial state (time t=0) is set by assigning a value to each cell in the grid.
  • The next state (t=t+1), or generation, is determined by some fixed rule that takes into account the current state of the cell and its neighboring cells.

Cellular automata demonstrated how complex patterns can emerge by applying the same basic rule repeatedly. In the 1970s John Conway created a zero-player game called the Game of Life using cellular automata, in which the player chooses an initial state and then observes the evolution of the cells. Each cell can be in one of two possible states, dead or alive, and the player sets the values of the cells at the beginning. The rules that determine the next state are the following:

  • Any live cell with fewer than two live neighbours dies (underpopulation)
  • Any live cell with two or three live neighbours lives on to the next generation
  • Any live cell with more than three live neighbours dies (overpopulation)
  • Any dead cell with exactly three live neighbours becomes alive (reproduction).

The Game of Life triggered a lot of interest from scholars in various fields, such as computer science, biology, economics and physics, when Conway released it. Researchers were inspired by how such complex patterns could emerge from such simple rules and experimented with modifying the initial state and rules, aiming to model and understand processes in their own disciplines. You can try it out yourself here: https://playgameoflife.com/
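For readers who prefer code to the interactive page, a minimal sketch of Conway’s rules in Python (storing only the set of live cells, so the grid is effectively unbounded) could look as follows:

    from collections import Counter

    def step(live_cells):
        """One generation of Conway's Game of Life; live_cells is a set of (x, y) coordinates."""
        # Count, for every candidate cell, how many of its eight neighbours are alive.
        neighbour_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # A cell is alive in the next generation if it has exactly three live neighbours
        # (reproduction) or two live neighbours and is already alive (survival).
        return {
            cell
            for cell, count in neighbour_counts.items()
            if count == 3 or (count == 2 and cell in live_cells)
        }

    # A "glider", a well-known pattern that travels diagonally across the grid forever.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for generation in range(4):
        print(generation, sorted(glider))
        glider = step(glider)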

When Konrad Zuse put forward the simulation hypothesis, in which reality is computed by the universe, he was inspired by cellular automata and their inherent ability to create spontaneous order from an initially disorganized system (self-organization) and behavior that is not seen in the individual parts but in the system as a whole (emergence). Planets, stars, entire galaxies are the result of countless generations of computations at the sub-atomic level since the Big Bang.

Cognitive and computer scientist Marvin Minsky proposed in his Society of Mind theory, published in 1986 [book reference], that minds are computed on brains and that the human mind and other cognitive systems consist of a vast collection of individually simple agents that together produce intelligent behavior. The power of the Society of Mind theory is that no small human is required to sit in our brain and control our thoughts. Consciousness and intelligence are emergent properties of interacting cells, or agents:

What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle. —Marvin Minsky, The Society of Mind, p. 308

Around the same time, the British-born inventor, scientist and entrepreneur Stephen Wolfram started to do research on cellular automata, categorizing different automata and exploring their properties in depth. In 2002 Wolfram summarized the findings of many years of research in his lengthy book “A New Kind of Science” [https://science.slashdot.org/story/02/05/21/146210/a-new-kind-of-science]. Wolfram’s central thesis is that an entirely new method is needed to understand complex systems, because traditional mathematics based on equations has failed to describe such systems adequately. Wolfram claims that simple algorithms or rules based on the theory of cellular automata can provide fundamental insights into complex system behavior.

In the first chapters of his book he outlines a systematic analysis of the 256 simplest rule sets for the most basic cellular automata and identifies different classes of complex systems, ranging from simple and predictable to highly irregular shapes and unpredictable sequences. Rule sets involving multiple colors instead of black and white, rule sets that update only one grid square at a time instead of all cells, rule sets that embody full-blown Turing machines, automata that perform calculations, automata that are multi-dimensional entities: he explores these variants of ever-increasing complexity in depth and in detail to argue that cellular automata are a viable alternative to mathematics for modelling the inherent complexity of the natural world.
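To make those 256 simplest rule sets concrete, here is a minimal Python sketch (with wrap-around edges) of one of them, Rule 30, which Wolfram uses as a prime example of a trivially simple rule producing complex, seemingly random behavior:

    def elementary_ca(rule_number, width=64, steps=32):
        """Print the evolution of one of the 256 elementary (one-dimensional, two-color) automata."""
        # The 8-bit binary expansion of the rule number gives the new cell value
        # for each of the 8 possible (left, centre, right) neighbourhoods.
        rule = [(rule_number >> i) & 1 for i in range(8)]
        cells = [0] * width
        cells[width // 2] = 1                            # start from a single live cell
        for _ in range(steps):
            print("".join("#" if c else "." for c in cells))
            cells = [
                rule[(cells[(i - 1) % width] << 2) | (cells[i] << 1) | cells[(i + 1) % width]]
                for i in range(width)
            ]

    elementary_ca(30)   # Rule 30: a very simple rule with complex, seemingly random output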

In the second half of his book Wolfram applies his zoo of cellular automaton species to various real-world topics such as crystal growth, material fracture, Brownian motion (the random movement of small particles in liquids), biological growth (the fractal patterns we see in plants and animals, both microscopic and at large scale), social and economic phenomena such as the growth of cities or the behavior of markets, and even fundamental physics itself, a topic we return to shortly. Although Wolfram provides many thought-provoking ideas and examples, a common criticism is that his work is not scientific: it is impossible to confirm or reject his theory with experiments.

In everyday life we expect that simple systems display simple behaviors while complex systems display complex behaviors. We tend to correlate the complexity of a system with the number of business rules or the number of lines of code of the computer program we need to model its behavior and state. One of the core discoveries Wolfram makes is not only that simple cellular automata can give rise to complex behavior, but that there is no simple relation between a computational structure and the complexity of its behavior. A simple computational structure can give rise to behaviors of any degree of complexity. This is known as the Principle of Computational Equivalence, and it implies that there are fundamental limitations on our ability to understand and predict the behavior of systems. If a very simple system can demonstrate behavior that is computationally equivalent to that of a very complex system, our understanding of how natural phenomena work has no solid foundation. What humankind has discovered are the simple behaviors that can be described by simple structures, patterns and formulas, the tip of the iceberg. Since patterns and formulas are our way to predict the future of natural phenomena, the consequence is that most of the universe may be unpredictable. The universe in itself could be deterministic, but in practice we can never figure out its pattern [http://spacecollective.org/Spaceweaver/2493/The-Principle-of-Computational-Equivalence]. To truly understand the universe we would have to run it from its beginning to its end, or as Wolfram says:

“And indeed in the end the PCE encapsulates both the ultimate power and the ultimate weakness of science. For it implies that all the wonders of the universe can in effect be captured by simple rules, yet it shows that there can be no way to know all the consequences of these rules, except in effect just to watch and see how they unfold.” – Stephen Wolfram.

The Principle of Computational Equivalence explains how complex, seemingly unpredictable behavior in nature can arise from computational structures of any degree of complexity. There are two entangled ways of studying a computational system: how the system behaves over time and what the state of the system is. In the next section we look deeper into how we might represent the state of the universe at a given time.

Evolving hypergraphs

To build on theories of how the state of the universe can be mathematically represented, it is useful to start with discrete mathematics, and graph theory in particular. A graph is a pair G = (V,E) where V is the set of vertices (also called nodes) and E is the set of edges (or links). The first paper on graph theory, by Leonhard Euler on the Seven Bridges of Königsberg, was published in 1736. The city of Königsberg in Prussia (now Kaliningrad, Russia) was set on both sides of the Pregel River and included two large islands, Kneiphof and Lomse, which were connected to each other, or to the two mainland portions of the city, by seven bridges. The problem was to devise a walk through the city that would cross each of those bridges once and only once. Euler showed that the choice of route inside each land mass or island had no influence; the only important feature is the sequence of bridges crossed. Euler abstracted the problem into a set of vertices (land masses) and a set of edges (bridges) and proved that no such walk exists: a walk that crosses every bridge exactly once is only possible if at most two land masses have an odd number of bridges, and in Königsberg all four did. Euler’s solution of the Königsberg bridge problem, known as an Euler walk, is considered to be the first theorem of graph theory and the first true proof in the theory of networks. Today graph theory is used to model many types of relations and processes in physical, biological, social and information systems. If the universe is built of discrete bits of matter, energy and space, then discrete mathematics and hypergraphs may help us further in our understanding, like the land masses and bridges of Euler.
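Euler’s degree argument can be stated in a few lines of Python; the sketch below uses the usual textbook encoding of the four land masses and seven bridges.

    from collections import Counter

    # The four land masses of Königsberg and the seven bridges connecting them:
    # "N" and "S" are the two river banks, "K" is the Kneiphof island, "L" is the Lomse island.
    bridges = [
        ("N", "K"), ("N", "K"),     # two bridges between the north bank and Kneiphof
        ("S", "K"), ("S", "K"),     # two bridges between the south bank and Kneiphof
        ("N", "L"), ("S", "L"),     # one bridge from each bank to Lomse
        ("K", "L"),                 # one bridge between the two islands
    ]

    # The degree of a vertex is the number of bridges touching that land mass.
    degree = Counter(land for bridge in bridges for land in bridge)

    # Euler's criterion: a walk crossing every bridge exactly once exists only if the graph
    # is connected and at most two land masses have an odd number of bridges.
    odd_degree = [land for land, d in degree.items() if d % 2 == 1]
    print(dict(degree))                                  # {'N': 3, 'K': 5, 'S': 3, 'L': 3}
    print("Euler walk possible:", len(odd_degree) <= 2)  # -> False: all four are odd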

Richard Feynman discovered in 1948 how graphs could be used to represent the behavior and interaction of subatomic particles in quantum fields. These so-called Feynman diagrams give a simple visualization of otherwise abstract formulas that would be hard to comprehend. A vertex in a Feynman diagram is an event that happens in spacetime; particles are represented as edges. Two particles either merge in an event, leading to a new particle, or a particle splits into two particles with different properties. Complex behavior seen in particle accelerators such as those at CERN, where particles are accelerated to high speeds, can be represented as a sequence of Feynman diagrams.

Feynman diagrams can be seen as representations of the behavior and interaction of individual particles through time. The mathematician Roger Penrose developed the concept of spin networks to represent the states and interactions between particles and fields in quantum mechanics. A spin network is a graph where each edge is the world line of a unit, either an elementary particle or a compound particle. The world line can be seen as the path through spacetime that a particle follows. Edges are connected to vertices. A vertex may be viewed as an event where either a unit splits into two units or two units merge into one, similar to a Feynman diagram. A spin network describes the interrelationships between multiple units in a quantum field. Penrose [Roger Penrose, Angular momentum: an approach to combinatorial space-time, in Quantum Theory and Beyond, ed. T. Bastin, Cambridge University Press, Cambridge, 1971, pp. 151–180] wanted to get rid of the continuum and build up physical theory from discreteness, to close the gap between classical physics (continuous) and quantum physics (discrete). When zooming out on a spin network, the discrete nature of quantum physics disappears and makes way for the continuous nature of the universe as we experience it.

In Loop Quantum Gravity (LQG), the spin network representation is used to describe the quantum state of the gravitational field. The crucial difference between photons (quanta of the electromagnetic field) and the nodes of the graph (quanta of gravity) in LQG is that photons exist in space, whereas the quanta of gravity constitute space themselves: the quanta of gravity are not in space, they are themselves space. If you step from grain to grain along the links until you return to the grain where you started, you have made a ‘loop’. The mathematics of the theory determines the curvature of every closed circuit on a spin network, and this makes it possible to evaluate the curvature of spacetime, and hence the strength of the gravitational field, from the structure of a spin network [Rovelli]. Zooming out on the gravitational spin network, quantum space becomes continuous in nature.

Einstein showed that space and time should be treated together as spacetime. The curvature of spacetime determines the strength of the gravitational field, and particles with mass follow the curvature of space. If spin networks describe the structure of space, what structure does spacetime have? Imagine that we take the spin network graph and move every node; every node then draws a line through time. Every node can also open up into two or more nodes, just as a particle can split into two or more particles. The resulting structure is called a spin foam, by analogy with the faces of soap bubbles. The spin foam of a given boundary represents all the possible spacetimes and all possible trajectories that exist. The probability of seeing particles come out one way or another can be computed by summing over all possible spacetimes.

Reality as we know it, according to Rovelli [Rovelli], is made entirely from quantum fields that live on top of each other: space is nothing more than a field made of quanta, time emerges out of this field, and on top of it there are the fields of the quanta of light and matter.

Wolfram’s more recent investigations [https://writings.stephenwolfram.com/2020/04/finally-we-may-have-a-path-to-the-fundamental-theory-of-physics-and-its-beautiful/] seem to indicate that complex, dynamically evolving hypergraphs [2] can mimic many of the features of our universe. Wolfram believes that hypergraphs could resolve current disputes about which speculative theories are the best bets for explaining fundamental physics.

Projected experience

The second law of thermodynamics states that the entropy of an isolated physical system can never decrease. We see its effects everywhere around us: when we put a tea bag in a glass of hot water, the tea always diffuses and colors the water green or brown, never the other way around. If we shoot a balloon filled with water, it explodes in a burst of water; we never see the water spontaneously recombining and filling the balloon again. The second law, however, does not seem to hold for black holes. Black holes are objects with so much mass that their gravity is so strong that not even light can escape from them. When matter disappears into a black hole, its entropy would seem to be gone for good, and the second law appears to be violated.

In the 1970s, Demetrios Christodoulou, a graduate student of Wheeler at Princeton, and Stephen Hawking of the University of Cambridge independently proved that in various processes, such as black hole mergers, the total area of the event horizon never decreases. Jacob Bekenstein in 1972 generalized the second law by arguing that the sum of black hole entropies and the ordinary entropy outside black holes cannot decrease. The generalized second law can cope with Stephen Hawking’s discovery in 1974 that black holes are not fully black but actually radiate energy: the entropy of the Hawking radiation more than compensates for the decrease in black hole entropy, preserving the generalized second law, whereas the area theorem of Christodoulou and Hawking on its own would be violated because the event horizon shrinks as the black hole radiates.

But it is another aspect of what Bekenstein and Hawking found that is truly astonishing: the maximum possible entropy of a region depends on its boundary area instead of its volume. Normally, one would expect entropy to scale not with the square of the radius of an object but with its cube. If we build a computer, we expect the maximum information we can store in it to depend on its volume: if the computer is two times as wide, two times as deep and two times as high, we expect it to store 8 times as much information. Bekenstein and Hawking discovered instead that the maximum possible information content depends on the boundary area.
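In formulas, this is the standard Bekenstein-Hawking result, quoted here for reference: the entropy of a black hole grows with the area A of its event horizon, roughly one unit of entropy for every four Planck areas.

    S_{BH} = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4\,\ell_P^2}, \qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}}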

This surprising result has a natural explanation if the holographic principle, first proposed by Nobel laureate Gerard ‘t Hooft of Utrecht University and elaborated by Leonard Susskind, is true [https://www.scientificamerican.com/article/information-in-the-holographic-univ/]. In the everyday world, a hologram is a special kind of photograph that generates a full 3D image when it is illuminated; all the information describing the 3D scene is encoded in the pattern on the 2D piece of film. The holographic principle states that a physical theory defined on the 2D boundary of a region completely describes the 3D physics inside it. If a 3D system can be fully described by a physical theory operating solely on its 2D boundary, one would expect the information content of the 3D system not to exceed that of the 2D boundary. ‘t Hooft and Susskind effectively resolved the black hole information paradox by reducing the number of dimensions in which a black hole is described. They reasoned further that if it is possible to describe black holes in one dimension less, this should also be possible for other gravitational systems, which contain less information; in particular for models of quantum gravity.

‘t Hooft and Susskind published their ideas in the 1990s, but the big question of course was what such a lower-dimensional model of a gravitational system would look like exactly. This question remained unanswered until Juan Maldacena published his AdS/CFT correspondence in 1997. Maldacena described both sides of his duality in detail: on one side a string theory (a model of quantum gravity) in ten dimensions, on the other side a conformal field theory (CFT) in four dimensions. The surprise turned out to be that not just one dimension disappeared but six: five on the string theory side and one on the CFT side. This mind-bending idea shook the world of physics because it elegantly combines relativity and quantum mechanics, encapsulating the seemingly contradictory physics of objects at both cosmic and subatomic scales. [https://www.quantumuniverse.nl/snaren-en-holografie-10-het-holografisch-principe]
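The canonical example of the correspondence, in standard notation (not spelled out in the sources above), relates a ten-dimensional string theory to a four-dimensional conformal field theory:

    \text{Type IIB string theory on } AdS_5 \times S^5 \;\;\Longleftrightarrow\;\; \mathcal{N}=4 \text{ super Yang-Mills theory in 4 dimensions}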

The four-dimensional space and time we observe in our everyday experience could thus be a projection from lower-dimensional, complex, dynamically evolving hypergraphs that give rise to space, time, energy and matter. The universe could be a cosmic simulation that is calculated, and our experience a projection, like software running on a computer to render a virtual reality environment. There is one element we still miss: the observer of the universe.

Conscious interaction

We can observe the universe because the laws of the universe have been compatible with the development of sentient life. The anthropic principle (from the Greek anthropos, ‘human’) is the philosophical premise that any data we collect about the universe is filtered by the fact that, for it to be observable at all, the universe must have been compatible with the emergence of the conscious and sapient life that observes it. Different variations on the anthropic principle exist, depending on how strongly the universe is required to bring forth consciousness [Anthropic Explanations in Cosmology (pitt.edu)]. The weak anthropic principle states that the circumstances of our universe are such that the emergence of life is possible: the fact that we can observe the universe shows that we are the result of a chain of selections or probabilities that happened to favor life. The strong anthropic principle, as proposed by John D. Barrow and Frank Tipler, posits that the universe is somehow compelled to eventually have sentient life emerge within it: the circumstances are such that the emergence of life is inevitable. Traditionally, many religions view this as the work of a Creator who designed and created the entire universe and all life in it.

The simulation hypothesis posits that we are in fact living in a simulation, created by advanced descendants who simulate their ancestors, us, in order to observe them; this is another variant of the strong anthropic principle. Philosopher Nick Bostrom popularized this simulation argument. Bostrom hypothesizes that future generations will have enormous amounts of computing power available, and one of the things these later generations could do with their super-powerful computers is run detailed simulations of their forebears. Suppose now that the simulated people are conscious and the simulation is sufficiently fine-grained that they cannot determine whether they live in a simulation or not. It is then possible to argue that we are likely among the simulated minds rather than among the original biological ones. If we don’t think we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations [Bostrom, 2003, Are You Living in a Computer Simulation?].

Bostrom further posits that at least one of the following is very likely to be true:

  1. The fraction of human-level civilizations that reach a posthuman stage is very close to zero;
  2. The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero;
  3. The fraction of all people with our kind of experiences that are living in a simulation is very close to one.

Whether we are among the simulated minds or the original biological ones does not matter if we remember the earlier principles that computation is universal and that the universe is a computer. John Wheeler suggested that reality is created by observers and that “no phenomenon is a real phenomenon until it is an observed phenomenon”, and he coined the term Participatory Anthropic Principle. Wheeler went further, suggesting that “we are participants in bringing into being not only the near and here, but the far away and long ago” [Reference: Radio Interview With Martin Redfern]. The universe we observe today is the result of countless generations of observations that eventually trace back to the birth of our universe.

The view of Wheeler echoes the thoughts of the founder of the theory of immaterialism, philosopher George Berkeley (1685-1753), that to be means to be perceived (esse est percipi). Berkeley rejects the existence of material substance and instead contends that familiar objects we see around us are ideas perceived by minds and as a result cannot exist without being perceived.

Wheeler does not deny the existence of matter but adopts the so-called Copenhagen interpretation of quantum mechanics. According to the Copenhagen interpretation, subatomic particles, prior to being measured or interacted with, do not exist in a single state and location but in a superposition described by a probability distribution (wave function) over possible outcomes. Once a particle is observed, it instantaneously collapses into a single state. The Copenhagen interpretation is the oldest and most widely held interpretation of quantum mechanics, but other interpretations exist.
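In the standard textbook notation, a simple two-state example (not specific to Wheeler) reads:

    |\psi\rangle = \alpha\,|\!\uparrow\rangle + \beta\,|\!\downarrow\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

Before measurement the particle is in the superposition |ψ⟩; upon observation it collapses to |↑⟩ with probability |α|² or to |↓⟩ with probability |β|².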

Roger Penrose [Emperor’s New Mind] asserts that human consciousness is non-algorithmic and therefore cannot be modelled by a conventional Turing machine. Penrose hypothesizes that quantum mechanics plays an essential role in understanding human consciousness: what if there are molecular structures in our brains (microtubules) that are able to adopt a superposition state, just like sub-atomic particles, and that collapse into a single state and location, leading to a conscious action that fires our neurons to think or act? Maybe, says Penrose, our ability to sustain seemingly incompatible mental states is no quirk of perception but a real quantum effect. In this view the collapse of the wave function is critical for consciousness, and it leads to physical behavior that is non-algorithmic and transcends the limits of computability, which would show how the human mind has abilities that no Turing machine can possess.

More recently, Penrose interpreted the occasions of experience as quantum state reductions occurring at the Planck scale, where gravitational spin networks encode protoconsciousness [http://www.bbc.com/earth/story/20170215-the-strange-link-between-the-human-mind-and-quantum-physics]. The ‘observer’ may not need to exist at the macroscopic level of human brains, but could be present at the Planck level of spacetime in the gravitational spin network. Penrose proposes that a quantum state remains in superposition until the difference in space-time curvature reaches a significant level. The reduction of quantum superpositions of molecular structures in our brains to classical output states then occurs through an objective threshold, influenced by a non-computable factor ingrained in fundamental spacetime. Recent findings suggest that the so-called Orchestrated Objective Reduction proposed by Penrose and Hameroff may not be biologically feasible in the microtubules that transport materials inside cells [(PDF) Penrose-Hameroff orchestrated objective-reduction proposal for human consciousness is not biologically feasible (researchgate.net)][Consciousness in the universe: A review of the ‘Orch OR’ theory – ScienceDirect]. Whether proven or not, Penrose and Hameroff give a modern view on panexperientialism [Panpsychism – Wikipedia], the view that conscious experience is fundamental and ubiquitous, that consciousness results from discrete physical events, events that have always existed in the universe as protoconscious events, acting on precise physical laws not yet fully understood. This brings us to the last section of this chapter, where we look at the limitations and criticism of digital physics theory.

Experimental validation

If reality is fundamentally discrete, then we should be able to determine spatial and temporal resolution barriers; we should be able to see the pixels in the fabric of reality. Although physicists think that there are absolute limits to what constitutes a meaningful small distance or time interval (the Planck length and the Planck time), these have to do with the limits of our current understanding of physics rather than with the kind of resolution limit we see on a pixelated screen. Recent research suggests that the true limit of meaningful intervals of time might be orders of magnitude larger than the traditional Planck time (which itself is 10^-43 seconds).
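For reference, the Planck time and Planck length follow from combining the gravitational constant G, the speed of light c and the reduced Planck constant ħ (standard values, not taken from the text above):

    t_P = \sqrt{\frac{\hbar G}{c^5}} \approx 5.4 \times 10^{-44}\ \text{s}, \qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \text{m}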

One of the experiments that actually tries to determine whether we live in a simulated reality is the Holometer, based at the US Department of Energy’s Fermilab in Illinois. The Holometer consists of a small array of lasers and mirrors and a control room to capture the data, and is able to measure movements that last only a millionth of a second and distances that are a billionth of a billionth of a meter, a thousand times smaller than a single proton. So far the Holometer experiment has not detected holographic noise, the quantum fluctuations of space that would be expected by theory. If this quantum jitter exists, it is either much smaller than the Holometer can detect, or it is moving in directions the current instrument is not configured to observe. [Holometer rules out first theory of space-time correlations (phys.org)]

Another way to test the simulation hypothesis would be to look at the limits of information storage and calculation, to understand whether there is sufficient storage and computing power available in the universe. Extrapolating from the current state of digital computation, we know that a simulation would have to make approximations to save on storage and calculation overhead. A physical system can be described using a finite number of bits. Each particle in the system acts like a logic gate of a computer: its spin axis can point in one of two directions, thereby encoding a bit, and can flip over, thereby performing a simple computational operation. The system is also discrete in time: it takes a minimum amount of time to flip a bit. The time it takes to flip a bit, t, depends on the amount of energy you apply, E; the more energy you apply, the shorter the time can be, according to physical theory. Seth Lloyd and Jack Ng calculated the memory capacity and operations per second for different types of physical systems; see the table below, taken from [Black Hole Computers – Scientific American].


  • Conventional computer (2020): memory capacity 10^12 bits; 10^14 operations per second (single)
  • Ordinary matter (one kilogram occupying a volume of one liter): memory capacity 10^31 bits; 10^20 operations per second (parallel)
  • Black hole: memory capacity 10^16 bits; 10^51 operations per second (serial)
  • Universe: memory capacity 10^123 bits; 10^106 operations per second (parallel), roughly 10^123 operations during its lifetime so far
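The relation between the energy E of a system and the minimum time needed to flip a bit, which underlies estimates of this kind, is usually stated as the Margolus-Levitin bound (a standard result, quoted here for reference):

    t \ge \frac{\pi \hbar}{2E}

so a system with average energy E can perform at most 2E/(πħ) elementary operations per second.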

Nick Bostrom makes a rough estimate of the cost of creating a realistic simulation of human history that is indistinguishable from physical reality for the human minds in the simulation: between 10^32 and 10^35 operations [3]. If we assume, as a rough approximation, that a single planetary-mass computer is able to perform 10^42 operations per second, then such a computer could simulate the entire mental history of humankind (an ancestor-simulation) in less than 10^-7 seconds. We can therefore conclude, according to Bostrom, that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations at the same time and still have the vast majority of its computing cycles left for other purposes.
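As a quick check of that last figure, using the numbers above:

    \frac{10^{35}\ \text{operations}}{10^{42}\ \text{operations per second}} = 10^{-7}\ \text{seconds}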

Bostrom himself does not say the simulation argument must be true, and in this chapter we have seen that digital physics does not care about who calculates the universe or why. The universe simply computes quantum fields, chemicals, bacteria, human beings, planets, stars, galaxies, itself, into existence from bits. Bits are not limited to matter or energy; even space itself is granular. As it computes, the universe maps out its own spacetime geometry, and on top of the gravitational quantum field it maps out the quantum fields of light and matter. The ‘observer’ does not need to exist at the macroscopic level but at the Planck level of spacetime in the gravitational spin network, continuously changing and evolving the computation and itself. When we zoom out to macroscopic scales and low energies, the discrete, lower-dimensional universe appears continuous and higher-dimensional, like a holographic movie projection. We are able to capture some of the complex behavior of the computational universe at different levels of existence in mathematical equations and try to unite these theories, but there is no way to know all the consequences of all these rules, except in effect just to watch and see how they unfold.

The elegance of digital physics is that instead of studying a natural phenomenon and subsequently discovering a law of nature or program that describes its behavior accurately, we can start by designing a new program and reverse engineer a system that actually displays the phenomenon. So instead of using computers as tools in a traditional reductionist approach, to measure with sensors and test hypotheses, we can use a constructivist approach and use computers as tools to recreate physical phenomena, aiming to minimize the error between the phenomena and the algorithms. Physics-based machine learning may one day help us to address unsolved challenges in physics, analogous to how machine learning has helped biology to understand protein folding and improve disease understanding and drug discovery [‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures (nature.com)].

Physicists are also discovering that the realm of the smallest particles is not the only place to find these fundamental laws of nature. A simple example is sound waves, the synchronized oscillations of molecules of matter. By using the rules of quantum theory, these waves themselves can be described in terms of particles. These “phonons” are elementary packets, or “quanta”, of sound, and their behavior is similar to that of photons, the quanta of light. In 2011, Israeli researchers created a sonic black hole [Physicists create sonic black hole in the lab] in a laboratory setting, in which sound waves rather than light waves are absorbed and cannot escape. They were able to maintain an event horizon for phonons for 20 milliseconds before the sonic black hole became unstable and collapsed. In the future, sonic black holes may give scientists a glimpse of Hawking radiation, the predicted thermal radiation of black holes due to quantum effects, which would cause black holes to shrink and eventually evaporate completely.

All this is part of a much larger shift in the very scope of science, from studying what is to what could be, according to Robbert Dijkgraaf, director and professor at the Institute for Advanced Study in Princeton, New Jersey [The End of Physics? | Quanta Magazine]. In the 20th century, scientists sought out the building blocks of reality: the molecules, atoms and elementary particles out of which all matter is made; the cells, proteins and genes that make life possible. In the 21st century we are starting to explore all that can be made with these building blocks: physical, chemical, biological, social and informational. We have gone through a process of deconstructing reality down to the bits and algorithms that make it up; in the next chapter we will start reconstructing reality from these building blocks…

[1] Indian atomism, e.g. the Nyaya-Vaisesika school, may date back even further than Greek atomism. In Vaisesika physics, atoms had 24 different possible qualities, divided between general and specific properties. Furthermore, Nyaya-Vaisesika physics had elaborate rules on how atoms first combine in pairs and triplets to form the smallest units of visible matter; this is remarkable given that in the standard model of physics, quarks combine in pairs and triplets to form the particles that make up the atomic nucleus.

[2] The French mathematician Claude Berge was the first to define hypergraphs, in the early 1970s, as a generalization of graphs in which an edge can connect any number of vertices rather than just two. Hypergraphs share many of the properties of ordinary graphs but have the advantage that they are not restricted to (curved) 2D planes and can represent more complex geometries and relationships between bits.

[3] 100 billion humans × 50 years/human × 3 million secs/year × 10^14 – 10^17 operations in each human brain per second ≈ 10^32 – 10^35 operations.
