vol VI: Essays
Essay 12: Why do we observe a quantized Universe? (2009)
1. Abstract
2. Computer networks
3. The mathematical theory of communication
4. Turing machines and ‘logical continuity’
5. Communication and quantum mechanics
6. Coding delay and special relativity
7. ‘Spooky action at a distance’
8. Local gauge invariance in a network
9. A transfinite symmetric network
10. Formalism and the physical world
11. Creation
12. Annihilation
13. Determinism and indeterminism
14. The distinction between past and future
15. Quantum computation
16. Evolution
17. Conclusion: ‘The unreasonable effectiveness of mathematics’
1. Abstract
Why do we observe a quantized Universe? Here I propose that it is because we and every other entity in the Universe are parts of a digital computer network. We begin by modelling the Universe as a finite computer network like the internet. We then extend this model mathematically to a network with a countable infinity of fundamental processes corresponding to the computable functions represented by halting Turing machines. Mathematical continuity is replaced by the more powerful notion of logical continuity, implemented formally by mathematical proof and practically by symbolic computing machines. Tanenbaum: Computer Networks, Alan Turing: On Computable Numbers, with an application to the Entscheidungsproblem
2. Computer networks
Since all information is encoded physically, practical computer networks are built of a physical layer which correlates physical states or signals with the information in messages. This hardware layer is driven by various strata of software. A stable network requires error free communication so that the first software layer in practical networks is usually devoted to error detection and correction. Rolf Landauer: Information is a Physical Entity
An ‘atomic’ communication is represented by the transmission of a single packet from one source to another. Practical point-to-point communication networks connect many sources, all of which are assigned addresses so that addressed packets may be steered to their proper recipient. This ‘post office’ work is implemented by further network layers.
Each subsequent software layer uses the layer beneath it as an alphabet of operations to achieve its ends. The topmost layer, in computer networks, comprises human users. These people may be a part of a corporate network, reporting through further layers of management to the board of an organization. By analogy to this layered hierarchy, we may consider the Universe as a whole as the ultimate user of the universal network.
Processes in corresponding layers (‘peers’) of two nodes in a network may communicate if they share a suitable protocol. All such communication uses the services of all layers between the peers and the physical layer. These services are generally invisible or transparent to the peers unless they fail. Thus two people in conversation are generally unaware of the huge psychological, physiological and physical complexity of the systems that make their communication possible.
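The layering described above can be sketched in a few lines of code. The toy stack below is a minimal sketch: the layer names and encodings are illustrative assumptions, not drawn from any real protocol suite. A message passes down through the sender's layers and up through the receiver's, each decode inverting its encode, so the intermediate layers remain transparent to the communicating peers.

```python
import json

# Minimal sketch of a layered stack (layer names and encodings are illustrative).
class Layer:
    def __init__(self, name, encode, decode):
        self.name, self.encode, self.decode = name, encode, decode

stack = [
    Layer("application", lambda m: m, lambda m: m),
    Layer("transport",                        # 'post office' layer: addressing
          lambda m: {"addr": "B", "payload": m},
          lambda p: p["payload"]),
    Layer("physical",                         # hardware layer: bytes on a wire
          lambda p: json.dumps(p).encode(),
          lambda b: json.loads(b.decode())),
]

def send(message):
    """Pass a message down A's layers, across the wire, and up B's layers."""
    signal = message
    for layer in stack:                       # down through the sender's stack
        signal = layer.encode(signal)
    for layer in reversed(stack):             # up through the receiver's stack
        signal = layer.decode(signal)
    return signal

print(send("hello, peer"))                    # the lower layers stay transparent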
3. The mathematical theory of communication
The mathematical theory of communication shows that we can make communication error free by coding our messages into packets that are so far apart in message space that the probability of their confusion is negligible. Shannon sought the limits of error free communication over noiseless and noisy channels. The theory he developed is now well known and lies at the heart of communication networks worldwide Claude Shannon: Communication in the Presence of Noise, Claude E Shannon: A Mathematical Theory of Communication, Khinchin: Mathematical Foundations of Information Theory
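For reference, the limit Shannon derived for a channel of bandwidth B carrying a signal of power S in the presence of white noise of power N is the well known capacity formula, and his coding theorems guarantee that any transmission rate below C can be achieved with arbitrarily small probability of error:

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second.}
\]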
The validity of these strategies is illustrated by our current ability to send gigabytes of information error free over noisy phone lines. The quantization of communication at the microscopic level supports the hypothesis that our world is a communication network that has evolved to resist error Wojciech Hubert Zurek: Quantum origin of quantum jumps: breaking of unitary symmetry induced by information transfer and the transition from quantum to classical.
A system that transmits without errors at the limiting rate C predicted by Shannon’s theorems is called an ideal system. Some features of an ideal system, illustrated numerically in the sketch following this list, are:
1. To avoid error there must be no overlap between signals representing different messages. They must, in other words, be orthogonal, as with the eigenfunctions of a quantum mechanical observable.
2. Such ‘basis signals’ may be chosen at random in the signal space, provided only that they are orthogonal. The same message may be encoded into any satisfactory basis, provided that the transformations used by the transmitter and receiver to encode the message into the signal and decode the signal back to the message are inverses of one another.
3. The signals transmitted by an ideal system are indistinguishable from noise. The fact that a set of physical observations looks like a random sequence is not therefore evidence for meaninglessness. Until the algorithms used to encode and decode such a sequence are known, nothing can be said about its significance.
4. Only in the simplest cases are the mappings used to encode and decode messages linear and topological. For practical purposes, however, they must all be computable with available machines.
5. As a system approaches the ideal, the length of the transmitted packets, the delay at the transmitter while it takes in a chunk of message for encoding, and the corresponding delay at the receiver, increase indefinitely.
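The following minimal numerical sketch illustrates points 1 to 3. The signal dimension, number of messages and noise level are illustrative assumptions: a randomly chosen orthonormal basis serves as the set of signals, each signal looks like noise, and decoding by largest inner product recovers the message despite added noise.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_messages = 64, 8                       # illustrative sizes

# Point 2: basis signals chosen at random, then orthonormalized (QR).
basis, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
signals = basis[:, :n_messages]               # one orthonormal signal per message

# Point 1: distinct signals have (near-)zero overlap.
assert np.allclose(signals.T @ signals, np.eye(n_messages), atol=1e-10)

# Point 3: each signal looks like noise, yet decoding by the largest
# inner product recovers the message even when genuine noise is added.
sent = 5
received = signals[:, sent] + 0.3 * rng.normal(size=dim)
decoded = int(np.argmax(signals.T @ received))
print(f"sent message {sent}, decoded message {decoded}")
```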
4. Turing machines and ‘logical continuity’
Classical physics, from the time of Aristotle, has assumed that the physical Universe is continuous. Democritus, it is true, postulated discrete atoms, but even these moved in a continuous space. This is a natural assumption, given the continuous appearance of macroscopic spatial motion. Aristotle
Yet, as a matter of fact, all observations and discussions of the Universe are quantized. Experimental physics revolves around classifying (‘binning’) and counting events. When we observe the output of physicists and mathematicians (‘the literature’) we see that it too is quantized, into discrete volumes, articles, words and symbols, like this.
Shannon showed that appropriate coding enables error free communication, but his work did not reveal the codes to be used. The search for optimal codes has involved much work and continues. We can be certain, however, that encoding and decoding processes must be deterministic so that the original message can be recovered exactly. These processes must therefore be implemented by deterministic computers, which we model with Turing machines. Hill: A First Course in Coding Theory, Davis: Computability and Unsolvability
We are thus led to introduce a new understanding of continuity, logical continuity, based on the mathematical notions of proof and computation. The archetype of logical continuity, corresponding to a continuous function in analysis, is a proof: a logically watertight connection between some hypothesis (say Euclid's axioms) and some conclusion (eg Pythagoras' theorem) that can be executed by a suitably programmed deterministic computer van Heijenoort. A halting Turing machine is a logical continuum, moving deterministically from an initial state to a final state.
From a practical point of view, logical continuity takes precedence over classical continuity. Even our practical mathematical understanding of classical continuity rests on logical processes (like the epsilon-delta argument) which assume that the points of a continuous line are so crowded together that we can always find another point between any two. Hille: Analytic Function Theory
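For definiteness, the epsilon-delta argument alluded to here is itself a purely logical procedure: a function f is continuous at a point a when

\[
\forall \varepsilon > 0 \ \exists \delta > 0 : \ |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon .
\]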
This understanding of continuity as proximity motivated Cantor’s search for the cardinal of the continuum. Using logical continuity, Cohen has shown that Cantor’s approach, using a theory based on sets of discrete elements, can say nothing about classical mathematical continuity. Cohen’s logically continuous argument shows that Cantor’s continuum hypothesis is independent of set theory. Dauben: Georg Cantor: His Mathematics and Philosophy of the Infinite, Cohen: Set Theory and the Continuum Hypothesis
5. Communication and quantum mechanics
The mathematical formalism of quantum mechanics assumes that the state space of the physical Universe can be represented by state vectors in a complex Hilbert space of finite or infinite dimension. The joint state of two communicating quantum systems is represented by vectors in the tensor product space of the Hilbert spaces of the constituent systems.
The continuous evolution of state vectors in an isolated quantum system is described by unitary operators on their Hilbert space governed by Schroedinger’s equation. Since such a system is isolated, however, this continuous evolution is not directly observed but is inferred from the observed success of its consequences.
Mathematically this evolution is deterministic and reversible so that we may think of it as a process of encoding the same message in different bases. The Schroedinger equation applies equally at all energies and all levels of complexity of state vectors. The only truly isolated system is the Universe as a whole, represented in its simplest state by the initial singularity Hawking & Ellis: The Large-Scale Structure of Space-time.
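In standard notation (for a time-independent Hamiltonian H), the deterministic, reversible evolution referred to here is

\[
i\hbar\,\frac{\partial}{\partial t}\,\lvert\psi(t)\rangle = H\,\lvert\psi(t)\rangle,
\qquad
\lvert\psi(t)\rangle = U(t)\,\lvert\psi(0)\rangle,\quad U(t) = e^{-iHt/\hbar},\quad U^{\dagger}U = I,
\]

the unitarity of U being what makes the change of basis invertible.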
The continuous evolution of an isolated quantum system is understood to be interrupted by an observation or measurement. When we observe a system, we do not see the whole continuous system, but only one or other of the basis states (eigenvectors) of the operator we use to observe the system. The mathematical formalism of quantum mechanics cannot predict which eigenvector we will observe, only the relative frequencies of the observed eigenvalues.
Zurek has shown that this restriction on the completeness of observation is necessary if we are to obtain information from a quantum system. This suggests that the quantization of observation and the requirements of mathematical communication theory are consistent with one another. From a communication point of view, quantum mechanics does not reveal actual messages but rather the traffic on various links. If we assume that the transmission of a message corresponds to a quantum of action, the rate of transmission in a channel is equivalent to the energy on that channel.
Further, the statistical properties of quantum observations are identical to the statistical properties of a communication source. Like the probabilities of emission of the various letters of a source, the probabilities of observing the various eigenstates of a quantum system are normalized to 1. This constraint is established in quantum theory by the unitarity of the evolution and observation operators. This leads us to think of the eigenstates of a quantum observation as the letters of the alphabet of a communication source.
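In standard notation the analogy reads: for a state |ψ⟩ observed in the orthonormal basis {|e_k⟩},

\[
p_k = |\langle e_k \mid \psi\rangle|^2, \qquad \sum_k p_k = \langle \psi \mid \psi \rangle = 1,
\]

which mirrors the normalization \(\sum_i p_i = 1\) of the letter probabilities of a communication source.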
From an abstract point of view there is but one Hilbert space of each dimensionality and there is no preferred set of orthonormal basis states. The transformation approach to quantum mechanics pioneered by Dirac shows how one basis may be converted into another by unitary operators which preserve orthonormality. Dirac: The Principles of Quantum Mechanics.
We may see the communication theoretic equivalent of quantum mechanical transformations as the computational transformation of messages between different encodings using different alphabets.
6. Coding delay and special relativity
From a space-time point of view, isolated quantum systems are essentially one dimensional, parametrized by energy, frequency or time. Nor do the systems described by quantum mechanics have any memory. The outcome of an observation on a quantum system depends only on the immediate state of the system and the state of the observer. Zee: Quantum Field Theory in a Nutshell, Feynman: The Feynman Lectures on Physics, Volume 3: Quantum Mechanics
A further indication that it is plausible to consider the Universe as a communication network is provided by the structure of space-time revealed in the special theory of relativity. The events at rest in any local inertial frame are time ordered, and we believe that an event can only influence events that follow rather than precede it in time. In physics, the velocity of light is taken as the fixed and finite maximum velocity for the transmission of information from one point in space-time to another.
One consequence of the finite maximum velocity of communication is that events in an inertial frame moving relative to oneself look different. Assuming that the laws of physics are the same in every inertial frame, we may derive the Lorentz transformation which brings the distorted appearance of moving systems back to the normal appearance of things in our own rest frame.
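For a frame moving with velocity v along the x axis, the transformation referred to here is the standard

\[
x' = \gamma\,(x - vt), \qquad t' = \gamma\!\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}\, .
\]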
There is nothing special about the velocity of light in this derivation. The same structures are found when we substitute the speed of the postal service, for instance, into the causal ordering of human events.
The algorithms used to defeat error in communication induce delay. In order to encode a message of n symbols into a transmissible signal function, the encoding machine must wait for the source to emit n symbols, even if the computation required for encoding is instantaneous. Since computation itself involves error free communication within the computer, we can expect it too to add delay.
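A minimal sketch of this buffering delay (the block lengths and the identity ‘encoding’ are illustrative assumptions): the encoder can emit nothing until a whole block of k source symbols has arrived, so the delay grows with block length even before any computation or transmission time is counted.

```python
def block_encode(stream, k):
    """Yield a codeword only after k source symbols have accumulated."""
    buffer = []
    for symbol in stream:
        buffer.append(symbol)
        if len(buffer) == k:
            yield tuple(buffer)      # stand-in for a real encoding map
            buffer = []

message = list("ATTENTIONATTENTION")
for k in (2, 6, 18):
    first_codeword = next(block_encode(message, k))
    print(f"block length {k}: first codeword ready after {k} symbol times")
```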
We are therefore led to believe that communication delay and quantization are connected. Further, since the least delay corresponds to the maximum velocity, we propose that the algorithm used to encode information into signals that travel at the velocity of light is the fastest algorithm in the Universe.
7. ‘Spooky action at a distance’
Field theories are attempts to avoid the assumption of action at a distance, where distance is understood in the ordinary spatial sense or in the relativistic sense of ‘spacelike distance’. The more powerful notion of logical continuity is not specifically related to geometric continuity. Being networked creatures, we intuitively apply logical continuity in our conversations. The spatial positioning of people conversing is generally irrelevant. On the other hand, to make sense, the elements of the conversation must be time ordered from the point of view of each person, which implies that there can be no conversation across a spacelike separation.
The Einstein, Podolsky and Rosen thought experiment, since frequently realized, proposed that entangled particles could act upon one another at a distance even though their separation was spacelike, requiring something faster than the velocity of light to account for their correlation (if it is due to communication). Einstein, Podolsky and Rosen: Can the Quantum Mechanical Description of Physical Reality be Considered Complete?, Groblacher: An experimental test of non-local realism
We assume that the velocity of light is finite rather than infinite because of the delay in the transmission of a photon from point to point due to error preventative encoding. Conversely, we might speculate that communications that cannot go wrong, that is communications that effectively carry no information, might require no encoding and therefore travel at infinite velocity. The observed correlations between entangled photons have been found to propagate at many times c, and the measurements are compatible with infinite velocity, ie instantaneous transmission, over many kilometres Salart: Testing the speed of 'spooky action at a distance'.
In practical communication networks a message originating with user A is passed down through the software layers in A’s computer to the physical layer which carries it to B’s machine. It is then passed up through B’s software layers until it reaches a form that B can read. By analogy, communication between one system in the Universe and another must pass down to the ultimate physical layer (which we might identify with the structureless initial singularity) and then up again to the peer system receiving the message.
It may be that the simplicity (high symmetry) of the lower layers of this network makes encoding for error prevention unnecessary, so that instantaneous communication is possible.
8. Local gauge invariance in a network
The intersection of special relativity and quantum mechanics is quantum field theory. This intersection yields a strong set of constraints on the Universe that limit its degrees of freedom and are verified experimentally. Peskin & Schroeder: An Introduction to Quantum Field Theory
A fundamental symmetry of quantum field theories is local gauge invariance. From the point of view of algorithmic information theory a field theory is a system of algorithms (expressed in mathematical and computational notation) whose domain is spacetime. These algorithms transform a measured initial situation in the Universe to a measured final situation some timelike interval away. The grail is an algorithm that performs this operation without error, like the statement ‘in a closed system, energy before transformation equals energy after’.
Local gauge invariance tells us that there may be a set of degrees of freedom associated with every point in space-time whose values can be set independently of their values at neighbouring points. Local gauge theories have nothing to say about these values, and so are consistent with all of them.
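In the simplest (U(1)) case, with the convention \(D_\mu = \partial_\mu + ieA_\mu\) (sign conventions vary between texts), the local transformation that leaves the theory unchanged is

\[
\psi(x) \rightarrow e^{\,i\alpha(x)}\,\psi(x), \qquad
A_\mu(x) \rightarrow A_\mu(x) - \frac{1}{e}\,\partial_\mu \alpha(x),
\]

where the phase \(\alpha(x)\) may be chosen independently at every point of spacetime.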
This feature of these theories is exactly what we expect in a compliant communication network, that is, one indifferent to the actual content of the messages transmitted. The same property is manifest in the mathematical theory of communication, which is indifferent to actual messages and their meaning, and is interested only in the statistical structure of the messages.
9. A transfinite symmetric network
Local gauge invariance enables us to construct a transfinite network which serves as a configuration space for the Universe. We begin with the Cantor Universe. Cantor proved the theorem that now bears his name: given a set of any cardinal n, there exists a set with a greater cardinal. Using this theorem, Cantor constructed the formal Cantor Universe by transfinite recursion. The set N of natural numbers is countably infinite; Cantor used the symbol ℵ0 to represent the cardinal of N. The power set of N, P(N), has the next greatest cardinal ℵ1, and so on without end. Mendelson: Introduction to Mathematical Logic, Cantor: Contributions to the Founding of the Theory of Transfinite Numbers
The cardinal (or power) of a set is independent of its ordering. The space chosen here, the symmetric universe (think symmetric group), is constructed by replacing the axiom of the power set in the proof of Cantor's theorem with an 'axiom of permutation' which encapsulates the properties of order and permutation. The resulting structure is capable of representing any group, since the permutation group of any finite cardinality contains all possible groups of that power. We assume, using Cantor's principle of finitism, that this property extends into the transfinite domain Higman: Applied Group-Theoretic and Matrix Methods, Hallett: Cantorian Set Theory and Limitation of Size.
It is a peculiarity of transfinite cardinal arithmetic that 2^ℵn = ℵn^ℵn = ℵn! = ℵ(n+1), so that the cardinals of peer layers in the Cantor Universe and the symmetric universe are equivalent.
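A sketch of why the first three expressions coincide, using only standard cardinal arithmetic (the final identification with ℵ(n+1) is the Cantor-Universe convention used here, equivalent to the generalized continuum hypothesis): writing \(\aleph_n!\) for the number of permutations of a set of cardinal \(\aleph_n\),

\[
2^{\aleph_n} \;\le\; \aleph_n! \;\le\; \aleph_n^{\aleph_n} \;\le\; \left(2^{\aleph_n}\right)^{\aleph_n} = 2^{\aleph_n \cdot \aleph_n} = 2^{\aleph_n},
\]

so all three are the same cardinal.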
We can understand a permutation as a set of n memories in which n different data symbols may be housed to give n! different arrangements. Both the data symbols and the memories can be named by the natural numbers 1 . . . n . . . .
We now imagine a locally gauge invariant field theory whose domain is the natural numbers. Local gauge invariance enables us to arbitrarily assign a natural number to each point in the domain. If each of these values is unique, we see that local gauge invariance is indifferent to the permutation of local values. We suppose that this idea can be extended to the transfinite domains of the Cantor Universe.
Let us imagine that the actual work of permutation in the symmetric universe (ie its dynamics) is executed by Turing machines. As formal structures these Turing machines are themselves ordered sets, and are to be found among the ordered strings contained in the Universe.
The installation of these Turing machines turns the symmetric universe into the symmetric network. This network is a set of independent memories able to communicate with and change one another via Turing machines. The internet is a finite example of such a network, the memories of servers, routers, clients and users changing each other’s states through communication.
It seems clear that the transfinite symmetric universe has sufficient variety to be placed in one-to-one correspondence with any structure or process in the Universe. In a case where a given layer of the network universe is found to be too small to accommodate the system of interest, we have only to move up through the layers until we find a level whose cardinal is adequate for the task.
A permutation can be divided into subsets, or cycles, which are smaller closed permutations. This means that no matter what the cardinal of a permutation, we can find finite local permutations whose action nevertheless permutes the whole Universe. Moving my pen from a to b (and moving an equivalent volume of air from b to a) is such an action.
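A minimal sketch of this point (the ‘pen and air’ labels simply recast the essay's example as data): decomposing a permutation into its disjoint cycles shows how one small local cycle is already a permutation of the whole set, everything outside the cycle being left fixed.

```python
def cycles(perm):
    """perm maps each element to its image; return the disjoint cycles."""
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        cycle, x = [], start
        while x not in seen:
            seen.add(x)
            cycle.append(x)
            x = perm[x]
        result.append(tuple(cycle))
    return result

# Swapping pen and air is one two-cycle; desk and book are fixed points,
# yet the whole arrangement counts as a single permutation of the set.
world = {"pen": "air", "air": "pen", "desk": "desk", "book": "book"}
print(cycles(world))   # [('pen', 'air'), ('desk',), ('book',)]
```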
Incidentally this cyclic nature of computation may explain the ubiquity of complex (periodic) numbers and functions in quantum mechanics.
10. Formalism and the physical world
The formal structure of the symmetric network gives us a ‘mathematician’s eye’ view of the Universe. The symbols used in formal theories are assumed to be eternal (immobile) and distinct (orthogonal). From this point of view the Cantor Universe and the symmetric network exist formally in complete detail and formal interactions are instantaneous. On the other hand, the physical Universe is mobile and often uncertainly defined. How do we fit the formal model to the physical Universe?
We begin very close to the initial singularity with a state represented by a vector in a one dimensional Hilbert space whose evolution is governed by Schroedinger’s equation. Let this system interact with or observe itself. We represent the joint state created by this communication in a two dimensional Hilbert space. Moving back the other way, we imagine the initial singularity to be a point (a vector in a zero dimensional Hilbert space) with no dynamics and completely isolated (by the definition of the Universe). This point satisfies the definition of a formal symbol.
Operations in the symmetric network, like those in everyday computer networks, are represented as strings of logical operations (a program). The structure of the symmetric network is such that every point in it is unique and every operation changes the whole Universe. In the mathematician’s eye view we see a kinematic sequence of representations of the Universe, each differing from its predecessor by intervening operations.
In this picture mathematics explains its own existence through ‘fixed point’ theorems. Such theorems show that, under quite general conditions, a mapping of a set onto itself leaves some point unchanged. Since (by assumption) there is nothing outside the Universe, all its mappings are onto itself and the resulting invariant fixed points may be mapped to mathematical symbols. This makes science possible, since only immobile features of the dynamic Universe can be usefully represented by the fixed formal texts of the scientific literature. Casti: Five Golden Rules: Great Theories of 20th-Century Mathematics - and Why They Matter
11. Creation
Maxwell’s Demon embodied the power engineer’s hope for perpetual motion. Although we see perpetual motion in the reversible systems described by quantum mechanics, Szilard killed this hope for systems involving irreversible observation. Maxwell’s demon can do no better than a reversible heat (Carnot) engine, conserving entropy but never decreasing it. Szilard: On the decrease of entropy in a thermodynamic system by the intervention of intelligent beings
Here we introduce Shannon’s Demon, whose task is to increase entropy. Given the bad press that entropy has had over the last century, the term Demon seems appropriate to the contemporary zeitgeist.
Let us define creation as the production of new information and entropy. The fundamental axiom of information theory tells us that the information carried by a point is equal to the entropy of the space in which the point resides, so that information and entropy are created ‘simultaneously’. Defining a theory simultaneously defines a space of possible theories.
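In Shannon's terms, the axiom referred to here is that selecting one point from a space of equally likely alternatives conveys an amount of information equal to the entropy of that space:

\[
I = \log_2 \Omega = H \quad \text{(uniform case of } \Omega \text{ points)}, \qquad
H = -\sum_i p_i \log_2 p_i \quad \text{in general},
\]

so a point and its space of alternatives carry the same measure, and in this sense come into being together.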
We communicate the dynamics of the world through text or formalism which, because of its stability, can be transmitted as a message. The symmetric universe is a space in which we wish to make a point about the generation of entropy, to show how the simple initial singularity differentiates itself into the complex Universe we observe today.
Let us guess that formally the same force (‘Cantor force’) makes the Universe grow as makes the transfinite numbers grow. In other words, avoidance of contradiction is the source of motion, since Cantor’s proof is non-constructive. The Universe becomes more complex because it can only do otherwise by contradicting itself. We now seek to explain the mechanism of this complexification using the theory of communication.
We begin with ‘Cantor symmetry’, that is symmetry with respect to complexity. Cantor, explaining the genesis of transfinite numbers, wrote: . . . the transfinite cardinal numbers can be arranged according to their magnitude, and, in this order, form, like the finite numbers, a “well ordered aggregate” in an extended sense of the words. Out of ℵ0 proceeds by a definite law, the next greater cardinal number ℵ1, and out of this by the same law the next greater, ℵ2 and so on . . . without end Cantor, ibid. page 109.
This definite law (here represented by permutation) is indifferent to the size of the number upon which it operates to create its successor. Let us call the existence of this definite law Cantor symmetry or symmetry with respect to complexity.
A layered computer network shares this symmetry, since propositional calculus is to be found in operation throughout the system, insofar as it is observable. From the formal network point of view, complexity is irrelevant, as the theory of communication is indifferent to the entropy of the sources it describes.
Quantum mechanics also enjoys this symmetry. The formalism of quantum mechanics is essentially the same regardless of the dimensionality of the Hilbert space being used to model the system studied Feynman, ibid. In theory we can construct ‘the wave function of the Universe’. This project may run into the sort of contradictions associated with Russell’s set of all sets. However, we anticipate no such problem constructing the state vectors of subsystems (including ourselves) of the Universe. Such state vectors can be represented in the tensor product space of all the Hilbert spaces of the constituent particles. Everett III, DeWitt & Neill: The Many Worlds Interpretation of Quantum Mechanics
The defining property of each transfinite number is that it is greater than its predecessor. This suggests that transfinity is relative, and reduces our problem to the question: how does one become two, where 1 stands for ℵn and 2 stands for ℵ>n.
We may see this as a question of resolution. Viewed from a distance, a pair may look like a unity. As we come closer, however, we resolve the individuals. In quantum mechanics, what we see depends on how we look. From this point of view, the resolution of a pair and the creation of a pair may be seen as the same operation, analogous to the creation of information by the creation of the space in which the informative point or message lies.
Exploiting Cantor symmetry, we may obtain some clues to the solution of this problem from our own intellectual operations. As Misner, Thorne and Wheeler note
‘all laws and theories of physics . . . have this deep and subtle character, that they both define the concepts they use . . . and make statements about these concepts. . . . Any forward step in human knowledge is truly creative in this sense: that theory, concepts, law and method of measurement — forever inseparable — are born into the world in union. Misner, Thorne & Wheeler: Gravitation.
Clear and distinct ideas emerge only slowly from the mists of ignorance. The structure of the symmetric universe is such that if we have a resolution such that ℵ0 = 2, then ℵ1 = 4, ℵ2 = 16, ℵ3 = 2^16 = 65536 and so on. The cardinals of the higher layers of the network give these layers the computing power to correct error in the lower layers, so bootstrapping their own complexity.
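A minimal sketch of this bootstrap of resolution, treating each layer's cardinal as the power set of the layer below, as the text assumes:

```python
# Each layer is modelled as the power set of the one below, starting from a
# coarse resolution in which aleph_0 is represented by just 2 states.
cardinal = 2
for layer in range(4):
    print(f"layer {layer}: {cardinal} states")
    cardinal = 2 ** cardinal        # successive layers: 2, 4, 16, 65536, ...
```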
12. Annihilation
Maxwell’s demon fails because observation is irreversible, a fact common to both classical statistical mechanics and quantum mechanics. Special relativity opened our eyes to the relationship E = mc², which we understand to mean that pure energy can materialise into particles and particles annihilate into pure energy.
There is perpetual motion in the reversible world described by quantum mechanics, but not in the irreversible world of quantum field theory, where particles are created and annihilated Zee, ibid.
Irreversibility is consistent with the structure of the symmetric universe. Although we speak about it in abstractions which are superpositions of many possibilities, the real world is concrete. Every memory is in a definite state and all memories are unique, so that there are as many states as memory locations. This explains the utility of permutation as a formal framework to describe the dynamics of the Universe. This situation is consistent with the quantum no-cloning theorem Wootters & Zurek: A single quantum cannot be cloned.
From a concrete point of view therefore, we can say that every memory is always in some state (since this is the definition of state) and that when we change the state of a memory we erase the old state while writing the new, an irreversible step.
Since it is the function of communication to change the states of memory, we find that every communication is associated with an annihilation, making communications irreversible. The way back to the starting point has been erased.
13. Determinism and indeterminism
Although Shannon’s theorems tell us that it is possible to send messages error free over a noisy channel, they do not provide explicit methods. Many methods of encoding and data compression have since been developed, some implemented in analogue circuits like FM radio, but most now depend on digital computation. Turing devised his machine to show that there exist incomputable functions, thus answering Hilbert’s decision problem. It follows from Turing’s result that it may not be possible to compute all possible encodings.
We may imagine a communication system as a set of computable functions that can be strung together (as in an ordinary computer) to transform the input of a channel to its output. From this point of view the terms ‘computer’ and ‘channel’ mean the same. Turing machines are deterministic, so we might expect the behaviour of an isolated channel to be rather like the deterministic evolution of the wave function that quantum mechanics attributes to an isolated particle.
This determinism is broken, however, when processes communicate. Turing envisaged this situation when he described the oracle- or o-machine (ref?). An oracle machine, like any real computer, proceeds with a computation until it can go no further, at which point it halts and waits for input from an outside source or oracle. In practical communication networks, processing may also be interrupted by a message which claims priority over the current process.
A message entering a network using an accepted protocol changes the state of the machine receiving it. Given communication delay (which may effectively cause space-like separation of machines) it is impossible to predict when a machine will be interrupted and hence what the effect of the interrupt may be. Uncertainty (ie unpredictability) thus enters a network, even if all the functions driving the network are computable.
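A minimal sketch of this source of unpredictability (the update rule and the randomized arrival order are illustrative assumptions, not a physical model): each machine's rule is perfectly deterministic, yet the joint history depends on an arrival order that neither machine can compute in advance.

```python
import random

def run(seed, steps=6):
    """Two deterministic machines; only the message arrival order is unknown."""
    rng = random.Random(seed)          # stands in for unknowable network delays
    state = {"A": 0, "B": 0}
    for _ in range(steps):
        sender = rng.choice(["A", "B"])            # whose message arrives next
        receiver = "B" if sender == "A" else "A"
        state[receiver] += state[sender] + 1       # fixed, deterministic rule
    return state

# Same machines, same rule: the joint outcome still depends on arrival order.
print(run(seed=1), run(seed=2))
```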
This opens the way to understanding the probabilistic nature of quantum mechanics if we interpret a quantum system as a network. In this way we replace a deterministic continuous formalism carrying a probabilistic interpretation with a set of deterministic machines whose communication with one another produces a set of events whose occurrences are unpredictable.
The events themselves, however, fall into equivalence classes which are determined by the communication protocols in use. The number of possible protocols is restricted to the number of possible Turing machines, that is to a countable infinity. If we consider quantum eigenfunctions to be computable, this suggests that the number of distinct eigenfunctions (ie observables) in the Universe is itself countably infinite.
Feynman was one of the first to realize that the physical processes described by quantum mechanics can be conceived as computations. Since that time, the field of quantum information and quantum computation has grown considerably and experimentalists have been able to implement some simple algorithms quantum mechanically. Feynman: Feynman Lectures on Computation.
14. The distinction between past and future
Until a computer has halted, its state is uncertain. Once it has halted, its state becomes definite. The same relationship exists between past and future. The past is definite, and not subject to any uncertainty principle. The future, however, is uncertain, at least to the limits specified by quantum mechanics. The network model therefore suggests that the boundary between past and future can be modelled by the boundary between halted and not-halted computations.
15. Quantum computation
Many hope that quantum computers will turn out to be more powerful than Turing machines, for two reasons. First, in quantum theory an operator like the Hamiltonian in Schroedinger’s equation acts on all the elements of a superposition simultaneously. This gives hope for the execution of massively parallel calculations in one operation Nielsen & Chuang: Quantum Computation and Quantum Information.
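A minimal classical simulation of this first point (the qubit count is chosen for illustration): one operator, applied once, acts on every amplitude of a superposition of 2^n basis states.

```python
import numpy as np

n = 3
dim = 2 ** n
state = np.ones(dim) / np.sqrt(dim)            # uniform superposition of 8 states

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate
U = H
for _ in range(n - 1):
    U = np.kron(U, H)                          # Hadamard on all n qubits at once

new_state = U @ state                          # one product updates 2**n amplitudes
print(np.round(new_state, 3))                  # maps the superposition back to |000>
```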
Second, a continuous variable is held to be able to encode an infinite amount of information. Since quantum amplitudes are represented by continuous functions, it is hoped that quantum computers will effectively complete operations on ‘words’ of infinite length in one operation.
To the contrary, we have the approach taken by algorithmic information theory, which measures the total information in a transformation by the length of the program required to implement the transformation. From this point of view, the equation x = y defined on the real line does not contain an infinite amount of information, but only the few bits represented by the symbolic expression ‘x = y ’. This alerts us to the fact that the entropy we assign to a set depends on how we decide to count its elements. From a computational point of view, the algorithmic measure seems most appropriate. Chaitin: Information, Randomness & Incompleteness: Papers on Algorithmic Information Theory
16. Evolution
From a logical point of view, we are not so much concerned with the size as with the algorithmic complexity of the Universe. Although there are only a countably infinite number of different Turing machines (computable functions), the symmetric network described above is transfinitely larger, so that we may see computation as a limiting resource in the Universe. This lays the foundation for an evolutionary scheme: many possibilities confronting limited resources, which select for those processes that use the resources most efficiently to secure their own survival and replication.
This process of selection may be reflected in the usefulness of the principle of least (or extremal) action in searching for candidate models of the Universe.
17. Conclusion: ‘The unreasonable effectiveness of mathematics’
Wigner drew attention to the ‘unreasonable effectiveness of mathematics’ in the physical sciences. If there is any truth in the picture painted here, this fact may have a simple explanation. Mathematics is a consistent symbolic system. The stationary points (particles or messages) in the observed Universe also form a symbolic system whose consistency is guaranteed by the dynamic processes whose limits they are. Given that the symmetric network spans the whole space of consistent symbolic systems, it may not be surprising to find that mathematics is wonderfully effective as a descriptor of the world, as Galileo proposed. Wigner: The unreasonable effectiveness of mathematics in the natural sciences