natural theology


Essay 18: Logical physics (2017)

An essay on logical continuity

0: Abstract
1: Introduction: Consistency and logical continuity
2: Mapping the world
3: Is logical physics possible?
4: Writing and formalism
5: The mathematical community
6: Mathematical physics
7: Fixed point theory and cosmic dynamics
8: The fixed points of the universe
9: Computers and logical continuity
10: A transfinite computer network
11: From quantum mechanics to spacetime
12: The transfinite universe
13: Identifying the universe with God
14: Why did the universe "explode"?
15: The mathematical theory of communication
16: Why is the Universe quantized?
17: The Universe as a communication network
18: Quantum mechanics describes a computable network
19: The transfinite computer network: a bridge between physics and metaphysics

Orwell: 'To see what is in front of one's nose needs a constant struggle.' George Orwell: In front of your nose
0: Abstract

Over the last few centuries, science has provided us with a quite comprehensive picture of our universal habitat that begins with the initial singularity predicted by general relativity and traces the evolution of the universe to its present state.

For many physicists, this work is an approach to a ‘theory of everything’. It provides a strong foundation for practical technologies like computers, power grids, space vehicles and medicine, yet it has little to say about more complex issues like intelligence and spirituality. Physics is also facing a major theoretical problem known as the cosmological constant problem. Cosmological constant problem - Wikipedia

There are two approaches to the cosmological constant. One is by measurement, and astronomical observations give us a fairly clear estimate of the value of the constant. It is quite small. The other is by computation. Here we find that conventional quantum field theory gives a truly enormous value for the constant. One way out is to say that quantum mechanics deals only in energy differences, not absolute energies, so this does not matter. This does not work, however, because gravitation is sensitive to the absolute value of energy, and if the quantum mechanical computation was right, the universe would be nothing like what we observe.

The fundamental aim of this site is to show that the universe is divine. Traditionally the majority of philosophers have postulated a very real and sharp distinction between such categories of being as spirit and matter, living and non-living, intelligent and not-intelligent, eternal and ephemeral and so on.

Here I propose a model that embraces these and all other distinctions in the world in a single consistent picture. This model serves as a foundation for a scientific metaphysics. It aims to treat the universe as a logical and psychological structure akin to mind. This idea is sometimes called panpsychism. This approach may also yield an answer to the cosmological constant problem. We will see. Panpsychism - Wikipedia

1: Introduction: Consistency and logical continuity

We may identify two sorts of continuity: geometric continuity and logical continuity. Geometric continuity is formally represented by the real line, which we understand to be a set of points without gaps or jumps. It is smooth and differentiable. Its points may be brought into correspondence with the real numbers. It is the standard classical way of representing motions like the Earth's orbit around the Sun. Real line - Wikipedia

The second sort of continuity is logical continuity, what mathematicians call proof. More loosely, a logical continuum is a story, narrative, movie, or any other work of art which 'makes sense', even though it may be broken up into sections and jump from one scene to another. A proof is a formally connected set of logical steps that leads from one or more hypotheses to a conclusion, like Pythagoras' theorem about the sides of a right-angled triangle. A good story is a series of scenes that work together to reach a satisfactory conclusion. Mathematical proof - Wikipedia, Narrative - Wikipedia

Any proof can be simulated mechanically with an abstract Turing machine, the deterministic logical machine devised by Turing to define computation and prove that some functions are not computable. Since Turing's time a huge variety of physical computing machines have been devised to perform all sorts of logical functions. Complete proofs may be very long and broken down into sub-proofs or lemmas. In a similar way, large computations may be broken down into subroutines to make them easier to read and maintain. Turing machine - Wikipedia, Lemma (mathematics) - Wikipedia
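To make this concrete, here is a minimal Turing machine simulator in Python. It is an illustrative sketch of my own (the essay's sources contain no code), and the machine shown, a binary incrementer, is a hypothetical example: a finite table of rules carries an initial tape, step by deterministic step, to a halting state.

    # A minimal Turing machine simulator (illustrative sketch).
    # The example machine increments a binary number: it scans to the
    # right-hand end of the tape, then carries 1s leftward until it halts.
    def run_turing_machine(rules, tape, state="right", halt="halt"):
        """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
        cells = dict(enumerate(tape))          # sparse tape, blank is '_'
        head = 0
        while state != halt:
            symbol = cells.get(head, "_")
            state, cells[head], move = rules[(state, symbol)]
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip("_")

    increment = {
        ("right", "0"): ("right", "0", "R"),   # scan right over the digits
        ("right", "1"): ("right", "1", "R"),
        ("right", "_"): ("carry", "_", "L"),   # past the end: start carrying
        ("carry", "1"): ("carry", "0", "L"),   # 1 + 1 = 10, propagate carry
        ("carry", "0"): ("halt",  "1", "L"),   # absorb the carry and halt
        ("carry", "_"): ("halt",  "1", "L"),   # overflow into a new digit
    }

    print(run_turing_machine(increment, "1011"))   # prints '1100' (11 + 1 = 12)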

Here we suggest that the continuity of the physical world is logical rather than geometric. This seems counterintuitive because in our macroscopic world things seem to move smoothly. Nevertheless quantum mechanics tells us that there is an 'atomic' event, measured by the quantum of action, and in the microscopic world things step along one quantum at a time. This approach is equivalent to thinking of the world psychologically rather than physically. Logical thought proceeds in steps, and we know that in practice logical continuity takes precedence over geometric continuity because it is more powerful. All the theorems of mathematics, including theorems about continuity, are proved logically: all our mathematical knowledge of the continuum has been developed logically. We also see that events in the macroscopic world proceed stepwise. First get up, get dressed, make coffee . . . the first steps in the sequence that takes us through the day. Hille: Analytic Function Theory, 2 Volumes

This suggests that it must be possible to conceive of the physical world formally as a logical world. Here we want to explore the creation of larger processes out of smaller ones in order to understand the evolution of the Universe from the initial structureless singularity predicted by the general theory of relativity (analogous to the classical God), to the complex system which we currently inhabit. Hawking & Ellis: The Large Scale Structure of Space-Time

2: Mapping the world
"And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!" "Have you used it much?" I enquired. "It has never been spread out, yet," said Mein Herr: "the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well." -- Lewis Carroll, Sylvie and Bruno Concluded (1893). Lewis Carroll

What are physicists trying to do? They do not want to make a topographical map like the one Lewis Carroll is talking about. That is a job for geographers. What they want is a map or diagram of the mechanism of the physical world that shows how it works. There are two approaches to this task, classical physics and quantum physics. The process of making these maps is called the scientific method.

Classical physics has been with us a long time, beginning in ancient times. The first discoveries were made by the architects, surveyors and engineers who built pyramids, dams, aqueducts, ships and all sorts of weapons and fortifications. This proved quite satisfactory until the nineteenth century, when intractable problems began to arise in trying to understand the relationship between massive hot objects, like a fire or the sun, and radiation. Classical physics - Wikipedia

Quantum physics began when Gustav Kirchhoff showed that there must be a fixed relationship between the temperature of a hot body and the spectrum of the radiation that it emits. Many workers tried to devise a mathematical expression for this relationship. In 1900 Max Planck succeeded with what turned out to be a radical breakthrough. Instead of picturing the relationship between temperature and spectrum as a continuous function, he assumed that the relationship is quantized. He devised the correct function and was able to measure the quantum of action involved in the relationship. Gustav Kirchhoff (1860): On the relation between the radiating and absorbing powers of different bodies for light and heat, Planck's Law - Wikipedia, Black-body radiation - Wikipedia

The quantum of action has an exceedingly small value, now fixed at 6.62607015 × 10⁻³⁴ joule·seconds. The quantum sets the detailed scale of the physical universe, so that an invisibly small creature such as a typical bacterium, whose mass is about one billionth of a milligram, lives at a rate of about a billion billion billion billion quanta of action per second. Classical physics had absolutely no idea of the incredible amount of detail in the universe. On the other hand, the Universe is truly enormous. Our map of the universe must embrace this immense range of scale. 2019 redefinition of SI base units - Wikipedia
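The bacterial figure can be checked in a few lines of Python. This is a back-of-envelope sketch, assuming a bacterial mass of 10⁻¹⁵ kilograms (one billionth of a milligram) and using the exact SI values of c and h:

    # Rate of action of a bacterium: f = E/h = m c^2 / h (rough estimate).
    m = 1e-15           # kg, assumed mass of a typical bacterium
    c = 299_792_458.0   # m/s, exact
    h = 6.62607015e-34  # J.s, exact since the 2019 SI redefinition
    f = m * c**2 / h    # quanta of action per second
    print(f"{f:.1e}")   # ~1.4e35, of the order of the essay's
                        # 'billion billion billion billion' (1e36) per second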

3: Is logical physics possible?

Science proceeds into the unknown by conjecture and refutation. We start with a body of data arising from experimental work. Experimentation involves stimulating some portion of the world and observing the outcome. Many of the measurements that led to quantum mechanics were made by heating atoms of various substances and measuring the frequencies of the resulting radiation with a spectrometer. The problem then was to imagine the mechanism within the atoms which would account for the frequencies observed. Noyes & van den Berg: Bit-String Physics: A Finite and Discrete Approach to Natural Philosophy

The simplest atom is hydrogen, with just one electron. The spectrum of hydrogen has a number of sharply defined frequencies ("lines") which fall into well defined sequences, the hydrogen spectral series. Quantum physics sought to explain these lines. Hydrogen spectral series - Wikipedia, Hydrogen atom - Wikipedia

Three features of atomic spectra stand out. The first is their complexity. There are ultimately a countable infinity of spectral lines associated with each atom or molecule, each corresponding to a particular transition in the energy states of electrons in the potential well they share with atomic nuclei.

The second is their precision. As the technology of measurement has improved, we have discovered that there appears to be no limit to the precision with which natural frequencies are defined. One technological application of spectral frequencies is the design of atomic clocks. The state of this art has reached the point where the best clocks are accurate to about one second in the age of the universe. Atomic clock - Wikipedia, W. F. McGrew et al: Atomic clock performance enabling geodesy below the centimetre level

The third is that every electronic transition in an atom is associated with precisely one quantum of action, emitting or absorbing a photon with spin 1, that is, with angular momentum of one quantum of action.

These three constraints define the task of any theory of the atom: it must provide a mechanism that connects the execution of a quantum of action to the frequency of the photon which is absorbed or emitted by that action. Not only that, but it seems that this connection is deterministic: there is no uncertainty between the magnitude of the quantum of action and the frequency of the photon. The only uncertainty involved is related to the moment in time at which a transition takes place. This timing cannot be predicted with any precision, but what can be predicted is the probability with which each of the many possible transitions will occur. The theory connecting the quantum of action and frequency must therefore yield definite frequencies.

Quantum electrodynamics (QED) enables us to compute the spectral frequencies of atomic radiation and the rate of emission and absorption of photons, that is, quanta of radiation, with exquisite precision. The standard method of computation has come to rely on two ideas developed by Richard Feynman: the path integral formulation of quantum mechanics, and Feynman diagrams, which provide a graphical method of organizing the computation of the expanding network of interactions that influence every quantum event. The necessary computations are very complex and bedevilled by the appearance of infinities which need to be eliminated by renormalization. The question here is whether QED can be simplified by thinking of the physics involved as the work of a network of logical machines, that is computers, communicating with one another to produce the effects that we observe. The extreme precision that we observe in nature seems to point to a quantized logical mechanism rather than one based on continuous analogue processes, which are prone to degradation by noise. Claude Shannon: Communication in the Presence of Noise, Kirchhoff's law of thermal radiation - Wikipedia, Codec - Wikipedia

4: Writing and formalism

One of our greatest technological inventions is writing, a technique for capturing the ephemeral sounds of speech in permanent meaningful marks on some medium like rock, clay or papyrus. The invention of writing marks the boundary between history, which is specifically based on written records, and archaeology, which attempts to understand ancient people by the durable artefacts they left behind. We may see art and technology as forms of writing, but less explicitly connected to the thought of the writer.

Face to face human communications are dramatic and dynamic, ranging from deadpan to lethal. On the other hand, the invention of writing in all its forms gave an enormous boost to an eternal, timeless view of the world, capturing passion in fixed text. Writing made many changes to society. It enabled the ruling class to communicate their opinions widely in permanent form, laying the foundations of bureaucracy, empire and the rule of law. It laid the foundations for science, giving a new foundation for truth: writing that conforms to reality.

We may guess that writing also had the effect of playing down the importance of ephemeral phenomena and emphasizing the permanent. In about 500 BCE Parmenides made this idea explicit, maintaining that true knowledge could only be based on unchanging reality. He postulated a central, permanent, eternal heart to reality which was the true foundation of knowledge.

Parmenides' idea was taken up by Plato, who imagined that the structure of the world is a pale shadow of a heaven of unchanging forms. Hundreds of years later, Plato's ideas fed into the theological understanding of the Christian God, an eternal, omniscient, omnipotent being, the creator with total control over the world.

Parmenides, Plato, the Pythagoreans and many others identified mathematics as the epitome of truth and eternity. Since the invention of writing, the development of mathematics has continued almost unbroken through human history.

5: The mathematical community

Mathematics is both a profession and a literature. The profession uses the literature to communicate with itself and the rest of the world. The literature is created and curated by the profession. We may see that the literature represents the set of fixed points that the profession has accumulated through its lifetime.

The psychological point of view can be exhibited by considering the mathematical community and its devotion to logical rather than geometric continuity. The conscious mind of a mathematician awaiting insight may be a featureless continuum, but then something springs into mind, the next step in a proof for instance, and something new has been constructed. Our mathematical foundation for creation we take to be permutation, and we construct a phase space for the creative universe based on Cantor's construction of the transfinite numbers. So my blank mind, looking out the window and listening to the crows, has formulated another permutation of English words to represent an element of a digital field.

Mathematics is formalized fiction working with an infinite supply of symbols, a transfinite supply of mappings between symbols, and a search for consistency. In this respect it is not much different from artistic or literary fiction. Consistency in art may not be so important, but we would be disappointed with a detective novel whose denouement is not consistent with the clues.

From a dynamic point of view, mathematics is the set of stationary points in the mathematical community. Valid mathematical theorems are considered to be true for all time and so can be represented by static eternal texts which can be shared between mathematicians and serve both as records of past achievement and the foundations for further work.

Here we follow Hilbert’s notion that mathematics is a purely formal system, a set of games played with symbols whose rules may be freely created by mathematicians. The principal criterion is that the games created should be self consistent and interesting. Hilbert accepted the classical view that mathematics is consistent, complete and computable.

As is well known, Gödel and Turing showed that this classical view is untenable: there are symbolic systems which are both incomplete and incomputable. These proofs rely on the theory of sets invented by Georg Cantor in his quest for the cardinal of the continuum.

Since at least the time of Aristotle the Universe has been considered to be a continuous structure, although it is obviously composed of discrete entities like people, trees, numbers and atoms. The motivation for seeing it as continuous is probably the apparently continuous appearance of motion. Parmenides denied the full reality of motion, and his student Zeno contrived a number of imaginative proofs that motion is not possible. These proofs depend upon the assumption that a continuum is made of a large number of points. The basic idea is that before you can get to a point, you must get to the point before it, and since there are an infinite number of points, you will never get anywhere. Zeno's paradoxes - Wikipedia

We may consider a point as a named entity. It has been known for a long time that neither the natural nor the rational numbers are sufficient to name all the points in a geometric line. This problem prompted the invention of real numbers and the idea that there is a real number corresponding to every point in a line. The cardinal of the continuum then becomes the cardinal of the set of real numbers.

Cantor’s approach to counting (i.e. naming) the real numbers is based on the idea of position-significant representation of numbers. If we are to represent the cardinal of a set by simple marks on paper, like a prisoner’s calendar, we need a mark for every element of the set, and when we have done so we have simply created a map of the set, like Carroll's one-to-one map. We can compress this representation using decimal notation, each additional digit increasing by a factor of ten the number of points that we can name.

Formalist mathematics says that a structure exists if it is in no way self contradictory. We may thus say that the set of natural numbers which contains all the numbers generated by Peano’s axioms exists. This set is said to be countably infinite. It has a natural order, each number being one greater than the number before it, and no greatest number, since we can always add another one.

Cantor assumed that there was no contradiction involved in talking about the set of all natural numbers. Since the number of natural numbers cannot be a natural number, he coined a new name for the cardinal of the set of natural numbers, ℵ0.

He then considered the set of all the combinations (subsets) or permutations of the set of natural numbers, and proved that the cardinal of this set, which he called ℵ1, was greater than ℵ0. Repeating this construction, he created sets with cardinals ℵ2, ℵ3 and so on without end. He guessed that ℵ1 is the cardinal of the set of real numbers (that is, the cardinal of the continuum), but was unable to prove this. The reason for this is that the concept of set is indifferent to the number of elements in a set, so there is no way to use set theory by itself to estimate the cardinal of a set.
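The engine of this construction can be seen in miniature on finite sets, where the set of subsets always strictly outruns the set itself. A small Python illustration (my sketch, not Cantor's notation):

    # Finite analogue of Cantor's theorem: a set of n elements has 2^n
    # subsets, and 2^n > n for every n.
    from itertools import combinations

    def powerset(s):
        return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

    S = {0, 1, 2}
    print(len(S), len(powerset(S)))   # 3 8: |P(S)| = 2^|S| > |S|

Cantor's diagonal argument shows that the same strict inequality survives the passage to infinite sets, which is what drives the hierarchy of transfinite cardinals.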

Like motion, the mathematical continuum is considered to smoothly join the two points at the ends of a continuous line. Cantor augmented this notion of continuity with a more powerful notion, which we call logical continuity, the connection established by logical argument between the hypothesis and conclusion of a valid mathematical theorem.

Logical continuity is implemented by proof or computation. A computer is a deterministic device (often called a Turing machine) that proceeds from some initial state to a final state through a series of steps, each of which is determined by the state of the system at the end of the previous step. Turing showed that some initial states are incomputable, that is the computer will never reach a final state.

Turing found that there are as many computable functions as there are natural numbers, that is they are the elements of a countably infinite set.
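The reason is easy to sketch: every Turing machine has a finite description over a finite alphabet, and such descriptions can be listed one by one against the natural numbers. A small Python sketch of the enumeration (illustrative only):

    # Finite strings over a finite alphabet form a countable set: list them
    # by length, then alphabetically within each length.
    from itertools import count, product

    def all_strings(alphabet="01"):
        yield ""
        for length in count(1):
            for chars in product(alphabet, repeat=length):
                yield "".join(chars)

    for n, s in zip(range(7), all_strings()):
        print(n, repr(s))   # 0 '', 1 '0', 2 '1', 3 '00', 4 '01', ...

Since every machine description appears somewhere in such a list, there can be no more computable functions than natural numbers.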

We may interpret written mathematics (like Cantor’s papers) as a representation of fixed points within the mathematical community, insofar as mathematics is archived and communicated through space and time by the literature. As a representation it is identical to the particles that we observe when we study the physical Universe, and we may take the view that the creation of literature and of physics are isomorphic processes differing only in complexity. We map this complexity using the Cantor hierarchy of transfinite Hilbert spaces and assume that they all share the property of being logically consistent structures.

Toward the end of the nineteenth century Cantor’s invention of the transfinite numbers, Cantor’s paradox and the other paradoxes of self reference led to a careful re-evaluation of the foundations of mathematics. This led ultimately to the theorems of Gödel and Turing: a suitably complex consistent symbolic system must be both incomplete and incomputable. This introduced uncertainty into mathematics which had once been considered a system able to answer every well formed question. Cantor's paradox - Wikipedia

Wigner highlights the miraculous correspondence between mathematical symbolism and observations of the physical Universe: . . . the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and there is no rational explanation for it. Eugene Wigner: The Unreasonable Effectiveness of Mathematics in the Natural Sciences

Wigner's observation compels us to examine the relationship between mathematics and physics. Here I wish to suggest an explanation of Wigner's observation. The idea is essentially very simple. Science is devoted to detecting and connecting the fixed points of the Universe. Mathematics, on the other hand, represents the fixed points of a subset of the Universe, the minds of the mathematical community.

Insofar as the Universe is one and consistent, it is not surprising that we find a considerable degree of isomorphism or symmetry between these two sets of fixed points. The link is proof, which is a symmetry embodied in a Turing machine joining two points in symbolic space, an hypothesis and a conclusion. Here we would like to model deterministic processes in the Universe by proofs, or equivalently, Turing machines. Symmetry - Wikipedia

Quantum mechanics has shown us that all the observable features of the Universe (upon which science is based) are discrete (quantized) events. Nevertheless, quantum mechanics assumes that the mechanism underlying these quantized observations can be represented by continuous mathematics. Here I suggest the alternative to this assumption, that we can better describe the Universe by assuming that it is digital 'to the core'.

This digitization suggests that we can see the Universe as a logical, rather than a geometric continuum. The mathematical representation of a logical continuum is the Turing machine, a stepwise digital process that leads deterministically from an initial condition to a final condition. We may see such logical continua as the fixed points in the universal dynamics which form the goal and substance of science. Alan Turing: On Computable Numbers, with an application to the Entscheidungsproblem

6: Mathematical physics

We can imagine that the derivation of mathematics from the physical world began with the need to count livestock, money, slaves and other discrete objects. These numbers could be used for inventory control and accountants learnt to use arithmetic to produce accounts of business dealings.

Mathematics was also applied to the measurement and calculation of continuous distances and angles like those used by surveyors and architects. This became geometry. Since then arithmetic and geometry have remained the staple foundations of mathematical physics. Galileo made this observation explicit at the beginning of the scientific age when he wrote

Philosophy [i.e. physics] is written in this grand book — I mean the Universe — which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth. The Assayer - Wikipedia

Since Galileo's time, mathematics and physics have grown in parallel, nurturing one another. One of the most important developments was the invention of calculus by Newton and Leibniz to study continuous motion. For physicists the application of calculus was justified by the fact that it yielded results consistent with observation, but the questions it raised about infinitesimals and continuity became serious problems for pure mathematicians.

It has been known from ancient times that there are points in the real line that do not correspond to any rational number. This led to the real numbers and prompted Georg Cantor to seek a way to represent the cardinal of the continuum. Cantor used set theory to develop the transfinite numbers which he proposed for this purpose.

Set theory and the associated idea of correspondence laid the foundation for the development of function space. Cantor proposed that the first transfinite number ℵ0 be the cardinal of the set of natural numbers. He thought that the second transfinite number ℵ1, the cardinal of the set of ℵ0! mappings of the natural numbers onto themselves, might be the cardinal of the continuum. Cohen showed that set theory cannot tell us the cardinal of the continuum, since there is nothing in the nature of a set that specifies a certain cardinal: sets are symmetrical with respect to their cardinal number. Set Theory and the Continuum Hypothesis

Another concern of mathematicians since earliest times has been the solution of polynomial equations. Cardano used the idea of an imaginary number, the square root of -1, in his search for the solution to cubic equations. This line of thought eventually led to the fundamental theorem of algebra whose important consequence is that the field of complex numbers is algebraically closed. Fundamental theorem of algebra - Wikipedia
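A short numerical illustration (a sketch using numpy, not drawn from the essay's sources): the fundamental theorem guarantees that a polynomial of degree n has n roots in the complex plane, whether or not they are real.

    # Algebraic closure of the complex numbers: x^3 - 1 = 0 has one real
    # root and two complex conjugate roots, all of them complex numbers.
    import numpy as np

    print(np.roots([1, 0, 0, -1]))   # the three cube roots of unity:
                                     # 1, -0.5 + 0.866j, -0.5 - 0.866j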

The combination of function space and complex numbers enabled the construction of the complex Hilbert space upon which quantum mechanics is founded. Von Neumann presented an axiomatic foundation for Hilbert space and then examined its properties before showing that it is an ideal foundation for quantum mechanics, applying it to the greatest problem in quantum mechanics, the question of measurement and the 'collapse of the wavefunction'. von Neumann: Mathematical Foundations of Quantum Mechanics

Hilbert spaces, both real and complex, have an inner product analogous to the dot product in ordinary Euclidean space. The inner product of a vector with itself gives the square of its length. The inner product of two different unit vectors yields the cosine of the angle between them. If their inner product is zero, they are perpendicular or orthogonal.

Hilbert spaces are also complete, meaning that the limit of any infinite series within the space is an element of the space. The axioms of quantum mechanics are built around complex Hilbert space. Like Euclidean space, Hilbert space has a split personality: discrete orthogonal dimensions, each of which is parametrized by a continuous complex number. Quantum mechanics has a similar personality: beneath the appearances and invisible to us we imagine complex wave functions expressed as differential equations with potentially infinite sets of solutions. The events we observe, however, are discrete particles like photons and electrons, and the events themselves are not part of a continuum like the flow of a continuous fluid but discrete atoms of action, measured by Planck's quantum of action. Like a dynamic version of Democritus' "uncuttable" atom, there is no observable action smaller than the quantum. All larger actions appear to be built of countable numbers of quanta of action.
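These properties are easy to exhibit numerically. A minimal sketch in Python with numpy (my illustration; the vectors are arbitrary):

    # Inner products in a small complex Hilbert space.
    import numpy as np

    u = np.array([1 + 1j, 0])
    v = np.array([0, 2 - 1j])

    inner = np.vdot                      # conjugates its first argument
    print(abs(inner(u, u)) ** 0.5)       # length of u: sqrt(2)
    print(inner(u, v))                   # 0j: u and v are orthogonal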

7: Fixed point theory and cosmic dynamics

The decision to place God outside the world was forced upon the ancients by their inability to reconcile dynamic and static systems. Since the world is clearly dynamic, and true and permanent knowledge requires stasis, they concluded that God and the world are different entities. Plato built on Parmenides’ work, and the idea has since become the foundation of Christian metaphysics. The Christian God is eternal, and time and space are understood to be inconsistent with the divine nature. Plato: Parmenides

We represent motion mathematically using functions which map the domain of the function onto its range. From a formal point of view, we see no reason not to approximate the life of God by functions, finite and transfinite. Luitzen Brouwer and others found that any continuous mapping f of a compact, convex set onto itself must have a fixed point, that is, a point x for which f(x) = x. We can imagine that there are as many fixed points as there are different continuous mappings of a set onto itself. William K. Allard: Brouwer's fixed point theorem
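A concrete example, as a sketch in Python: the cosine function maps the compact convex interval [0, 1] continuously into itself, so Brouwer guarantees a point with cos(x) = x, and simple iteration finds it.

    # Locating a Brouwer fixed point by iteration: cos maps [0, 1] into
    # itself, so it must have a point where cos(x) = x.
    import math

    x = 0.5
    for _ in range(100):
        x = math.cos(x)
    print(x)   # ~0.7390851..., the fixed point of cos on [0, 1]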

Following this hint, we may guess that although the divine Universe is pure action, the invariances we observe come not from outside the dynamics but are part of it. Fixed point theory shows us that the concept of motion includes not-motion. We find a similar idea in mathematics, where the concept of number includes 0.

We may understand fixed point theory to have made Parmenides’ dichotomy unnecessary. Even though the Universe is purely dynamic, if it fulfills the hypotheses of fixed point theorems, we may expect to find fixed points which are not outside the dynamics, but within it.

Brouwer’s theorem is subject to three conditions: continuity, convexity and compactness.

Does the Universe meet these criteria? Does a Universe of pure action have fixed points? Observation and quantum theory suggest that it does. As far as we can tell, electrons and photons have the same properties now as they had very soon after the Universe began to differentiate. There are countless fixed points in the universe, ranging in size from fundamental particles to planets, stars and galaxies. As mappings change, fixed points may change, and so fixed points may not last for ever. Things like the velocity of light c and the quantum of action h seem to have remained unchanged for the life of the universe, but other fixed points, like ourselves, last for maybe a hundred years at most.

It also seems reasonable to expect that the Universe is convex and compact. By definition there is nothing outside it so that it cannot have holes and must contain its own boundaries. Aristotle imagined a duality whose elements he named potential (dunamis) and act (entelecheia). He also introduced one axiom: no potential can actualize itself. The actualization of any potential required the existence of an efficient cause already in act. From this he concluded that there must be an unmoved first mover which is the source of all motion in the world.

Aquinas concludes that God, like Aristotle's first mover, is pure actuality. This God is a living God. Aquinas, following Aristotle, defines life as self-motion. Further, motion is the passage from potency to act. This would seem to imply that the living God contains potential, which contradicts the assertion that it is actus purus.

The received solution is that this definition of motion applies only to physical being, not intellectual being. Aquinas writes:

. . . action is twofold. Actions of one kind pass out to external matter, as to heat or to cut; whilst actions of the other kind remain in the agent, as to understand, to sense and to will. The difference between them is this, that the former action is the perfection not of the agent that moves, but of the thing moved; whereas the latter action is the perfection of the agent.

Hence, because movement is an act of the thing in movement, the latter action, in so far as it is the act of the operator, is called its movement, by this similitude, that as movement is an act of the thing moved, so an act of this kind is the act of the agent, although movement is an act of the imperfect, that is, of what is in potentiality; while this kind of act is an act of the perfect, that is to say, of what is in act as stated in De Anima iii, 28. In the sense, therefore, in which understanding is movement, that which understands itself is said to move itself.

It is in this sense that Plato also taught that God moves Himself; not in the sense in which movement is an act of the imperfect. Thomas Aquinas Summa Theologiae I, 18, 3: Is life properly attributed to God?

Since we are here supposing that the Universe is divine, we may consider all actions of the Universe as falling into the second class. This is consistent with our modern understanding of energy. Unlike Aristotle's potency and act, kinetic and potential energy are precisely equivalent, freely interconvertible, as we see in the pendulum. In modern physics potential is an active force, so we may think of the action of a pendulum moving from potential to kinetic energy and back again as a transition from act to act of the type Aquinas attributes to immanent actions.

If we assume that fixed point theory is indifferent to the complexity of the sets to which it is applied we might say that the mathematical literature is a (multi-dimensional) fixed point in the human mathematical community. We thus arrive at a sort of bootstrap: the existence of fixed points in dynamical systems explains how the active living mathematical community can arrive at proofs that establish fixed relationships between certain propositions, in particular the proof that under certain conditions there are fixed points in dynamic systems.

We can now go on to model the relationship of the fixed points to one another. By studying these relationships we gain insight into the underlying divine dynamics. We may imagine establishing a correspondence between quanta of action in the Universe, each of which has unique identity, and the natural numbers. The ℵ0 quanta of action can be ordered in ℵ1 different ways. This suggests how a transfinite computer network might apply to the fixed points of the world.

8: The fixed points of the Universe

Einstein emphasized that the aim of physical science is to determine the invariant features of the Universe, that is its fixed points. This suggests an explanation for Wigner's observation that mathematics often fits the observed Universe with wondrous precision. Both the observable Universe and the mathematics are fixed points in a dynamic system, on the one hand the whole Universe, on the other a subset of the Universe we call the mathematical community. This suggests the existence of a formal symmetry (symmetry with respect to complexity) which couples mathematics to the Universe.

For most of its history, mathematics was confined to the exploration of the magnitudes measured by the natural and real numbers. Classical physics operates in this realm. Georg Cantor opened up a new world when he invented the transfinite numbers. Cantor wanted to find a number big enough to represent the cardinal of the continuum. Transfinite numbers - Wikipedia

He began with the set of natural numbers, cardinal ℵ0, and generated the next transfinite cardinal ℵ1 by enumerating the set of mappings or functions of the natural numbers onto themselves. ℵ2 is the cardinal of the set of all permutations of the set whose cardinal is ℵ1, and so on without end. Cantor saw the process of permutation as a 'unitary law' which could be used to generate transfinite numbers ad infinitum. Function space - Wikipedia

Let us assume that the transfinite numbers form a space large enough to be placed into one-to-one correspondence with the fixed points of the Universe. Fixed point theorems tell us that certain dynamic systems must have fixed points, and the observations of quantum mechanics give us instances when these theorems apply.

The higher transfinite numbers are very complex objects, being permutations of permutations of . . . , and so we can expect to be able to find a transfinite number corresponding to any situation we observe, even though we have devised this structure using simple natural numbers.

Quantum field theory is our best attempt so far to produce a comprehensive theory of the physical Universe. It is a union of quantum mechanics and the special theory of relativity.

Quantum mechanics raised many mathematical questions that were ultimately settled by realizing that quantum mechanics works in a function space, Hilbert space. The mathematical foundations of quantum mechanics in complex Hilbert space were developed axiomatically by von Neumann. The quantum mechanical energy equation for an isolated quantum system appears to work perfectly. The solutions to this equation may be a finite, countably infinite or transfinite superposition of states. von Neumann: Mathematical Foundations of Quantum Mechanics

Fixed point theory tells us that under various circumstances, dynamic systems can have fixed points. Brouwer's fixed point theorem, for example, says: for any continuous function f mapping a compact convex set into itself there is a point x such that f(x) = x. Each such mapping may have its own fixed point, and since there are ℵ2 mappings of the real numbers onto themselves, we might expect a similar number of fixed points.

Our expectation of fixed points is met by quantum mechanics. The underlying mathematical theory suggests that the continuous superposition of solutions to the energy equation evolves deterministically and that each element of the superposition is in perpetual motion at a rate proportional to its energy given by the equation E = hf. The wave equation is normalized so that the sum of all the frequencies to be found in the superposition is equal to the total energy of the system modelled.

An isolated quantum system is observed or measured by coming into contact with another quantum system. An observation is represented by an ‘observable’ or measurement operator, M, and we find that the only states that we see are eigenfunctions of M. These states are the fixed points under the operation of M, given by the ‘eigenvalue equation’ Mψ = mψ. The scalar parameter m is the eigenvalue corresponding to the eigenfunction ψ. The eigenfunctions of a measurement operator M are determinate functions or vectors which can be computed from M. Eigenvalues and eigenvectors - Wikipedia

Although the continuous wave function is believed to evolve deterministically, and the eigenfunctions of a measurement operator can in principle be computed exactly, we can only predict the probability distribution of the eigenvalues revealed by the repetition of a given measurement.

These frequencies are predicted by the Born rule: pk = |⟨mk | ψ⟩|², where ψ is the unknown pre-existing state of the system to be measured and pk is the probability of observing the eigenvalue corresponding to the kth eigenfunction mk of M. Provided the measurement process is properly normalized, the sum of the probabilities pk is 1. When we observe the spectrum of a system, the eigenfunctions determine the frequencies of the lines we observe and the eigenvalues the line weights. Born rule - Wikipedia
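The whole scheme fits into a few lines of numpy. This is an illustrative sketch with a made-up two-dimensional observable, not a calculation from the essay:

    # Eigenfunctions as the fixed points of measurement, and the Born rule.
    import numpy as np

    M = np.array([[1.0, 1.0],
                  [1.0, -1.0]])             # a Hermitian observable (invented)
    eigenvalues, eigenvectors = np.linalg.eigh(M)

    psi = np.array([1.0, 0.0])              # a normalized state to measure
    # Born rule: p_k = |<m_k | psi>|^2, m_k the k-th eigenvector (column)
    p = np.abs(eigenvectors.conj().T @ psi) ** 2
    print(eigenvalues)                      # the possible measured values
    print(p, p.sum())                       # probabilities, summing to 1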

The fixed points described by quantum mechanics provide a foundation for all our engineering of stable structures. The purpose of engineering is to manipulate the probability of events in our favour by applying our scientific understanding of how events are constructed in reality.

Zee notes that quantum mechanics is in effect a one dimensional field theory operating in the time/energy dimension. The effect of digitization in this dimension is reflected in the uncertainty relation ΔE·Δt ≈ h. Zee: Quantum Field Theory in a Nutshell

9: Computers and logical continuity

Cantor's proof, like all other accepted mathematical proofs, has been judged logically sound by the mathematical community. The logic of the proof connects an initial set of hypotheses to a conclusion. This connection is an example of what I want to call logical continuity. A logical continuum is identical to a halting Turing machine, which joins a final state to an initial state through a series of deterministic logically sound steps. The Turing machine is the mathematical archetype of the everyday computers now found almost everywhere in the world. Alan Turing: On Computable Numbers, with an application to the Entscheidungsproblem

Some fixed points in the Universe appear to last forever; others, like ourselves and atomic states, have shorter lifetimes. In physics we describe these changes at two levels: kinematic, which simply records what happens, and dynamic, which seeks to explain the mechanism (often invisible) that causes the motion. We might call a system natural if its kinematics is driven directly by its dynamics. Otherwise it is artificial. The motion picture industry specializes in making realistic looking kinematics, using tricks of lighting, modelling and computer simulations to avoid the expense of constructing the real dynamics, like an actual sinking of a real Titanic.

At the fundamental level, physicists observe the kinematics of the world but must speculate about the dynamics, which is invisible. Experimental physicists use particle accelerators like the Large Hadron Collider to build up kinematic pictures of the behavior of fundamental particles. We test our dynamical theories by seeing how well they imitate the kinematic behavior actually observed.

Modern physics provides two somewhat incompatible models for the universal dynamics. The Standard Model, built with quantum field theory, describes the smaller scale structure of the Universe down to particles like electrons and quarks, which are believed to have no size. The large scale structure of the Universe is described by Einstein's general theory of relativity. Between them, these theories have allowed us to develop quite a comprehensive history of the expanding observable Universe since it was very small. A quantum field theory of gravitation, on the other hand, remains elusive.

Heisenberg helped us come to terms with the weirdness of quantum theory by stating that we can only be certain about what is observable and must speculate about what is invisible. The hypothesis underlying quantum theory posits a world described by the continuous evolution of deterministic complex wave functions. These functions are solutions to differential equations, the best known of which is the Schrödinger equation. This equation is understood to have many, possibly an infinite number, of solutions which are believed to exist simultaneously. An individual observation, however, reveals only one of these solutions. This selection of one possible solution out of many is called the 'collapse of the wave function'.

The wave function description of quantum dynamics seems to work very well, but because it is invisible, we cannot be sure of it. In particular, we cannot be sure that it is really described by continuous mathematics. Another way to look at the situation is to recognise that all processes in the universe are dynamic, proceeding at a rate determined by their energy. We might then imagine a quantum system as something like a spinning roulette wheel which is stopped when we observe it, the solution we see being determined by where the ball lies when the wheel stops. Werner Heisenberg: Quantum-theoretical re-interpretation of kinematic and mechanical relations


At the macroscopic scale, motion appears to be continuous, and it seems natural to model it with continuous mathematics. We may date modern mathematical physics from Newton's discovery of calculus, which provides a means to discuss ephemeral states of motion using static mathematical symbolism to express the invariant features (stationary points) of a motion. Newton's approach worked, so the mathematical foundations of calculus were not really questioned until the nineteenth century. The majority of mathematicians eventually convinced themselves that the mathematics of the continuum is logically sound. A notable dissenter was Leopold Kronecker, who said that "God made the integers, all else is the work of man." Leopold Kronecker - Wikipedia

From the point of view of algorithmic information theory a continuum carries no information since, like a blank sheet of paper, it incorporates no significant marks to encode information. Here we encounter the problem with the omniscience of God. On the one hand, God is considered to be absolutely simple, like a continuum with no features. On the other, God is supposed to know everything past, present and future, actual and possible. Ancient philosophers, with no knowledge of information, thought that information and intelligence correlated with spirituality, and since God was considered the most perfect of spiritual beings, it followed for them that it was also the most knowledgeable. Now we know that information is physical, discrete items of information being represented by discrete physical objects, as the information in this paragraph is represented by letters. Algorithmic information theory - Wikipedia, Aquinas, Summa: I, 14, 1: Is there knowledge in God?, Rolf Landauer: Information is a physical entity

The algorithm that expresses Newton's second law of motion may be written in four characters, F = ma. By convention F, m and a are continuous quantities represented by real numbers, and '=' establishes that the expression is an equation with equal numbers on both sides. Since the continuous quantities of themselves add no information, we assume that the information content of the algorithm is exhausted by its symbolic representation, which is just 28 bits if we attribute 7 bits to each character. We may look upon this algorithm as a symmetry. It is applied (i.e. "broken") by entering specific values for its independent variables to compute the value of the dependent variable. These numbers are found to correspond closely to physical measurements, so we believe the equation represents a real causal connection in the world.
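The arithmetic of the estimate, and the 'breaking' of the symmetry by specific values, can be written out explicitly (a trivial Python sketch; the sample values are arbitrary):

    # Four 7-bit ASCII characters encode the law; applying it breaks the
    # symmetry by binding particular numbers to the variables.
    law = "F=ma"
    print(len(law) * 7)        # 28 bits of symbolic information

    def force(m, a):           # the algorithm the four symbols express
        return m * a

    print(force(2.0, 9.8))     # 19.6 newtons: a 2 kg mass at 9.8 m/s^2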

10: A transfinite computer network

Here we propose an alternative model of the universal dynamics based on the notion that the invisible dynamics is logically rather than geometrically continuous. This approach is suggested by the hypothesis that the Universe is divine. If this is the case, we would expect the observable kinematics, that is the observable fixed points in the divine dynamics, to be the outcome of logically consistent processes reflecting the internal consistency and intelligence of divine action.

So let us assume that the eigenfunctions of quantum mechanical observables are computable functions. Instead of there being a continuous spectrum of ℵ1 eigenfunctions, there are only ℵ0 of them, that is, the number of Turing machines, each of whose algorithms is expressed by a finite ordered set of symbols. From this point of view, a quantum mechanical observation is equivalent to the exchange of a message between two quantum systems. The computations encoding and decoding such a message are accomplished by Turing machines.

Turing machines are deterministic. Many features of the world also bear deterministic relationships to one another, like the spectral frequencies of an atom. It seems natural to attribute this determinism to a deterministic natural process.

11: From quantum mechanics to space-time
The general theory of relativity suggests that the universe began as an initial singularity which is identical to the traditional God, a completely structureless entity which is the source of the enormously complex universe in which we live. The mathematical theory of fixed points provides us with a starting point, explaining why there is any structure at all in the universe. We now want to go into detail. Traditionally God created the Universe from scratch, using its omnipotence and omniscience to design and build it, starting with nothing.

Let us begin by describing the size and complexity of the world we inhabit. The universe is expanding, which is natural enough since we know that it started as an infinitesimal point, without space and time. Because it is expanding, the further we look into the distance, which is also the past, the faster the galaxies we see are moving away from us. We eventually come to a distance beyond which we cannot see because things are moving away at the velocity of light. This is known as an event horizon. This horizon surrounds us, and is currently more than 40 billion light years away. About 14 billion years have passed since the universe began to expand inside the initial singularity. Universe - Wikipedia

The mass of ordinary matter within this gigantic sphere is about 10⁵³ kilograms. Current theory suggests that this is only about 5% of the total matter in the observable universe, the rest comprising dark matter and dark energy which we cannot see but whose presence is suggested by cosmological models based on general relativity and the structure of the cosmic background radiation. The ordinary matter content is clumped into galaxies, stars and other dense bodies like planets and asteroids. If it was spread out evenly it would amount to about one hydrogen atom in every four cubic metres of space.

The mass of a hydrogen atom is about 2 × 10⁻²⁷ kilograms, so the mass of ordinary matter in the observable universe corresponds to about 10⁸⁰ hydrogen atoms. The hydrogen atom itself comprises a number of fundamental particles. Perhaps the most common fundamental particle in the universe is the photon. A recent estimate puts their number at about 10⁸⁴. Chelsea Gohd: How many photons has the universe produced in its life?

Perhaps the most interesting statistic for the Universe is its rate of execution of quanta of action, which we take here to be a measure of its bandwidth or rate of computation. To arrive at this figure, we convert its mass to energy using the formula E = mc² and then convert energy to frequency of action with the formula f = E/h, where h is Planck's constant, so that f = mc²/h: about 10¹⁰⁴ actions per second over a period of about 10¹⁷ seconds (14 billion years), for a total of some 10¹²¹ computations.
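The arithmetic can be reproduced in a few lines of Python. This sketch assumes, as the previous paragraphs suggest, that ordinary matter (about 10⁵³ kg) is only about 5% of the total mass-energy budget:

    # Bandwidth of the universe: f = m c^2 / h, accumulated over its age.
    c = 299_792_458.0      # m/s
    h = 6.62607015e-34     # J.s
    m_total = 1e53 / 0.05  # kg, ordinary matter scaled to the full budget
    age = 4.4e17           # s, about 14 billion years

    rate = m_total * c**2 / h
    print(f"{rate:.1e} quanta/s, {rate * age:.1e} in total")
    # ~2.7e104 quanta per second and ~1.2e122 overall, of the order of
    # the essay's 1e104 per second and 1e121 in total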

This is a measure of the processing it took the universe to evolve to its current state. The general process was first suggested in ancient times by the Christian doctrine of the Trinity. It is basically a matter of self-reflection and copying in the network structure. Modern physics captures this process in quantum mechanics, which describes a communication network.

The mathematical domain of quantum mechanics is Hilbert space. What we see is particles meeting one another in spacetime and interacting as part of the universal quantum network. We may see this network as a structure built of atoms of communication. Such an atom comprises two sources and the messages between them. We work from the local to the general. Each of these atoms may exist at any level of complexity, from interactions between fundamental particles to people to clusters of galaxies.

The fundamental particles fall into two classes known as fermions and bosons. Fermions are structural particles. No two fermions can be in the same place at the same time. Bosons, on the other hand, are messengers, carrying the signals between fermions that mediate their interactions. A hydrogen atom, for instance, comprises two fermions, a proton and an electron, bound together by the exchange of photons, which are bosons.

The first steps in the creation of the universe from the initial singularity must involve the creation of space-time and the fundamental particles which inhabit it. Physicists on the whole take spacetime for granted as a background or domain for everything else and try to explain the creation, annihilation and interactions of the fundamental particles as the product of fields, which are mathematical functions whose domain is spacetime. The resulting theory, quantum field theory, provides quite a good explanation of the behaviour of the world, called the Standard Model, but this picture leaves a lot unexplained and is generally believed to be a step on the way to something better. Here we take a philosophical rather than a mathematical view and try to look deeper. Space-time is then not so much a passive background for the lives of the particles as intimately involved with them.

Mathematics is very handy in physics, but it has to be applied, and the application must usually be described with the full power of natural language. It is not possible to write physics in pure mathematics. There have to be descriptive passages between the mathematical passages, describing what the symbols mean and why we are performing such and such operations on the quantities represented. A similar situation applies when we are giving a logical explanation of a physical process, describing the use of the symbols and the logical operators.

It is here that we enter the realm of logical physics, the title of this essay. The first step is to justify the application of logic, the stepwise process of arguing from premisses to conclusions. Since ancient times, most scientifically inclined people, which includes philosophers, have assumed that the universe is continuous despite the obvious fact that it is full of discrete objects ranging from stars and people to blades of grass and grains of sand. The only real justification for this view is that motion, space and time appear to be continuous. History of logic - Wikipedia

The principal problem with continuity is that a continuum carries no information. Information is a physical entity, and is closely coupled to marks, that is, discontinuities. A blank sheet of paper tells us nothing. Only when it is covered with marks like these letters, diagrams and pictures can we extract any meaning from it. The meaninglessness of continuity is built into our nature. We are forever seeking newness, variety and excitement. We shun the droning speech of longwinded teachers, the endless similarity of day after day of identical work and the boring continuity of long journeys. We go out of our way to punctuate our lives with events, parties, ceremonies and games. I am bored with writing this (as you may be with reading it) and am going to take a break at the movies. Psychologically, we naturally notice any motion and change in our environment. Life itself is a string of events, beginning with birth and ending in death, the boundaries of personal existence.

We may think of both the ancient completely featureless God and the initial singularity as continua, and creation as the introduction of marks or structure into these entities. Fixed point theory tells us that this is inevitable in a consistent system. The next step is to try to understand how these fixed points are sculpted into the universal structure we now experience. Given what we already know, the most fruitful approach may be to look at the relationship between logic and quantum mechanics in the light of evolution. Each of us is an event, a complex symbol.

Back to top

12: The transfinite universe

As Parmenides realized, we cannot make a permanent record of motion per se, but we can write down the invariant features of a motion. Like Newton, we often represent invariants with algorithms called differential equations, formal mathematical texts that capture the essence of a motion in symbols and enable us to reproduce it by computation.

Calculus reopened old questions about continuity, points and infinitesimals. By the nineteenth century these questions had been solved to almost everybody’s satisfaction with the theory of limits. There are no atoms in classical continuous mathematics. Every interval can be subdivided ad infinitum.

The ancients knew that one cannot label all the points on a line with integers, or ratios of integers. Instead we need the real numbers. Just as in a line there are points between any two points, there are real numbers between any two real numbers.

Georg Cantor, working on the assumption that a line is made of points, set out to find the cardinal of the continuum: how many points does it take to make a line? As the points become smaller, their number becomes larger, so Cantor created a new world of numbers, the transfinite numbers, which, he hoped, would be big enough to number all the points in a line, no matter how small the points or long the line. Cantor: Contributions to the Theory of Transfinite Numbers

Like Newton, Cantor invented a new branch of mathematics, now known as set theory, to study his problem. Set theory and logic have become foundations for mathematics. A set is a collection of elements with individual identities so that they can be counted and ordered. The principal operation in set theory is establishing one-to-one correspondences between elements of sets. So we can imagine the set of all the natural numbers. The cardinal of this set cannot be a particular natural number, because we can always add 1 and get a larger number. Instead Cantor named the cardinal of this set ℵ0, the first transfinite number, using the first letter of the Hebrew alphabet. The first transfinite number is the least upper bound of the natural numbers.

He then exploited the power of order: he showed that the cardinal of the set of all orderings or permutations of the natural numbers, ℵ1, is strictly greater than ℵ0. The set of orderings of ℵ1 items produced the next transfinite cardinal, ℵ2, and so on without end. Cantor hoped that ℵ1 would turn out to be the cardinal of the continuum. Cohen showed, nearly seventy years later, that the continuum hypothesis is independent of the standard axioms of set theory. Sets are symmetrical with respect to cardinal numbers: a set is a set, no matter how many elements it contains. Nevertheless, set theory remains a principal tool for the development of mathematics. Cohen: Set Theory and the Continuum Hypothesis

Cantor's theory upset some theologians who felt that infinity is the unique attribute of God and there can be no 'created' infinity. This problem was solved by Cantor's tacit use of formalism, subsequently made explicit by Hilbert. Formalism is the process of manipulating symbols under the sole constraint of consistency. Although in reality all information is represented physically, mathematicians are still free to imagine that the symbol x may stand for anything, such as the infinite set of natural numbers, or the set of all permutations of the natural numbers.

Much of Cantor's work was theologically motivated, and he imagined an absolute infinity which was characteristic of God. He also found that the existence of such an infinity is not self-consistent, something now known as Cantor's paradox. Cantor's proof of the existence of the transfinite numbers tells us that, given a set of plausible assumptions, every set, no matter how big, can generate a bigger set. This would hold for the absolutely infinite set as well, which is therefore no longer absolutely infinite. Formally the absolutely infinite set cannot exist because it is by definition inconsistent with Cantor's theorem. Dauben: Georg Cantor: His Mathematics and Philosophy of the Infinite, Hallett: Cantorian Set Theory and Limitation of Size

From this we may conclude that transfinite mathematics does not have an element corresponding to God. Instead it has a series of elements which may approach but never reach the immensity of God. We can talk about these subsets of the whole (sometimes called universes of discourse) without contradiction. Further, Cantor's theorem guarantees that no matter how large a universe of discourse we decide to study, it will remain always a subset of the strictly greater set arising from the Cantor expansion of our chosen universe.

Back to top

13: Identifying the Universe with God

Let us now identify the traditional God of Aquinas with the initial singularity. This singularity, which antedates the emergence of the universe as we know it, was predicted by Hawking and Ellis as a consequence of Einstein’s general theory of relativity. This identification is founded on the fact that both God and the initial singularity are identically without structure and the source of the world.

Astronomy is one of the oldest empirical sciences. Its subject, the heavens, is easily observed, and its utility for navigation was obvious to those who travelled at sea or in the desert where terrestrial landmarks are few.

Speculation about the courses of the planets led to early efforts at kinematic and dynamic models of the world. Ancient kinematics was based on the belief that the heavens were perfect, and that perfect motion is circular. This led to a complex system of epicycles to explain the wandering motions of the planets. The ancient dynamic belief that continued motion required continued impulsion led to the notion that Aristotle’s unmoved mover or other analogous beings were necessary to keep the planets in motion, and that this motion was transmitted to the world below. These ideas have since been revised, but they led to the lasting discoveries in physics, arithmetic and geometry attributed to Euclid, Ptolemy, Archimedes and many others.

In the seventeenth century Galileo realised that continued motion did not require continued impulsion and formulated what is now known as Newton’s first law of motion: a body at rest remains at rest and a body in motion continues its motion in a straight line unless acted upon by a force.

Kepler calculated that the planets move in ellipses with the Sun at one focus, thus removing the need for epicycles. Copernicus saw that the computational picture was greatly simplified by placing the Sun near the centre of the cosmos, perfecting the cosmological kinematics of the day.

Newton introduced force with his second and third laws, and postulated the source of force in the heavens, gravitation, governed by his law of universal gravitation. His work provided a clear dynamical picture that guided engineering and astronomy until the twentieth century. His application of geometry and calculus to astronomy also prompted many mathematical developments.

Newton did not know how gravitation actually worked (Hypotheses non fingo) and left the matter in the hands of God (Newton). Newton's work laid the foundations for the classical dynamics of massive particles, but it had very little to say about electromagnetism, which became the subject of careful study in the centuries that followed.

Natural magnets (lodestones) have been used since ancient times as aids to navigation. The Earth, as recognized by William Gilbert, is a gigantic magnet pointing lodestones to magnetic north (Gilbert).

Interest in electricity and magnetism grew with the work of early experimenters, Volta, Ampere and Faraday, leading to the publication by James Clerk Maxwell of a set of differential equations which accurately model the whole field of classical electromagnetism (Faraday, Maxwell). Maxwell realised that his equations described light as an electromagnetic wave, laying the foundation for modern wireless technology.

Einstein arrived at special relativity by wondering what it would be like to travel alongside a light beam. On the basis of Galilean relativity, he might have expected the light beam to appear stationary. Driving at 100 kilometres per hour, a vehicle in the next lane, also travelling at 100 kph, appears to be stationary. The Galilean transformation uses simple arithmetic: 100 − 100 = 0. In Einstein’s case, however, he thought that Maxwell’s equations would require that the light beam should be travelling at the speed of light c relative to the light beam rider, even though he was moving at c already.

In 1905 Einstein published The electrodynamics of moving bodies, which announced the special principle of relativity: every observer in inertial motion (that motion in which Newton’s first law holds) sees exactly the same laws of physics (Einstein 1905). This includes the fact that the velocity of light will be the same for every observer, regardless of their state of inertial motion. The transformation that makes this possible is the Lorentz transformation. This transformation accommodates the fact that c + c = c.
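The arithmetic can be checked directly. Here is a minimal Python sketch (the function names are my own, chosen for illustration) contrasting the Galilean rule with the Lorentz velocity addition rule u ⊕ v = (u + v)/(1 + uv/c²):

```python
# A minimal sketch of the Galilean versus Lorentz (Einstein) velocity
# addition rules, illustrating why c + c = c. Units: metres per second.

C = 299_792_458.0  # speed of light

def galilean_add(u: float, v: float) -> float:
    """Galilean rule: velocities simply add."""
    return u + v

def einstein_add(u: float, v: float) -> float:
    """Relativistic rule: (u + v) / (1 + uv/c^2)."""
    return (u + v) / (1.0 + u * v / C**2)

kph = 100 / 3.6                  # 100 km/h in m/s
print(galilean_add(kph, -kph))   # 0.0: the car alongside appears stationary
print(einstein_add(C, C))        # 299792458.0: c 'plus' c is still c
```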

Special relativity deals with inertial motion and does not take acceleration (and therefore gravitation) into account. Einstein moved to the study of gravitation with the insight that observers in free fall do not feel their own weight, that is they are in inertial motion. This observation sets the special theory up as a starting point for the general theory.

The second insight at the root of the general theory is that acceleration and gravitation are equivalent. Newton’s theory of gravitation assumed a fixed three dimensional geometric frame of reference, a separate universal time frame and (in the case of gravitation) instantaneous action at a distance.

The special theory invalidated these assumptions. Space and time were merged into space-time. Although every observer saw identical laws of physics in their own inertial frame, the Lorentz transformation meant that in order to maintain the constancy of the velocity of light, the appearances of events in other inertial frames in relative motion are changed. Distances are foreshortened in the direction of motion and times are extended. Causal influences are no longer instantaneous but travel, at a maximum, with the velocity of light.

Einstein could no longer use the normal procedure in physics of setting up a frame of reference and expressing all measurements relative to that frame. Explaining why it took so long to move from special to general relativity, Einstein said that it took him a long time to realize that coordinates did not need to have an immediate metrical meaning.

Eventually he realized that the answer to his problem was the dynamic differential geometry developed by Riemann. Using the mathematical insights of Gauss and Riemann he was able to produce a consistent theory which fitted all the requirements of special relativity and cosmological observations.

Free fall is inertial motion and a particle in inertial motion is said to follow a “geodesic”. The curvature of space described by general relativity causes particles on nearby geodesics to accelerate toward one another even though they feel no force, a violation of Newton’s second law. The effect is considered to result from the geometry of curved space-time.

The general theory of relativity was published by Einstein in 1916 (Einstein 1916). Since then its major predictions have been verified and it is generally considered to be the standard model of the large scale structure of the universe. It is in everyday use for precision celestial navigation and global positioning systems. The mathematical work to extract the predictions of the initial singularity and black holes from the theory was published in 1975 (Hawking and Ellis). These authors write:

Einstein's General Theory of Relativity . . . leads to two remarkable predictions about the universe: first that the final fate of massive stars is to collapse behind an event horizon to form a 'black hole' which will contain a singularity; and secondly that there is a singularity in our past which constitutes, in some sense, a beginning to our universe. Our discussion is principally aimed at developing these two results.

The existence of the theoretically predicted initial singularity is supported by observational evidence, although the first 400 000 years or so of the life of the universe are invisible to us. This singularity ‘predates’ space and time. It is therefore without space-time structure, or any of the structure contained within space-time, so it is simple and eternal. It is also understood to be the source of the universe. We can no more ask where it came from than we can ask where God came from. Insofar as there are no grounds for distinguishing these two entities, it seems reasonable to identify them.

The mathematical formalism of the general theory predicts that the universe must either expand or contract. We observe that as a whole the universe is expanding, but that local contractions form black holes. Hawking and Ellis extrapolate the expansion back in time to the initial singularity.

These considerations lead us to a first argument for the identification of God with the initial singularity conceived as the source of the Universe.

(1) God has four attributes: it exists; it is eternal, prior to time; it is absolutely simple, prior to space; and it is the source of the universe.

(2) The initial singularity, predicted by the general theory of relativity, has the same four attributes: it exists; it pre-exists space and time; it has no structure; and it is the source of the universe.

Consequently, (3) we have the germ of a case for identifying the traditional God with the universe via the initial singularity.

Cosmologists understand the initial singularity to be the source of the “big bang”. How and why this happened is not known. This is not an explosion of stuff into existing space, but rather the creation of space-time and all that it contains.

The cosmic background radiation gives us evidence back to about 400 000 years after the beginning, and evidence from high energy physics is used to extrapolate back to very close to the beginning (Hinshaw et al).

Cosmological evidence for the existence of black holes is now very strong and the recent detection of gravitational waves gives us a new window on very high energy interactions close to the beginning (Begelman).

In traditional theology God remains eternally absolutely simple. Although the identification of God and the Universe made here seems quite plausible, we immediately run into the problem that the observable universe is exceedingly complex, so that its continued identification with a simple God becomes moot.

Back to top

14: Why did the universe "explode"

Why did the completely simple God of antiquity become the complex universe we inhabit? An answer is provided by the demands of consistency. God, mapping onto itself, gives rise to fixed points. All the structures we see in the universe are the fixed points of God.
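To make the idea concrete, here is a minimal sketch of a fixed point computation, assuming only the textbook picture of a contraction mapping a set onto itself; the cosine map stands in for the divine dynamics purely for illustration:

```python
# A minimal illustration of a fixed point: repeatedly applying a mapping
# of a set onto itself (here cos on [-1, 1]) converges to a point x* with
# f(x*) = x*, as fixed point theorems guarantee for suitably contractive maps.
import math

def fixed_point(f, x=0.0, tol=1e-12, max_iter=10_000):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx - x) < tol:
            return fx
        x = fx
    raise RuntimeError("no convergence")

x_star = fixed_point(math.cos)
print(x_star, math.cos(x_star))  # both ≈ 0.739085: a point the mapping leaves unmoved
```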

The notion of fixed points in the divine dynamics is not new. It is implicit in the Catholic doctrine of the Trinity. The transfinite network expands this idea from trinity to transfinity.

The first work in this direction was done by the Fathers of the Church in their attempts to reconcile the Christian notion that God is three Persons, Father, Son and Spirit, with the unitary God embodied in Jewish culture.

The first person to attempt an understanding of the relationship between the Father and the Son was the evangelist John, who wrote: In the beginning was the Word, and the Word was with God, and the Word was God (John 1:1). This idea was developed by Augustine, Aquinas and Lonergan to produce what we might call the standard model of the Trinity. Lonergan: Verbum, Augustine: The Trinity, Aquinas, Summa, I, 27, 1: Is there procession in God?, Lonergan: The Triune God, Doctrines, Lonergan: The Triune God, Systematics

The Son proceeds from the Father rather as the mental word proceeds from the mind. From an information processing point of view, the procession of the Word is an act of copying, producing an identical but distinct entity. In Aquinas' model the persons of the Trinity are differentiated by the real relationships between them.

Here we view the Trinity as an instance of the simplest element of a communication network. The "atom" of communication is two sources connected by a communication channel. In the Trinity the sources are Father and Son, and the channel between them is the Spirit, their love for one another. Or we might say that Father and Son are two fixed points in God, and the Spirit is their dynamical relationship, or that the Father and Son are fermions, and the Spirit is the boson through which they communicate.

In software terms we have a parent, a child and a communication protocol. Such units can communicate with one another to form more complex networks. We model these networks as computer networks, because there is plenty of clear mathematical theory available in this field, and it makes intuitive sense because we are natural communicators. The network technology we have developed in the last century provides us with concrete examples of the abstract ideas in question. The explosive growth of the internet gives us a slow motion glimpse of the big bang, the initial stages in the development of a network of fixed points in the divine dynamics. Tanenbaum: Computer Networks

A network is a structure of processors and memory. The processors read data from memory, possibly transform it in some way and write it back into memory. Here we imagine that the world is revealed through the emergence of fixed points in the divine dynamics so that there is no real distinction between God and the world. The natural numbers are infinite because we can always add 1. The transfinite numbers grow without limit because we can always create the set of permutations of the biggest one we have in hand. This network can therefore be imagined big enough to be put into correspondence with any physical Universe, no matter how large. The transfinite space expands as the Universe expands in both size and complexity.

Like everyday computer networks, the transfinite computer network is layered. A message passing between two users goes from the sender down through many layers of software to the physical layer, is transmitted through the physical layer to the recipient machine, and then moves up through software layers to the receiving user. This process is transparent to the users. From this point of view, all communication in the world is rooted in God.
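The transparency of layering can be sketched in a few lines of code. The layer names below are illustrative only; the point is that what crosses the channel is wrapped and unwrapped invisibly to the topmost users:

```python
# A toy layered protocol stack: each layer encodes the message handed down
# from above and decodes on the way up, so peers at the top communicate
# while the lower layers stay transparent.

class Layer:
    def __init__(self, name: str):
        self.name = name
    def encode(self, msg: str) -> str:
        return f"{self.name}[{msg}]"        # wrap with this layer's header
    def decode(self, msg: str) -> str:
        prefix = self.name + "["
        assert msg.startswith(prefix) and msg.endswith("]")
        return msg[len(prefix):-1]          # strip this layer's header

stack = [Layer("transport"), Layer("network"), Layer("physical")]

def send(msg: str) -> str:
    for layer in stack:                     # down through the layers
        msg = layer.encode(msg)
    return msg                              # what actually crosses the channel

def receive(signal: str) -> str:
    for layer in reversed(stack):           # back up through the layers
        signal = layer.decode(signal)
    return signal

wire = send("hello")
print(wire)            # physical[network[transport[hello]]]
print(receive(wire))   # hello: the layering is invisible to the users
```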

The question we have not yet addressed is how there can be so many identical particles and what distinguishes them. The answer would seem to be space-time location, which must therefore be something that has information value. Another open question: what do we do with the zero point energy?

The behaviour of an electron depends on the other particles it is communicating with, just as the behaviour of a person is modified by their friends. We cannot observe a 'bare' human being any more than we can observe a bare electron, since the very act of observation destroys the bareness. Equally, we cannot observe an inertial frame, because as soon as we observe it it is no longer inertial. All this fits in with the problems of quantum field theory, but by imagining the layered network, we should be able to imagine the 'bare' layers, starting from the most basic.

Things like boson and fermion we develop simply by the algorithm that in a pure symmetry all possibilities have an equal chance, and the number of possibilities is the cardinal of the symmetry. The cardinal of snowflake symmetry is 6, of coin symmetry 2, and of continuous (geometric) symmetry maybe ℵ1, the 'cardinal of the continuum'. In a way I have too many pieces to juggle, but the network layering principle may help to organize them.

Back to top

15: The mathematical theory of communication

The mathematical theory of communication shows that we can make communication error free by coding our messages into packets that are so far apart in message space that the probability of their confusion is negligible. Shannon sought the limits of error free communication over noiseless and noisy channels. The theory he developed is now well known and lies at the heart of communication networks worldwide. Claude Shannon: Communication in the Presence of Noise, Claude E Shannon: A Mathematical Theory of Communication, Khinchin: Mathematical Foundations of Information Theory

The validity of these strategies is illustrated by our current ability to send gigabytes of information error free over noisy phone lines. The quantization of communication at the microscopic level supports the hypothesis that our world is a communication network that has evolved to resist error. Wojciech Hubert Zurek: Quantum origin of quantum jumps: breaking of unitary symmetry induced by information transfer and the transition from quantum to classical

Computer networks require communication between independent computers. We assume that a stable network requires error free communication. We find, as a matter of observation, that the Universe is quantized. The reason for quantization is made clear by the mathematical theory of communication, which establishes the limits to error free communication over error prone physical channels.

Here we take the quantization of the Universe as evidence that it can be modelled as a communication system. Science proceeds by measurement. Shannon, who founded the mathematical theory of communication, saw that entropy can be used as a measure of information [Shannon 1948, Khinchin 1957]. The information carried by a point in any space is equivalent to the entropy of the space.

Entropy is simply a count, usually converted to a logarithm for ease of computation. In communication theory we imagine a message source A with a source alphabet of i letters ai, each of which has a probability of emission pi. The sum of these probabilities is taken to be 1, meaning that at any moment the source is emitting one and only one letter. The entropy H of such a source is defined to be H = −∑i pi log2 pi. By using the logarithm to base 2 we measure entropy (and information) in bits.
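This definition is easy to compute. A minimal sketch, assuming nothing beyond the formula above:

```python
# The entropy H = -sum(p_i * log2(p_i)) of a source, measured in bits,
# following the definition in the text.
from math import log2

def entropy(probabilities):
    assert abs(sum(probabilities) - 1.0) < 1e-9  # one and only one letter at a time
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(entropy([1/6] * 6))    # ≈ 2.585 bits: six equiprobable letters
print(entropy([0.9, 0.1]))   # ≈ 0.469 bits: a biased source carries less
```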

The mathematical theory of communication is not concerned with the meaning of messages, only with the rate of error free transmission of strings of symbols from a certain source over a certain channel.

Given this measure of information, Shannon sought limits on the rate of error free communication over noiseless and noisy channels [Shannon 1948]. The theory he developed is now well known and lies at the heart of communication engineering.

In essence, Shannon showed that by encoding messages into larger blocks or packets, these packets can be made so far apart in message space that the probability of confusing them (and so falling into error) approaches zero. This is identical to the quantization observed wherever we look in the Universe.

For a given channel, Shannon’s theorems define a maximum rate of information transmission C. A system that transmits without errors at the rate C is an ideal system (a numerical sketch of C follows the list below). Features of an ideal system that are relevant here are:

1. In order to avoid error, there must be no overlap between signals representing different messages. They must, in other words, be orthogonal, as with the eigenfunctions of a quantum mechanical basis [Zurek 2007]. In other words, error free communication demands quantization of messages.

2. Such ‘basis signals’ may be chosen at random in the signal space, provided only that they are orthogonal. The same message may be encoded in any orthogonal basis provided that the transformations used by the transmitter and receiver to encode and decode the message are modified accordingly.

3. The signals transmitted by an ideal system are indistinguishable from noise. The fact that a set of physical observations looks like a random sequence is not therefore evidence for meaninglessness. Until the algorithms used to encode and decode such a sequence are known, nothing can be said about its significance.

4. As the system approaches the ideal and the length of the transmitted signal increases, the delay at the transmitter while it takes in a chunk of message for encoding, and the corresponding delay at the receiver, increase indefinitely. The ideal rate C is only reached when packets comprise a countably infinite number of bits.

5. Only in the simplest cases are the mappings used to encode and decode messages linear and topological. For practical purposes, however, they must all be computable. In addition, in order to recover encoded messages, the computations used to encode messages must be invertible so that the decoded message is identical to the original.
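For a concrete feel for the rate C, here is a sketch using the Shannon-Hartley form C = B log2(1 + S/N); the bandwidth and signal-to-noise figures are illustrative guesses for a voice-grade line, not values from the text:

```python
# A numerical sketch of the maximum error free rate C for a noisy channel,
# via the Shannon-Hartley theorem C = B * log2(1 + S/N).
from math import log2

def capacity(bandwidth_hz: float, snr: float) -> float:
    """Channel capacity in bits per second."""
    return bandwidth_hz * log2(1.0 + snr)

# An assumed voice-grade phone line: ~3 kHz bandwidth, 30 dB SNR (S/N = 1000)
print(capacity(3_000, 1_000))   # ≈ 29,902 bits per second
```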

***Mathematics, when applied to physics, may be seen as a hierarchy of groups with increasingly complex rules of composition. We begin with the integers and the operations of addition and subtraction, then move to the real numbers with the operations of multiplication and division. The complex numbers may be combined by exponentiation and logarithms. The final development embraces groups of vectors and matrices and the operations defined on them.

Each of these steps required the development of a new way of expressing the concept of number and the establishment of proofs justifying various assertions about the new numbers, such as the existence of irrational numbers.

We might date the beginning of modern mathematical physics to the invention and application of differential and integral calculus. Calculus enables the free movement from finite to infinitesimal quantities and back again. A differential equation like the quantum mechanical energy equation expresses the relationship between infinitesimal quantities such as ∂φ/∂t and some other quantity to give us the equation iℏ ∂φ/∂t = Hφ.

We may view an analytic continuum as the carrier of either no information, or an infinite amount of information. From an engineering point of view a continuum carries no information other than that it is present. From the point set point of view, however, a continuum contains a transfinite number of points which may be placed into correspondence with the real numbers and can be used as a mark for the purpose of representing information.

The hope that a quantum computer may be more powerful than a Turing machine depends upon the latter view, but it may be that the Universe itself takes the engineering approach. The search for underlying analytic continuity is pointless if a continuum is not observable. This idea is consistent with Landauer's idea that information is physical.

What we observe is that networks based on computers are able to send messages error free through noisy environments, and that the existence of logical continuity in the Universe makes the existence of complex ordered structures possible.

Back to top

Quantum mechanics as we know it is based on continuous (that is analogue) computation. Can a digital computer produce the same results as quantum mechanics? In other words, is the Universe digital 'to the core' founded on logical rather than analytic continuity? From an algorithmic point of view, continuous symmetries represent nothing, that is they are computationally equivalent to no-operation. They are the boundaries of the observable Universe, equivalent in communication terms to no signal, only (perhaps) an unmodulated carrier. Carrier signal - Wikipedia

Like Parmenides, quantum theory proposes an invisible deterministic process underlying the observed world. Traditionally this invisible process is modelled by continuous mathematics. It seems that a digital computation would also be invisible. To transmit its instantaneous internal state, a computer must stop what it is doing and process a message to the outside observer. It is impossible for a computer to transmit every state of its operation to an observer because the transmission of messages is also a computation and the process would never halt.

Since Feynman devised quantum Hamiltonians that modelled the classical functions of logic, there has been a growing conviction that quantum processes may be modelled as computations. Feynman: Lectures on Computation

Quantum mechanics is based on the field of complex numbers. Complex numbers are essential to quantum mechanics first because they are periodic (and so can model the clock and all the other periodic processes in a computer), and second because of their arithmetic properties: we model interference by addition and changing phase (motion in space-time) by multiplication. These two features are combined in Feynman's path integral method to yield a fixed amplitude for various quantum processes. Complex number - Wikipedia, Path integral formulation - Wikipedia
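Both roles of complex arithmetic can be seen in a few lines; the phases chosen below are arbitrary illustrations:

```python
# Complex numbers doing the two jobs named above: addition models
# interference, multiplication by a unit phase models periodic motion.
import cmath

phi1 = cmath.exp(1j * 0.0)          # amplitude with phase 0
phi2 = cmath.exp(1j * cmath.pi)     # amplitude with phase pi

print(abs(phi1 + phi2) ** 2)        # ≈ 0: destructive interference
print(abs(phi1 + phi1) ** 2)        # 4.0: constructive interference

# multiplying by e^{i*theta} advances the phase, leaving |phi|^2 unchanged
rotated = phi1 * cmath.exp(1j * 0.3)
print(abs(rotated) ** 2)            # 1.0: a periodic, norm-preserving 'clock'
```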

A complex number has two orthogonal dimensions which communicate by multiplication, i² = −1. There is no problem implementing finite versions of complex arithmetic in a digital computer.

There are high hopes in the quantum computing community that we may eventually devise quantum mechanical computers more powerful than Turing machines. The essence of quantum computation's claim to greater power is that a formally perfect analogue machine can transform large sets of data (ie representations of real or complex numbers) in one operation. This assumption implies that state vectors can carry an infinite amount of information and that matrix operations on these vectors are in effect massively parallel computations, dealing with the complete basis of the relevant Hilbert space simultaneously.

The atomic process of a digital computer, on the other hand, is a one bit operation: p becomes not-p. However, the logical proof of the analogue contention is digital, using point set theory. Point set theory assumes that all points in a continuum are orthogonal and uniquely addressed by real numbers.

The epsilons and deltas in Weierstrass's formal definition of continuity are at every point in the limiting process definite numbers. As we approach the continuous limit, these numbers are believed to hold a definite functional relationship to one another even as their measures approach zero. From an algorithmic point of view, the strength of this argument lies in the assumption that this functional relationship holds. We find in the physical world, however, that there are no discrete symbols of measure zero: the smallest meaningful measure is Planck's constant. Karl Weierstrass - Wikipedia, Algorithmic information theory - Wikipedia

Cantor explicitly quantized the study of the continuum by inventing set theory which deals with 'definite and separate objects'. Cantor set out to measure the cardinal of the continuum using set theory. Cohen later showed this is not possible, since the concept of set is independent of (orthogonal to) cardinality, ie sets are symmetrical with respect to size, so that no information about a cardinal is available from purely set theoretical considerations. Cantor, Cohen

The formalism of quantum mechanics enjoys a similar symmetry: it is indifferent to the number of components in its vectors, that is, to the dimension of the Hilbert space of interest. We accept systems from one state up to the cardinal of the continuum, where the quantum formalism is used to represent classically continuous variables. This property is also a symmetry with respect to complexity and serves as a bridge to connect Hilbert spaces with any number of dimensions.

Logic enjoys a similar symmetry with respect to complexity, so that logical arguments about large and complex sets obey the same rules as logical arguments about atomic entities. Logical continuity (epitomized by current cosmology) thus carries us from the initial state of the Universe to its current state, and gives us the means to study the future. Algorithmic information theory - Wikipedia

***

Back to top

16: Why is the Universe quantized?

Why do we observe a quantized Universe? Here I propose that it is because we and every other entity in the Universe are parts of a digital computer network whose integrity is maintained by error free communication. To achieve this, the world must implement the mathematical theory of communication. We begin to model the Universe as a finite computer network like the internet. We extend this model mathematically to a network with a countable infinity of fundamental processes corresponding to the computable functions represented by halting Turing machines. Mathematical continuity is replaced by the more powerful notion of logical continuity, implemented formally by mathematical proof and practically by symbolic computing machines. Tanenbaum: Computer Networks, Alan Turing: On Computable Numbers, with an application to the Entscheidungsproblem

Writing is both formal and (usually) intelligible. It lies in the logical [psychological] realm. Nevertheless we do need some sort of underlying physical structure to keep the words in place, in this case paper. So we look for the fundamental logical connections of the universe in the gravitational era which lies at the origin of spacetime, or, should we say, the quantum or divine era of pure action.

Since all information is encoded physically, practical computer networks are built of a physical layer which correlates physical states or signals with the information in messages. This hardware layer is driven by various strata of software. A stable network requires error free communication so that the first software layer in practical networks is usually devoted to error detection and correction. Rolf Landauer: Information is a Physical Entity

An ‘atomic’ communication is represented by the transmission of a single packet from one source to another. Practical point to point communication networks connect many sources, all of which are assigned addresses so that addressed packets may be steered to their proper recipients. This ‘post office’ work is implemented by further network layers.

Each subsequent software layer uses the layer beneath it as an alphabet of operations to achieve its ends. The topmost layer, in computer networks, comprises human users. These people may be a part of a corporate network, reporting through further layers of management to the board of an organization. By analogy to this layered hierarchy, we may consider the Universe as a whole as the ultimate user of the universal network.

Processes in corresponding layers (‘peers’) of two nodes in a network may communicate if they share a suitable protocol. All such communication uses the services of all layers between the peers and the physical layer. These services are generally invisible or transparent to the peers unless they fail. Thus two people in conversation are generally unaware of the huge psychological, physiological and physical complexity of the systems that make their communication possible.

Back to top

17: The Universe as a communication network

We can approach the existence and underlying dynamics of fixed points in the Universe from another direction, by considering the Universe as a communication network. The mathematical basis for an error free network was developed by Claude Shannon. Claude Shannon: Communication in the Presence of Noise

Errors are caused by noise in the signal space. The fundamental strategy for error correction is to make the signal space so large that legitimate messages can be placed so far apart that their probability of confusion is minimal. These messages are, in effect, orthogonal, quantized or digitized.

A conversation requires two sources and a communication channel between them. The aim of communication is to transmit a true copy of a set of data from one point in space-time to another within the forward light cone of the origin. Meaning is not an issue for a communication engineer.

The input to the transmitter is a message, a string of symbols which may be represented by a point in message space. The transmitter feeds the channel with a signal, the point in signal space corresponding to the point in message space. The encoding and decoding between message and signal map between the two spaces.

Two conditions are required to overcome error. First the size of the signal space must be made so great that the error balls surrounding each point in the signal space do not overlap, which would enable one signal to be confused with another. Second, the transmitter must establish a unique mapping between messages and signals which can be inverted by the receiver. The algorithms for encoding and decoding messages in order to achieve the first condition must be computable.
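A toy version of this strategy makes the point. The repetition code below (block length and error rate are illustrative choices, not from the text) places the two legitimate signals at opposite corners of signal space and decodes by majority vote, so that their error balls do not overlap:

```python
# A minimal error-correcting sketch in the spirit of Shannon's strategy:
# encode each bit as a block of identical bits, separating the legitimate
# signals (00000 and 11111) so far apart that isolated channel errors
# cannot cause confusion; decode by majority vote (a computable inverse).
import random

N = 5  # block length: the two legitimate signals differ in all N positions

def encode(bits):
    return [b for bit in bits for b in [bit] * N]

def noisy_channel(signal, p_flip=0.1):
    return [b ^ 1 if random.random() < p_flip else b for b in signal]

def decode(signal):
    blocks = [signal[i:i + N] for i in range(0, len(signal), N)]
    return [1 if sum(block) > N // 2 else 0 for block in blocks]

message = [1, 0, 1, 1, 0]
received = decode(noisy_channel(encode(message)))
print(message == received)   # True with high probability, despite the noise
```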

A quantum measurement may be considered as a communication source. Communication theory characterizes a source S by its alphabet of symbols si and the corresponding probabilities pi of emission of each of the symbols. These probabilities are normalized by the requirement Σi pi = 1, ie the source emits one and only one symbol at a time. The symbols emitted by a quantum measurement are the eigenvalues of the measurement operator and their normalized frequencies are predicted by the Born rule. We see that quantum sources emit discrete orthogonal symbols as required by the theory of communication.

The coincidences between the mathematical theory of communication and the output of quantum mechanics suggest that we picture the Universe as a communication network. In this picture, quantum observations or measurements are seen as the transmission of messages between quantum systems.

Freedom from error also requires that the operations of mappings from message to signal and its inverse be deterministic. This requirement suggests the guess that the total set of eigenfunctions of the Universe is the set of computable functions and so is equivalent to the first transfinite cardinal, ℵ0. Computable function - Wikipedia

This identification is equivalent to the quantum mechanical trick of placing the system under study in a finite box to select a finite number of states. The box here is the set of reversible computable functions embodied in the measurement operator. This operator has an orthogonal basis which interprets the unknown quantum state being measured in terms of this basis. We might say that it induces a basis in the Hilbert space being observed which, before the observation, had no particular basis. The orthogonal basis of a Hilbert space has the same number of dimensions as the space itself. Insofar as a basis is countable with respect to the uncountable set of possible vectors in a Hilbert space, there is a high probability that there are a large number of possible orthogonal bases.

Back to top

18: Quantum mechanics describes a communication network

Quantum mechanics is a formal description of a computer network. This network is constructed in Hilbert space, a complex function space capable of describing the dynamics of computation. It describes how we observe the results of these computations.

The mathematical formalism of quantum mechanics assumes that the state space of the physical Universe can be represented by state vectors in complex Hilbert space of finite or infinite dimension. The joint state of two communicating quantum systems is represented by vectors in the tensor product space of the Hilbert spaces of the constituent systems.

The machinery of quantum mechanics is embodied in matrices that map Hilbert space into and onto itself. We interpret a quantum mechanical event as an interaction of sources of information, that is, an act of communication. We have seen that a source A has an alphabet of the i symbols that it is capable of sending and receiving, ai. The symbols used in quantum mechanics are the eigenvalues, which are produced by the operation of the orthogonal set of eigenvectors of the measurement operator.

Communication theory assumes that a source deals with one symbol at a time, so that the sum of the probabilities pi of emission of the symbols is Σi pi = 1. This condition is enforced in quantum mechanics by normalization.

Probabilities in quantum mechanics are computed as the absolute square of the wave function φ, P = |φ|². The normalization of probability requires that the sum of the absolute squares of the coefficients of the basis vectors of φ must be 1.
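The bookkeeping is easily checked. A minimal sketch with an arbitrary illustrative state vector:

```python
# Born rule bookkeeping: the squared magnitudes of the coefficients of a
# normalized state vector behave exactly like the letter probabilities
# p_i of a communication source, summing to 1.
import numpy as np

phi = np.array([3 + 4j, 1 - 2j, 2j])   # arbitrary illustrative amplitudes
phi = phi / np.linalg.norm(phi)        # normalization

p = np.abs(phi) ** 2                   # P_i = |phi_i|^2
print(p)                               # one probability per basis state
print(p.sum())                         # ≈ 1.0: one and only one letter at a time
```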

The continuous evolution of state vectors in an isolated quantum system is described by unitary operators on their Hilbert space governed by Schroedinger’s equation. Since such a system is isolated, however, this continuous evolution is not directly observed but is inferred from the observed success of its consequences.

Mathematically this evolution is deterministic and reversible so that we may think of it as a process of encoding the same message in different bases. The Schroedinger equation applies equally at all energies and all levels of complexity of state vectors. The only truly isolated system is the Universe as a whole, represented in its simplest state by the initial singularity. Hawking & Ellis: The Large-Scale Structure of Space-time

The continuous evolution of an isolated quantum system is understood to be interrupted by an observation or measurement. When we observe a system, we do not see the whole continuous system, but only one or other of the basis states (eigenvectors) of the operator we use to observe the system. The mathematical formalism of quantum mechanics cannot predict which eigenvector we will observe, only the relative frequencies of the observed eigenvalues.

Zurek has shown that this restriction on the completeness of observation is necessary if we are to obtain information from a quantum system. This suggests that the quantization of observation and the requirements of mathematical communication theory are consistent with one another. From a communication point of view, quantum mechanics does not reveal actual messages but rather the traffic on various links. If we assume that the transmission of a message corresponds to a quantum of action, the rate of transmission in a channel is equivalent to the energy on that channel.

Further, the statistical properties of quantum observations are identical to the statistical properties of a communication source. Like the probabilities of emission of the various letters of a source, the probabilities of observing the various eigenstates of a quantum system are normalized to 1. This constraint is established in quantum theory by the unitarity of the evolution and observation operators. This leads us to think of the eigenstates of a quantum observation as the letters of the alphabet of a communication source.

From an abstract point of view there is but one Hilbert space of each dimensionality and there is no preferred set of orthonormal basis states. The transformation approach to quantum mechanics pioneered by Dirac shows how one basis may be converted into another by unitary operators which preserve orthonormality. Dirac: The Principles of Quantum Mechanics

We may see the communication theoretic equivalent of quantum mechanical transformations as the computational transformation of messages between different encodings using different alphabets.

Back to top

19: The transfinite computer network: a bridge between physics and metaphysics

Engineered networks are layered, a technology necessary to make them easy to construct, expand and troubleshoot. It has long been noticed that the world itself is layered, larger things being built out of smaller ones until we come to an ultimate atom. Tanenbaum: Computer Networks

We transform this idea into a transfinite network by mapping the layers of the universal network onto the sequence of transfinite numbers, beginning by letting the natural numbers correspond to the physical layer of the Universe. The eigenfunctions of this physical layer are the countably infinite set of Turing machines.

Each subsequent software layer uses the layer beneath it as an alphabet of operations to achieve its ends. The topmost layer, in engineered networks, comprises human users. These people may be a part of a corporate network, reporting through further layers of management to the board of an organization.

By analogy to this layered hierarchy, we may consider the Universe as a whole as the ultimate user of the universal network. Since the higher layers depend on the lower layers for their existence, we can expect an evolutionary tendency for higher layers to curate their alphabets so as to maintain their own stability.

Let us imagine that the actual work of permutation in the symmetric universe (ie its dynamics) is executed by Turing machines. As formal structures these Turing machines are themselves ordered sets, and are to be found among the ordered strings contained in the Universe.

The installation of these Turing machines turns the transfinite universe into the transfinite network. This network is a set of independent memories able to communicate with and change one another via Turing machines. The internet is a finite example of such a network, the memories of servers, routers, clients and users changing each other’s states through communication.

It seems clear that the transfinite network has sufficient variety to be placed in one-to-one correspondence with any structure or process in the Universe. In a case where a given layer of the network universe is found to be too small to accommodate the system of interest, we have only to move up through the layers until we find a level whose cardinal is adequate for the task. Ashby: Cybernetics

Permutations can be divided into subsets or cycles of smaller closed permutations. This process means that no matter what the cardinal of a permutation, we can find finite local permutations whose action nevertheless permutes the whole Universe. Moving my pen from a to b (and moving an equivalent volume of air from b to a) is such an action. Permutation - Wikipedia

Although there are ℵ1 mappings of the ℵ0 natural numbers onto themselves, there are only ℵ0 different Turing machines. As a consequence, almost all mappings are incomputable, and so cannot be generated by a deterministic process. Nevertheless a mapping once discovered may be tested by a computable process. Here we see an echo of the P versus NP problem. P versus NP problem - Wikipedia
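The gap between the mappings and the machines is Cantor's diagonal argument in another dress. A sketch, with an illustrative stand-in for the enumeration:

```python
# Cantor's diagonal move: given any countable enumeration of binary
# sequences (standing in for the outputs of the countably many Turing
# machines), construct a sequence that differs from the n-th entry at
# position n, so it appears nowhere in the list. Hence almost all
# mappings elude any countable enumeration.

def enumerated(n: int, k: int) -> int:
    """Bit k of the n-th listed sequence (an illustrative choice only)."""
    return (n >> k) & 1

def diagonal(k: int) -> int:
    """A sequence guaranteed to be missing from the enumeration."""
    return 1 - enumerated(k, k)

print([diagonal(k) for k in range(8)])
# differs from sequence k at bit k, for every k
```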

From a communication point of view, quantum mechanics does not reveal actual messages but rather the traffic on various links. If we assume that the transmission of a message corresponds to a quantum of action, the rate of transmission in a channel is equivalent to the energy on that channel, information encoded in the energy operator, H.

Further, the collapse of the wave function may be analogous to the completion of a halting computation. The completion of a computation is associated with a quantum of action. Eigenfunctions are orthogonal to one another to prevent error. Every eigenfunction has an inverse to decode the message it has encoded. Wave function collapse - Wikipedia

Back to top

#*#

How do we hook transfinite numbers onto Hilbert spaces? First we associate rays [axes, base states] in Hilbert space with computers and establish a one-to-one correspondence between dimensions in Hilbert space and orthogonal computations or algorithms. This enables a one-to-one correspondence between the dimensions and the natural numbers, giving us a Hilbert space whose cardinal we can call ℵ0. We then apply permutation to get a function space of the natural numbers onto themselves, whose cardinal is ℵ1. From the point of view of ℵ1, ℵ0 has measure zero, so we can imagine the ℵ1 Hilbert spaces as orthogonal to the whole ℵ0 space. We can go on in this vein with no upper bound, as Cantor has shown. The system may look exceedingly big to model the universe, which looks relatively discrete and finite from a local point of view. There are a number of ways to deal with this.

The first arises from the distinction between 'platonic' and 'machine' infinity, which enables us to set ℵ0 to 2 or any other finite number without affecting the logic of Cantor's proof. The second is the effect of natural selection, which picks out dynamic structures that are 'self-bootstrapping', able to reproduce and propagate themselves. Even though such systems may be exceedingly improbable from a random variation point of view, the vast transfinite numbers almost guarantee their realization, and their realization greatly reduces the resources available for other possibilities by reproductively sequestering resources. Further, although this system is very large it is constructed of 'atoms' of communication which we intuitively understand, since we are born communicators. [The diminution of resources is a result of the various conservation laws that operate in nature.]

. . . We want to argue that the universe is the mind of God. This argument has three steps: 1: that the universe is a network; 2: that networks are the foundation of mind; and 3: that the universe is self-sufficient.

Feynman 1995, 2003: Gravitation: Hatfield page xxxiii: 'The charge associated with gravitation is mass, which we expect from special relativity to be equivalent to energy. Since everything that we know about has energy, it appears that gravity should couple to everything. The particle that mediates the gravitational force is called the graviton. Since a graviton has energy, gravitons can directly interact with each other.' This seems all wrong and I like the geometrical version better, remembering that energy-momentum behave like geometry (Feynman 1997 page 105). Richard Feynman: Feynman Lectures on Gravitation, Feynman: Six Not So Easy Pieces

We are equating intelligent design with evolution by natural selection and proposing that random processes occasionally come up with systems that are consistent enough to maintain their own existence in the face of error, that is, reproduce [retry]. We think that the development of the brain may work the same way: large numbers of random connections, ie random processes, being pruned by the elimination of those that don't turn out to be consistent with the system [just like a university, excluding the failures]. Now I am thinking of a similar mechanism to explain insight: when an idea propagates to a sufficient number of connections it becomes conscious, like this idea has done, and we are beginning to see more clearly how intelligence is a natural feature of networks, trying everything and every now and then coming up with a breakthrough, an insight whose consequences in the wider world may be good or bad, like the insight that led to the construction of nuclear weapons [or the wheel]. Maybe this idea has some connection to scepticism, stress testing intellectual products to see whether they are a good safe investment. Jesse J. Prinz: Is Attention Necessary and Sufficient for Consciousness?

We can begin with a very simple machine. We input one, it operates and halts, outputting zero. We can couple this to another almost identical machine [using the same algorithm ('not')]. We input 0 (coming perhaps from the previous machine) and it operates and halts, outputting 1. These two machines, operating sequentially, have effectively done nothing, yet we can think of an endless string of them as a clock, going tick-tock, or as an infinite wave train. The Nyquist-Shannon theorem tells us that the two machines can reproduce a wave, in effect 'sampling' it at twice its frequency, and one complete cycle is equivalent to one quantum of action, given the Planck-Einstein relation E = hf. Nyquist-Shannon sampling theorem - Wikipedia
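The tick-tock chain is easily simulated; the code below is a toy illustration of the two coupled 'not' machines:

```python
# Two 'not' machines coupled in a loop, as described above: each halts
# after inverting its input, and together they tick 0, 1, 0, 1, ... :
# a clock, or a sampled square wave at two samples per cycle, the
# Nyquist minimum for reproducing a wave of that frequency.

def not_machine(bit: int) -> int:
    """Input one bit, operate, halt, output its inverse."""
    return 1 - bit

state = 1
ticks = []
for _ in range(8):        # run the tick-tock chain
    state = not_machine(state)
    ticks.append(state)

print(ticks)              # [0, 1, 0, 1, 0, 1, 0, 1]
# one full cycle = two operations; by E = hf, one cycle ~ one quantum of action
```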

We can imagine more complex ways of doing nothing, constructing machines that input one binary number and output another, connected to another machine so that the output of the first becomes the input of the second, whose output is identical to the input of the first, a simple codec. So where does this idea lead? At present it is just another brick for a wall yet to be built, but maybe it can serve as a digital model of the superposition of many frequencies, each corresponding to a sequence of not operations, by which this last machine remains linear and lossless as it does its work. Codec - Wikipedia

Feynman 1995 page 9: The total gravitational energy of all the particles in the universe is something like GM²/R where R = Tc and T is the Hubble time. If we compare this to the total rest energy of the universe we find that GM²/R = Mc², so the total energy of the universe is zero. Does not make a lot of sense, but I love it. Zero-energy universe - Wikipedia
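Feynman's estimate can be checked to order of magnitude. The mass and Hubble-time figures below are rough illustrative estimates, not measurements:

```python
# An order-of-magnitude check of Feynman's observation that GM^2/R ≈ Mc^2,
# so that gravitational and rest energies roughly cancel.

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 3.0e8            # m/s
T = 4.4e17           # s, roughly the Hubble time (assumed value)
M = 1.5e53           # kg, a common rough estimate for the observable universe
R = c * T            # the Hubble radius

print(G * M**2 / R)  # ≈ 1.1e70 J, the gravitational energy scale
print(M * c**2)      # ≈ 1.4e70 J, the total rest energy: same order of magnitude
```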

Friday 26 April 2019

Feynman 1995 page 13: 'The traditional description of the total quantum mechanics of the world by a complete Monster Wavefunction (which includes all observers) obeying a Schrödinger equation

iℏ ∂Ψ/∂t = HΨ

implies an incredibly complex infinity of amplitudes' [given Ψ a vector in a transfinite dimensional Hilbert space].

From an ancient point of view the principal objection to making the universe divine is space-time, since the ancient god is considered to be outside space and time. Another way of expressing this is that the divinity is absolutely simple [and it is assumed that space-time is complex, full of stuff]. The generic cure for this problem is fixed point theory, which basically tells us that it would be inconsistent for there not to be an answer. The next step may be based on the idea that quantum mechanics is the theory of everything and we can find fixed points in quantum mechanics which arise from observation, which we can, in the traditional world, think of as the Father mapping itself onto itself to yield the Son. This too seems good. Next we observe from the Minkowski metric that space and time seem to be inverses of one another, so that they can be seen as emergent from some prior unitary system [time being the dynamic aspect, space being the kinematic aspect]. What we need, therefore, is a link between quantum mechanics and Minkowski space, which we can guess to be something to do with the emergence of energy-momentum from energy. How does this work? Always we need to find an explanatory mechanism which can be expressed in symbols, written down as a step forward from Feynman's Monster wavefunction. We might see the clue in Feynman's three rules of quantum theory: 1: P = probability, φ = probability amplitude, P = |φ|²; 2: when an event can occur in several different ways, the probability amplitude of the event is the sum of the amplitudes for each way considered separately, and there is interference: φ = φ1 + φ2, P = |φ1 + φ2|²; 3: when it is possible to see which path is actually taken, P = P1 + P2. So the ancient conundrum: observability determines what happens. Feynman, Leighton & Sands FLP III:01: Quantum Behaviour
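These three rules can be run numerically. The amplitudes below are arbitrary illustrative values; the point is the difference between adding amplitudes and adding probabilities:

```python
# Feynman's three rules, run numerically: amplitudes add when the paths
# are indistinguishable (interference); probabilities add when the path
# can be seen.
import cmath

phi1 = 0.6 * cmath.exp(1j * 0.0)      # amplitude for path 1
phi2 = 0.6 * cmath.exp(1j * 2.5)      # amplitude for path 2

# Rule 2: paths indistinguishable, so add amplitudes, then square
P_interfering = abs(phi1 + phi2) ** 2

# Rule 3: path observable, so add the separate probabilities
P_observed = abs(phi1) ** 2 + abs(phi2) ** 2

print(P_interfering)   # ≈ 0.143 here: depends on the relative phase
print(P_observed)      # 0.72: no interference once the path is seen
```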

Symmetry: an invisible action. Feynman, Leighton & Sands FLP I:52: Symmetry in Physical Laws

We take the fundamental symmetry to be conservation of angular momentum, so physical laws are unchanged if the phase of the wave function is shifted by an arbitrary constant (since the superposition is unchanged: φ' = φe^(iθ), |φ|² = |φ'|²).
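
A quick numerical sketch (illustrative amplitudes only) shows why such a phase shift is invisible: it rotates the whole superposition at once, leaving every probability untouched.

    import cmath

    theta = 1.234                     # an arbitrary constant phase
    shift = cmath.exp(1j * theta)     # e^(i*theta)

    phi = [0.6 + 0.3j, -0.5 + 0.4j]   # illustrative amplitudes
    phi_shifted = [a * shift for a in phi]

    # Individual probabilities are unchanged ...
    for a, b in zip(phi, phi_shifted):
        assert abs(abs(a)**2 - abs(b)**2) < 1e-12

    # ... and so is the interference pattern, since the superposition
    # is rotated as a whole: |phi1' + phi2'|^2 == |phi1 + phi2|^2.
    assert abs(abs(sum(phi_shifted))**2 - abs(sum(phi))**2) < 1e-12
    print("a global phase shift is invisible")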

'The conservation law which is connected with the quantum mechanical phase seems to be the conservation of electrical charge'. Here we have a connection to the velocity of light and the structure of space-time, but what? What did Feynman know? Charge conservation - Wikipedia

Why does the absolute square of the probability amplitude give a probability? Because the invisibility theorem assumes that quantum processes can only do one thing at a time, so they are time division multiplexed (because they are one dimensional 'field' theories) (Zee). Communicating is a process just like any other, so a system has to stop whatever it is doing to process the message that tells us what it is doing; but what it is doing then is communicating, and it has to stop that to communicate, setting up an infinite regress which says that it basically has to stop everything to communicate. So the dynamic motion described by the evolution of the probability amplitude must remain hidden when an observer demands a readout on what the system is doing, and the system yields the solution to the wave equation that it was working on when this interruption [observation] came.
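
As a toy model of this picture (my own illustration, not a standard construction), think of the system as a process that can either compute or communicate, never both: observation interrupts the hidden evolution and reads off whatever solution the process holds at that moment.

    import cmath

    def evolve(steps: int, omega: float = 0.1):
        """Evolve a one-dimensional 'wavefunction' phase step by step,
        yielding the state the process holds at each tick."""
        psi = 1.0 + 0.0j
        for _ in range(steps):
            psi *= cmath.exp(1j * omega)   # hidden dynamic motion
            yield psi

    process = evolve(steps=1000)
    for _ in range(137):                   # the system runs unobserved ...
        psi = next(process)

    # ... until an observer demands a readout: the evolution stops and
    # the currently held solution is all that is ever communicated.
    print(f"observed |psi|^2 = {abs(psi)**2:.3f}; the motion stays hidden")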

No agent can concentrate on doing two things at once: e.g. fighting.

Let us say that potential is the fixed point of the action mapping onto itself, so the Son is the potential corresponding to the actual Father.

We continue to suck physics out of the Trinity.


Copyright:

You may copy this material freely provided only that you quote fairly and provide a link (or reference) to your source.

Further reading

Books

Carroll, Lewis, Sylvie and Bruno, Dover 1988 'Novel for children by Lewis Carroll published in 1889. The work evolved from his short story "Bruno's Revenge," published in 1867 in Aunt Judy's Magazine. With its sequel, Sylvie and Bruno Concluded (1893), it was his final work for children. The novel attained some popularity, but was considered puzzling and disjointed. Containing more banter between the titular siblings than plot, the convoluted story operates on two parallel levels, one realistic and didactic, and the other dreamlike and fantastic. It includes elements of fairy tales (Sylvie and Bruno are fairy children bent on doing good works and saving a throne), sentimental moralizing, and edifying episodes espousing social reform.' -- The Merriam-Webster Encyclopedia of Literature
Amazon
  back

Hawking, Stephen W, and G F R Ellis, The Large Scale Structure of Space-Time, Cambridge UP 1975 Preface: 'Einstein's General Theory of Relativity . . . leads to two remarkable predictions about the universe: first that the final fate of massive stars is to collapse behind an event horizon to form a 'black hole' which will contain a singularity; and secondly that there is a singularity in our past which constitutes, in some sense, a beginning to our universe. Our discussion is principally aimed at developing these two results.'
Amazon
  back

Hille, Einar, Analytic Function Theory, Volume 2, Chelsea 1973 Foreword: 'Volume II ... is a direct continuation of volume I.'
Amazon
  back

Links

2019 redefinition of SI base units - Wikipedia, 2019 redefinition of SI base units - Wikipedia, the free encyclopedia, 'The kilogram, ampere, kelvin, and mole will then be defined by setting exact numerical values for the Planck constant (h), the elementary electric charge (e), the Boltzmann constant (k), and the Avogadro constant (NA), respectively. The metre and candela are already defined by physical constants, subject to correction to their present definitions. The new definitions aim to improve the SI without changing the size of any units, thus ensuring continuity with existing measurements.' back

Algorithmic information theory - Wikipedia, Algorithmic information theory - Wikipedia, the free encyclopedia, 'Algorithmic information theory is a subfield of information theory and computer science that concerns itself with the relationship between computation and information. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."' back

Aquinas, Summa: I, 14, 1, Is there knowledge in God?, 'I answer that, In God there exists the most perfect knowledge. To prove this, we must note that intelligent beings are distinguished from non-intelligent beings in that the latter possess only their own form; whereas the intelligent being is naturally adapted to have also the form of some other thing; for the idea of the thing known is in the knower. Hence it is manifest that the nature of a non-intelligent being is more contracted and limited; whereas the nature of intelligent beings has a greater amplitude and extension; therefore the Philosopher says (De Anima iii) that "the soul is in a sense all things." Now the contraction of the form comes from the matter. Hence, as we have said above (Question 7, Article 1) forms according as they are the more immaterial, approach more nearly to a kind of infinity. Therefore it is clear that the immateriality of a thing is the reason why it is cognitive; and according to the mode of immateriality is the mode of knowledge. Hence it is said in De Anima ii that plants do not know, because they are wholly material. But sense is cognitive because it can receive images free from matter, and the intellect is still further cognitive, because it is more separated from matter and unmixed, as said in De Anima iii. Since therefore God is in the highest degree of immateriality as stated above (Question 7, Article 1), it follows that He occupies the highest place in knowledge.' back

Atomic clock - Wikipedia, Atomic clock - Wikipedia, the free encyclopedia, 'An atomic clock is a clock device that uses an electronic transition frequency in the microwave, optical, or ultraviolet region of the electromagnetic spectrum of atoms as a frequency standard for its timekeeping element.' back

Black-body radiation - Wikipedia, Black-body radiation - Wikipedia, the free encyclopedia, 'Black-body radiation is the type of electromagnetic radiation within or surrounding a body in thermodynamic equilibrium with its environment, or emitted by a black body (an opaque and non-reflective body) held at constant, uniform temperature. The radiation has a specific spectrum and intensity that depends only on the temperature of the body.' back

Chelsea Gohd, How many photons has the universe produced in its life?, 'The team found that the amount of starlight, or the number of photons (particles of visible light) that stars have emitted throughout the history of the observable universe is 4×10^84 photons. . . . "By using blazars at different distances from us, we measured the total starlight at different time periods. We measured the total starlight of each epoch — one billion years ago, two billion years ago, six billion years ago, etc. — all the way back to when stars were first formed. This allowed us to reconstruct the EBL [Extragalactic Background Light] and determine the star-formation history of the universe in a more effective manner than had been achieved before," Vaidehi Paliya, a co-author and postdoctoral fellow who analyzed almost nine years of relevant data, said in a statement.' back

Classical physics - Wikipedia, Classical physics - Wikipedia, the free encyclopedia, 'Classical physics refers to theories of physics that predate modern, more complete, or more widely applicable theories. If a currently accepted theory is considered to be modern, and its introduction represented a major paradigm shift, then the previous theories, or new theories based on the older paradigm, will often be referred to as belonging to the realm of "classical physics".' back

Cosmological constant problem - Wikipedia, Cosmological constant problem - Wikipedia, the free encyclopedia, 'In cosmology, the cosmological constant problem or vacuum catastrophe is the disagreement between measured values of the vacuum energy density (the small value of the cosmological constant) and the zero-point energy suggested by quantum field theory. Depending on the assumptions[which?], the discrepancy ranges from 40 to more than 100 orders of magnitude, a state of affairs described by Hobson et al. (2006) as "the worst theoretical prediction in the history of physics." ' back

Gustav Kirchhoff (1860), Ueber das Verhältniss zwischen dem Emissionsvermögen und dem Absorptionsvermögen der Körper für Wärme und Licht, Translated by Guthrie, F. as Kirchhoff, G. (1860). "On the relation between the radiating and absorbing powers of different bodies for light and heat". Philosophical Magazine. Series 4. 20: 1–21. back

History of logic - Wikipedia, History of logic - Wikipedia, the free encyclopedia, 'The history of logic deals with the study of the development of the science of valid inference (logic). Formal logics developed in ancient times in India, China, and Greece. Greek methods, particularly Aristotelian logic (or term logic) as found in the Organon, found wide application and acceptance in Western science and mathematics for millennia. The Stoics, especially Chrysippus, began the development of predicate logic.' back

Hydrogen atom - Wikipedia, Hydrogen atom - Wikipedia, the free encyclopedia, 'A hydrogen atom is an atom of the chemical element hydrogen. The electrically neutral atom contains a single positively charged proton and a single negatively charged electron bound to the nucleus by the Coulomb force. Atomic hydrogen constitutes about 75% of the elemental (baryonic) mass of the universe.' back

Hydrogen spectral series - Wikipedia, Hydrogen spectral series - Wikipedia, the free encyclopedia, ' The emission spectrum of atomic hydrogen has been divided into a number of spectral series, with wavelengths given by the Rydberg formula. These observed spectral lines are due to the electron making transitions between two energy levels in an atom. The classification of the series by the Rydberg formula was important in the development of quantum mechanics. The spectral series are important in astronomical spectroscopy for detecting the presence of hydrogen and calculating red shifts.' back

The Assayer - Wikipedia, The Assayer - Wikipedia, the free encyclopedia, ' Philosophy [i.e. physics] is written in this grand book — I mean the Universe — which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language and interpret the characters in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometrical figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering around in a dark labyrinth.' back

Lemma (mathematics) - Wikipedia, Lemma (mathematics) - Wikipedia, the free encyclopedia, 'In mathematics, a "helping theorem" or lemma (plural lemmas or lemmata) is a proven proposition which is used as a stepping stone to a larger result rather than as a statement of interest by itself.[1] The word derives from the Ancient Greek λῆμμα ("anything which is received, such as a gift, profit, or a bribe").' back

Leopold Kronecker - Wikipedia, Leopold Kronecker - Wikipedia, the free encyclopedia, 'Leopold Kronecker (December 7, 1823 – December 29, 1891) was a German mathematician who worked on number theory and algebra. He criticized Cantor's work on set theory, and was quoted by Weber (1893) as having said, "God made natural numbers; all else is the work of man".' back

Mathematical proof - Wikipedia, Mathematical proof - Wikipedia, the free encyclopedia, 'In mathematics, a proof is an inferential argument for a mathematical statement. In the argument, other previously established statements, such as theorems, can be used. In principle, a proof can be traced back to self-evident or assumed statements, known as axioms, along with accepted rules of inference.' back

Narrative - Wikipedia, Narrative - Wikipedia, the free encyclopedia, ' A narrative or story is an account of a series of related events, experiences, or the like, whether true (episode, vignette, travelogue, memoir, autobiography, biography) or fictitious (fairy tale, fable, story, epic, legend, novel). The word derives from the Latin verb narrare (to tell), which is derived from the adjective gnarus (knowing or skilled). Along with exposition, argumentation and description, narration, broadly defined, is one of four rhetorical modes of discourse. More narrowly defined, it is the fiction-writing mode in which the narrator communicates directly to the reader.' back

Panpsychism - Wikipedia, Panpsychism - Wikipedia, the free encyclopedia, 'In philosophy, panpsychism is the view that consciousness, mind, or soul (psyche) is a universal and primordial feature of all things. Panpsychists see themselves as minds in a world of mind. Panpsychism is one of the oldest philosophical theories, and has been ascribed to philosophers like Thales, Parmenides, Plato, Averroes, Spinoza, Leibniz, and William James. Panpsychism can also be seen in ancient philosophies such as Stoicism, Taoism, Vedanta and Mahayana Buddhism.' back

Planck's Law - Wikipedia, Planck's Law - Wikipedia, the free encyclopedia, 'In physics, Planck's law describes the spectral radiance of electromagnetic radiation at all wavelengths from a black body at temperature T, as a function of frequency ν.' back

Real line - Wikipedia, Real line - Wikipedia, the free encyclopedia, 'In mathematics, the real line, or real number line is the line whose points are the real numbers. That is, the real line is the set R of all real numbers, viewed as a geometric space, namely the Euclidean space of dimension one. It can be thought of as a vector space (or affine space), a metric space, a topological space, a measure space, or a linear continuum.' back

Turing machine - Wikipedia, Turing machine - Wikipedia, the free encyclopedia, 'A Turing machine is a hypothetical device that manipulates symbols on a strip of tape according to a table of rules. Despite its simplicity, a Turing machine can be adapted to simulate the logic of any computer algorithm, and is particularly useful in explaining the functions of a CPU inside a computer. The "machine" was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). The Turing machine is not intended as practical computing technology, but rather as a hypothetical device representing a computing machine. Turing machines help computer scientists understand the limits of mechanical computation.' back

W. F. McGrew et al, Atomic clock performance enabling geodesy below the centimetre level, ' The passage of time is tracked by counting oscillations of a frequency reference, such as Earth’s revolutions or swings of a pendulum. By referencing atomic transitions, frequency (and thus time) can be measured more precisely than any other physical quantity, with the current generation of optical atomic clocks reporting fractional performance below the 10−17 level. However, the theory of relativity prescribes that the passage of time is not absolute, but is affected by an observer’s reference frame. Consequently, clock measurements exhibit sensitivity to relative velocity, acceleration and gravity potential. Here we demonstrate local optical clock measurements that surpass the current ability to account for the gravitational distortion of space-time across the surface of Earth. In two independent ytterbium optical lattice clocks, we demonstrate unprecedented values of three fundamental benchmarks of clock performance. In units of the clock frequency, we report systematic uncertainty of 1.4 × 10−18, measurement instability of 3.2 × 10−19 and reproducibility characterized by ten blinded frequency comparisons, yielding a frequency difference of [−7 ± (5)stat ± (8)sys] × 10−19, where ‘stat’ and ‘sys’ indicate statistical and systematic uncertainty, respectively. Although sensitivity to differences in gravity potential could degrade the performance of the clocks as terrestrial standards of time, this same sensitivity can be used as a very sensitive probe of geopotential. Near the surface of Earth, clock comparisons at the 1 × 10−18 level provide a resolution of one centimetre along the direction of gravity, so the performance of these clocks should enable geodesy beyond the state-of-the-art level. These optical clocks could further be used to explore geophysical phenomena, detect gravitational waves, test general relativity and search for dark matter.' back

www.naturaltheology.net is maintained by The Theology Company Proprietary Limited ACN 097 887 075 ABN 74 097 887 075 Copyright 2000-2020 © Jeffrey Nicholls

### stuff
4: Computers, quantum mechanics and special relativity

At the beginning of the twentieth century the fundamental invariants of the Universe were captured in the theories of quantum mechanics and relativity. Special relativity and quantum mechanics have since been combined in an uneasy coalition called quantum field theory, which has given us the Standard Model.

The Standard Model is unable to cope with gravitation. This difficulty suggests that the Standard Model does not yet map properly onto the Universe. Although some physicists consider that their theory brings them close to the mind of God, they could not be further from the truth. At best they have decoded elements of the alphabet the divine world uses to write its utterances.

Following the lead of physics we propose to make a logical and mathematical model of the Universe, a mathematical metaphysics. There are two reasons for this approach: first, mathematics has a much larger vocabulary than natural languages, so it is better suited to deal with something as large as the Universe; and second, because mathematics has a special private language it loses very little in translation, so mathematical metaphysics may transcend linguistic and cultural barriers.

Quantum mechanics models the fixed points of an action as the eigenfunctions of the operator we use to observe the action. Quantum mechanics has a persistent reputation for counter-intuitivity. This may be because it does look rather strange from the point of view of the billiard ball world of classical dynamics. On the other hand, quantum mechanics makes a lot of sense if we look at it in terms of communication. We model communication systems as computer networks. Let us therefore look for analogies between the Universe and computer networks.
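
A small numerical sketch makes the fixed point idea concrete: the eigenvectors of a Hermitian operator (here a toy 2 × 2 matrix) are exactly the directions the operator maps onto themselves, rescaled only by their eigenvalues.

    import numpy as np

    H = np.array([[2.0, 1.0],
                  [1.0, 2.0]])        # a toy Hermitian 'observable'

    eigenvalues, eigenvectors = np.linalg.eigh(H)

    for k in range(len(eigenvalues)):
        v = eigenvectors[:, k]
        # H v = lambda v: the direction v is a fixed point of the map.
        assert np.allclose(H @ v, eigenvalues[k] * v)

    print("eigenvalues:", eigenvalues)   # [1. 3.]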

1. Computers are digital, and so is the Universe

Although from the time of Aristotle the Universe has been considered a continuous system, careful observation shows that the smallest events in the Universe are discrete steps measured by Planck's quantum of action, and all larger processes are synthesized from these smallest steps. We also note that everything we can observe, from grains of dust to galaxies, is a clear and discrete object with a specific form, what Descartes might call a clear and distinct idea.

2. Both computers and the Universe are systems in motion

The universe is in perpetual motion, like the complex wave functions of quantum mechanics. The entropy of the universe is always increasing. Although abstract quantum mechanics is mathematically reversible, the quantum interactions of particles take place in the product spaces of the interacting particles, thereby increasing the complexity of the universe. The deepest layers of quantum mechanical behaviour are invisible to us. All these features and more can be explained by computer networks. To show this, we lay the mathematical foundations of computation and communication, beginning with their source, the mathematical community itself.
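
A one-line calculation shows how interaction enlarges the state space: the joint space of two systems is the tensor product of their spaces, so dimensions multiply rather than add (the numbers below are illustrative only).

    import numpy as np

    a = np.array([1.0, 0.0])    # a 2-state system (dimension 2)
    b = np.array([0.6, 0.8])    # another 2-state system

    joint = np.kron(a, b)       # the state in the product space
    print(joint.shape)          # (4,) -- 2 x 2 = 4 dimensions

    # Ten interacting 2-state systems already need 2**10 = 1024
    # amplitudes: complexity grows exponentially with interaction.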

###