
Essay 13: On modelling the world (2010)

[Submitted to the Australian Journal of Philosophy]

In the beginning God created the Heavens and the Earth.
(Genesis 1:1 [unknown authors, circa 1000 BCE])

0. Abstract

1. Introduction

2. A definite model: computer networks

3. The physical layer: some fixed points
  3.1 Aristotle’s unmoved mover
  3.2 The world of light
  3.3 The initial singularity
  3.4 The isolated quantum system
  3.5 The Turing machine
  3.6 Isomorphism

4. From isolation to communication
  4.1 The Word of God
  4.2 Quantum measurement

5. Quantum field theory and the computer network
  5.1 Why is the Universe quantized?
  5.2 Calculus and the quantum
  5.3 Action and the path integral
  5.4 Feynman diagrams

6. A transfinite computer network
  6.1 The Cantor Universe
  6.2 The transfinite symmetric universe
  6.3 Cantor symmetry
  6.4 Introducing computers into the Cantor universe
  6.5 Computation: quantum information theory
  6.6 Creative evolution

7. Some corollaries
  7.1 ‘The Unreasonable effectiveness of mathematics’
  7.2 The view from inside the Universe
  7.3 Network intelligence
  7.4 Past and future
  7.5 Entanglement and ‘spooky action at a distance’
  7.6 Numerical results

0 Abstract

Each of us follows a path through life. To follow a path, one needs navigation, and to navigate one needs both a model (map) of the space to be navigated and a method of locating oneself on the map. Traditionally, navigation in human space is studied by philosophers and theologians with very different world views. Thales - Wikipedia

Over the last few centuries, science has provided us with a quite comprehensive picture of our habitat that begins with the initial singularity predicted by general relativity and traces the evolution of the Universe to its present state.

To physicists, this work is an approach to a ‘theory of everything’. It provides a good foundation for practical technologies like computers, power grids, space vehicles and medicine, but has little to say about human spiritual issues. The purpose of this article is to extend the physical model into the spiritual realm, rather as Aristotle generated metaphysics out of physics. Aristotle: The Internet Classics Archive | Works by Aristotle

Traditionally the majority of philosophers and theologians have postulated a very real and sharp distinction between such categories of being as spiritual and material, living and non-living, intelligent and not-intelligent, eternal and ephemeral and so on.

Here I propose a model that embraces these and all other distinctions in the world in a single consistent picture.

1 Introduction

Physical cosmology has taught us that in terms of space and time we and our planet Earth are a small element of a very large Universe. Yet we are still inclined to believe that we are unique, rather overlooking the intelligence of the Universe that made us. Here we assume that creative intelligence is universal and attempt to understand how it works to create the Universe, ourselves, and our theories about ourselves. Universe - Wikipedia

This essay is based on a model of the Universe derived from the structure of the computer communication networks that are rapidly evolving round us. Some of the oldest questions in philosophy concern the relationship between motion and stillness. As far as we know, Parmenides started the ball rolling when a Goddess told him that true reality was full and eternal, the moving world being somehow less than fully real and an imperfect object of true knowledge. John Palmer (Stanford Encyclopedia of Philosophy): Parmenides

Truth may be defined as an isomorphism between two meaningful entities, often a sentence and reality: 'The truth of a sentence consists in its agreement with (or correspondence to) reality.' Parmenides is speaking of an eternal correspondence between eternal reality and eternal knowledge. Here we admit a time element to truth: 'he is standing on his head' remains true as long as he is standing on his head, but becomes false when he falls over. Alfred Tarski (1944): The semantic concept of truth and the foundation of semantics

We live in a dynamic Universe which nevertheless exhibits points that remain fixed for various lengths of time. Some, the subjects of science, may remain true for all time. Others may be temporary, but we may track changes by revising our sentences as fast as the reality to which they refer is changing. This idea is encapsulated in the mathematical sampling theorems, which tell us that as long as we sample a changing reality at more than twice its maximum frequency of change, our samples will be a true representation of the reality sampled. Nyquist-Shannon sampling theorem - Wikipedia
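To see why the factor of two matters, here is a small Python sketch (the frequencies and rates are our own illustrative choices, not from the text): sampled below the Nyquist rate, two different sine waves can produce exactly the same samples (aliasing), so the samples no longer represent the reality sampled; above it, they are always distinguishable.

import numpy as np

n = np.arange(16)                            # sample indices

slow = 4.0                                   # 4 samples/s: below 2 x 3 Hz
a = np.sin(2 * np.pi * 3 * n / slow)         # a 3 Hz sine, undersampled
b = -np.sin(2 * np.pi * 1 * n / slow)        # a phase-flipped 1 Hz sine
print(np.allclose(a, b))                     # True: the samples coincide

fast = 8.0                                   # 8 samples/s: above 2 x 3 Hz
a = np.sin(2 * np.pi * 3 * n / fast)
b = -np.sin(2 * np.pi * 1 * n / fast)
print(np.allclose(a, b))                     # False: now they differ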

Here we propose that a class of mathematical theorems known as fixed point theorems provide a relatively simple answer to the problematic relationship between motion and stillness. These theorems, pioneered by Brouwer, tell us that whenever a convex compact set is mapped continuously onto itself, at least one point remains unmoved. Jean van Heijenoort (1999): From Frege to Gödel: A Source Book in Mathematical Logic 1879 - 1931, John Casti (1996): Five Golden Rules: Great Theories of 20th-Century Mathematics - and Why They Matter

It is easy to see that the Universe might fulfill the hypotheses of these theorems. Convex means that it has no ‘holes’: one can get from any A to any B without having to go outside the Universe; compact means that it contains its own boundaries. By definition, there is nothing outside the Universe. Brouwer allows us to turn Parmenides around, seeing the dynamics as primary and the forms as fixed points within the motion. The consistency of a form is explained by the consistency of the motion that contains it, which is in turn constrained by the characteristics of the set being mapped onto itself. Brouwer fixed point theorem - Wikipedia
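In one dimension the theorem is easy to exhibit, although the general proof is non-constructive. A minimal sketch (the map cos x is an arbitrary example of a continuous map of [0, 1] into itself): since f(0) − 0 ≥ 0 and f(1) − 1 ≤ 0, bisection on f(x) − x locates a point that the map leaves unmoved.

import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-12):
    # invariant: f(lo) - lo >= 0 and f(hi) - hi <= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

f = math.cos                   # maps [0, 1] into [cos 1, 1], a subset of [0, 1]
x = fixed_point(f)
print(x, f(x))                 # both ~0.739085: the unmoved point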

This approach is consistent with the view that the Universe is pure act (in the sense used by Aquinas), and so divine. It is also consistent with the idea that we are inside rather than outside the divinity, so that the dynamics of the Universe is equivalent to the life of God. Thomas Aquinas, Summa, I, 2, 3: Does God exist?, Thomas Aquinas Summa Theologiae I, 18, 3: Is life properly attributed to God?

2 A definite model: computer networks

A network is a set of computers (like the internet) physically connected to enable the transmission of information. Although the processes in individual computers are deterministic, a network is not, since any computer may be interrupted at any point in its process and steered onto a different course by information received.

At its most abstract, a network comprises a set of addressed memories and a set of agents that can read from and write to those memories. The engineering of robust practical computer networks is a complex business. Andrew Tanenbaum (1996): Computer Networks

To deal with these complex problems, networks are constructed in layers. The lowest layer is the physical (or hardware) layer, the wires, transistors, etc necessary to physically realize formal logical operators and connect them together. The physical layer is necessary because all observable information is represented physically, as electric currents, ink on paper, sounds and so on. The operation of the physical layer is described by quantum mechanics, but it is designed to be quite deterministic, so that the error rate is low and can be controlled by the application of the mathematical theory of communication [5.1 below]. Rolf Landauer (1999): Information is a physical entity

At the opposite extreme is the user layer, people sending and receiving data through the network. In between these two layers are the various software layers that transform user input into a suitable form for physical transmission and vice versa.

Each subsequent software layer uses the routines of the layer beneath it as an alphabet of operations to achieve its ends. Extrapolating beyond the computer network, a human in the user layer may be a part of a corporate network, reporting through further layers of management to the board of his organization. By analogy to this layered hierarchy, we may consider the Universe as a whole to be the ultimate user of a universal network.

Corresponding layers (‘peers’) of two nodes in a network may communicate if they share a suitable protocol. All such communication uses the services of all layers between the peers and the physical layer. These services are generally invisible or transparent to the peers unless they fail. Thus in conversation we are almost completely unaware of the complex physiological, chemical and physical processes that make our communication possible.

3 The physical layer: some isolated fixed points

Using the network model, we interpret motion as the transmission and receipt of messages. Further, we accept Landauer’s hypothesis that all information is represented physically, so that there are no ‘pure spirits’. Everything is embodied. We begin our discussion with a set of isolated fixed points which have been defined in various historical contexts. We then note their identity before turning to the task of binding them into a communication network.

3.1 Aristotle’s ‘unmoved mover’

Aristotle supposed that nothing could move itself since he saw motion as the realization of a potential, and held it to be axiomatic that no potential could actualize itself. This led him to propose an unmoved mover ‘whose essence is actuality’ as the source of all motion in the world [Aristotle, Metaphysics 1071b3 sqq]. Unmoved mover - Wikipedia

Aristotle notes that ‘the object of desire and the object of thought move without being moved’ [1072a26]. The first mover moves things, not by pushing them, but by attracting them as a ‘final cause’.

3.2 The world of light

The fundamental axiom of special relativity is that all observers moving without acceleration (inertially) observe the same laws of physics. In particular, all inertial observers will measure the same velocity of light regardless of their state of motion.

A simple geometrical argument based on this observation leads to the Lorentz transformation. Some well known consequences of this transformation are that if I observe you going past me at a significant fraction of the velocity of light, your clocks will appear slow relative to mine, your rulers will appear foreshortened in the direction of motion and your mass will appear greater.
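A small numerical sketch of these consequences (the speed 0.8c is an arbitrary choice): all three effects are governed by the Lorentz factor γ = 1/√(1 − v²/c²).

import math

def gamma(v, c=1.0):
    # the Lorentz factor for speed v
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.8                    # speed as a fraction of c
g = gamma(v)               # 5/3 for v = 0.8c
print(f"gamma = {g:.4f}")
print(f"1 s on the moving clock stretches to {g:.4f} s for me")
print(f"a 1 m moving ruler contracts to {1 / g:.4f} m for me")
print(f"a 1 kg rest mass appears as {g:.4f} kg of relativistic mass")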

The mathematical apparatus of the Lorentz transformation was greatly simplified by Minkowski. In the Minkowski representation the space-time interval between events is the same regardless of the relative inertial motions of the observer and observed. Using this metric, we arrive at a property of photons and other particles travelling at the velocity of light: from the point of view of any observer, a clock on a photon appears to be stopped and the photon has zero length. Given these facts, we might imagine that from the point of view of an hypothetical observer, photons exist in an eternal world of zero size, another fixed point. The velocity of light is not special here. A similar situation arises whenever the speed of communication is equal to the speed of the entity with which one attempts to communicate. Streater & Wightman (2000): PCT, Spin, Statistics and All That, Minkowski space - Wikipedia

3.3 The initial singularity

Einstein built the theory of general relativity on the special theory. The general theory predicts that the Universe is either expanding or contracting. Observation suggests that the former is true, and we can extrapolate toward the past to a point known as the initial singularity where space-time as we experience it ceases to exist. This point, representing the beginning of the whole Universe, is effectively fixed and isolated since there is, by the definition of Universe, nothing outside it. Misner, Thorne & Wheeler (1973): Gravitation, Hawking & Ellis (1975): The Large Scale Structure of Space-Time

3.4 The isolated quantum system

Quantum mechanics falls naturally into two sections. The first describes the unitary evolution of state vectors in an isolated quantum system. The second describes what happens when such isolated systems communicate. Here we deal with the isolated system, turning to the theory of communication below.

Quantum theory is developed in a mathematical Hilbert space. Hilbert spaces are a species of function space, each point in the space representing a function or computation which transforms an input domain to an output range. We may consider this text as a function from the natural numbers to the English alphabet. The domain of this function is the natural numbers which we use to number and order the symbols in the text, beginning with 1 for the first symbol and n for the last. John von Neumann (2014): Mathematical Foundations of Quantum Mechanics

The value of this function at any location is the symbol appearing at that point. We can represent this text in an n dimensional space, each dimension corresponding to a symbol. We can use the same space to represent any n symbol text. Using this approach, each text is represented by a vector or point in n-symbol text space.
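As a toy version of this construction (our own illustration), the five symbol text ‘HELLO’ is a function from the positions 1 to 5 to the alphabet, and so a point in a five dimensional symbol space that can equally hold any other five symbol text.

text = "HELLO"
# the text as a function: position (natural number) -> symbol
vector = {position + 1: symbol for position, symbol in enumerate(text)}
print(vector)        # {1: 'H', 2: 'E', 3: 'L', 4: 'L', 5: 'O'}

# any other 5-symbol text is another point in the same 5-dimensional space
other = {position + 1: symbol for position, symbol in enumerate("WORLD")}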

Quantum states are represented by vectors in Hilbert space where the symbols are not letters of the alphabet but complex numbers. The evolution of isolated quantum systems is governed by the Schrödinger equation

iℏ ∂ψ / ∂τ = H ψ

where ψ is a state vector, H is the Hamiltonian operator representing the energy associated with each state vector, ℏ is the reduced Planck constant, τ is time and i is the symbol for the ‘imaginary’ basis vector of complex numbers. Schrödinger equation - Wikipedia

This equation describes a superposition of an infinity of different frequencies, analogous to the sound of an orchestra. It is a multidimensional version of the ‘Einstein-de Broglie’ equation, E = ℏω, which represents the fixed mechanical relation between energy E and angular frequency ω. Planck-Einstein relation - Wikipedia

Quantum processes are in perpetual motion as long as there is energy present. Given that there is nothing outside the Universe, such processes must map the Universe onto itself, and we would therefore expect fixed points. These fixed points are identified by the eigenvalue (special value) equation H ψ = c ψ, where c is an eigenvalue. This equation is satisfied by those eigenfunctions of the operator H which leave the phase of the state vector ψ unchanged although they may change the length of the vector and therefore the probability of the state it represents via the Born rule. Born rule - Wikipedia
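These fixed points can be exhibited numerically. A sketch (the operator below is a randomly generated 3 × 3 Hermitian matrix standing in for H, purely illustrative): each eigenvector is rescaled but not rotated by the operator, and under the Schrödinger equation it changes only by an overall phase, so the probabilities it defines through the Born rule never change.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                    # Hermitian, so its eigenvalues are real

energies, states = np.linalg.eigh(H)
psi = states[:, 0]                          # one eigenvector of H
print(np.allclose(H @ psi, energies[0] * psi))        # True: H psi = c psi

t = 2.7                                     # an arbitrary time (units with hbar = 1)
evolved = np.exp(-1j * energies[0] * t) * psi         # Schroedinger evolution
print(np.allclose(np.abs(evolved) ** 2, np.abs(psi) ** 2))   # True: fixed probabilities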

This representation is purely conjecture, since we cannot, by definition, observe an isolated system and gain information from it. Nevertheless, the quantum mechanics built on this idea works, making it practically credible.

3.5 The Turing machine

A Turing machine is a formal deterministic machine generally agreed to be able to compute all computable functions. It thus serves as a definition of ‘computable’. From a practical point of view, a computer is the physical embodiment of an algorithm or set of algorithms. There is a countable infinity of different algorithms. A machine which can be programmed to compute any of these algorithms is called a universal Turing machine. Alan Turing (1936): On Computable Numbers, with an application to the Entscheidungsproblem, Martin Davis (1982): Computability and Unsolvability

Any algorithm except the simplest can be constructed by a sequence of lesser algorithms (like the individual arithmetic operations in multiplication). Modern computers implement Russell’s idea that logic embraces the whole of computable (and communicable) mathematics, since in a binary digital computer all algorithms are ultimately broken down to a sequence of propositional functions. Further, all these operations can be modelled with a single operation known to logicians as the Sheffer Stroke and to computer people as the NAND gate. Whitehead & Russell (1910, 1962): Principia Mathematica, Elliott Mendelson (1997): Introduction to Mathematical Logic
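This reduction is easy to demonstrate. A minimal sketch in Python: the familiar logical connectives assembled from the NAND operation alone.

def NAND(a, b):
    return not (a and b)

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NAND(NAND(a, b), NAND(a, b))

def OR(a, b):
    return NAND(NAND(a, a), NAND(b, b))

def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (False, True):
    for b in (False, True):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b))   # full truth tables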

Part of the power of a digital computer lies in its ability to execute very simple logical functions repeatedly (periodically) at high frequency, and to build simple high frequency routines into longer complex routines which execute at a lower frequency. We may thus imagine a computation as a superposition of frequencies analogous to the superposition of frequencies we see in quantum systems.

Further, the eigenvalues selected by the eigenvalue equation correspond to eigenfunctions of the operator involved, which correspond in turn to algorithms or Turing machines. The similarity between quantum mechanics and computation was first noted by Feynman and has now become an important area of research. Richard Feynman (2007): Feynman Lectures on Computation, Nielsen & Chuang (2016): Quantum Computation and Quantum Information.

3.6 The isomorphism of these isolated points

Each of the systems outlined above contains an invisible isolated process: Aristotle’s god enjoys the pleasure of thinking about itself [1072b15] while remaining completely independent of the world; to communicate with an inertial system would be to exert a force upon it so that it was no longer inertial; the initial singularity is isolated since it contains the Universe outside which there is, by definition, nothing; according to the theory, isolated quantum systems cannot be observed without changing them; one cannot observe the internal state of a Turing machine without breaking into its process and so destroying its determinism.

Here we propose that the motions of these isolated systems are equivalent to Turing machines and therefore isomorphic to one another despite their differences in definition and their historical cultural roles. This proposition is based on the premise that the motion of a universal Turing machine embraces all computable functions, that is, all classically observable transformations.

4 From isolation to communication

4.1 The Word of God

In ancient times many cultures established a one to one correspondence between their Gods and different features of human existence like love, war, reproduction and so on. The Hebrews, on the contrary, became monotheists, attributing all these functions to one God, who was ultimately transformed into the Christian God.

One of the thorniest problems facing Christian theologians was reconciling Hebrew monotheism with the Trinity that the early writers had woven into the New Testament: the Father (reminiscent of the Old Testament God), the Son (who came to Earth as a man) and the Holy Spirit (who guides the evolution of Christianity).

The standard psychological model of the Trinity was first suggested by Augustine and developed by Aquinas (Augustine ad 400; Aquinas ad 1265). Their theories of knowledge require that the known exist in the knower, a representation often called a ‘mental word’ by analogy to the spoken word derived from it. Augustine of Hippo (419, 1991): The Trinity, Aquinas, Summa, I, 27, 1: Is there procession in God?

Aquinas saw this representation as accidental in human knowledge but essential in God. Thus he considered God’s knowledge of themself, the Word of God, equivalent to God, distinguished only by the relationships of paternity and sonship established by the procession of the Word from the Father. Bernard Lonergan (1997): Verbum: Word and Idea in Aquinas, Lonergan (2007): The Triune God: Systematics

From our point of view, the procession of the Son from the Father is equivalent to the creation of a minimal two unit network within an isolated system. Christianity, guided by its interpretation of the Biblical text, proposes that the Spirit is the substantial love between the Father and the Son, but stops the process at this point. Here we see this ancient model as representing the first steps in the creation of a transfinite network of independent but communicating agents which is isomorphic to the Universe as we know it.

4.2 Quantum measurement

We learn about isolated quantum systems by observing or measuring them. Mathematically, quantum mechanics represents a measurement as an operator called an observable.

In quantum mechanics the outcomes of measurement are restricted to eigenvalues corresponding to eigenfunctions of the observable. The mathematical ideas involved here are complex but, since we are dealing with communication and we are natural communicators, quantum measurement is easily explained in terms of human experience and communication theory.

Quantum mechanics tells us, in effect, that the things we see are the things we look for. From this point of view, the information available from quantum mechanics lies not in what we see, but in the frequency of appearance of the different things seen.

The measured observable S is the operator representing the system we are using to sense the unknown state ψ of the system to be measured. We can only observe elements of the set of the eigenfunctions {sk} of the operator S. No other vectors are visible, although we might assume that they exist at some point in the evolution of the measured system.

The outcome of the measurement is predicted by the Born rule:

pk = |⟨sk | ψ⟩|²

where pk is the probability of observing the kth eigenfunction. Born rule - Wikipedia (link above)

Perhaps the biggest surprise in quantum mechanics is that quantum measurement does not reveal the fixed value of a given set of parameters at a given time, like the position of Mars at time t.

Instead it yields a probability distribution, telling us that if the observed system is in state ψ and we observe with operator S, we will see one or other of a fixed set of results, each appearing with a certain probability. When we measure the spectrum of an atom, we find ‘lines’ of fixed frequency corresponding to various processes in the atom, and each line has a certain ‘weight’ which tells us how frequently the corresponding process occurs.

The sum of the probabilities of the possible outcomes of an observation must be 1, since it is certain that one and only one of them will appear at each measurement. This constraint is identical to the constraint on a communication source: the sum of the probabilities of emission of the various letters of the source alphabet must also be 1. This mathematical similarity leads us to consider that a sequence of quantum measurements is (at the level of letter frequencies) statistically identical to the output of a communication source. We can understand a quantum system as a communication source which speaks to us when we prompt it.
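This statistical identity is easy to simulate. A sketch (the three amplitudes are invented for illustration): the Born rule fixes the letter probabilities of the ‘source’, and repeated measurement reproduces them as observed frequencies.

import numpy as np

rng = np.random.default_rng(1)
psi = np.array([0.6, 0.48 + 0.36j, 0.52j])     # amplitudes <s_k|psi>, our invention
psi = psi / np.linalg.norm(psi)                # normalize so the p_k sum to 1

p = np.abs(psi) ** 2                           # Born rule: p_k = |<s_k|psi>|^2
outcomes = rng.choice(3, size=100_000, p=p)    # 'letters' emitted by the source
freq = np.bincount(outcomes) / len(outcomes)
print("Born probabilities: ", p.round(4))
print("observed frequencies:", freq.round(4))  # agree to about 1/sqrt(N)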

5 Quantum field theory and the computer network

The formalism of quantum mechanics makes no particular reference to ordinary physical space or time, but deals essentially with the motion of state vectors in Hilbert space. Historically, quantum mechanics was first applied to Newtonian space, but it soon became clear that a true description of the world must combine quantum mechanics with special relativity in Minkowski space. The result of this marriage is quantum field theory (QFT).

QFT is the foundation of the Standard Model which, although very successful, fails to comprehend gravitation and suffers from some logical and mathematical difficulties. Martinus Veltman (1994): Diagrammatica: The Path to the Feynman Rules

Perhaps the most counterintuitive feature of the Universe, from the point of view of the continuous mathematical models of classical physics, is that all physical observations are quantized or ‘digital’. Experimental physics revolves around classifying (‘binning’) and counting events. When we observe the output of physicists and mathematicians (‘the literature’) we see that it too is quantized, into discrete volumes, articles, words and symbols.

Many of the problems of QFT arise from the attempt to explain these quantized observations using continuous mathematics. Here we propose that these problems can be overcome by applying discrete mathematics, which embraces integral (Diophantine) arithmetic, logic and the theory of computation (Chaitin 1987). Gregory Chaitin (1987): Algorithmic Information Theory

5.1 Why is the Universe quantized?

Here we take the quantization of the Universe as evidence that it can be modelled as a communication system. Science proceeds by measurement. Shannon, who founded the mathematical theory of communication, saw that entropy can be used as a measure of information [Shannon 1948, Khinchin 1957]. The information carried by a point in any space is equivalent to the entropy of the space. Claude E Shannon (1948): A Mathematical Theory of Communication, Aleksandr Khinchin (1957): Mathematical Foundations of Information Theory

Entropy is simply a count, usually converted to a logarithm for ease of computation. In communication theory we imagine a message source A with an alphabet of letters ai, each emitted with probability pi. The sum of these probabilities is taken to be 1, meaning that at any moment the source is emitting one and only one letter. The entropy H of such a source is defined to be H = − Σi pi log2 pi. By using the logarithm to base 2 we measure entropy (and information) in bits.
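In code, this definition reads as follows (the sample distributions are our own):

import math

def entropy(probabilities):
    # H = -sum_i p_i log2 p_i, in bits
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))       # 1.0 bit: a fair coin
print(entropy([0.9, 0.1]))       # ~0.469 bits: a biased coin tells us less
print(entropy([0.25] * 4))       # 2.0 bits: four equiprobable letters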

The mathematical theory of communication is not concerned with the meaning of messages, only with the rate of error free transmission of strings of symbols from a certain source over a certain channel.

Given this measure of information, Shannon sought limits on the rate of error free communication over noiseless and noisy channels. The theory he developed is now well known and lies at the heart of communication engineering.

In essence, Shannon showed that by encoding messages into larger blocks or packets, these packets can be made so far apart in message space that the probability of confusing them (and so falling into error) approaches zero. This seems identical to the quantization observed wherever we look in the Universe. Everything is a packet, a tree, a planet or a grain of sand. Despite this manifest quantization many still maintain that the physical world is continuous.

For a given channel, Shannon’s theorems define a maximum rate of information transmission, C. A system that transmits without errors at the rate C is an ideal system. Features of an ideal system that are relevant here are listed below (a numerical sketch follows the list):

1. In order to avoid error, there must be no overlap between signals representing different messages. They must, in other words, be orthogonal, as with the eigenfunctions of a quantum mechanical basis. In other words, error free communication demands quantization of messages. Wojciech Hubert Zurek (2008): Quantum origin of quantum jumps: breaking of unitary symmetry induced by information transfer and the transition from quantum to classical

2. Such ‘basis signals’ may be chosen at random in the signal space, provided only that they are orthogonal. The same message may be encoded in any orthogonal basis provided that the transformations used by the transmitter and receiver to encode and decode the message are computable and modified accordingly.

3. The signals transmitted by an ideal system are indistinguishable from noise. The fact that a set of physical observations looks like a random sequence is not therefore evidence for meaninglessness. Until the algorithms used to encode and decode such a sequence are known, nothing can be said about its significance.

4. As the system approaches the ideal and the length of the transmitted signal increases, the delay at the transmitter while it takes in a chunk of message for encoding, and the corresponding delay at the receiver, increase indefinitely. The ideal rate C is only reached when packets comprise a countably infinite number of bits.

5. Only in the simplest cases are the mappings used to encode and decode messages linear and topological. For practical purposes, however, they must all be computable. In addition, in order to recover encoded messages, the computations used to encode messages must be invertible so that the decoded message is identical to the original.
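The following toy computation (our own, with made-up codebook sizes) illustrates points 1, 2 and 4: codewords chosen at random in binary message space move further apart as the block length grows, so the number of transmission errors needed to turn one codeword into another grows without limit.

import numpy as np

rng = np.random.default_rng(2)

for n in (8, 64, 512):                             # block length in bits
    codebook = rng.integers(0, 2, size=(16, n))    # 16 randomly chosen codewords
    dmin = min(np.sum(codebook[i] != codebook[j])
               for i in range(16) for j in range(i + 1, 16))
    print(f"n = {n:3d}: minimum Hamming distance = {dmin}  ({dmin / n:.2f} per bit)")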

5.2 Calculus and the quantum

Experience has shown that mathematical models are extraordinarily effective tools for predicting the behaviour of the physical world. The general approach to modelling a physical system is to create a ‘configuration space’ large enough to be placed into one to one correspondence with all possible physical states of the system and then to seek ‘laws’ or ‘symmetries’ (often expressed as equations) that constrain events in the configuration space to just those that are actually observed in the world. Eugene Wigner (1960): The Unreasonable Effectiveness of Mathematics in the Natural Sciences

We may date modern mathematical physics from Newton’s discovery of calculus which provides a means to discuss ephemeral states of motion using static mathematical symbolism to express the invariant features (stationary points) of a motion.

The configuration space for the Newtonian dynamics of the Solar system is a three dimensional Euclidean space and a universal time that applies equally to all points in the space. Newton’s fundamental insight is that acceleration (a) is proportional to force (F), expressed by the equation a = F/m, where the mass m is the constant of proportionality between force and acceleration.

Acceleration is the rate of change of velocity (v) with respect to time, in symbols a = dv/dt. Velocity itself is the rate of change of position (x) with respect to time: v = dx/dt. Putting these together we write a = d²x/dt² = F/m.

Such differential equations are statements of differential calculus, and enable us to calculate accelerations and forces from measurements of position. The moving system changes, the differential equation does not. Using the laws discovered by Kepler in the astronomical measurements of Brahe, Newton was able to arrive at his law of gravitation: the gravitational force acting between two heavenly bodies is proportional to the product of their masses divided by the square of the distance between them.

The inverse of differentiation is integration, which enables us to work in the opposite direction, from forces and accelerations to positions. Given the law of gravitation, integration enables us to predict the future positions of the planets given their current positions. Together differentiation and integration are the subject of calculus.
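As a sketch of this machinery (units chosen so that GM = 1, with our own circular-orbit starting conditions): integrating Newton’s inverse-square acceleration step by step carries a body around a full orbit and back to its starting point.

import numpy as np

GM, dt = 1.0, 1e-3

def acceleration(x):
    # Newton: a = F/m = -GM x / |x|^3
    return -GM * x / np.linalg.norm(x) ** 3

x = np.array([1.0, 0.0])          # initial position
v = np.array([0.0, 1.0])          # circular-orbit speed for r = 1
v += 0.5 * dt * acceleration(x)   # leapfrog integrator: initial half-kick
for _ in range(int(2 * np.pi / dt)):   # one orbital period
    x += dt * v
    v += dt * acceleration(x)
print(x)                          # close to the starting point [1, 0]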

Mathematically, calculus is closely related to continuity. Newton’s discoveries led to close reexamination of the problems of continuity reflected in Zeno’s paradoxes and in the ancient discovery that no rational number corresponds to certain geometric magnitudes like the diagonal of a unit square.

The remarkable success of Newton’s methods led most physicists and philosophers to assume that the physical Universe is continuous. Quantum mechanics was born, however, when Planck discovered that continuous mathematics cannot describe the relationship between ‘matter’ and ‘radiation’.

The development of quantum mechanics was rather slow and painful, since it was not clear how to apply the traditional mathematical methods of calculus to discrete quantum events. The eventual solution is Born’s rule, stated above (section 4.2).

The continuous equations of quantum mechanics do not describe the motions of particles in the Newtonian sense. They describe instead the probabilities of various discrete events. This raises the problem of reconciling the quantum mechanical and Newtonian descriptions of the world. A common route begins with Feynman’s path integral formulation. Path integral formulation - Wikipedia

5.3 Action and the path integral

Although Newton’s method of predicting the future positions of the planets looks simple in theory, the difficulty of applying it to systems of three or more bodies motivated the search for more powerful methods.

An important result of this search is Hamilton’s principle: that the world appears to optimize itself using the principle of stationary action. Hamilton defined action (S) as the time integral of the Lagrangian function (L). The Lagrangian is the difference between the kinetic T(t) and potential V(t) energy of a system expressed as functions of time: L = T − V; S = ∫ L dt. The action is said to be stationary when small variations in the path do not change the action. Hamilton's principle - Wikipedia

Hamilton’s principle has proven enormously fruitful, and serves as a bridge between classical and quantum mechanics. One way to understand this is Feynman’s path integral formulation of quantum mechanics. Path integral formulation - Wikipedia

A classical particle moving from a to b moves on a definite path through space and time which may be precisely computed using Hamilton’s principle. In contrast, Feynman assumed that all possible paths from a to b are equally probable, and then used Hamilton’s principle and the quantum mechanical principle of superposition to select a particular path.
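A toy check of this selection mechanism (a free particle of unit mass travelling from x = 0 at t = 0 to x = 1 at t = 1, values our own): every deformation of the straight-line path increases the action, and paths near the classical one differ from it only at second order in the deformation.

import numpy as np

t = np.linspace(0, 1, 1001)

def action(x):
    # S = integral of L dt with L = T - V = v^2 / 2 (V = 0 for a free particle)
    v = np.gradient(x, t)
    return np.sum(0.5 * v ** 2) * (t[1] - t[0])

straight = t                                    # the classical path
for eps in (0.0, 0.05, 0.2):
    path = straight + eps * np.sin(np.pi * t)   # same endpoints, deformed middle
    print(f"eps = {eps:4.2f}: S = {action(path):.4f}")
# eps = 0.00 gives the least action, 0.5; the increase goes as eps^2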

It is clear from the solution to the Schrödinger equation set down above that one quantum of action corresponds to one period of the state vector or, in our model, one elementary computation. The rate of rotation of a state vector is directly proportional to the energy associated with it so that the total number of revolutions made by the state vector associated with a particle following a certain path is exactly equivalent to the action associated with that path.

It is an axiom of quantum mechanics that the amplitude for an event which can occur in many independent and indistinguishable ways is obtained by adding the amplitudes for the individual events: ψ = Σiψi.

This is the superposition principle, as exhibited in the famous two slit experiment. Feynman added amplitudes for all possible indistinguishable paths to obtain the total amplitude for the event, an idea that originated with Dirac. Double-slit experiment - Wikipedia, P. A. M. Dirac (1933): The Lagrangian in Quantum Mechanics

Feynman’s principal contribution was to apply the principle of stationary action to this superposition. The path of stationary action is that path whose neighbours have almost the same action, so that the amplitudes add ‘constructively’ thus (according to Born’s rule) maximizing the probability that this is the path actually taken. Feynman’s approach thus provides a rational basis for Hamilton’s principle: the classical trajectory is the one whose neighbours all have almost the same action.

How does this look from the computational point of view? We regard each quantum of action as the physical manifestation of one logical operation. A ‘path’ then corresponds to an algorithm, an ordered sequence of logical operations.

The efficiency of such algorithms is measured by the number of operations necessary to go from the initial state a to the final state b. More efficient algorithms require fewer operations and closely resemble one another, so that the algorithms clustered around the most efficient algorithm are very close together; that is, their action is stationary under slight variations.

5.4 Feynman diagrams

The Feynman diagram is a natural complement to the path integral. The path integral gives equal weight to all possible paths between two quantum states and uses superposition and Hamilton’s principle to select the most probable path. A Feynman diagram is a representation of such a path, and the probability of a particular transition is calculated as a superposition of all the relevant Feynman diagrams.

A Feynman diagram represents a network in which the lines represent particles (messages) and the vertices or nodes where the lines meet represent interactions between the particles. The path integral approach assigns equal weight to all possible paths, and from that computes the probability of a particular transition. The Feynman diagram aggregates these weights into a larger structure that represents the fact that particles interact in every possible way: many path integrals contribute to one Feynman diagram.

Taking an even broader view, one may see the whole Universe as an aggregate of aggregates of Feynman diagrams, thus building up a universal network that embraces all physical interactions. In the network picture, we see particles as messages and vertices in the Feynman diagram as computers in the network whose inputs and outputs are messages. We can now turn from this intuitive approach to the Universe as a computer network to a more formal development of the idea.

6 A transfinite computer network

6.1 The Cantor Universe

The purpose of this section is to construct a logical phase space in which to model the behaviour of the universal network.

We see a network as a set of memory locations whose states can be changed by the users of the network. As in everyday networks and computers, every memory location has an address and content which may be read from and written to by a processor.

In a finite network, these addresses may be placed into correspondence with the natural numbers. To address the memory in an infinite network, we turn to the transfinite numbers invented by Georg Cantor (1898). Cantor (1897, 1955): Contributions to the Founding of the Theory of Transfinite Numbers

Cantor established the transfinite cardinal and ordinal numbers by Cantor’s theorem: given a set of any cardinal n, there exists a set with a greater cardinal. He proved this using his famous diagonal argument. A modern axiomatic proof relies on the axiom of the power set: given a set S, there exists the set P(S) of all subsets of S, called the power set of S. If the cardinal of S is n, the cardinal of P(S) is 2ⁿ. This is true for all finite and infinite values of n.

Using this theorem, Cantor constructed the formal ‘Cantor universe’ by transfinite recursion. The set N of natural numbers is said to be countably infinite. Cantor used the symbol ℵ0 to represent the cardinal of N. The power set of N, P(N), has the next greater cardinal ℵ1, and so on without end. Cantor hypothesized that ℵ1 is the cardinal of the continuum. Cohen later found that this continuum hypothesis is independent of the axioms of modern set theory. Cohen (1980): Set Theory and the Continuum Hypothesis
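Cantor’s power-set step can be checked in miniature (our own three element example):

from itertools import combinations

def power_set(s):
    # all subsets of s, of every size
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

S = {'a', 'b', 'c'}
P = power_set(S)
print(len(P))        # 8 = 2 ** 3, strictly greater than |S| = 3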

6.2 A transfinite symmetric universe

Here we construct a Cantor universe in a slightly different way, using permutations rather than subsets to generate greater cardinals. Permutation simply means rearrangement, so the triplet {a, b, c} may be arranged in six ways: abc, acb, bac, bca, cab, cba. In general n different things can be arranged in n × (n−1) × . . . × 2 × 1 different ways, called factorial n and written n!. The set of permutations of a number of objects forms a group called the symmetric group.
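The counting is easy to verify directly (a trivial sketch):

from itertools import permutations
from math import factorial

perms = list(permutations('abc'))
print(perms)                         # the six orderings listed above
print(len(perms) == factorial(3))    # True: 3! = 6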

We begin with the natural numbers, N. These numbers have a natural order, but we can also order them in any other way we like to give a set of permutations of N. We know that there are n! permutations of n objects and conjecture that there are ℵn+1 permutations of ℵn things. Every one of these permutations is formally different.

We take this structure as the configuration or memory space for a transfinite, symmetric network. The essential feature of this model is the fact that by ordering sets to create position significant numerals, we gain a huge increase in the complexity of the numbers that can be represented with a given stock of symbols. This fact, it seems, lies at the root of the creative power of the Universe.

Cantor made a very bold claim when he presented his transfinite cardinal and ordinal numbers to the world. He wrote:

The concept of "ordinal type" developed here, when it is transferred in like manner to "multiply ordered aggregates", embraces, in conjunction with the concept of "cardinal number" or "power" introduced in §1, everything capable of being numbered (Anzahlmassige) that is thinkable and in this sense cannot be further generalized. It contains nothing arbitrary, but is the natural extension of the concept of number. (Cantor 1898; 1955: 117)

We might call this structure a transfinite symmetric network. Transfinite because it is isomorphic to Cantor’s transfinite cardinal and ordinal numbers. Symmetric because it contains all possible permutation (= symmetric) groups which in turn contain all possible groups.

6.3 ‘Cantor symmetry’

Cantor [1955: 109], explaining the genesis of transfinite numbers, writes:

We shall show that the transfinite cardinal numbers can be arranged according to their magnitude, and, in this order, form, like the finite numbers, a “well ordered aggregate” in an extended sense of the words. Out of ℵ0 proceeds, by a definite law, the next greater cardinal number ℵ1, and out of this by the same law the next greater, ℵ2, and so on . . . without end.

This ‘definite law’ is indifferent to the size of the number upon which it operates to create the successor to that number. Let us call this definite law Cantor symmetry, or symmetry with respect to complexity.

In the symmetric universe as we have described it, the ‘definite law’ is the process of generating all the permutations of a given set of operations. The order of operations in the real world is often important (to make an omelette, beat the eggs then cook rather than cook then beat the eggs).

Such operations are ‘non-commutative’. One of the many features that distinguishes quantum from classical mechanics is non-commutative multiplication. Cantor symmetry provides us with an heuristic principle for understanding interactions at all levels of the Universe. At this level of abstraction, the gossip at a cocktail party and the inner workings of an atom share the same network features, and we may use one to throw light upon the other.

Cantor symmetry applies to quantum mechanics, so that our understanding of two state systems is essentially the same as our understanding of systems with a transfinite number of states.

6.4 Introducing computers into the Cantor universe

The cardinal of the set of Turing machines is the same as the cardinal of the set of natural numbers, so that a one-to-one correspondence may be established between them. We may imagine the set of natural numbers in their natural order, 1, 2, 3, . . . as being mapped to the set of Turing machines, perhaps in an order determined by the complexity of each machine measured by the length of its tape when its computation is complete.

Nineteenth century mathematicians were inclined to define a continuum as a set of densely packed points. This idea is inherently self contradictory, since a point is by definition an isolated element, so that no matter how closely we pack points they do not become continuous. Aristotle provided a better definition of continuity: for him entities were continuous if they shared ends in common. Aristotle (continuity): Physics: V, iii.

Computers communicate by sharing a memory, writing to and reading from the same location. This has the same structure as Aristotle's idea of continuity, since the computers form a unity through their shared memory. Although a permutation of natural numbers leaves the numbers discrete, merely changing their formal order, a permutation of computers requires the rearrangement of their connections. These connections determine the order in which information is processed as it is passed along the line of computers, which in turn determines the outcome of a computation. The situation is analogous to the coupling of links in a chain, each link sharing its structure with the link on either side.

6.5 Computation and quantum information theory

The transfinite symmetric universe is a 'Platonic' structure whose principal properties are:

1. Unlimited size, sufficient to be placed in one to one correspondence with any real system of events, no matter how complex. We measure the size of an event by the number of quanta of action involved in its execution. By this measure, the largest event is the life of the Universe itself, which contains all other events.

2. Layered, since each new level of complexity arises from permutations of the elements of the layer beneath it, giving a sequence of ever increasing complexity ℵ0, ℵ1, ℵ2 . . .

Our task now becomes to find the constraints on this space which enable us to map it to the Universe of experience. We may imagine a communication system as a set of computable functions that can be strung together (as in an ordinary computer) to transform the input of the channel to its output. Isolated Turing machines are deterministic. This determinism is broken, however, when processes communicate. Turing envisaged this situation when he described the oracle- or o-machine. An oracle machine, like any real computer, can stop and wait for input from an outside source or oracle.

In practical communication networks, processing may also be interrupted by a message which claims priority over the current process. A message entering a network using an acceptable protocol changes the state of the machine receiving it. Given communication delay (which may effectively cause space-like separation of machines) it is impossible to predict when a machine will be interrupted and what the effect of the interrupt may be. Uncertainty (ie unpredictability) thus enters a network, even if all the functions driving the network are deterministically computable.
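A sketch of this loss of determinism (the machines and messages are invented, and a random shuffle stands in for the unknowable network timing): each machine applies a fixed rule to every message it receives, yet the outcome varies from run to run because the order of arrival does, and the operations do not commute.

import random

def machine(state, message):
    # a perfectly deterministic rule for each message
    if message == 'double':
        return state * 2
    if message == 'increment':
        return state + 1

messages = ['double', 'increment', 'double', 'increment']
random.shuffle(messages)        # transmission delays decide the arrival order

state = 1
for m in messages:              # every step is deterministic
    state = machine(state, m)
print(messages, '->', state)    # the result ranges from 6 to 12, run to run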

This opens the way to understanding the probabilistic nature of quantum mechanics if we interpret quantum systems as networks. In this way we replace a deterministic continuous formalism carrying a probabilistic interpretation with a set of deterministic machines whose communications with one another produce a set of events whose occurrences are unpredictable.

The events themselves, however, fall into equivalence classes which are determined by the communication protocols in use. The number of possible protocols is restricted to the number of possible Turing machines, that is to a countable infinity. If we consider quantum eigenfunctions to be computable, this suggests that the number of eigenfunctions (ie observables) in the Universe is itself countably infinite.

One motivation for the development of quantum computers is the belief that they will be much more powerful than digital machines. This hope is based on two features of current quantum mechanics, continuity and superposition.

First, because quantum mechanical superpositions in Hilbert space have continuous complex amplitudes, it is felt that even a vector in two dimensional Hilbert space (a ‘qubit’) may encode an infinity of detail and therefore be equal to an infinite digital memory.

Second, because in an infinite dimensional Hilbert space each vector may be decomposed into a superposition of an infinity of other vectors, all of which are acted upon simultaneously by operators on the space, it is felt that quantum computations can be massively parallel, computing all the instances of an algorithm represented by each element of the superposition simultaneously.

To the contrary, we have the approach taken by algorithmic information theory, which measures the total information in a transformation by the length of the program required to implement the transformation (Chaitin 1987).

From this point of view, the equation x = y defined on the real numbers does not represent an infinite amount of information, but only the few bits represented by the symbolic expression ‘x = y’. This alerts us to the fact that the entropy we assign to a set depends on how we decide to count its elements (Chaitin 1975). From a computational point of view, the algorithmic measure seems most appropriate.

Further, from the point of view of communication theory, a continuum is the essence of confusion. In continuous mathematics, the real information is carried only by the singularities, eigenvalues and other fixed points we may find within or on the boundaries of a continuous set.

From this we conclude that one can neither compute a continuum, nor use a continuum to compute, except insofar as it can be broken into disjoint pieces. All computation modelled by the Turing machine must be digital or quantized, that is executed using discrete symbols, that is discrete physical entities.

Quantum mechanics makes sense if we look at it from the point of view of communication. It is an intuitively satisfactory point of view, since we are born natural communicators. It emerges from the above discussion that when we communicate with the world, even a quantum system, we are part of a network.

6.6 Creative evolution

Cosmological theory, supported by a wealth of observation, suggests that the Universe began as a ‘formless and void’ initial singularity. (Genesis; Hawking & Ellis 1975). We have a quite comprehensive picture of the development of the Universe within this singularity, but very little information about (i) why the initial singularity existed in the first place and (ii) what motivates its complexification. Teilhard de Chardin (1965): The Phenomenon of Man

There is really nothing to be said about (i), except to accept that we are here. The second question is more amenable to treatment. As Parmenides noted, we can only know dynamic systems through their stationary points. The stationary points in the transfinite computer network are the elements of the Cantor universe which we understand to be memory locations which carry information.

The ‘expansion’ of the Cantor universe is driven by Cantor’s theorem, which tells us that given any set, there exists a set of higher cardinal. The proof of Cantor’s theorem is non-constructive: it is proved by showing that its falsity implies a contradiction. Insofar as the Universe is consistent, it necessarily increases its cardinality. We take this ‘Cantor force’ to be the ultimate driver of universal expansion.

The theory of evolution is founded on the Malthusian observation that the exponential growth of reproducing organisms will eventually exhaust any resource supply, no matter how great. As a result the number of organisms has an upper limit and fitter organisms will survive at the expense of the less fit.

We note first that while there is only a countably infinite number of different Turing machines (computable functions), the phase space of the model is transfinitely bigger, so that we may see computation as a limiting resource in the Universe. This lays the foundation for an evolutionary scheme: many possibilities confront limited resources, which select for those possibilities that use the resources most efficiently to secure their own survival and replication.

Although the evolutionary paradigm was first developed in the biological realm, the layered network model suggests that it accounts for the selection of all structure in the Universe, from the most elementary particles in the physical layer through all user layers to the Universe as a whole.

7 Some corollaries

7.1 ‘The Unreasonable effectiveness of mathematics’

Eugene Wigner noted the ‘unreasonable effectiveness of mathematics’ in the physical sciences [5.2]. If there is any truth in the picture painted here, this fact may have a simple explanation. Mathematics is a consistent symbolic system devised by the mathematical community. We may think of the mathematical literature as a set of stationary points in the dynamics of this community.

More generally, stationary points (particles or messages) in the observed Universe also form a symbolic system whose consistency is guaranteed by the dynamic processes which contain them. Given that the symmetric network spans the whole space of consistent symbolic systems, it may not be surprising to find that mathematics is wonderfully effective as a descriptor of the world, as Galileo proposed.

7.2 The view from inside the Universe

The transfinite computer network is a dynamical model whose size is limited only by consistency. It was originally developed in a theological context as a candidate model of God. The idea is to map it onto the observed world as evidence for the thesis that the Universe is divine. Traditional models of God and the world suggest that they could not be more different. This difference is partly based on ‘proofs’ for the existence of God which prove, in effect, that God is other than the world.

In the picture proposed here, we see ourselves as part of the living God. As noted above, Christian theologians long ago developed models of God that allowed for a Trinity of personalities in God. Here we allow for a transfinity of personalities, where we mean by personality a communication source, any entity capable of sending and receiving messages, whether it be an atom or a God.

Our existence depends upon many layers of structure beginning with fundamental particles and moving through atoms, molecules and cells, each of which is a layered network within itself and connected to its peers through an ‘internet’.

Further we ourselves are elements of larger networks, families, tribes and nations. In all of these structures, the fundamental binding element is communication, and it is hard to imagine any feature of human experience that does not fall within the network paradigm.

7.3 Network intelligence

Only two sources may be involved in any particular communication within a network. But since each of these machines is connected to many other machines, what we see if we watch a given machine for long enough is not just the machine we are looking at, but a superposition of all the inputs from all the machines in the network, weighted by the probability of communication between them and the machine we are watching.

We may see this scenario as analogous to the structure of a biological neural network like our own central nervous systems, where memory in the network is encoded in the weights of synaptic junctions between the individual neurons in the network. We may see these weights as analogous to the amplitudes for the propagation of signals between the various vertices of a Feynman diagram.
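In arithmetic terms (a toy sketch with invented numbers): what a node ‘sees’ is the weighted sum of its inputs, which is also the elementary operation performed at a synapse.

import numpy as np

inputs = np.array([0.2, 0.9, 0.4, 0.7])    # signals from four neighbours
weights = np.array([0.5, 0.1, 0.3, 0.1])   # connection strengths, summing to 1

seen = weights @ inputs                    # the superposition at the node
print(seen)                                # 0.38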

There can be little doubt that the neural networking in our central nervous systems is the hardware layer of the human mind and the seat of our intelligence. In the network paradigm proposed here, the Universe as a whole is a network similar to our individual neural networks so that we may see the cosmic intelligence as isomorphic (up to a Cantor symmetry) with our microcosmic minds.

7.4 Past and future

Observable events lie on the boundary between past and future. Turing answered Hilbert’s Entscheidungsproblem by devising a machine that could perform anything that might reasonably be called a computation and then showing that there was a set of incomputable problems which such a machine could never complete.

In the context of Cantorian set theory, Turing showed that there is just a countable infinity of different Turing machines, capable of evaluating a countable infinity of computable functions. Since there is a transfinite number of possible functions mapping the set of natural numbers onto itself, computable functions represent at most a small fraction of possible functions.

Until a computer has halted, its state is uncertain. Once it is halted, its state becomes definite. In the network no computer can determine its inputs, and so its output, if any, is also indeterminate. As a consequence, the past is definite, and not subject to any uncertainty principle. The future, however, cannot be completely determined by the past. The network model therefore suggests that the boundary between past and future can be modelled by the boundary between halted and not-halted computations.

7.5 ‘Entanglement’ and ‘spooky action at a distance’

In a layered network, each layer provides an alphabet of processes or tools which are used by higher layers for their own purposes. Thus molecules use atoms, cells use molecules, people use computers, corporations use people and so on. If we assume that the fundamental hardware of the Universe is the initial singularity, peer processes in all the layers of complexity which have arisen from this singularity still use it as the ultimate physical layer for all their communications with one another. Einstein, Podolsky & Rosen (1935): Can the Quantum Mechanical Description of Physical Reality be Considered Complete?

In this picture, space-time as we experience it is an emergent property of the Universe, a layer built on the lower layer described by quantum mechanics. The structure of space-time is described by special relativity and depends upon the finite velocity of light.

As noted above, the algorithms used to defeat error delay the transmission and reception of messages. In order to encode a message of n symbols into a transmissible signal function, the encoding machine must wait for the source to emit all n symbols, even if the computation required for encoding is instantaneous. Since computation itself involves error-free communication within the computer, we can expect it too to add delay. We are therefore led to believe that communication delay and quantization are connected. Further, since the least delay corresponds to the maximum velocity, we propose that the algorithm used to encode information into signals that travel at the velocity of light is the fastest algorithm in the Universe.
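The delay imposed by encoding is easy to exhibit. In the following toy encoder (my own sketch; the parity symbol is a stand-in for any real error-control code) nothing can be emitted until a full block of n source symbols has been buffered:

    def block_encode(source, n):
        # Yields (symbols_seen, encoded_block); the encoder must wait for
        # n symbols before it can emit anything, however fast it computes.
        buffer = []
        for seen, symbol in enumerate(source, start=1):
            buffer.append(symbol)
            if len(buffer) == n:
                parity = sum(buffer) % 2   # stand-in for real error control
                yield seen, buffer + [parity]
                buffer = []

    message = [1, 0, 1, 1, 0, 0, 1, 0, 1]
    for seen, block in block_encode(message, n=3):
        print("block", block, "ready only after", seen, "source symbols")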

As ‘nature abhors a vacuum’, so physicists traditionally abhor ‘action at a distance’. This abhorrence has been the historical motivation for field theories and for the notion that the forces between systems are mediated by the exchange of particles.

The Einstein-Podolsky-Rosen thought experiment, since frequently realized, proposed that entangled particles could act upon one another even though their separation was spacelike, so that, if the correlation is due to communication, the communication would have to travel faster than light. Aspect, Grangier & Roger (1982): Experimental Realization of the Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities

We assume that the velocity of light is finite rather than infinite because of the delay, due to error-preventing encoding, in the transmission of a photon from point to point. Conversely, we might speculate that communications that cannot go wrong, that is, communications that effectively carry no information, might require no encoding and therefore travel at infinite velocity.

The observed correlations between entangled photons have been found to propagate at many times c, and the measurements are compatible with infinite velocity, that is, instantaneous transmission over any distance. Salart, Baas, Branciard, Gisin & Zbinden (2008): Testing the speed of 'spooky action at a distance'.

In a practical communication network, a message originating with user A is passed down through the software layers in A’s computer to the physical layer, which carries it to B’s machine. It is then passed up through B’s software layers until it reaches a form that B can read. By analogy, communication between one system and another in the Universe must pass down to the ultimate physical layer (which we might identify as the initial singularity) and then up again to the peer system receiving the message. It may be that the simplicity of the lower layers of this network makes encoding for error prevention unnecessary, so that instantaneous communication is possible. An alternative is that space-time does not exist in the quantum world, so that there is no distance and all communications occur by contact.
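The down-and-up passage of a message can be sketched schematically. The following fragment (my own illustration of the standard encapsulation idea in layered models such as Tanenbaum's; the layer names are arbitrary) wraps a message as it descends the sender's stack and unwraps it as it ascends the receiver's:

    LAYERS = ["application", "transport", "physical"]  # top to bottom

    def send(message):
        packet = message
        for layer in LAYERS:            # down the sender's stack: each layer wraps
            packet = layer + "[" + packet + "]"
        return packet                   # the form that crosses the shared medium

    def receive(packet):
        for layer in reversed(LAYERS):  # up the receiver's stack: each layer unwraps
            prefix = layer + "["
            assert packet.startswith(prefix) and packet.endswith("]")
            packet = packet[len(prefix):-1]
        return packet

    wire = send("hello")
    print(wire)               # physical[transport[application[hello]]]
    print(receive(wire))      # hello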

back to toc

7.6 Numerical results

The first successful quantum field theory, quantum electrodynamics, has enabled physicists to compute the outcomes of experiments with precision in the realm of parts per billion. Such precision compels one to believe that the theory is an accurate description of reality. Richard Feynman (1988): QED: The Strange Story of Light and Matter

Quantum mechanical measurements are essentially frequencies, that is, rational numbers expressing the ratio of the occurrences of two events. A probability is a prediction of a frequency. When we say that the probability of a tossed fair coin coming up heads is 1/2, we are predicting that if we toss the coin enough times, the proportion of heads will approach as closely as we like to 1/2.
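A simple simulation (my own sketch) shows the predicted frequency emerging as the number of tosses grows:

    import random

    # Toss a fair coin n times and report the proportion of heads.
    random.seed(1)
    for n in (10, 1000, 100_000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        print("n =", n, "proportion of heads =", heads / n)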

The measurement and computation of frequencies requires both definitions of the events to be compared (heads, tails) and a count of these events. The quantum mechanical eigenvalue equation embodies both these requirements: the eigenfunction which defines the event and the eigenvalue which measures its frequency.
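In the standard formalism the predicted frequency of each event is obtained by the Born rule from the overlap of the state with the corresponding eigenfunction. The following sketch (my own; any small Hermitian matrix would serve) computes the events and their predicted frequencies for a two-state system:

    import numpy as np

    # A Hermitian "observable": its eigenvectors define the events, its
    # eigenvalues label the possible outcomes.
    A = np.array([[1.0, 1.0],
                  [1.0, -1.0]])
    eigenvalues, eigenvectors = np.linalg.eigh(A)

    psi = np.array([1.0, 0.0])  # a normalized state to be measured
    for a, v in zip(eigenvalues, eigenvectors.T):
        p = abs(np.vdot(v, psi)) ** 2   # Born rule: predicted frequency
        print("outcome", round(a, 3), "predicted with frequency", round(p, 3))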

One must ask: if the continuum calculus methods of conventional quantum mechanics work so well, how are they to be reconciled with the digital approach suggested here? The matter clearly requires much further discussion, of a kind more appropriately published in physics journals, but an outline of the answer seems to be present in the layering of networks. Physics deals with the physical layer of the universal network, which may nevertheless itself comprise a number of layers, beginning with the totally simple, isolated and meaningless initial singularity, upon which are built quantum mechanics and space-time.

A foundation of quantum mechanics is the principle of superposition, which works because quantum mechanics is linear: when the amplitudes of two states are added, the resulting amplitude is simply the sum of the two, and the whole is neither greater nor less than the sum of its parts. This contrasts with more complex non-linear systems, for which the relationship between whole and parts is more intricate, as illustrated by the old adage of ‘the straw that breaks the camel’s back’: a minor input leading to a major effect. Because a linear system can be decomposed into and reconstructed from discrete parts without loss, continuous and digital mathematics are, in the quantum mechanical context, indistinguishable.
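The contrast is easy to verify numerically (again my own sketch): a linear map preserves superpositions exactly, while even the simplest non-linear map does not:

    import numpy as np

    theta = 0.7
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # a simple unitary evolution

    a = np.array([1.0, 0.0])
    b = np.array([1.0, 1.0])

    # Linear: evolving the whole equals summing the evolved parts.
    print(np.allclose(U @ (a + b), U @ a + U @ b))   # True

    # Non-linear: the whole is no longer the sum of its parts.
    square = lambda x: x ** 2
    print(np.allclose(square(a + b), square(a) + square(b)))   # False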

(Revised 2 February 2022)

Back to essays, toc

Copyright:

You may copy this material freely provided only that you quote fairly and provide a link (or reference) to your source.


Further reading

Books

Augustine (399-419, 1991), Saint, Edmond Hill (introduction, translation and notes) and John E Rotelle (editor), The Trinity, New City Press 1991. Written 399-419: De Trinitate is a radical restatement, defence and development of the Christian doctrine of the Trinity. Augustine's book has served as a foundation for most subsequent work, particularly that of Thomas Aquinas.
Amazon
  back

Cantor (1897, 1955), Georg, Contributions to the Founding of the Theory of Transfinite Numbers (translated, with introduction and notes, by Philip E B Jourdain), Dover 1955 (originally published 1895, 1897). Jacket: 'One of the greatest mathematical classics of all time, this work established a new field of mathematics which was to be of incalculable importance in topology, number theory, analysis, theory of functions, etc, as well as the entire field of modern logic.'
Amazon
  back

Casti (1996), John L, Five Golden Rules: Great Theories of 20th-Century Mathematics - and Why They Matter, John Wiley and Sons 1996 Preface: '[this book] is intended to tell the general reader about mathematics by showcasing five of the finest achievements of the mathematician's art in this [20th] century.' p ix. Treats the Minimax theorem (game theory), the Brouwer Fixed-Point theorem (topology), Morse's theorem (singularity theory), the Halting theorem (theory of computation) and the Simplex method (optimisation theory). 
Amazon
  back

Chaitin (1987), Gregory J, Algorithmic Information Theory, Cambridge UP 1987 Foreword: 'The crucial fact here is that there exist symbolic objects (i.e., texts) which are "algorithmically inexplicable", i.e., cannot be specified by any text shorter than themselves. Since texts of this sort have the properties associated with random sequences of classical probability theory, the theory of describability developed . . . in the present work yields a very interesting new view of the notion of randomness.' J T Schwartz 
Amazon
  back

Cohen (1980), Paul J, Set Theory and the Continuum Hypothesis, Benjamin/Cummings 1966-1980 Preface: 'The notes that follow are based on a course given at Harvard University, Spring 1965. The main objective was to give the proof of the independence of the continuum hypothesis [from the Zermelo-Fraenkel axioms for set theory with the axiom of choice included]. To keep the course as self contained as possible we included background materials in logic and axiomatic set theory as well as an account of Gödel's proof of the consistency of the continuum hypothesis. . . .'  
Amazon
  back

Davis (1982), Martin, Computability and Unsolvability, Dover 1982 Preface: 'This book is an introduction to the theory of computability and non-computability usually referred to as the theory of recursive functions. The subject is concerned with the existence of purely mechanical procedures for solving problems. . . . The existence of absolutely unsolvable problems and the Goedel incompleteness theorem are among the results in the theory of computability that have philosophical significance.'
Amazon
  back

Feynman (1988), Richard, QED: The Strange Story of Light and Matter, Princeton UP 1988 Jacket: 'Quantum electrodynamics - or QED for short - is the 'strange theory' that explains how light and electrons interact. Thanks to Richard Feynman and his colleagues, it is also one of the rare parts of physics that is known for sure, a theory that has stood the test of time. . . . In this beautifully lucid set of lectures he provides a definitive introduction to QED.'
Amazon
  back

Feynman (2007), Richard, Feynman Lectures on Computation, Perseus Publishing 2007 Amazon book description: 'The famous physicist's timeless lectures on the promise and limitations of computers. When, in 1984-86, Richard P. Feynman gave his famous course on computation at the California Institute of Technology, he asked Tony Hey to adapt his lecture notes into a book. Although led by Feynman, the course also featured, as occasional guest speakers, some of the most brilliant men in science at that time, including Marvin Minsky, Charles Bennett, and John Hopfield. Although the lectures are now thirteen years old, most of the material is timeless and presents a "Feynmanesque" overview of many standard and some not-so-standard topics in computer science such as reversible logic gates and quantum computers.'
Amazon
  back

Hawking (1975), Stephen W, and G F R Ellis, The Large Scale Structure of Space-Time, Cambridge UP 1975 Preface: 'Einstein's General Theory of Relativity . . . leads to two remarkable predictions about the universe: first that the final fate of massive stars is to collapse behind an event horizon to form a "black hole" which will contain a singularity; and secondly that there is a singularity in our past which constitutes, in some sense, a beginning to our universe. Our discussion is principally aimed at developing these two results.'
Amazon
  back

Khinchin (1957), Aleksandr Yakovlevich, Mathematical Foundations of Information Theory (translated by P A Silvermann and M D Friedman), Dover 1957 Jacket: 'The first comprehensive introduction to information theory, this book places the work begun by Shannon and continued by McMillan, Feinstein and Khinchin on a rigorous mathematical basis. For the first time, mathematicians, statisticians, physicists, cyberneticists and communications engineers are offered a lucid, comprehensive introduction to this rapidly growing field.' 
Amazon
  back

Lonergan (1997), Bernard J F, and Robert M. Doran, Frederick E. Crowe (eds), Verbum: Word and Idea in Aquinas (Collected Works of Bernard Lonergan volume 2), University of Toronto Press 1997 Jacket: 'Verbum is a product of Lonergan's eleven years of study of the thought of Thomas Aquinas. The work is considered by many to be a breakthrough in the history of Lonergan's theology . . .. Here he interprets aspects in the writing of Aquinas relevant to trinitarian theory and, as in most of Lonergan's work, one of the principal aims is to assist the reader in the search to understand the workings of the human mind.' 
Amazon
  back

Lonergan (2007), Bernard J F, and Michael G Shields (translator), Robert M Doran & H Daniel Monsour (editors), The Triune God: Systematics, University of Toronto Press 2007 'De Deo trino, or The Triune God, is the third great instalment on one particular strand in trinitarian theology, namely, the tradition that appeals to a psychological analogy for understanding trinitarian processions and relations. The analogy dates back to St Augustine but was significantly developed by St Thomas Aquinas. Lonergan advances it to a new level of sophistication by rooting it in his own highly nuanced cognitional theory and in his early position on decision and love. . . . This is truly one of the great masterpieces in the history of systematic theology, perhaps even the greatest of all time.'
Amazon
  back

Mendelson (1997), Elliott, Introduction to Mathematical Logic, Chapman & Hall/CRC 1997 Amazon product description: 'The Fourth Edition of this long-established text retains all the key features of the previous editions, covering the basic topics of a solid first course in mathematical logic. This edition includes an extensive appendix on second-order logic, a section on set theory with urelements, and a section on the logic that results when we allow models with empty domains. The text contains numerous exercises and an appendix furnishes answers to many of them. Introduction to Mathematical Logic includes: propositional logic; first-order logic; first-order number theory and the incompleteness and undecidability theorems of Gödel, Rosser, Church, and Tarski; axiomatic set theory; theory of computability. The study of mathematical logic, axiomatic set theory, and computability theory provides an understanding of the fundamental assumptions and proof techniques that form the basis of mathematics. Logic and computability theory have also become indispensable tools in theoretical computer science, including artificial intelligence. Introduction to Mathematical Logic covers these topics in a clear, reader-friendly style that will be valued by anyone working in computer science as well as lecturers and researchers in mathematics, philosophy, and related fields.'
Amazon
  back

Misner (1973), Charles W, and Kip S Thorne, John Archibald Wheeler, Gravitation, Freeman 1973 Jacket: 'Einstein's description of gravitation as curvature of spacetime led directly to that greatest of all predictions of his theory, that the universe itself is dynamic. Physics still has far to go to come to terms with this amazing fact and what it means for man and his relation to the universe. John Archibald Wheeler. . . . this is a book on Einstein's theory of gravity. . . . ' 
Amazon
  back

Nielsen (2016), Michael A, and Isaac L Chuang, Quantum Computation and Quantum Information, Cambridge University Press 2016 Review: 'A rigorous, comprehensive text on quantum information is timely. The study of quantum information and computation represents a particularly direct route to understanding quantum mechanics. Unlike the traditional route to quantum mechanics via Schroedinger's equation and the hydrogen atom, the study of quantum information requires no calculus, merely a knowledge of complex numbers and matrix multiplication. In addition, quantum information processing gives direct access to the traditionally advanced topics of measurement of quantum systems and decoherence.' Seth Lloyd, Department of Quantum Mechanical Engineering, MIT, Nature 6876: vol 416 page 19, 7 March 2002.
Amazon
  back

Streater (2000), Raymond F, and Arthur S Wightman, PCT, Spin, Statistics and All That, Princeton University Press 2000 Amazon product description: 'PCT, Spin and Statistics, and All That is the classic summary of and introduction to the achievements of Axiomatic Quantum Field Theory. This theory gives precise mathematical responses to questions like: What is a quantized field? What are the physically indispensable attributes of a quantized field? Furthermore, Axiomatic Field Theory shows that a number of physically important predictions of quantum field theory are mathematical consequences of the axioms. Here Raymond Streater and Arthur Wightman treat only results that can be rigorously proved, and these are presented in an elegant style that makes them available to a broad range of physics and theoretical mathematics.' 
Amazon
  back

Tanenbaum (1996), Andrew S, Computer Networks, Prentice Hall International 1996 Preface: 'The key to designing a computer network was first enunciated by Julius Caesar: Divide and Conquer. The idea is to design a network as a sequence of layers, or abstract machines, each one based upon the previous one. . . . This book uses a model in which networks are divided into seven layers. The structure of the book follows the structure of the model to a considerable extent.'  
Amazon
  back

Teilhard de Chardin (1965), Pierre, The Phenomenon of Man, Collins 1965 Sir Julian Huxley, Introduction: 'We, mankind, contain the possibilities of the earth's immense future, and can realise more and more of them on condition that we increase our knowledge and our love. That, it seems to me, is the distillation of the Phenomenon of Man.'  
Amazon
  back

van Heijenoort (1999), Jean, From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931, iUniverse.com 1999 Amazon book description: 'Collected here in one volume are some thirty-six high quality translations into English of the most important foreign-language works in mathematical logic, as well as articles and letters by Whitehead, Russell, Norbert Wiener and Post . . . This book is, in effect, the record of an important chapter in the history of thought. No serious student of logic or foundations of mathematics will want to be without it.'
Amazon
  back

Veltman (1994), Martinus, Diagrammatica: The Path to the Feynman Rules, Cambridge University Press 1994 Jacket: 'This book provides an easily accessible introduction to quantum field theory via Feynman rules and calculations in particle physics. The aim is to make clear what the physical foundations of present-day field theory are, to clarify the physical content of Feynman rules, and to outline their domain of applicability. ... The book includes valuable appendices that review some essential mathematics, including complex spaces, matrices, the CBH equation, traces and dimensional regularization. ...' 
Amazon
  back

Whitehead (1910, 1962), Alfred North, and Bertrand Arthur Russell, Principia Mathematica (Cambridge Mathematical Library), Cambridge University Press 1910, 1962 The great three-volume Principia Mathematica is deservedly the most famous work ever written on the foundations of mathematics. Its aim is to deduce all the fundamental propositions of logic and mathematics from a small number of logical premisses and primitive ideas, and so to prove that mathematics is a development of logic. Not long after it was published, Goedel showed that the project could not completely succeed, but that in any system, such as arithmetic, there were true propositions that could not be proved.  
Amazon
  back

Links

Alan Turing (1936), On Computable Numbers, with an application to the Entscheidungsproblem, 'The "computable" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by some finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost equally easy to define and investigate computable functions of an integral variable of a real or computable variable, computable predicates and so forth. . . . ' back

Alfred Tarski (1944), The semantic concept of truth and the foundation of semantics, Philosophy and Phenomenological Research 4 (1944)., Originally published in Philosophy and Phenomenological Research 4(1944). 'Our discussion will be centered around the notion of truth. The main problem is that of giving a satisfactory definition of this notion, i.e. a definition that is materially adequate and formally correct. . . . ' back

Aquinas, Summa, I, 27, 1, Is there procession in God?, 'As God is above all things, we should understand what is said of God, not according to the mode of the lowest creatures, namely bodies, but from the similitude of the highest creatures, the intellectual substances; while even the similitudes derived from these fall short in the representation of divine objects. Procession, therefore, is not to be understood from what it is in bodies, either according to local movement or by way of a cause proceeding forth to its exterior effect, as, for instance, like heat from the agent to the thing made hot. Rather it is to be understood by way of an intelligible emanation, for example, of the intelligible word which proceeds from the speaker, yet remains in him. In that sense the Catholic Faith understands procession as existing in God.' back

Aristotle, The Internet Classics Archive | Works by Aristotle, A comprehensive database of Aristotle's works. back

Aristotle (continuity), Physics V, iii, 'A thing that is in succession and touches is 'contiguous'. The 'continuous' is a subdivision of the contiguous: things are called continuous when the touching limits of each become one and the same and are, as the word implies, contained in each other: continuity is impossible if these extremities are two. This definition makes it plain that continuity belongs to things that naturally in virtue of their mutual contact form a unity. And in whatever way that which holds them together is one, so too will the whole be one, e.g. by a rivet or glue or contact or organic union. ' 227a10 sqq back

Aspect, Grangier & Roger (1982), Experimental Realization of the Einstein-Podolsky-Rosen-Bohm Gedankenexperiment: A New Violation of Bell's Inequalities, 'The linear-polarization correlation of pairs of photons emitted in a radiative cascade of calcium has been measured. The new experimental scheme, using two-channel polarizers (i.e., optical analogues of Stern-Gerlach filters), is a straightforward transposition of Einstein-Podolsky-Rosen-Bohm gedankenexperiment. The present results, in excellent agreement with the quantum mechanical predictions, lead to the greatest violation of generalized Bell's inequalities ever achieved.' back

Born rule - Wikipedia, Born rule - Wikipedia, the free encyclopedia, 'The Born rule (also called the Born law, Born's rule, or Born's law) is a law of quantum mechanics which gives the probability that a measurement on a quantum system will yield a given result. It is named after its originator, the physicist Max Born. The Born rule is one of the key principles of the Copenhagen interpretation of quantum mechanics. There have been many attempts to derive the Born rule from the other assumptions of quantum mechanics, with inconclusive results. . . . The Born rule states that if an observable corresponding to a Hermitian operator A with discrete spectrum is measured in a system with normalized wave function (see bra-ket notation), then the measured result will be one of the eigenvalues λ of A, and the probability of measuring a given eigenvalue λi will equal <ψ|Pi|ψ> where Pi is the projection onto the eigenspace of A corresponding to λi'. back

Brouwer fixed point theorem - Wikipedia, Brouwer fixed point theorem - Wikipedia, the free encyclopedia, ' Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after Luitzen Brouwer. It states that for any continuous function f with certain properties there is a point x0 such that f(x0) = x0. The simplest form of Brouwer's theorem is for continuous functions f from a disk D to itself. A more general form is for continuous functions from a convex compact subset K of Euclidean space to itself.' back

Claude E Shannon (1948), A Mathematical Theory of Communication, ' The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.' back

Double-slit experiment - Wikipedia, Double-slit experiment - Wikipedia, the free encyclopedia, 'In the double-slit experiment, light is shone at a solid thin plate that has two slits cut into it. A photographic plate is set up to record what comes through those slits. One or the other slit may be open, or both may be open. . . . The most baffling part of this experiment comes when only one photon at a time is fired at the barrier with both slits open. The pattern of interference remains the same as can be seen if many photons are emitted one at a time and recorded on the same sheet of photographic film. The clear implication is that something with a wavelike nature passes simultaneously through both slits and interferes with itself — even though there is only one photon present. (The experiment works with electrons, atoms, and even some molecules too.)' back

Einstein, Podolsky & Rosen (1935), Can the Quantum Mechanical Description of Physical Reality be Considered Complete?, A PDF of the classic paper. 'In a complete theory there is an element corresponding to each element of reality. A sufficient condition for the reality of a physical quantity is the possibility of predicting it with certainty, without disturbing the system. In quantum mechanics in the case of two physical quantities described by non-commuting operators, the knowledge of one precludes the knowledge of the other. Then either (1) the description of reality given by the wave function in quantum mechanics is not complete or (2) these two quantities cannot have simultaneous reality. Consideration of the problem of making predictions concerning a system on the basis of measurements made on another system that had previously interacted with it leads to the result that if (1) is false then (2) is also false. One is thus led to conclude that the description of reality given by the wave function is not complete.' back

Eugene Wigner (1960), The Unreasonable Effectiveness of Mathematics in the Natural Sciences, 'The first point is that the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it. Second, it is just this uncanny usefulness of mathematical concepts that raises the question of the uniqueness of our physical theories.' back

Hamilton's principle - Wikipedia, Hamilton's principle - Wikipedia, the free encyclopedia, 'In physics, Hamilton's principle is William Rowan Hamilton's formulation of the principle of stationary action . . . It states that the dynamics of a physical system is determined by a variational problem for a functional based on a single function, the Lagrangian, which contains all physical information concerning the system and the forces acting on it.' back

Harry Nyquist, Certain Topics in Telegraph Transmission Theory, 'Synopsis—The most obvious method for determining the distortion of telegraph signals is to calculate the transients of the telegraph system. This method has been treated by various writers, and solutions are available for telegraph lines with simple terminal conditions. . . . The present paper attacks the same problem from the alternative standpoint of the steady-state characteristics of the system. This method has the advantage over the method of transients that the complication of the circuit which results from the use of terminal apparatus does not complicate the calculations materially.' back

John Palmer (Stanford Encyclopedia of Philosophy), Parmenides, ' Immediately after welcoming Parmenides to her abode, the goddess describes as follows the content of the revelation he is about to receive:
You must needs learn all things,/ both the unshaken heart of well-rounded reality/ and the notions of mortals, in which there is no genuine trustworthiness./ Nonetheless these things too will you learn, how what they resolved/ had actually to be, all through all pervading. (Fr. 1.28b-32) ' back

John von Neumann (2014), Mathematical Foundations of Quantum Mechanics, Mathematical Foundations of Quantum Mechanics by John von Neumann, translated from the German by Robert T. Beyer (New Edition), edited by Nicholas A. Wheeler, Princeton UP, Princeton & Oxford. Preface: 'This book is the realization of my long-held intention to someday use the resources of TEX to produce a more easily read version of Robert T. Beyer's authorized English translation (Princeton University Press, 1955) of John von Neumann's classic Mathematische Grundlagen der Quantenmechanik (Springer, 1932).' back

Minkowski space - Wikipedia, Minkowski space - Wikipedia, the free encyclopedia, ' In mathematical physics, Minkowski space or Minkowski spacetime is a combination of Euclidean space and time into a four-dimensional manifold where the spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Although initially developed by mathematician Hermann Minkowski for Maxwell's equations of electromagnetism, the mathematical structure of Minkowski spacetime was shown to be an immediate consequence of the postulates of special relativity.' back

Nyquist-Shannon sampling theorem - Wikipedia, Nyquist-Shannon sampling theorem - Wikipedia, the free encyclopedia, ' In the field of digital signal processing, the sampling theorem is a fundamental bridge between continuous-time signals (often called "analog signals") and discrete-time signals (often called "digital signals"). It establishes a sufficient condition for a sample rate that permits a discrete sequence of samples to capture all the information from a continuous-time signal of finite bandwidth.' back

P. A. M. Dirac (1933), The Lagrangian in Quantum Mechanics, ' . . . there is an alternative formulation [to the Hamiltonian] in classical dynamics, provided by the Lagrangian. This requires one to work in terms of coordinates and velocities instead of coordinates and momenta. The two formulations are closely related, but there are reasons for believing that the Lagrangian one is more fundamental. . . . Secondly, the Lagrangian method can easily be expressed relativistically, on account of the action function being a relativistic invariant; . . . ' [This article was first published in Physikalische Zeitschrift der Sowjetunion, Band 3, Heft 1 (1933), pp. 64-72.] back

Path integral formulation - Wikipedia, Path integral formulation - Wikipedia, the free encyclopedia, 'The path integral formulation of quantum mechanics is a description of quantum theory which generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique trajectory for a system with a sum, or functional integral, over an infinity of possible trajectories to compute a quantum amplitude. . . . This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s which unified quantum field theory with statistical mechanics. . . . ' back

Planck-Einstein relation - Wikipedia, Planck-Einstein relation - Wikipedia, the free encyclopedia, 'The Planck–Einstein relation. . . refers to a formula integral to quantum mechanics, which states that the energy of a photon (E) is proportional to its frequency (ν). E = hν. The constant of proportionality, h, is known as the Planck constant.' back

Rolf Landauer (1999), Information is a Physical Entity, 'Abstract: This paper, associated with a broader conference talk on the fundamental physical limits of information handling, emphasizes the aspects still least appreciated. Information is not an abstract entity but exists only through a physical representation, thus tying it to all the restrictions and possibilities of our real physical universe. The mathematician's vision of an unlimited sequence of totally reliable operations is unlikely to be implementable in this real universe. Speculative remarks about the possible impact of that on the ultimate nature of the laws of physics are included.' back

Salart, Baas, Branciard, Gisin & Zbinden (2008), Testing the speed of 'spooky action at a distance', ' Correlations are generally described by one of two mechanisms: either a first event influences a second one by sending information encoded in bosons or other physical carriers, or the correlated events have some common causes in their shared history. Quantum physics predicts an entirely different kind of cause for some correlations, named entanglement. This reveals itself in correlations that violate Bell inequalities (implying that they cannot be described by common causes) between space-like separated events (implying that they cannot be described by classical communication). Many Bell tests have been performed, and loopholes related to locality and detection have been closed in several independent experiments. It is still possible that a first event could influence a second, but the speed of this hypothetical influence (Einstein's 'spooky action at a distance') would need to be defined in some universal privileged reference frame and be greater than the speed of light. Here we put stringent experimental bounds on the speed of all such hypothetical influences. We performed a Bell test over more than 24 hours between two villages separated by 18 km and approximately east-west oriented, with the source located precisely in the middle. We continuously observed two-photon interferences well above the Bell inequality threshold. Taking advantage of the Earth's rotation, the configuration of our experiment allowed us to determine, for any hypothetically privileged frame, a lower bound for the speed of the influence. For example, if such a privileged reference frame exists and is such that the Earth's speed in this frame is less than 10(-3) times that of the speed of light, then the speed of the influence would have to exceed that of light by at least four orders of magnitude.' back

Schrödinger equation - Wikipedia, Schrödinger equation - Wikipedia, the free encyclopedia, ' In quantum mechanics, the Schrödinger equation is a partial differential equation that describes how the quantum state of a quantum system changes with time. It was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger. . . . In classical mechanics Newton's second law, (F = ma), is used to mathematically predict what a given system will do at any time after a known initial condition. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized). It is not a simple algebraic equation, but in general a linear partial differential equation, describing the time-evolution of the system's wave function (also called a "state function").' back

Schroedinger picture - Wikipedia, Schroedinger picture - Wikipedia, the free encyclopedia, 'In physics, the Schrödinger picture (also called the Schrödinger representation) is a formulation of quantum mechanics in which the state vectors evolve in time, but the operators (observables and others) are constant with respect to time.' back

Thales - Wikipedia, Thales - Wikipedia, the free encyclopedia, 'Thales of Miletus (Greek: Θαλῆς (ὁ Μιλήσιος), Thalēs; c. 624 – c. 546 BC) was a pre-Socratic Greek philosopher from Miletus in Asia Minor, and one of the Seven Sages of Greece. Many, most notably Aristotle, regard him as the first philosopher in the Greek tradition. Aristotle reported Thales' hypothesis about the nature of matter – that the originating principle of nature was a single material substance: water.
According to Bertrand Russell, "Western philosophy begins with Thales." Thales attempted to explain natural phenomena without reference to mythology and was tremendously influential in this respect.' back

Thomas Aquinas, Summa Theologiae, I, 18, 3, Is life properly attributed to God?, 'Life is in the highest degree properly in God. In proof of which it must be considered that since a thing is said to live in so far as it operates of itself and not as moved by another, the more perfectly this power is found in anything, the more perfect is the life of that thing.' back

Thomas Aquinas, Summa, I, 2, 3, Does God exist?, 'I answer that, The existence of God can be proved in five ways. The first and more manifest way is the argument from motion. . . . ' back

Universe - Wikipedia, Universe - Wikipedia, the free encyclopedia, 'The Universe is all of spacetime and everything that exists therein, including all planets, stars, galaxies, the contents of intergalactic space, the smallest subatomic particles, and all matter and energy. Similar terms include the cosmos, the world, reality, and nature. The observable universe is about 46 billion light years in radius.' back

Unmoved mover - Wikipedia, Unmoved mover - Wikipedia, the free encyclopedia, 'The unmoved mover (Ancient Greek: ὃ οὐ κινούμενον κινεῖ, romanized: ho ou kinoúmenon kineî, lit. 'that which moves without being moved') or prime mover (Latin: primum movens) is a concept advanced by Aristotle as a primary cause (or first uncaused cause) or "mover" of all the motion in the universe. As is implicit in the name, the unmoved mover moves other things, but is not itself moved by any prior action. In Book 12 (Greek: Λ) of his Metaphysics, Aristotle describes the unmoved mover as being perfectly beautiful, indivisible, and contemplating only the perfect contemplation: self-contemplation. He equates this concept also with the active intellect. This Aristotelian concept had its roots in cosmological speculations of the earliest Greek pre-Socratic philosophers and became highly influential and widely drawn upon in medieval philosophy and theology. St. Thomas Aquinas, for example, elaborated on the unmoved mover in the Quinque viae.' back

Wojciech Hubert Zurek (2008), Quantum origin of quantum jumps: breaking of unitary symmetry induced by information transfer and the transition from quantum to classical, (Submitted on 17 Mar 2007 (v1), last revised 18 Mar 2008 (this version, v3)) 'Measurements transfer information about a system to the apparatus, and then further on – to observers and (often inadvertently) to the environment. I show that even imperfect copying essential in such situations restricts possible unperturbed outcomes to an orthogonal subset of all possible states of the system, thus breaking the unitary symmetry of its Hilbert space implied by the quantum superposition principle. Preferred outcome states emerge as a result. They provide framework for the "wavepacket collapse", designating terminal points of quantum jumps, and defining the measured observable by specifying its eigenstates.' back

www.naturaltheology.net is maintained by The Theology Company Proprietary Limited ACN 097 887 075 ABN 74 097 887 075 Copyright 2000-2022 © Jeffrey Nicholls