volume II: Synopsis
part III: Modern Physics
page 23: Recursion
One may wonder why periodic functions ('wave functions') are so useful for describing the Universe. Periodic functions are conveniently represented by complex exponentials. One of the most pregnant equations in physics is Euler's formula relating the complex exponential to the periodic functions sine and cosine. The principal mathematical formalism of quantum mechanics is worked out in the field of complex numbers. Only near the end of our calculations do we need real numbers, since we can make no observable sense of a complex amplitude. Eugene Wigner, Complex number - Wikipedia, Exponential function - Wikipedia, Euler's formula - Wikipedia
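Euler's formula can be checked numerically in a few lines. This is a minimal illustrative sketch using Python's standard cmath module:

```python
import cmath
import math

# Euler's formula: exp(ix) = cos(x) + i*sin(x).
# Check it numerically for an arbitrary real x.
x = 1.234
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))
assert abs(lhs - rhs) < 1e-12

# The complex exponential is periodic in x with period 2*pi,
# which is why it represents waves so conveniently.
assert abs(cmath.exp(1j * (x + 2 * math.pi)) - lhs) < 1e-12
```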
Why are waves so common? Basically, it is because the Universe is a dynamic system analogous to a computer in which large actions are compounded of repeated smaller actions. Periodicity is built into computers and the Universe for the same reason: it takes many small steps to make a big step. The theory of computation is sometimes called recursive function theory because computers are periodic, computing big functions by repeating the same small actions again and again. The fundamental frequency in a computer is the clock, and we often measure the power of a computer by its clock frequency. In the physical world energy and momentum are measures of processing frequency. Every tick of the cosmic clock is a quantum of action. Computability theory - Wikipedia
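The idea that a big action is compounded of repeated small actions can be sketched in code. In this toy example the `ticks` counter is an illustrative stand-in for the clock: one identical small action (a multiplication) is repeated until the large result is built up.

```python
def power_by_repetition(base, exponent):
    """Compute base**exponent by repeating one small action (multiply)."""
    result = 1
    ticks = 0  # each loop iteration is one 'tick' of the clock
    for _ in range(exponent):
        result *= base
        ticks += 1
    return result, ticks

value, ticks = power_by_repetition(2, 10)
# 2**10 reached in 10 identical small steps
```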
We model the Universe as a transfinite computer network. In this model the transfinite numbers serve as memory addresses and the computers are the processes that manipulate the memory.
Alan Turing, who invented the theoretical computer, was also among the first computer programmers. Turing developed the software necessary to turn a mechanical model of a machine not much more complicated than a typewriter into a device, the 'Universal Turing Machine', that can, in the opinion of the mathematical community, compute anything computable. Turing devised his machine to answer a question posed by David Hilbert: can all mathematical proofs (usually performed by mathematicians) be carried out by a deterministic machine executing a predetermined algorithm? Alan Turing - Wikipedia, Alan Turing
Fortunately (for mathematicians), Turing's answer is no. In fact there are only a countable number of computable functions. Since there are an uncountable number of functions from the natural numbers to themselves, this tells us that only an infinitesimal fraction of these functions are computable. In the network model, it seems reasonable to map the set of eigenfunctions of quantum mechanics onto the set of computable functions. The eigenfunctions, in other words, are quantized like the natural numbers. This idea might also lead us to suspect that the probability of observing various eigenvalues is related to the time it takes to compute the corresponding eigenfunction. Eigenfunction - Wikipedia, Wojciech Hubert Zurek
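The counting argument here is Cantor's diagonal construction, which can be sketched concretely. The 'enumeration' below is a toy list of three functions standing in for any countable list of functions from the naturals to themselves; the diagonal function differs from the nth function at argument n, so no countable enumeration can be complete.

```python
# A toy countable 'enumeration' f_0, f_1, f_2 of functions N -> N.
enumeration = [
    lambda n: 0,          # f_0
    lambda n: n,          # f_1
    lambda n: n * n,      # f_2
]

def diagonal(n):
    # Differs from f_n at input n by construction, so it cannot
    # appear anywhere in the enumeration.
    return enumeration[n](n) + 1

for n, f in enumerate(enumeration):
    assert diagonal(n) != f(n)
```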
Turing built up his machine using layers of subroutines. He begins with a machine that prints the sequence 010101... This little machine is recursive, returning to its starting point after it has printed each 01 pair. After a few examples of more complex 'tables' (programs) he introduces 'abbreviated tables' or 'skeleton tables', which have placeholders for previously defined simple tables ('subroutines'). The full table is created by filling in the placeholders in the skeleton table with the previously defined subtable. This process can be used to build up very complex machines out of relatively simple components. Modern computers have a similar software structure.
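Turing's first example can be sketched as a toy two-state machine. This is an illustrative simplification, not a faithful reproduction of his table: the machine prints 0 and 1 alternately, returning to its starting state after each 01 pair, which is the recursive cycle the text describes.

```python
def print_01_machine(steps):
    """Toy machine that prints 0 1 0 1 ... on a blank tape."""
    tape = []
    state = 'print0'
    for _ in range(steps):
        if state == 'print0':
            tape.append('0')
            state = 'print1'
        else:
            tape.append('1')
            state = 'print0'   # cycle closes: back to the starting state
    return ''.join(tape)

print(print_01_machine(6))  # 010101
```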
We can imagine such a machine having a 'frequency spectrum': simple operations occur more frequently than more complex operations, and the overall computation occurs just once if the machine halts with an answer, or not at all if the machine can find no answer and goes on forever.
Turing's machine, which he called an automatic machine, proceeds deterministically: either it finds an answer and halts, or else it goes on forever. He also imagined choice machines, which proceed to a point beyond which the next operation is indeterminate and wait for an external agency to make the choice for them. We might call such machines network machines. The machine on which I am typing processes each letter that I type and then waits for the next keystroke to determine what to do next.
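A choice machine of this kind can be sketched with a Python generator: a deterministic core that pauses at each choice point and waits for an external agent to supply the next symbol. The names and the keystroke-processing rule here are illustrative.

```python
def choice_machine():
    """Deterministic core that pauses and waits for external choices."""
    log = []
    while True:
        key = yield log          # pause: wait for the next external choice
        if key is None:
            return
        log.append(key.upper())  # deterministic processing of each choice

machine = choice_machine()
next(machine)                    # start the machine; it waits for input
machine.send('a')                # external agent supplies a keystroke
log = machine.send('b')          # log is now ['A', 'B']
```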
Energy is conserved in the physical Universe, which means that the overall rate of action measured in quanta per second is constant. Conservation of energy thus implies that the Universe is a perpetual motion machine, responsible for its own activity. From earliest times, life has been described as self-motion, so it is natural to see the Universe as a living organism. This organism is partitioned into smaller organisms, such as you and me and all the other entities we can distinguish in the whole. Acting together, these partial organisms form the life of the Universe, tantamount to the life of God. Conservation of energy - Wikipedia, Perpetual motion - Wikipedia
Let us assume that the computers working in a network spend their time encoding and decoding messages to one another; in modern language, they are 'codecs'. A codec halts when it has encoded or decoded a message. The output of a halted machine is a coded message. This message, input to a network machine, is received if it can be decoded, and changes the state of the machine. If the machine cannot decode it, we assume that the message is ignored and has no effect on the machine.
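This codec picture can be sketched in a few lines. The choice of JSON as the encoding is an arbitrary illustrative assumption; the point is that a decodable message changes the receiver's state, while an undecodable one is ignored.

```python
import json

def encode(state):
    """The output of a halted machine: a coded message."""
    return json.dumps(state)

class NetworkMachine:
    def __init__(self):
        self.state = {}

    def receive(self, message):
        try:
            # decodable: the message changes the machine's state
            self.state.update(json.loads(message))
        except json.JSONDecodeError:
            # undecodable: the message is ignored, no effect
            pass

m = NetworkMachine()
m.receive(encode({'x': 1}))   # decodable: state becomes {'x': 1}
m.receive('%%garbage%%')      # undecodable: state unchanged
```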
We consider messages to be fixed points in the network system, since they are in effect the states of halted computers. Furthermore, because of the layered structure of the system, messages may be of any scale, so that in the abstract model a photon and a human being are equally messages and fixed points in the system. Messages, including ourselves, are created and annihilated by encoding and decoding, and the conservation of energy maintains this cyclic process over a vast range of frequencies, from that corresponding to the total energy of the Universe to that corresponding to the lifetime of the Universe, i.e. zero. Photon - Wikipedia
The model suggests that the Universe has the particle/wave duality considered to be characteristic of the quantum microcosm at all levels of complexity. So we find that the world comprises an enormous number of distinct (particulate) entities: molecules, bacteria, people, trees, houses and so on. We also see that many of the processes in the world are cyclic. We build houses brick by brick, walk step by step, wheels go round and round and the sun rises and sets.
All the information in a quantum state vector is carried by its phase or angle, and the probability of interaction between different vectors depends upon the phase difference between them. We see this at work in the two slit experiment, where the phases of the waves coming from each slit determine whether they will interfere constructively or destructively. We may imagine that when processes are exactly out of phase, one undoes the work of the other, and we are left with nothing. If they are in phase, on the other hand, they cooperate. Phase (waves) - Wikipedia, Double-slit experiment - Wikipedia
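The dependence of interaction probability on phase difference can be checked directly with complex amplitudes: the detection probability goes as the squared magnitude of the summed amplitudes.

```python
import cmath
import math

def probability(dphi):
    """Probability |a1 + a2|**2 for two unit amplitudes differing in phase by dphi."""
    a1 = cmath.exp(1j * 0.0)
    a2 = cmath.exp(1j * dphi)
    return abs(a1 + a2) ** 2

assert abs(probability(0.0) - 4.0) < 1e-9   # in phase: constructive, cooperation
assert probability(math.pi) < 1e-9          # out of phase: destructive, nothing left
```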
We may think of the phase as a measure of progress in a halting computation, a full cycle corresponding to a complete calculation which corresponds in turn to a quantum of action (which may be the sum of smaller quanta). The completion of such a cycle is signalled by halting, that is the machine stops in a fixed state which is its output or message. In the case of the two slit experiment with electrons, an electron will most probably appear at the point where the signals from both slits are in phase.
Expanding on this concept, Feynman developed his path integral method in quantum mechanics by adding the phases throughout space and time and assuming that the particle followed the path along which the integrated action was stationary. We would like to interpret this to mean that the world takes the paths marked out by optimum computational algorithms, so that any deviation from the algorithm would require an increase of action, that is an increase in the number of steps in the computation. Such algorithms may be optimized by natural selection in a Universe where time is of the essence. Feynman, Path integral formulation - Wikipedia
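A toy version of this stationary-phase behaviour can be computed directly: summing phases exp(iS) over a family of 'paths', the contributions from paths near the stationary point of the action dominate, while paths far from it oscillate rapidly and largely cancel. The action and units here are illustrative, not a physical calculation.

```python
import cmath

def action(x):
    # toy action with a stationary (minimum) point at x = 1.0
    return 50.0 * (x - 1.0) ** 2

dx = 0.01
xs = [i * dx for i in range(-200, 401)]   # 'paths' labelled by x in [-2, 4]

def amplitude(points):
    """Sum the phases exp(i*S(x)) over a set of paths."""
    return sum(cmath.exp(1j * action(x)) for x in points)

total = amplitude(xs)
near = amplitude([x for x in xs if abs(x - 1.0) < 0.3])  # stationary region
far = amplitude([x for x in xs if 2.0 < x < 2.6])        # rapidly varying phase

# Paths near the stationary point dominate; distant paths mostly cancel.
assert abs(near) > 3 * abs(far)
```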
[revised 24 May 2013]
You may copy this material freely provided only that you quote fairly and provide a link (or reference) to your source.
|Berndt, Ronald M, and Catherine H Berndt, The World of the First Australians: Aboriginal Traditional Life Past and Present, 1988 Foreword: '[This book] gives a comprehensive picture of traditional Aboriginal culture, with special attention to certain areas which the authors know personally. This is how life was lived before the coming of Europeans, or before European influence dramatically modified it. ...
Here the major focus is on traditionally oriented Aborigines whose way of life is rapidly disappearing. Over most of the Continent it is already a thing of the past.
The material is fully documented, with references.'
|Feynman, Richard P, and Albert P Hibbs, Quantum Mechanics and Path Integrals, McGraw Hill 1965 Preface: 'The fundamental physical and mathematical concepts which underlie the path integral approach were first developed by R P Feynman in the course of his graduate studies at Princeton, ... . These early inquiries were involved with the problem of the infinite self-energy of the electron. In working on that problem, a "least action" principle was discovered [which] could deal successfully with the infinity arising in the application of classical electrodynamics.' As described in this book, Feynman, inspired by Dirac, went on to develop this insight into a fruitful source of solutions to many quantum mechanical problems.
|Sigmund, Karl, Games of Life: Explorations in Ecology, Evolution and Behaviour, Oxford UP 1993 Jacket: 'This book takes us on a tour through the games and computer simulations that are helping us to understand the ecology, evolution and behaviour of real life - from cat and mouse to cellular automata, from the battle of the sexes to artificial life, from poker to prisoner's dilemma.'
|Smolin, Lee, The Life of the Cosmos, Oxford University Press 1997 Jacket: 'Smolin posits that a process of self-organisation like that of biological evolution shapes the universe, as it develops and eventually reproduces through black holes, each of which may result in a big bang and a new universe. Natural selection may guide the appearance of the laws of physics, favouring those universes which best reproduce. . . . Smolin is one of the leading cosmologists at work today, and he writes with an expertise and a force of argument that will command attention throughout the world of physics.'
|Stewart, Ian, Life's Other Secret: The new mathematics of the living world, Allen Lane 1998 Preface: 'There is more to life than genes. ... Life operates within the rich texture of the physical universe and its deep laws, patterns, forms, structures, processes and systems. ... Genes nudge the physical universe in specific directions ... . The mathematical control of the growing organism is the other secret ... . Without it we will never solve the deeper mysteries of the living world - for life is a partnership between genes and mathematics, and we must take proper account of the role of both partners.' (xi)
|Suzuki, Daisetz Teitaro, Studies in Zen, Rider and Co, for the Buddhist Society 1953 Studies in Zen is the eighth volume of the collected works of DT Suzuki. Jacket: 'These studies, packed with the jewels of Zen wisdom, and written with unrivalled knowledge, will appeal to all who seek a deeper understanding of Eastern ways of thought and spiritual achievement. For Zen is unique in the whole range of human understanding, and Dr. Suzuki is accepted as its greatest exponent.'
|Wilson, Edward Osborne, Sociobiology: The new synthesis, Harvard UP 1975 Chapter 1: '... the central theoretical problem of sociobiology: how can altruism, which by definition reduces personal fitness, possibly evolve by natural selection? The answer is kinship. ... Sociobiology is defined as the systematic study of the biological basis of all social behaviour. ... It may not be too much to say that sociology and the other social sciences, as well as the humanities, are the last branches of biology waiting to be included in the Modern Synthesis.'
|Turing, Alan, "On Computable Numbers, with an application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2, 42, 12 November 1937, pages 230-265. 'The "computable" numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost as easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates and so forth. The fundamental problems involved are, however, the same in each case, and I have chosen the computable numbers for explicit treatment as involving the least cumbrous technique. I hope shortly to give an account of the relations of the computable numbers, functions and so forth to one another. This will include a development of the theory of functions of a real variable expressed in terms of computable numbers. According to my definition, a number is computable if its decimal can be written down by a machine.' back |
|Alan Turing On Computable Numbers, with an application to the Entscheidungsproblem 'The “computable” numbers may be described briefly as the real numbers whose expressions as a decimal are calculable by finite means. Although the subject of this paper is ostensibly the computable numbers, it is almost equally easy to define and investigate computable functions of an integral variable or a real or computable variable, computable predicates, and so forth. The fundamental problems involved are, however, the same in each case, and I have chosen the computable numbers for explicit treatment as involving the least cumbrous technique.' back |
|Alan Turing - Wikipedia Alan Turing - Wikipedia, the free encyclopedia 'Alan Mathison Turing, OBE, FRS ( 23 June 1912 – 7 June 1954), was an English mathematician, logician, cryptanalyst, and computer scientist. He was highly influential in the development of computer science, providing a formalisation of the concepts of "algorithm" and "computation" with the Turing machine, which played a significant role in the creation of the modern computer. Turing is widely considered to be the father of computer science and artificial intelligence. . . . ' back |
|Complex number - Wikipedia Complex number - Wikipedia, the free encyclopedia 'In mathematics, a complex number is a number which is often formally defined to consist of an ordered pair of real numbers (a,b), often written z = a + ib
Complex numbers have addition, subtraction, multiplication, and division operations defined, with behaviours which are a strict superset of real numbers, as well as having other elegant and useful properties. Notably, the square roots of negative numbers can be calculated in terms of complex numbers.
Complex numbers were invented when it was discovered that solving some cubic equations required intermediate calculations containing the square roots of negative numbers, even when the final solutions were real numbers. Additionally, from the fundamental theorem of algebra the use of complex numbers as the number field for polynomial algebraic equations means that solutions always exist. The set of complex numbers is said to form a field which is algebraically closed, in contrast to the real numbers.' back |
|Computability theory - Wikipedia Computability theory - Wikipedia, the free encyclopedia 'Computability theory, also called recursion theory, is a branch of mathematical logic that originated in the 1930s with the study of computable functions and Turing degrees. The field has grown to include the study of generalized computability and definability. In these areas, recursion theory overlaps with proof theory and effective descriptive set theory.
The basic questions addressed by recursion theory are "What does it mean for a function from the natural numbers to themselves to be computable?" and "How can noncomputable functions be classified into a hierarchy based on their level of noncomputability?". The answers to these questions have led to a rich theory that is still being actively researched.' back |
|Conservation of energy - Wikipedia Conservation of energy - Wikipedia, the free encyclopedia 'In physics, the conservation of energy states that the total amount of energy in any isolated system remains constant over time, although it may change forms, e.g. friction turns kinetic energy into thermal energy. In thermodynamics, the first law of thermodynamics is a statement of the conservation of energy for thermodynamic systems, and is the more encompassing version of the conservation of energy. In short, the law of conservation of energy states that energy can not be created or destroyed, it can only be changed from one form to another.' back |
|Double-slit experiment - Wikipedia Double-slit experiment - Wikipedia, the free encyclopedia 'The double-slit experiment, sometimes called Young's experiment, is a demonstration that matter and energy can display characteristics of both waves and particles. In the basic version of the experiment, a coherent light source such as a laser beam illuminates a thin plate pierced by two parallel slits, and the light passing through the slits is observed on a screen behind the plate. The wave nature of light causes the light waves passing through the two slits to interfere, producing bright and dark bands on the screen — a result that would not be expected if light consisted strictly of particles. However, at the screen, the light is always found to be absorbed as though it were composed of discrete particles or photons. This establishes the principle known as wave–particle duality.' back |
|Eigenfunction - Wikipedia Eigenfunction - Wikipedia, the free encyclopedia 'In mathematics, an eigenfunction of a linear operator, A, defined on some function space is any non-zero function f in that space that returns from the operator exactly as is, except for a multiplicative scaling factor. More precisely, one has Af = λf for some scalar, λ, the corresponding eigenvalue.' back |
|Eugene Wigner The Unreasonable Effectiveness of Mathematics in the Natural Sciences 'The first point is that the enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious and that there is no rational explanation for it. Second, it is just this uncanny usefulness of mathematical concepts that raises the question of the uniqueness of our physical theories.' back |
|Euler's formula - Wikipedia Euler's formula - Wikipedia, the free encyclopedia 'Euler's formula states that, for any real number x, exp(ix) = cos(x) + i sin(x), where e is the base of the natural logarithm, i is the imaginary unit, and cos(x) and sin(x) are trigonometric functions. The formula is still valid if x is a complex number, and so some authors refer to the more general complex version as Euler's formula.
Richard Feynman called Euler's formula "our jewel" and "the most remarkable formula in mathematics".' (Feynman, Richard P. (1977). The Feynman Lectures on Physics, vol. I. Addison-Wesley, p. 22-10. ISBN 0-201-02010-6.) back |
|Exponential function - Wikipedia Exponential function - Wikipedia, the free encyclopedia 'In mathematics, the exponential function is the function ex, where e is the number (approximately 2.718281828) such that the function ex is its own derivative. . . . As in the real case, the exponential function can be defined on the complex plane in several equivalent forms. . . . The exponential function is periodic with imaginary period 2πi . . . ' back |
|Path integral formulation - Wikipedia Path integral formulation - Wikipedia, the free encyclopedia 'The path integral formulation of quantum mechanics is a description of quantum theory which generalizes the action principle of classical mechanics. It replaces the classical notion of a single, unique trajectory for a system with a sum, or functional integral, over an infinity of possible trajectories to compute a quantum amplitude. . . . This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s which unified quantum field theory with statistical mechanics. . . . ' back |
|Perpetual motion - Wikipedia Perpetual motion - Wikipedia, the free encyclopedia 'Perpetual motion describes hypothetical machines that operate or produce useful work indefinitely and, more generally, hypothetical machines that produce more work or energy than they consume, whether they might operate indefinitely or not.
There is undisputed scientific consensus that perpetual motion in a closed system would violate the first law of thermodynamics and/or the second law of thermodynamics.' back |
|Phase (waves) - Wikipedia Phase (waves) - Wikipedia, the free encyclopedia 'The phase of an oscillation or wave is the fraction of a complete cycle corresponding to an offset in the displacement from a specified reference point at time t = 0. Phase is a frequency domain or Fourier transform domain concept, and as such, can be readily understood in terms of simple harmonic motion. The same concept applies to wave motion, viewed either at a point in space over an interval of time or across an interval of space at a moment in time. Simple harmonic motion is a displacement that varies cyclically.' back |
|Photon - Wikipedia Photon - Wikipedia, the free encyclopedia 'In physics, a photon is an elementary particle, the quantum of the electromagnetic interaction and the basic unit of light and all other forms of electromagnetic radiation. It is also the force carrier for the electromagnetic force. ' back |
|Wojciech Hubert Zurek Quantum origin of quantum jumps: breaking of unitary symmetry induced by information transfer and the transition from quantum to classical 'Submitted on 17 Mar 2007 (v1), last revised 18 Mar 2008 (this version, v3)
Measurements transfer information about a system to the apparatus, and then further on -- to observers and (often inadvertently) to the environment. I show that even imperfect copying essential in such situations restricts possible unperturbed outcomes to an orthogonal subset of all possible states of the system, thus breaking the unitary symmetry of its Hilbert space implied by the quantum superposition principle. Preferred outcome states emerge as a result. They provide framework for the "wavepacket collapse", designating terminal points of quantum jumps, and defining the measured observable by specifying its eigenstates. In quantum Darwinism, they are the progenitors of multiple copies spread throughout the environment -- the fittest quantum states that not only survive decoherence, but subvert it into carrying information about them -- into becoming a witness.' back |