vol VI: Essays
Essay 20: On leading theology into Cantor's Paradise (2018)
Designing heaven on Earth
Outline
0: Abstract: Form and potential in creation
1: God: From Homer to Lonergan
2: Divine logic, divine dynamics
3: Scientific method
4: Can theology be a modern science?
5: On modelling God: the Trinity
6: Simplicity, dynamics and fixed points
7: Cantor's Paradise
8: Why does the universe become more complex?
9: The logical origin of time and space
10: Hilbert space and quantum mechanics
11: Why is the Universe quantized?
12: Spin and space-time: boson and fermion
13: Entanglement: QFT describes a network
14: A transfinite computer network
15: Lagrangian mechanics and evolution
16: General relativity: the cosmic network
17: Network intelligence and consistency
18: Symmetry, invisibility and scale invariance
19: Physics is mind: panpsychism
20: Humanity: Cutting theology free from politics
21: Conclusion: Life in the divine world
0: Abstract: Form and potential in creation
The Hebrew God, Yahweh, is one. The Christian writers transformed this God into a Trinity of persons, Father, Son and Spirit. We understand a person to be both a sender and receiver of messages, generically, a source.
The classical God has three fundamental properties: it exists, it is completely without structure, and it creates the universe. These are identically the properties of the initial singularity predicted by the general theory of relativity. Starting from the model developed by Christian theologians trying to understand the Trinity, we develop a mathematical scenario for the evolution of our universe from the initial singularity by analogy with the generation of the transfinite numbers in Cantor's set theory. Initial singularity - Wikipedia, Georg Cantor: Contributions to the Founding of the Theory of Transfinite Numbers, Thomas Jech: Set Theory
We understand this evolution to be driven by the Cantor force, a consequence of Cantor's theorem, which demands that a consistent formal system must become more complex. This potential drives the increase of the entropy of the universe. The Catholic theologian Pierre Teilhard de Chardin called this complexification. Teilhard de Chardin: The Phenomenon of Man
Plato guessed that the structure of the observable world is shaped by invisible, eternal, perfect forms. Our world, he thought, is just a pale shadow of these forms. Aristotle brought these forms down to Earth with his theory of hylomorphism, which he further developed into the theory of potential and actuality that Aquinas used to lay the medieval foundations for modern Christian theology. This essay follows in the steps of Aristotle while taking the view, like Plato, that forms have a real existence that guides the structure of the world. Theory of Forms - Wikipedia, Allegory of the cave - Wikipedia, Hylomorphism - Wikipedia, Actus et potentia - Wikipedia
The modern version of this idea is quantum field theory, which proposes a space of invisible fields to guide the behaviour of the observable world. This theory is beset by serious problems. Practically, the most acute is the 'cosmological constant problem'. One interpretation of quantum field theory predicts results that differ from observation by about 100 orders of magnitude, that is, a factor of 10^100. The philosophical object of this essay is to re-interpret the relationship between mathematical theory and reality in a way that points to a solution to this problem. Quantum field theory - Wikipedia, Cosmological constant problem - Wikipedia
To explain this I follow in some detail the long and winding trail from the absolute simplicity of the initial singularity to the majestic complexity of the present universe. This all happens inside God, not outside as the traditional story tells us. We are part of the divine world, owe our existence to it and share its conscious intelligence, created, as the ancients said, in the image of God. The most powerful product of this intelligence is reflected in the mathematical formalism that shapes our selves and our world.
1: God: From Homer to Lonergan
A story records a sequence of events. There is plenty of competition for the greatest story ever told, but the winner, without doubt, is the story of the Universe itself, an exciting cosmic sequence of events that is just beginning for us after a 14 billion year prologue. Chronology of the universe - Wikipedia
There are endless subplots within the cosmic story. In the Western Catholic world a popular contender is the Christian History of Salvation. We have a short fourth century version of this story in the Nicene Creed which was written when Constantine asked the Christian bishops to standardize Christian doctrine within the Empire. An extended modern version of this story is the Catholic Catechism, written to summarize Catholic doctrine after the Second Vatican Council. The Catechism is about 1000 times longer than the creed. Catechism of the Catholic Church - Wikipedia, Nicene Creed - Wikipedia, Pope John Paul II
The God of the Summa is the conclusion of about 1700 years of physical, logical and mathematical thought. This work began about 500 bce when Parmenides and his contemporaries began to think critically about the mythical Gods they had inherited from poets like Homer. Parmenides had the idea that if there was to be true knowledge of the moving universe it must have a still eternal core of perfect truth. This idea was taken up by Plato, who imagined a heaven of eternal forms which serve as perfect exemplars for the imperfect world we inhabit. Plato's student Aristotle was a practical and observant scientist who studied logic, nature, politics and philosophy. He invented a "first unmoved mover" to explain all the motion in the world. Aquinas transformed this being into a model of the Christian God. Homer - Wikipedia, John Palmer - Parmenides, Plato: Parmenides, Thomas Aquinas: Summa Theologica, Robert Graves: The Greek Myths: Complete Edition
The relationship between a completely simple eternal God and our complex moving world is a very old epistemological issue. It arose from the notion common among philosophers that they could use a theory of human knowledge to understand the nature of the world. This is an anthropic principle. Whatever the Universe is, we say, it is a system capable of creating the Sun, the Earth and conscious animals like ourselves. We owe much of our success to thinking about the past and its lessons for the future. The anthropic principle suggests that by thinking about ourselves, we may hope to learn something about the system which created us. After all, it says in Genesis that we are created in the image of God. Anthropic principle - Wikipedia, Genesis 1:27: God created mankind, Barrow and Tipler: The Anthropic Cosmological Principle
Parmenides and his contemporaries felt that true knowledge is possible only of things that do not change. Parmenides therefore postulated that true reality must be immobile, attributing the idea to a goddess:
. . . the goddess greeted me kindly, and took my right hand in hers, and spake to me these words:
. . . One path only is left for us to speak of, namely, that It is. In it are very many tokens that what is is uncreated and indestructible; for it is complete, immovable, and without end. Nor was it ever, nor will it be; for now it is, all at once, a continuous one. . . .
Nor is it divisible, since it is all alike, and there is no more of it in one place than in another, to hinder it from holding together, nor less of it, but everything is full of what is. Wherefore it is wholly continuous; for what is, is in contact with what is.
Moreover, it is immovable in the bonds of mighty chains, without beginning and without end; since coming into being and passing away have been driven afar, and true belief has cast them away. It is the same, and it rests in the self-same place, abiding in itself. Parmenides: Fragments
Aquinas derived the standard Catholic model of God from Aristotle's theological treatment of the first unmoved mover, found in Metaphysics:
But if there is something which is capable of moving things or acting on them, but is not actually doing so, there will not necessarily be movement; for that which has a potency need not exercise it. Nothing, then, is gained even if we suppose eternal substances, as the believers in the Forms do [ie Plato], unless there is to be in them some principle which can cause change; nay, even this is not enough, nor is another substance besides the Forms enough; for if it is not to act, there will be no movement. Further even if it acts, this will not be enough, if its essence is potency; for there will not be eternal movement, since that which is potentially may possibly not be. There must, then, be such a principle, whose very essence is actuality. Further, then, these substances must be without matter; for they must be eternal, if anything is eternal. Therefore they must be actuality. Aristotle Metaphysics XII, vi, 2
The medieval Christian version of God was built on the unmoved mover. This entity is divine, pure action, enjoying eternal pleasure. Aristotle thought that the first unmoved mover was an integral part of the Cosmos. Aquinas, faithful to his religion, placed his creator outside the Universe. The doctrine of the Summa has never been superseded in the Church. It remains officially endorsed in Canon Law. Aristotle, Metaphysics 1072b3 sqq., Aquinas Summa I, 2, 3: Does God exist?, Holy See: Code of Canon Law: Canon 252 § 3
The Catholic theologian Bernard Lonergan set out to modernize Aquinas' arguments for the existence of God in his treatise on metaphysics, Insight. Lonergan's argument for the existence of God follows a time-honoured path. We all agree that the world exists, but we can see (they say) that it cannot account for its own existence. There must therefore be a Creator to explain the existence of the world. This being we might all agree to call God. We might call Aristotle's argument for the unmoved mover physical. He felt that motion could not exist without a first mover. Lonergan set out to argue that God is other than the Universe by following the psychological path pioneered by Parmenides, using the act of human understanding, insight, as his starting point. God, he said, must be perfectly intelligible. But the world is not perfectly intelligible. It contains meaningless data, empirical residue, so it cannot be divine. I think the weak spot in this argument lies in the idea that the world contains meaningless data. Since Parmenides' time we have learnt an enormous amount about our world, as I hope to explain, and it all points to a dynamic, self-sufficient, self-explanatory world. The theory of evolution suggests that there is a reason for every detail: the world is dense with meaning. Lonergan: Insight, A Study of Human Understanding
So here I set out to explore the idea that the Universe is divine. I assume that it plays all the roles traditionally attributed to God, creator, sustainer and judge. The story of humanity thus becomes part of the story of God, truly the greatest story ever told. If God is observable, theology can become scientific, based on observation, not pure faith, as the Christian churches demand. This hypothesis raises many problems for the traditional view. The most difficult is the extreme contrast between the absolutely simple God imagined by Aquinas and the mystics, and the enormously complex Universe that we inhabit. There is a mathematical answer to this problem, as we shall see in the next section. Aquinas Summa: II, II, 4, 1: Is this a fitting definition of faith: "Now faith is the substance of things hoped for, the evidence of things not seen"? (Hebrews 11:1, KJV), Aquinas, Summa, I, 3, 7: Is God altogether simple?
Implicit in the ancient view is the idea that matter is dead and inert. It cannot move itself. It cannot be the seat of understanding. It cannot be creative. Since the advent of modern physics, founded on relativity and quantum theory, these ideas are history. Quantum theory describes the universe as a gigantic network of communication, a mind. From our point of view the Universe is itself the omniscient mind of God. We move toward this conclusion scientifically, starting, like Lonergan, with an examination of the scientific study of knowledge itself.
2: Divine logic, divine dynamics
One beauty of the hypothesis that the universe is divine is that it encases the full range of science and human experience within the embrace of theology and puts the same constraint on the universe as we place on mathematics and God: that they be consistent. Mathematical consistency leads us logically to the incompleteness theorems of Gödel and the incomputability theorem of Turing, allowing room in the formalism for variation, indeterminism and free will. Logic - Wikipedia, Gödel's incompleteness theorems - Wikipedia, Turing's Proof - Wikipedia

We begin our study of the interior structure of God by considering the evolution of the mathematics industry, which probably began at about the same time as the invention of writing. Mathematics is central to this story because it gives us the formal tools to model the actual universe. George Gheverghese Joseph: The Crest of the Peacock: Non-European Roots of Mathematics
Like every industry, mathematics has agents and products. The agents are mathematicians, the products are the mathematical literature, particularly the fixed connections between hypotheses and conclusions we call proofs. The mathematicians make the literature, using creative imagination to dream up new proofs and sharing them for testing and further development. The products of mathematics form a backbone for all other industries that use measurement and computation in any form. Mathematical proof - Wikipedia
The development of mathematics looks very like evolution by natural selection. The variation comes from the minds of mathematicians, and selection is the process of proof. A mathematical structure survives if it is provable. It is provable if there is a definite logical chain of inference from a set of hypotheses to a conclusion. Given the assumptions of Euclidean geometry, for instance, the Pythagorean theorem necessarily follows. Logic and arithmetic lie at the root of all computation. Pythagorean theorem - Wikipedia
A key class of theorems for our story here are the fixed point theorems. These theorems explain the relationship between the absolutely simple God imagined by Aquinas and the enormously complex Universe that we inhabit. God is pure activity, pure dynamism. Since we assume that there is nothing outside God (or the Universe) any motion in God can be represented mathematically as a mapping of God onto itself.
Fixed point theorems tell us the conditions under which we find functions f(x) which map the point x onto itself, so we can write f(x) = x. These fixed points of a purely dynamic god-universe are the stable elements of the Universe that we live in and experience. They are the subjects of science, points that stay still at least long enough to observe and perhaps forever. I am a fixed point that will last for about 100 years. Such fixed points are not outside the divine dynamics, they are simply points in the dynamics which do not move. They are analogous to mathematical proofs, fixed points in the space of mathematical discourse. Fixed point theorem - Wikipedia
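As a toy illustration of this idea (my own sketch, not part of the classical argument), the map x → cos(x) sends the interval [0, 1] into itself, and simply iterating the map settles onto the one point it does not move:

    import math

    def fixed_point(f, x, tolerance=1e-12):
        # Iterate x -> f(x) until the point stops moving.
        while abs(f(x) - x) > tolerance:
            x = f(x)
        return x

    x_star = fixed_point(math.cos, 0.5)
    print(x_star)            # ~0.739085, the point where cos(x) = x
    print(math.cos(x_star))  # the dynamics maps it onto itself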
Plato, following Parmenides, called the fixed points in the world forms or ideas. His forms served both to shape the world and, in our minds, to enable us to understand it. Plato's forms were immaterial and eternal fixed points in the structure of the world, very much like mathematical structures.
The first mathematics had two branches, arithmetic, dealing with numbers and counting, and geometry, dealing with measurement, shapes and space. An important feature of mathematics is its "platonic" approach, abstracting forms from matter. This means that it can talk about infinite quantities such as the natural numbers without considering whether they can be physically realized. All that mathematical structures require is internal consistency. Arithmetic - Wikipedia, Geometry - Wikipedia
Georg Cantor exploited this formal property of mathematics to prove the existence of transfinite numbers. His work upset some theologians who insisted that only God can be described as infinite. David Hilbert defused this debate by emphasizing that mathematics is a purely formal game played with symbols. All that matters is that proofs and other mathematical operations be logically consistent. Since theologians hold that God must be consistent, it seems natural to use consistent mathematics to model a consistent God. Formalism (philosophy of mathematics) - Wikipedia, Dauben: Georg Cantor: His Mathematics and Philosophy of the Infinite
As suggested above, mathematics resolves the apparent inconsistency between the ancient idea that God is absolutely simple and the appearance of the universe, which is exceedingly complex. From a dynamic point of view, the complex structure of the fixed points that we observe is consistent with the dynamic simplicity of the universe: they are simply elements of the dynamics that do not move. We can see the fixed points. The dynamics are not so easy to see, but the observable fixed points provide us with a map of the dynamics, just as this fixed text maps the dynamics of my mind.
The literature of mathematics is enormous. Here I choose just six elements of mathematics as the backbone for a model of God: mathematical logic, fixed point theory, Cantor's transfinite numbers, Gödel's incompleteness, Turing's computability and Shannon's mathematical theory of communication. On the assumption that the universe is divine, theology, the traditional theory of everything, embraces all the sciences that fill in the details that do not fit in this short essay.
3: Scientific method
All our stories arise from a combination of observation and imagination. The scientific method formalizes this process into an endless cycle of observation, imagination, observation . . . . Just like driving a car really, keeping a good lookout and correcting course when necessary. If the observations do not match our imagined model, we just need more imagination to fix the model, because we believe that the Universe does not lie, just as we believe that God does not lie.
One of the most frustrating phenomena in life is the inability to understand. This unmet need is fulfilled in a rather random manner by insight, the act of understanding. The archetype of insight is Archimedes' discovery that the buoyant force on an object is equal to the weight of the fluid it displaces. In our daily lives we receive a continual stream of information from our environment which we must interpret, and, if necessary, act upon. Some reactions are instinctive, as when we duck to avoid getting hit. Others require a certain amount of thought. Archimedes - Wikipedia
At the further end of the scale are insights that have taken the collective efforts of many people thousands of years to reach. Einstein's general theory of relativity is the culmination of many thousands of years of celestial observation, geometry, mathematics and experience with massive objects. General relativity - Wikipedia
Such is the subtlety and insight of Einstein's work that people may wonder how much longer the general theory would have taken to see the light if Einstein had not done the job. Quantum mechanics, an equally subtle and sharp break from history, was the work of many people, including Einstein, and took 30 years to develop versus the fifteen years or so that Einstein worked on relativity. Albert Einstein: Relativity: The Special and General Theory
Einstein devised a mathematical model whose numerical inputs and outputs exactly mimic the behaviour of the cosmos to the precision of current measurements. He had complete confidence in the theory:
The chief attraction of the theory lies in its logical completeness. If a single one of the conclusions drawn from it proves wrong, it must be given up; to modify it without destroying the whole structure seems impossible. Einstein: Einstein's Essays in Science, page 59
All science begins with similar steps: people familiar with a body of data want to understand how it all fits together. When an answer comes that fits the data already in hand, the next job is to test it by applying it to new data. The difference between science and pure fiction is that the fictions of science are tested against reality.
Another common feature of scientific models is that they are expressed, at least partly, in mathematical terms. One advantage of mathematics is that it loses nothing in translation. It has its own unique and almost universal symbolism. Mathematical models look the same to everybody who understands them, regardless of their native language.
One of the most important mathematical foundations of science is the theory of probability. This theory tells us what things look like when there is nothing happening. So we expect equal numbers of heads and tails when we toss a fair coin. If we get heads every time, it does not take long to suspect a two headed coin. Statistical tests of data, based on the theory of probability, enable us to decide whether something is worth a closer inspection.
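A minimal sketch of this reasoning (my own illustration): under the 'nothing happening' hypothesis every toss of a fair coin is 50:50, so the probability of an unbroken run of heads collapses quickly and soon licenses suspicion about the coin:

    # Probability of n heads in a row from a fair coin is (1/2)^n.
    for n in (5, 10, 20):
        print(f"{n} heads in a row: probability {0.5 ** n:.2g}")
    # 5 -> 0.031, 10 -> 0.00098, 20 -> 9.5e-07:
    # twenty straight heads is good grounds for a closer inspection.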
The purpose of scientific method is to guide us toward true knowledge. What do we mean by true? The ancient definition is that a sentence is true if it accurately reflects reality. "John is in the bath" is true whenever John is in the bath, and false otherwise.
Some sentences are easily checked. One looks at the toaster to see if it is true that "the toast has popped". Others need careful checking. Are we the cause of global warming? A large amount of theory and data points in this direction, so the current consensus is that we are, but of course there are holdouts, some with vested interests in the old ways.
Science is methodical. The first step is to decide if we are seeing some sort of systematic behaviour, or are we simply looking at random events. We collect as many observations as possible and use statistical methods to answer this question. If we see a strong coupling between phenomena, the next step is to find mechanisms to explain this coupling. We know for a start that such couplings are established by communication. The problem then is to understand the channels and codes that support the correlations that we see. When it comes to global warming we have been aware of the mechanism for more than a century. We know that carbon dioxide and other "greenhouse gases" make the atmosphere less transparent to infra-red radiation, thus trapping more of the Sun's heat on Earth. History of climate change science - Wikipedia
A most significant step in twentieth century physics was the development of quantum mechanics to explain the relationship between radiation and matter. Spectroscopic observations in the nineteenth century revealed that atoms and molecules emit and absorb certain fixed frequencies of electromagnetic radiation called spectral lines. Spectral lines are characteristic of particular atoms and molecules and enable us to identify particular substances. As the precision of our measurements has improved we have learnt that these frequencies are fixed in nature with very high precision. To date we have constructed clocks based on atomic frequencies that are accurate to about one second in the life of the universe. The world does have a fixed core, as Parmenides suspected. Atomic clock - Wikipedia, W. F. McGrew et al: Atomic clock performance enabling geodesy below the centimetre level
There are many other features of the world which operate with mathematical precision. All these features involve counting in one form or another. So we learn when we are quite young that three apples plus three apples is six apples, and that this is an irrefutable truth. The mathematical operations of counting, measuring and arithmetic form an unimpeachable foundation to all forms of design, accounting, banking, engineering, practical everyday trade and the rules of sports and games.
The application of scientific method, formally or informally, has been the foundation of all the technological improvements in human health and wellbeing. An important part of this function is identifying situations where things are going wrong in order to make corrections. The causes and effects of climate change are a particularly important application of this role, as are the many other instances of damage to our global environment arising from uncontrolled pollution and the destruction of the environmental processes upon which we depend for our existence. Carl Safina: In Defense of Biodiversity: Why Protecting Species from Extinction Matters
4: Can theology be a modern science?
We are living on an extraordinarily complex planet in an enormous universe that emerged (we believe) from a point source about 14 billion years ago. We attribute three properties to this source: it exists, it is structureless and it is the source of the universe. The traditional Christian God has the same three properties. The aim of scientific theology is to devise a new story of creation, explaining the emergence of our present state from its source, God or the initial singularity.
In science the foundation of truth is observation. The Christian God is, by definition, invisible. Consequently Christian theology cannot be a science in the modern sense. The Catholic Church claims a monopoly on true knowledge of God. This magisterium of the Catholic Church is based purely on faith. The Church considers faith to be a virtue. On the other hand it may be foolish to believe the unbelievable on the word of a self interested corporation. Magisterium - Wikipedia
In the early days Christian ideas attracted some of the most gifted and educated scientific and political people in the Mediterranean area. A selection of these, the Fathers of the Church, have left us hundreds of volumes of commentary on the Bible and Christianity. Their work was synthesized against a background of Plato and Aristotle's philosophy, science and logic by Thomas Aquinas, who began writing his Summa Theologica (Summary of Theology) in 1265. Church Fathers - Wikipedia, Richard Kraut - Plato (Stanford Encyclopedia of Philosophy), Christopher Shields (Stanford Encyclopedia of Philosophy): Aristotle
Catholic theology sees itself as a deductive science. It deduces its conclusions from principles per se nota, that is, obvious tautologies, and from the divine revelation curated by the Church. Both the truth of the Bible and the reliability of the Church's interpretation of the Bible must be taken on faith. Aquinas, Summa, I, 1, 2: Is sacred doctrine a science?
The dramatic reign of the mythical gods began to crumble about 500 bce when scientifically minded people began to criticize them. In the next 1700 years philosophers and theologians built what looked to them like a mathematically and logically perfect God. Aristotle, Plato's student, laid the foundation of this God with his theory of potency and act. The world moves. Motion, says Aristotle, is the passage from potency to act. For Aristotle it is axiomatic that no potency can actualize itself. Every act must therefore be caused by an agent already in act, which led him to postulate the first unmoved mover, a purely actual entity devoid of potential.
Aquinas follows Aristotle in teaching that God is pure act. Aquinas' model of God is the starting point for this essay. My intention is to extend this model to the point where it can encompass both the completely simple God envisaged by Aquinas and the enormously complex Universe we see. Our first step is to construct a model that embodies this transformation and then apply and test it. Formally the classical Christian God is identical to the initial singularity predicted by general relativity. Both are structureless sources of the Universe. Our model should, therefore, also be able to handle the transformation from God and the initial singularity to the current Universe. Aquinas, Summa, I, 3, 7: Is God altogether simple?
The traditional story in Genesis is that God exercised its unbounded power and wisdom to create a world other than itself. This immediately raises a problem if we think that God is the realization of all possibility. How can it create a new world if it is already everything that can possibly exist? We do not face that problem here because we identify God and the world. What we do have to explain is how the initial point of the universe, whether we call it God or the initial singularity, differentiates into the huge and complex Universe that we occupy. We need a model that provides a logical explanation of this creative process. This model would be a foundation for scientific theology.
Our plan is to model the Universe as a layered communication network developed by analogy with systems like the internet. In practical networks, the lowest layer is the physical information transmission equipment made of wires, fibres and electromagnetic waves. The topmost layer is the users, ourselves. In between we have layers of software that perform various tasks. Each layer uses the systems provided by the layer beneath it to contribute to the layer above it. In the world, we, the human layer, rely on the layers below us which provide us with the time, space, energy and materials for life. We each contribute to the social and political layers above us. It is in the interests of the higher layers in this system to look after the lower layers upon which they depend. For us, this means that we must preserve the Earth if we are to survive. In a divine world, we must care for God because God cares for us. OSI model - Wikipedia
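A hypothetical sketch of such layering (the layer names and the serve method are my own illustration, loosely modelled on the OSI idea): each layer does its work by calling on the services of the layer beneath it.

    class Layer:
        # Each layer uses the layer beneath it and serves the layer above it.
        def __init__(self, name, below=None):
            self.name = name
            self.below = below

        def serve(self, task):
            # Delegate to the lower layer first, then add this layer's work.
            done_below = self.below.serve(task) if self.below else task
            return f"{done_below} -> {self.name}"

    physical = Layer("physical")  # wires, fibres, electromagnetic waves
    software = Layer("software", below=physical)
    users = Layer("users", below=software)
    print(users.serve("message"))  # message -> physical -> software -> users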
This network serves as a model of God connecting the ancient ideas of God to the Universe we observe. In the network model of the divine Universe, the lowest layer corresponds to the absolute simplicity of the classical God. The topmost layer, on the other hand, the universe as a whole, corresponds to the detailed knowledge and power of the classical God. The layers in between describe the spectrum of divine dynamics from absolute simplicity to unbounded complexity.
5: On modelling God: the Trinity
Aquinas derived all the traditional properties of God from the assumption that God is pure act. On the other hand the fact, revealed in the New Testament, that God is a trinity of persons is completely out of the range of philosophical theology, and flies in the face of the ancient view that the Hebrew God Yahweh is most decidedly the one true God. The Christians derived their idea of God from Yahweh of the Hebrew Bible. The Trinity was invented by the authors of the New Testament. How could the unity of God be reconciled with the triplicity of the Trinity? Initially, this was just considered one of the many mysteries revealed by God, but explanations slowly emerged. Trinity - Wikipedia, Hebrew Bible - Wikipedia, New Testament - Wikipedia
The first clue is in John's gospel, which begins: "In the beginning was the Word, and the Word was with God, and the Word was God." (John, 1:1). This sentence may allude to the ancient psychological belief that the source of the words that we speak are the mental words that enter our consciousness as we think about what we want to say. Because God is absolutely simple, theologians hold that attributes which are accidental in the created world are substantial in God. God's Word, therefore, is identical to God. The author of the Gospel identifies this word with Jesus, the Son of God, the second person of the Trinity, who "was made flesh and dwelt among us" (John 1:14). The Gospel according to John
The writers of the Nicene Creed were simply expressing Christian belief without trying to explain it. Augustine of Hippo, however, used John's idea to produce a psychological model of the Trinity based on human relationships. Aquinas developed this idea in great detail, starting with the procession of the Word, the second person of the Trinity. He saw this procession as analogous to understanding, the act of intelligence. The third person, the Holy Spirit, proceeds conjointly from the Father and the Son, corresponding to love, the act of will. Augustine of Hippo - Wikipedia, Augustine: On the Trinity, Aquinas, Summa, qq 27-43
Aquinas explains that the persons of the Trinity are distinguished by their relationships to one another. The Father has the relation of paternitas to the Son, and the Son the inverse relation of filiatio to the Father. He notes that there are no proper names for the relationships established by the will, so the relationship between the Father and Son and the Spirit is named for the act, spiratio, and the inverse act is called simply processio.
Since God is absolutely simple, all its attributes are substantial and identical to itself. So in God essence and existence are identical, as are the relationships that distinguish the persons of the Trinity. There is not much to be said about God so conceived, which is why Aquinas, following Dionysius the pseudo-Areopagite, tells us that we cannot say what God is, only what it is not. Pseudo-Dionysius the Areopagite - Wikipedia, Apophatic theology - Wikipedia
The simplicity of God creates a logical contradiction when we come to the Trinity. The Father is not the Son, and yet both are identically God. One way to resolve this problem is to think of God as a space. Each of the persons exists in the divine space, yet they are all distinct. This is exactly analogous to a room full of people each occupying a personal space which distinguishes one from the others. Thus the Trinity suggests a logical model for the emergence of space-time within God. We will return to this below.
Aquinas explains that the relationships between the persons are created by their origins. They proceed from one another. In the human world, I am a father, and my fatherhood extends well beyond the moment of birth through my ongoing communication with my children. In practical terms relationships are created by communication. I have no immediate relationship with people I do not know although we are all related indirectly through society and our common descent from the earliest forms of life. Tree of life (biology) - Wikipedia
Communication is copying. The sender sends a copy of the message to the recipient and receives a copy of the reply. The Father copies himself to create the Son. Insofar as the Trinity is part of the world the next logical step is to look at it through the eyes of quantum mechanics. But first we have to build a framework for quantum theory.
6: Simplicity, dynamics and fixed points
Continuity has two broad meanings. The first applies to stories and other logical structures. A good story is a continuous narrative that reaches a satisfactory conclusion. The second applies to physical structures and events like lines and motions. We call the first logical continuity, the second geometrical continuity.
Geometric continuity is closely related to motion. It is smooth like the flow of a fish through water. The continuity of stories, on the other hand, is real but not smooth. We see this best in the movies, where stories are told in a sequence of scenes. We snap from one scene to another, but as long as they fit together and make sense the story flows. Just like daily life, which is a fluid sequence of discrete events, doing the dishes, changing nappies, running for the bus . . ..
Aristotle defined 'continuous' as having points in common, some sort of overlap, as in a chain. Geometers, on the other hand, see a continuum as a series of points. This version of geometric continuity now dominates mathematics. It culminated in Cantor's development of point set theory which we discuss in the next section. Point (geometry) - Wikipedia
The development of a continuum out of points was not easy because the two concepts are contradictory. A continuum is featureless. A point on the other hand is a feature, isolated and addressed. The continuum as studied by mathematical analysis works on the principle that if we squeeze enough points into a small enough space we may say the result is continuous. In a continuous line, there is always another point between any two points. In the real numbers, there is always another number between any two numbers. This theory is a mathematical creation and it may be that no true geometric continuum exists in reality. Leopold Kronecker has been quoted as saying "God made the natural numbers; all else is the work of man." From a physical point of view, everything we observe is quantized, even motion, which occurs as tiny events measured by a quantum of action. Planck constant - Wikipedia, Leopold Kronecker - Wikipedia
From another point of view, a point and a continuum are very much the same. A point is an isolated entity addressed by a real number; it has no size, and so may be considered simple. A continuum is also featureless. Since neither has any structure, we may call both simple. The first attribute of God that Aquinas derived from the existence of God is simplicity. This suggests that we may think of God as a point or a continuum, so that it shares the same split personality as the mathematical continuum composed of points, something we have already observed in the doctrine of the Trinity.
We proceed here on the assumption that the important form of continuity relevant to understanding the Universe is logical continuity. In fact all our mathematical proofs about geometric continuity are logical. The archetype of logical continuity is a mathematical proof, a formal chain of connected logical steps leading from an hypothesis to a conclusion. We see the observable world as quantized, digital and logical, a connected story of discrete events. We will describe it with a transfinite version of a computer network. Implicit in this model is the idea that the proper way to understand the divine Universe is psychological, through intelligence and mind.
Our first step is to use the logic of the mathematical theory of fixed points to make a connection between the actus purus, omnino simplex God of Aristotle and Aquinas and the exceedingly complex cosmic system of which we are a part. The world looks complicated enough to the naked eye, but we must remember that the finer and finer structure of the Universe continues down to a scale of billionths of billionths of a millimetre and beyond. The basic process of the Universe is pixellated in units of Planck's constant, which is exceedingly small by human standards, about 10^-34 Joule seconds.
Mathematicians establish the logical connections between a set of hypotheses and a conclusion through a proof. Such a connection yields a theorem. There are a large number of mathematical theorems, and some of them have many proofs. New proofs of old theorems often serve to link different branches of mathematics together.
We write mathematical proofs in a specialized language which we hope will make things very clear and concise. We may think of a written proof as the software of a machine which executes the proof. That machine is often the mind of a mathematician, but computers can perform similar tasks. At least they can check the process even if they do not understand where it is going.
Although everything we see in the world is a discrete object or event, most philosophers and scientists since time immemorial have considered the world to be continuous. The most likely explanation of this state of affairs is that motion appears continuous. While a ball might be a distinct object, it moves through the air on a continuous trajectory.
We use functions for the mathematical description of motion. When a wheel revolves, points that were initially in one place are mapped to a new place, and the complete rotation of the wheel may be represented by a function that describes the mappings of all the points of the wheel at each instant of its rotation. We find a fixed point at the centre of the wheel which is mapped onto itself by these functions. A mathematical version of this intuitive result is the Brouwer fixed point theorem.
Brouwer's fixed point theorem tells us that a continuous function f(x) from a compact convex subset of Euclidean space to itself has a point x for which f(x) = x. Euclidean space is considered to be infinite in all three dimensions. A subset of Euclidean space may be either the whole space or some part of it. A set is compact if it is closed (containing all its limit points) and bounded (having all its points within some fixed distance of one another). It is convex if no straight line between any two points in the set goes outside the set. Brouwer fixed point theorem - Wikipedia, Compact space - Wikipedia, Convex set - Wikipedia
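The wheel of the earlier paragraph gives a concrete check (a numerical sketch of my own, not Brouwer's proof): rotation maps the disc, a compact convex set, onto itself, and moves every point except the centre.

    import math

    def rotate(point, angle):
        # Rotate a point of the plane about the origin.
        x, y = point
        c, s = math.cos(angle), math.sin(angle)
        return (c * x - s * y, s * x + c * y)

    print(rotate((0.5, 0.0), math.pi / 3))  # moved: (0.25, 0.433...)
    print(rotate((0.0, 0.0), math.pi / 3))  # the fixed point: (0.0, 0.0)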
A subset of Euclidean space is not a very good model of God of course. A more suitable model of God would contain just one axiom: God is self consistent. Not even God can do something inherently inconsistent, like 'squaring the circle' or 'creating a stone heavier than it can lift'. Aquinas, Summa I, 25, 3: Is God omnipotent?
Can we prove a fixed point theorem on the strength of consistency alone? That is, by using the type of proof known as the via negativa or reductio ad absurdum. In both cases we show that denying the hypothesis leads to contradiction. In other words the hypothesis is a tautology, a built in feature of the symbolism.
Here we come up against a paradox of set theory known as Cantor's paradox. Cantor proved that given a set of a certain cardinal, there is always a set with a greater cardinal, a consequence of the 'axiom of the power set'. Thus we cannot have a largest set, since the axiom would demand that it immediately generate a larger set, and so on without end. We cannot therefore talk about a greatest set, and so we do not have a candidate set to represent God. Cantor's paradox - Wikipedia, José Ferreirós: "What Fermented in Me for Years": Cantor's Discovery of the Transfinite Numbers, Axiom of power set - Wikipedia, Hallett: Cantorian Set Theory and Limitation of Size
Whatever God is, we can only talk about subsets of it. And as long as these subsets fulfill the hypotheses of some fixed point theorem, we can expect to find a fixed point within them. Insofar as there may be a very large number of ways of mapping a subset of the universe onto itself, we can expect to find a correspondingly large number of fixed points.
We can imagine that the subsets of God 'cover' God, so that there is a good chance that the existence of fixed points in the divinity is logically necessary, insofar as a dynamic God (pure actuality, actus purus) would be inconsistent if it did not have fixed points.
Almost everything in the Universe moves. Photons travel at the velocity of light. Tectonic plates move a few centimetres per year, and most other velocities fall somewhere in between. We say, following Einstein, that all motion is relative. We only become aware of motion when we can compare something 'moving' with something 'still'. Which is moving and which is still depends on our point of view. Nevertheless, some things, like mathematical theorems, are considered to be eternal, and there are also physical properties of the world, like the quantum of action and the velocity of light, which are considered to be fixed and eternal. As Parmenides and his followers felt, the universe has a fixed unchanging core beneath the endless flow of change.
The mathematical treatment of continuity has a long history. Parmenides' student Zeno supported his master by devising a series of mathematical proofs that motion is impossible, that is, self-contradictory. Zeno raised questions of continuity and infinity that are still open today.
One is the paradox of Achilles (a very fast runner) and the tortoise (traditionally very slow). As Aristotle puts it:
In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead. Aristotle, Physics, VI, ix: Achilles and the tortoise
The point of interest here is that Zeno has constructed a logical argument about continuity and started a long tradition of discussion about the relationship of 'logical continuity' to 'physical or geometrical continuity'. Geometrical continuity is established by proximity. The Bolzano-Weierstrass theorem tells us the story. As things converge, points get closer together approaching but never reaching zero distance. This mathematical notion of 'limit' serves to bridge the conceptual gap between continuity and pointlikeness. The mathematics of the continuum is generally held to be consistent, but we may ask, in the light of the physical quantization of all observations, if it truly represents reality. Completeness of the real numbers - Wikipedia, Bolzano-Weierstrass theorem - Wikipedia
Logical continuity, on the other hand, is not based on spatial closeness but on the interaction of symbols represented in some medium, which might be a human mind, a motion picture or the whole Universe. In mathematics logical continuity is demonstrated by proof. Formally a proof comprises an unbroken chain of logical statements that couple a set of hypotheses to a conclusion. The Pythagorean theorem tells us that given Euclidean geometry, the square of the hypotenuse of a right angled triangle is equal to the sum of the squares on the other two sides. This can be proved in hundreds of ways.
In reality, a proof is a mechanical process represented by a sequence of physical events, like the decoding of DNA into protein, the electronic steps in a digital computer, or the molecular processes in a system of nerves and muscles. Every event is an act of communication constrained by the logical continuity (computability) of the algorithms for encoding and decoding messages. We might imagine that the future processes itself into existence using the resources of the past. To go further into this we explore a mathematical space big enough to represent the universe, Cantor's transfinite numbers.
7: Cantor's Paradise
Aus dem Paradies, das Cantor uns geschaffen, soll uns niemand vertreiben können.
No one shall expel us from the Paradise that Cantor has created.
David Hilbert: "Über das Unendliche" [On the Infinite] in Mathematische Annalen 95, (1926) Peter Macgregor: A glimpse of Cantor's paradise
Mathematics is very much involved with infinity. The simplest infinity in the mathematical toolbox is the set of natural numbers, N = {1, 2, 3 . . . }. We construct each new natural number by adding one to the one before it. There is no reason for this process to stop, so there is no largest natural number. The natural numbers are infinite, endless. Peano axioms - Wikipedia
Natural numbers are good for counting discrete objects like sheep and beans, but they are not so good for measuring continuous quantities like mass or length. To do this, we need to introduce fractions. A very interesting distance is the length of the diagonal of a unit square. If we measure it with a tape graduated in natural numbers, we find that the distance is somewhere between 1 and 2.
To get a better measurement, we can use a tape with fractions and approximate the length of the diagonal, which the Pythagorean theorem tells us is precisely the square root of 2. Progressively more accurate measurements are 1.4, 1.41, 1.414 and so on. Mathematicians proved in ancient times, however, that there is no fraction (that is, no rational number) exactly equivalent to √2. Square root of 2 - Wikipedia
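A sketch of this progressive measurement (my own illustration, using exact integer arithmetic): each pass fixes one more decimal digit from below, reproducing the sequence 1.4, 1.41, 1.414, . . . which never terminates, since no finite decimal squares exactly to 2.

    def sqrt2_digits(places):
        # Largest decimal with `places` digits whose square does not exceed 2.
        n, scale = 1, 1  # the approximation is the fraction n / scale
        for _ in range(places):
            n, scale = n * 10, scale * 10
            while (n + 1) ** 2 <= 2 * scale ** 2:
                n += 1
        return n, scale

    for places in (1, 2, 3, 4):
        n, scale = sqrt2_digits(places)
        print(n / scale)  # 1.4, 1.41, 1.414, 1.4142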
This led to the development of the real numbers, which have been constructed so that there is a real number corresponding to the length of every line such as the diagonal of a unit square. This correspondence established a firm connection between arithmetic and geometry. Until the time of Descartes and his invention of Cartesian coordinates, arithmetic and geometry had remained more or less separate mathematical subjects. René Descartes - Wikipedia
We may consider a point as a named (numbered) symbol "which has no part". Toward the end of the nineteenth century, Georg Cantor asked how many points it takes to make a real continuous line; in other words, what is the cardinal of the continuum? Since the real numbers are in correspondence with the points of the line, the cardinal of the continuum is the cardinal of the set of real numbers. Cantor set out to find a representation of this number. Euclid: Elements, Real number - Wikipedia, Cardinality of the continuum - Wikipedia
Cantor revolutionised the symbolic space and methodology of mathematics when he published his papers on transfinite numbers in 1895 and 1897. Georg Cantor: Contributions to the Founding of the Theory of Transfinite Numbers (online)
Cantor's idea is to generate new cardinal numbers by considering the ordinal numbers of sets. The foundation of the whole system is the set N of natural numbers, which is said to be countably infinite. Since there is no greatest natural number Cantor invented the symbol ℵ0 to represent the cardinal of N. ℵ0 is the first transfinite number.
Cantor's idea was to exploit position and order (as used in the decimal system) to generate ever larger numbers. N has a natural order, 0, 1, 2, . . .. We can permute this order in ℵ0! (factorial) ways to create the set of all permutations of the natural numbers whose cardinal we assume to be ℵ1. The cardinal of the set of all permutations of permutations of the natural numbers becomes ℵ2. This process can be continued to produce the endless hierarchy of transfinite numbers. Factorial - Wikipedia
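For finite sets the growth this construction relies on is easy to exhibit; a small sketch (my own illustration): the permutations of a set rapidly outnumber the set itself, a finite shadow of the transfinite hierarchy.

    from itertools import permutations
    from math import factorial

    for n in range(1, 6):
        count = len(list(permutations(range(n))))
        assert count == factorial(n)  # n elements have n! orderings
        print(f"{n} elements -> {count} permutations")
    # 1, 2, 6, 24, 120, ...: permuting, then permuting the permutations,
    # drives the numbers upward without end.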
This huge space of numbers, known as the Cantor Universe, provides us with sufficient numbers to address all the fixed points in the universe, no matter how many there may be.
There are ℵ1 permutations of the ℵ0 elements of N, but Turing found that there are only ℵ0 computable algorithms available for computing these functions. This suggests that a large proportion of all possible permutations are incomputable. This constraint imposes boundaries on stable, that is computable, structures in the universe. Cantor believed that the transfinite number system is capable of enumerating anything enumerable, and so cannot be further generalized. Thus the transfinite numbers provide a space of symbols large enough to encompass anything that mathematicians may imagine, and provide us with a mathematical tool to help represent a divine universe constrained only by consistency.
8: Why does the universe become more complex?
The Hebrew god Yahweh was one God. The Christians introduced the Trinity, which at first glance looks like three Gods, although Christian theologians insisted that God was still one, but comprised three distinct persons, Father, Son and Spirit. The creation of the Trinity is more dramatic than theological. It helps to make the Christian story more coherent. God the Father is the Christian transformation of Yahweh. God the Son became Jesus of Nazareth, the human destined to be sacrificed to placate the Father for the disobedience of the first people. God the Holy Spirit serves to guide the Church that Jesus founded to propagate his message of redemption to the world.
Here we have laid a foundation for identifying God and the world by equating the classical Christian God with the initial singularity, on the grounds that both are completely simple sources of the universe. The Trinity provides a foundation for the Christian story. The standard explanation of the big bang is that the enormous concentration of energy in the initial singularity quite naturally produced all the particles and structure of the current universe. It is an unquestioned assumption in physics that where there is enough energy new particles will appear. The theoretical foundation of this observation is the relativistic equivalence of mass and energy. Massive particles may annihilate to liberate energy; energy may create new massive particles. Francisco Fernflores (Stanford Encyclopedia of Philosophy): The Equivalence of Mass and Energy
We begin with the ancient tradition that God is pure act (actus purus). Actus and act are used to translate two Greek Aristotelian terms, energeia (ενεργεια) and entelecheia (εντελεχεια). Energeia means activity or operation. Entelecheia means full or complete reality. Between them these terms capture the essence of Aristotle's and Aquinas' understanding of God. Here we equate them to the word action used in physics. Action, S, has a precise mathematical definition in both classical and quantum physics: it is the time integral of the Lagrangian, L. The Lagrangian is the difference between the kinetic energy and the potential energy of a system expressed as functions of time, L = KE - PE.
S = ∫ L dt
Action (physics) - Wikipedia, Lagrangian - Wikipedia

Here I guess that the first step in the complexification of the universe is the emergence of energy. Energy is the time frequency of action, expressed in physics by the Planck-Einstein relation E = ℏω, where ℏ is Planck's quantum of action and ω measures frequency. Here we understand the quantum of action as the fundamental unit of measurement in the world and see it logically as the difference between p and not-p. Every action changes things, annihilating one state and creating another. This definition establishes the equivalence between physics and logic which underlies the picture of the divine universe developed in this essay. Philip Goff et al. (Stanford Encyclopedia of Philosophy): Panpsychism
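As a numerical check of the definitions above (my own illustration, with mass, gravity and duration chosen arbitrarily), the action of a body falling from rest can be computed by summing L = KE - PE over small time steps:

    # S = integral of L dt for a unit mass falling freely for one second.
    m, g, T, steps = 1.0, 9.8, 1.0, 100_000
    dt = T / steps
    S = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt                # midpoint of this time step
        KE = 0.5 * m * (g * t) ** 2       # kinetic energy of free fall
        PE = -m * g * (0.5 * g * t ** 2)  # potential energy lost in falling
        S += (KE - PE) * dt               # Lagrangian L = KE - PE
    print(S)  # ~32.0 joule seconds, the exact value being m g^2 T^3 / 3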
I am setting out to model the divine universe as a computer network. As we have already noted, practical networks like the internet are constructed in layers, starting with a physical layer at the bottom and building up to the layers of users at the top. The layered network idea may give us a means of classifying and ordering the appearance of new features in the growing universe as it complexifies.
Each layer of the system uses the facilities provided by the layer beneath it to perform its task and its output is used by the layer above it. We consider the universe to have a similar structure. We identify the fundamental physical layer as the classical God or the initial singularity. The next layer is energy, which serves as the input to gravitation and quantum mechanics. These layers in turn serve the large and small scale structures of the universe which have evolved over the fourteen billion years since the initial singularity began to differentiate.
This layered structure models the increasing complexity of the universe as time goes by. We might ask: why does this happen? Why did the universe not simply remain, like the classical God, a structureless eternal entity? A first possible answer to this question is to be found in a combination of Cantor's formal development of the transfinite numbers, the cybernetic principle of requisite variety and the evolutionary ideas of variation and selection.
Cantor proved that given a set with a certain cardinal number of elements, there necessarily exists a set with a greater cardinal number. If we can apply this theorem to a universe with a certain number of elements, the formalism may compel us to admit that the number of elements in this universe will increase, that is, it will become more complex. This suggests that we might call the gradient of the potential which moves the universe to complexify the Cantor force. Since the transfinite numbers grow very fast, this gradient is very steep and the force consequently strong. We might see it as the force behind the "big bang" which is postulated to have begun the universe. Cantor's theorem - Wikipedia
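Cantor's theorem can be watched at work on finite sets; a minimal sketch (my own illustration): the set of all subsets is always strictly larger than the set itself, so iterating the power set operation forces unending growth, the gradient named the Cantor force above.

    from itertools import combinations

    def power_set(s):
        # All subsets of s: always strictly more subsets than elements.
        items = list(s)
        return {frozenset(c) for r in range(len(items) + 1)
                for c in combinations(items, r)}

    s = {0}
    for _ in range(4):
        bigger = power_set(s)
        print(f"{len(s)} elements -> {len(bigger)} subsets")
        s = bigger
    # 1 -> 2 -> 4 -> 16 -> 65536: there is no largest set.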
Gödel proved that any logically consistent formal symbolic system with a sufficient number of elements will be incomplete. In other words, it will be able to form true propositions which can neither be proved nor disproved, so introducing uncertainty. The principle of requisite variety, derived from this result, tells us that no system can deterministically control a system more complex than itself. This means that the past cannot control the future if the universe becomes more complex as time goes by. Gregory J. Chaitin: Gödel's Theorem and Information, Ashby: An Introduction to Cybernetics
Evolution proceeds by variation and selection. We expect variation to occur because simple systems cannot control more complex ones. Every system except the whole universe exists as a subsystem of a larger system, which it cannot therefore control. This is analogous to the fact that words cannot assemble themselves into meaningful sentences. Sentences may be seen as the products of random concatenations of words, most of which do not make sense. As I write, I act as a higher layer, selecting from all possible sentences those that correspond to the meaning I am trying to express. Even though incompleteness and incomputability make it difficult to predict which of the systems created by such random variation will survive, nevertheless the surviving systems chosen by higher layers are in effect true propositions in the Gödel sense, and so add more complexity to the overall system. From this point of view, creative variation and selective choice operate at all levels in the universal network.
9: The logical origin of time and space
What is the universe made of? Some would say energy, but here we will go one layer deeper and say that the universe comprises a multitude of actions. This answer is consistent with the idea that the universe is divine, and with the proposal by Aristotle and Aquinas that God is pure act. Every action is measured by the quantum of action, the atomic action. In our macroscopic world the numerical value of the quantum of action is tiny, but small actions blend seamlessly to constitute large actions. The largest action of all is the life of the universe itself.
We may assume that the quantum of action in itself has no particular physical size, only a logical definition. An action is something that changes a system: what was once p becomes not-p. Any system of units has to start somewhere, and we take the quantum of action to measure the primordial undefined event. Once the size of the universe becomes sufficiently large, there can be any number of states not-p available for p to be transformed into; the transformation requires a minimum of one quantum of action, but there is no fixed maximum. The quantum of action, the velocity of light, the charge of the electron, the gravitational constant and so on provide physical foundations for our systems of units, and appear to be defined with absolute precision, which suggests that they are built on logical foundations which we are seeking to bring into the light.
Each layer in the network model uses the resources provided by the layer beneath it to perform tasks which serve as resources for the layer above it. Here we consider the initial singularity to be the root of the Universe, and we identify this singularity with the classical model of God produced by Aristotle and Aquinas.
Aquinas follows the Christian faith in believing that God is the creator of the world. He used the fact that God is pure actuality to give very abstract arguments for the classical attributes of God, simplicity, immobility, eternity, life, truth, goodness, omniscience and omnipotence. It is not easy to see how some of these attributes fit together. How can an absolutely simple being be omniscient if it has no internal structure to store information about the enormous complexity of the universe? We overcome this problem by identifying God and the Universe. We preserve the simplicity of God by imagining that it is one dynamic system whose fixed points are parts of the dynamics, as mathematical fixed point theory suggests.
The current theory for the origin of the universe is known as the "big bang". This model assumes that the universe began as a pointlike initial state of zero size, infinite energy density and infinite temperature. This state may be physically impossible just like the God it replaces, but serves as the theoretical starting point for much of modern cosmology. An alternative approach, suggested by Richard Feynman and favoured here, is that the total energy of the universe has been at all times zero. This is possible because energy comes in two forms, potential and kinetic energy whose algebraic sum may be assumed to be zero. Big Bang - Wikipedia, Feynman: Feynman Lectures on Gravitation
We are proceeding here on the basis of two principles, laws or symmetries: first, to be is to be consistent; and second, derived from this, the principle of symmetry with respect to complexity, meaning that the consistency principle applies at all levels of complexity, from the initial singularity, modelled on the absolute simplicity of the traditional god, to the exceedingly complex state of the universe we currently observe.
The principle of symmetry with respect to complexity suggests that we can use the behaviour of the mathematical community as a guide to modelling the universe at every scale. We see this community as a network of people sharing all sorts of messages, the most characteristic of which are the fixed logically consistent propositions we call theorems. Theorems are created by the unconstrained imagination made possible by incompleteness (Gödel) and proved by logical chains of reasoning or computation (Turing).
The history of mathematics suggests that it is a recursive process, beginning with the simple arithmetic and geometry of counting and measuring, and generating layer after layer of more complex formal structures which have turned out to be very useful for modelling ourselves and the world we occupy. We may attribute this usefulness to the fact that both the world and mathematics share the same property of consistency. Eugene Wigner: The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Philosophers debate whether mathematics is purely a human creation or whether it exists in the world independently of us and we discover it, rather as we discover new laws and symmetries of nature. Plato thought that mathematics is part of the world, one of the many forms that guide the world. Legend has it that the words "Let None But Geometers Enter Here" were inscribed above the entrance to the Platonic Academy. At the other extreme is the view that mathematics is a purely human creation. Here I feel that mathematics is effectively part of the world, but as Einstein points out:
It seems that the human mind has first to construct forms independently before we can find them in things. Kepler's marvellous achievement is a particularly fine example of the truth that knowledge cannot spring from evidence alone but only from the comparison of the inventions of the intellect with observed fact.
So it was that only after we had invented radar and sonar were we able to understand that bats also use echolocation. Here, therefore, we imagine that it has first been necessary to invent mathematics in order to realise, as Wigner has pointed out, that it is embodied in the world and provides us with a universal language for describing our total environment, that is, for a theology. Platonic Academy - Wikipedia, Philosophy of Mathematics - Wikipedia
The theory of relativity, which defines the large scale structure of the universe, tells us that gravitation sees only energy and is completely blind to all the different forms that energy can take. Quantum mechanics tells us that energy is a measure of the rate of action expressed by the Planck-Einstein equation, E = hf, where h is Planck's constant and f is frequency, the inverse of duration, the time it takes an event to occur. If a repeated event takes a tenth of a second, its frequency is 10 times per second, 10 Hertz (10 Hz). Hawking & Ellis: The Large Scale Structure of Space-Time, Planck-Einstein relation - Wikipedia
In classical mechanics all physical quantities are expressed in terms of three 'dimensions': mass (M), length (L) and time (T). Velocity, for instance, is distance divided by time, so its dimension is L/T = LT⁻¹. Energy is measured as mass multiplied by the square of velocity, so its dimension is ML²T⁻². Action, which is the product of energy and time, has the dimension ML²T⁻¹, which is the same as angular momentum, the product of momentum and radius of gyration, ie mass × velocity × radius, ie ML²T⁻¹. At this fundamental level, however, we may consider action to be a scalar quantity, having no specific dimension, which implies that in quantum mechanics the dimension of energy is simply inverse time, T⁻¹, as suggested in the paragraph above.
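As a check on this dimensional arithmetic, here is a minimal Python sketch. It represents a dimension as a triple of exponents (M, L, T) and multiplies physical quantities by adding exponents; the names and representation are illustrative only, not a standard library.

    # Minimal dimensional analysis: a dimension is an exponent triple (M, L, T);
    # multiplying quantities adds the exponents.
    def mul(a, b):
        return tuple(x + y for x, y in zip(a, b))

    MASS, LENGTH, TIME = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    INV_TIME = (0, 0, -1)

    velocity = mul(LENGTH, INV_TIME)                 # LT⁻¹
    energy = mul(MASS, mul(velocity, velocity))      # ML²T⁻²
    action = mul(energy, TIME)                       # ML²T⁻¹
    momentum = mul(MASS, velocity)                   # MLT⁻¹
    angular_momentum = mul(momentum, LENGTH)         # ML²T⁻¹

    # action and angular momentum share the dimension ML²T⁻¹
    assert action == angular_momentum == (1, 2, -1)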
The Standard Model of physics takes space-time for granted, and sees it as the domain for many different fields corresponding to the many different fundamental particles that we observe in the world. Here we view space-time as the second layer of structure to emerge from the initial singularity, built on energy and time, energy being the time rate of action, action the time integral of energy. It is not a passive backdrop for the world, but an active participant, a reservoir of energy from which the world is constructed. The question is what is the relationship between quantum mechanics as described in Hilbert space and the four dimensional spacetime in which we live?
So let us consider the logical source of energy to be the not operator which transforms p into not-p. We can understand the source of energy in the universe as a system in which it is true that not-not-p = p. Logic is such a system. Logically p and not-p cannot exist at the same time in the same place. In other words, the creation of not-p in such a system annihilates p and vice versa. So, in the broadest sense, energy measures the rate of change in a dynamic universe.
We have something like a clock, tick replacing tock, tock replacing tick and so on. This is a clock with a ticker but no counter. If we could count the ticks of this clock, we would observe energy, the rate of action. Since down here in the pointlike foundation of the universe there are no observers, we can imagine that both energy and action exist, and are measured by the rate at which something happens. Energy measures the rate at which "before" becomes "after". Aristotle defined time as "the number of motion according to before and after". This definition of time is closely related to his definition of motion, which is in turn related to his definition of nature, the principle of motion. Aristotle (Time): Physics 219b sqq, Aristotle (Motion): Physics 201a10 sqq, Aristotle (Nature): Physics 192b22 sqq
These definitions are almost tautological, and are consistent with the idea that any process yields the two forms of energy, potential and kinetic. The state p is potentially not-p, and not-p is potentially p: this potentiality is potential energy. One is real, the other is possible, ie consistent but non-existent. We see an analogy in quantum field theory. A field is a formal mathematical entity which, if properly conceived, is self consistent. The addition of energy to a formal mode of the field creates a particle. Losing the energy annihilates the particle. Algebraically, these two forms of energy add up to zero, so the transition from action to energy creates something new by the bifurcation of something old, and we imagine this to be the fundamental mechanism of creation with conservation. What is created is a new layer in the universal network. What is conserved is the layer beneath it.
In modern physics the conservation of energy is the second fundamental symmetry of the universe after the conservation of action. The energy of the universe remains constant, possibly zero, as time goes by, the physical equivalent of eternity. Conservation holds if we count kinetic energy as positive and potential energy as negative. A frictionless pendulum, for instance, would swing forever, transforming potential energy into kinetic energy and back again. If this is the case, we no longer have a problem with the infinite energy density of the initial singularity. Conservation of energy - Wikipedia, Simple harmonic motion - Wikipedia
In this picture, the first physical particle to arise from pure action may be the photon, which carries one quantum of action in the form of angular momentum or spin, and space-time frequency in the form of energy and momentum, which are coupled by the quantum of action. We cannot observe a photon without annihilating it, but if we could, the Lorentz transformation predicts that it would appear to have zero length and its clock would stand still. The path taken by a photon is a null geodesic, so that the space-time interval between the creation of a photon and its annihilation is zero. Lorentz transformation - Wikipedia
How does space-momentum arise from energy-time? So far we have guessed that the logical operator responsible for energy is not. The not operator performs simultaneous creation and annihilation, annihilating one state while creating the other, like a pendulum annihilating potential energy to create kinetic energy and vice versa. Here not-not ≡ nop, ie nothing happens, and we return to the initial state. Mathematically, we have a cyclic group of two operations, not and the identity element nop.

Now we imagine that the next step in the emergence of the universe is the advent of space. The ancient theoretical starting point for the complexification assumed here is the Christian doctrine of the Trinity. One historical representation of the Trinity, known as the Shield of the Trinity, illustrates that there is a certain mysterious inconsistency in the Trinity which has been a stumbling block for theologians ever since the Trinity became part of Christian doctrine. God is both one and three. The Father is God, the Son is God and the Holy Spirit is God, yet the Father is not the Son, the Son is not the Spirit and the Spirit is not the Father. How can this be? Nicene Creed - Wikipedia, Shield of the Trinity - Wikipedia
So let us envisage a system in which both p and not-p can exist simultaneously and interpret this as the origin of space. A space is, by definition, a state where two or more distinct systems can exist simultaneously, like you and me. From this point of view, the Trinity may be seen as a three dimensional space with three orthogonal dimensions, each of which is not the other. This idea, that space serves to reconcile the existence of contradictions, may be a key to explaining the complexification of the universe.
Our method here is to try to imagine what the universe was like at the very beginning, when it had first just one state, existence, like the classical God, then two states, three states and so on. Without going into too much detail, we can proceed on the basis that these states are logically distinct. The next step is to explore the physical implementation of this logic, which is best described by quantum mechanics.
10: Hilbert space and quantum mechanics
We can use the complex plane to represent the circle group, that is the group of all the complex numbers with absolute value 1. These numbers lie on a circle of radius 1 in the complex plane, and may be used to represent angles, phases or times. A full circle is 360 degrees or 2π radians, so that angular frequency ω = 2πf, where f is frequency measured in Hertz. Planck's constant h is often divided by 2π to give ℏ so that we can write E = ℏω. In the previous section we identified repeated action of the logical operator not as the source of energy. The circle group provides a continuous representation of this operation, one full circle of the group being equivalent to two not operations, bringing the system back to its initial state. In other words, not is equivalent to a phase change of π radians. Circle group - Wikipedia
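The equivalence of not to a phase change of π radians can be made concrete in a few lines of Python (a sketch using the standard cmath library; the starting point on the unit circle is chosen arbitrarily):

    # "not" as a phase shift of pi in the circle group: two applications
    # of e^{i*pi} return a unit complex number to its starting point.
    import cmath

    z = cmath.exp(0.25j * cmath.pi)      # an arbitrary point on the unit circle
    not_op = cmath.exp(1j * cmath.pi)    # phase change of pi radians

    once = z * not_op                    # "not-p": the opposite point
    twice = once * not_op                # "not-not-p": back to p
    print(abs(twice - z) < 1e-12)        # True: not-not-p = p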
A Cartesian space is a set of points with numerical addresses. It may have any number of dimensions. Intuitively, we best understand the three dimensional space in which we live. The address of each point is a string of numbers, one number corresponding to each dimension. Such a string is a vector, so there is a vector corresponding to every point in a space. All our engineering, architecture and mapping on Earth is worked out in three dimensional Cartesian space.
Quantum mechanics works in complex Hilbert space. The state of a quantum system is represented by a vector |ψ> in this space. State vectors are normalized to one, so that all the points represented lie on the surface of a multidimensional sphere. Each component of each vector is in effect a circle group whose rate of rotation (ie frequency) is proportional to the energy it represents. The sum of the energies corresponding to each dimension of the Hilbert space is the energy of the whole system. All these frequencies are linearly superposed to give a dynamic multidimensional waveform which represents the overall evolution of the system. This wave is not observable, but the mathematical formalism can be interpreted by the Born rule and the eigenvalue equation to give physically observable results. Hilbert space - Wikipedia
Perhaps the most important feature of quantum theory is linearity. Although vectors are more complex than simple numbers ("scalars") they can be added and subtracted simply by adding and subtracting corresponding components. Any state vector can be represented by a linear superposition of a set of orthogonal basis states |i > with the corresponding set of coefficients Ci :
|ψ> = ∑i Ci |i >.
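For readers who like to see this formalism at work, here is a minimal numpy sketch (the complex coefficients are chosen arbitrarily) showing a state vector as a normalized superposition of orthogonal basis states:

    # A state vector as a normalized superposition: the coefficients Ci
    # are arbitrary; normalization puts the state on the unit sphere.
    import numpy as np

    c = np.array([1 + 1j, 2 - 1j, 0.5j])   # arbitrary complex coefficients Ci
    psi = c / np.linalg.norm(c)             # normalize so <psi|psi> = 1

    print(np.vdot(psi, psi).real)           # 1.0: the state lies on the unit sphere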
We may then ask: what do the base states mean physically? What are the base states of the universe, and how can we represent them? In the current state of the universe, this question is very difficult to answer because the universal system is so complex. Feynman Lectures on Physics: III:8 The Hamiltonian Matrix
All complex Hilbert spaces are formally identical, the only difference being in the number of dimensions, which we imagine to run from 0, a point space, through the transfinite numbers. von Neumann: Mathematical Foundations of Quantum Mechanics
All the information we have about a physical state is encoded in the direction of its representative vector |ψ> in its Hilbert space. The dynamics of a quantum system is represented by a partial differential equation which models the transformations of state vectors. These functions are continuous, like the real wave functions we use to describe vibrating strings and other forms of wave motion.
The hypothetical continuous complex evolution of the quantum wave cannot be observed so that we can only guess at it from the particles that emerge in the process of observation or measurement. Nevertheless the formalism predicts accurate results so our faith in it is strong. Mathematical formulation of quantum mechanics - Wikipedia, Wojciech Hubert Zurek: Quantum origin of quantum jumps: breaking of unitary symmetry induced by information transfer and the transition from quantum to classical
Quantum mechanical interactions between two systems, each of which is described by a Hilbert space of a certain dimension, take place in the tensor product space of the two interacting systems. In other words, interactions have the effect of increasing the complexity of the system. Interaction or observation is thus a source of increased complexity, that is of creation. Tensor product of Hilbert spaces - Wikipedia
As Zurek explains, the two interacting systems must share a common set of orthogonal basis states for the interaction to proceed and information to be shared between the two systems.
An observation is modelled by the interaction of a measurement operator or 'observable' on a quantum system. Mathematically an observable is a matrix with one or more eigenvectors, that is vectors whose direction is not changed by the operation of the matrix. These eigenvectors are the fixed points of the measurement, which yields eigenvalues corresponding to the eigenvectors. The problem is that while a quantum state is understood to be the superposition of a number of eigenvectors, only one of these is revealed at each observation. The situation is analogous to the roll of a die. Each face of the die is a discrete observable, but the face that we actually observe appears at random.
We imagine an observation as a message passed between two subsystems of the Universe. The security of this message, and the stability of the Universe, are guaranteed by the quantization predicted by Shannon's mathematical theory of communication. This theory, like other mathematical theories, appears to be 'built in' to the Universe, an appearance consistent with mathematical fixed point theory.
Physical motions are described by the energy equation iℏ dψ/dt = Hψ, where H is a Hamiltonian or energy operator which encodes how the elements of the vector ψ transform with time. In quantum mechanics, frequency is directly related to energy by the relationship f = E/h, where h is Planck's constant. Quantum mechanics yields results through two further equations, the eigenvalue equation and the Born rule. These equations serve to pick observable results out of the infinity of possibilities offered by the energy equation. Eigenvalues and eigenvectors - Wikipedia, Born rule - Wikipedia
The eigenvalue equation defines the fixed points of the energy equation. In the terminology above, we read Hψ = kψ, where we now think of H as a measurement operator or observable. This equation picks out the scalar values k which correspond to eigenfunctions, that is the fixed algorithms, of the operator H. An observer using the operator H will see only those eigenvalues k which correspond to eigenfunctions of the operator H.
A classical experimenter expects to see just one result (within errors) for each experimental setup. Quantum mechanics, on the other hand, can yield many different outputs from the same input.
There is no way to predict the exact outcome from a given input, but the Born Rule predicts the probability that we will observe a particular eigenvalue k. Experimental physicists must repeat identical initial conditions many times to estimate the probabilities of various possible outcomes.
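The following Python sketch (using numpy, with an arbitrarily chosen Hermitian matrix and state) puts this machinery together: the eigenvalue equation yields the possible results, the Born rule yields their probabilities, and repeated sampling imitates the experimenter's repeated trials:

    # Measurement statistics: eigenvalues of a Hermitian observable H are
    # the possible results; the Born rule |<k|psi>|^2 gives their probabilities.
    import numpy as np

    H = np.array([[1.0, 0.5], [0.5, -1.0]])           # an arbitrary Hermitian observable
    eigenvalues, eigenvectors = np.linalg.eigh(H)

    psi = np.array([0.8, 0.6])                        # an arbitrary normalized state
    probs = np.abs(eigenvectors.conj().T @ psi) ** 2  # Born rule
    print(eigenvalues, probs, probs.sum())            # probabilities sum to 1

    # Simulate many identical experiments, as the text describes:
    samples = np.random.choice(eigenvalues, size=10000, p=probs)
    print(np.mean(samples == eigenvalues[0]))         # estimates probs[0]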
11: Why is the Universe quantized?
Before we go on we must deal with an issue of great importance, quantization. Quantum mechanics and quantum field theory have been developed to explain the behaviour of the particles that we observe in the universe. These theories model both the nature of the particles and the frequencies and outcomes of their interactions. Quantum mechanics reached a definitive expression in the work of von Neumann and Dirac in the late 1920s, but the union of quantum mechanics and special relativity which yielded quantum field theory took another twenty years or so to emerge. Dirac: The Principles of Quantum Mechanics
The principal conceptual difficulty in quantum mechanics lies at the interface between the continuous mathematics used to describe the hypothetical processes underlying physical observations and the discrete or particulate nature of what we actually observe, ranging from fundamental particles through planets to galaxies and beyond. This is known as the quantum mechanical measurement problem, often described as the 'wave function collapse'. Wave function collapse - Wikipedia
Continuous mathematics is in the first instance a human invention which was perfected in the nineteenth century. It has been generally assumed that the universe is continuous, and so it seems legitimate to apply continuous mathematics to the universe. But is the universe really continuous in the mathematical sense? Or are the dynamics of the universe worked out in terms of a discrete atom of action, the quantum of action?
Since Newton's time, the mathematical heart of physics has revolved around differential equations. The basic idea is that if we can describe the local behaviour of a system by a differential equation, we can then extrapolate to its global behaviour by integrating that equation. In Newton's case, the differential equation of interest is a second order differential equation relating force to position (x) and time (t):
F = m d²x/dt²
By solving this equation using the force predicted by the universal law of gravitation, Newton was able to compute the orbits of the known planets and moons of the solar system. Differential equation - Wikipedia, Equations of motion - Wikipedia, Newton's law of universal gravitation - Wikipedia
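A minimal numerical sketch of this procedure, in Python with illustrative initial conditions and units chosen so that GM = 1, integrates the equation of motion step by step to recover an approximately circular orbit:

    # Extrapolating local behaviour to global behaviour: integrate
    # F = m d²x/dt² with an inverse-square force by simple
    # (semi-implicit Euler) time stepping. Units chosen so GM = 1.
    import numpy as np

    GM = 1.0
    x = np.array([1.0, 0.0])        # initial position
    v = np.array([0.0, 1.0])        # initial velocity for a circular orbit
    dt = 0.001

    for _ in range(10000):
        r = np.linalg.norm(x)
        a = -GM * x / r**3          # acceleration from the inverse-square law
        v += a * dt                 # update velocity, then position
        x += v * dt

    print(np.linalg.norm(x))        # stays near 1: an approximately circular orbit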
The invention of differential and integral calculus placed new emphasis on the mathematical problems of infinity and continuity that were first raised in ancient times by Zeno and his contemporaries. Zeno's paradoxes - Wikipedia
The standard definition of the derivative of a function y = f(x) with respect to the independent variable x is:
dy/dx = lim(h → 0) [f(x + h) - f(x)] / h
Differential calculus - Wikipedia
The important point of this definition is contained in the notion of limit. We are trying to find the derivative of the function at a given point x where h in the above equation is zero, but we cannot let h actually reach zero or we would be dividing by zero and the quotient would be undefined. The Archimedean property of the real numbers suggests that there is no infinitely small element of the sequence of real numbers h, so that h always stays a "safe" distance away from 0. This suggests that the derivative does not apply exactly at x but over the small interval between x and x + h. In mathematics this interval may be as small as we like, but in the real world of physics, its minimum size may be related to Planck's constant, ℏ, the physical equivalent of an "infinitesimal". The mathematical theory of communication may explain why this is so. Archimedean property - Wikipedia, Infinitesimal - Wikipedia
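We can watch this limiting process numerically. In the Python sketch below (with f(x) = x² chosen purely for illustration), the difference quotient approaches the true derivative 2 as h shrinks, while h itself never reaches zero:

    # The limit definition in action: the difference quotient for
    # f(x) = x² at x = 1 tends to the true derivative 2 as h shrinks.
    def f(x):
        return x * x

    x = 1.0
    for h in [1e-1, 1e-3, 1e-6, 1e-9]:
        print(h, (f(x + h) - f(x)) / h)   # approaches 2.0, h never reaches 0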
As a matter of fact, everything that we observe is quantized, beginning at the microscopic level studied by quantum physics. We see only discrete particles and quanta of action. A stable network, which we may take the universe to be, requires error free communication. The mathematical theory of communication shows that we can defeat error by encoding our messages in packets that are a long way apart in message space, so reducing the possibility of confusion, the source of error. This is in effect quantization.
From a mathematical point of view, a message is an ordered set of symbols. In practical networks, such messages are transmitted serially over physical channels. The purpose of error control technology is to make certain that the receiver receives the same string as the transmitter sends. This can be checked by the receiver sending the message back to the transmitter.
The mathematical theory of communication developed by Shannon shows that by encoding messages into discrete packets, we can maximize the distance between different signals in signal space, and so minimize the probability of their confusion. This theory enables us to send gigabytes of information error free over noisy channels. In our own bodies quantum processes enable trillions of cells each comprising trillions of atoms and molecules to function as a stable system for something approaching 100 years. Claude E Shannon: A Mathematical Theory of Communication, Khinchin: Mathematical Foundations of Information Theory
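Shannon's actual codes are far more sophisticated, but the principle of separating signals to defeat noise can be seen in a toy repetition code (a Python sketch, not Shannon's construction): the two codewords 000 and 111 are far apart in signal space, so a single flipped bit cannot turn one into the other.

    # Error control by separation in signal space: a threefold repetition
    # code corrects any single bit flip by majority vote.
    def encode(bit):
        return [bit] * 3                       # 0 -> 000, 1 -> 111

    def decode(received):
        return 1 if sum(received) >= 2 else 0  # majority vote

    codeword = encode(1)                       # [1, 1, 1]
    codeword[0] ^= 1                           # noise flips one bit
    print(decode(codeword))                    # 1: the error is corrected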
Shannon represented his theory using two real function spaces, one representing messages, the other the signals used to transmit the messages, and modelled the coding process as the mapping of one space onto the other. Using this representation, he determined the maximum rate of transmission of binary digits over a communication system when the signal is perturbed by various types of noise. Claude Shannon: Communication in the Presence of Noise
Shannon's theory is implemented by encoding messages into a noise resistant form at the transmitter and decoding the transmitted signal to recover the original message at the receiver. Encoding and decoding were initially performed by analogue electronic systems, as in frequency modulated wireless transmission. This work is now done by computers, and is bounded by computability. Frequency modulation - Wikipedia, Codec - Wikipedia
A network is essentially a system of processors communicating through a set of memories. A message is in effect a memory carried through space and time. To communicate, one computer will write something in the memory of another. The recipient will read that memory to receive the message. Some memory may be isolated to a single stand-alone machine. Other memories have physical network connections which enable them to read from and write to each other over great distances. Even a single computer is a network. The only real difference between a computer and a network is that a network may have many processors operating at different frequencies, whereas all the operations in a computer are synchronised by a single clock.

A system that transmits without errors at the limiting rate C predicted by Shannon's theorems is called an ideal system. Some features of an ideal system are visible in quantum mechanics:

1. To avoid error there must be no overlap between signals representing different messages. They must, in other words, be orthogonal. This is also the case with the eigenfunctions of a quantum observable.
2. Such ‘basis signals’ may be chosen at random in the signal space, provided only that they are orthogonal. The same message may be encoded into any satisfactory basis provided that the transformations used by the transmitter and receiver to encode the message into the signal and decode the signal back to the message are inverses of one another. Like the codecs used in communication, quantum processes are reversible.
3. The signals transmitted by an ideal system are indistinguishable from noise. This is because their entropy is at a maximum. They cannot be compressed by an algorithm. The fact that a set of physical observations looks like a random sequence is not therefore evidence for meaninglessness. Until the algorithms used to encode and decode such a sequence are known, nothing can be said about its significance. Entropy (information theory) - Wikipedia
4. Only in the simplest cases are the mappings used to encode and decode messages linear and topological. For practical purposes, however, they must all be computable with available machines. The limit on computability found by Turing also places a limit on coding.
5. As a system approaches the ideal, the length of the transmitted packets, the delay at the transmitter while it takes in a chunk of message for encoding, and the corresponding delay at the receiver, increase indefinitely.
The most significant difference between classical physics and quantum theory is, as the name suggests, quantization. Every discrete operation in the Universe comprises one or more sub-operations which are also discrete. Since there is a smallest discrete operation, the quantum or atom of action, it becomes possible to establish correspondences between the discrete symbols of mathematics and discrete events. Various functional relationships between symbols have been found to model the behaviour of the real world. This is logic or arithmetic, which meet in the binary number system which may represent either numbers or truth values. Quantum - Wikipedia, Truth value - Wikipedia
Here we come to the basic conundrum of quantum mechanics, one that baffled Einstein. He remained convinced till the end of his life that a true model of the world would be causal, giving definite outputs for definite inputs. Quantum mechanics does not do this to the same degree as classical mechanics. Quantum mechanics behaves more like a communication source. A source A as imagined by Shannon has an alphabet of letters ai each of which is emitted with probability pi. The letters emitted by a quantum source are called eigenvalues and they are emitted with characteristic probabilities just like the letters of a communication source. These probabilities, like the probabilities of the emission of letters of a communication source, are normalized to 1. Although quantum mechanical eigenvalues appear to be determined to a very high degree of precision, like classical physical values, their occurrence is probabilistic. Classical physics, as Einstein understood it, is completely deterministic, not just in the value of its parameters, but also in the sequence in which these values occur.
Einstein maintained that the classical special and general theories of relativity were causal theories because interactions could only occur between elements of the universe when the interval between them was zero. This claim was based on the nature of infinitesimals in the definition of the interval ds in the equation ds² = η_μν dx^μ dx^ν in special relativity, the metric η_μν being replaced by g_μν in general relativity. As discussed above, the Archimedean property may not necessarily imply an interval of zero. Here we prefer to think of causality in terms of logical rather than geometric connection or continuity.
Quantum mechanics describes a communication network between two or more sources. Eigenvalues are the content of the messages transmitted on this network, which we can observe with our own senses or suitable machinery. The eigenfunctions are the algorithms used by the system to encode and decode these messages. This coding is necessary to prevent error in the network, as we have noticed. The particles we observe, defined by the eigenvalue equation, are messages on this network. The frequency spectrum of these messages is computed using the Born rule.

From this point of view a quantum system is like a source or a computer in a computer network. Each source in a network has an alphabet of symbols, corresponding to quantum mechanical eigenvalues. This essay is a source encoded in the alphabet of English text. Some of these symbols, like the letters space, e, t, o, i, n, are more frequent than others like q, k, x, y, z, a situation similar to the different frequencies of eigenvalues predicted by the Born rule. Although from a statistical point of view the occurrence of letters in this essay may appear random, to a reader of English who knows how to decode them they make sense. We may suspect similar meaning in the quantum mechanical messages shared by elements of the universe.
12: Spin and space-time: boson and fermion
Developments in science often give new meaning to old terms. The two ancient words of most interest here are potential (Greek dynamis, Latin potentia) and act (Greek energeia or entelecheia, Latin actus) key terms in the philosophy and theology of Aristotle and Aquinas. The history of science is punctuated by paradigm changes, rather as the history of politics is punctuated by wars and revolutions. Kuhn: The Structure of Scientific Revolutions
The key axiom of Aristotle's physics is that no potential can actualize itself. This axiom is the foundation for both Aristotle's and Aquinas's proofs for the existence of God, as we saw in section 1. This axiom does not hold in modern physics. Here potential energy and actual or kinetic energy are exactly equivalent. We see this in periodic systems such as pendulums. At the top of its swing the pendulum is momentarily at rest, and all its energy is held as potential energy. At the bottom of its swing the pendulum has maximum velocity and all of its energy is kinetic energy. If there were no friction a pendulum would swing forever, converting potential energy into kinetic energy and back again. The key axiom here is the principle of conservation of energy, the sum of the potential and kinetic energy remains constant as time goes by. Kinetic energy - Wikipedia, Potential energy - Wikipedia
The ancients, like Parmenides and his successors, thought that there must be something unchanging to give meaning to the moving world. The conservation of energy serves as an eternal foundation for modern physics. It is a symmetry, something that stays the same while other things change. A wheel is symmetrical. It stays the same as it rotates. Symmetries are the foundations of modern physics and the conservation of energy is the second most fundamental.
The most fundamental symmetry of the universe is the conservation of action, a term which has been given a new mathematical definition in modern physics. An action is any event, of any size. Its definition is logical rather than physical. An event or action changes something, p, into something else, not-p. Its measure is in effect one bit of information. The physical size of this bit may be any step from being to not being, from the birth of a galaxy to the emission of a photon.
How does this definition fit the modern quantum of action, which has a very precisely defined physical measure represented by Planck's constant, h? Specified in terms of conventional units of energy and time, h is a very small number, about 6.6 × 10⁻³⁴ joule seconds. How can we say it measures the formation of a galaxy, for instance? We cannot. What we can say is that it is the basic unit for measuring the Universe, and it dates from a time when the Universe was very small. The actual formation of something as large as a galaxy requires a vast number of these elemental actions whose physical size was fixed forever near the moment of creation.
In physics, the Planck constant is a measure of angular momentum, which has the macroscopic dimensions of energy × time, that is ML²T⁻¹. This is the change of angular momentum which occurs, for instance, when an electron changes its state from "spin up" to "spin down". Down at this primitive level of the universe, the only meaning we attribute to up and down is that up = not-down. An electron is permanently "spinning" with angular momentum ½ℏ, so that when its spin is reversed, the change is ½ℏ + ½ℏ = ℏ.
It is easy enough to imagine the angular momentum of a spinning ball or top, but it is not easy to imagine a spinning electron, particularly since it is conventionally held to be a point particle. It is something of a mystery how it carries its angular momentum, but it surely does. We must just follow our mathematical noses and see that the physics of spin is logically consistent even though it is hard to picture.
Electrons also have a fixed mass and a fixed electric charge, and emit and absorb photons when they change their state of motion. Electrons and photons are examples of the two classes into which all fundamental particles fall: bosons, which have integral spin, and fermions, which have half integral spin.
One of the most important discoveries in quantum field theory is the spin-statistics theorem, which explains the relationship between particle spin and the structure of space-time. We find that any number of identical bosons may occupy the same spatial state but only one fermion can occupy each state, a situation known as the Pauli exclusion principle. Spin-statistics theorem - Wikipedia, Streater & Wightman: PCT, Spin, Statistics and All That
Einstein's theories of relativity explain the structure of space-time. The special theory deals with flat, fixed, inertial space-time. The general theory deals with the curved dynamic space-time from which the universe is built.
Traditional efforts to prove the spin-statistics theorem take space-time and the velocity of light as given and use these properties to explain the behaviour of fermions and bosons. Here I wish to explore the alternative, that the behaviour of fermions and bosons explains the structure of space-time and the velocity of light. The properties of space-time, and the velocity of light which couples space to time, are consequences of quantum theory rather than its causes. In other words, quantum theory is prior to special relativity. The first step toward this conclusion is to interpret quantum field theory as the description of a computer network.
13: Entanglement: QFT describes a computer network
We work here on the assumption that quantum mechanics describes the deepest inner workings of the universe. We have guessed that the first logical step in the creation of the universe within God is the emergence of energy and time through the bifurcation of action into potential and kinetic energy. Potential energy motivates change; kinetic energy executes it once a consistent path for action becomes available. The analogue in writing is finding a form of words to embody the idea pressing for expression, which creates a potential in my mind. Energy is the foundation of both quantum mechanics and general relativity, the two most fundamental theories of the universe.
Processes in corresponding layers (‘peers’) of two nodes in a network may communicate if they share a suitable protocol. All such communication uses the services of all layers between the peers in a particular layer and the lowest physical layer which, by hypothesis, is equivalent to the traditional God. These services are generally invisible or transparent to the peers unless they fail. Thus two people in conversation are generally unaware of the huge psychological, physiological and physical complexity of the nerve and muscle systems that make their communication possible. Here at the bottom of the universal network just one step away from the absolute simplicity of the initial singularity, we expect communication protocols to be very simple, little more in fact than an exchange of formless energy, analogous to a power network.
The simplest expression of the mathematical formalism of quantum mechanics is the Planck-Einstein equation, E = hf which relates energy to time through frequency, the constant of proportionality being Planck's constant, h. We may simplify even further by setting h = 1, to give E = f. Moving in the opposite direction, we have the energy equation, a simplified version of the Schrödinger equation:
iℏ ∂/∂t |ψ> = H|ψ>
which tells us that the time rate of change of the quantum state |ψ> is proportional to that state multiplied by the energy operator H. In this expression the state vector |ψ> may have any dimension and H is a square matrix of corresponding dimension. Unless some boundary conditions constrain this equation, its full representation requires a space of infinite dimension. Because the equation is expressed in complex numbers, the general solution is a set of periodic complex exponentials corresponding to waves of different frequencies (and energies), so this equation is also called a wave equation. Each of these solutions corresponds to an instance of the Planck-Einstein relationship for one of an infinite range of frequencies. Schrödinger equation - Wikipedia
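A small numpy/scipy sketch (with ℏ set to 1 and an arbitrary two-dimensional Hermitian H) shows the character of these solutions: the state evolves unitarily as |ψ(t)> = exp(−iHt)|ψ(0)>, so its norm, the total probability, is conserved:

    # Unitary evolution under the energy equation for a two-state system.
    # hbar = 1; H is an arbitrary Hermitian matrix chosen for illustration.
    import numpy as np
    from scipy.linalg import expm

    H = np.array([[1.0, 0.3], [0.3, -1.0]])    # Hermitian energy operator
    psi0 = np.array([1.0, 0.0], dtype=complex)

    for t in [0.0, 0.5, 1.0, 2.0]:
        psi_t = expm(-1j * H * t) @ psi0       # |psi(t)> = exp(-iHt)|psi(0)>
        print(t, np.linalg.norm(psi_t))        # norm stays 1: probability conserved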
The mathematical formalism of quantum mechanics may be interpreted as the program of a universal computer network, also known in physics as an equation of motion. It is a form embodied in the world which controls its motion. The mathematics yields two sets of outputs, one through the eigenvalue equation, which determines the nature of events, and the other through the Born rule which determines the frequencies of these events.
The Born rule computes the probable coupling between two events by superposing the amplitudes of the two events and squaring the result. This definite procedure yields a definite result, but it is only a definite probability. Given the geometry of a fair coin, we can reasonably expect approximately equal numbers of heads and tails in a long sequence of tosses, but past results have no bearing on future results, so on every throw the probabilities of heads and tails remain equal. We can imagine a quantum "throw" as tossing a coin with an infinity of differently weighted sides. The sum of the probabilities of the outcomes is 1.
We can hear superposition in music, picking out to varying degrees the contribution of each individual instrument in a large orchestra. This is a "real" superposition. We cannot, however, observe the solutions to the quantum wave equation, because they exist in the complex domain. Instead we attribute a complex "probability amplitude" ψ to each solution of the wave equation, and the theory tells us that the probability of the event corresponding to this amplitude is the absolute square of the amplitude, ie probability P = |ψ|².
From a communication point of view, quantum mechanics may be seen as modelling the flow of information in a network. Communication theory defines a source A by the set ai of the symbols which it can emit and the frequency pi of each of these symbols. These frequencies are normalized by the requirement that ∑i pi = 1. The eigenvalue equation yields the actual values of the symbols ai emitted by a quantum source, and the Born rule yields the probability pi of emission of each of the eigenvalues corresponding to the eigenvectors of the measurement operator. From this point of view, particles are the messages input to and output from a quantum event.
The similarity in the statistics of quantum and communication sources is reflected in the processes that generate their output. The implementation of Shannon's ideas requires the transmitter to encode messages in a noiseproof form and the receiver to decode the signals received. Engineers first achieved this coding and decoding using analogue methods, but the full power of error control requires digital computation. The layering of engineered network software transforms the human interface to the physical interface in a series of steps. All of these transformations are performed by digital computers. The functions used must be invertible (injective) and computable with the machinery available. The power of Shannon's method lies in encoding messages into large blocks that are far apart in message space, so that the probability of confusion is minimised. We guess that the quantization of the messages emitted by quantum systems has a similar role in ensuring error free communication in the universe.
Formal methods have proven very powerful in mathematics, logic and the theory of computation, but formalism can do nothing by itself. This is its strength, since it is not intrinsically limited by a need for energy or material embodiment. On the other hand, if it is to be of any use, it must be implemented in some way. The two principal implementations, apart from the quantum world, are human minds and computers. Since computers are a relatively simple technological artefact compared to a human brain, this section is confined to developing an analogy between the quantum mechanical description of the physical world and the work of a generic digital computer network.
A computer is a mechanical implementation of Boolean algebra. It is a network of 'gates', physical elements that can represent and execute the operations of Boolean algebra. This algebra is quite simple. It is a set of elements, functions and axioms. The elements have two states which we may represent by '0' and '1', 'true' and 'false', 'high' and 'low' or any other duality. There are three functions, often written 'and', 'or' and 'not', which are defined by truth tables. There are four axioms. Boolean algebra is closed, meaning that boolean operations on boolean variables always lead to boolean results. It is also commutative and associative, like ordinary algebra, and distributive, with 'and' taking precedence over 'or'. Such a network can (in principle) do anything that a Turing machine can do. Boolean algebra - Wikipedia
A practical computer like this laptop comprises inputs and outputs, memory and processors. I am the user, controlling the machine with a keyboard, trackpad and screen. The machine is also connected to various networks and portable memories. We may divide the information in a computer into two principal classes, program and data. In a digital computer, both these classes comprise physical binary representations of information, but their roles are different. The program determines how the processor transforms the data from input to output. The program is the factory, the data the material. Computer architecture - Wikipedia
All information is represented physically. The binary representation of data requires two physical states which must be well distinguished to avoid confusion and error. Binary logic also has two states, true and false, which also require physical representation. The symmetry between logic and data makes it possible to design universal computers which may be loaded with different algorithms to apply different processes to data. In electronic machines, states are usually represented by two voltages, high and low which are mapped to truth states true and false or to the data states 0 and 1.
The computer operates by changing the physical states which represent the information being processed. At various points in the system states representing 0s and 1s are being created and annihilated. These state transitions are controlled by logical pulses from a clock, which is a two state device regularly cycling from 1 to 0 and broadcasting its state through the system to synchronize all the other operations.
We understand a computer as oscillating between motion and stasis, rather like a pendulum. At a certain moment, all the physical representatives in the machine are static, formally representing the momentary state of the machine. At a signal from the clock, a cycle of change begins and the machine begins to move along a logically determined trajectory from the 'before' state toward the 'after' state.
After a short period, all the electronic states in the machine settle down to their next static values and await the next signal from the clock to take the computation another step forward. All the dynamic state transitions are in effect hidden between the clock pulses. In this way the machine executes a form of time division multiplexing, stepping between static and dynamic states to execute its computation. Time-division multiplexing - Wikipedia
Although quantum mechanics is used extensively in designing the physical logic of a computer, the computer itself is essentially a classical machine whose operations can be observed by classical methods. In the 1980s Richard Feynman and others realized, however, that one could devise quantum mechanical operators that performed logical functions. Since that time the discipline of quantum computing and quantum information has grown enormously and is beginning to move from academia to engineering. There is a difficulty, however. Although the quantum formalism is believed to work perfectly, like a deterministic analogue computer which can, in theory, perform tasks which cannot be performed by a classical computer, we can only obtain the results specified by the Born rule, so that most of the information potentially carried by the continuous quantum formalism is lost in the quantized observations that we can make. Ashley Montanaro: The past, present and future history of quantum computing, Nielsen & Chuang: Quantum Computation and Quantum Information
The principal function of a network is copying. When I speak into my phone, the analogue sound signal is digitized and a copy of it transmitted to my listener. Unless I have set the phone to record the conversation, the signals I am sending and receiving are deleted as soon as they are sent or received.
Now we may imagine a quantum system which comprises two fermions in a "singlet" state. This means that one has spin up and the other spin down in whatever frame we choose to measure their spins. Between them they constitute a single state which is said to be "entangled". Their state is shared, so that their individual states cannot be described independently. Singlet - Wikipedia
Now we can imagine one of the electrons to be transported some distance away and the other retained as, in effect, a recording of the other electron, differing only in that it has the opposite spin. We now find that no matter how we measure the spins of the two electrons, one has spin up and the other spin down. Their states cannot be changed independently no matter how far they are separated. This situation occurs even when the electrons are so far apart and the measurements so close in time that there is no possibility of one communicating with the other, even at the velocity of light. Einstein called this phenomenon spooky action at a distance. It is predicted by the quantum formalism, and it has been experimentally verified. Quantum entanglement - Wikipedia, Juan Yin et al: Bounding the speed of 'spooky action at a distance'
This observation suggests two conclusions: first, that quantum states exist prior to space-time; and second, that all the particles in the universe still share information about one another dating from the epoch when many particles were created from few by interactions with one another. We cannot observe this early phase of the life of the universe, but rely on the results of physics experiments to guess what happened. Peacock: Cosmological Physics, Particle physics in cosmology - Wikipedia
We may see an echo of this history in the fact that we cannot precisely compute the interaction between any two particles without taking into account all the other particle reactions that may contribute to the reaction in question. Quantum field theory computations are represented by Feynman diagrams, which are graphic representations of the networks of interaction that contribute to any particular event. Precise results in quantum electrodynamics, for instance, require that the contributions of a large number of possible interactions of ever decreasing probability be taken into account, and it may be that absolute precision requires taking all the possible interactions in the universe into account. This idea was first suggested in the classical context by Ernst Mach, and it had a strong influence on Einstein. Feynman diagram - Wikipedia, Mach's principle - Wikipedia
14: The transfinite computer network
Cantor's transfinite numbers initially caused a certain amount of controversy. Cantor was postulating an endless hierarchy of ever greater transfinite numbers. Some theologians thought that the attribute "actually infinite" could only be applied to God. Opposition also came from mathematicians and philosophers. Philosophers had held from the time of Aristotle that the actual infinite could not exist. Controversy over Cantor's theory - Wikipedia, Joseph W. Dauben: Georg Cantor and Pope Leo XIII: Mathematics, Theology and the Infinite
Cantor energetically defended his ideas. The basis for his defence was what we now call formalism. He argued that his work was a logically consistent formal development of existing mathematical theory. Years later David Hilbert made this position explicit, and it is now, together with intuitionism and constructivism, mainstream in mathematics. Mathematics, Hilbert claimed, is in effect a game played with symbols. We can make the symbols mean whatever we like, and the only rule is that the structures we construct are logically consistent. This is the same constraint that theologians place on God, and, according to the hypothesis developed here, it is also the only constraint on the universe, which, like God, is totally self sufficient and subject to no outside control. Formalism (mathematics) - Wikipedia, Intuitionism - Wikipedia, Constructivism (philosophy of mathematics) - Wikipedia, Richard Zach (Stanford Encyclopedia of Philosophy)
Formalism applies to pure mathematics. When we come to practical applications we use mathematics to represent situations in the world by establishing correspondences between mathematical structures and features of the world. So we can apply the natural numbers to counting, and more complex mathematical constructs like vectors and calculus to the study of electromagnetic fields and the motion of particles in space-time. One of the most important applications of mathematics is to the study of mathematics itself, a subject known as the philosophy of mathematics, or "metamathematics". Metamathematics - Wikipedia
Here we concentrate on two major results due to Kurt Gödel and Alan Turing which originated in Hilbert's program. Hilbert boiled the formal program down to three key questions: Is mathematics consistent? Is it complete? Is it computable? Gödel showed that if mathematics is consistent, it is not complete. There are true mathematical propositions that cannot be proved. Turing showed that if mathematics is consistent, there are mathematical problems that can be posed but which cannot be solved by logical processes. Hilbert originally thought that every mathematical problem could be solved. Gödel and Turing showed that he was wrong. There are insoluble problems, and the deterministic heart of mathematics is surrounded by uncertainty. Gödel's completeness theorem - Wikipedia, Gödel's incompleteness theorems - Wikipedia, Turing's proof - Wikipedia
Turing's machine, the formal archetype of a computer, is a mechanization of the process of proof. In the normal course of mathematics, proofs are devised in the minds of mathematicians, but in order to be communicated in the mathematical literature, a proof needs to be written out as a series of logically connected propositions forming a logical continuum from a set of axioms or assumptions to a conclusion. Turing's machine is considered able to prove anything provable, and Turing's proof of incomputability amounts to showing that there are things such a machine cannot do. It can, nevertheless, do a lot. There are as many different proofs (and corresponding computers) as there are natural numbers, and these machines are the foundation of the enormous computing industry which has grown from the work of Turing and other mathematicians and logicians.
The transfinite numbers are built on the natural numbers. We can create a transfinite universe (a Cantor universe) by exploiting the correspondence between the natural numbers and computers. Such computers may range from the null logical machine that does nothing, an action which changes nothing, to machines at the limits of computational complexity. We can imagine building such machines by stringing simple computers together so that the output of one becomes the input of the next. Such a string is simply a more complex computer. We construct computer software by assembling simple operations (subroutines) to create more complex codes, just as we make essays like this by stringing words together, each of which adds an atom of meaning to the overall product.
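The idea that a string of simple computers is itself a more complex computer is just function composition, as the Python sketch below illustrates (the individual "machines" here are arbitrary toy functions):

    # Stringing simple computers together: the output of one machine
    # becomes the input of the next, and the composite string is itself
    # just another, more complex, computer.
    from functools import reduce

    def compose(*machines):
        """Chain simple machines into one more complex machine."""
        return lambda x: reduce(lambda acc, m: m(acc), machines, x)

    double = lambda n: 2 * n
    increment = lambda n: n + 1
    square = lambda n: n * n

    composite = compose(double, increment, square)   # a new, more complex machine
    print(composite(3))                              # square(increment(double(3))) = 49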
We have generated the transfinite numbers by permuting the natural numbers. We generate the transfinite computer network by permuting computers corresponding to the natural numbers.
We assume a close coupling between the physical and logical worlds. If a physical system is consistent it will attract energy. Here we interpret consistency as a potential. In the presence of a potential, if something is not inhibited, it will happen. If the floor holding me up fails, the gravitational potential will make me fall.
This coupling leads us to expect local consistency to be the sole constraint on reality. Consistency is built into the mathematics of general relativity, which provides a logical model of gravitation. Consistency in our other major theory of the universe, quantum mechanics, is not so clear, since we still do not have a clear answer to the measurement problem, but we are encouraged by the fact that applied quantum mechanics works quite well. Measurement in quantum mechanics - Wikipedia
The computers in the transfinite network serve as an infinite set of finite patches covering the transfinite sets of formal states in the network. We proceed by analogy with Einstein's step from inertial to accelerated motion. The determinism of computation is limited by the limits to computability, introducing uncertainty and variation and providing a role for selection, ie evolution.
Every concrete computer and computer network is built on a physical layer made of copper, silicon, glass and other materials which serve to represent information and move it about. The physical layer implements Landauer’s hypothesis that all information is represented physically. Rolf Landauer
In practical networks, the first layer after the physical layer is usually concerned with error correction, so that noisy physical channels are seen as noise free and deterministic by subsequent layers of the system.
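A sketch of the idea in Python, using the simplest conceivable scheme, a three-fold repetition code with majority voting (practical networks use far more powerful codes, and the noise level here is an arbitrary illustrative assumption):

import random

def encode(bits):
    # physical layer: repeat each bit three times
    return [b for bit in bits for b in (bit, bit, bit)]

def noisy_channel(bits, p=0.05):
    # flip each bit with probability p
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    # error-correction layer: majority vote over each triple
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = decode(noisy_channel(encode(message)))
print(message == received)   # almost always True: higher layers see a clean channel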
Once errors are eliminated, an uninterrupted computation proceeds formally and deterministically according to the laws of propositional calculus. As such it imitates a formal Turing machine and may be considered to be in inertial motion, subject to no forces (messages).
All the operations of propositional calculus can be modelled using the binary Sheffer stroke or nand operation. We can thus imagine building up a layered computer network capable of any computable transformation, a universal computer, using a suitable array of nand gates. Whitehead & Russell: Principia Mathematica, Sheffer stroke - Wikipedia
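The universality of nand is easy to exhibit by machine. A small Python sketch using the standard constructions (the function names are my own; the truth tables are not):

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# print the full truth tables built from nand alone
for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', not_(a), and_(a, b), or_(a, b), xor_(a, b))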
An ‘atomic’ communication is represented by the transmission of a single packet from one source to another. Practical point to point communication networks connect many sources, all of which are assigned addresses so that addressed packets may be steered to their proper recipients. This ‘post office’ work is implemented by further network layers.
Logic and symbolic computation are discrete processes, and the steps in a logical argument can be numbered with the natural numbers. Much of our mathematics is concerned with continuous quantities represented by real or complex numbers. The proofs of propositions about continuous quantities are nevertheless logical continua, that is, discrete strings of symbols. This leads to the general conclusion that deterministic processes in the world are underlain by deterministic logical processes whose steps are measured by quanta of action, and that the notion of a real continuum may not be actually realized.
Since all observable realities are quantized in units of Planck's constant, we must accept that the application of continuous mathematics to discrete processes could lead to trouble. The multiplication of real numbers is a process well understood in mathematical analysis, but does it ever occur in reality? Obviously the real numbers are very valuable in physics. Is this because when we deal with large numbers of events the law of large numbers serves to convert a quantized curve into a continuous curve, even though the underlying processes remain quantized?
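This suspicion is easy to test by simulation. In the Python sketch below (all parameters are illustrative assumptions), we count discrete random events in ever larger batches; the relative fluctuation shrinks roughly as 1/√n, so the underlying quantized process looks smooth and continuous at scale:

import random

def relative_fluctuation(n, p=0.5, trials=200):
    # count successes among n quantized random events, repeated many times
    counts = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return (var ** 0.5) / mean

for n in (10, 100, 1000, 10000):
    print(n, relative_fluctuation(n))   # shrinks roughly as 1/sqrt(n)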
Logical continuity does not require an ether or a vacuum; it exists independently of any continuous substrate and its whole reality can be expressed in the truth tables for the logical connectives. This is a very hard idea to absorb because it looks like action at a distance, but there is no distance in logic. The only metric we have for computation is a count of operations multiplied by the complexity of each operation. We judge the power of practical computers by their rate of operation.
The computable transformations in the transfinite network stand out as a set of fixed points, each described by a fixed algorithm which can be executed by a computer. The layers of the network are all self similar in that any deterministic and repeatable processes to be found within them must be the product of some computation. They all have this limitation in common, and it serves as a symmetry to lead our understanding of the world. Although the power of computers is limited by Turing's theorem, there is no practical limit on the number of computers that can be connected into a network. New computers can be connected as long as there are symbolic addresses available.
The operation of a computer effects a transformation from an input state to an output state. We know that computable transformations are deterministic and guess that such transformations are the foundation of predictable events in the Universe. On the other hand the 'principle of sufficient reason' leads us to suspect that incompleteness and incomputability are the sources of randomness in the Universe. Yitzhak Y. Melamed and Martin Lin: Principle of Sufficient Reason (Stanford Encyclopedia of Philosophy)
Knowledge is a reality created by measurement, the creative role of quantum mechanics. The world is God's knowledge. There are not two copies of the world, in itself and in God's mind, but just one, made unique by the quantum mechanical no cloning theorem. No cloning theorem - Wikipedia
15: Lagrangian mechanics and evolution
The transfinite computer network is a very large system. Cantor's universe is built of ordered sets of ordered sets of . . . of natural numbers. Each number appears a transfinite number of times. We understand the cardinal number ℵn+1 to be the cardinal of the set of permutations of a set whose cardinal is ℵn, so we write ℵn+1 ≡ ℵn!. To form the transfinite computer network, we replace every natural number with a corresponding Turing machine.
This system seems much too big to model the universe. We propose to cut it down to match the visible world in two ways. First, we believe that the universe started off very simple, possibly as a structureless initial singularity. At this point the first cardinal number is not the total number of natural numbers, but may be any number. From a logical point of view, the first transfinite number ℵ0 may just be 1 or 2. We might call this number "machine infinity". For a machine with only two states, as the earliest universe might have been, three is in effect an infinite number.
The second way is natural selection. The first step in this direction was taken by Maupertuis (1698-1759), who conjectured that in a perfect world, events occurred with the least expenditure of action. This is in effect a selection principle, picking algorithms that survive from all possible algorithms. The permutations that construct the transfinite computer network explore all possible algorithms, and we guess that the only ones that survive are those that drive the system of the world that we actually observe. Maupertuis' principle - Wikipedia
Although Newtonian mechanics is in principle quite simple, its application is not so easy for problems involving three or more bodies. This became apparent when astronomers began to compute orbits in the solar system, which comprises the Sun, eight or nine planets and many moons. This motivated reformulations of Newton's laws in more general terms, leading to the work of Joseph-Louis Lagrange and William Rowan Hamilton. Joseph-Louis Lagrange - Wikipedia, William Rowan Hamilton - Wikipedia
Maupertuis was a bit vague about what action actually meant, but Lagrange produced a clear mathematical definition of an action function, now called the Lagrangian. Where Newton's method computes the path of a particle from p1 to p2 by analyzing it point by point, Lagrange's method computes a path by considering its start and end points and then finding the path that minimizes the classical action. In a space with a conservative potential this path is found to be identical to that computed by Newton's method. More generally, the paths taken in nature are those stable paths for which small variations in the process make no change in the action. This is Hamilton's principle, the principle of stationary action. Hamilton's principle - Wikipedia
Mathematically, the action for a system moving on a trajectory beginning at time 1 and ending at time 2 is the time integral of the difference between the kinetic and potential energy of the system as it moves along the trajectory. The actual path taken in nature is the path predicted by Hamilton's principle. This path may be calculated using the calculus of variations.
S = ∫ (KE - PE) dt
Calculus of variations - Wikipedia
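A discrete sketch of Hamilton's principle in Python may make this concrete. We take a ball thrown straight up with unit mass (the numbers, the gravity constant and the trial 'wiggle' are all illustrative assumptions), approximate the action as a sum over short segments, and check that the Newtonian trajectory has a smaller action than a nearby trial path with the same endpoints:

import math

N, T, g, v0 = 1000, 1.0, 9.8, 4.9
dt = T / N
times = [i * dt for i in range(N + 1)]

def action(path):
    # discrete approximation to S = integral of (KE - PE) dt for unit mass
    s = 0.0
    for i in range(N):
        v = (path[i + 1] - path[i]) / dt      # segment velocity
        x = (path[i + 1] + path[i]) / 2       # segment midpoint height
        s += (0.5 * v * v - g * x) * dt
    return s

newton = [v0 * t - 0.5 * g * t * t for t in times]                     # true trajectory
wiggle = [x + 0.05 * math.sin(math.pi * t / T) for x, t in zip(newton, times)]

print(action(newton) < action(wiggle))   # True: the Newtonian path is stationary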
The action and energy described so far follow the ideas of classical mechanics. The relationship between these two concepts changes a little when we turn to quantum mechanics. In classical physics, action is the time integral of energy. In quantum physics, action describes the relationship between frequency and energy expressed in the Planck-Einstein relation E = ℏω, where E is energy, ℏ is the reduced Planck constant and ω is angular frequency.
Richard Feynman has given us a fascinating lecture which shows the connection between the application of Hamilton's principle in classical mechanics and in quantum mechanics. Feynman, Leighton and Sands FLP II:19: The Principle of Least Action
The action in quantum mechanics is simpler than the action in classical mechanics, since all quantum mechanical equations embody the quantum of action. Feynman devised a methodology which takes advantage of this and of the quantum mechanical principle of superposition to produce the path integral formulation of quantum mechanics. Feynman & Hibbs: Quantum Mechanics and Path Integrals, Path integral formulation - Wikipedia
The idea is that a quantum system follows every possible path between two states. Each path is represented by a probability amplitude that looks like a wave. When all the amplitudes for all the paths are added up, most of them cancel out except on the path where all the amplitudes are in phase and interfere constructively. In nature this selection happens automatically by superposition rather than by the application of the calculus of variations to compute the classical path. It has the advantage of approaching the classical path as the quantum of action is decreased to zero. We may see a similarity between the path integral formulation and de Broglie's justification of Bohr's assumption that the orbit of an atomic electron comprises an integral number of waves. Andrew Duffy: de Broglie's Justification of Bohr's Assumption
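A toy version of this cancellation can be computed directly. In the Python sketch below (a free particle, a one-parameter family of bent paths, and an artificially small ℏ are all illustrative assumptions, not Feynman's formulation itself), the phases exp(iS/ℏ) are stationary near the classical straight line, so those paths add constructively while the rest largely cancel:

import cmath

hbar = 0.01                                # illustrative; small relative to the actions

def S(y):
    # unit-mass free particle from x=0 at t=0 to x=1 at t=1, via point y at t=1/2
    v1, v2 = 2 * y, 2 * (1 - y)            # velocities of the two straight segments
    return 0.5 * v1 ** 2 * 0.5 + 0.5 * v2 ** 2 * 0.5

ys = [i / 1000 for i in range(-500, 1500)]
amps = [cmath.exp(1j * S(y) / hbar) for y in ys]

# amplitudes near the classical midpoint y = 0.5 add in phase; the rest oscillate
near = abs(sum(a for y, a in zip(ys, amps) if abs(y - 0.5) < 0.1))
far = abs(sum(a for y, a in zip(ys, amps) if abs(y - 0.5) >= 0.1))
print(near > far)                          # True: paths near the classical one dominate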
Energy is the foundation of both quantum mechanics and relativity. Quantum mechanics is very similar to music in that it involves the superposition of periodic signals. In music the signal is a real wave in air. In quantum mechanics the signal is a probability amplitude. In classical mechanics Hamilton's principle models the selection of the real particle trajectories from all possible trajectories. In quantum mechanics the superposition of probability amplitudes performs the same task.
In the network model, more complex structures are built using the subroutines provided by the layer beneath them. Because higher layers of the network are more complex than the lower layers, the principle of requisite variety makes it impossible for them to be fully determined by the lower layers. Instead, they must be 'discovered' by random variation and selected by their ability to control the resources provided by lower layers for their own sustenance.
This creates a situation analogous to that envisaged by the P versus NP problem in computer science. Designing a new species that can survive seems to be an almost intractable problem, so that it falls into the class NP. On the other hand, checking such a design is relatively easy, class P, since if it survives it is, from an evolutionary point of view, true. If it does not survive it is not consistent with its environment, and so evolutionarily false. P versus NP problem - Wikipedia
This idea has been developed by Michael Polanyi:
The theory of boundary conditions recognizes the higher levels of life as forming a hierarchy, each level of which relies for its workings on the principles of the levels below it, even while it itself is irreducible to these lower principles. Michael Polanyi: Life's Irreducible Structure
We begin by thinking of space as memory, that is not-time. In space-time, space is orthogonal to time, that is independent of it, analogous to the independence of the three dimensions of space: we can move north without going east.
The creation of memory requires a loss of unitarity, that is a loss of normalization, in effect the emergence of two parallel sources capable of independent action. Since energy is conserved, the total rate of action must be shared between these sources. Let us, for sake of concreteness, assume that the sources are a massive particle like an electron and its antiparticle, the positron, both fermions, communicating through massless bosons, like photons. Photons have no rest mass, and travel always at the velocity of light, c. Electrons and positrons do not spontaneously decay (like neutrons for instance), and so have no fixed lifetime between their creation and annihilation, but they can be created in pairs by energetic photons, or annihilate in pairs to create photons.
In a particular inertial frame space is orthogonal to time, but the effect of the Lorentz transformation between inertial frames in relative motion is to break this orthogonality, mixing space and time from the point of view of an observer moving relative to the observed frame. Nothing changes for an observer moving in the observed frame, but the Lorentz transformation produces time dilation and length contraction, so that from the point of view of an observer, time stands still for a photon and it has zero length. It appears, in effect, to be outside space-time.
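The arithmetic behind these two effects fits in a few lines of Python (the choice of 0.8c is an arbitrary illustration):

def gamma(v, c=299_792_458.0):
    # the Lorentz factor that governs both time dilation and length contraction
    return 1.0 / (1.0 - (v / c) ** 2) ** 0.5

v = 0.8 * 299_792_458.0
g = gamma(v)
print(g)         # 1.666...: moving clocks are observed to run slow by this factor
print(1.0 / g)   # 0.6: moving rods are observed to contract to this fraction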
Since energy is equivalent to process, and mass is equivalent to energy, we may guess that massive particles like electrons have some internal process that accounts for their mass and energy. They are in effect localized processes, rather as I am a massive localized process. A photon, on the other hand, having no mass, has no internal process, and yet it moves. How is this?
We get a clue from the fact that in special relativity energy and momentum transform in the same way as time and space. From a quantum mechanical point of view, energy and momentum are both cyclic phenomena, that is, measures of processing. Energy measures the frequency of steps in time. Momentum measures the length of steps in space. The length of the spatial steps (Δx = h/Δp) divided by the duration of the time steps (Δt = h/ΔE) yields velocity.
This suggests that rather than being simply the passive background for physical processes, space itself is effectively the network layer that provides the processing necessary for the motion of photons. Photons carry both energy and momentum despite their lack of mass, and may be envisaged as stepping along in steps whose rate of execution is their frequency and whose length is their wavelength, the two related so that the product of frequency and wavelength is the velocity of light, c.
As in biological evolution, the predecessor of any emergent structure must be capable of existing in its own right. So cells existed independently before they became united into multicellular organisms. So we have imagined energy and time existing independently of space-time, whose emergence is accompanied by the emergence of momentum.
This guess suggests how space-time executes local motion and may give us a clue to why the Minkowski metric has the signature (-1, 1, 1, 1). This metric suggests that time and space are inverses of one another, so that for photons the null geodesic connects points between which the space-time interval is zero, that is, they are in contact. Minkowski space - Wikipedia
We now turn to a quantum mechanical picture of the logical generation of a dual structure and its expansion to a transfinite model of the universe. Most of the essential features of quantum mechanics are demonstrated by the classical two slit thought experiment. One of the most interesting features of this experiment is that even when particles are sent through the apparatus one at a time, they still build up the interference pattern on the screen. In other words, individual particles communicate with themselves in the environment set up by the experiment. FLP III:01: Chapter 1: Quantum Behaviour
We have already noticed the role of entanglement in the creation of spooky action at a distance. The interference patterns produced by the two slit experiment suggest that the particles creating the pattern must be, in effect, going through both slits. This is not a problem if the quantum mechanical layer lies beneath the space layer in the universe. This brings us to the emergence of space and the observation that all quantum mechanical particles are either bosons or fermions: bosons with integral spin; fermions with half integral spin.
In the macroscopic world it is safe to say that no two objects are completely identical, since each comprises trillions of atoms and can be different in many small ways. At the level of fundamental particles, however, true identity is possible. All electrons have the same rest mass and the same charge, for instance. At rest they also have two states of spin, "up" and "down". In motion they have a potentially infinite variety of momentum states. Truly identical particles are indistinguishable, and the quantum mechanical rules vary, depending on whether the particles involved are distinguishable or indistinguishable. Feynman, Leighton & Sands FLP III:04: Identical Particles
As Feynman states them, the rules are very simple:
(1) The probability of an event in an ideal experiment is given by the square of the absolute value of a complex number φ which is called the probability amplitude:
P = probability,
φ = probability amplitude,
P = |φ|².
(2) When an event can occur in several alternative ways, the probability amplitude for the event is the sum of the probability amplitudes for each way considered separately. There is interference:
φ = φ1 + φ2,
P = |φ1 + φ2|².
(3) If an experiment is performed which is capable of determining whether one or another alternative is actually taken, the probability of the event is the sum of the probabilities for each alternative. The interference is lost:
P = P1 + P2.
Indistinguishable particles therefore interfere. Distinguishable particles do not. But bosons interfere in a different way from fermions, and fermions with opposite spins interfere differently from fermions with the same spin, since they are distinguishable by their spin. The amplitudes of identical fermions add 180 degrees out of phase, so that their sum is zero and the probability of two identical fermions occupying the same state is zero. Identical bosons, on the other hand, can enter the same state. In fact the more bosons there are in a particular state, the greater the probability that more will enter. Fermion behaviour appears to be related to the extension of space. Boson behaviour explains the behaviour of lasers and masers.
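These rules reduce to a few lines of arithmetic on complex numbers. A Python sketch (the phases are illustrative assumptions, chosen to show the extreme cases):

import cmath

phi1 = cmath.exp(1j * 0.0)              # amplitude for alternative 1
phi2 = cmath.exp(1j * 0.0)              # amplitude for alternative 2, in phase

print(abs(phi1 + phi2) ** 2)            # 4.0: indistinguishable alternatives interfere
print(abs(phi1) ** 2 + abs(phi2) ** 2)  # 2.0: distinguishable alternatives, no interference
print(abs(phi1 - phi2) ** 2)            # 0.0: identical fermions, 180 degrees out of phase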
The Trinity is an atom of communication, two sources and a link: two fermions and a flow of bosons, Father and Son communicating through the Spirit. This may sound a bit blasphemous, but we are talking formalism, in the realm of absolute simplicity where there are no nuances of personality, only the first physical implementation of formal logic, ie the operations not and and, defined by truth tables. We can leave 2000 years of religious and theological meaning aside and go back to basics, looking for a theological interpretation of energy, gravitation and quantum mechanics. Fermion - Wikipedia, Boson - Wikipedia
Continuous mathematics imagines that changes of states are continuous, so that there is a continuous function joining any state φ to any other state not-φ = ψ. This idea implies that there is a continuous sequence of states running from φ to ψ. This idea suffers from the same difficulty as the notion that one can make a continuum out of discrete points. The fact that mathematicians can make logically consistent definitions of continuity does not necessarily imply that continua exist in reality. From an information theoretical point of view, a real continuum carries no entropy or information. Zee: Quantum Field Theory in a Nutshell
We observe, in fact, that there are no continuous transitions between states. What we do see in any event is the annihilation of an old state or set of states and the creation of new states. Nevertheless, various parameters like action, energy and momentum are conserved in the transition from before to after. Conservation is a very old idea, made explicit in Aristotle's theory of matter and form. Matter is common to all physical states (conserved) but may accept different forms to create different states, like swords and ploughshares. Neuenschwander: Emmy Noether's Wonderful Theorem
Here we understand symmetry in terms of a layered network model. Each layer of the network acts as a symmetry or set of algorithms for the layer above it. The physical layer of a network provides the symmetries, such as the conservation of action, energy and momentum, which constrain the behaviour of all the higher layers. These symmetries are 'broken' when they are applied to the specific processes used by higher layers to perform their functions. The algorithm for addition applies to all additions, but particular additions are distinguished by the values entering into them. Quantum mechanics, while conserving action, presents us with a countable infinity of possible actions represented by the eigenfunctions of a quantum operator.
The fundamental symmetry, from the point of view of this essay, is computability or determinism. Turing found that there is a countable infinity of computable functions. The hypothesis that the Universe is digital to the core proposes that we identify the set of computable functions with the set of quantum eigenfunctions, on the assumption that they are equinumerous. Computability, we propose, provides the stable skeleton upon which the Universe is built.
Some things are predictable. Some are not. If we are to succeed, we must base our technology on predictable things, like the strength of steel or the conductivity of copper. The science of physics aims to discover these fixed points in nature and (hopefully) understand the relationships between them.
When we consider the Universe as divine, we can imagine the symmetries discovered by physics as the boundaries of the divinity. From a logical point of view, the dynamics of the Universe is consistent. As Turing and Gödel found, however, consistent does not necessarily mean determinate. There are unprovable and incomputable propositions that are nevertheless true.
The execution of an abstract Turing machine is an ordered sequence of operations which do not involve time or frequency. A practical physical computer, on the other hand, operates in space-time and its power is measured by a combination of the spaciousness of its memory and the frequency of its processor.
The operations of a single computer are controlled by a clock which produces a square wave by executing the not operation at a frequency determined by a physical oscillator. In modern machines, this frequency lies in the gigahertz range. The clock pulses maintain synchronicity between all the logical operations of the machine. From a quantum mechanical point of view, the clock represents a stationary state.
Although most macroscopic processes appear continuous, we know that all change requires the discrete acts of creation and annihilation whose nature and frequency we model with quantum field theory applied at various scales in various contexts. The Standard Model is the application of this theory to the point particles which lie at the lowest level in the universal structure. Standard model - Wikipedia
To have kinetic energy, we must have space to move, in other words we need velocity, which is from our macroscopic point of view a measure of distance travelled per unit of time. To have potential energy, we must have some sort of memory, a system to store energy in a non-kinetic or stationary form. In physics a store of energy is simply called a potential. The emergence of space-time therefore seems coupled with the emergence of potential and kinetic energy whose algebraic sum may be zero. Marcelo Samuel Berman: On the Zero-Energy Universe
Each layer in network model uses the resources provided by the layer beneath it to perform tasks which serve as resources for the layer above it. Here we consider the initial singularity to be the root of the universe, and we identify this singularity with the classical model of God produced by Aristotle and Aquinas.
There are two ways to count the activity of a source of action. The first is its energy, the time rate of action, represented by the equation E = hf, where E is energy, h the quantum of action and f frequency. The second is to count the number of different actions and their relative probability. This yields a measure of entropy or complexity. The mathematical theory of communication tells us that the entropy H of a source A which has an alphabet of different actions ai, each executed with probability pi, is
H = - ∑i pi log pi
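As a check on the formula, a small Python sketch (the example distributions are illustrative; entropy is measured in bits, using base-2 logarithms):

from math import log2

def entropy(probabilities):
    # H = -sum(p * log2(p)); zero-probability actions contribute nothing
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin
print(entropy([0.25] * 4))    # 2.0 bits: four equiprobable actions
print(entropy([1.0]))         # 0.0: a source with only one action tells us nothing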
16: General relativity: the cosmic network
Isaac Newton built his system of the world in a fixed three dimensional Euclidean space pervaded by universal time, the same at every point in space. He was able to describe all motion in the heavens and on Earth with three simple laws which describe the relationship between force and motion. A fourth law, the law of universal gravitation, provides the force which guides the motions of the planets and moons of the solar system and explains why nobody falls off our spherical planet.
Newton's laws of motion are:
Newton: The Principia: Mathematical Principles of Natural Philosophy, Newton's laws of motion - Wikipedia, Kepler's laws of planetary motion - Wikipedia, Tycho Brahe - Wikipedia
1. A body at rest remains at rest, and a body in motion continues to move in a straight line unless it is acted upon by a force. We call this inertial motion. This law breaks from the ancient idea that continued application of force is required to keep a body moving, an idea that seems true only because friction is almost always present.
2. The acceleration imparted to a body by the action of a force is inversely proportional to the mass of the body: a = F/m. Massive bodies tend to resist the action of forces, leading to the third law:
3. The action of a force and the reaction to it are equal and opposite.
On Earth most forces are exerted by contact, but the heavenly bodies are moved by gravitation, whose source is mass itself. Newton's universal law of gravitation reads:
F = Gm1m2 / r²
where G is the gravitational constant, m1 and m2 are the masses of the heavenly bodies and r is the distance between them. Newton was able to deduce this law from the three laws of planetary motion which Kepler had derived from astronomical observations made by Tycho Brahe.
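We can check the law against the solar system in a few lines of Python (standard textbook values for the constants, masses and distance):

G = 6.674e-11                     # gravitational constant, N m^2 kg^-2
m_sun, m_earth = 1.989e30, 5.972e24
r = 1.496e11                      # mean Earth-Sun distance, m

F = G * m_sun * m_earth / r ** 2
print(F)                          # about 3.5e22 N

# for a circular orbit, gravity supplies the centripetal force m v^2 / r,
# which predicts the orbital speed:
v = (G * m_sun / r) ** 0.5
print(v)                          # about 3.0e4 m/s, Earth's observed orbital speed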
An important point about Newton's law of gravitation is that the source of the gravitational force acting on the masses is the mass itself. In a sense we are looking at a form of consciousness, self interaction at the lowest physical layers in the universe. This leads us to distinguish between gravitational mass, the source of gravitation, and inertial mass, the source of the resistance to force. Einstein realized that these two concepts of mass point to the same reality.
Einstein was motivated to revise Newton's laws by a contradiction that was revealed when he imagined travelling alongside a light beam at the speed of light. According to the Galilean principle of relativity used by Newton, the light should appear stationary, just as a train appears stationary when we are travelling along with it. On the other hand, Maxwell's equations which describe the propagation of light show that light cannot appear stationary. John D. Norton: Chasing a beam of light: Einstein's most famous thought experiment, Galilean invariance - Wikipedia, Maxwell's equations - Wikipedia
Even when travelling alongside it at the speed of light, a light beam would still appear to be travelling at the speed of light. Galileo thought that all the laws of nature appear identical to observers in inertial frames. Einstein added the velocity of light to this list of laws. In the Galilean picture, the total velocity of a person running along a train is simply the sum: speed of train + speed of runner. Einstein needed a transformation which said speed of light + speed of light = speed of light. Such a transformation had already been formulated by Lorentz and is now central to the special theory of relativity. Special relativity - Wikipedia
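The composition law Einstein needed can be written down directly. A Python sketch in units where c = 1 (the sample speeds are illustrative):

def add_velocities(u, v, c=1.0):
    # Einstein's relativistic composition law replaces the Galilean sum u + v
    return (u + v) / (1 + u * v / c ** 2)

print(add_velocities(0.5, 0.5))   # 0.8, not 1.0: speeds do not simply add
print(add_velocities(1.0, 1.0))   # 1.0: speed of light + speed of light = speed of light
print(add_velocities(1.0, 0.3))   # 1.0: light moves at c in every inertial frame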
After he had completed the special theory of relativity Einstein saw that he had more work to do:
'In 1907, while I was writing a review of the consequences of special relativity . . . I realized that all natural phenomena could be discussed in terms of special relativity except for the law of gravitation. . . . It was most unsatisfactory to me that although the relation between inertia and energy is so beautifully described [in special relativity], there is no relation between inertia and weight.' Pais: 'Subtle is the Lord...': The Science and Life of Albert Einstein, page 179
The network model proposed here assumes that each layer of the network is capable of existing as a free standing system whose outputs become the symmetries upon which the layer above it is built. So the inertial system described by the special theory of relativity becomes the foundation for the universe described by the general theory. The general theory is built from the interactions of "flat" inertial spaces to give the geometry of "curved" spacetime.
Einstein said that the 'happiest thought in my life' was the realization that a person in free fall would not feel their own weight. A body freely falling in a gravitational field is in an inertial frame, like a satellite circling Earth. His problem was to build this insight into a theory of gravitation. He worked sporadically on gravitation from 1907 to 1912, then '. . . between August 10 and August 16, it became clear to Einstein that Riemannian geometry is the correct mathematical tool for what we now call general relativity' (Pais, 210). Riemannian geometry - Wikipedia
A differentiable manifold is a bit like chain mail: a flexible dynamic structure constructed by hinging together a large number of rigid pieces, each of which is a flat space tangential to the manifold at a given point. This idea was used long ago by Archimedes to approximate curves by short segments of straight lines. The same technique is used in differential calculus to construct the differential of a function representing a curve. Differentiable manifold - Wikipedia
In the manifold the pieces are infinitesimal flat tangent spaces. The hinges are differentiable connections between these elements. The resulting space is a continuous and continuously deformable topological space that has no metric, so that it may be compressed and stretched to any size. This serves as a blank canvas for general relativity. Here we wish to interpret this manifold as a digital network and break the continuity by imagining that the limit we take when calculating a derivative is not zero but a very small number, Planck's constant. Compared to the size of the Universe, the difference between h and zero is so small as to have a negligible effect on the structure of the manifold. This interpretation replaces the infinitesimal flat spaces of the differential manifold with quantum events whose size is measured by the quantum of action.
The fully developed cosmic network has four dimensions, one of time and three of space. The first dimension to emerge is energy / time, as described above. We then imagine the emergence of one dimension of space to enable the existence of two discrete points p and not-p, which we may imagine as fermions, perhaps particle and antiparticle, communicating in time through a massless boson similar to a photon, moving at the velocity of light. Next we may imagine the emergence of a second spatial dimension orthogonal to the first to give us a system of four particles analogous to that described by the Dirac equation.
Finally 4-space emerges and is selected because it is now possible for every point in space to communicate with every other without "crossed wires" and confused signals. This development was noted some time ago in the manufacture of printed circuit boards when it was realized that a single flat layer places a very strong constraint on the connectivity of devices on the board, so that it is necessary to move into the third dimension. A similar problem arising from the bottlenecks in urban traffic flow caused by level intersections between transport links is solved by moving into the third dimension. Although higher dimensions may be possible, they do not appear to add sufficient selective advantage to maintain their added complexity.
As noted above, quantum mechanics describes a communication network, so we may understand all the detailed communication at the physical layers in the universal network to be mediated by quantum mechanical processes. Over very short distances these are managed by the strong and weak forces. Over longer distances electromagnetism and photons become dominant. The general theory of relativity describes the space-time symmetry underlying all the communication operations in the universe. Although gravitation is very weak, it is not shielded, and therefore not localized as the electric channel is by the existence of positive and negative charges. It determines the large scale structure of the universe which arises from the transmission of undifferentiated energy.
Newton was the first to realize that the source of gravitational interaction between masses is mass itself. This recursive process accounts for the compression of masses of particles into stars producing temperatures great enough to complete all the relatively slow nuclear syntheses which were not completed in the initial moments of particle formations.
For nearly a century much sweat, maybe some tears (and possibly a little blood) has been spent in the so far unsuccessful effort to quantize gravity. Here we interpret this situation as evidence that gravity is not quantized. The core argument is based on the notion that the Universe may be modelled as a computer network. An important feature of useful networks is error free communication.
Conversely, we should not expect to find quantization where error is impossible, that is in a regime where every possible message is a valid message. Since gravitation couples universally to energy alone, and is blind to the particular nature of the particles or fields associated with energy, we can imagine that gravitation involves messages that cannot go wrong, and therefore have no need for error correction, or, on our assumptions, quantization. In other words, we may think of them as continuous. They are in a sense empty transformations.
Gravitation is thus present at every point in the universe. In the context of our present four dimensional universe, this conserved flow is described by Einstein’s field equations. Because this flow cannot go wrong, it requires no error protection, and so it does not need to be quantized.
This line of argument suggests that efforts to quantize gravity may be misdirected. It may also give us some understanding of 'spooky action at a distance' which has been found to propagate much faster than the velocity of light. Let us guess that the velocity of light is limited by the need to encode and decode messages to prevent error. If there is no possibility of error, and so no need for coding, it may be that such 'empty' messages can travel with infinite velocity.
This suggests that logical continuity is prior to and more powerful than geometric continuity. If we can find a way to substitute logical arguments for the arguments from continuity employed in physics we may be able to put the subject on a stronger foundation. Cantor's set theory effectively digitized the continuum into sets of points. There is something of a paradox in trying to describe something continuous with discrete points. Aristotle considered things to be continuous if they had points in common, like a chain. Logical continuity, comprising a chain of logical statements, is consistent both with Aristotle's idea and the connections in a differentiable manifold.
Continuous mathematics gives us the impression that it carries information at the highest possible density. This theory is so convincing that we are inclined to treat such points as real. Much of the trouble in quantum field theory comes from the assumption that point particles really exist. The problem is solved if we assume that a particle is not a geometrical but a logical object, a self sustaining proof, proving itself by logical processes as I prove myself into existence.
Feynman provides a succinct summary of the general theory:
The theory must be arranged so that everybody—no matter how he moves—will, when he draws a sphere, find that the excess radius is G/3c² times the total mass (or better G/3c⁴ times the total energy current) inside the sphere. That this law — law (1) — should be true is one of the great laws of gravitation, called Einstein's field equation. The other law is (2) — that things must move so that the proper time is a maximum — and is called Einstein's equation of motion.
Since gravitation couples only to energy, and sees only the energy of the particles with which it interacts, we might assume that it is a feature of an isolated system, that is, one without outside observers. The Universe as a whole is such a system. Whenever there is observation (communication), there is quantization. There can be no observation when there are no observers. This is the initial state of the Universe described by gravitation.
With the breaking of the initial total symmetry of the initial singularity space, time, memory and structure enter the universe to give us the system we observe today. Nevertheless, as in a network, the more primitive operations remain inherent in the more complex.
One consequence of the final form of the field equations is that the world they describe cannot be static; it must either be expanding or contracting. Subsequent observations of the universe have shown that it is expanding. This opens the possibility of extrapolating the universe back from its present enormous size to something smaller. Taken to the extreme, this suggests that the Universe began as a structureless point, now called the initial singularity. This point is formally identical to the classical God: no spatial size, absolutely simple with no structure, and the source of the Universe. Hawking and Ellis: The Large Scale Structure of Space Time
17: Network intelligence and consistency
The 4D spacetime network described by the general theory provides a framework for the quantum mechanical construction of the microscopic detail which fills out the macroscopic structure of the Universe.
We assume that the detailed structure of the universe has emerged by a process of evolution by natural selection. We see the formal features of this process arising from two mechanisms, variation and selection. Variation is the consequence of two features of consistent formal systems, incompleteness and incomputability, which place limits on determinism. From a cybernetic point of view, these limits are captured by the principle of requisite variety. From a spatial point of view, deterministic evolution is limited by incompleteness; from a temporal point of view, it is implemented by computability.
Gödel's incompleteness theorems establish that there are true propositions that cannot be proven because they are more complex (have greater entropy) than the resources available for proof. Since we believe that entropy always increases, this means that in general the past lacks the entropy to control or predict the future. This fact is a ubiquitous feature of common experience: "The best laid schemes o' Mice and Men gang aft agley." To a Mouse - Wikipedia
A major unsolved problem in the theory of computation is known as the "P vs NP problem". Turing placed an absolute bound on computability: there are some problems that are inherently incomputable. Within that bound, however, there are degrees of difficulty. One classification distinguishes problems whose difficulty grows exponentially with the size of the problem from those whose difficulty grows polynomially. If n is the size of the problem and k is some fixed number greater than 1, the difficulty D grows as kⁿ for exponential problems, but as nᵏ for polynomial problems. As n increases, kⁿ eventually becomes greater than nᵏ, regardless of the size of k.
We may imagine that the problem of evolution falls into this classification. The discovery of new systems may be exponentially difficult, but testing them may fall into the area of polynomial difficulty. The random process of variation may sometimes solve the exponential problem. The deterministic process of selection will then be able to sort the survivors from the failures. The basic requirement for an organism to survive is that it be able to obtain enough resources from its environment to maintain its own existence. In other words we might say that its existence is consistent with its environment. Fortnow: The Golden Ticket: P, NP, and the Search for the Impossible
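The asymmetry between finding and checking is easy to exhibit. A Python sketch using the subset-sum problem (the numbers and target are arbitrary illustrations): verifying a proposed solution takes one addition, while finding one may require searching all 2ⁿ subsets:

from itertools import combinations

numbers = [3, 34, 4, 12, 5, 2]
target = 9

def verify(candidate):
    # checking a proposed solution is fast: one sum and one comparison
    return sum(candidate) == target

def search():
    # finding a solution by brute force: try every subset (exponential in len(numbers))
    for k in range(len(numbers) + 1):
        for subset in combinations(numbers, k):
            if verify(subset):
                return subset
    return None

print(search())   # (4, 5): found only after an exhaustive search, checked in one step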
We understand intelligence as the ability to create solutions to problems. We may see a problem as a blockage to the release of a potential. I want to solve the crossword. My desire is a potential, creating a force moving me toward a solution. Unless the puzzle is very simple however, I cannot move deterministically to the solution by a computation. I must follow a course of trial and error, using my imagination to generate random trials and then testing them to see if they are consistent with the clue, the grid and the words I have found so far. Intelligence understood in this way is very close to evolution by natural selection. It seems reasonable to see evolution and intelligent design as instances of the same process.
Here we see Cantor's theorem as both the formal source of the potential which moves the universe to become more complex and the source of the variation that makes this possible. This potential is actualized when random processes open up a path to a new structure which is capable of sustaining itself. This is in effect the creation of a new layer in the universal network. Such a layer may propagate and become the foundation for further new layers, or it may eventually falter and die. We see this process at work from day to day in the evolution of technology and culture.
Every new layer in the network is a new interpretation of the past, a new moment, a new meaning.
18: Symmetry, invisibility and scale invariance
Both the structure of the universe and our knowledge of it are made possible by symmetry. The concept of symmetry has three features: a concept of identity; a concept of difference; and a transformation between the two. A perfect featureless wheel, for instance, looks the same no matter how we turn it. This is the identity. The difference is that if we turn it, it assumes a different position. The transformation that connects the sameness and the difference is the turning of the wheel. Mathematically a symmetry is represented by a group, which is a set of elements and a rule of composition which combines any two elements of the group to give a third, also a member of the group. The group is closed: there is no operation that takes the group outside itself, and there is always a sequence of group operations, like the turning of a wheel, which brings us back to the starting point. Neuenschwander: Emmy Noether's Wonderful Theorem, Auyang: How is Quantum Field Theory Possible?
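The closure of a group is easy to verify mechanically. A Python sketch of the wheel example using quarter turns (the choice of 90-degree steps is an illustrative assumption):

# rotations of a wheel through multiples of 90 degrees form a group:
# composing any two rotations gives another rotation in the same set
elements = [0, 90, 180, 270]

def compose(a, b):
    # the rule of composition: perform one turn, then the other
    return (a + b) % 360

closed = all(compose(a, b) in elements for a in elements for b in elements)
print(closed)             # True: no operation takes the group outside itself
print(compose(270, 90))   # 0: a sequence of turns brings us back to the start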
We have constructed the transfinite computer network using the group operation known as permutation which generates the permutation group by swapping one element for another. The permutation group is the most general possible group which contains all the others. In the network model, we assume that identity is the result of copying, that is descent. So we find that all humans are descended from a common ancestor, and we can produce different "teams" of a group of people by assigning different roles to each of them.
Our assumption is that layer n of the transfinite network is constructed using elements of layer n-1. We therefore expect to find symmetries in layer n reflecting the elements of layer n-1 that have been incorporated into layer n.
The technological paradigm for the network model of the world is the internet, which is a layered computer network. As a user I am not aware of all the transformations that my input undergoes as it works its way down through the network to the physical layer and then, after transmission to the recipient, up through the network layers to the user who is my peer. All this processing is invisible to me.
Since it requires computation to transform a message, a computer which is programmed to transmit all the steps that it takes to perform a particular calculation must also transmit all the steps that it takes to transmit the information about what it is doing. The transmission of this information requires further processing to describe the process of transmission, and so on, which ultimately means that the machine's capacity is completely taken up by the transmission of its own internal states, and it can achieve no progress on its actual task. This vicious circle can be avoided only if the computer remains at least partly invisible, that is, if it simply completes its task and perhaps provides a log of the major actions that it has taken.
One consequence of this constraint on communication is that the lowest physical layers of the universal process must remain invisible to us. It may be for this reason that we cannot observe the quantum mechanical processes that we represent by the evolution of wave equations. We can only observe the outcomes of these processes, and are left to devise hypotheses about what is actually happening in the invisible bottom layers of the universal network.
19: Physics is mind: panpsychism
Among the attributes of the traditional God are intelligence and omniscience. Aquinas explains that intelligence is associated with spirituality, and since God is the supremely spiritual being, it must also be the supremely intelligent being. Aquinas, Summa: I, 14, 1: Is there knowledge in God?
For a long time the Christian Churches have preached a sharp distinction between God and the world. Their religion is founded on the notion that the newly created, young and curious first people disobeyed God, who angrily crippled his new creation as punishment for their evil deed. In the light of our modern scientific history of creation this story is complete rubbish, but it has become deeply ingrained in the human psyche over the last two thousand years.
The point of this essay is that there is no reason to distinguish God and the Universe, that the Universe performs all the roles traditionally attributed to God and that it is quite intelligent enough to have created us. We know that our own intelligent minds are a complex network of nerves, our brains. Since we have modelled the universe itself as a network, we might guess that the universal network is intelligent, as our mental networks are. The fact that we are here, enormously complex creatures created by the universe itself, is strong evidence for this position.
Each of us is a complex of some 100 trillion cells, each comprising some 100 trillion atoms. Our studies of our microscopic anatomy suggest that almost every atom in this huge system has a specific place and a specific role in the system. In other words, many billions of years of evolution have sculpted our structures, and the structures of all other life forms, in atomic detail. We may attribute the omniscience and omnipotence of the traditional god to the system that did this work.
20: Humanity: cutting theology free from politics
Theology, the traditional theory of everything, is at present politically imprisoned by powerful organizations like the Catholic Church. It is not really a theory of everything but the ideology of the political powers that have shaped it.
There are two outstanding events in the history of Christianity. The first was its political capture by the emperor Constantine who used it as a tool to consolidate his power. His first move was to force the bishops to standardize their doctrine, the result being the Nicene Creed. Once this was in place it became easy to define heresy and heretics and to control the doctrinal consolidation of the Church by inquisitional threats, torture, murder and military actions, reaching its apogee in the Crusades and the wars of religion that destroyed large parts of Europe over more than a century. Inquisition - Wikipedia, Crusades - Wikipedia, European wars of religion - Wikipedia
The second event, a consequence of the first, was the prosecution of Galileo for professing an evidence based scientific opinion which the Inquisition held to be contrary to divinely inspired doctrine. Galileo lost his battle with the Inquisition but science was unstoppable, and now it is undermining the last bastion of falsehood and mythology, the ancient religions. Scientific theology is in an embryonic state, but there can be little doubt that it will dominate our view of reality in the long run. Galileo affair - Wikipedia
In the light of the hypothesis that the universe is divine, the defects in Christian theology become very clear. The chief conflicts between the Catholic Church and its human environment seem to lie in the following areas:
1. Divine right: As well as claiming infallibility, the Pope enjoys supreme, full, immediate and universal ordinary power in the Church, which he can always freely exercise. Such power has led the Church to ignore the rule of law and human rights. Not only has it been responsible for widespread sexual abuse of children, it has frequently attempted to pervert the course of justice to hide these crimes. Vatican I: Pope Pius IX: Pastor Aeternus
2. Dictatorship: The dictatorial constitution of the Church serves as a paradigm and justification for all the other theocratic dictatorships on the earth which routinely harass, imprison, torture and murder anybody who opposes them. We now hold that all people are born free and equal. Social structures which give some people arbitrary control over others are obsolete. In their place we now expect democracy and the rule of law. The Catholic Church with its celibate, male, priestly hierarchy culminating in an absolute monarch is very far from this ideal. Through its Christian political proxies such as the United States the Church still commands the bulk of military power on Earth and uses it regularly to oppress people everywhere.
3. Faith versus science: The Church claims to have received the "gift of absolute truth" from God, and therefore the right to propagate its doctrines and require that they be held purely by faith, since it can offer no credible evidence for the truth of its claims. From a scientific point of view, the Catholic model of the world is an hypothesis, to be accepted or rejected on the evidence. The Church holds that its dogma is not negotiable. Anybody who chooses to disagree with it is ultimately a heretic to be excommunicated. There is no room in the Church for the normal scientific evolution of our understanding of our place in the Universe. John Paul II: Fides et Ratio: On the relationship between faith and reason
4. Deprecation of the World: The Church holds that 'this life' is a period of testing in a fallen world to determine who is worthy of salvation. As a result of the murder of Jesus, God will repair the damage caused by original sin; the blessed will enjoy an eternal life of the blissful vision of God, the damned an eternity of suffering in Hell. There is no evidence for any of this scenario.
5. Sexism: Within the Roman Catholic Church, the glass ceiling for women is practically at ground level; women are excluded from all positions of significant power and expected to play traditional subordinate roles.
6. The distinction between matter and spirit: The Catholic Church depends for its livelihood on a claimed monopoly on communication with God. Part of the cosmology that goes with this claim is that human spirits are specially created by God and placed in each child during gestation, so that neither we nor the Church are of this world, but in some way alien to it. In the light of the divine universe, this claim is clearly false, since we are in God and God is visible to us at all times.
7. Misunderstanding of pain: In a similar vein, the Church holds both that pain is punishment for sin, and that endurance of pain, even self inflicted pain, is a source of merit. It overlooks the fact that pain is in general an error signal that enables us to diagnose and, ideally, treat errors, diseases, corruption and other malfunctions that impair our lives. Included here is the unnecessary pain caused by the false doctrines of the Church.
8. Forgiveness of sins: The Church claims that it has, from God, the power to forgive all sins. This power is often used to circumvent the natural course of civil justice, and the Church has used it, perhaps for thousands of years, to hide the crimes of its clergy and other members. Only in the last few decades are we beginning to see the extent of the child sexual abuse that has occurred in the Church, and it is quite likely, as investigations proceed around the world, that the true extent of these crimes will be seen to be enormous.
9. Violence: In the Christian model God the Father oversees the death of His own Son, in order to placate himself for the 'original sin' committed by the first people he created. This story, which has origins shrouded in ancient mythology, places violence at the heart of human salvation. Since the Christian God is omnipotent, he could have dealt with original sin without the murder of his own son. We might divide Churches generally into those that will go as far as murder to get their own way, and those that hold life sacred. The Catholic Church, unfortunately, has a long history of killing unbelievers.
10. Cannibalism: The central ceremony of the Church, Mass or the Eucharist, is understood to comprise eating the real substantial body and blood of Christ. The words of consecration are believed to "transubstantiate" bread and wine into a human body without changing their appearances. Eucharist - Wikipedia, Transubstantiation - Wikipedia
11. Eternal life, eternal bliss and eternal punishment: The Church claims that we do not die, but live on after death to be rewarded or punished for our actions in life, and that at the "end of the world" the world will be renewed to the pristine condition it enjoyed before the original sin, and we shall continue in eternal life as fully constituted human beings. There is no evidence for this. Christian Eschatology - Wikipedia
12. Marketing and Quality: The Catholic Church believes it has a duty to induce everyone to hear and accept its version of the Gospel. This is a natural foreign policy for an imperialist organism whose size and power increases in proportion to its membership. But the modern world expects any corporation promoting itself in the marketplace to deliver value for value. People contributing to the sustenance of the Church and following its beliefs and practices need to be assured that they will indeed receive the eternal life promised to them. Ad Gentes (Vatican II): Decree on the Mission Activity of the Church, Lumen Gentium (Vatican II): Dogmatic Constitution on the Church
John 8:32: "And ye shall know the truth, and the truth shall make you free." This quotation is true if the truth referred to is really true. Unfortunately the principal message of Christianity preached by Jesus of Nazareth has been thoroughly hidden behind other doctrines by the imperial version of Christianity preached by the Catholic Church. Jesus replaced the complex law of his Hebrew ancestry with the simple phrase: love God, love your neighbour. Through the parable of the Good Samaritan he taught that "neighbour" means everybody. In modern politics, the essence of humanity is captured by the Universal Declaration of Human Rights and all the other more specific declarations of rights which emphasise human rights and equality.
The decay of the Catholic Church is a consequence of the close relationship between Christianity and the authoritarian warlords of the Roman Empire who took it over in the fourth century. Since then it has not been a religion of truth but a political tool designed to keep the poor poor while making the rich and powerful richer and more powerful: the clergy, bishops and archbishops, popes, princes, cardinals, dictators, warlords and all the other thieves who have ruled the world since time immemorial.
The salvation of religion began in the days of Galileo. Galileo lost his battle with the Inquisition and prudently opted to save his life by recanting the position he was alleged to hold. But he knew he was on the right track. His manifesto was quite simple and aimed directly at those who drew their opinions from works of fiction:
Philosophy is written in this grand book - the universe, which stands continually open before our gaze. But the book cannot be understood unless one first learns to comprehend the language and to read the alphabet in which it is composed. It is written in the language of mathematics . . . Recantation of Galileo (June 22, 1633), Galilei, p 238.
Since that time, the gradual growth of scientific inquiry has radically improved the conditions of life for a large proportion of the Earth's population. These improvements would have been even greater had they not been hampered by reactionary religious and political forces. These forces are largely in retreat, but there is a long way to go. Organizations like the Catholic Church, whose ideology was formed thousands of years ago, are one of the main problems. The other is the political power of the wealthy, who rely on exploiting other people to maintain their wealth and who enjoy, because of that wealth, disproportionate power. Acemoglu & Robinson: Why Nations Fail: The Origins of Power, Prosperity and Poverty
Theology is the traditional theory of everything and its task is to open our eyes to the wonder of everything. We have seen the benefits to be reaped from free scientific enquiry. Theology is still the slave of political and religious institutions, but we can expect a similar explosion in human welfare when it finally breaks free and is able to tell us all there is to be known about our position in the universe and the possibilities that lie ahead of us if we manage our lives with reason and prudence.
We have to learn to fill the whole divine playing field with music, technology, love, peace and goodness, occupying our own little sections of transfinite space while leaving the basic life support systems and beauty of the world intact. There is room for everybody as we increase the ratio of spirit to matter, exploiting the transfinite possibilities of life.
21: Conclusion: Life in the divine world
Many hundreds of generations of people have been indoctrinated into accepting their difficult lot in life by the promise that, if they remain docile and obedient to the powers that be, they will be rewarded after death with eternal delights. This story has lost much of its credibility in the face of our scientific understanding of the nature of the world and our role within it.
This loss of faith is balanced by the growing realization that life without repression, exploitation and falsehood can be quite rewarding and is well worth striving for. As more people reach this conclusion and demand their natural rights, we can expect the overall system to improve its care and respect for every form of life, including ourselves. We can see nations of the world currently stretched along a spectrum running from comprehensive social security and rational management to the extremes of poverty and repression maintained by regimes that act only for their own welfare and not for the welfare of the whole population. The task is relatively straightforward: to bring everybody up to the standard enjoyed by those in the best parts of the world.
Cantor's transfinite numbers are a measure of the divine potential of the paradise in which we live. We have only to lift our heads and see the magnificence of what has developed so far, and to realize that there is no limit to the creativity to be released by careful observation and prudent exploitation of the world in which we find ourselves. To save ourselves, however, we must preserve the planetary systems that sustain our lives.
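The mathematical fact behind this claim of unlimited potential is Cantor's theorem: every set is strictly smaller than its power set, so the hierarchy of transfinite cardinals never terminates. A minimal restatement in standard set-theoretic notation, asserting nothing beyond the theorem itself:

\[
|S| < |\mathcal{P}(S)| \quad \text{for every set } S, \qquad \text{hence} \qquad \aleph_0 < 2^{\aleph_0} < 2^{2^{\aleph_0}} < \cdots
\]

Each application of the power set operation yields a strictly larger infinity, which is the sense in which the creative potential described here has no upper bound.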
Above all, it is important to throw off the shackles of those institutions that would enslave us for their own benefit. We are all divine and may rightfully demand to be treated as gods, realizing that we live in a community of others identically divine whose cooperation and love are necessary for us to realize our personal paradise.
From an evolutionary point of view, the key to paradise is to maximize our personal pleasure while minimizing our footprint on the world that sustains us so that the resources of life are conserved for all to share.
(Revised 16 March 2019)