Thomas D. Le

Quantum Mechanics and the Multiverse

Part II - The Multiverse


## Chapter 5

## The Multiverse

**5.1 The Modern Concept of the Multiverse**

Is there a double of you living out there in another world that you know nothing about? What is your Doppelgänger doing in a parallel universe? What evidence is there of the existence of another universe parallel to ours? How was it formed? How many parallel universes are there? How far is the closest parallel universe? Is it possible to test and falsify the theories of parallel universes? These and a host of other similar questions about the multiverse cry out for answers.

Before we dismiss the concept of the multiverse as strange or weird, we should remember that familiar discoveries such as time dilation, length contraction, curved space, electromagnetic fields, subatomic particles, and black holes were no less weird. In fact, quantum mechanics itself is weird. Yet according to Tegmark and Wheeler, it has been recently estimated that 30% of the U.S. gross national product is derived from inventions made possible by quantum mechanics. As we learn more about reality, we can expect with almost absolute certainty to see more, not fewer, surprises. The criterion for judging a theory is not whether it is weird but whether it is experimentally testable and falsifiable.

One is likely to find scientists working in the fields of quantum cosmology and quantum computing much more receptive to the concept of the multiverse than physicists in other fields.

Only three parallel-universe theories are examined in this paper, since it is intended to be exploratory rather than exhaustive. However, readers with a keen interest can certainly explore further on their own and benefit from the references included.

**5.2 Hugh Everett III’s Relative States**

In 1957, Hugh Everett III developed the **theory of relative states** in his PhD dissertation at Princeton University, which is now considered the starting point of the many-worlds formulation of quantum mechanics. After decades of neglect, this novel interpretation has finally gained the attention and support of some of the physicists who had originally shrugged it off.

The first part of the following discussion mirrors Jeffrey
Barrett in his article *Everett’s Relative-State Formulation of Quantum Mechanics.*

Uncomfortable with the Copenhagen interpretation, which requires collapse of the wave function and an observer external to the measurement environment, Everett proposed in his thesis a new formulation of quantum mechanics that he claimed would eliminate the need for collapse of the wave function. (Decoherence was not discovered until decades later and was unknown to Everett.) In his conception, the situation after a quantum measurement is a superposition of states encompassing the physical system, the observable, and the observer in entangled states in which all outcomes are present. There is no collapse and no suppression of interference.

To illustrate, let us consider a good observer *J* engaged in measuring the spin of an electron in the quantum system *S*. Since the electron is a spin-½ particle, its spin in the x-direction has two values: +½, or x-spin up, and −½, or x-spin down. Following von Neumann, the first stage of measurement has this configuration of the observer and the observed system, in which the arrow represents the time evolution required by Dynamics Rule 4a in Section 4.5 above, and the notation | > represents a state vector.

1. |”ready”>_J |x-spin up>_S → |”spin up”>_J |x-spin up>_S

2. |”ready”>_J |x-spin down>_S → |”spin down”>_J |x-spin down>_S

The observer *J* is in the ready state. If *J* measures a system that is
determinately x-spin up, he will record “x-spin up.” If *J* measures a system that is determinately x-spin
down, he will record “x-spin down.”

However, the system *S* will begin the measurement in a superposition of x-spin eigenstates, so that in the initial stage we have:

*a* |x-spin up>_S + *b* |x-spin down>_S

where *a* and *b* (or rather their squares) give the probabilities of the respective eigenstates.

The initial state of the composite system should now be:

|”ready”>_J (*a* |x-spin up>_S + *b* |x-spin down>_S)

In the standard theory, after *J* has observed the
system, *J* either measured x-spin up (the first term of the expression
below) or x-spin down (the second term) in accordance with the collapse
dynamics. But in Everett’s no-collapse proposal, the situation remains an entangled superposition of
eigenstates, i.e., *J* recording “spin up” and *S* being x-spin up, *and* *J*
recording “spin down” and *S* being x-spin down at the same time, as follows:

*a* |”spin up”>_J |x-spin up>_S + *b* |”spin down”>_J |x-spin down>_S
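In matrix terms, this entangled state can be sketched numerically. The following fragment (an illustration using NumPy, not Everett's own formalism; the amplitudes `a` and `b` are chosen arbitrarily) builds the superposition and checks that it cannot be factored into separate observer and system states:

```python
import numpy as np

# Basis vectors: |x-spin up>_S, |x-spin down>_S for the system, and the
# observer J's record states |"spin up">_J, |"spin down">_J.
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rec_up, rec_down = np.array([1.0, 0.0]), np.array([0.0, 1.0])

a, b = 0.6, 0.8  # illustrative amplitudes with a^2 + b^2 = 1

# a |"spin up">_J |x-spin up>_S  +  b |"spin down">_J |x-spin down>_S
state = a * np.kron(rec_up, up) + b * np.kron(rec_down, down)

# The state is normalized; the branch weights are the squared amplitudes
# 0.36 and 0.64.
print(round(float(state @ state), 10))  # 1.0

# Entanglement check: reshaped as a 2x2 matrix the state has rank 2,
# so it is not a product (observer state) x (system state).
print(np.linalg.matrix_rank(state.reshape(2, 2)))  # 2
```

Because the rank is 2, neither subsystem has a state of its own, which is exactly Everett's point: the apparatus state "can be defined only relative to the state of the object system."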

Everett explains:

As a result of the interaction the state of the measuring apparatus is no
longer capable of independent definition. It can be defined only *relative*
to the state of the object system. In other words, there exists only a
correlation between the states of the two systems. It seems as if nothing can
ever be settled by such a measurement.

However, in his 1973 article entitled *The Theory of the Universal Wave
Function*, he elaborates further:

Let one regard an observer as a subsystem of the composite system: observer
+ object-system. It is then an inescapable consequence that after the
interaction has taken place there will not, generally, exist a single observer
state. There will, however, be a superposition of the composite system states,
each element of which contains a definite observer state and a definite
relative object-system state. Furthermore, as we shall see, *each* of
these relative object system states will be, approximately, the eigenstates of
the observation corresponding to the value obtained by the observer which is
described by the same element of the superposition. Thus, each element of the
resulting superposition describes an observer who perceived a definite and
generally different result, and to whom it appears that the object-system state
has been transformed into the corresponding eigenstate. In this sense the usual
assertions of [the collapse dynamics Rule 4b in Section 4.5 above] appear to
hold on a subjective level to each observer described by an element of the
superposition. We shall also see that correlation plays an important role in
preserving consistency when several observers are present and allowed to
interact with one another (to ‘consult’ one another) as well as with other
object-systems.

Barrett remarks that in the above passage “Everett presents a principle that he calls the fundamental relativity of quantum mechanical states.” On this principle, each element of the superposition of the composite system states pairs a definite observer state with a definite relative object-system state.

Everett (1957) argues that after measurement the observer and object system, entangled as a composite whole, split into branches, each representing a distinct outcome:

With each succeeding
observation (or interaction), the observer state ‘branches’ into a number of
different states. Each branch represents a different outcome of the measurement
and the *corresponding *eigenstate for the object-system state. All
branches exist simultaneously in the superposition after any given sequence of
observations.

This clearly suggests that each observer-object system state corresponds to a possible measurement outcome, and allows for the simultaneous existence of multiple observer-object system states and outcomes, each pair being elements of the superposition of the composite system states. Yet for the observer each branch exists independently of every other branch, and consequently the observer is not aware of any splitting.

Everett’s formulation is wide open to many interpretations. We will examine just one. The most popular interpretation of Everett’s concept is the **Many-Worlds Interpretation (MWI)** presented in Bryce DeWitt’s 1971 article, “The Many-Universes Interpretation of Quantum Mechanics.” DeWitt describes the view thus:

[Everett's interpretation of quantum mechanics] denies the existence of a
separate classical realm and asserts that it makes sense to talk about a state
vector for the whole universe. This state vector never collapses and hence
reality as a whole is rigorously deterministic. This reality, which is
described *jointly* by the dynamical variables and the state vector, is
not the reality we customarily think of, but is a reality composed of many
worlds. By virtue of the temporal development of the dynamical variables the
state vector decomposes naturally into orthogonal vectors, reflecting a continual
splitting of the universe into a multitude of mutually unobservable but equally
real worlds, in each of which every good measurement has yielded a definite
result and in most of which the familiar statistical quantum laws hold (1973, p. v).

By this many-worlds theory, all the outcomes of the measurement process exist at the same time. Each time a measurement is made, the observer and the object-system state split off into different worlds, one for each outcome.

DeWitt admits to being unnerved by this counterintuitive splitting of the world:

I still recall vividly the shock I experienced on first encountering this multiworld concept. The idea of 10^100 slightly imperfect copies of oneself all constantly splitting into further copies, which ultimately become unrecognizable, is not easy to reconcile with common sense. Here is schizophrenia with a vengeance (1973, p. 161).

To replace the collapse dynamics of the standard theory, the many-worlds theory requires that there be a preferred basis upon which the observable is made determinate. Barrett sees great difficulty in selecting a physical **preferred basis**:

Among the facts that one would want to have determinate are the values of our measurement records. But saying exactly what a preferred basis must do in order to make our most immediately accessible measurement records determinate is difficult since this is something that ultimately depends on the relationship between mental and physical states and on exactly how we expect our best physical theories to account for our experience. The preferred basis problem involves quantum mechanics, ontological questions concerning the philosophy of mind, and epistemological questions concerning the nature of our best physical theories. It is, consequently, a problem that requires special care.

Lev Vaidman (2002), however, argues that the preferred basis issue need not arise because we human observers and our knowledge of the world define the preferred basis:

Indeed, the mathematical structure of the theory (i) [the many-worlds theory] does not yield a particular basis. The basis for decomposition into worlds follows from the common concept of a world according to which it consists of objects in definite positions and states ("definite" on the scale of our ability to distinguish them). In the alternative approach, the basis of a centered world is defined directly by an observer. Therefore, given the nature of the observer and given her concepts for describing the world, the particular choice of the decomposition (2) [the selected one] follows... If we do not ask why we are what we are, and why the world we perceive is what it is, but only how to explain relations between the events we observe in our world, then the problem of the preferred basis does not arise: we and the concepts of our world define the preferred basis.

From his point of view as a philosopher, David Papineau in
*David Lewis and Schrödinger’s Cat *looks favorably on the Everettian theory. In this
paper, Papineau defends Everett against one of his colleagues, the philosopher
David Lewis, who, like so many others, finds that MWI is beset with more
problems than the orthodox collapse theory. Papineau’s reading of Everett presents a much
less radical picture of the **no-collapse theory** (as he prefers to call MWI) than others’.

While Everett’s interpretation does
add extra ‘branches’ to the reality recognized by common sense, these additions
fall far short of Lewis’s multiplication of worlds. For a start, the extra ‘branches’ that Everett adds to reality
all lie within the actual world that evolves from the actual initial conditions
in line with the actual laws of physics—these branches by no means include all
possibilities. Moreover, Everett’s branches are best conceived, not as sunderings of the whole universe, but
rather as entities that spread out causally at finite speeds, ‘like ripples on a pond’, as Lewis puts it. For example,
in the Schrödinger’s Cat experiment, __first__ the photon branches into a deflected and undeflected version when it passes through the half-silvered mirror; __then__ the detector branches into a triggered and untriggered state when it interacts with the photon; __then__ the poison bottle branches into a smashed bottle and an unsmashed bottle under the influence of the detector; and so on, culminating in the cat branching into a live and dead cat, and the human
observer branching into a self who sees a live cat and a self who sees a dead cat.

Such a reading still leaves out the basis for deciding among the possible branches. To replace chance in the orthodox collapse theory, Everett might need such a basis, in the form of what is termed the ‘**intensity rule**’:
Lewis allows that Everettians might respond by adopting an ‘intensity rule’ to govern their expectations. Suppose that Everettians think of their non-collapsing branches as having differing ‘intensities’, corresponding to the squared amplitudes of those branches. Then they can proportion their expectations about the future directly to these intensities, even in the absence of any chances. For example, in the case of Schrödinger’s cat, if the squared amplitude of the branch where the cat is alive is 50%, and that of the branch where it is dead is 50%, then you should expect to see the cat alive to degree 50%, and expect to see it dead to degree 50%. The intensity rule thus offers Everettians an alternative route to the confirmation of quantum mechanics. If you proportion your expectations directly to the squared amplitudes, in line with the intensity rule, then once more you can regard quantum mechanics as confirmed if the outcomes you observe are the ones that you expected.

In a footnote to the above, Papineau brings us to the conclusion that Everett’s interpretation is no more radical than the orthodox collapse view: intensities coupled with our expectations play the same role that wave-function amplitudes play when coupled with the collapse postulate.
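The bookkeeping behind the intensity rule can be sketched in a few lines of Python (a toy simulation of our own, not anything from Papineau or Lewis; the branch amplitudes are hypothetical): proportion expectations to squared amplitudes and check that the observed frequency in repeated trials tracks the intensity.

```python
import random

a, b = 0.6, 0.8          # hypothetical branch amplitudes, a^2 + b^2 = 1
intensity_live = a ** 2  # 0.36: weight of the branch with the cat alive
intensity_dead = b ** 2  # 0.64: weight of the branch with the cat dead

# The intensity rule: expect each outcome to the degree of its intensity,
# exactly as orthodoxy expects outcomes to the degree of their chances.
random.seed(1)
trials = 100_000
observed = sum(random.random() < intensity_live for _ in range(trials))
frequency = observed / trials

# Barring an unlucky sample, the observed frequency lies close to the
# intensity, so the observer regards quantum mechanics as confirmed.
print(abs(frequency - intensity_live) < 0.01)  # True
```

The low-intensity "rogue" selves Papineau mentions correspond here to the statistically rare runs where the observed frequency strays far from 0.36.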

The no-collapse interpretation implies that, in addition to those (high intensity) later selves who
observe frequencies that confirm quantum mechanics, you will also have (low
intensity) later selves who observe rogue frequencies that disconfirm quantum
mechanics. This might seem worrying, but it is not clear that the no-collapse view has any more trouble with
knowledge of quantum-mechanical amplitudes than does orthodoxy. On any account of statistical inference,
there is always a danger of observing an improbable frequency in repeated
trials. Even so, orthodoxy takes observed frequencies to be evidence for corresponding chances (and hence
quantum mechanical amplitudes). Everettians can advise us to reason in just the standard way, modulo the
substitution of intensities for chances: infer that the intensity (and hence quantum mechanical amplitude) is
close to the observed frequency, and hope that you are not the victim of an
unlucky sample.

One last criticism of Everett’s relative-state formulation is philosophical: the Occam’s razor argument, which states that entities should not be multiplied beyond necessity. As Barrett puts it, Everett’s approach is “ontologically extravagant.” He wonders if the price of the proliferation of worlds to account for our experience with our own world is too high. But here Lev Vaidman disagrees. Vaidman believes quite to the contrary that the MWI is most economical as it has all the laws of the standard quantum theory without the baggage of the problematic collapse postulate.

Gell-Mann (pp. 141ff) prefers the expression “**many alternative histories of the universe**” to “many worlds.” A history is just a narrative of a time sequence of events, past, present, or future. It is the decoherence of histories that makes possible the domain of our daily experience, which he calls the quasiclassical domain.

While acknowledging the work of Everett as useful and important, Gell-Mann realizes that much still remains to be done. He deplores the ambiguity and gaps in Everett’s concept, which have led to confusing language in the interpretive literature. Specifically, he thinks it unfortunate to assert that “parallel universes are all equally real,” whereas a statement like “many histories, all treated alike by the theory except for their different probabilities” would be far less confusing.

**5.3 David Deutsch’s Multiverse**

In his review of Deutsch’s *The Fabric of Reality*, Bryce DeWitt writes: “Deutsch uses the word *multiverse* to refer to the reality described by the
quantum theory and has these words to say about it: ‘The quantum theory of
parallel universes is not the problem, it is the solution. It is not some
troublesome, optional interpretation emerging from arcane theoretical considerations.
It is the explanation--the only one that is tenable--of a remarkable and
counter-intuitive reality.’ This reviewer agrees completely with this statement
and only regrets that Deutsch limits his discussion essentially to the setting
dealt with by Hugh Everett in 1957, hence missing the understanding provided by
the more recent consistent-histories theory of complex quantum systems, which
explains not only the emergence of the classical world, and hence why we can
perform good interference experiments in the first place, but also the fact
that the generation of multiple worlds is far more common than the results of
interference experiments would suggest.”

In the book under review Deutsch identifies the four strands of the fabric of reality as quantum theory, the theory of evolution, epistemology, and the theory of computation. Using arguments drawn from two of these areas of reality, quantum theory and quantum computation, Deutsch forcefully expounds his view of the multiverse.

*The Four-Slit Experiment*

The experiment crucial to Deutsch’s argument for parallel universes is the four-slit experiment recounted in Section 4.1 above. To recapitulate what was described there, Deutsch’s experiment calls for a laser beam to be fired on an opaque barrier with four slits cut one tenth of a millimeter apart to reach a screen that shows an interference pattern of bright and dark fringes when the photon passes through the slits. Here is how Deutsch (1997, p. 43) sums up the result of the experiment:

We have found that when one photon passes through this apparatus,

- it passes through one of the slits, and then something interferes with it, deflecting it in a way that depends on what other slits are open;

- the interfering entities have passed through some of the other slits;

- the interfering entities behave exactly like photons…

- …except that they cannot be seen.

He calls these entities photons because that is what they are. He then temporarily distinguishes the **tangible** (visible) photons from the **shadow** (invisible) photons:

We have just inferred the existence of a teeming, hidden world of shadow photons. They are invisible, travel at the speed of light, bounce off mirrors, are refracted by lenses, and are stopped by opaque barriers or filters. Yet they do not trigger our most sensitive detectors. They reveal their existence only when they interfere with the tangible photons which they accompany, and create the telltale pattern of interference we see.

Quantum experiments confirm that interference occurs for other particles as well. Thus for every neutron there is a host of accompanying shadow neutrons, for every electron there is a host of accompanying shadow electrons, and so on. And the shadow particles are detected only indirectly, through interference with their tangible counterparts.

Tangible particles live in the universe we live in. They interact with one another, interfere with one another, obey physical laws, and are detectable by instruments and sensory organs. And though they are governed by the wave function and behave randomly as we have seen, they are part of physical reality.

Deutsch argues that for similar reasons, shadow particles form a parallel universe, for they too are affected by their tangible counterparts through interference. Furthermore, they are partitioned off among themselves in the same way that they are partitioned off from their tangible counterparts. Each of these parallel universes is similar in composition to the tangible one, and obeys the same laws of physics. The only difference is that each particle occupies a different position in each universe. We call all these universes the multiverse.

To reach the conclusion that the multiverse is roughly partitioned into parallel universes, Deutsch proposes an experiment. What happens at the quantum level to shadow photons when they strike an opaque object? They are stopped, of course, because interference stops when a barrier is placed in the paths of the shadow photons. Remember that since shadow photons do not interact with tangible matter, absorption by the tangible barrier is ruled out. We can verify this by measuring the atoms in the barrier or by replacing the barrier with a detector: they show no energy absorption or change of state unless a tangible photon strikes them. We conclude that shadow photons have no effect on tangible matter.

We can then say that shadow photons and tangible photons are affected identically when they reach the barrier, but that the barrier is affected differently by the two types of photons: it is not affected by shadow photons at all. We infer that some sort of shadow barrier exists at the same location as the tangible barrier. Clearly the shadow barrier is made of shadow atoms, just as their counterparts in the tangible world make up the tangible barrier. However, since there are vastly more shadow atoms than tangible atoms, the shadow barrier consists of only a tiny proportion of the shadow atoms available.

All matter and all physical processes in the shadow universes follow the same physical laws. Deutsch concludes,

In other words, particles are grouped into parallel universes. They are ‘parallel’ in the sense that within each universe particles interact with each other just as they do in the tangible universe, but each universe affects the others only weakly, through interference phenomena.

As a caveat, Deutsch clarifies that what he has described as ‘tangible’ and ‘shadow’ is only an artificial distinction. In fact, tangibility is relative only to the observer. An observer in a parallel universe is ‘tangible’ from her own perspective, and what she observes in her world is just as tangible to her as what we observe in our world is tangible to us. David Deutsch is explicit about this point:

When I introduced tangible and shadow photons I apparently distinguished them by saying that we can see the former, but not the latter. But who are ‘we’? While I was writing that, hosts of shadow Davids were writing it too. They too drew a distinction between tangible and shadow photons; but the photons they called ‘shadow’ include the ones I called ‘tangible’, and the photons they called ‘tangible’ are among those I called ‘shadow’.

Not only do none of the copies of an object have any privileged position in the explanation of shadows that I have just outlined, neither do they have a privileged position in the full mathematical explanation provided by quantum theory…

Many of those Davids are at this moment writing these very words. Some are putting it better. Others have gone for a cup of coffee.

It is clear that in Deutsch’s parallel universes, your doubles may behave differently than you do.

*Quantum Computation*

Within the last twenty years or so, quantum computing has mushroomed into a discipline devoted to utilizing quantum technology in the development of hardware, and quantum effects, such as entanglement and interference, to perform otherwise intractable calculations. Today centers for quantum computing research have been established around the world. Besides private and government laboratories and research centers, academic institutions spearhead the effort. In Canada, McGill University, the University of Toronto, and the University of Calgary are involved. In the United States, the California Institute of Technology, Stanford University, Columbia University, Harvard University, MIT, the University of California at Berkeley, Southern Illinois University, the University of Virginia, the State University of North Carolina, the City University of New York, and Portland State University, among others, are vigorously pursuing teaching and research in quantum computing. Worldwide, quantum computing research and development is undertaken in France, Germany, the United Kingdom, Austria, Finland, Ireland, Italy, Norway, the Netherlands, Poland, Slovenia, Spain, Sweden, Switzerland, the Russian Federation, China, Japan, Taiwan, Singapore, Hong Kong, India, Brazil, Australia, New Zealand, and South Africa. The race is on to develop the first operational quantum computer, which holds out the promise of great power and capability.

All current computers, also known as classical computers, are implementations of the prototype of a classical universal computing device called the Turing machine, named after the British mathematician Alan Turing (1912-1954). In the last twenty years, along with advances in quantum theory, quantum computing has become an increasingly important area of computation research. *A quantum computer uses uniquely quantum-mechanical effects, especially quantum interference, to perform tasks that lie beyond the reach and capability of even the most powerful classical computers.*

The Centre for Quantum Computing at Oxford University traces the beginning of quantum computation as follows:

The story of quantum computation started as early as 1982, when the physicist Richard Feynman considered simulation of quantum-mechanical objects by other quantum systems. However, the unusual power of quantum computation was not really anticipated until 1985 when David Deutsch of the University of Oxford published a crucial theoretical paper in which he described a universal quantum computer. After the Deutsch paper, the hunt was on for something interesting for quantum computers to do. At the time all that could be found were a few rather contrived mathematical problems and the whole issue of quantum computation seemed little more than an academic curiosity. It all changed rather suddenly in 1994 when Peter Shor from AT&T's Bell Laboratories in New Jersey devised the first quantum algorithm that, in principle, can perform efficient factorization. This became a `killer application' --- something very useful that only a quantum computer could do. Difficulty of factorization underpins security of many common methods of encryption; for example, RSA - the most popular public key cryptosystem which is often used to protect electronic bank accounts - gets its security from the difficulty of factoring large numbers. Potential use of quantum computation for code-breaking purposes has raised an obvious question.
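The "killer application" just mentioned rests on a classical reduction: Shor's quantum speedup targets a single subroutine, finding the order *r* of a number *a* modulo *N*. The rest of the recipe can be sketched classically in Python (a toy of our own with brute-force order-finding; the function names are illustrative, and it is precisely the order-finding step that a quantum computer performs efficiently):

```python
from math import gcd

def order(a, N):
    """Smallest r > 0 with a**r = 1 (mod N), found by brute force.
    This is the step Shor's quantum algorithm accelerates."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order(N, a):
    """Classical reduction used by Shor: factor N from the order of a."""
    g = gcd(a, N)
    if g != 1:                # lucky guess: a already shares a factor with N
        return g, N // g
    r = order(a, N)
    if r % 2:                 # need an even order; otherwise retry with new a
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:            # trivial square root of 1; retry with new a
        return None
    p = gcd(y - 1, N)
    return p, N // p

print(factor_via_order(15, 7))  # (3, 5): the order of 7 mod 15 is 4
```

For a 250-digit N the `order` loop above is hopeless on classical hardware, which is exactly why Shor's efficient quantum order-finding threatens factoring-based cryptography.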

The **RSA cryptosystem** is named after Ronald Rivest, Adi Shamir, and Leonard Adleman, who first proposed it in 1978. A public key
cryptosystem is any method of sending secret information where the sender and
recipient do not already share any secret information. Factorizing large numbers, such as a number
with 125 digits, is an intractable problem, in the sense that the computing
resources and time required are prohibitive. It has been estimated that factorizing a number with 25-digit
factors would occupy all the computers in the world for centuries. And the computer scientist Donald Knuth
has estimated that factorizing a 250-digit number using the most efficient algorithm available on a network
of a million computers would take over a million years. Pessimistic as the estimate may sound, it emphasizes
the intractability of the factorization problem.
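A toy RSA key built from deliberately tiny primes makes the dependence on factoring concrete. This is a sketch only, using small textbook-style numbers of our own choosing; real keys use primes hundreds of digits long, precisely so that recovering p and q from n is infeasible.

```python
# Toy RSA (requires Python 3.8+ for the modular inverse via pow).
p, q = 61, 53            # the secret primes
n = p * q                # 3233: public modulus; security rests on the
                         # difficulty of recovering p and q from n
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: e*d = 1 (mod phi)

m = 1234                 # a message encoded as a number < n
c = pow(m, e, n)         # encrypt with the public key (n, e)
print(pow(c, d, n))      # 1234: decrypting with d recovers the message
```

Anyone who can factor n = 3233 into 61 × 53 can recompute phi and hence the private exponent d, which is why efficient factorization would break RSA.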

As an introduction to **quantum computation**, the following quotation and other material from the Centre for Quantum Computing give us an idea:

Today's advanced
lithographic techniques can squeeze fraction of micron-wide logic gates and
wires onto the surface of silicon chips. Soon they will yield even smaller
parts and inevitably reach a point where logic gates are so small that they are
made out of only a handful of atoms. On the atomic scale matter obeys the rules
of quantum mechanics, which are quite different from the classical rules that
determine the properties of conventional logic gates. So if computers are to
become smaller in the future, new, *quantum* technology must replace or
supplement what we have now. The point is, however, that quantum technology can
offer much more than cramming more and more bits to silicon and multiplying the
clock-speed of microprocessors. It can support entirely new kind of computation
with qualitatively new algorithms based on quantum principles!

Unlike the classical digital computer, which uses binary bits (each bit can take only one of two values, 0 or 1) as data for computation, a quantum computer uses the **qubit** (for quantum bit, pronounced queue-bit). In classical computers, the voltage
between the plates in a capacitor represents a bit of information: a charged
capacitor denotes bit value 1 and an uncharged capacitor bit value 0. Another
way of encoding bits is by using two different polarizations of light or two
different electronic states of an atom.

If we choose an atom as a physical bit then besides the two distinct electronic states the
atom can be also prepared in a *coherent superposition* of the two states, i.e., the
atom is *both* in state 0 and state 1. Let us see how this can happen.

We reflect a single photon off a
half-silvered mirror, i.e., a mirror which reflects exactly half of the light
which impinges upon it while the remaining half is transmitted directly through
it. It seems reasonable to assume that the photon is *either in the
transmitted or in the reflected beam* with the same probability. Indeed, if
we place two photodetectors behind the half-silvered mirror in direct lines of
the two beams, the photon will be registered with the same probability either
in the detector 1 or in the detector 2. Instead of taking one path, the photon
takes two paths at once! This can be demonstrated by recombining the two beams
with the help of two fully silvered mirrors (capable of reflection only) and
placing another half-silvered mirror at the beams’ meeting point, with two
photodetectors in direct lines of the two beams. With this setup we can observe
a truly amazing quantum interference phenomenon.

If the two possible paths are
exactly equal in length, then there is a 100% probability that the photon
reaches the detector 1 and 0% probability that it reaches the other detector 2.
Thus the photon is certain to strike the detector 1! It seems inescapable that
the photon must, in some sense, have actually traveled both routes at once, for
if an absorbing screen is placed in the way of either of the two paths A or B,
it is equally probable that detector 1 or 2 is reached. *Blocking off one of the paths allows detector 2 to be reached, but with both paths open, the photon somehow knows it is not permitted to reach detector 2; so it must have actually felt out both routes.* It is therefore
legitimate to say that between the two half-silvered mirrors the photon took
both the transmitted and the reflected paths or, using more technical language,
we say that the photon is in a *coherent superposition of being in the
transmitted beam and in the reflected beam*. By the same token an atom can
be prepared in a superposition of two different electronic states, and in
general a quantum two-state system, called a quantum bit or a qubit, can be
prepared in a superposition of its two logical states 0 and 1. Thus *one
qubit can encode both 0 and 1 simultaneously.*
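The interferometer's behavior can be checked with ordinary complex-amplitude arithmetic, without appealing to any interpretation. Below is a minimal sketch; the factor *i* picked up on reflection is a standard beam-splitter convention assumed for this illustration.

```python
import math

# 50/50 beam splitter acting on a pair of path amplitudes (A, B).
# Convention assumed here: reflection multiplies the amplitude by i.
def beam_splitter(a, b):
    s = 1 / math.sqrt(2)
    return s * (a + 1j * b), s * (1j * a + b)

# A single photon enters along path A.
a, b = 1, 0

# Both paths open: two beam splitters in sequence (the fully silvered
# mirrors add only a common phase, which cancels in the probabilities).
a1, b1 = beam_splitter(a, b)
a2, b2 = beam_splitter(a1, b1)
print(abs(a2) ** 2, abs(b2) ** 2)   # 0.0 and 1.0: every photon hits one detector

# Block path B between the two beam splitters (its amplitude is absorbed).
a3, b3 = beam_splitter(a1, 0)
print(abs(a3) ** 2, abs(b3) ** 2)   # 0.25 and 0.25: equal odds at each detector
```

Which detector receives probability 1 depends on the phase convention; the point is the pattern described above: with both paths open, every detection lands in a single detector, while blocking one path restores equal odds among the photons that are detected.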

Deutsch explains the above experiment in terms of parallel universes. A single photon enters the interferometer (the apparatus) from the left. The photon and its counterparts in all the universes are traveling toward the first half-silvered mirror at the same time. At this point the universes are identical. When the photon strikes the first half-silvered mirror, the universes become differentiated. In half of them the photon bounces off (is reflected) toward fully silvered mirror 1; in the other half the photon passes through (is transmitted) toward fully silvered mirror 2. The two versions of the photon then bounce off their respective fully silvered mirrors, and if all the paths are of equal length, the copies of the photon arrive at the second half-silvered mirror at the same time and interfere with each other. The photon exits along Path A and is detected by detector 1 in all universes. In none of the universes is the photon reflected or transmitted along Path B. The universes were differentiated, and interfered with one another, for only a minuscule fraction of a second. When Path A is blocked, the photon exits along Path B and is detected by detector 2. To Deutsch this non-random interference phenomenon is just as inescapable a piece of evidence for the existence of the multiverse as is the phenomenon of the shadow particles we saw earlier.

This and other extraordinary discoveries, viz. coherent superposition, decoherence, and quantum interference, lie at the heart of quantum computation. Although the hurdles in the science and technology required to build quantum computers are horrendous, the lure of their power and potentialities is irresistible. Deutsch believes building a universal quantum computer is beyond today’s technology. To detect interference, it is necessary to set up appropriate interactions among all the variables that are different in the universes that contribute to the interference. With a large number of interacting particles it is extremely hard to engineer the interaction that would display the interference, which is the result of the computation. At the atomic level, it is very difficult to prevent interference from affecting the environment. If the atoms undergoing interference affect the environment, the results of computations are leaked out of the system, and become useless. This is called decoherence. Therefore, the challenge is to develop sub-microscopic systems of variables in a way that they interact among themselves, with only minimal effect on the environment.

In 1989, Charles Bennett of IBM Research and Gilles Brassard of the University of Montreal built the first working special-purpose quantum computer consisting of a pair of cryptographic devices. In their quantum cryptosystem, messages are encoded in the states of individual laser-emitted photons. The system is based on the properties of the interference phenomenon, and can be built with current technology because its quantum computation is performed on only one photon at a time.

On the power of a quantum computer Deutsch says, “In 1996, the computer scientist Lov Grover discovered a quantum algorithm - a way to program a quantum computer - that could try out a million possibilities in only a thousand times the time needed to try one, and a trillion possibilities in only a million times the time of one, and so on, without limit… Cryptographic systems which themselves use quantum computation are already commonplace in laboratories, heralding the development of communication that is both perfectly secure - even against quantum attack - and immune to future advances in mathematics or technology.”
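The scaling Deutsch quotes is the square-root law of Grover's algorithm: searching *N* possibilities costs on the order of √N quantum steps rather than *N*. The arithmetic behind the quote can be checked directly; this is a back-of-the-envelope sketch that ignores constant factors.

```python
import math

# Grover search cost grows as the square root of the number of possibilities.
for n in (10 ** 6, 10 ** 12):
    grover_steps = math.isqrt(n)
    print(f"{n:>13} possibilities -> ~{grover_steps} times the cost of trying one")
# A million possibilities cost about a thousand trials' worth of time,
# and a trillion about a million, matching Deutsch's figures.
```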

**Shor’s algorithm** makes factorization tractable. Deutsch
claims that when a quantum factorization engine, such as one running Shor’s
algorithm, is factorizing a 250-digit number, the number of interfering
universes will be on the order of 10^{500}.

So convinced is Deutsch of Shor’s algorithm’s vast use of parallel resources not available in the visible universe that he issued a challenge to readers to explain how, and where, Shor’s algorithm performs its computation. As he said, any interference phenomenon destroys the classical idea that there is only one universe.
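Shor's algorithm itself is quantum, but the classical reduction it relies on fits in a few lines: factoring *N* reduces to finding the order *r* of a base *a* modulo *N*. The sketch below finds the order by brute force, which is exactly the exponentially expensive step the quantum part of Shor's algorithm replaces; the choice N = 15, a = 7 is an illustrative assumption.

```python
from math import gcd

def factor_via_order(N, a):
    """Shor's classical reduction: if the order r of a mod N is even and
    a^(r/2) != -1 (mod N), then gcd(a^(r/2) +/- 1, N) are factors of N.
    The loop below finds r by brute force -- the search a quantum
    computer performs efficiently."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
        return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
    return None  # unlucky base: try another a

print(factor_via_order(15, 7))   # -> (3, 5)
```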

Although no one has yet been able to construct a universal quantum computer, theoretical physicists already have some idea, at least in principle, of how many components there are and how complicated they must be. What’s encouraging is that just about any interaction between two information-carrying systems, such as particles, atoms, or electrons, will do. Optimists are talking in terms of decades, not centuries, before a quantum computer will have been developed that can do meaningful computational work. Others, like Rolf Landauer of IBM Research, are pessimistic about the prospect of progressing, within a few years, beyond the construction of two- and higher-bit quantum gates, the quantum counterparts of the logic gates in classical computers.

In any case, David Deutsch has this to say about his view of the multiverse, and how this idea meshes with quantum computing.

A growing minority of physicists, myself included, accept the “many universes” interpretation of quantum mechanics. We concluded that what we observe as a single particle is really one of countless similar entities in different universes, subtly affecting each other through a process called “quantum interference.” To us, no mystery exists in quantum computation, only wonder.

Quantum computing, according to this view, is
possible because a quantum computer performs vast numbers of separate
computations in different universes and then shares the results through quantum
interference. The equations of quantum theory describe this phenomenon step by
step. But because such information sharing is achievable *only* through
quantum interference, these same equations also drastically limit the types of
task that quantum computation should be able to perform, or speed up.

Deutsch’s challenge did not go
unheeded. In his 2002 article *A Quantum Computer Needs Only One Universe*, Deutsch’s colleague
A. M. Steane, of Oxford University and the Centre for Quantum Computing, presents seven reasons
(remarks, as he calls them) why quantum computing needs no concept of the multiverse. Steane’s basic
argument is that, with respect to Shor’s algorithm, vast parallel computations are the result of a
false impression produced by a flawed mathematical notation. Since the idea of vast computation could
only be true in a highly qualified sense, and there is independent evidence to
suggest that vast computation is not taking place, the impression is only an
artifact of the mathematical notation.

As for the question of where the computations take place, Steane replies, “In the region of spacetime occupied by the quantum computer.” He concedes that the evolution of the quantum computer is a subtle and powerful process, and one may not resist invoking the concept of parallel universes because of the implication of computational power, which is to him not present in quantum computing.

Steane believes that an algorithm’s efficiency must be measured against memory resources and time used, i.e., in the direction of less processing, and not in that of the same amount of processing performed in parallel. On the parallel processes, he argues that, “The different ‘strands’ and ‘paths’ in a quantum computation, represented by the orthogonal states which at a given time form, in superposition, the state of the computer (expressed in some product basis), are not independent, because the whole evolution must be unitary.” In other words, in a quantum computer only one process is going on, not different processes, and by implication there is nothing at all like parallel processing in parallel universes.

If interference is crucial in Deutsch’s view of quantum computation, Steane sees
*entanglement* as equally essential in his. As he formulates
in his so-called ‘interpretational view’: “A quantum computer can be more
efficient than a classical one in generating some specific computational
results, because quantum entanglement offers a way to generate and manipulate a
physical representation of the correlations between logical entities, without
the need to completely represent the logical entities themselves.” And logical entities, he means to say, are
typically integers. Since entanglement is written as a sum of terms, not as a product, and since the linearity of
quantum mechanics dictates that subsequent unitary operations cause the terms
to evolve independently, the idea of multiple parallel processes, false as it
may be, becomes an attractive one.

To understand what **entanglement**
is, let us quote Jeffrey Bub: [imagine] “two particles are prepared from a source in a certain quantum state
and then move apart. There are ‘matching’ correlations between both the positions
of the two particles and their momenta: a measurement of either position or
momentum on a particular particle will allow the prediction, with certainty, of
the outcome of a position measurement or momentum measurement, respectively, on
the other particle. These measurements are mutually exclusive: either a
position measurement can be performed, or a momentum measurement, but not both
simultaneously. Either correlation can be observed, but the subsequent
measurement of momentum, say, after establishing the position correlation, will
no longer yield any correlation in the momenta of the two particles. It is as
if the position measurement disturbs the correlation between the momentum values.”

Furthermore, “if two particles are prepared in a quantum state such that
there is a matching correlation between two ‘canonically conjugate’ dynamical
quantities, like position and momentum, then there are infinitely many
dynamical quantities of the two particles for which there exist similar
matching correlations: every function of the canonically conjugate pair of the
first particle matches with the same function of the canonically conjugate pair
of the second particle.” This connection between two quantum systems is called *entanglement*.

Since entanglement can persist over long distances, physicists, computer scientists, and cryptographers have begun to exploit these non-classical, non-local correlations of entangled quantum states in the new field of quantum computation.
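Bub's correlations can be mirrored in the simplest discrete setting, a two-qubit Bell state. This is a qubit stand-in for the continuous position/momentum case, an illustrative substitution rather than the EPR setup itself.

```python
import math

# Bell state (|00> + |11>)/sqrt(2); amplitudes in basis order 00, 01, 10, 11.
s = 1 / math.sqrt(2)
bell = [s, 0.0, 0.0, s]

# Measuring both qubits in the standard basis: the outcomes always match.
probs = [abs(amp) ** 2 for amp in bell]
print(probs)   # ~[0.5, 0, 0, 0.5]: only the matching outcomes 00 and 11 occur

def hadamard_on_both(state):
    """Apply H (tensor) H, i.e. rotate both qubits to a different basis.
    Matrix element: (H x H)[i][j] = 1/2 * (-1)^(i1*j1 + i0*j0),
    where i1, i0 are the two bits of index i."""
    return [sum(0.5 * (-1) ** ((i >> 1) * (j >> 1) + (i & 1) * (j & 1)) * state[j]
                for j in range(4))
            for i in range(4)]

# The Bell state is unchanged by this joint basis rotation, so the matching
# correlation persists in the new basis too -- a finite-dimensional echo of
# the "infinitely many matching quantities" in Bub's description.
rotated = hadamard_on_both(bell)
print(rotated)
```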

To conclude, we note that the idea of parallel universes is still very controversial in quantum mechanics as well as in quantum computing. However, controversy and debate are necessary in scientific investigations, as they are healthy signs of freedom of thought and expression, without which the human spirit and genius wither and atrophy.

**5.4 Max Tegmark’s Parallel Universes**

In *Parallel Universes*, an article of January 23,
2003, to appear in *Science and Ultimate Reality: From Quantum to Cosmos*,
edited by J. D. Barrow, P. C. W. Davies, and C. L. Harper, Cambridge
University Press (2003), Max Tegmark presents an overview of parallel
universes, along with what constitutes evidence for them, what they contain,
and how the hypotheses may be falsified.

Tegmark launches his argument for
the existence of parallel universes with a startling assertion: the simplest and most popular cosmological
model today predicts that another copy of you actually exists in a galaxy about
10^{10^{29}} (that’s 10 to the power of 10^{29}, or 1 followed
by a hundred thousand trillion trillion zeros) meters from Earth, far beyond
any visible objects in the *Hubble volume*, i.e., our universe, the sphere
whose radius of 4 x 10^{26} meters delimits the observable universe
since the Big Bang expansion began some 15 billion years ago. Furthermore, your alter ego exists in a
sphere of the same size as ours, but centered over there. This is the simplest example of the multiverse.

Multiverse models rest on the foundations of modern physics, and if they seem at times to touch the province of metaphysics, it is because modern theories become ever more abstract. The line of demarcation between physics and metaphysics lies not in abstraction but in experimental verifiability, empirical falsifiability, and predictive power. As of this writing, no proponent of the multiverse position has been able to definitively convince the larger physics community. But if history is any indication, radical new ideas tend to meet one of three fates: falling by the wayside, hovering in zombie-like limbo, or rising to respectability.

Tegmark distinguishes four distinct types of parallel universes under discussion in the literature. Level I Universe (the least controversial) is the distant universe beyond our cosmic horizon that obeys the same laws of physics as ours but with different initial conditions. At Level II are other post-inflation bubbles, with the same fundamental equations of physics but perhaps different constants, particles and dimensionality. Level III is the domain of the many-worlds of quantum physics with the same features as Level II. Finally Level IV universes consist of other mathematical structures with different fundamental equations of physics. We examine each level in turn.

**Level I: Beyond our Cosmic Horizon**

Level I assumes that space is flat (has the familiar
Euclidean geometry) and infinite, and the distribution of matter is
sufficiently uniform on large scales. In fact, *Ω* (Omega), the ratio of the average
density *ρ* (the Greek letter rho) of all forms of matter and energy
to the critical density *ρ_{c}* (rho critical) in the universe,
*Ω = ρ / ρ_{c}*, determines the geometry of space.
If Omega is greater than 1, the universe has positively curved or *spherical geometry*, where the sum of
the angles of a triangle is greater than 180˚, and extended parallel lines intersect. Because of its positive
curvature, spherical geometry is finite, and has a definite size but no edge, just like the earth. A traveler on a
straight path will eventually come back to her starting point. The positively curved (closed) universe will
eventually stop expanding, and will begin to contract. Omega will then go to infinity. The universe will
collapse upon itself in a Big Crunch, the “death by fire.”

If Omega is less than 1, space is negatively curved and has *hyperbolic* (saddle-shaped)
geometry, in which case the universe will keep expanding, and the density will decrease faster
than critical density. Hyperbolic geometry is infinite since extended parallel lines in it diverge and never
meet. In this geometry the sum of the angles of a triangle is less than 180˚. This open universe
eventually grows cold and meets its “death by ice” in a Big Chill.

Our universe appears flat, with Omega near 1. It thus seems that our universe will expand forever (the Big Bore). It has the Euclidean geometry that we learn in school. Our universe has no boundary, and is infinite. A traveler heading straight out will never return to her starting point.
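The three cases can be summarized in a few lines of code, a plain restatement of the discussion above; the flatness tolerance is an arbitrary illustrative choice.

```python
def geometry_and_fate(omega, tol=0.01):
    """Classify the universe's geometry from Omega = rho / rho_critical."""
    if abs(omega - 1) < tol:
        return "flat (Euclidean): expands forever -- the Big Bore"
    if omega > 1:
        return "spherical (closed): recollapses -- the Big Crunch"
    return "hyperbolic (open): grows cold -- the Big Chill"

print(geometry_and_fate(1.0))   # our universe, with Omega near 1
print(geometry_and_fate(1.5))   # positively curved, closed
print(geometry_and_fate(0.3))   # negatively curved, open
```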

If space is infinite and matter is distributed everywhere, then, Tegmark claims, even the most unlikely events must take place somewhere. Thus there are an infinite number of regions as large as our universe with all sorts of cosmic histories. And there are an infinite number of planets bearing infinitely many copies of us, identical in every way, going about their business just like us.

*Evidence for Level I Parallel Universes*

This model is called Level I multiverse, the simplest and most accepted model, since it conforms to the concordance model of cosmology.

The flatness of our universe and its expansion are corroborated by its current
growth rate of a light-year every year and an Omega near 1. We cannot see objects beyond the cosmic
particle horizon, the distance that light has traveled since the Big Bang, which delimits the observable
universe, because their light has not had enough time to reach us. But we know that they are there, like ships
beyond the horizon. The radius of the observable universe is on the order of 10^{28} centimeters.

This view conflicts with Einstein’s theory of gravity, which allows space to be finite and
curved like a sphere or a doughnut, so that a traveler starting from any point will
eventually reach her original point of departure from the opposite direction. Accepted evidence of the
Einsteinian curvature of spacetime exists in a phenomenon called *gravitational
lensing*, where a light ray from a distant star “bends” near the sun because
the sun distorts the spacetime geometry around it by creating a warp. In warped spacetime, a light ray path
appears curved to us. Though we do not yet know for sure whether the universe is closed like a sphere, we
know that it expands, as astronomers found that galaxies (whose stars are bound together by
gravity) move away from one another, governed by *Hubble’s law*,

*H = v / d*

*H*, the *Hubble constant*, represents the slope of a line on a plot of radial
velocity (the galaxy’s velocity relative to us) versus distance. The greater the slope, the larger the value of
*H*, and the greater the expansion rate. The value *v* is the *recessional (radial) velocity*
in km/s, and *d* indicates the distance in Mly (million light-years). Today astronomers believe
that *H* is between 15 and 27 km/s/Mly, with an average of about 20 km/s/Mly. This finding shows
that our universe, far from being static, as Einstein believed, is an expanding one.
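As a quick numerical illustration of Hubble's law, using the average value of about 20 km/s/Mly quoted above; the example galaxy distance is an arbitrary assumption.

```python
H = 20.0  # Hubble constant in km/s per Mly, the average quoted in the text

def recession_velocity(distance_mly):
    """Hubble's law, v = H * d: recessional velocity in km/s
    for a galaxy at the given distance in millions of light-years."""
    return H * distance_mly

# A galaxy 100 million light-years away recedes at about 2,000 km/s.
print(recession_velocity(100))   # -> 2000.0
```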

Another piece of evidence for the Level I model resides in cosmic microwave background radiation (CMB or CBR). This radiation comes from all directions in space in the form of microwaves with wavelengths in the millimeter to centimeter range. Following the Big Bang, intense heat breaks atoms down into protons and electrons, and matter is completely ionized. Matter is so dense that photons can travel only short distances before they are absorbed, leaving the universe completely opaque to its own radiation. It therefore acts like a blackbody, whose radiation exhibits a characteristic spectral shape. As the universe cools, protons and electrons coalesce into hydrogen and helium, making the universe transparent to most light. The blackbody radiation, freed from matter, has continued its expansion through space until now.

Observational results from the Cosmic Background Explorer (COBE) satellite support the notion that CMB is very isotropic (uniformly filling all space) and cosmic (coming from all directions of space). CMB allows us to glimpse our young universe (hence it is sometimes called “fossil radiation”), which we conclude is very closely isotropic. The thermal and isotropic nature of CMB is consistent with the formation of galaxies and the large-scale structure in their distribution. And although observed anisotropy disturbs this picture by raising issues about the formation of structures in galaxies, scientists consider the universe isotropic, as “lumps” such as galaxies do not detract from the uniformity and homogeneity of the present universe, just as mountains on Earth as seen from space hardly mar the image of Earth as smooth and regular.

Tegmark mentions that the cosmological theory of inflation supports the spatially
infinite universe. Andrei Linde (1994) in his *The Self-Reproducing Inflationary Universe*
proposes a theory of inflation that addresses most of the issues raised by the
Big Bang theory. Linde’s theory hinges crucially on the presence of **scalar fields** in the
early universe, and has no need for quantum gravity effects, phase transitions, supercooling,
or even the assumption of an initially hot universe. Inflation also occurs in a wide class of theories of
elementary particles.

Inflation is responsible for the
rapid expansion of the universe in a split second after the Big Bang. Occurring between 10^{-43}
and 10^{-35} second, inflation brought the universe from a size of 10^{-33} centimeter to
a size of 10^{10^{12}} centimeters (1 followed by one trillion zeros). And
scalar fields provide the mechanism that generates this rapid inflation. With scalar fields in widespread use in
particle physics, one can construct a theory of inflation that resolves the problems associated with the Big Bang.

Scalar fields fill the universe and affect the properties of elementary particles. If a
scalar field interacts with the *W* and *Z* particles, they become heavy. Photons,
which scalar fields do not interact with, remain light. Physicists use scalar fields to unify the
weak and electromagnetic interactions into the electroweak force. Scalar fields contribute to the expansion of
the universe with their potential energy. The larger the scalar field, the greater its potential energy. It is
the presence of this energy, according to Einstein’s theory of gravity, that must have caused the universe
to expand rapidly. Eventually, as the scalar field reaches its minimum level of potential energy, the expansion
slows down. This self-sustaining rapid expansion lasts about 10^{-35} second, but is enough to
inflate the universe to its tremendous size given in the previous paragraph.

How does the inflationary theory work? Linde says to consider all
possible kinds and values of scalar fields and see whether any of them lead to inflation. Those domains where
inflation takes place become exponentially large and quickly fill all space. Because scalar fields can take
arbitrary values, Linde calls the scenario **chaotic inflation**. Since the rapid expansion of
the universe tends to erase all memory of the initial conditions, the theory’s predictions do not depend on the details of how the universe began.

The theory of inflation makes predictions that are testable. First, inflation predicts that the universe is extremely flat. Because the density of a flat universe is related to its rate of expansion, the theory is easily verifiable. Observational data available so far are consistent with this prediction.

Another prediction concerns the presence today of *density perturbations*
produced during inflation. These perturbations affect the distribution of matter, and may be accompanied
by *gravitational waves*. Both the perturbations, known as “ripples in space,” and the gravitational
waves affect cosmic background radiation, and introduce slight temperature differences in different regions
of space. The COBE data have indeed revealed these irregularities, which later experiments have
also confirmed.

Finally, with regard to the uniform matter distribution on large scales, Tegmark, who
did a large amount of work on CMB, proposes this experiment.
Imagine placing a large sphere of radius *R* at random places in
the universe, measuring how much mass is enclosed each time, and computing the
variation *ΔM* between the measurements. He continues:

The relative
fluctuations *ΔM/M* have been measured to be of order unity on the
scale *R* ~ 3 x 10^{23} m, and dropping on larger scales. The Sloan Digital Sky Survey has
found *ΔM/M* as small as 1% on the scale *R* ~ 10^{25} m, and cosmic
background measurements have established that the trend towards uniformity continues all
the way out to the edge of our observable universe (*R* ~ 10^{27}
m), where *ΔM/M* ~ 10^{-5}.

The *Sloan Digital Sky Survey*
(SDSS), one of the most ambitious projects in astronomy, maps in detail
one-quarter of the entire sky, to determine the positions and absolute
brightnesses of more than 100 million celestial objects, and to measure the
distances to more than a million galaxies and quasars. This mapping will help cosmologists review
and sharpen their theories in light of the data collected.

Tegmark concludes that “space as we know it continues far beyond the edge of our observable universe, teeming with galaxies, stars and planets.”

*What are Level I parallel universes like?*

A physics description contains two parts: the initial conditions and the laws of physics
specifying how the initial conditions evolve. Level I
universes live under the same laws of physics as we do, with different initial
conditions (densities, motions of different types of matter), created at random
by quantum fluctuations during the inflationary epoch. The conditions produce density fluctuations
termed *ergodic*, in the sense that “if you imagine generating an ensemble
of universes, each with its own random initial conditions, then the probability
distribution of outcomes in a given volume is identical to the distribution
that you get by sampling different volumes in a single universe. In other
words, it means that everything that could in principle have happened here did
in fact happen somewhere else.”

Tegmark offers staggering numbers and estimates of where Level I parallel universes lie:

A crude estimate
suggests that the closest identical copy of you is about ~ 10^{10^{29}}
m away. About ~ 10^{10^{91}} m away, there should be a sphere of radius 100 light-years
identical to the one centered here, so all perceptions that we have during the next century will be
identical to those of our counterparts over there. About ~ 10^{10^{115}} m away, there should
be an entire Hubble volume identical to ours.

On what does Tegmark base these estimates? The figures come
from counting all the quantum states that a Hubble volume (the volume
of our universe) can have at a temperature no hotter than 10^{8}
K. The number 10^{115} is the number of protons the *Pauli exclusion principle* allows us
to pack into a Hubble volume. This principle states that *no two identical fermions can occupy
the same quantum state*; for electrons in an atom, it means no two can have the same four quantum numbers.
(Recall the quantum numbers in Section 4.5 above.) Again, according to Tegmark, since each of the
10^{115} slots can be filled or not filled by a proton, we have *N* = 2^{10^{115}}
~ 10^{10^{115}} possibilities, so the distance to the nearest identical
Hubble volume is *N*^{1/3} ~ 10^{10^{115}} Hubble radii ~
10^{10^{115}} meters. And your nearest copy is likely to be much closer than 10^{10^{29}}
meters: Tegmark estimates that our Hubble volume contains at least 10^{20} habitable planets.
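The double-exponential bookkeeping here is easy to verify: with 10^{115} slots each either filled or empty, N = 2^{10^{115}}, and converting the base from 2 to 10 barely changes the outer exponent. The sketch below checks the arithmetic, nothing more.

```python
import math

slots = 10 ** 115                 # proton slots in a Hubble volume
# N = 2**slots is far too large to compute directly, so work with log10(N):
log10_N = slots * math.log10(2)   # about 3.01e114
print(math.log10(log10_N))        # about 114.5, i.e. N ~ 10^(10^115)

# Taking the cube root divides log10(N) by 3 -- which leaves the outer
# exponent essentially unchanged, so N^(1/3) is still ~ 10^(10^115).
print(math.log10(log10_N / 3))    # about 114.0
```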

*Testing the multiverse theory*

A theory that is testable may contain unobservables so long as it makes testable predictions. Tegmark claims that the Level I multiverse framework is routinely used to rule out theories in modern cosmology, though this is rarely acknowledged explicitly. For example, our CMB maps show observed hot and cold spots too large to be consistent with the once-popular open universe model. “When cosmologists say that the open universe model is ruled out at 99.9% confidence, they really mean that if the open universe model were true, then fewer than one out of every thousand Hubble volumes would show CMB spots as large as those we observe.” Therefore, the entire model is unsupported. Multiverse theories can be falsified only if they predict what the ensemble of parallel universes is and specify a probability distribution over it.

**Level II: Other Post-Inflation Bubbles**

Predicted by the chaotic theory
of inflation, the Level II multiverse consists of a series of bubbles lying
beyond our own universe, as described by Linde. We can treat quantum fluctuations of the
scalar field as waves moving about in all directions then freezing on top of
one another. As the frozen wave increases the scalar field, a rare but possible scenario, it produces a
domain with added energy that exponentially expands with ever increasing speed, just like what
happens shortly after the beginning. Another universe has just formed.
If this scenario repeats itself, as nothing in the inflationary theory seems to prevent it, we will be looking
at universes spawning bubbles that spawn other bubbles *ad infinitum*.
And the total volume of this multiverse keeps growing without end. These domains are infinitely far away,
and we can never reach them even if we travel at the speed of light, because the space
separating us from them is still expanding faster than light can traverse it.

*Evidence for Level II parallel universes*

The latest cosmological view of the evolution of the universe now rests crucially on the theory of chaotic inflation. In the 1970’s the Big Bang theory, which laid a fairly satisfying explanatory foundation for the question of how the universe began, left some persistent problems. The Big Bang theory works only with carefully chosen initial conditions, and when the universe has cooled enough for physical processes to be well established. It does not deal with the earlier epochs of the primordial fireball, when the temperatures were extremely high. The first enigma was the Big Bang itself. Did anything exist before the Big Bang? If nothing existed before, how did spacetime come about? Could everything come into being from nothing? Which came first, the universe or the laws of physics that govern its evolution? Explaining the beginning of the universe still remains the toughest problem for scientists today.

The second is the *monopole problem.* The standard Big Bang
theory and the modern theory of elementary particles together predicted the
existence in the early stages of the evolution of the universe of superheavy
particles carrying only one magnetic charge, known as *magnetic monopoles*,
with typical mass 10^{16} times that of the proton. These particles would be theoretically as
abundant as protons, making the density of matter in the universe about 15
orders of magnitude greater than its present value of 10^{-29} gram per
cubic centimeter.

The third concerns the *flatness of the universe*. According to
general relativity, space may be curved, with a typical radius on the order of the *Planck length*,
*l* ~ 10^{-33} centimeter. Yet the evidence for the flatness of the universe abounds, and the
observable universe has a radius of 10^{28} centimeters, a difference of more than
60 orders of magnitude.

Another discrepancy between
theory and observation relates to the *size of the universe*. Observations show that our part of the
universe contains about 10^{88} elementary particles, which is a huge
number. Yet standard-theory calculations, using an initial universe of Planck length and an initial density
equal to the Planck density, result in a tiny universe large enough to hold at most
10 elementary particles, not even enough for an average person, who contains ~ 10^{29}
elementary particles.

Then there is the so-called *horizon problem*. Observations show that the
CMB temperature is a uniform 2.74 K throughout space. Because the horizon
(the distance light could travel) before the release of cosmic microwave
radiation was small relative to today’s horizon distance, there was no way
vastly separated regions of space could have come into contact with one another
to establish thermal equilibrium.

The *distribution of matter* in the universe raises another thorny problem.
How can the universe be so homogeneous on large scales and yet lumpy at the same time? The
homogeneity lies in the fact that, across more than 10 billion light-years, the matter distribution
deviates from perfect homogeneity by less than one part in 10,000. For an explanation, the hot Big Bang
theory introduces the *cosmological principle*, which states that the universe
should look the same to all observers, i.e., the universe must be homogeneous and isotropic. But Linde
(1994) says that still does not explain the lumpiness in the form of stars, galaxies, and
other agglomerations of matter.

Finally, there is what Linde calls the *uniqueness problem*. Many popular particle theories
assume that spacetime originally had many more dimensions than the familiar four. Where did the
rest of the dimensions go? These models posit that they were “compactified,” or shrunk in size and curled
up. And because how these dimensions are rolled up affects the values of the constants of nature and the
masses of particles, questions such as why the gravitational constant is so small, or why the proton
is about 1,800 times heavier than the electron, become relevant to the understanding of the construction of the world.

All these questions and others
are addressed by a single theory, the *theory of chaotic inflation*,
originally proposed by Linde. We will see how inflation addresses these serious
problems, which have plagued the standard Big Bang theory since its inception.

The most refreshing aspect of the inflation scenario lies in its relative simplicity and
lack of *a priori* assumptions. It does not assume that the universe in the beginning had to be hot,
largely homogeneous, in thermal equilibrium, and large enough to survive until inflation took over.
Linde (2002) argues that inflation could begin without thermal equilibrium, quantum gravity effects,
phase transitions, or supercooling. Most inflationary models proposed since 1983 are based on the
idea of initial chaotic conditions. What the inflationary model needs is already widely used in particle
physics: the *scalar field*, which represents spin-zero particles and is crucial in spontaneous symmetry
breaking, whereby the fundamental forces (strong, weak, electromagnetic, and gravitational) separate. The
best-known example of this symmetry breaking is the Higgs field, which, though not yet
experimentally detected, breaks the electroweak symmetry.

The early universe is filled with scalar fields with various potentials, analogous to the
potential of an electric field. As long as a scalar field is constant everywhere, it looks like a vacuum and has
no observable effect. Only scalar fields with large potentials cause inflation. When the potential reaches its minimum, the
scalar field vanishes. According to the general theory of relativity, the universe expands at a rate
roughly proportional to the square root of its density. If the universe were filled
with ordinary matter, its density would rapidly decrease as it expanded, until the expansion eventually stopped.
In inflationary theory, it is the scalar fields that are responsible for the universe’s
rapid expansion. A large scalar field with a persistent energy potential causes the universe to expand
exponentially in a short time. Investigations of some realistic inflationary models show that after
10^{-35} second of inflation, an initial universe of Planck size *l* ~ 10^{-33}
cm reaches the enormous size of 10^{10^{12}} cm (one followed by one trillion zeros), compared
to the size of our observable universe, *l* ~ 10^{28} cm. At the end of
inflation, the scalar field’s potential oscillates near its minimum. Its rapid oscillations create
particle-antiparticle pairs that annihilate, and their collisions produce high-energy gamma rays. Two gamma
rays collide to form two particles elsewhere, and the process continues. Because the rates of pair
production and annihilation exactly match, the universe is in thermal equilibrium. From that point on, the
standard Big Bang scenario can take effect.
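The expansion law sketched above can be written compactly. This is the standard Friedmann equation of general-relativistic cosmology (a textbook result, not specific to Linde's papers):

```latex
% Friedmann equation: the expansion rate H is proportional to the
% square root of the density \rho, as stated in the text.
H^2 \equiv \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho .
% For a large, slowly varying scalar field \phi, the density
% \rho \approx V(\phi) is nearly constant, so H is nearly constant and
% the scale factor grows exponentially:
a(t) \propto e^{Ht} .
% Ordinary matter instead dilutes as \rho \propto a^{-3},
% so its expansion decelerates rather than accelerating.
```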

As the universe expands rapidly, inhomogeneities, monopoles and other irregularities become exponentially diluted, leaving the universe homogeneous. The incredibly fast expansion in an incredibly short time also explains why regions of space in opposite directions, separated by about 30 billion light-years (assuming light has been traveling for 15 billion years since the Big Bang), look so uniform. And because the universe is so huge compared with the observable part, and we see only a tiny patch of the cosmic balloon, our part of the universe looks flat, homogeneous, and isotropic. Indeed, CMB observations have found that the universe is flat.

*Quantum fluctuations* that fill the vacuum of space can be regarded as waves,
which have different wavelengths and move in all directions.
Those waves with large wavelengths are the first to freeze. As expansion continues, more waves freeze on
top of the already frozen ones, enhancing the value of the scalar field in some
areas and depressing it in others, thus creating inhomogeneities. These quantum fluctuations in the scalar
field cause density perturbations that are essential for the formation of galaxies.

The same quantum fluctuations, by increasing the value of the large scalar field that produces them, cause some regions of the universe to expand exponentially and at a greater rate than their parent domains, to become independent bubbles or universes. Inside these domains quantum fluctuations operate to spawn yet other domains that expand even faster. And the self-reproduction process continues indefinitely leading to the production of additional universes.

*What are Level II parallel universes like?*

The Level II multiverse is probably more diverse than the Level I multiverse.
According to **grand unified theories** (GUTs), the strong, weak
and electromagnetic forces are unified at extremely high energies.

Quantum mechanical theories that
use up to 11 dimensions to describe the fundamental forces of nature are called
*Kaluza-Klein theories*. There is agreement between theory and experiment if spacetime has 11
dimensions, seven of which (all space dimensions) are curled up.
In other words, at every point in space and at every moment of time,
there must be seven dimensions too tiny to detect. Or a different number of
dimensions may be compactified, leaving a world with a totally different
dimensionality. The supergrand unified theory known as *supergravity*,
which supports the use of 11 dimensions, predicts that every type of ordinary
particle has a *supersymmetric* counterpart that is yet to be detected. For example, the electron’s
supersymmetric counterpart is the selectron, and the quark has the squark. Moreover, quantum fluctuations
could cause symmetry to be broken differently in different bubbles, resulting in a
different lineup of elementary particles, and different effective equations to describe them.

*Fine-tuning and selection effects*

Here Tegmark gingerly touches upon the highly controversial *anthropic
principle*. There are various forms of the principle,
which essentially says that the world is as it is because if it had been
otherwise, we humans wouldn’t be here to talk about it. In other words, there is a conjunction of
physical conditions and forces such that the existence of life is favored rather
than precluded. For instance, according to the view Tegmark subscribes to, if the electromagnetic force
were weakened by a mere 4%, the Sun would immediately explode… If it were stronger, there
would be fewer stable atoms. If the strong force were weaker, it would not be
able to hold nuclei together to form matter. Indeed, most if not all the parameters affecting low-energy
physics appear fine-tuned at some level, in the sense that changing them by modest amounts results
in a qualitatively different universe. It seems nature follows certain selection rules: the
physical conditions that exist seem to favor the sustenance of life, especially human life.

To clinch his argument, Tegmark quotes what is termed the *minimalist anthropic principle (MAP)*:

When testing fundamental theories with observational data, ignoring selection effects can give incorrect conclusions.

And for those who doubt, he cautions:

MAP says that the chaotic inflation model is *not* ruled out by the fact
that we find ourselves living in the minuscule fraction of space where inflation had ended,
since the inflating part is uninhabitable to us.

Note that he does not say MAP supports chaotic inflation, only that MAP does not rule it out.

**Level III. The Many Worlds of Quantum Physics**

This type of parallel universe has been discussed in some detail in Section 4.5, and especially in Section 5.2 above. Level III universes are the different random outcomes of measurements of a quantum event. Each outcome branches out into a separate universe. Decoherence is the mechanism that brings Level III to life.

*Evidence for Level III parallel universes*

Much has already been written in Chapter 4 about what is known as the measurement problem in quantum mechanics, so this discussion will be more of a recapitulation than an exposition.

The standard Copenhagen interpretation of quantum mechanics describes the measurement process as follows: A physical system has a set of properties that, according to quantum mechanics, are potentialities, each with a probability of realizing itself when observed. These properties are governed by the wave function, which completely describes the physical system and which resides in the system’s Hilbert space. When a physical property of a quantum system (the microsystem) is measured by a classical (macroscopic) measuring instrument, the property being measured exists in a superposition of states, each of which has a probability of occurrence. During the phase of measurement in which the human observer looks at the result, the wave function collapses, and all the potential states of the property vanish except the one registered by the measuring apparatus. This is known as the collapse of the wave function. In the Schrödinger cat thought experiment, the cat’s dead state and live state are in superposition. Upon collapse of the wave function occasioned by the observer uncovering the box, she observes only one state of the cat, either dead or alive.

In 1957 Hugh Everett III (Section 5.2 above) proposed a totally different approach that did not call for collapse of the wave function. Instead his concept, later developed by DeWitt and others into the many-worlds interpretation (MWI), holds that each time a quantum system interacts with its environment (photons, air molecules, measuring apparatus, observer, etc.) the physical system and observer in superposition split off into different worlds, which are independent of each other, so that an observer in one world can never be aware of her double in a parallel branch. That is the effect of decoherence, which is also the mechanism that identifies the Level III parallel universes in Hilbert space and keeps them apart from one another. Everett’s interpretation, which has increasingly gained acceptance, clearly encompasses the Level III parallel universes.
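The bookkeeping behind this branching picture can be sketched in a few lines (a toy illustration, not a physical simulation; the amplitudes chosen are arbitrary): a qubit in superposition decoheres into branches whose statistical weights are the Born-rule probabilities.

```python
# Toy sketch of Level III branching: a qubit in the superposition a|0> + b|1>
# decoheres into two branches whose statistical weights are the Born-rule
# probabilities |a|^2 and |b|^2.
a = 1 / 2 ** 0.5             # amplitude of |0>
b = 1j / 2 ** 0.5            # amplitude of |1> (complex phases are allowed)

branches = {"outcome 0": abs(a) ** 2, "outcome 1": abs(b) ** 2}
print(branches)              # each branch carries weight ~0.5

# Total statistical weight across branches is conserved (unitarity).
assert abs(sum(branches.values()) - 1.0) < 1e-9
```

No branch ever "disappears"; the observer in each branch simply sees one outcome, which is why branching looks like randomness from the inside.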

*What are Level III parallel universes like?*

Tegmark distinguishes two perspectives when discussing a physical theory:
the *bird perspective*, or outside view of a mathematician,
and the *frog perspective*, or inside view of an observer living in the world described by the equations.
From the bird perspective, a Level III multiverse is described by only one wave
function, which evolves deterministically and smoothly over time. The abstract quantum world described
by the wave function is in a constant state of change, involving splitting and
merging. From the frog point of view, the observer can perceive only a tiny fraction of this reality.
She cannot see beyond her own Hubble volume or into a Level III universe different from hers.
When she makes a decision or answers a question, quantum effects in
the neurons of her brain lead to a multiplicity of outcomes. To her, quantum
branching is mere randomness. Each copy of her has exactly the same memories and
past up until the moment she makes the decision or answers the question.

*How many different parallel universes are there?*

One of the criticisms of the many-worlds interpretation invokes Occam’s razor: the unnecessary proliferation of branches. What if repeated branching exponentially increases the number of universes over time? Tegmark rejects this argument by maintaining that Level III universes add nothing new beyond Level I and Level II. As he points out, there may not be any more universes than the critics might think:

The number of universes *N* may well stay constant.
By the number of “universes” *N*, we mean the number that are
indistinguishable from the frog perspective (from the bird perspective there is of course just one) at a
given instant, i.e., the number of macroscopically different Hubble volumes. Although there
is obviously a vast number of them…, the number *N* is clearly finite –
even if we pedantically distinguish Hubble volumes at the quantum level to be
overly conservative, there are “only” about 10^{10^{115}} with
temperature below 10^{8} K…

From the frog perspective, the evolution of the wave
function corresponds to a constant sliding from one of these 10^{10^{115}}
states to another. Now you are in universe A doing one thing, now you are in universe B doing something
else. Universes A and B contain the same observer, but with an extra instant of memories.

*Two worldviews*

As the debate continues among scientists over many aspects of quantum physics, the measurement problem and the multiverse controversy are to Tegmark only the tip of the iceberg. There is a deeper philosophical rift among scientists’ belief systems than appears on the surface. Their worldviews have divided scientists into two camps espousing either (1) the Aristotelian paradigm, in which the subjective frog view is physically real, and the bird perspective with its mathematical language is but an approximation; or (2) the Platonic paradigm, in which the bird view (the mathematical structures) is physically real, while the frog view and the human language used to describe it are mere approximations.

Scientists are still grappling with a theory that unifies explanations in both classical mechanics and quantum mechanics, the Theory of Everything (TOE). Such a theory can only spring from the Platonic view, for in the final analysis the physical world would consist essentially of mathematical structures. And if you lean toward the Platonic position, you are likely to find the concept of the multiverse natural.

**Level IV: Other mathematical structures**

From Level I to Level IV we pass through increasingly abstract and speculative concepts. The domain of the Level IV multiverse is the mathematical structure, which is a formal structure. A formal structure consists of a set of abstract symbols that can be strung together, a set of rules for determining which of such strings are well-formed formulas (WFFs), and a set of rules for determining which WFFs are theorems. Some WFFs, called axioms, are postulated without proof; from them the theorems are derived. Examples of mathematical structures are Boolean algebra, natural numbers, integers, rational numbers, real numbers, complex numbers, groups, fields, vector spaces, Hilbert spaces, manifolds, relations, even theories…
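To make the definition concrete, here is a deliberately tiny formal structure (the grammar is invented purely for illustration): the symbols are {p, ~, (, ), &}, and a mechanical rule decides which strings are WFFs.

```python
# A toy formal structure: abstract symbols plus a rule deciding which
# strings are well-formed formulas (WFFs). Grammar (illustrative only):
#   'p' is a WFF; if w1, w2 are WFFs, so are '~w1' and '(w1&w2)'.
def is_wff(s: str) -> bool:
    if s == "p":
        return True
    if s.startswith("~"):
        return is_wff(s[1:])
    if s.startswith("(") and s.endswith(")"):
        body = s[1:-1]
        depth = 0
        for i, ch in enumerate(body):      # find a top-level '&'
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
            elif ch == "&" and depth == 0:
                return is_wff(body[:i]) and is_wff(body[i + 1:])
    return False

print(is_wff("~(p&~p)"))  # True: built by the formation rules
print(is_wff("p&"))       # False: not derivable from the rules
```

Axioms and theorems would be a further layer on top of this: a distinguished set of WFFs and inference rules generating more WFFs from them.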

Underlying the concept of Level IV multiverse is the basic premise that “everything that exists mathematically exists physically.” A mathematical structure has mathematical existence (ME). Furthermore, according to this premise a mathematical structure also has physical existence (PE). This position has far-reaching implications. For example, if we consider the wave function, which describes a physical system completely, as a mathematical structure, then it corresponds to physical reality.

The next assumption is just the converse: that the physical world is a mathematical
structure. The notion that the Level III multiverse is a mathematical structure has the profound implication
that mathematical equations describe all aspects of the physical world. It means that some mathematical
structure is *isomorphic* (and hence equivalent) to our physical world, in which each element has its
counterpart in the mathematical structure. Tegmark clarifies, “Given a mathematical structure, we will say
that it has physical existence if any self-aware substructures (SAS) within it subjectively, from its frog perspective,
perceives itself as living in a physically real world.” An SAS could be a human, but it could be anything
capable of some form of logical thought. A substructure is self-aware if it thinks it is self-aware.

David Deutsch devotes an entire chapter (Chapter 10) of *The Fabric of Reality* to exploring
the nature of mathematics. To him mathematical entities have independent existence and are part of the
fabric of reality. Deutsch takes the example of prime numbers, a subset of natural numbers. We define
the natural numbers, then we define what numbers among them are prime numbers. But we still need to
understand more: for example, how prime numbers are distributed on very large scales, how clumped they
are, how random they are and why. To understand numbers more fully, we define many new abstract entities, and postulate many new structures and relations
among these structures. Before long we realize that these mathematical entities (or structures) constitute
a real world existing independently of other entities. How do we know they are real?
Following Dr. Johnson’s reply (that if a stone ‘kicks back’ when he kicks it with his foot, then it is real),
Deutsch argues that “prime numbers kick back when we prove something unexpected about them.”
Therefore prime numbers are just as real as the stone is real. And by implication,
so are mathematical structures.
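Deutsch's point about large-scale distribution can be made tangible: the prime number theorem says the number of primes below n is approximately n/ln n, a regularity one can check directly (a standard result; the sieve below is an ordinary implementation, not Deutsch's).

```python
import math

def prime_count(n: int) -> int:
    """Count primes <= n with a simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # mark multiples of i starting at i*i as composite
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

n = 100_000
# pi(100000) = 9592, versus the estimate n/ln(n) ~ 8686: same order,
# and the relative error shrinks as n grows.
print(prime_count(n), round(n / math.log(n)))
```

That the primes "obey" such an asymptotic law they were never designed to obey is one sense in which they seem to kick back.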

*Evidence for a Level IV multiverse*

Tegmark cites two assumptions:

__Assumption 1__: That the physical world (specifically our Level III multiverse) is a mathematical structure.

__Assumption 2__: Mathematical democracy: that all mathematical structures exist “out there” in the same sense.

Both assumptions are very strong. The first gives impetus to the theory amusingly called the Theory of Everything (TOE), and takes the radical Platonic position of equating the physical world with a mathematical structure. The TOE seeks to unify the explanation of all four forces of nature into one theory, bringing the theory of quantum gravity into line with the rest of quantum theory. The second assumption asserts the independent existence of mathematical structures, which mathematicians discover but do not invent.

The first argument that supports
assumption 1 comes from E. P. Wigner’s 1967 collection *Symmetries and
Reflections*, which argues that “the enormous usefulness of mathematics in
the natural sciences is something bordering on the mysterious,” and that “there
is no rational explanation for it.” If mathematics is so useful in describing the sciences, it is because the physical
world is a mathematical structure. The second argument supporting assumption 1 is “that abstract
mathematics is so general that any TOE that is defined in purely formal terms … is also a mathematical
structure.”

Tegmark sees this same argument as consistent with assumption 2, since any conceivable theory of parallel universes can be described at Level IV. In addition, assumption 2 provides an answer to Wheeler’s question: why these particular equations, and not others?

*What are Level IV parallel universes like?*

Given the subjective perceptions of an SAS, a theory like the multiverse theory should allow us to calculate the probability distribution over future perceptions, given past observations. The multiverse theory makes the following predictions:

__Prediction 1__: The mathematical structure describing our world is the
most generic one that is consistent with our observations.

__Prediction 2__: Our future observations are the most generic ones
that are consistent with our past observations.

__Prediction 3__: Our past observations are the most generic ones
that are consistent with our existence.

Quantifying the concept of “generic” is challenging. However, we know that mathematical structures that capture the symmetry and invariance properties observed in our universe tend toward the generic.

One of the most vexing problems in quantum mechanics has been, and still remains, how to calculate probability. The problem becomes even more taxing in the multiverse concept since, as Tegmark points out, there are so many copies of “you” with identical past lives and memories that even if you had complete knowledge of the state of the entire multiverse, you simply could not compute your own future; at best you can assign probabilities within the tiny observable part of your multiverse. Even then, since the copies of “you” live in separate, disconnected worlds, there is no clear, natural way to order them for observation and assign statistical weights among them. This is known in mathematics as the ‘measure’ problem. It becomes increasingly intractable from Level I, where it is tractable, to Level IV, where it is simply stupendous.

*Against Occam’s Razor*

Tegmark defends the multiverse hypothesis against the Occam’s razor argument thus:

Why should
nature be so ontologically wasteful and indulge in such opulence as to contain
an infinity of different worlds? Intriguingly, this argument can be turned around to argue *for* a
multiverse. When we feel that nature is wasteful, what precisely are we disturbed about her wasting?
Certainly not “space,” since the standard flat universe model with its infinite volume draws no such
conclusion. Certainly not “mass” or “atoms” either, for the same reason – once you have wasted an
infinite amount of something, who cares if you waste some more?

In fact, he points out, from level to level the multiverse theory heads toward reducing the amount of information necessary to specify the universes. Specifically, Level I obviates the need to specify initial conditions, and Level II eliminates the need to specify physical constants. As for the Level IV multiverse of mathematical structures, it has essentially no algorithmic complexity at all.

Finally, as for the objection that the multiverses are weird, Tegmark dismisses this complaint as an aesthetic rather than a substantive issue. When we examine the world from the relativity and quantum perspectives, we observe many bizarre things: particles that are in many places at the same time; time that slows down at high speeds; black holes; particles that change identity in high-temperature collisions. And the TOE, whether complete or not, is going to be weird if the gap between the frog perspective and the bird perspective remains significant, as seems to be the case.

The debate swirling around the topic of parallel universes will undoubtedly continue. Like many of the pragmatic, instrumentalist physicists who care mainly about the predictions of theories, we may wonder why some scientists devote time to thinking about unobservable universes. However, predicting is only one part of the story of science; the other part is understanding and explaining. For the first time in history, science has pieced together a fairly clear picture of the universe, from the infinitely large to the infinitesimally small. The last eighty years have seen the revolutionary quantum theory reshape our view of the world and lead us to a deeper understanding of reality.

The question of the multiverse almost invites itself, as quantum theory has the capacity to force us into looking at reality from vastly different perspectives than were required by classical physics, and to lead us, as a scientific theory, into territories that border on philosophy and metaphysics. We are compelled to abandon some comfortable old assumptions, and to adopt new ones, about which we are still uncertain, in part because they run counter to intuition and common sense, and which we somehow know to be inescapable.

To extend this line of thought, a quotation from David Deutsch’s *The Fabric of Reality* may be in order:

…it is right and proper for theoretical physicists such as myself to devote a great deal of effort to trying to understand the formal structure of quantum theory, but not at the expense of losing sight of our primary objective, which is to understand reality. Even if the predictions of quantum theory could, somehow, be made without referring to more than one universe, individual photons would still cast shadows in the way I have described. Without knowing anything of quantum theory, one can see that those shadows could not be the result of any single history of the photon as it travels from the torch to the observer’s eye. They are incompatible with any explanation in terms of only the photons that we see. Or in terms of only the barrier that we see. Or in terms of the universe that we see. Therefore, if the best theory available to physics did not refer to parallel universes, it would merely mean that we needed a better theory, one that did refer to parallel universes, in order to explain what we see.

Other theories in the works are building on relativity theory and quantum theory in search of a unified explanation for all four forces of nature. Already the five consistent superstring theories, together with 11-dimensional supergravity, have undergone unification in the form of M-theory (M for magical, mysterious, marvelous, or membrane). A sense of high hope pervades the community of its proponents that M-theory will be the formulation of the Theory of Everything. While we all aspire to reach the ultimate theory of reality, we must also remember that nature does not yield its secrets easily.

And so, whether or not we believe in parallel universes, whether or not the new M-theory will possess a compelling
explanatory power leaving only details to work out, and whether or not M-Theory will refer to the multiverse, let us make
understanding reality central to our total human experience. And if this total human experience includes parallel
universes, so be it.

**References**

Anton, Howard. 1991. *Elementary linear algebra.* 6th ed. New York: John Wiley & Sons, Inc.

Atkins, K. R. 1965. *Physics.* New York: John Wiley & Sons, Inc.

Barrow, John D. 1991. *Theories of everything: The quest for the ultimate explanation.*
Oxford, UK: Clarendon Press.

Beiser, Arthur. 1981. *Concepts of modern physics.* 3rd ed. Auckland, NZ: McGraw-Hill International
Book Company.

Bohm, David. 1979. *Quantum theory.* New York: Dover Publications, Inc.

Brown, Julian. 2000. *Minds, machines, and the multiverse: The quest for the quantum
computer*. New York: Simon & Schuster.

Calle, Carlos I. 2001. *Superstrings and other things: A guide to physics*. Bristol, UK:
Institute of Physics Publishing.

Davies, Paul. 1984. *Superforce: The search for a grand unified theory of nature.*
New York: Simon and Schuster.

Deutsch, David. 1997. *The fabric of reality: The science of parallel universes
and its implications.* New York: Penguin Books.

Einstein, Albert. 1961. *Relativity: The special and general theory*. New York:
Wings Books.

Fix, John D. 1995. *Astronomy: Journey to the cosmic frontier*. St. Louis, MO: Mosby.

Friedberg, Stephen H., et al. 1989. *Linear algebra.* Englewood Cliffs, NJ: Prentice-Hall.

Gell-Mann, Murray. 1994. *The quark and the jaguar: Adventures in the simple and the
complex.* New York: W. H. Freeman and Company.

Giancoli, Douglas C. 1984. *General physics.* Englewood Cliffs, NJ: Prentice-Hall, Inc.

Greene, Brian. 1999. *The elegant universe: Superstrings, hidden dimensions, and the
quest for the ultimate theory.* New York: W. W. Norton & Co.

Gribbin, John. 2001. *Hyperspace: Our final frontier*. New York: DK Publishing, Inc.

Hawking, Stephen. 1996. *The illustrated brief history of time*.
New York: Bantam Books.

Hecht, Eugene. 1998. *Physics: Algebra/Trig.* 2nd ed. Pacific Grove,
CA: Brooks/Cole Publishing Company.

Kaku, Michio. 1994. *Hyperspace: A scientific odyssey through parallel universes,
time warps, and the tenth dimension.* New York: Doubleday.

Kaku, Michio. 1994. *Visions: How science will revolutionize the 21st
century.* New York: Random House, Inc.

Kaufmann III, W. J. 1994. *Universe.* 4th ed. New York: W. H. Freeman and Company.

Krauss, Lawrence M. 1999. *Quintessence: The mystery of
missing mass in the universe.* New York: Vintage.

Lightman, Alan. 2000. *Great ideas in physics.* New York: McGraw-Hill.

Linde, Andrei. 1994. “The self-reproducing inflationary universe”. *Scientific American* 274:32.

Longair, Malcolm S. 1996. *Our evolving universe*. New
York: Cambridge University Press.

Peebles, P .J .E. 1993. *Principles of physical cosmology.* Princeton, NJ: Princeton University Press.

Penrose, Roger. 1989. * The emperor’s new mind.* New York: Penguin Books.

Rowan-Robinson, M. 1993. *Ripples in the cosmos: A view behind the
scenes of the new cosmology.* Oxford, UK: W. H. Freeman Spektrum.

Shapley, H., ed. 1960. *Source book in astronomy: 1900-1950*.
Cambridge, MA: Harvard University Press.

Smolin, Lee. 1997. *The life of the cosmos.* New York: Oxford University
Press.

Suplee, Curt. 1999. *Physics in the twentieth century.* Ed. Judy R. Franz et al.
New York: Harry N. Abrams, Inc., Publishers.

Tegmark, Max and John A. Wheeler. 2001. “100 Years of the Quantum”.
*Scientific American*. February 2001, pp. 68-75.

Tegmark, Max. 2003. “Parallel universes”. In Barrow, John D., et al.
*Science and ultimate reality: From quantum to cosmos*. Cambridge University Press.

Walker, James S. 2002. *Physics, *volume II*.* Upper Saddle River, NJ:
Prentice Hall.

Weinberg, S. 1993. *Dreams of a final theory*. New York: Random House, Inc.

Wolf, Fred A. 1988. *Parallel universes: The search for other worlds.* New York:
Simon & Schuster.

Zeilik, Michael. 1997. *Astronomy: The evolving universe.* 8th ed.
New York: John Wiley & Sons, Inc.

**Decoherence**

Bacciagaluppi, Guido. 2003. The role of decoherence in quantum theory.

http://plato.stanford.edu/entries/qm-decoherence/

**Inflation**

Linde, Andrei. 2002. *Inflationary theory versus ekpyrotic/cyclic scenario.*

http://www.arxiv.org/PS_cache/hep-th/pdf/0205/0205259.pdf

Linde, Andrei. 2004. *Prospects of inflation.*

http://www.arxiv.org/PS_cache/hep-th/pdf/0402/0402051.pdf

**Interpretation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)**

Barrett, Jeffrey. 2003. *Everett’s relative-state formulation of quantum mechanics.*

http://plato.stanford.edu/entries/qm-everett/

Dickson, Michael. 2002. *The modal interpretations of quantum theory.*

http://plato.stanford.edu/entries/qm-modal/

Faye, Jan. 2002. *Copenhagen interpretation of quantum mechanics.*

http://plato.stanford.edu/entries/qm-copenhagen/

Ghirardi, Giancarlo. 2002. *Collapse theories*

http://plato.stanford.edu/entries/qm-collapse/

Vaidman, Lev. 2002. *The many-worlds interpretation of quantum mechanics.*

http://plato.stanford.edu/entries/qm-manyworlds/

**Measurement Problem**

Krips, Henry. 1999. *Measurement in quantum theory.*

http://plato.stanford.edu/entries/qt-measurement/

Papineau, David. *David Lewis and Schrödinger’s cat.*

http://www.kcl.ac.uk/ip/davidpapineau/Staff/Papineau/LewisQM.htm

Whitmarsh, Stephen. *Does consciousness collapse the wave-packet? II.*

http://a1162.fmg.uva.nl/~djb/publications/2004/stephen_werkstuk.pdf

**Quantum Computation**

Bub, Jeffrey. 2001. *Quantum entanglement and information.*

http://plato.stanford.edu/entries/qt-entangle/

Centre for Quantum Computation

http://www.qubit.org/index.html

David Deutsch’s Lectures on Quantum Computation

http://www.quiprocone.org/Protected/Lecture1.htm

Gouin, Roger Y. 2004. *On the origin of space: Part 15: Composite quantum
system, understanding the many-realities idea*

http://rgouin.home.mindspring.com/pdf/composite.pdf

Steane, A.M. 2002. *A quantum computer
needs only one universe*, LANL Preprint Archive for Quantum Physics, quant-ph/0003084.

http://xxx.lanl.gov/PS_cache/quant-ph/pdf/0003/0003084.pdf