Evidence of Parallel Worlds

1. Novel predictions

Although the Everett, or many-worlds, approach is arguably the most parsimonious realist interpretation of quantum mechanics, it is unlikely to be accepted by the scientific community until it makes a novel prediction that can be verified.

Quantum suicide is one way of testing this, if you accept that you will become all of the observers present after a quantum interaction, and not just one of them. There would be no way to convince others, however, without them playing the game for themselves.

1.1 Artificial intelligence

British physicist David Deutsch suggested the first experimental test to falsify the collapse approach in 1985[1a][2]. However, it is still not practical to implement, as it requires artificial intelligence (AI) and reversible nanoelectronics.

In Deutsch's thought experiment, an atom with a determinate spin state along one axis, 'left' for example, is passed through a Stern-Gerlach apparatus that can measure its spin along another axis, as either 'up' or 'down' in this case. The atom is then in a superposition of 'up' and 'down' states from the perspective of an observer who has not yet become entangled with it.

This superposition travels to an AI's artificial 'sense organ'. Here it is provided with two options: it may be detected as either spin 'up' or spin 'down'. The AI's conscious mind then records the result.

The collapse approach predicts that this will cause the atom to collapse into one determinate state, with either a determinate 'up' or 'down' spin (but not 'left' or 'right'). The Everett approach predicts that the mind will branch into two: one mind will record 'up' and the other 'down' (but neither will record 'left' or 'right').

The whole process is then reversed, so that the atom emerges from the entrance to the Stern-Gerlach apparatus and the mind forgets which result it recorded. This does not erase any of the AI's other memories, however, including the memory that they did record the atom as being in a definite state.

If a 'left-right' detector were placed at the entrance to the Stern-Gerlach apparatus, then the collapse approach predicts that the atom will be detected in either a 'left' or a 'right' state with equal probability. If the Everett approach is correct, then the atom will be in the same state that it was in before the measurement: it will still have a 'left' spin.
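A toy simulation makes the two predictions concrete. The numpy sketch below is a minimal model of my own, not Deutsch's actual apparatus: one qubit stands in for the atom's spin, a second for the observer's memory, and a CNOT gate plays the role of the measurement interaction.

```python
import numpy as np

# Basis states along z: 'up' and 'down'; 'left' is their equal superposition.
up, down = np.array([1, 0], complex), np.array([0, 1], complex)
left = (up + down) / np.sqrt(2)

# Measurement interaction: a CNOT that copies the atom's z-spin into the
# observer's memory qubit (atom is the first qubit, memory the second).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], complex)

state = np.kron(left, up)            # atom in the 'left' state, memory blank
recorded = CNOT @ state              # observer records 'up' or 'down'

# Everett: reverse the interaction unitarily; the atom returns to 'left'.
reversed_state = CNOT.conj().T @ recorded
print("P(left), no collapse:", abs(np.kron(left, up).conj() @ reversed_state)**2)  # 1.0

# Collapse: recording projects the atom onto 'up' or 'down' (p = 1/2 each),
# and no reversal can restore the lost phase, so 'left' is found half the time.
p_left = 0.5 * abs(left.conj() @ up)**2 + 0.5 * abs(left.conj() @ down)**2
print("P(left), collapse:", p_left)  # 0.5
```

The unitary branch recovers the 'left' state with certainty because both branches of the superposition survive to interfere; the collapsed state has lost one of them.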

Deutsch stated that:

"this experiment allows the observer to 'feel' [themselves] split into two branches: The interference phenomenon seen by our observer at the end of the experiment requires the presence of both spin values, though [they] accurately remember having known at a previous time that only one of them was present. [They] must infer that there was more than one copy of [themselves] (and the atom) in existence at that time, and that these copies merged to form [their] present self"[1b].

Russian-Israeli physicist Lev Vaidman described a similar experiment in 1998[3]. If a photon passes through a polariser that can send it in either of two directions, towards detector A or detector B, then experiments show that it will be detected at one detector or the other, but not both. If we remove detector A, then the photon is only detected at B half of the time. Vaidman suggested that we could falsify the collapse approach by reversing the process, as Deutsch suggested, and observing how often the photon is 'recomposed'.

The collapse approach predicts that a photon will only be detected at the source half the time, yet the Everett approach predicts that it will be detected every time because the photon arrives from both paths whether it was detected or not.
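The arithmetic behind these two predictions can be checked with a toy beam-splitter model (an illustrative sketch under simplifying assumptions of my own, not Vaidman's exact optical setup):

```python
import numpy as np

# 50/50 beam splitter acting on the two path amplitudes (path A, path B).
H = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)

source = np.array([1, 0], complex)   # photon leaves the source along path A
split = H @ source                   # equal amplitudes on paths A and B

# Everett: recombine both paths with the inverse optics; the photon is
# 'recomposed' at the source port every time.
print("P(source), no collapse:", abs((H.conj().T @ split)[0])**2)  # 1.0

# Collapse: the photon is first found on one definite path (p = 1/2 each);
# recombining a single path returns it to the source only half the time.
p = sum(abs(split[i])**2 * abs((H.conj().T @ np.eye(2)[i])[0])**2 for i in (0, 1))
print("P(source), collapse:", p)     # 0.5
```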

1.2 Cosmological observations

There are also arguments in cosmology that could falsify the collapse approach. In 1970, American physicist Bryce DeWitt argued that there would have been a time, early in the universe's history, before Everett's universal wave function had decohered, and that we may one day be able to find evidence of this[4].

In 2000, Canadian physicist Don Page suggested that the Everett approach can be empirically verified because some cosmological observations are more probable given Everett's approach[5].

These experiments will not falsify the Bohm approach because Bohm also suggested that there's no collapse of the wave function, and so the probabilities will be the same for this approach as for Everett's.

1.3 Gateway states

Physicist Rainer Plaga devised an experiment to falsify the Bohm approach in 1997[6a]. Plaga argued that it should be possible to communicate with other parallel worlds if we could repeat Deutsch's experiment and isolate part of the apparatus so that it could be changed before it had completely decohered.

Plaga suggested that "a 'gateway state' between the parallel worlds"[6b] could be achieved by isolating a single ion. An observer can then divide, having set the apparatus to excite the ion only if they record a certain result. If the ion is excited and they did not record that result, then they can assume that the ion was excited by their parallel self. If the Bohm approach is correct, then parallel worlds do not exist and so we cannot communicate with them; the ion will never be excited unless we observe the correct result.
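The logic of the protocol can be sketched with some crude classical bookkeeping (a deliberately simplified model of my own; the real proposal depends on keeping the ion quantum-coherent while the observer divides):

```python
import random

def run_trial(many_worlds: bool) -> str:
    # Under Everett, both observers exist after the measurement; otherwise
    # only one definite result occurs.
    results = ["up", "down"] if many_worlds else [random.choice(["up", "down"])]
    # The apparatus is set to excite the shared ion only on an 'up' record.
    ion_excited = "up" in results
    # An observer who recorded 'down' but finds the ion excited can infer
    # that a parallel self recorded 'up' and sent the signal.
    if "down" in results and ion_excited:
        return "message received from a parallel world"
    return "no message"

print(run_trial(many_worlds=True))   # always: message received
print(run_trial(many_worlds=False))  # never: the 'down' observer finds no excitation
```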

Plaga stated that:

"inter-world communication on a time scale of minutes should be possible with state of the art quantum-optical equipment"[6c].

2. Quantum computers

The attempts to falsify the collapse and Bohm approaches described above rely on the assumption that we'll be able to develop artificial consciousness. This is based on a functional theory of the mind, and on the idea that we'll be able to develop large-scale quantum computers.

All classical computers can be described as Turing machines, a concept devised by British mathematician Alan Turing in 1936[7]. Moore's law, named after Intel co-founder Gordon Moore, who first described the trend in 1965, states that the number of transistors on a microprocessor doubles roughly every two years[8]. If this process continues, then by about 2030 the circuits on a microprocessor will be measured on an atomic scale.
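A rough calculation shows why a date around 2030 is plausible (illustrative arithmetic of my own, assuming a 14 nm process node around 2016 and one transistor-density doubling every two years):

```python
# Each density doubling shrinks linear feature size by a factor of sqrt(2).
feature_nm_2016 = 14.0                 # assumed typical process node in 2016
doublings = (2030 - 2016) / 2          # one doubling every ~2 years
feature_nm_2030 = feature_nm_2016 / (2 ** (doublings / 2))
print(f"{feature_nm_2030:.1f} nm")     # ~1.2 nm, only a few silicon atoms wide
```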

Physicist Paul Benioff first considered a quantum Turing machine in 1981[9], and, in 1985, Deutsch described how a universal quantum computer could work[10a].

A quantum computer can be faster than a classical computer because, while a classical computer stores information in definite states, as 1s or 0s, a quantum computer can store information in a superposition of those states, and can hence, in a sense, perform more than one calculation at once.

Deutsch argued that these calculations are most naturally understood as being computed in parallel worlds, and that this "places an intolerable strain on all interpretations of quantum theory other than Everett's". Other interpretations lead to the question of where the correct answer was computed[10b].
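Deutsch's point can be made concrete with the simplest algorithm where a quantum computer outperforms a classical one. The numpy sketch below is a simulation of my own of Deutsch's algorithm, which decides whether a one-bit function f is constant or balanced with a single evaluation of f; on a collapse view it is natural to ask where the value of f on the other input was computed.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], complex) / np.sqrt(2)  # Hadamard gate

def oracle(f):
    """The unitary U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4), complex)
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1]).astype(complex)  # query |0>, answer |1>
    state = np.kron(H, H) @ state                    # superpose both inputs
    state = oracle(f) @ state                        # one call touches f(0) and f(1)
    state = np.kron(H, np.eye(2)) @ state            # interfere the two branches
    p0 = abs(state[0])**2 + abs(state[1])**2         # P(query qubit reads 0)
    return "constant" if p0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))      # -> constant
print(deutsch(lambda x: x))      # -> balanced
print(deutsch(lambda x: 1 - x))  # -> balanced
```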

In 1999, physicists at IBM developed a 3-qubit quantum computer (where a qubit, or quantum bit, is a unit of quantum information). This was followed by a 5-qubit quantum computer in 2000[11], and, by 2001, they had developed a 7-qubit computer that was used to calculate the prime factors of 15[12].
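That 7-qubit experiment ran Shor's algorithm, whose only quantum step is finding the period r of a^x mod N; the rest is classical number theory, as this sketch of my own shows (with the period found by slow classical brute force rather than by a quantum computer):

```python
from math import gcd

def factor_via_period(N: int, a: int):
    # Find the period r of a^x mod N (exponentially slow classically;
    # this is the step a quantum computer performs in polynomial time).
    r, val = 1, a % N
    while val != 1:
        val = (val * a) % N
        r += 1
    # Classical post-processing: the period yields the factors.
    if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
        return gcd(pow(a, r // 2) - 1, N), gcd(pow(a, r // 2) + 1, N)
    return None  # unlucky choice of a; retry with another

print(factor_via_period(15, 7))  # -> (3, 5)
```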

In 2007, Canadian company D-Wave Systems developed a 16-qubit quantum computer, which could solve Sudoku puzzles[13]. They developed a 512-qubit quantum computer in 2013, and a 1000-qubit quantum computer in 2015[14].

Large-scale quantum computers could break the public-key encryption that secures most internet traffic, since it relies on the difficulty of problems, like factoring large numbers, that a quantum computer could solve quickly. As soon as one is built, much of the information on the internet will no longer be safe, and, in 2014, it was revealed that the United States National Security Agency (NSA) had spent almost $80 million on research into quantum computing[15].

2.1 Artificial realities

It's thought that quantum computers will be used to design other quantum computers, and there's no known physical law preventing them from reaching a state where they can simulate conscious experiences. This leads back to the philosophical question of whether we could be living inside a simulation.

In 2003, Swedish philosopher Nick Bostrom suggested that if it's possible to simulate entire universes, then most advanced civilisations will probably come to build these simulation machines[16]. If they do, then simulated people will vastly outnumber unsimulated people, making it almost certain that we are living in a simulation now.

Human consciousness is estimated to require about 10^16-10^17 (10-100 million, billion) operations per second, and Bostrom estimated that it would take about 10^33-10^36 (1-1000 million, billion, billion, billion) operations to simulate the entire mental history of humankind.

In 1992, American engineer Kim Eric Drexler showed that a system the size of a sugar cube could perform 10^21 (1000 billion, billion) operations per second[17], and, in 2000, computer scientist Robert Bradbury estimated that a computer with a mass similar to that of a large planet could perform 10^42 (a million, billion, billion, billion, billion) operations per second[18].
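A quick back-of-envelope check (my own arithmetic, using the figures above) shows how comfortably such hardware would exceed the requirements:

```python
ops_per_mind = 1e17          # upper estimate for one human consciousness, per second
ops_human_history = 1e36     # upper estimate for all human mental history, total
planet_computer = 1e42       # Bradbury's planet-mass computer, ops per second

print(f"{planet_computer / ops_per_mind:.0e} minds at once")       # ~1e+25
print(f"{ops_human_history / planet_computer:.0e} s for history")  # ~1e-06
```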

If we are living inside a simulation like this, then we would not necessarily have a body on the outside, and other people would not necessarily be conscious, although it could be argued that nothing could have a brain state that appears conscious without actually being conscious. Dreams are a type of simulation, and so we know that simulated experience is possible.

Outside of the simulation, time might run faster or slower, and we might not notice if the simulation were switched off for an extensive period. The simulation might be stored on different computers in different places, and we can have no idea how safe they are. Even if we were to exit one simulation, we would have no way of knowing whether that world was a simulation too, and so no way of ever knowing that any world is 'real'.

In 1986, American physicist Frank Tipler claimed that if the universe ends in a big crunch, then any beings that managed to survive into the final singularity would, from their own perspective, continue to exist forever[19]. Tipler referred to this as the Omega Point. Inhabitants of the singularity would have enough energy to create an infinite number of simulated worlds, and may wish to live in one. Deutsch defended Tipler's Omega Point theory in his 1997 book The Fabric of Reality.

After the discovery of dark energy in 1998[20], Tipler extended his theory, arguing that the Omega Point could be created after the universe becomes a void, once all of the matter has decayed. This may not be necessary, however; assuming that Everett is correct, any worlds that do undergo a big crunch will be able to simulate any worlds that don't.

The main problem with Tipler's theory is that it relies on the assumption that all theories of quantum gravity that do not allow singularities, which may include string theory, are false.

Another problem with both Bostrom's and Tipler's arguments is that they rely on the assumption that we know what advanced civilisations are likely to do. If we could ask any of our ancestors what the 21st century is like, it's doubtful that many of their predictions would be correct, and the further back we went, the more likely they would be to be wrong.

Bostrom and Tipler's ideas also assume that the laws of physics would be similar on both sides of the simulation. This may be the case, but it's also possible that, if we're in a simulation, our knowledge of any external laws is severely limited.

If, for example, the characters inside a simple platform game could become conscious, then they would ultimately obey the same laws of physics as their human creators, but they would not necessarily be able to derive them. There would be many other laws that they would discover first, namely those programmed into the game, and any experiments they performed would be limited by that programming.

3. References

  1. (a, b) Deutsch, D., 1985, 'Quantum Theory as a Universal Physical Theory', International Journal of Theoretical Physics, 24, pp.1-41.

  2. Deutsch, D., 1986, 'Three experimental implications of the Everett interpretation', in Penrose, R. and Isham, C. J. (eds), 'Quantum Concepts in Space and Time', The Clarendon Press.

  3. Vaidman, L., 1998, 'On Schizophrenic Experiences of the Neutron or Why We should Believe in the Many-Worlds Interpretation of Quantum Theory', International Studies in the Philosophy of Science, 12, pp.245-261.

  4. DeWitt, B. S., 1970, 'Quantum Mechanics and Reality', Physics Today, 23, pp.30-35.

  5. Page, D., 2000, 'Can quantum cosmology give observational consequences of many-worlds quantum theory?', General Relativity and Relativistic Astrophysics, 493, pp.225-232.

  6. (a, b, c) Plaga, R., 1997, 'On a possibility to find experimental evidence for the many-worlds interpretation of quantum mechanics', Foundations of Physics, 27, pp.559-577.

  7. Turing, A. M., 1936, 'On computable numbers, with an application to the Entscheidungsproblem', Proceedings of the London Mathematical Society, 42, pp.230-265.

  8. Moore, G. E., 1965, 'Cramming more components onto integrated circuits', Electronics, 38, pp.114-117.

  9. Benioff, P., 1981, 'Quantum mechanical Hamiltonian models of discrete processes', Journal of Mathematical Physics, 22, pp.495-507.

  10. (a, b) Deutsch, D., 1985, 'Quantum theory, the Church-Turing principle and the universal quantum computer', Proceedings of the Royal Society of London A, 400, pp.97-117.

  11. IBM News releases, 'IBM's Test-Tube Quantum Computer Makes History', last accessed 15-02-16.

  12. Vandersypen, L. M., Steffen, M., Breyta, G., Yannoni, C. S., Sherwood, M. H. and Chuang, I. L., 2001, 'Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance', Nature, 414, pp.883-887.

  13. Brumfiel, G., 2007, 'Quantum computing at 16 qubits', Nature News, last accessed 15-02-16.

  14. D-Wave, 'Introduction to the D-Wave Quantum Hardware', last accessed 15-02-16.

  15. Rich, S. and Gellman, B., 2014, 'NSA seeks to build quantum computer that could crack most types of encryption', The Washington Post, last accessed 15-02-16.

  16. Bostrom, N., 2003, 'Are we living in a computer simulation?', The Philosophical Quarterly, 53, pp.243-255.

  17. Drexler, K. E., 1992, 'Nanosystems: Molecular Machinery, Manufacturing, and Computation', John Wiley & Sons.

  18. Bradbury, R. J., 2000, 'Matrioshka Brains'.

  19. Tipler, F. J., 1986, 'Cosmological limits on computation', International Journal of Theoretical Physics, 25, pp.617-661.

  20. Riess, A. G., et al., 1998, 'Observational evidence from supernovae for an accelerating universe and a cosmological constant', The Astronomical Journal, 116, pp.1009-1038.
