**1. Probability and quantum mechanics ↑**

The Everett, or many worlds, approach to quantum mechanics does away with the objectively indeterminate universe suggested by the collapse approach. There is no objective uncertainty because every physical possibility is actualised^{[1]}.

Some may see this as a virtue. German-Swiss-American physicist Albert Einstein was critical of the collapse approach because of its reliance on objectively indeterminate probabilities - probabilities that relate to an uncertainty that cannot be explained by our ignorance of the physical system.

In a letter to German physicist Max Born, written in 1926, Einstein stated that:

"quantum mechanics is certainly imposing. But an inner voice tells me that it is not yet the real thing. The theory says a lot, but does not really bring us any closer to the secret of the 'old one'. I, at any rate, am convinced that He is not playing at dice"^{[2]}.

Despite this, we experience quantum events with well-measured probabilities.

**2. The incoherence problem ↑**

The Everett approach must first explain how it can ascribe probabilities to events at all, given that every physical possibility is certain to happen. This is known as the incoherence problem.

Everett attempted to resolve the incoherence problem by arguing that:

"the subjective experience...is precisely in accordance with the predictions of the usual probabilistic interpretation of quantum mechanics"^{[3a]}.

This means that from a subjective point of view, an observer will not be aware of every possibility, and so probabilities represent their chances of observing a specific result.

**3. The quantitative problem ↑**

Given that it makes sense to talk of probabilities within the Everett approach, a more serious problem arises. What good is it to say that an atom has a 1% chance of decaying in the next twenty-four hours, when there are only two possibilities: a world where it decays and a world where it does not?

The quantitative problem asks why Everett is justified in using the Born rule to assign probabilities, rather than assigning an equal probability to each branch.

Everett suggested that:

"in order to establish quantitative results, we must put some sort of measure (weighting) on the elements of a final superposition"^{[3b]}.

In the example above, the universe can be thought of as branching into 100 copies: the atom decays in one but not in the 99 others. These 99 worlds remain identical until new quantum interactions force them to diverge, and so they can be thought of as one world with a weight of 99.
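The arithmetic of this branch-counting picture can be sketched in a few lines. The amplitude value below is a hypothetical illustration, not anything from Everett's paper: squaring the amplitudes gives the Born-rule weights, and merging the 99 identical non-decay branches into one weighted world reproduces the same numbers.

```python
# Illustrative sketch (hypothetical numbers): the Born rule assigns each
# outcome a weight equal to the squared magnitude of its amplitude.
amp_decay = 0.1                          # hypothetical amplitude for the decay branch
amp_stable = (1 - amp_decay**2) ** 0.5   # remaining amplitude, so weights sum to 1

born_weights = {
    "decays": amp_decay**2,    # 0.01, i.e. the 1% chance of decay
    "stable": amp_stable**2,   # 0.99
}

# Branch-counting picture: 100 equal copies, of which 1 is a decay world
# and 99 are identical non-decay worlds merged into one world of weight 99.
branch_counts = {"decays": 1, "stable": 99}
total = sum(branch_counts.values())
counting_weights = {k: v / total for k, v in branch_counts.items()}

print(born_weights)
print(counting_weights)
```

Both pictures assign the decay world a weight of 0.01 and the non-decay world a weight of 0.99; the open question, as the next paragraph notes, is what such a 'weight' actually means.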

The meaning of the word 'weight' is still debated, but the quantitative problem is analogous to the problems raised by classical probabilities. When we throw a weighted dice, for example, we know that there are only six possible outcomes, and so this raises the question of why we are entitled to give them unequal probabilities.

**4. Decision theory ↑**

Classically, we understand probabilities in terms of decision theory. The decision-theoretic link states that it's rational for a person to use their objective knowledge of a system in order to determine how to act^{[4]}^{[5]}.

Objectively, we know that regular dice have a 1/6 chance of landing on any particular number, and that coins have a 1/2 chance of landing either heads or tails. Weighted dice and coins will have different probabilities associated with each outcome, and a rational person should try to bet on the number that has the highest objective probability.
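The decision-theoretic link can be made concrete with a small expected-value calculation. The probabilities and payoffs below are hypothetical, chosen only to illustrate how objective probabilities determine which bet a rational person should take.

```python
# Illustrative sketch (hypothetical payoffs): a rational bettor uses
# objective probabilities to pick the act with the highest expected payoff.

def expected_payoff(probabilities, payoffs):
    """Expected value of a bet: sum of probability * payoff over outcomes."""
    return sum(p * v for p, v in zip(probabilities, payoffs))

fair = [1 / 6] * 6                          # a regular die
loaded = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]     # a weighted die favouring face 6

# Betting one unit on face 6: win 4 units, or lose the stake otherwise.
payoff_on_six = [-1, -1, -1, -1, -1, 4]

print(expected_payoff(fair, payoff_on_six))    # negative: a losing bet
print(expected_payoff(loaded, payoff_on_six))  # positive: worth taking
```

The same stake is irrational against the fair die (expected loss of 1/6 of a unit) but rational against the weighted one (expected gain of 1.5 units), which is exactly the sense in which objective probabilities should guide action.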

The problem with this approach is that we do not know how to derive probabilities without knowing the symmetry of the system.

If we actually throw dice and count how many times each outcome occurs, then a set of objective probabilities is expected to emerge, but there is no certainty in this. No matter how many frequency trials we run, we can never know for certain whether the dice are weighted. There's always the possibility that we have just been unlucky.

This raises the question of why we should be rationally compelled to use our objective knowledge of probabilities when placing bets. There are three solutions to this problem: functionalism, primitivism, and eliminativism.

**4.1 Functionalism ↑**

Functionalism suggests that one day we'll have a better understanding of objective probability, and we'll be able to define it as a physical property. This property will be defined independently of the decision-theoretic link but will come to the same conclusions. The frequency approach is an example of a failed attempt at a functional definition.

**4.2 Primitivism ↑**

Our inability to find a functional definition of objective probability has led some to adopt the second approach, primitivism. Primitivism is the view that we should accept the decision-theoretic link as a fundamental law of nature and not look for a deeper explanation. This is a popular approach, but it does not seem entirely satisfactory: the decision-theoretic link is not similar to other concepts that we are willing to accept as fundamental, such as mass, charge, and spin.

**4.3 Eliminativism ↑**

The only option left is eliminativism. This is the view that there's no such thing as objective probabilities, and so nothing that can explain the decision-theoretic link. This does not seem acceptable, since the concept of objective probability is used in all branches of science as well as in ordinary life.

**5. Cautious functionalism ↑**

There's no satisfactory explanation for what classical probabilities really are, but primitivism, the denial that a functional definition exists beyond the decision-theoretic link, and eliminativism, the denial of objective probabilities altogether, should only be accepted once we have given up on finding a functional definition.

Cautious functionalism is the view that we will one day find a functional definition and, in the meantime, we can use the decision-theoretic link as such. This allows scientists to continue to use decision theory when considering objective probabilities.

There is no further justification for the use of classical probabilities, and so proponents of the Everett approach can defend their use of the Born rule in the same way that proponents of the collapse approach do, using the decision-theoretic link.

If Everett's use of the Born rule is correct, then we should still be rationally compelled to take the probability of each possibility into account, and to bet on the event that has the highest objective probability.

Proponents of the Everett approach can simply state that their definition of the Born rule meets the conditions set by the decision-theoretic link and defend its use on the basis of cautious functionalism.

British physicist David Deutsch showed that they can go further than this and prove that their concept of 'weight' fits the functional definition of objective probability^{[6]}. This is because it defines objective probability as a physical property that is independent of the decision-theoretic link but comes to the same conclusions about how to act when faced with uncertainty.

If we accept this claim, then proponents of the Everett approach can defend their use of the Born rule more fully than proponents of the collapse approach. But even if we do not accept Deutsch's proof, the Everett approach is no worse off than the collapse or Bohm approaches in this regard.

In a letter to American physicist Bryce DeWitt, written shortly after his theory was published in 1957, Everett stated that:

"from my point of view there is no preference for deterministic or indeterministic theories. That my theory is fundamentally deterministic is not due to any deep conviction on my part that determinism holds any sacred position...I only object to mixed systems where the character changes with mystical acts of observation"^{[7]}.