How We Came to Know the Cosmos: Light & Matter

Chapter 30. The Limitations of Science

30.1 Epistemology and metaphysics

The philosophy of science asks whether science tells us the truth about the world. The branch of philosophy that is concerned with reality is known as metaphysics. The branch of philosophy concerned with what we can know is known as epistemology.

Scientific realism is a branch of epistemology that claims we have good enough reasons to believe that science provides a true description of the universe. Scientific antirealists, on the other hand, claim that it’s only rational to believe in certain aspects of science, usually those that we can verify with our own eyes. Some antirealists, like René Descartes (discussed in Chapter 26) and George Berkeley (discussed in Chapter 27), argue that we can’t even be certain that the external world exists.

30.2 Falsification and the problem of induction

The claim that we can be certain of scientific knowledge was disputed by David Hume’s argument that there’s no necessary truth to the idea that the future will resemble the past (discussed in Chapter 27). In 1748, Hume argued that concepts like cause and effect are acquired by custom, instinct, and habit, and given our limited perspective, we can never be certain that our experiences are objective.[1] We may assume that all swans are white, for example, if we have only ever seen white swans, yet no number of observations of white swans can disprove the idea that black swans exist. This is known as the problem of induction.

In 1935, the Austrian-British philosopher Karl Popper argued that science can be distinguished from pseudoscience because all scientific theories can, in principle, be proven false.[2] The discovery of a single black swan would falsify the theory that all swans are white, for example. This means that theories can only be disproven; we can never be certain that any theory is correct.

30.3 The no-miracles argument and inference to the best explanation

The most persuasive argument for scientific realism - the idea that it’s rational to believe science describes the true nature of the universe - may be the no-miracles argument. This was developed by the Australian philosopher Jack Smart in 1963,[3] and extended by the American philosopher Hilary Putnam in 1975.[4]

The no-miracles argument states that we should accept that current scientific theories are true because, historically, science has been very successful. The Canadian philosopher James Robert Brown defined the success of science in three ways:

  • Science organises and unifies a great variety of known phenomena.
  • Our ability to do this is more extensive now than in the past.
  • Science is statistically more successful at making novel predictions than it would be if we were just guessing.[5]

Popper defined novel predictions as predictions that are not used in the construction of a theory, and that correctly predict aspects of a phenomenon that we were not already aware of.[2] Popper suggested that Albert Einstein’s theory of general relativity (discussed in Book I) made a novel prediction when Einstein predicted that mass curves spacetime, and hence that the Sun should deflect the path of starlight passing close to it.[6] The British physicist Arthur Eddington verified this deflection during the 1919 solar eclipse.[7]

Scientific realists argue that it’s not surprising that novel predictions are successful; this is what we would expect if the theory were true. If it were not true, then it would be very unlikely, miraculous even, for the novel predictions of a theory to be proven correct. Putnam claimed, “realism is the only philosophy that does not make the success of science a miracle”.[4] Smart stated that it would be a “cosmic coincidence” if the theoretical entities suggested by physics were not real.[3]

The no-miracles argument relies on the idea that truth is the best explanation for the success of science because it’s the simplest explanation. This is similar to Ockham’s razor, the idea that the theory with the fewest assumptions is most likely to be correct. This type of argument is known as inference to the best explanation.

One problem with inference to the best explanation is that there’s no accepted definition of simplicity. Any amount of complexity is accepted in a theory as long as it is simpler than its competitor, and the discovery of extra complexity will not necessarily lead us to abandon it. Scientific antirealists argue that we don’t have enough evidence to accept arguments like Ockham’s razor, and so they do not accept the no-miracles argument.

Laws, theories, and hypotheses

A theory will never become a law; theories and laws are two separate things. Both have been thoroughly tested, and so both are often referred to as scientific facts.

Scientific laws

A scientific law states what happens. This is often a mathematical relationship between two or more things.

For example, “the Sun rises in the East” or Newton’s law of gravitation.
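
Newton’s law of gravitation, for example, states a mathematical relationship - the gravitational force between two masses - without explaining why that relationship holds:

```latex
F = G \, \frac{m_1 m_2}{r^2}
```

Here $F$ is the force between masses $m_1$ and $m_2$ separated by a distance $r$, and $G$ is the gravitational constant.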

Scientific theories

A scientific theory explains why it happens.

For example, the fact that the Sun rises in the East was explained by the theory that the Earth rotates from west to east. Newton explained the law of universal gravitation using the theory that gravity is a force.

Scientific hypotheses

A scientific hypothesis is an idea that has not been tested yet, but could be tested in principle. If a hypothesis survives testing, then it becomes a law or theory.

For example, geocentrism, the hypothesis that the Sun orbits the Earth, was proven wrong when it was tested. Heliocentrism, the hypothesis that the Earth orbits the Sun, was proven correct, and so became a theory.

Some confusion arises because some scientific hypotheses are popularly known as theories. This is because they are mathematical theories, and the mathematics will be true even if it’s shown not to apply to our universe.

For example, geocentrism has been proven wrong but is often called the theory of geocentrism because the mathematics it’s built on is correct.

How are things proven?

For a hypothesis to become a scientific law or theory, it must be proven. However, scientists generally accept that nothing other than mathematics can ever be proven with absolute certainty. Even if a prediction is verified hundreds of times, this alone does not mean it will be verified again.

Someone in the Northern Hemisphere might predict that all swans are white, for example, because they have only ever seen or heard of white swans, but this alone doesn’t mean that black swans don’t exist in the Southern Hemisphere.

Scientists can never prove anything absolutely, but they can try their best to prove a hypothesis wrong. They do this by testing scientific hypotheses using a range of techniques known as the scientific method.

The scientific method generally involves hypotheses that make novel predictions and that can be falsified. This means that, before they are tested, they clearly state what would have to happen for them to be proven wrong. A hypothesis generally becomes a scientific law or theory when it has been tested numerous times.


Pseudoscience is non-science that is packaged as science. Pseudoscientific ideas cannot be falsified because they do not tend to be revised when faced with contrary evidence; instead, the evidence may be dismissed.

30.4 Curve fitting and pessimistic meta-induction

Scientific antirealists argue that the success of science can be explained without assuming that science refers to the truth. In 1983, the American philosopher Nancy Cartwright suggested that the predictive success of science may be an accident that arises from what Berkeley called the ‘compensation of errors’.[8] This means that adjustments are made until the correct observational effects are predicted; one incorrect adjustment can be corrected by another.

In 1980, the Dutch-born philosopher Bas van Fraassen claimed that scientific theories are not successful because they are true, but because theories that don’t make correct observational predictions are dropped, in the same way that natural selection eliminates species that fail to adapt to their environment.[9] Both of these approaches fail to explain how science is so successful at making novel predictions. Brown claimed that a novel prediction is analogous to a radical change in a species’ environment, and so the metaphor breaks down.

Van Fraassen claimed that we still don’t have a good enough reason to believe in the existence of entities that cannot be verified by direct observation with the naked eye. This is because there are an infinite number of theories that give rise to the same observational results. This can be illustrated with the example of curve fitting.

An infinite number of curves can be drawn through any finite set of data points, and so scientists must make inferences beyond the data; they must assume that the simplest curve is correct. Van Fraassen argued that scientists are not justified in this assumption, and that theories should only be described as ‘empirically adequate’, referring to the fact that they can successfully account for our observations. There’s no way to know which of an infinite number of empirically adequate theories is really correct.
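
This point can be sketched numerically. The following is a minimal illustration with made-up data (the data points and curves are assumptions for illustration, not from the text): two different curves agree with every observation, yet make different novel predictions.

```python
import numpy as np

# Hypothetical observations, consistent with the simple 'law' y = x^2.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = x ** 2

def simple(t):
    """The simplest curve through the data."""
    return t ** 2

def rival(t):
    """A rival curve: add a term that vanishes at every observed x,
    so it agrees with the data exactly."""
    return t ** 2 + 0.1 * t * (t - 1) * (t - 2) * (t - 3) * (t - 4)

# Both 'theories' are empirically adequate - they fit every observation...
print(np.allclose(simple(x), y), np.allclose(rival(x), y))  # True True

# ...but they diverge on unobserved values, and the data alone
# cannot tell us which (if either) is correct.
print(simple(5.0), rival(5.0))  # 25.0 37.0
```

No finite set of observations can decide between the two curves; only an appeal to simplicity favours the first.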

Van Fraassen’s argument draws an absolute distinction between observable and theoretical entities, yet we don’t accept such a sharp cutoff point in real life. This problem was highlighted by the American philosopher Grover Maxwell in 1962.[10] Maxwell stated that the continuous transition between what we see through a pair of glasses, a windowpane, and temperature gradients in the atmosphere, through to the use of instruments such as microscopes and telescopes, shows that the distinction between observable and theoretical entities is vague and arbitrary.[10] Maxwell claimed that because there’s no logical connection between observation and existence, there’s no reason to believe that unobservable things do not exist.

In 1985, the New Zealand philosopher Alan Musgrave argued that van Fraassen’s boundary doesn’t make sense, because some people can see more with their naked eyes than others.[11] Sight is something that varies from person to person and depends on our evolutionary history.

Van Fraassen accepted the claim that the boundary between observable and theoretical entities is vague, but argued that this is not important, as there are many cases where a clear distinction can be made. There’s a vast difference, for example, between electron microscopes - which use electrons, instead of light, to illuminate objects - and the naked eye.

In 2001, the British philosopher Philip Kitcher suggested that we can prove instruments like glasses and telescopes work because those with better vision can verify what is seen by others.[12] By 1981, the Canadian philosopher Ian Hacking had shown that this approach also applies to microscopes.[13] This is evident when we resolve details of macroscopic objects, observe macroscopic objects at the same time as microscopic objects, or observe a reaction after interfering with a microscopic object. Hacking argued that this continuity can even be shown using instruments that depend on theoretical entities to work, like electron microscopes.

The American philosopher Larry Laudan may have suggested the most persuasive argument for scientific antirealism in 1981.[14] This is known as pessimistic meta-induction. Laudan argued that we do not have a good enough reason to believe in the existence of theoretical entities, objects which we cannot see with our own eyes, because history is full of examples of scientific theories that were later shown to be false.

Newtonian physics (discussed in Book I) was certainly successful, for example, yet Newton made assumptions that are inconsistent with the theory of general relativity. Newton’s theory is therefore now considered wrong, and by the same reasoning we should infer that general relativity is probably wrong too. In fact, all of our current scientific theories are probably wrong.

Figure 30.1: A Pyralidae moth, imaged with an electron microscope.

Figure 30.2: The leaf of a walnut plant, imaged with an electron microscope.

30.5 Structural realism

The no-miracles argument and pessimistic meta-induction are both persuasive. The ability to build and use technology like computers, for example, would appear to be a miracle if science doesn’t provide a correct explanation for how they work. Yet it would be wrong to claim that science has reached a full understanding of the world, and it’s true that science progresses by abandoning past theories, even ones that have been accepted for hundreds of years. A good explanation for scientific realism should take this into account.

In 1984, the American philosopher Richard Boyd showed that scientific theories are rarely abandoned entirely.[15] Succeeding theories usually contain aspects of their predecessors. In 1905, the French mathematician and philosopher Henri Poincaré had suggested that the structure of theories carries over, as limiting cases in succeeding theories.[16] The equations remain true because they preserve some aspect of reality, and this explains why a false theory can make successful novel predictions.

Newton’s theory is a limiting case of Einstein’s, and if we accept that this is how theories have progressed in the past, then we have a good reason to believe that they will continue to do so in the future. This is known as structural realism.
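
This limiting-case relationship can be sketched explicitly. In the weak-field, slow-motion limit (a standard result of general relativity, summarised here rather than derived), Einstein’s equations reduce to Newtonian gravity:

```latex
g_{00} \approx -\left(1 + \frac{2\Phi}{c^2}\right),
\qquad
\frac{d^2 \vec{x}}{dt^2} \approx -\vec{\nabla}\Phi,
\qquad
\Phi = -\frac{GM}{r}
```

The Newtonian potential $\Phi$ survives inside the metric component $g_{00}$ of the succeeding theory, which is the sense in which Newton’s equations remain true as a limiting case.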

30.6 Entity realism

In 1982, Hacking argued that theoretical entities can be proven to exist without the need for theory or observation.[17] This is because we can use them as tools to successfully build instruments. Hacking stated that we are convinced that electrons exist, for example, when we use them in electron microscopes. Hacking claimed that many experimental physicists are, in fact, realists about entities rather than theories.

The first objection to Hacking’s entity realism is that his definition is not wide enough to allow for realism in astrophysics since astronomical objects can’t be used to build instruments. In 1993, the American philosopher Dudley Shapere suggested that this problem can be avoided by extending Hacking’s argument to allow for more passive forms of interference.[18]

The second objection was raised by the philosopher David Resnik in 1994.[19] Resnik argued that Hacking’s entity realism fails because Hacking must rely on some kind of theory to interpret that instrumental results are successful.

Perhaps we can never be certain that the external world exists, or that science tells us the truth about the universe. We have, however, made a tremendous amount of progress by assuming that it does, and so this uncertainty should not prevent us from trying to understand the universe we are presented with as best we can.

30.7 References
