1. Scientific realism
Scientific realism is a branch of epistemology. Scientific realists claim that it's rational to believe science provides true knowledge about the world. Scientific antirealists claim that it's only rational to believe in certain aspects of science, usually those that we can verify with our own eyes.
1.1 Falsification and the problem of induction
The claim that we can be certain of scientific knowledge was disputed by Scottish philosopher David Hume, who argued that there's no necessary truth to the idea that the future will resemble the past.
In 1748, Hume argued that concepts like cause and effect are acquired by custom, instinct, and habit, and given our limited experience, we can never be certain that our experiences are objective. We may assume that all swans are white, for example, if we have only ever seen white swans, yet no number of observations of white swans can prove that all swans are white, while a single black swan can disprove it. This is known as the problem of induction.
In 1935, Austrian-British philosopher Karl Popper argued that science can sidestep the problem of induction through falsification[2a]. Popper suggested that science can be distinguished from pseudoscience because scientific theories can, in principle, be proven false. There's no reason to abandon a theory until it's falsified, but you can never be certain that any theory is correct.
1.2 The no-miracles argument and inference to the best explanation
The most persuasive argument for scientific realism - the idea that it's rational to believe some theories are correct - was developed by Australian philosopher Jack Smart in 1963[3a], and extended by American philosopher Hilary Putnam in 1975[4a]. This is known as the no-miracles argument.
The no-miracles argument states that we should infer that scientific theories tell us the truth about the world because, historically, science has been very successful. Canadian philosopher James Robert Brown defined the success of science in three ways:
Science organises and unifies a great variety of known phenomena.
Our ability to do this is more extensive now than in the past.
Science is statistically more successful at making novel predictions than it would be if we were just guessing.
Popper defined novel predictions as predictions that are not used in the construction of a theory, and that correctly predict aspects of a phenomenon we were not already aware of[2b]. Popper suggested that German-Swiss-American physicist Albert Einstein's theory of general relativity made a novel prediction when it implied that the Sun's mass would bend the path of starlight passing near it. British physicist Arthur Eddington verified this during the 1919 solar eclipse.
Scientific realists argue that it's not surprising that novel predictions are successful. This is what we would expect if the theory were true. If it were not true, then it would be very unlikely, miraculous even, if the novel predictions of a theory were proven correct.
Putnam claimed, "realism is the only philosophy that does not make the success of science a miracle"[4b]. Smart stated that it would be a "cosmic coincidence" if the theoretical entities suggested by physics were not real[3b].
The no-miracles argument relies on the claim that truth is the best explanation for success because it is the simplest explanation. This type of argument is known as inference to the best explanation.
Scientific antirealists argue that we do not have enough evidence to support this claim, and so do not accept the no-miracles argument.
One problem with inference to the best explanation is that there's no accepted definition of simplicity. Any amount of complexity is accepted in a theory as long as it is simpler than its competitors, and the discovery of extra complexity will not necessarily lead us to abandon it.
1.3 Pessimistic meta-induction
American philosopher Larry Laudan suggested the most persuasive argument for scientific antirealism in 1981. This is known as pessimistic meta-induction.
Laudan argued that we do not have a good enough reason to believe in the existence of theoretical entities, objects which we cannot see with our own eyes, because history is full of examples of scientific theories that were later shown to be false.
Newtonian physics was certainly successful, for example, yet Newton made assumptions that are inconsistent with the theory of general relativity. Newton's theory is therefore now considered wrong, and so we should infer that general relativity is probably wrong too. In fact, all of our current scientific theories are probably wrong.
1.4 Structural realism
The no-miracles argument and pessimistic meta-induction are both persuasive. The ability to build and use computers, for example, would appear to be a miracle if science doesn't provide a correct explanation for how they work. Yet it would be wrong to claim that science has reached a full understanding of the world, and it's true that science progresses by abandoning past theories, even ones that have been accepted for hundreds of years. A good explanation for scientific realism should take this into account.
In 1984, American philosopher Richard Boyd showed that scientific theories are rarely abandoned entirely; succeeding theories usually contain aspects of their predecessors. In 1905, French philosopher Henri Poincaré had already suggested that the structure of a theory carries over, as a limiting case, into its successor. The equations remain true because they preserve some aspect of reality, and this explains why a false theory can make successful novel predictions.
Newton's theory is a limiting case of Einstein's, and if we accept that this is how theories have progressed in the past, then we have a good reason to believe that they will continue to do so in the future. This is known as structural realism.
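One standard illustration of a limiting case (a textbook example, not one given in the source) is that relativistic kinetic energy reduces to the familiar Newtonian formula at speeds much less than that of light:

```latex
E_k = (\gamma - 1)\,mc^2,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
% For v \ll c, expanding to first order in v^2/c^2 gives
\gamma \approx 1 + \frac{v^2}{2c^2}
\quad\Longrightarrow\quad
E_k \approx \tfrac{1}{2}mv^2
```

Newton's equation is not discarded by relativity; it survives as the form the new theory takes in a restricted domain.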
One problem with this explanation is that it implies it would be a 'miracle' if a successful theory were overthrown without leaving any trace in its successor. This is because it would be miraculous if science were successful without containing some aspect of the truth.
1.5 Compensation of errors and curve fitting
Scientific antirealists argue that the success of science can be explained without assuming science refers to the truth. In 1983, philosopher Nancy Cartwright suggested that the predictive success of science may be an accident that arises from what Irish philosopher George Berkeley called the 'compensation of errors'. This means that adjustments are made until the correct observational effects are predicted; one incorrect adjustment can be corrected by another.
In 1980, philosopher Bas van Fraassen claimed that scientific theories are not successful because they are true, but because theories that do not make correct observational predictions are dropped, in the same way that natural selection weeds out species that fail to adapt to their environment.
Both of these approaches struggle to explain why science is so successful at making novel predictions. Brown argued that a novel prediction is analogous to a radical change in a species' environment, something natural selection cannot anticipate, and so the metaphor breaks down.
Van Fraassen claimed that we still don't have a good enough reason to believe in the existence of entities that cannot be verified by direct observation with the naked eye. This is because there are an infinite number of theories that give rise to the same observational results. This can be highlighted with the example of curve fitting.
There are an infinite number of curves that can be drawn through the same data points on a graph, and so scientists must make inferences beyond the data. They must assume that the simplest approach is correct. Van Fraassen argued that scientists are not justified in this claim, and that theories should only be described as 'empirically adequate', referring to the fact that they can successfully explain our observations. There's no way to know which, of an infinite number of empirically adequate theories, is really correct.
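The underdetermination at work here can be sketched numerically (the data and both models below are invented purely for illustration): given the same finite set of observations, a straight line and a higher-degree polynomial can both fit the data, yet make wildly different predictions beyond it.

```python
import numpy as np

# Hypothetical data: five observations, roughly linear with some noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])

# Theory A: a straight line, the 'simplest' curve through the data.
line = np.polynomial.Polynomial.fit(x, y, deg=1)

# Theory B: a degree-4 polynomial, which passes through every point exactly.
curve = np.polynomial.Polynomial.fit(x, y, deg=4)

# Both are 'empirically adequate': each reproduces the observations well.
print(max(abs(line(x) - y)))   # small residuals for the line
print(max(abs(curve(x) - y)))  # essentially zero for the polynomial

# Yet they disagree sharply about points we have not yet observed.
print(line(10.0), curve(10.0))
```

Nothing in the data alone tells us which model is correct; choosing the line is an appeal to simplicity, which is exactly the inference van Fraassen questions.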
Van Fraassen's argument draws an absolute distinction between observable and theoretical entities, yet no such sharp cut-off point is accepted in real life. This problem was highlighted by American philosopher Grover Maxwell in 1962[13a].
Maxwell stated that the continuous transition between what we see through "ordinary spectacles…an ordinary window pane" and "temperature gradients"[13b], through to the use of instruments such as microscopes and telescopes, shows that the distinction between observable and theoretical entities is vague and arbitrary. Maxwell claimed that because there's no logical connection between observation and existence, there's no reason to believe that unobservable things do not exist.
Van Fraassen accepted Maxwell's claim that the boundary between observable and theoretical entities is vague, but argued that this is not important, as there are many cases where a clear distinction can be found. There is a vast difference, for example, between the electron microscope - microscopes that use electrons, instead of light, to illuminate things - and the naked eye.
Images from electron microscopes: pollen from a lily, a weevil, and a Pyralidae moth. Image credit: Dartmouth Electron Microscope Facility/Public domain.
In 1985, New Zealand philosopher Alan Musgrave argued that van Fraassen's boundary does not make sense, since some people can see more with their naked eyes than others. Sight is something that varies from person to person, and is dependent upon our evolutionary history.
In 2001, British philosopher Philip Kitcher suggested that we can prove instruments like glasses and telescopes work because people with better natural vision can verify, with the naked eye, what others see through them.
By 1981, Canadian philosopher Ian Hacking had shown that this approach also applies to microscopes. This is evident when we resolve details of macroscopic objects, observe macroscopic objects at the same time as microscopic objects, or observe a reaction after interfering with a microscopic object. Hacking argued that this continuity can even be shown using instruments that depend upon the existence of theoretical entities in order to work, such as electron microscopes.
Van Fraassen claimed that these arguments are irrelevant; we simply need a stronger reason to believe in entities that we cannot verify with our own eyes.
Images from electron microscopes: the leaf and stem of a walnut plant. Image credit: Dartmouth Electron Microscope Facility/Public domain.
1.6 Entity realism
In 1982, Hacking argued that theoretical entities can be proven to exist without the need for theory or observation[17a]. This is because we can use them as tools to successfully build instruments.
Hacking stated that we are, for example:
"completely convinced of the reality of electrons when we regularly set out to build - and often enough succeed in building - new kinds of device that use various well understood causal properties of electrons to interfere in other more hypothetical parts of nature"[17b].
Hacking claimed that many experimental physicists are, in fact, realists about entities rather than theories.
The first objection to Hacking's entity realism is that his definition is not wide enough to allow for realism in astrophysics, since astronomical objects cannot be used to build instruments. In 1993, American philosopher Dudley Shapere suggested that this problem can be avoided by extending Hacking's argument to allow for more passive forms of interference.
The second objection was raised by philosopher David Resnik in 1994. Resnik argued that Hacking's entity realism fails because Hacking must rely on some kind of theory in order to judge that the results of an instrument are successful.
Perhaps we can never be certain that science tells us the truth about the world, but given that we can't even be certain the external world exists, this should not be too much of a surprise, nor should it prevent us from trying to understand the universe we are presented with as best we can.