Sven Ove Hansson
Philosophy Unit
Royal Institute of Technology
100 44 Stockholm
Sweden
soh@infra.kth.se
Abstract: In addition to traditional fallacies such as ad hominem, discussions of risk contain logical and argumentative fallacies that are specific to the subject-matter. Ten such fallacies, which can commonly be found in public debates on risk, are identified. They are named as follows: the sheer size fallacy, the converse sheer size fallacy, the fallacy of naturalness, the ostrich’s fallacy, the proof-seeking fallacy, the delay fallacy, the technocratic fallacy, the consensus fallacy, the fallacy of pricing, and the infallibility fallacy.
Ever since Aristotle, fallacies have had a central role in the study and teaching of logical thinking and sound reasoning. (Walton 1987) It is not difficult to find examples of traditional fallacies such as ad hominem in any major modern discussion of a controversial issue. Discussions on risk are no exception. In addition, the subject-matter of risk (like many others) seems to invite fallacies of a more specific kind. The purpose of this short essay is to discuss ten logical and argumentative fallacies that can be found in public debates on risk. Some of these are not as rare in the scholarly and scientific literature as one might have hoped.
1. The sheer size fallacy
X is accepted.
Y is a smaller risk than X.
------------------------------
Y should be accepted.
This is one of the commonest fallacies in the lore of risk. It is not often found in recent scientific writings, but is still often heard from advocates of various risk-associated technologies: “You will have to accept this chemical exposure since the risk it gives rise to is smaller than the risk of being struck by lightning.” Or: “You must accept this technology, since the risks are smaller than that of a meteorite falling down on your head.”
The problem with these arguments is, of course, that we do not have a choice between the defended technology and the atmospheric phenomena referred to. Comparisons between risks can only be directly decision-guiding if they refer to objects that are alternatives in one and the same decision. When deciding whether or not to accept a certain pesticide, we need to compare it to other pesticides (or non-pesticide solutions) that can replace it. Comparisons to lightning, meteorites, background radiation, one cigarette a day – or any of the other popular examples in the rhetoric of risk – offer little guidance. (Hansson 1997)
Life can never be free of risks. We are forced by circumstances to live with some rather large risks, and we have also chosen to live with other, fairly large risks – typically because of the high value we assign to their associated benefits. If we were to accept, in addition, all proposed new risks that are small in comparison to some risk that we have already accepted, then we would all be dead.
Like several of the fallacies to be discussed below, the sheer size fallacy involves the treatment of risks as “free-floating” objects, dissected out of their social context – in this case, out of the context of associated benefits. Strictly speaking, it is on most occasions wrong to speak of acceptance of a risk per se. Instead, the accepted object is a package or social alternative that contains the risk, its associated benefits, and possibly other factors that may influence a decision.
2. The converse sheer size fallacy
X is not accepted.
Y is a larger risk than X.
------------------------------
Y should not be accepted.
This line of argument may be described as the converse of the sheer size fallacy, and it is equally fallacious. For an example, consider two pesticides X and Y, such that X can easily be replaced by some less harmful alternative, whereas Y can at present be dispensed with only at high economic costs. It may then be reasonable to accept Y but not X, even if Y gives rise to more serious risks to health and the environment.
Several of the fallacies to be treated below also have a converse form, but in what follows I will only state one of the two forms (namely the one that gives an invalid argument for acceptance, rather than non-acceptance, of a risk).
3. The fallacy of naturalness
X is natural.
------------------------------
X should be accepted.
Psychometric studies indicate that most of us are more concerned about those risks that we believe to be unnatural than about those that we conceive of as natural. Unfortunately, it is difficult to make more precise sense of this way of thinking, since naturalness is one of those concepts that seem almost to dissolve into nothing when subjected to logical analysis. As an example of this, consider the notion of “natural foods”.
With a very stringent definition of “natural”, it may be claimed that only the foods that were available to pre-civilization humans are natural. This would exclude all foodstuffs that derive from domestic animals or require heating for their preparation. At the other extreme, a very liberal definition of “natural” can be based on the argument that we humans are biological creatures, so that all products of our civilization are also products of nature. According to this latter definition, everything that we eat is natural.
The intuitive notion of “naturalness” seems to lie somewhere between these two extremes, but where, more precisely, should the limit be drawn? Is heavily salted meat a “natural” food component? And what about frozen meat? The latter is of later origin and depends on more advanced technology, whereas the former deviates more in its chemical composition from what pre-civilization humans can have eaten. And is fried meat natural? Tea? Strawberries and grapefruit (which are outcomes of plant breeding)? Yoghurt? Vitamin C tablets?
The terms “natural” and “unnatural”, as used in this and many other contexts, have a strong normative component. By saying that a child’s behaviour is natural you call for acceptance or at least tolerance. By saying that it is unnatural you condemn it. More generally, we tend to call those, and only those, fruits of human civilization unnatural that we dislike. What is commonly accepted is not considered unnatural. I have not heard anyone call it unnatural to boil contaminated water before drinking it or to wear glasses, but pasteurization has been given that designation, and so (in certain religious circles) has the use of condoms. Not many of those who call homosexuality “unnatural” would change their views if it were proved that humans have a biologically based tendency to homosexuality.
Hence, naturalness is not a scientific concept, based on our knowledge of human biology, but rather a teleological concept that expresses beliefs about how humans are suited or destined to live their lives. Naturalness is one of the major rhetorical figures used in debates on risk. It is often more correct to say that we call something natural because we accept it than to say that we accept it because it is natural. The fallacy of naturalness is a case of petitio principii; a discussant who is not already convinced that a risk is acceptable cannot be expected to regard it as natural either.
There may be valid reasons to treat some of the risks that we call natural less strictly than some of those that we call artificial, but naturalness per se is not among these reasons. We eat meat and potatoes although we would never accept a new pesticide with as incomplete toxicity data as we have for these foodstuffs. However, this is not – or at least it should not be – because of their elusive “naturalness”, but because of the associated benefits. We can live on meat and potatoes, but not on DDT and preservatives.
4. The ostrich’s fallacy
X does not give rise to any detectable risk.
------------------------------
X does not give rise to any unacceptable risk.
The standpoint that indetectable effects are no matter of concern is a common implicit assumption in both scientific and more popular discussions of risk. On occasion we also find it stated explicitly. An unusually clear example is a statement by a former chairman of the American Conference of Governmental Industrial Hygienists (ACGIH), a private standard-setting body with a strong influence on occupational exposure limits throughout the world. He conceded that the organization’s exposure limits “can never be used to guarantee absolute safety”, but found it sufficient that “they can be used to control adverse health effects of all types below the point at which they cannot be distinguished from their background occurrence”. (Mastromatteo 1981) Similarly, the Health Physics Society wrote in a position statement:
“…[E]stimate of risk should be limited to individuals receiving a dose of 5 rem in one year or a lifetime dose of 10 rem in addition to natural background. Below these doses, risk estimates should not be used; expressions of risk should only be qualitative emphasizing the inability to detect any increased health detriment (i.e., zero health effects is the most likely outcome).” (Health Physics Society 1996)
Effects can be detectable either on the individual or only on the collective level. The following hypothetical example can be used to clarify the distinction. (Hansson 1999) There are three substances A, B, and C, and 1000 persons exposed to each of them. Exposure to A gives rise to hepatic angiosarcoma among 0.5 % of the exposed. Among unexposed individuals, the frequency of this disease is very close to 0. Therefore, the individual victims can be identified. This effect is detectable on the individual level.
Exposure to B causes a rise in the incidence of leukemia from 1.0 to 1.5 %. Hence, the number of victims will be the same as for A, but although we know that about ten of the roughly fifteen leukemia patients would also have contracted the disease without being exposed to the substance, we cannot find out who these ten patients are. The victims cannot be identified. On the other hand, the increased incidence will be distinguishable from random variations (given the usual criteria for statistical significance). Therefore, the effect of substance B is detectable on the collective (statistical) but not on the individual level.
Exposure to C leads to a rise in the incidence of lung cancer from 10.0 to 10.5 %. Again, the number of additional cancer cases is the same as for the other two substances. Just as in the previous case, individual victims cannot be identified. In addition, since the difference between 10.0 and 10.5 % is indistinguishable from random variations, the effects of this substance are indetectable even on the collective level.
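The arithmetic behind this example can be made explicit. The following sketch (in Python, using the incidence figures stipulated above; the labels are only illustrative) computes the number of cases among the exposed, the excess cases caused by the exposure, and the share of cases attributable to the exposure. The excess is the same for all three substances; what differs is how large a share of the observed cases it makes up, which is why only the victims of A can be identified as such.

```python
# A minimal sketch of the hypothetical A/B/C example, with the stipulated figures.
EXPOSED = 1000  # persons exposed to each substance

substances = {
    # name: (background incidence, incidence among the exposed)
    "A (angiosarcoma)": (0.000, 0.005),
    "B (leukemia)":     (0.010, 0.015),
    "C (lung cancer)":  (0.100, 0.105),
}

for name, (p_background, p_exposed) in substances.items():
    cases = p_exposed * EXPOSED                    # cases among the exposed
    excess = (p_exposed - p_background) * EXPOSED  # cases caused by the exposure
    attributable = excess / cases                  # share of cases due to the exposure
    print(f"{name}: {cases:.0f} cases, of which {excess:.0f} excess "
          f"({attributable:.0%} attributable to the exposure)")
```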
From the perspective of an irresponsible company, B is preferable to A, and C is quite acceptable since there is no danger of being caught. From the perspective of a company that worries about the health of its employees, neither B nor C is really better than A. Only the latter viewpoint seems to me to be morally defensible.
I have called this the “ostrich’s fallacy”, honouring the biological folklore that the ostrich buries its head in the sand, believing that what it cannot see is no problem.
5. The proof-seeking fallacy
There is no scientific proof that X is dangerous.
------------------------------
No action should be taken against X.
Science has fairly strict standards of proof. When determining whether or not a scientific hypothesis should be accepted for the time being, the onus of proof falls squarely on its adherents. Similarly, those who claim the existence of an as yet unproven phenomenon have the burden of proof. These proof standards are essential for both intra- and extra-scientific reasons. They prevent scientific progress from being blocked by the pursuit of all sorts of blind alleys. They also ensure that the corpus of scientific knowledge is reliable enough to be useful for (most) extra-scientific applications.
In many risk-related issues, standards and burdens of proof have to be different from those used for intrascientific purposes. Consider a case when there are fairly strong indications that a chemical substance may be highly toxic, although the evidence is not (yet) sufficient from a scientific point of view. It would not be wise to continue unprotected exposure to the substance until full scientific proof has been obtained. According to the precautionary principle, we must be prepared to take action in the absence of full scientific proof. (Sandin 1999, Sandin et al 2001)
We can borrow terminology from statistics, and distinguish between two types of errors in scientific practice. The first of these consists in concluding that there is a phenomenon or an effect when there is in fact none (type I error, false positive). The second consists in missing an existing phenomenon or effect (type II error, false negative). In science, errors of type I are in general regarded as much more problematic than those of type II. (Levi 1962, pp. 62–63) In risk management, type II errors – such as believing a highly toxic substance to be harmless – are often the more serious ones. This is the reason why we must be prepared to accept more type I errors in order to avoid type II errors, i.e. to act in the absence of full proof of harmfulness.
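To make the trade-off concrete, here is a small Monte Carlo sketch (in Python, with entirely hypothetical incidence figures and group sizes chosen for illustration, not taken from the text): it estimates how often a simple one-sided test at two different significance levels declares a harmless substance dangerous (type I error) and how often it misses a truly harmful one (type II error). Relaxing the standard of proof raises the first rate but lowers the second.

```python
# A hedged illustration of the type I / type II trade-off: a laxer standard of
# proof (larger alpha) produces more false alarms but misses fewer real hazards.
# All numbers are hypothetical and chosen only for illustration.
import random
from statistics import NormalDist

random.seed(1)
N = 200               # size of the exposed group in each simulated study
P_BACKGROUND = 0.02   # assumed background incidence of the disease
P_HARMFUL = 0.04      # assumed incidence if the substance really is harmful
TRIALS = 5000         # number of simulated studies per scenario

def one_sided_p(cases, n, p0):
    """Approximate p-value for seeing at least `cases` events out of n
    if the true incidence were the background rate p0 (normal approximation)."""
    se = (p0 * (1 - p0) / n) ** 0.5
    z = (cases / n - p0) / se
    return 1 - NormalDist().cdf(z)

def simulate(true_p):
    """Simulate one study and return its p-value against the background rate."""
    cases = sum(random.random() < true_p for _ in range(N))
    return one_sided_p(cases, N, P_BACKGROUND)

for alpha in (0.05, 0.20):
    type1 = sum(simulate(P_BACKGROUND) < alpha for _ in range(TRIALS)) / TRIALS
    type2 = sum(simulate(P_HARMFUL) >= alpha for _ in range(TRIALS)) / TRIALS
    print(f"alpha = {alpha:.2f}: type I rate ~ {type1:.2f}, type II rate ~ {type2:.2f}")
```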
6. The delay fallacy
If we wait we will know more about X.
------------------------------
No decision about X should be made now.
In many if not most decisions about risk we lack some of the information that we would like to have. A common reaction to this predicament is to postpone the decision. It does not take much reflection to realize the problematic nature of this reaction. In the period when nothing is done, the problem may get worse. Therefore, it may very well be better to make an early decision on fairly incomplete information than to make a better-informed decision at a later stage.
It must also be observed that in some cases, scientific uncertainty is recalcitrant and not resolvable through research, at least not in the short or medium run. Many of the technical issues involved in assessing risks are not, properly speaking, scientific but, in Alvin Weinberg’s (1972) term, “trans-scientific”, i.e. they are “questions which can be asked of science and yet which cannot be answered by science”. A further complication is that new scientific information often gives rise to new scientific uncertainty. New results may cast doubt on previous standpoints, and they may also create new uncertainty by revealing mechanisms or phenomena that were previously unknown.
The search for new knowledge never ends, and there is almost no end to the amount of information that one may wish to have in a risk-related decision. Since the premise of the delay argument (“If we wait we will know more about X”) is true at all stages of a decision process, this argument can almost always be used to prevent risk-reducing actions. Therefore, from the viewpoint of risk reduction, the delay fallacy is one of the most dangerous fallacies of risk.
7. The technocratic fallacy
It is a scientific issue how dangerous X is.
------------------------------
Scientists should decide whether or not X is acceptable.
It should be a trivial insight, but it needs to be repeated again and again: Competence to determine the nature and the magnitude of risks is not competence in deciding whether or not risks should be accepted. Decisions on risk must be based both on scientific information and on value judgments that cannot be derived from science.
Examples are easily found of how value judgments have been disguised in scientific garb. A clear case is that of the official German committee for occupational exposure limits. In statements by that committee, it is claimed that its exposure limits are based exclusively on scientific information on health effects, and thus unaffected by economic, political, or technological considerations. In spite of these declarations, the committee’s decisions can be shown to have been influenced by techno-economic factors. (Hansson 1998a)
There seems to be a fairly general tendency to describe issues of risk as “more scientific” than they really are. Wendy Wagner (1995) concluded from her study of the EPA and its external relations that there is a massive “science charade” going on: policy decisions are camouflaged as science. Instead of discussing, on the policy level, how to handle scientific uncertainty, authorities, industry, and environmentalists send forward scientists to argue about technical details that are not the real issue. In this way, the entire decision-making procedure is mischaracterized as much more “scientific” than it can actually be. This mischaracterization, Wagner says, can create obstacles to democratic participation in the decision-making process.
8. The consensus fallacy
We must ask the experts about X.
------------------------------
We must ask the experts for a consensus opinion about X.
The conventional approach to science advising is to search for consensus so far as this is at all possible. Scientific expert committees have a strong tendency to opt for compromises whenever possible. Even if the initial differences of opinion are substantial, discussions are continued until consensus has been reached. It is extremely unusual for minority opinions to be published. Instead, committees of scientists end up with a unanimous – although sometimes watered-down – opinion on issues that are controversial in the scientific community. (The Swedish Criteria Group for occupational exposure limits has gone so far as to call its reports “consensus reports”. No minority opinion has ever been published. (Hansson 1998a))
The search for consensus has many virtues, but in advisory expert committees it has the unfortunate effect of underplaying uncertainties and hiding away alternative scenarios that might otherwise have come up as minority opinions. If there is uncertainty in the interpretation of scientific data, then this uncertainty can often be reflected in a useful way in minority opinions. Therefore, it is wrong to believe that the report of a scientific or technical advisory committee is necessarily more useful if it is a consensus report.
There are also other ways in which scientific uncertainty can be reported. Scientists can (perhaps unanimously, perhaps not) describe scientific uncertainties in ways that are accessible to decision-makers. The Intergovernmental Panel on Climate Change (IPCC) does this in an interesting way: it systematically distinguishes between “what we know with certainty; what we are able to calculate with confidence and what the certainty of such analyses might be; what we predict with current models; and what our judgement will be, based on available data.” (Bolin 1993)
9. The fallacy of pricing
We have to weigh the risks of X against its benefits.
------------------------------
We must put a price on the risks of X.
There are many things that we cannot easily value in terms of money. I do not know which I prefer: $8000, or that my child gets a better mark in math. If I am actually placed in a situation in which I can choose between the two, the circumstances will be crucial for my choice. (Is the offer an opportunity to bribe the teacher, or is it an efficient extra course that I only have to pay for if he achieves a better mark?) There is no general-purpose price that can meaningfully be assigned to my son’s receiving the better grade, simply because my willingness to pay will depend on the circumstances.
Similar situations often arise in issues of risk, including those that involve the loss of human lives. We cannot pay unlimited amounts of money to save a life. The sums that we are prepared to pay in a specific situation will depend on the particular circumstances. Again, general-purpose prices are not useful as decision-guides.
On the contrary, such pricing tends to hide the fact that these are decisions made under conditions that have all the characteristics of moral dilemmas. Our competence as decision-makers is increased if we recognize a moral dilemma when we have one, rather than misrepresenting it as an easily resolvable decision problem. (Hansson 1998b)
10. The infallibility fallacy
Experts and the public do not have the same attitude to X.
------------------------------
The public is wrong about X.
Much of the public opposition to risks has been directed at the risks inherent in complex technological systems, which can only be assessed through the combination of knowledge from several disciplines. From the viewpoint of experts, this is often seen primarily as a matter of information or communication: the public has to be informed. However, this is only part of the problem. There is also another aspect that is often neglected: The experts may be wrong.
Experts have been wrong on many occasions. A rational decision-maker will have to take into account the possibility that this may happen again. (Hansson 1996) In fact, when the output of a risk analysis of a complex technology indicates a low level of risk, the possibility that this analysis was wrong may very well be a dominant part of the legitimate concerns that a rational decision-maker can and should have with respect to the technology in question.
When there is a wide divergence between the views of experts and those of the public, this is certainly a sign of failure in the social system for the division of intellectual labour. However, it does not necessarily follow that this failure is located within the minds of the non-experts who distrust the experts. It cannot be a criterion of rationality that one takes experts to be infallible.
Avoiding the fallacies
Discussions on risk are much influenced by their political context. Proponents of technologies that are associated with risks tend to use a rhetoric of rationality; they maintain that the acceptance of certain risks is required by rationality. Their adversaries are prone to employ a rhetoric of morality, claiming that it is morally reprehensible to make use of these same technologies. In such a polarized situation it is no surprise to find arguments being used that are less impressive from an inferential than from a rhetorical point of view.
Arguably, this is how it has to be. A fallacy-free public discussion of a contested social issue is probably an idle dream. Therefore, the task of exposing fallacious reasoning is much like garbage collecting: neither task can ever be completed, since new material to be treated arrives all the time. However, in neither case does the perpetuity of the task make it less urgent. In order to improve the intellectual quality of public discussions on risks, it is essential that more academics take part in these discussions, acting as independent intellectuals whose mission is not to advocate a standpoint but to promote science and sound reasoning.
References
Bolin, B. (1993) “Global Climate Change an Issue of Risk Assessment and Risk Management”. Presented as NWO/Huygens lecture, 16 November 1993.
Hansson, S. O. (1996) “Decision-Making Under Great Uncertainty”, Philosophy of the Social Sciences 26:369-386.
Hansson, S. O. (1997) “Incomparable risks”, pp. 594–599 in B-M Drottz Sjöberg (ed.), Proceedings, Annual Meeting of the Society for Risk Analysis-Europe, New Risk Frontiers, Stockholm.
Hansson, S. O. (1998a) Setting the Limit. Occupational Health Standards and the Limits of Science. Oxford University Press.
Hansson, S.O. (1998b) “Should we avoid moral dilemmas?”, Journal of Value Inquiry 32:407-416.
Hansson, S. O. (1999) “The Moral Significance of Indetectable Effects”, Risk 10:101-108, 1999.
Health Physics Society (1996) Radiation Risk in Perspective. Position statement of the Health Physics Society, adopted January 1996. Downloadable from http://www.hps.org/.
Levi, I. (1962) “On the seriousness of mistakes.” Philosophy of Science 29:47–65.
Mastromatteo, E. (1981) “On the concept of threshold.” American Industrial Hygiene Association Journal 42: 763–770.
Sandin, P. (1999) “Dimensions of the Precautionary Principle”, Human and Ecological Risk Assessment 5, 889-907.
Sandin, P., Peterson, M., Hansson, S. O., Rudén, C., and Juthe, A. (2001) “Five Charges Against the Precautionary Principle”, Journal of Risk Research, in press.
Wagner, W. E. (1995) “The Science Charade In Toxic Risk Regulation”, Columbia Law Review 95:1613–1723.
Walton, D. N. (1987) Informal Fallacies: Towards a Theory of Argument Criticisms. Amsterdam: J. Benjamins.
Weinberg, A. M. (1972) “Science and Trans-Science.” Minerva 10: 209–222.