A Precautionary Tale: Towards a Sustainable Philosophy of Science, Andrew Michael Baker

* School of Natural Resource Sciences, Queensland University of Technology (Australia)

 

 

Abstract: Sustainable management of dwindling resources is perhaps the biggest challenge facing the human species. Successfully addressing this challenge requires a holistic perspective: a nebulous connection across the disparate realms of science, economics and sociopolitics. Here, I examine some important historical philosophical ideas in our understanding of science. I relate these ideas to how science is generally perceived today, and I question how our view of science is applied through modern policy incorporating a variant of the ‘precautionary principle’, a notion that essentially attempts to articulate a cautious approach to management in our rapidly changing world. I conclude that deeper philosophical thought would be most welcome: both for clearer purpose within science itself and in order to move forward more strategically in applied areas, such as the sustainable management of our planet.

 

 

 

In the world today, it is becoming increasingly clear that finding a sustainable solution to our resource crisis is the key to our long-term survival as a species. Topics such as biodiversity loss, land degradation, environmental restoration and, most notably of late, global warming, fuel ongoing debate. They have been scrutinised and thrust onto the world stage. Public opinion on these issues is most welcome, even if criticised as being whipped up largely by media frenzy rather than rational deliberation. It plays a crucial part in the democratic process on the path toward policy change and a more sustainable future for us all. And yet finding solutions to sustainability issues remains humanity’s greatest challenge: it requires a broad perspective, spanning the realms of science, economics and policy. In rewriting policy, science must grapple head-on with the law, an often unhappy union. We must also strive towards a fusion between, on the one hand, the goal of an objective, methodical interpretation of our world and, on the other, our subjective value judgements of how changes to the environment may impact upon our lives. In our search for these sustainable solutions, we continue the struggle to be precautionary in our approach to environmental management.

 

At the 1992 Earth Summit in Rio, representatives of more than 100 countries united to forge a blueprint for global sustainable development in the 21st Century. A key statement emerging from these meetings, Principle 15 of the Rio Declaration, became known as ‘The Precautionary Principle’. Here is a bite-size chunk of it:

 

“In order to protect the environment, the precautionary approach shall be widely applied … where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.” (UNCED 1993)

 

In short, this principle means: if you are going to do something that might harm the environment in irreversible ways, then be careful. It springs from the old rule taught to first-year medical students: ‘First, do no harm.’ Suppose you are going to build a high-rise complex: to get permission to proceed, you first need to demonstrate that you are not going to do any environmental damage. Given the difficult wording of the principle and its importance for global management and restoration projects, it should not surprise us that in the years following the Rio Earth Summit there was heated debate in all circles: economics, law and science (see, for example, Gollier, Jullien, and Treich 2000; Saladin 2000; Stewart 2002).

 

The terms of Principle 15 have been discussed by others (e.g., Kaiser 1997; Lemons, Shrader, and Cranor 1997; Martin 1997): the wording gives a somewhat muddied view of the role of science in the process of being sustainable. The fact that the Precautionary Principle is worded in slightly different ways in various international laws has caused further confusion. Because of this, efforts have been made to clarify how science should be viewed in such statements, particularly in regard to the nature of science (e.g., Arrow et al. 1996; Downes et al. 2002) and the gap this may create between developments in science and policy (Bradshaw and Borchers 2000).

 

Six years after the Rio Declaration, 32 participants met at the Wingspread Conference Centre in Wisconsin, USA. The global community was represented at the conference by a diverse array of stakeholders: international lawyers, economists, scientists, indigenous persons and laypersons. The result of the meeting was a series of statements which tried to clarify some of the confusion in the wording of the original Rio Declaration. In this attempt, the participants introduced a notion of ‘cause and effect’ into the documentation. Here is a relevant section:

 

“Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.” (Wingspread 1998)

 

The notion of cause and effect was subsequently adopted by the European Union in 2000 and has since been used in other sustainable management documents around the globe. Consider a recent example: in May 2007, the World Conservation Union (IUCN) proposed new guidelines for applying the precautionary principle to global biodiversity conservation and resource management. In this document, they stipulate that the precautionary principle should be most relevant where there is scientific uncertainty, and that where a threat is relatively certain, more preventative measures should be taken. They define ‘relatively certain’ as where a causal link between an action and environmental damage can be established (IUCN 2007).

 

In this paper, I would like to take a different approach from other published critical works on this topic. I will discuss some important points about the continued misuse of the ‘cause and effect’ concept in modern policy, in relation to the work of the philosophers David Hume and Karl Popper.

 

The Precautionary Principle forms a cornerstone of sustainable management, yet philosophy shows that we are missing something important in how we continue to portray science to the world. My main purpose in discussing these problems is to raise awareness of the importance of understanding clearly both the strengths and the weaknesses of science. And I want to argue the need for deeper philosophical thought about the role of science in areas such as sustainable management.

 

The philosophy of Karl Popper forms the foundation of the modern scientific method. Popper lived in the 20th Century and his work is important not only to modern science but also to our notions of social change (refer to Popper 1959, 1962; Magee 1985, 2001). Basically, Popper believed living was an ongoing process of problem-solving. The great internal struggle Popper faced in developing his philosophy of science was linked to the established ‘problem of induction’.

 

David Hume, a Scottish philosopher who lived about two hundred years before Popper, had become famous for proposing his classic problem of induction. In this and other ways, the work of these two great thinkers is related, and it has relevance to our application of science, via policy documents, to secure a sustainable future, as we shall see.

 

At the time of David Hume, induction (thanks largely to Francis Bacon [1620]) held sway and formed a cornerstone of science. Induction may be viewed as a process of arriving at a general pattern or law based on a body of evidence from specific examples. For instance, I may notice that swans living in various parts of England are white. Then I may travel to Europe and notice, at a number of different locations, white swans. I notice that wherever I go, if ever I see a swan, it is white. I may then put forward a general rule to try and explain this situation: all swans are white. This is a very nice example of induction, because it also demonstrates its flaw: there can always be an exception to the rule. If I go to Western Australia, I will see black swans. So part of Hume’s problem with induction was that, because there can always be an exception to the rule, it is not logical to use it. Of course, the great attraction for modern science is that by using induction we can generalise, form a theory, and make great imaginative strides forward in thinking. But Hume’s age-old argument was that this method of thinking has a fatal flaw: no matter how many swans I look at, there can always be another one that does not fit my proposed theory. Another aspect of Hume’s problem was this: the notion that the future will resemble the past cannot be justified (Hume 1739). Taken to its sceptical conclusion, Hume’s problem is far-reaching indeed. He claimed that we cannot use induction in our attempts to understand nature, because to do so assumes that this process will work in each successive case, an assumption which is itself inductive. In other words, we would be using an inductive process to justify induction, which is circular. But neither can we justify uniformity in nature deductively, because there is no logical contradiction in supposing that the future will not resemble the past (Hume 1739). Since these are the only ways of justifying conclusions from premises, induction is never justified.

 

Hume’s idea is so fundamental and well articulated that it has never, to this day, been shown wrong. Almost two centuries later, Karl Popper’s greatest contribution was to sidestep (yet not solve) Hume’s problem of induction; in doing so, he paved the way to our modern scientific method. Popper did this by making induction less important in science: he wanted us to boldly use induction in proposing theories, but then to carefully test them and try to disprove them. To Popper, it is the testing of theories that is the critical part of the process: induction merely helps the creative juices flow in coming up with a theory to test.

 

In the face of the problem of induction, Popper’s chief success was in not falling into the obvious trap so neatly revealed by Hume. Popper does not try to provide further evidence, or find another case, that supports a theory. Instead, he tries to find evidence that disagrees with the theory (Popper 1959; Magee 1985). In this way, if we find evidence of a swan that is not white, we can reject the theory that all swans are white and progress in a logical way. So, although Popper advocates the use of induction in theory formation, the major focus of his method is the logical and critical testing of otherwise tentative theories.

 

So what happens when we find our black swan? Well, we can modify the theory about our swans and test it again using Popper’s method. This process of testing, followed by modification and further testing, goes on and on, endlessly. (And did you know that in Chile there are swans with black necks and white bodies!) That is science; there is no final answer. There can be no final answer. Put simply, there can always be an exception to the rule, and the scientist must keep searching for it, forever. We must know that on this journey we can never reach our destination. To coin a phrase for Popper’s scientific method: there is no truth, only progress. ‘Scientific certainty’ is therefore nonsense: one term contradicts the other.

 

And yet Popper’s scientific method has its problems. Work by other 20th Century philosophers, such as Thomas Kuhn (1962) and Imre Lakatos (1970), has shown that Popper’s rejection of theories can be too harsh: from a single case that doesn’t fit, an otherwise good theory must be thrown out or modified. There are other philosophical issues. No theory can be tested on observations alone; every test carries some baggage: internal assumptions about the accuracy of the experimental conditions and equipment. So we may, at best, falsify the theory coupled with our auxiliary hypotheses. To determine that it is the theory in isolation which is falsified, we must demonstrate that the auxiliary hypotheses are true, which is akin to using induction. This problem is often referred to as the Duhem–Quine thesis (see Quine 1961; Duhem 1962). There is another issue associated with hypothesis testing. Consider the example used previously: when we look for our non-white swan, the testing process is fairly clear-cut: either the swan is white or it is not. Observation of a black swan would falsify the hypothesis. But other hypotheses are more difficult to falsify. Consider, for example, testing the hypothesis that you are tossing a fair (unbiased) coin. If you throw a long, unbroken sequence of tails, then to determine whether you can reject the hypothesis you need to use probability and impose confidence limits on the decision-making process. We use such a procedure regularly in science, but even with a very long sequence of tails our hypothesis of an unbiased coin at best only becomes highly improbable (see Schilpp 1974). And we must ask at what point it is anything other than arbitrary to consider the hypothesis falsified under a Popperian paradigm.
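
To make the coin-tossing point concrete, here is a minimal sketch (my own illustration, not drawn from any of the works cited): it computes the probability of an unbroken run of tails under the fair-coin hypothesis and applies a conventional rejection threshold. The probability shrinks rapidly but never reaches zero, so ‘falsification’ here rests on a cut-off we choose ourselves.

    # Probability of an unbroken run of tails, assuming the coin is fair.
    # The hypothesis is never strictly falsified: it only becomes ever more
    # improbable, and the rejection threshold (alpha) is our own choice.

    def prob_all_tails(n_tosses: int) -> float:
        """P(n consecutive tails | fair coin) = 0.5 ** n."""
        return 0.5 ** n_tosses

    ALPHA = 0.05  # the conventional, but ultimately arbitrary, cut-off

    for n in (5, 10, 20, 50):
        p = prob_all_tails(n)
        verdict = "reject 'fair coin'" if p < ALPHA else "retain 'fair coin'"
        print(f"{n:>3} tails in a row: p = {p:.3g} -> {verdict}")

However long the run of tails, p never reaches zero; the decision to ‘falsify’ is made by us, at a threshold of our own devising.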

 

So Popper’s approach has been criticised as too idealistic or naive: it takes out the very human element of science. Our innate ‘humanness’ and the limits this imposes on our ability to interpret the world is at the very core of our process of understanding, so it must be of first concern to us in any method we use to gain knowledge. And yet in one sense, the very limitations with falsificationism outlined above serve to highlight how important a philosophical outlook is in science; scientists must be aware at a deeper level of how we interpret the world and the implications this has on the process of gaining knowledge.

 

It is recognised that Popper made important inroads with his notion of rigorously testing hypotheses, thus avoiding the obvious pitfall of induction. Yet, as outlined above, there are internal inconsistencies in the falsification process associated with the Popperian assessment of theories. So we see that Hume’s central issue with induction retains some of its sting. Further, Hume’s problem extends in a broader sense to the very method we use to attain knowledge: there is no final ‘correct’ approach to interpreting the world. We must continue our search for a better approach than the one we have.

 

And now we may return to the Precautionary Principle. As stated earlier, the original Rio documentation is unclear in its wording, using triple negatives and muddy phrasing. At the Wingspread Conference in 1998, an attempt was made to rework the Precautionary Principle in relation to science, but aspects of the new document are even less clear. This is largely due to the introduction of the cause and effect concept. However, my main point of contention lies with the more clear-cut misuse of this concept in subsequent policy. In the recent IUCN document, for example, it was stated that where an environmental threat is relatively certain, strict preventative measures should be taken, and ‘relatively certain’ was defined as where a causal link between an action and environmental damage can be established (IUCN 2007). But consider this: how can we, in applying the precautionary principle, ever logically demonstrate a causal link between human activities and damage to biodiversity or natural resources?

 

To get a handle on the problems here, we need to examine the idea of ‘cause and effect’ more closely. Earlier, I mentioned David Hume’s problem of induction. In fact, this problem arose naturally from another issue that Hume (1739) was more vocal about in his writings: cause and effect. This was the key point in Hume’s philosophy and remains one of his lasting legacies today.

 

Hume’s problems with cause and effect are best introduced by an example he used himself (Hume 1739). How do we know the sun will rise tomorrow? Because we have seen it rise on hundreds of mornings, and have known it to do so ever since humans have recorded their observations; because day always follows night. But these are not logical thoughts: they are the voice of habit and custom. There can always be an exception to the rule; just because the sun has risen for as far back as we can remember, it may not do so tomorrow. Just as day has always followed night, it may not do so tomorrow. Consider this: from all our observations of day following night, can we say that because night ends, the day must follow? This is cause and effect; it is not logical (and so not scientific), and we can see that from this idea springs the problem of induction. We will probably not lose a wink of sleep tonight worrying whether the sun will rise tomorrow; we have strong expectations that it will! But the kernel of this idea is very important to how we may view science.

 

Hume (1739) found that we never actually observe any cause and effect. We can observe event A, and we can observe event B, but we never observe the ‘causal connection’ or ‘link’ between them, event C. In other words, we see that A happens and then B happens just after it; we may see this happen many times, but it does not mean that B happens because A happens. Our habit of making connections between the things we see is driven by our desire to understand and find meaning in our world; David Hume tells us that it says more about our psychology than anything else.

 

In science, we look for relationships between things. One thing may increase as another increases. For example, human beings burn coal, and this may increase the amount of carbon dioxide in the atmosphere. There may also be a rise in average temperature around the world. But although the patterns of rise in both carbon dioxide and temperature may be very similar, we can never say that one causes the other. The urge to do so is driven by our desire for explanation, to look for pattern and generalise, to draw conclusions using induction, which is not logical. So what can we do? In modern science we can look for evidence against relationships between things, trying to find exceptions to the rule. If we find no evidence against the relationship, then this provides further corroborative evidence supporting our theory; our knowledge has advanced, yes, but it can never be shown that humans have caused global warming.
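
The gap between observed association and causal connection is easy to demonstrate numerically. In the following sketch (my own illustration, using made-up numbers, not drawn from any cited source), two series are generated entirely independently, yet because each merely trends upward they correlate almost perfectly. The correlation is an observed pattern; the causal ‘link’ itself is never observed.

    # Two independently generated upward trends correlate strongly,
    # yet neither causes the other: correlation is not causation.
    import random
    from statistics import mean

    def pearson(xs, ys):
        """Pearson correlation coefficient of two equal-length sequences."""
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(1)
    steps = range(100)
    series_a = [0.50 * t + random.gauss(0, 3) for t in steps]  # one upward trend
    series_b = [0.20 * t + random.gauss(0, 2) for t in steps]  # an unrelated upward trend

    print(f"r = {pearson(series_a, series_b):.2f}")  # close to 1, yet no causal link shown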

 

Of course, even acknowledging that we may lack scientific knowledge about issues such as global warming is important, and in this sense the precautionary principle succeeds. But we need to go further than saying that we may not, right now, understand certain relationships between increases in carbon dioxide and global temperature. We need to realise that, ultimately, we can never link these two things through observation in any logical way. To this, some might say: who cares? For all practical purposes, humans cause global warming; smoking causes cancer. But it matters! The capabilities of science are overstated in the IUCN document, and if neither the public nor scientists recognise this, then science is weakened through confusion and ignorance. And this means any application of science (such as sustainable restoration of our planet’s ecology) must also be weakened. We can never be certain, either about relationships between things or about a final answer to our questions. There will never be a time or place where ‘cause and effect can be established’, because this requires a demonstration of the link between two conjoined events, and such a link can never be observed. As it stands, then, the IUCN policy promises something undeliverable. It proposes something unmanageable. Until we rethink what we can expect of science, and the implications this might have, we cannot move forward most wisely, either in science itself or in the areas where it is applied.

I am not saying that there has been no great value added by the work of the many researchers in this area within law, science and economics. Nor am I saying that we shouldn’t keep pursuing a clearer understanding of the nature of science and applying it practically in risk analysis and management. On the contrary, these problems are far too important for us not to apply science appropriately in trying to find sustainable solutions. But first and foremost, there is a basic misunderstanding about science in the wording of the IUCN document that has apparently not been recognised; it is more than unclear, it misleads.

 

Now, a good thing about the precautionary principle is that it makes us more cautious about undertaking activities which may harm ourselves or the environment. We have the power to enforce the law and protect the environment, although we may struggle to be consistent. I am certainly not saying that by changing the words of such documents we will solve the sustainability crisis; that would be far too simplistic. But I think the way the recent IUCN statement has been worded shows that we are generally not thinking carefully enough about what science can bring to the negotiation table.

 

This may seem a subtle point, but it is fundamental, with profound implications. First, policy implementing the Precautionary Principle (such as the IUCN document) is used as a template for all we can achieve in sustainable management; if our reference documents are misleading, then errors will be compounded and our efforts stymied. Second, by overstating what science can achieve and ignoring the underlying philosophy, we become complacent. This is a natural human frailty; even though we may superficially acknowledge science as uncertain, we gradually erode the keen edge of science because we subconsciously grant our current ideas more permanence than they deserve.

 

Simply because they are ‘our’ ideas, we stubbornly refuse to question the validity of a lifetime of research. A clearer understanding of the philosophy underlying all that we do in our work will constantly guard against this. Through deeper questioning, we put a keener edge on our attempts to understand the world. Third, the example I have used here is the tip of the iceberg; there is a range of other areas in science, all of them underpinned by philosophy. The work of philosophers stretching across the last 2,500 years is distilled to a purified essence within modern science; it implicitly underlies how we do what we do today. Through greater effort in understanding this body of work, we can be better scientists. But by shying away from these philosophical issues, we ultimately limit our scientific achievements. This has a flow-on effect in questions of sustainable management, where science must play a vital role and be clearly defined.

 

Although since the time of Newton it has been ever more tempting to think so, science is not flawless, not least because human beings, with their limited perceptions and frailties, are its masters. Instead, it has boundaries, a major one being the impossibility of finding any final answer to the questions we ask. But when we fully accept this, rather than it inspiring any sense of futility, despair or frustration, there is, instead, hope, and merit in recognising our ignorance. By throwing our successes and failures into a bolder, brighter searchlight, we switch on a philosophy of open-minded humility, giving and taking criticism, and we recognise at last, deep down, that our knowledge is never final.

 

Many scientists are forgetting, in the rush to specialise, pigeonhole and publish, to stop and ask these broader questions. We shy away from this deeper philosophical thought because it appears messy in our ordered world: pointless, confusing, endless. But it is only through such thinking that scientists can explore all the possibilities and understand the limits to human understanding. We can then better nest our ideas within the web of sustainability, where science and society meet. In the process, we may realise our ideas are not so clear-cut or well defined as we think; this is valuable! A messy middle road will wind forth, but we should not, out of a very human need for security or a fear of the unknown, separate ourselves from it in ignorance, all the while reaching for the supporting crutch of truth. For, in a wonderful and liberating sense, so much remains unknown; and our final answer is unknowable. Only once we embrace this ‘truth’ will we see our limitations, and then, more clearly, our possibilities. And we can take this philosophy forward to better face humanity’s most urgent question: how can we secure a sustainable future?

 

 

References

Arrow, K.J., Cropper, M.L., Eads, G.C., Hahn, R.W., Lave, L.B., Noll, R.G., Portney, P.R., Russell, M., Schmalensee, R., Smith, V.K. and R.N. Stavins. 1996. Is there a role for cost-benefit analysis in environmental, health, and safety regulation? Science, 272: 221-222.

 

Bacon, F. 1620. Novum organum.

 

Bradshaw, G.A. and J.G. Borchers. 2000. Uncertainty as information, narrowing the science-policy gap. Ecology and Society, 4(1): 7-14.

 

Downes, B.J., Barmuta, L.A., Fairweather, P.G., Faith, D.P., Keogh, M.J., Lake, P.S., Mapstone, B.D. and G.P. Quinn. 2002. Monitoring Ecological Impacts: Concepts and Practice in Flowing Waters. New York: Cambridge University Press.

 

Duhem, P. 1962. The aim and structure of physical theory. New York: Atheneum.

 

Fairbrother, A. and R.S. Bennett. 1999. Ecological Risk Assessment and the Precautionary Principle. Human and Ecological Risk Assessment, 5(5): 943-949.

 

Gollier, C., Jullien, B. and N. Treich. 2000. Scientific Progress and Irreversibility: An Economic Interpretation of the ‘Precautionary Principle’. Journal of Public Economics, 75(2): 229-253.

 

Hume, D. 1739. A treatise of human nature.

 

IUCN. 2007. Guidelines for applying the precautionary principle to biodiversity conservation and natural resource management.

 

Kaiser, M. 1997. The Precautionary Principle and its implications for science. Foundations of Science, 2: 201–205.

 

Kuhn, T.S. 1962. The structure of scientific revolutions. Chicago: University of Chicago Press.

 

Lakatos, I. 1970. Falsification and the methodology of scientific research programmes. In Criticism and the Growth of Knowledge, ed. I. Lakatos and A. Musgrave, 91-195. Cambridge: Cambridge University Press.

 

Lemons, J., Shrader, F. and C. Cranor.  1997. The Precautionary Principle: scientific uncertainty and type I and type II errors. Foundations of Science, 2: 207–236.

 

Magee, B. 1985. Philosophy and the real world: an introduction to Karl Popper. Illinois: Open Court Publishing.

Magee, B. 2001. The Story of Philosophy. London: Dorling Kindersley.

Martin, P.H. 1997. If you don’t know how to fix it, please stop breaking it! The Precautionary Principle and climate change. Foundations of Science, 2: 263-292.

 

Popper, K.R. 1959. The logic of scientific discovery. London: Hutchinson and Company.

 

Popper, K.R. 1962. The open society and its enemies. London: Routledge.

 

Quine, W.V.O. 1961. Two dogmas of empiricism. In From a logical point of view, 20-46. New York: Harper and Row.

 

Saladin, C. 2000. Precautionary principle in international law. International Journal of Occupational and Environmental Health, 6: 270-280.

Schilpp, P.A., ed. 1974. The philosophy of Karl Popper. 2 vols. La Salle: Open Court.

 

Stewart, R.B. 2002. Environmental regulatory decision making under uncertainty. Research in Law and Economics, 20: 71-135.

 

UNCED. 1993. The Key Principles. Rio de Janeiro, Brazil.

 

Wingspread. 1998. Wingspread Statement on the Precautionary Principle. Wingspread Conference Centre, Wisconsin.

 

This article first appeared in the Journal of Philosophy, Science and Law.