
Making a difference, not faking a difference – learning and using what’s good and fair in biostatistics

Goddard, David

Monash University Department of Epidemiology & Preventive Medicine

Alfred Hospital, Commercial Road

Melbourne Vic 3004 Australia

david.goddard@med.monash.edu.au

 

1. Background

 

This talk is addressed to statisticians who aim to lead others to a better understanding of statistics – whether those others are trainee statisticians or simply regular users of statistical methods. I come to you as one whose work, among other things, is to interest and to guide students in various health sciences toward an understanding of statistical inference.

 

My co-panellist, Nora Donaldson, will speak on considerations for statisticians who serve on ethics committees. I shall leave to her ethical matters such as whether the groups of persons selected for study are appropriate and issues of study design. For example, I shall not discuss research on vulnerable groups, e.g. children or the cognitively impaired, nor talk of studies where those who are likely to benefit from the outcome are not in the same social group as the participants in the study, nor explain why having too many or too few people enrolled in a drug trial is unethical.

 

Many statisticians build models, using data to make predictions. Here, I invite you to be models when working in the midst of those you seek to influence. I say this because it’s difficult to teach ethics without sounding like Goody Two-Shoes. It’s a topic where what you do has greater influence than what you say. The attitudes that lead to ethical behaviour come from within; the gauge of ethical behaviour is one’s own. For me, it’s what I do when the people I’m doing it to aren’t looking. Later I’ll return to ‘the people who aren’t looking’ when statisticians are analysing.

 

2. What ethics?

 

My background is in occupational health. One central feature of Australian legislation on workplace health and safety is the employer’s general duty of care. In my home state, it is expressed thus: “An employer shall provide and maintain so far as is practicable for employees a working environment that is safe and without risks to health”. In similar spirit, an ethical duty for statisticians (and others involved in research) could be expressed: “A statistician shall provide and maintain so far as is practicable for clients an ethos that is fair and without harm to truth.”

 

The ISI has a Declaration on Professional Ethics (ISI, 1985) which is prominent on the Institute’s website. The Declaration speaks of four sets of obligations – to society, to those who pay you, to colleagues, and to subjects of study. For a statistician, ethical behaviour affords people freedom to choose, advances knowledge, minimises deception, and does not harm those unable to stand up for themselves.

 

Under the heading ‘1.3 Pursuing Objectivity’, the ISI Declaration says of statisticians: “They should … not engage or collude in selecting methods designed to produce misleading results, or in misrepresenting statistical findings by commission or omission.” It seems to me that, in general, misrepresentations by omission are likely to be less conspicuous than those by commission, and therefore more insidious. I’ll take them first.

 

3. Omissions

 

Statisticians use data to generate probabilities. These probabilities about, say, associations between a group of exposures and a group of diseases, reflect uncertainties. A principal challenge for statisticians is to create a willingness among research investigators and the community to receive these probabilities, these uncertainties, and make the effort to grasp their meaning.

 

This is a task indeed! Watson (2001) says that probability and statistics have not historically held a significant enough place in the high school mathematics curriculum but that this is changing. My own experience from a decade of teaching basic biostatistics is that the majority of my students feel uncertain about some or all of ratio, proportion, odds and probability. Uncertainty brings anxiety, which hampers learning, and the vocabulary of biostatistics distracts students toward too great a focus on how to do it rather than why. I have no doubt that many who enter research in the health sciences carry misgivings about statistics and confusion about probability. One PhD student put it to me thus: “The stats stuff freaks me out! What I need most from it is to get my data into a publishable form.” To this person (and many others), the statistical analysis is perceived as a hurdle to be got over for short-term gain, rather than welcomed as a gift of deeper understanding to which there is passionate attachment.

 

Statisticians as a group have worked hard to build understanding. Examples of this industriousness are the many books on biostatistics for the beginner (or almost beginner) in research. And, for the community at large, books such as Haigh (2003) and Senn (2003) use stories and a plethora of interesting examples to build understanding of probability. These broadcast communications do assist what occurs in the classroom. But there, among students, statisticians require more patience and more will to probe for understanding than do teachers in most other disciplines. Competence is not enough; to engage students, a competent statistician must recall from years past what it felt like to be incompetent.

 

Pocock and others (2004) examined 73 published articles on observational epidemiology. (Epidemiology is the study of who gets what health outcomes and why. Observational studies, as distinct from intervention studies, afford the research investigator no control over the exposure that influences the health outcome.) The authors found issues of concern on, among other things, study size, multiple comparisons, and adjustment for confounders – all activities within the statistical ambit. Perhaps this indicates the taking of sneaky shortcuts; more probably, I expect, it shows a failure of the investigators to fully understand the place of number-based information, particularly probability, in building knowledge.

 

Let’s look at one of Pocock’s expressed concerns in observational epidemiology – multiple comparisons. These are inevitable; cohort studies and case-control studies demand so much effort and expense that no study nowadays has just one hypothesis concerning a single exposure and a single health effect. Among, say, 200 independent tests, each conducted at a significance level of 0.01, the probability that the null hypothesis is falsely rejected in three or more tests is as high as 0.32. An author who, in such circumstances, crows about three individually low P-values is hardly persuasive.
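
The arithmetic behind that figure is easy to check. A minimal sketch in Python, assuming all 200 tests are independent and every null hypothesis is true (so that the number of false rejections is binomial):

    from math import comb

    n, alpha = 200, 0.01

    # Number of false rejections ~ Binomial(n, alpha) when every null is true.
    # P(at least 3) = 1 - P(0) - P(1) - P(2)
    p_fewer_than_three = sum(
        comb(n, k) * alpha**k * (1 - alpha)**(n - k) for k in range(3)
    )
    print(f"P(3 or more false rejections) = {1 - p_fewer_than_three:.2f}")  # about 0.32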

 

Yet, among dozens of research investigators in observational epidemiology that I have met, a very common view is that positive results are the only path to publication in a high-impact journal. An investigator may not even write up a negative study, preferring instead to direct the required 10-20 working days toward a more fruitful end. So results that are as dubiously positive as three associations among 200 tests of significance may be pounced upon as potentially publishable – even if there is no prior evidence of such associations (between these exposures and these health effects) nor any plausible explanation of why there might be.

 

A statistician involved in the analysis and write-up of such a study will, of course, realise how the fact of multiple testing can devalue the significance of these findings. However, if the statistician makes no effort to explain this carefully to the investigator and (if the findings are to be published) fails to insist that the readership be warned of it, then he or she betrays the truth. So, to come back to my personal standard of ethical behaviour – who ‘isn’t looking’ when the statistician omits to urge an investigator to refrain from crying ‘Wolf’? First, there are the members of the community who may be frightened by these findings, or who feel, for the sake of their health, that they ought to avoid things they would otherwise enjoy doing. Second, the investigator will ‘be looking’, but with vision clouded by lack of prowess in probability; the investigator may not realise, without being told, how the circumstances of these findings deflate their significance.

 

One way to reduce the risk that misleading results will be generated in observational epidemiology is to carefully specify the process of analysis as part of the study protocol.
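
What such pre-specification might look like in practice: a protocol could name the comparisons in advance and commit to a multiplicity adjustment. The sketch below applies a Holm-Bonferroni adjustment to a set of invented P-values; both the values and the choice of adjustment are illustrative assumptions, not a prescription:

    def holm_adjust(p_values):
        """Return Holm-Bonferroni adjusted p-values in the original order."""
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        adjusted = [0.0] * m
        running_max = 0.0
        for rank, i in enumerate(order):
            # Step-down: multiply the k-th smallest p-value by (m - k + 1),
            # cap at 1, and enforce monotonicity.
            candidate = min((m - rank) * p_values[i], 1.0)
            running_max = max(running_max, candidate)
            adjusted[i] = running_max
        return adjusted

    raw = [0.004, 0.020, 0.030, 0.250]   # invented raw p-values
    print(holm_adjust(raw))              # approximately [0.016, 0.06, 0.06, 0.25]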

 

The pressure of time can also lead a statistician toward omissions. It takes many hours to ‘proof read’ a large data set looking for errors such as a date of death that precedes the date of that person’s entry to the study. A statistician who is neither assured of the quality of the data nor paid for the time to check it risks preparing a misleading analysis on the basis of garbage in, garbage out. One ethical option is to refuse to analyse such data.
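
A sketch of the kind of consistency check meant here, assuming a data file with hypothetical columns ‘entry_date’ and ‘death_date’:

    import pandas as pd

    # Hypothetical file and column names, for illustration only.
    df = pd.read_csv("cohort.csv", parse_dates=["entry_date", "death_date"])

    # Flag records where the recorded death precedes entry to the study.
    impossible = df[df["death_date"] < df["entry_date"]]
    print(f"{len(impossible)} record(s) with death before study entry")
    print(impossible)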

 

4. Committing misrepresentations

 

Clients of statisticians realise that the processes of statistical inference can yield different answers depending on which method is used. A statistician may then face pressure to choose a method, even an inferior or not-quite-correct one, in order to obtain an answer that suits a purpose – let’s say to find a statistically significant difference between one group and another in an epidemiological study. Examples of situations where such pressure arises are:

 

  • An invitation from a legal firm to evaluate the statistical evidence for a case before the courts, accompanied by direct offers of generous rewards if the evaluation is favourable to the firm’s client.
  • Threat of court action by the sponsor of a study should the results not accord with that sponsor’s expectations. In conference with this client, it may be difficult to convey the sensitivity of results to different modelling assumptions or methods without having the client attempt to lure the statistician into a final analysis that best suits the client’s purposes.
  • If results are adjusted for age, sex, and severity of disease, who chooses which adjusted results are presented for publication? An investigator may want to publish only carefully selected parts without revealing what has been winnowed behind the scenes.

 

This can extend to intimidation or bullying. A senior investigator may bring a data set to, say, a young statistician and discuss the analysis. Some days later, the investigator returns and says something like: “Oh! Patients 8 and 23 weren’t properly part of that study; I’d like you to take them out.” A likely reason for such a request is to make the study outcome ‘look better’ – to ‘fake a difference’. The veiled threat is that if the young statistician doesn’t accede to the senior investigator’s request, his or her employment situation might become ‘a little difficult’. This is a stiff test of integrity.

 

A statistician may also find that a report submitted to a study sponsor is afterwards subtly changed. Providing the report as a portable document format (PDF) file makes such alteration more difficult.

 

5. Making a difference

 

I’ve talked of some ways to fake a difference. So, how do you make a difference? The first way is to train yourself deliberately to communicate well, and to take every opportunity to practise. For a statistician this means not only saying how you will undertake your part of the process of statistical inference, but what doing that means. Please realise that many seemingly well-qualified people have anxieties and gaps around concepts such as probability, particularly conditional probability. A person in this situation may find it very difficult to articulate what it is they would like you to tell them. Be patient, be enabling, and try to see ‘the question behind the question’. Secondly, maintain your integrity; determine to model yourself, not just your data.
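
As one small illustration of why conditional probability deserves that patience, here is a sketch of the probability of disease given a positive screening test, using invented values of prevalence, sensitivity and specificity:

    prevalence = 0.01     # P(disease) - invented for illustration
    sensitivity = 0.90    # P(test positive | disease)
    specificity = 0.95    # P(test negative | no disease)

    # P(test positive), by the law of total probability
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

    # Bayes' theorem: P(disease | test positive)
    ppv = sensitivity * prevalence / p_positive
    print(f"P(disease | positive test) = {ppv:.2f}")   # about 0.15, not 0.90

Many people informally equate P(disease | positive test) with the test’s sensitivity; the gap between 0.90 and 0.15 is exactly the kind of ‘question behind the question’ worth drawing out.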

 

6. Acknowledgements

 

My colleagues Assoc Prof Andrew Forbes, Assoc Prof Damien Jolley, Dr Danny Liew and Dr Jenny Smith provided examples and advice for this presentation.

 

REFERENCES

 

Haigh, J. (2003). Taking chances – winning with probability. Oxford University Press, Oxford.

 

ISI – International Statistical Institute (1985). Declaration on professional ethics. Available at http://www.cbs.nl/isi/ethics

 

Parliament of Victoria (1985). Occupational health and safety act: s. 21. Victorian Government Printer, Melbourne.

 

Pocock, S.J., Collier, T.J., Dandreo, K.J. et al. (2004). Issues in the reporting of epidemiological studies: a survey of recent practice. BMJ, 329, 883-887.

 

Senn, S. (2003). Dicing with death – chance, risk and health. Cambridge University Press, Cambridge.

 

Watson, J.M. (2001). Probability and statistics – an overview. In Teaching secondary school mathematics – theory into practice (eds Grimison, L. & Pegg, J.), Ch. 6. Thomson, Melbourne.

 

David Goddard is a University teacher and occupational physician with special interests in occupational hygiene and toxicology. These interests developed from his earlier work as a government medical inspector of workplaces. He has taught medical undergraduates and post-graduate students in public health and occupational health at Monash University since 1990, something he loves to do. In 2001, he was successfully nominated by his undergraduate students for the Vice-Chancellor’s Award for Distinguished Teaching.