Concepts Instead of Computations: Enhancing Statistical Literacy

Patrick Murphy

School of Mathematical Sciences,

University College Dublin, Belfield, Dublin 4, Ireland


This paper describes a course that was developed to teach statistics to students majoring in Psychology and Politics. There were several interesting aspects to this course. Firstly, each lecture contained between 550 and 800 students. Secondly, those students were almost uniformly negatively disposed to Statistics before the course began. Thirdly, we were required to provide an introduction to Statistics in just 12 lectures, each of 50 minutes' duration. These constraints forced us to think deeply about what we wanted to provide in an Introductory Statistics course. Making use of simulations and the internet, we chose to emphasise concepts and critical thinking, and supported these with examples that had direct relevance to our students. Restricted to 12 lectures, we learned to make optimum use of each one. Can a short course like this act as a useful precursor to the standard Introductory Statistics course?



University College Dublin (UCD) is the largest university in Ireland with approximately 20,000 students enrolled. Statistics was established as a discipline in its own right in 1986 with the formation of a Department of Statistics and the appointment of a Chair in Statistics. In 2005, University College Dublin undertook a major restructuring of its departments and its courses. An outcome of this restructuring was the adoption of a new modular, credit based system of education.



In the late 1990s the Department of Psychology in UCD decided that they were no longer willing to devote 24 lectures of their first-year Psychology programme to statistics. They concluded that twelve 50-minute lectures of statistics were sufficient for their students. This decision forced us to confront some interesting questions. Is it possible to teach the fundamentals of statistics in 12 lectures? One solution would be to attempt to teach only some basic descriptive statistics. Although this was the easy option, we decided to seek an alternative solution.



It seems to be a universal phenomenon that many first-year university students are attracted to studying psychology. Students in UCD are no exception: in 1999, when we first delivered this course, there were 550 students in each lecture. Over the following years this number increased to 700, and in 2002 these 700 were joined by an additional 100 students from the Department of Politics. Although it might seem natural to divide such a large group into smaller groups for teaching purposes, this is not what happened; instead each lecture contained the entire 800 students.

A short initial survey was conducted at the beginning of each year to determine students' perceptions of the course. Over six years a total of 4,211 responses were collected to the question: "What is your opinion of taking a course in statistics?" 3,463 students (82%) indicated that they saw statistics as "boring". The challenge facing us, then, was how to engage, in a large lecture theatre, with students whose initial conceptions of the material we were about to deliver were almost universally negative.



“You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird. So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.”

This quotation from the Nobel Laureate in Physics, Richard P. Feynman, describes most eloquently our belief that statistics courses should not merely consist of a set of recipes for computing descriptive statistics and p-values. Despite the limited time available or the initial disaffection of our audience, we believed that it would not serve any purpose to abandon this most fundamental principle.



In 1999, when we first delivered this course, the traditional method of delivering a lecture in UCD was to use transparencies and an overhead projector. Students normally took notes during class and lectures were not very interactive. According to Glasser (1998), most people learn 10% of what they read and 20% of what they hear, but 70% of what they talk over with others and 80% of what they use in real life.

Clearly then, the traditional, passive, method of lecturing is actually one of the worst techniques that one can use to teach students. We decided to do as much as possible to depart from this tradition. Notes were prepared in advance, delivered by multimedia computer presentation in class and made available for students to download from a course website. While none of this seems radical today, in 1999 this approach to teaching was practically unique in UCD. The technology, while novel, was not an end in its own right, instead its purpose was to free up time, in the limited number of lectures available, which would allow us to introduce some degree of interactivity into our classes.



It is not possible to describe in detail the complete content of this course; instead we shall focus on some of the components which we have found connect most successfully with the students. The course was designed to entertain, to challenge, to stimulate and to educate.

It has already been mentioned that we had between 550 and 800 students in each of our classes over the last six years. We have also noted that these students generally did not approach this course with favourable opinions. Educators may often be tempted to ignore the apathy with which their students greet statistics. Perhaps this is because we enjoy our subject and do not empathise with the students' perceptions. "If the students only knew what we know", is what we think. And so we can be tempted to rush headlong into introducing our students to our favourite data sets. Surely once the students see these wonderful data sets and the amazing techniques with which we can analyse them, they will become excited by our subject!

From our experience with this course we believe that we should not try to ignore these difficulties, instead we should confront them directly.


  • Dispelling Myths

This course begins by highlighting all of the negative associations that are attached to statistics. “Statistics is boring”, “Lies, damned lies and statistics”, our first few slides remind the students of all the negative references they have ever encountered. We then include a few humorous ones that some may not have encountered. Many studies have shown that humour is an effective tool in teaching and we find at this stage of the first lecture, that the students who entered the lecture theatre with trepidation and perhaps even hostility are now enjoying themselves. Their minds are open and we can proceed to challenge further their preconceptions.

We examine some interesting cases where statistics have been misused. We begin with a case taken from that week’s news, something with which many of the students will be familiar. We then consider some well-documented examples. In 1989 Meryl Streep was interviewed on US television about her perceptions of the significant danger that was posed by the consumption of apple juice. This example (c.f. Herrmann (1997)) has proved not only to be extremely useful but also extremely popular with students.

Another very useful example for teaching relates to the 1992 British General Election. For those unfamiliar with politics in Britain, the opinion polls taken the day before that election predicted a landslide victory for the Labour Party. When the results came through, Labour had in fact lost to the Conservatives for the fourth successive election. The media controversy in Britain following this outcome was enormous (c.f. Smith (1996)). Statisticians and opinion pollsters were suddenly in demand; they were being interviewed as the top item on the evening news. This example, to which we return several times in the course, illustrates many issues of importance in statistics: sampling, estimation, margin of error etc.

Following this initial stage of the course almost all of the large group of students leave entertained and intrigued. Most importantly, they are now looking forward to their statistics classes. We now assign the students a small project to complete at home. They choose from a group of statements, for example “Ireland has the best education system in the world”. They are required to gather information from the media, the university library or the internet so that they can defend or rebut the chosen statement in a tutorial.

Often the next lecture would involve an introduction to descriptive statistics. Surely there can be no more guaranteed method of boring an audience than to begin an introductory statistics course with descriptive statistics, yet consider the content of the first hundred pages of most textbooks aimed at “Introductory Statistics”.

At the beginning of our course we present the students with a goal: “we wish to estimate population characteristics”. Our first aim is to illustrate that measuring all of the units in a population is not always possible. A useful way to make this point is to describe in detail the effort and expense that National Statistical Agencies undertake in conducting a census of population. Even in a country the size of Ireland, this expense is significant. Now we are ready to introduce the concept of sampling, firstly as a convenient way to gather data with much less expense than making measurements on the entire population.

The course proper begins with an examination of how data may be collected. We consider issues involved in conducting sample surveys, designed experiments and observational studies. Only after we have extensively examined many of the issues involved in data collection do we introduce the students to descriptive statistics. As an illustration of this aspect of our course we describe some of the content of our lectures on sample surveys.


  • Data Collection: Surveys

Although some of our students are studying political science, we find that the majority are quite unaware of, and uninterested in, that great source of material on surveys and samples, namely elections. For that reason we find it more helpful to discuss opinion polls that the students know from television advertising. We try to choose advertisements that are running in the weeks surrounding the lecture and ask the students to criticise the validity of the poll results reported in them. "Do 9 out of 10 cats really prefer one brand of cat food above all others?" "Is it valid to say that the french fries of one fast-food restaurant chain really are the nation's favourite, on the basis that more of those fries are sold?"

Having discussed several examples we now introduce the students to concepts such as intentional and unintentional bias, validity, reliability etc. By asking students to complete one of two differently phrased questionnaires, randomly assigned to each person in the class, we discuss how the ordering and phrasing of questions are factors in designing a survey. The merits of open questions versus closed questions are also considered.

The next lecture examines how one should select a sample of individuals to participate in a survey. Every day interviewers conduct surveys on the main shopping streets of Dublin, and practically all of our students have been asked to participate in such a survey at some stage. This concrete example allows us to consider issues like representativeness and volunteer response in surveys.

We now conduct two surveys in class, the first a show of hands and the second an anonymous survey of the entire class. In both instances we attempt to measure the amount of independent study performed by our students and the prevalence of illegal drug use among them. Finally we ask our students to conduct their own surveys of their classmates and of students in the university as a whole. In analysing the results of these three surveys we can examine the important role played in surveys by setting, relationship between questioner and subject, anonymity etc.


  • Descriptive Statistics

Instead of trying to teach our students how to compute numerous descriptive statistics we try to illustrate their uses and relative merits through examples, deciding not to present any mathematical formulae to our students. So, in a sense, this course truly is a Statistics course without mathematics. In adopting this approach have we not gone a step too far? We believe not. We believe that in discussing concepts and using practical examples but avoiding tedious computations our students achieve a deeper understanding.

The majority of students in our classes are between 17 and 19 years of age. However, we have always had a small but significant group of mature students in their 50s. This is an extremely fortuitous situation which helps enormously in illustrating the relative merits of different measures of central tendency. When the students see, for instance, that the mean age of those in the class is 23 but the class of 800 contains not a single 23-year-old, they begin to ask questions.

This curiosity is piqued again when we ask the students to consider what the average income of university employees might be. This example has the advantage that students are naturally curious about how much we, their lecturers, earn. First we ask them to think about the amount of money earned by different groups of employees, from the cleaning staff to the university president. We then ask them to estimate how many employees there are in each income bracket. Finally they estimate the mean, the median and the mode of these incomes, and in comparing these they discover how poor a measure the mean can be for something like incomes. We conclude this discussion by highlighting that in some countries, such as the US, government statistics usually report median incomes, while in many other countries, including Ireland, mean incomes are still reported.
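The effect the students discover in this exercise can be sketched numerically. The salary figures and bracket sizes below are entirely hypothetical (chosen for illustration, not UCD data); the point is simply that a handful of very large salaries drags the mean well away from the median:

```python
# Hypothetical income brackets: many low-paid staff, a small
# well-paid tail (illustrative figures only, not real data).
salaries = ([22_000] * 500 + [40_000] * 300 + [70_000] * 100
            + [150_000] * 50 + [300_000])

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    mid = n // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

print(f"mean:   {mean(salaries):,.0f}")    # → mean:   39,748
print(f"median: {median(salaries):,.0f}")  # → median: 22,000
# The well-paid tail pulls the mean far above what the typical
# employee earns, which is why median income is often preferred.
```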


  • Beyond Descriptive Statistics

It might be considered folly to attempt to go beyond data collection and descriptive statistics in a course consisting of only 12 lectures. However, it is said that "fortune favours the brave", and so we included material on Chebyshev's Rule and the Empirical Rule. Could we go further? Confidence intervals and p-values, we decided, were a step too far. But by explaining the margin of error, which our students could compute easily, we were able to convey the essential message of a confidence interval.
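The margin of error the students compute is presumably the familiar quick approximation for a sample proportion at roughly the 95% level, 1/√n, which needs nothing more than a square root. A minimal sketch (the function name is ours):

```python
import math

def margin_of_error(n):
    """Quick 95% margin-of-error approximation for a sample
    proportion: roughly 1 / sqrt(sample size)."""
    return 1 / math.sqrt(n)

# A poll of 1,000 voters carries a margin of error of about 3 points;
# quadrupling the sample size only halves it.
print(f"{margin_of_error(1000):.3f}")  # → 0.032
print(f"{margin_of_error(4000):.3f}")  # → 0.016
```

This is why opinion polls of around 1,000 respondents, as in the 1992 election example, are routinely reported with a margin of error of about 3 percentage points.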

The final task we set ourselves was to explain the principles of hypothesis testing. We would avoid the details and concentrate instead on rejection regions, which we find more intuitive than p-values. We present the following scenario to the class: "An investigator has collected a representative sample of individuals from a population and computed the mean age of those in the sample." We then ask the class: "If he reports that the mean is 50 years of age, would you believe that the sample was chosen from among students in this university?" Asking the same question repeatedly with decreasing means, the students begin to discover that they each have a "cut-off point" where their answer changes from no to yes.

From here we discuss the commonly used analogy of the courtroom, with a null hypothesis representing innocence and the requirement to prove guilt beyond reasonable doubt. With this analogy, Type I and Type II errors and significance levels can be easily understood. Asking the students to envisage a post-September 2001 judicial system, where a presumption of guilt might replace the presumption of innocence, provides further useful insight into the errors in hypothesis testing. Returning to our example of the ages, we can now illustrate how altering significance levels affects the location of the students' "cut-off points". Finally, by replacing the phrase "cut-off point" with "rejection region", our students gain an understanding of the basic logic of hypothesis testing.



At the end of each year that this course was delivered, assessments were conducted where the students were asked for their opinions of the course. The responses were uniformly positive and completely overturned the negative opinions held by 82% of the students at the commencement of the course.




Year                                                     1999-2000  2000-2001  2001-2002  2002-2003  2003-2004
# Respondents                                                  353        490        473        270        562

Course is well organised.                                     4.55       4.50       4.92        N/A       4.78
The meanings of concepts were explained.                      4.45       4.40       4.55        N/A       4.81
Lectures stimulated me to think about the subject.            4.50       4.45       4.15       4.12       4.20
Lecturer made me enthusiastic about attending lectures.       4.75       4.80       4.63       4.70       4.54

Scale: 1 = Strongly Disagree, 5 = Strongly Agree


The following are a selection of some of the comments made by students who have taken this course:

  • “I found this course very good and it is a pity you can’t study for a statistics degree through doing this course.”
  • “Lecturer has changed my mind about maths – I no longer hate it. Thanks! Practical uses of maths made the course so much more interesting.”
  • “I was surprised by this course as I had expected it to be boring and I found it interesting and enjoyed it!”
  • “I never thought I’d find statistics so interesting. The lecturer made it really easy to understand and used simple examples and language in lectures, which was very effective. Overall, course and lecturer were great.”
  • “Very interesting (which is a hard thing to do in statistics). Tried to make it as easy and understandable as possible. Thank you!!”
  • “Lectures were very enjoyable. Good examples of statistical uses were used. How do I become a statistician?”

Recalling that the students initially possessed very negative opinions of statistics, we believe that the approach taken by this course can successfully instil a positive appreciation for statistics and an understanding of some of the most fundamental principles of our discipline. In 2005 we began to extend this approach.



Following the announcement, in early 2005, that UCD would adopt a modular, credit based system for undergraduate degrees we began planning to replace this course, which was aimed at Psychology and Politics students, with one that would be marketed to a wider Science, Business and Social Science audience. The 12-lecture restriction was lifted. Should we now consider returning to a more standard Introductory Statistics course? Our decision was to continue the approach outlined above but to supplement this course with some other material.

Using the prosecutor’s fallacy we examine Bayes’ Theorem. We find that even our current 17-year-old students have some awareness of the OJ Simpson trial (c.f. Gigerenzer (2002)) and this serves as a useful example. The Sally Clark case in Britain (c.f. Hill (2005)) allows us to examine the concept of independence.
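The prosecutor's fallacy, confusing P(match | innocent) with P(innocent | match), can be made concrete with a small Bayes' theorem calculation. The numbers below are purely illustrative, not taken from either case:

```python
# Hypothetical figures: an innocent person matches the evidence with
# probability 1 in 10,000, in a pool of 1,000,000 potential suspects.
p_match_given_innocent = 1 / 10_000
population = 1_000_000

# Expected number of innocent people in the pool who would also match:
expected_innocent_matches = (population - 1) * p_match_given_innocent

# With a uniform prior over the pool, Bayes' theorem gives the chance
# that a matching suspect is actually the guilty party:
p_guilty_given_match = 1 / (1 + expected_innocent_matches)

print(f"P(guilty | match) = {p_guilty_given_match:.3f}")  # → 0.010
# Roughly 100 innocent people would match too, so a match alone makes
# guilt about 1%, not the 99.99% a naive reading of 1-in-10,000 suggests.
```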

The major addition to this course, post modularisation, has been the attempt to teach students about the central limit theorem and p-values using simulations. We begin by simulating a die-rolling experiment in class. First we ask each student to write the numbers 1 to 6 on pieces of paper, fold them up, and turn to their neighbours, who each choose one number. A show of hands allows us to draw an approximately uniform histogram. Then we use the R statistical computing package to examine the distribution of the average number shown when N dice are rolled. As N increases, the students witness the central limit theorem in operation.
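The classroom simulation is carried out in R; for readers who wish to reproduce it, the following Python sketch mirrors the idea (function name and simulation sizes are ours):

```python
import random

random.seed(1)  # fixed seed so the demonstration is repeatable

def mean_of_dice(n_dice):
    """Average face value of n_dice fair six-sided dice."""
    return sum(random.randint(1, 6) for _ in range(n_dice)) / n_dice

# For each N, repeat the experiment many times. As N grows, the
# distribution of the average concentrates around 3.5 and takes on
# the familiar bell shape: the central limit theorem in action.
for n_dice in (1, 2, 10, 30):
    averages = [mean_of_dice(n_dice) for _ in range(10_000)]
    spread = max(averages) - min(averages)
    print(f"N = {n_dice:>2}: averages span a range of {spread:.2f}")
```

In class, a histogram of `averages` is drawn for each N rather than just its range, so the students can see the shape change.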

From here we provide the students with 40 numbers and ask them to consider whether they believe a hypothesis that this data has come from a population with mean equal to 10. Much discussion takes place about the intuitions of the students, during which they generally suggest computing the mean of the 40 values and comparing it with the hypothesised value of 10.

We then simulate 20,000 samples of size 40 from a normal distribution with mean 10. The means of these samples are computed and we plot a histogram of the distribution of these means. The students can then compare where the mean of the original 40 values lies among these 20,000 values. And by considering what proportion of these values are further from the mean of the 20,000 values than their original value the students begin to understand p-values.
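This simulation can be sketched as follows. The course itself uses R and a particular set of 40 numbers handed to the class; the observed mean and the population standard deviation below are illustrative stand-ins:

```python
import random
import statistics

random.seed(0)  # fixed seed for repeatability

# Stand-ins for the classroom data (illustrative values only):
observed_mean = 10.9   # mean of the 40 numbers given to the class
mu0, sigma, n = 10, 2, 40

# Simulate 20,000 samples of size 40 from a normal population with
# mean mu0, recording each sample mean.
sample_means = [
    statistics.fmean(random.gauss(mu0, sigma) for _ in range(n))
    for _ in range(20_000)
]

# Two-sided p-value: the proportion of simulated means at least as
# far from mu0 as the observed mean.
extreme = sum(abs(m - mu0) >= abs(observed_mean - mu0) for m in sample_means)
print(f"simulated p-value = {extreme / len(sample_means):.4f}")
```

Plotting a histogram of `sample_means` and marking where `observed_mean` falls, as described above, is what lets the students see the p-value rather than merely compute it.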


There has been much reform of the Introductory Statistics course over the last 25 years. At the first ICOTS conference Ehrenberg (1982) and Joiner (1982) argued for making statistics relevant to real life and for using computers in our teaching. The old fashioned Introductory Statistics course may soon be a thing of the past and texts like Moore (2001), Chance and Rossman (2006), Utts (2005) and others which promote critical thinking, the analysis of real data and make use of modern technology are becoming more widespread.

While great strides have been made we believe that it is possible to go further still. Our experiences indicate that there are ways to make the subject of Statistics interesting to even the most apathetic student. Instead of ignoring the negative perceptions held by students towards statistics we should directly confront these perceptions through the use of relevant local examples and humour. Traditionally we feel the need to teach our students how to compute numerous descriptive statistics. Why? Is it not more interesting and more valuable to discuss fundamental concepts related to appropriate sampling and data collection? We believe that a “taster” course, such as we have described, which does not attempt to instruct students on computation but concentrates on principles and critical thinking should act as a precursor to the new data-driven Introductory Statistics course.



Chance, B. and Rossman, A. (2006). Investigating Statistical Concepts, Applications and Methods (1st edition). Belmont: Brooks/Cole.

Cobb, G (1993). Reconsidering Statistics Education: A National Science Foundation Conference. Journal of Statistics Education, 1(1).

Ehrenberg, A.S.C. (1982). We must preach what is practised. In D.R. Grey, P. Holmes, P. Barnett and G.M. Constable (Eds.), Proceedings of the First International Conference on Teaching Statistics, (pp. 215-218). Sheffield, UK: Teaching Statistics Trust.

Gigerenzer, G. (2002). Reckoning with Risk (1st edition). London: Penguin.

Glasser, W. (1998). The Quality School Teacher (Revised 1st edition). New York: Harper Collins.

Herrmann, R., Warland, R. and Sterngold, A. (1997). Who Reacts to Food Safety Scares?: Examining the Alar Crisis. Agribusiness, 13(5), 511-520.

Hill, R. (2005). Reflections on cot death cases. Significance, 2(1), 13-15.

Joiner, B.L. (1982). The case for using computers in teaching statistics. In D.R. Grey, P. Holmes, P. Barnett and G.M. Constable (Eds.), Proceedings of the First International Conference on Teaching Statistics, (pp. 307-312). Sheffield, UK: Teaching Statistics Trust.

Moore, D. (2001). Statistics: Concepts and Controversies (5th edition). New York: W.H. Freeman.

Smith, T.M.F. (1996). Public Opinion Polls: The UK General Election, 1992. Journal of the Royal Statistical Society, Series A, 159(3), 535-545.

Utts, J. (2005). Seeing Through Statistics (3rd edition). Belmont: Brooks/Cole.



This article was presented as a paper at the ICOTS 7 conference sponsored by the International Association for Statistics Education.