While comparisons between percentages or rates appear frequently in journalism and advertising, and are an essential component of quantitative writing, many students fail to understand precisely what percentages mean, and lack fluency with the language used for comparisons. After reviewing evidence demonstrating this weakness, this experience-based perspective lays out a framework for teaching the language of comparisons in a structured way, and illustrates it with several authentic examples that exemplify mistaken or misleading uses of such numbers. The framework includes three common types of erroneous or misleading quantitative writing: the missing comparison, where a key number is omitted; the apples-to-pineapples comparison, where two subtly incomparable rates are presented; and the implied fallacy, where an invalid quantitative conclusion is left to the reader to infer.
For many years, early in my Quantitative Reasoning class, I have asked my students to consider the following statement:
One third of all fatal car accidents involve people who are drinking and driving. That means that many more traffic deaths – fully two thirds! – occur when the drivers aren’t drunk. Why, then, are we so upset about drunk driving?1
At first, I was shocked by the response I got. My students, first-year students who were identified as having weak quantitative literacy skills, would suggest that maybe the drivers who weren’t drunk were high, or were talking on their cell phone, or were otherwise distracted. They would question whether drunk drivers cause worse accidents than sober drivers.
Most semesters, not a single student asked either “What percentage of all drivers are drunk?” or “What percentage of drunk drivers get into accidents? How about sober drivers?” While these questions do not completely resolve the issue, realizing that you need to compare either the accident rates or the drunk-driving rates across two distinct populations, rather than looking at only one population, is a key first step to deriving any meaningful conclusions from these numbers. Only then is it reasonable to consider refinements and discuss confounding factors.
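The importance of the missing base rate can be made concrete with a small calculation. In the sketch below, the 5% share of drunk drivers on the road is an invented assumption, not a statistic; only the one-third figure comes from the example. Even so, the arithmetic shows how a small drunk-driving population implies a dramatically higher per-driver risk.

```python
# Invented numbers for illustration: suppose (hypothetically) 5% of
# drivers on the road are drunk, while drunk drivers figure in 1/3 of
# fatal accidents (the statistic from the example above).
drunk_share_of_drivers = 0.05
drunk_share_of_fatal = 1 / 3

# Fatal-accident involvement relative to each group's share of drivers:
drunk_rate = drunk_share_of_fatal / drunk_share_of_drivers
sober_rate = (1 - drunk_share_of_fatal) / (1 - drunk_share_of_drivers)

relative_risk = drunk_rate / sober_rate
print(f"Relative risk, drunk vs. sober drivers: {relative_risk:.1f}x")
# Under these assumed shares, drunk drivers are 9.5 times as likely
# to be involved in a fatal accident.
```

Varying the assumed 5% share shows students that the conclusion hinges entirely on a number the original statement never supplies.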
Percentages are extremely common in both objective news reporting and persuasive writing such as advertising, lobbying, and politics. Someone who cannot identify the essential information that is missing from this drunk-driving example can easily be led to believe in unjustified conclusions by skillful manipulation of accurate but misused statistics.
This paper describes the approach I use to explicitly teach students how to analyze arguments based on comparisons of numbers and percentages, and how to write such comparisons with the necessary precision. In the first section, I review some evidence showing that this is a weakness that needs attention. Next, I present the framework I use for teaching these skills; the framework is intended to provide students with a process to organize their thoughts and carefully evaluate the meaning of their statements. Finally, I discuss some examples that illustrate three common types of problems: the missing comparison, the apples-to-pineapples comparison, and the implied fallacy.
Evidence of the Problem
Poor Understanding of Percentages
Many students lack fluency with percentages. In particular, they lack the precise understanding required to accurately describe a comparison between two percentages. In a study by the W. M. Keck Statistical Literacy Project in 2002 (Schield 2006, 3), researchers surveyed college students, college teachers and data analysts and found that 40% could not accurately describe the meaning of a two-way table, interpreting two percentages of different wholes as if they were instead parts of the same whole. In the same study (Schield 2006, 6), 60% of the subjects misinterpreted a simple pie chart, describing it as if it proved a comparison between two different wholes. In both of these cases, the subjects were simply asked to decide whether the data provided did or did not support a statement (with an option to select “Don’t know”). An accuracy rate of around one-half suggests that the subjects may have been essentially guessing, with little if any understanding of the meaning of the relevant percentages to guide them.
Students are not the only ones to display this lack of fluency with the language of percentages and comparisons. Poorly interpreted percentages routinely appear in professional journalism. Here’s one example: the Boston Globe ran a pie chart showing that 45% of car accidents are caused by women and 55% by men, with the heading “Women Safer.”2 Knowing that fewer accidents are caused by women than by men does not support the statement that women are safer than men. Just as with the drunk-driving example, one needs data that allow an appropriate comparison: what percentage of all drivers are women?
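A short, hypothetical calculation shows why the headline is unsupported. The driving shares below are invented scenarios, not data; the point is that the verdict flips depending on a number the chart never provides.

```python
# Accident shares from the Boston Globe chart:
accidents_by_women, accidents_by_men = 0.45, 0.55

# Invented scenarios for the missing number: the share of driving
# done by women. The conclusion depends entirely on this value.
for women_driving_share in (0.50, 0.40):
    women_rate = accidents_by_women / women_driving_share
    men_rate = accidents_by_men / (1 - women_driving_share)
    verdict = "women safer" if women_rate < men_rate else "women not safer"
    print(f"If women do {women_driving_share:.0%} of the driving: {verdict}")
```

If women do half the driving, the headline holds; if they do 40%, the same pie chart supports the opposite conclusion.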
Many of the percentage-based errors that appear in the news are relatively subtle; they require careful thought about the meaning of the underlying rates. Sometimes, however, it appears that the writer never considered what his or her percentages are actually saying. The following, which appeared in large type and multiple colors as part of a graphic in the Boston Globe, is an example:
45 percent of online music fans who use Napster are more likely to increase the amount of music they purchase, than online music fans who do not use Napster. (Feb 4, 2001, p. H1)
Even a quick perusal of the op-ed section of a newspaper, or of the claims and counter-claims of politicians, will turn up occasions where percentages are cited in support of the speaker’s position. One vital skill for basic citizenship is the ability to recognize when conclusions being drawn are not supported by the given data. As Madison (2012, 5) observes,
Frequently, numbers and statistics are assumed correct because of weak understanding. […] [N]umbers (or quantities, i.e., numbers with units) are almost never beyond question, and questions involve language. Students and non-students need to know how to detect misuses and appropriate uses of data and statistics.
Invalid comparisons of quantities and rates are a common technique for lying with basic statistics, perhaps because percentages sound familiar enough that many people fail to notice their subtleties.
Often, numerically unsupported conclusions are accidental, occurring because the writer does not entirely understand the data, as with the graphic above. In persuasive pieces such as advertising, however, numerical comparisons can be intentionally used to mislead. The unsupported conclusion may be unstated, with the numbers being presented in a way that sets up the comparison in the reader’s mind. I call this the implied fallacy: the writer carefully avoids saying anything actually untrue, while still leaving an invalid impression.
In addition to learning to assess statements, students need to learn to write accurately and fluently about data. If they cannot confidently describe what a table means, then they will avoid writing about those data, to the detriment of their papers. Comparing percentages is a vital skill in this context, as well. In their article laying out a rubric for assessing quantitative reasoning in written assignments, Grawe, Lutsky and Tassava (2010, 6) characterize a student paper with weak use of numerical evidence as one that, in part, “may consistently provide data to frame the argument, but fail to put that data in context by citing other numbers for comparison.”
Evidence indicates that many student papers show such weaknesses. At Carleton College, reviewers evaluated the QR content of papers submitted by students for their sophomore writing portfolio. These papers were not selected for their use of QR content, and indeed, many of the papers contained none at all. The goal of the study was not to assess individual students, but rather “to examine uses of QR as a whole in order to gain insight into how we can improve instruction at the institution and to compare QR activity between large groups […] in order to discern effects of institution-level programs and curricular reforms” (Grawe et al. 2010, 1-2). Of these papers (Grawe et al. 2010, 7),
- 31.9% had the problem “fails to provide numbers that would contextualize the argument,”
- 15.3% present “numbers without comparisons that might give them meaning”, and
- 12.5% were classified as “Presents numbers but doesn’t weave them into a coherent argument.”
I participated in a follow-up feasibility study using this rubric to evaluate Wellesley student papers that were selected by faculty from a range of disciplines as having some numerical content. Over the course of the project, I observed that many students who did provide comparisons often did so by quoting descriptions of the numbers, complete with comparison, from secondary sources, rather than coming up with their own discussion of the data. We need to be teaching students how to write effectively about numerical comparisons, if we want to see them doing so in their papers.
One obstacle that prevents students from putting their data in context is that they do not know how to accurately describe rates and percentages. The language they use is often imprecise and confused, and fails to clearly communicate the relevant details to the reader. In my QR course, I see sentences such as
There are more blacks than whites on death row, not as total numbers but speaking in comparison.
This student was struggling to describe a difference of proportions, in contrast with raw numbers. When we discussed the issue in class, it was clear that she realized that her language was clumsy, but she didn’t know how to express her thoughts. Madison and Dingman (2010, 7) point out that this lack of fluency with the language of percentages is not just a problem for students, but shows up in the media as well:
the language used in media articles on percent and percent change is inconsistent, often difficult to parse, and sometimes incorrect. Instances of language use are central issues in several of the case studies.
For students to be able to write persuasive, effective arguments with numerical evidence, they must be able to use raw data to construct and fluently describe useful comparisons. As Schield observes (2008, 94),
the comparison of ratios, rates and percentages in ordinary language requires using English in a very precise manner. Small changes in syntax can produce large changes in semantics.
Many of us use examples of incorrectly described percentages in our classrooms, and encourage our students to critically read statistics in recent media articles. Certainly, this topic cannot be taught without authentic examples to take apart and discuss. But, as Kaminski, Sloutsky and Heckler (2008) show in the context of abstract mathematics, students cannot always generalize principles from specific examples. I have found almost no instructional material that covers both how to describe comparisons of percentages and rates, and how to analyze numbers that appear in an article in order to determine whether the conclusions are justified. In Damned Lies and Statistics, Joel Best (2001, 96-127) provides a detailed and valuable look at appropriate types of comparisons, such as comparisons over time, over places, and among groups. My emphasis is on language that can be used regardless of the type of the data.
Framework for Teaching Comparisons
I approach the topic of making comparisons from two directions: (1) teaching the students to construct comparisons from data that they find in tables or other sources, and (2) teaching them to analyze comparison statements from the media in order to determine whether those statements are accurate and honest. These two perspectives complement one another; experience finding the correct language to describe data leads students to be more alert to the subtleties of statements they are evaluating. In both situations, I ask the students to sketch pie charts for each percentage they look at. They need to label all of the appropriate “wedges.” Of critical importance, they also need to label the “pie” itself – i.e., the whole. The pie serves as a valuable visual metaphor for the percentages, as the sketching and labeling of the pie chart forces the students to consider the often overlooked total, or denominator, of their percentage.
Before considering how to compare percentages, students need to be able to clearly describe the meaning of a single percentage, identifying both the “part” and the “whole.” They also need to be comfortable with the language of percentage change, and percentages of percentages. Good pedagogical materials covering this subject, such as Using and Understanding Mathematics: A Quantitative Reasoning Approach (Bennett and Briggs 2005, 133-147), are already available. The resources I have seen do not, however, go on to discuss constructing comparisons and assessing conclusions. Students need to be taught explicitly how to write these comparisons, and how to evaluate the meaning of ones that they encounter.
Here is the process I have my students follow when they are describing patterns in this type of data:
- Determine the meaning of the data or table by focusing on one quantity and identifying the “whole” that it represents a “part” of.
- Sketch and fully label pie charts that illustrate that percentage, adding any additional wedges from the table.
- Describe comparisons within one pie, using language appropriate to comparing raw counts.
- Select a second percentage that represents the same “wedge” as the first, but out of a different “whole,” and sketch that pie chart.
- Describe comparisons across two “wholes,” each of which has been divided into the same “wedges,” using appropriate comparisons.
In my class, we first work through this process using data from two-way tables, such as Table 1, showing data from the 2008 presidential election exit polls.
Table 1. 2008 National Presidential Election, Vote by Race.
In the first step, my students pick out a single percentage, and decide whether it is a percentage out of the row, the column, or out of some other whole. In this example, the table gives row percentages: the 43% where “White” intersects “Obama” describes the percentage of White voters who voted for Obama, not the percentage of Obama voters who were White. This can be determined by observing that the rows all add up to 100%, and are plausibly out of the same whole, which is not the case for the columns. Students are often surprised at how tricky it can be to answer this question.
Figure 1. Language for comparing counts or percentages within one population.
Next, the students sketch and label a pie chart to illustrate that quantity (Fig. 1). This step helps them confirm the decisions they made about the meaning of the percentage. Filling in the pie with the remaining appropriate wedges from the table leads to a discussion of how to compare different parts of the same whole. The critical point here is to realize that parts of the same whole can be compared with words such as “more,” “fewer,” “at least as many” – the language of raw counts. Students should understand that when two percentages are out of the same whole, they can determine which group represents more people. We know there are more white than black people in the United States, because one can look up the exact numbers at the Census Bureau. While these exit-poll data do not tell us how many white people voted for each of the candidates in 2008, we still know more of them voted for McCain.
We then move to comparing two percentages that describe corresponding subgroups from different wholes. Again, I make sure everyone has sketched the two charts, and realized that we are looking at two different populations, both broken down into the same parts, before writing any description down; see Figure 2. The language is more complicated, as you usually do not know the relative sizes of the populations represented by the two charts. Thus you cannot use terms like “more” to compare subgroups of different wholes. Instead, you could say “A greater percentage of African-American voters than of white voters voted for Obama.” Notice that the language used for comparisons of this type needs to distinguish the groups that serve as whole populations, white and African-American, from the group that makes up the two parallel parts, Obama voters. In Figure 2 the words that describe entire populations are in italics, while words describing the parts being compared are underlined.
Figure 2. Language for comparing percentages across two populations.
Unlike the single-population examples, the language used to describe comparisons across two populations is often specific to the situation. While “a greater percentage” and “more likely” are widely applicable, “more popular” is appropriate only because this example is looking at the choices made by different groups of people. Some of the many terms that can describe comparisons between two different rates are:
- More reliable
- More popular
- More effective
- Better on-time record
- Better batting average (in baseball)
Two-way tables lead to two distinct types of cross-population comparisons. One can compare the breakdown of two different subgroups, as in this example, or compare the way one subgroup breaks down to how the larger population does. Using the “total” line, we see that Obama was less popular with white voters than with the electorate as a whole.
Students should observe that both types of comparisons involve contrasting two parts that differ from one another in exactly one way. Either the parts are two different parts of the same population, or two corresponding, identically described parts of different populations. Changing both the population and the subgroup, such as trying to compare the Latino vote for Obama to the Asian vote for McCain, leads to confusing and often misleading descriptions.
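Both comparison types can be sketched in a few lines of code. The 43% figure appears in the discussion above; the remaining row percentages are approximate published 2008 exit-poll values, and should be treated as illustrative.

```python
# Approximate 2008 exit-poll row percentages. Each row is its own
# "whole," summing to roughly 100%.
exit_poll = {
    "White": {"Obama": 43, "McCain": 55},
    "African-American": {"Obama": 95, "McCain": 4},
    "Total": {"Obama": 53, "McCain": 46},
}

# Within one whole: parts of the same pie support count language.
white = exit_poll["White"]
print("More white voters chose McCain:", white["McCain"] > white["Obama"])

# Across wholes: compare only corresponding wedges, as rates.
print("Greater percentage of African-American than white voters for Obama:",
      exit_poll["African-American"]["Obama"] > white["Obama"])
print("Obama less popular with white voters than with the electorate:",
      white["Obama"] < exit_poll["Total"]["Obama"])
```

Note that the code never compares, say, the Latino wedge of one row to the Asian wedge of another; each valid comparison changes exactly one thing, the part or the whole.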
One of the more-sophisticated goals for students is to learn to ask what appropriate comparison populations should be chosen to put a given statistic in context. The subtle choice of a comparison population can dramatically change their conclusions. In Manhattan, about 16 percent of pedestrian crashes that led to death or serious injury involved a taxi or livery cab, according to an article in The New York Times.3 Are taxi cabs unusually dangerous on the road? Students can come up with several different appropriate comparison populations, and indeed the article presents two: while “only 2%” of cars registered in New York are taxis, they “can make up nearly half” of the cars on the road in Manhattan at any given time.
Once my students have practice describing comparisons from two-way tables, I work with them on reversing the process. We look at statements, such as the drunk-driving and “Women Safer” examples above, and analyze them using a similar series of steps:
- Identify the type of comparison: is it using the language of raw quantities, or of rates?
- Sketch and fully label pie charts that illustrate any data given explicitly, or that illustrate the type of data that would be needed to support the given statement.
- Examine the pie charts to see whether they contain corresponding wedges from two different wholes and whether they support the conclusion.
Here is one straightforward example that focuses on the subtleties of language:
In 2006, two competing cell phone ad campaigns were running in the Boston area. Cingular claimed “the fewest dropped calls of any network,” while Verizon claimed “the most reliable network.” Assuming claims are backed up with data, how can they both be true?
Certainly, the “reliability” of a cell network involves more than just dropped calls, but dropped calls and the ability to get a signal in the first place are important parts of whatever that claim is measuring, so it is reasonable to focus on dropped calls in order to compare these two ads.
Cingular uses the term “fewest”, which compares raw quantities. They are describing how many calls got dropped for Cingular customers, in contrast with the number dropped by other networks. Verizon, in contrast, uses “most reliable.” This, as discussed above, is a rates comparison; it reflects an average user’s experience, regardless of how many users there are. Figure 3 shows sample pie charts illustrating this claim. Verizon’s claim compares the relative size of the “dropped call” wedges, whereas Cingular’s looks at how many calls are in each wedge, without acknowledging the total number of calls in each pie. Both statements can easily be true as long as Cingular’s pie is significantly smaller than Verizon’s, which was true of these two carriers in the Boston area at the time of the ads.
Figure 3. Dropped calls for Verizon and Cingular.
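The compatibility of the two claims can be demonstrated with invented call volumes. The real figures were never published; the numbers below are hypothetical and chosen only to make the two pies different sizes.

```python
# Hypothetical call volumes showing that both ads can be true at once.
# (The real figures are not public; these numbers are invented.)
cingular = {"total_calls": 1_000_000, "dropped": 20_000}   # 2% drop rate
verizon = {"total_calls": 5_000_000, "dropped": 50_000}    # 1% drop rate

# Cingular's claim compares raw counts of dropped calls:
fewest_dropped = cingular["dropped"] < verizon["dropped"]

# Verizon's claim compares rates, an average user's experience:
cingular_rate = cingular["dropped"] / cingular["total_calls"]
verizon_rate = verizon["dropped"] / verizon["total_calls"]
most_reliable = verizon_rate < cingular_rate

print("Cingular: fewest dropped calls?", fewest_dropped)   # count language
print("Verizon: most reliable?", most_reliable)            # rate language
```

With the smaller carrier's pie one-fifth the size of the larger one's, fewer total dropped calls and a worse drop rate coexist without contradiction.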
For another example, consider this sentence, from an op-ed piece in the New York Times:
More of Berkeley’s undergraduates go on to get Ph.D.’s than those at any other university in the country.4
The numerate reader has to decide between two competing interpretations of that “more.” Neither of them entirely matches the language of the article. Either the “more” is meant literally, which speaks less to the strength of U.C. Berkeley than to its size, or it should be replaced with a phrase such as “a greater percentage.”
These examples did not involve looking at any data; rather, they ask exactly what the statements mean. As students start investigating statements that are presented with supporting evidence, they will realize that often a comparison will be made implicitly, not explicitly. In these situations, the reader is assumed to be drawing certain conclusions from the statistics, so it is essential to clearly describe those conclusions before evaluating them. This helps the reader remain alert for the implied fallacy, where an invalid conclusion is unstated and left for the reader to draw from insufficient but persuasive-sounding data.
I have found it helpful to name two common scenarios in which the data do not support the given comparison: the missing comparison and the apples-to-pineapples comparison.
The Missing Comparison
There are two common forms of missing comparisons: (1) only raw numbers are given, without the sizes of the respective populations, to support a conclusion that requires rates; (2) only rates from a single population are given, without any appropriate comparison population, to support the conclusion that those rates are unusually large or small. Statements like these are particularly confusing when they incorporate rates, because a rate gives the impression of providing an appropriate context even when it actually fails to do so. As always, pie charts help students analyze the correct interpretation of the data.
From the Washington Post.
From 1947 to 1980, [college] enrollments jumped from 2.3 million to 12.1 million.5
This sentence was used in an op-ed piece arguing that “college-for-all” is a bad idea. In this context, the reader is clearly meant to focus on the large growth in college enrollment. There was certainly a substantial increase in college enrollment over these 33 years, but ignoring the change in the total population artificially magnifies the effect. This type of missing comparison, where the changing population size goes unmentioned, is usually the easiest for students to recognize.
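The magnification can be quantified with a quick calculation. The population totals below are approximate U.S. Census figures supplied here as assumptions; only the enrollment numbers come from the op-ed.

```python
# Enrollment figures from the op-ed; population totals are approximate
# U.S. Census values (assumed here for illustration).
enroll_1947, enroll_1980 = 2.3e6, 12.1e6
pop_1947, pop_1980 = 144e6, 227e6

raw_growth = enroll_1980 / enroll_1947        # growth in raw counts
rate_1947 = enroll_1947 / pop_1947            # enrollment per capita, 1947
rate_1980 = enroll_1980 / pop_1980            # enrollment per capita, 1980
rate_growth = rate_1980 / rate_1947           # growth in the rate

print(f"Raw enrollment grew {raw_growth:.1f}x, but the per-capita "
      f"enrollment rate grew {rate_growth:.1f}x.")
```

The growth is real either way, but comparing rates rather than counts shrinks the headline multiplier by more than a third.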
From Time Magazine.
The stats speak for themselves: Just 3% to 5% of female entrepreneurs get venture-capital funds.6
These figures are cited in support of the claim that women are “underfunded in business,” presumably in comparison to men. The language is that of a rates comparison, but the data describe only a single pie chart. The implication that the funded wedge is disproportionately small requires that a comparison population – presumably male entrepreneurs – be similarly broken into “v.c. funded” and unfunded wedges, as illustrated in Figure 4.
Figure 4. VC Funding pie charts for male and female entrepreneurs.
As with earlier examples, this comparison would certainly not provide the last word on the subject, but it’s an essential starting point for the conversation.
From a Reuters news story.
“We found that the proportion of infants dying from SIDS in organized child care settings was disproportionately high,” […] 12 percent [of SIDS cases] occurred in home child care settings, 3 percent in day care centers, 4 percent in a relative’s home, and 1 percent with a baby-sitter or nanny. The rest occurred while the baby was under family care.
Here we are given percentages that belong on a single pie chart, with a comparison, “disproportionately high,” that requires comparing two different populations. The pie-chart model helps students identify the form that the missing data must take: they need some larger comparison group of babies, broken out into wedges representing how much time they spend in different locations, and then need to compare the relative sizes of the corresponding wedges. For this sentence to be justified, the wedges for home child care and day care centers must be larger in the SIDS-death pie than they are in the comparison population’s pie. This process leads naturally to a discussion of how to select the appropriate comparison population: should it be all babies, broken out by percent of time spent in these locations, or would it be better to look only at their time spent sleeping, as SIDS occurs while babies are asleep?
Apples-to-Pineapples Comparisons
The above discussion focused on comparing quantities that differ in only one way: either two different parts of the same whole, or corresponding (“same”) parts from two different wholes. Sometimes we see quantities or rates that sound similar and are intended to be compared, but differ in more than one way. I call these apples-to-pineapples comparisons: two numbers that sound superficially similar, but cannot actually be meaningfully compared. They may be different parts of different wholes, or one of them may be a rate while the other is a count. In practice, comparisons like this often involve two changes between the quantities: one that the audience is expected to notice, which makes the desired point, and a second, obscure change that contributes significantly to the observed difference. As with the previous examples, drawing and labeling the pie charts helps to understand the problems with these examples.
These examples are more varied in form, and often more subtle, than the missing comparisons discussed above. They also frequently involve implicit, as opposed to explicit, comparisons, with the faulty conclusion left to the reader. Following are three examples, which illustrate three different ways the one-change-at-a-time comparison rule can be broken.
From an advertising flyer for Princeton Review test prep courses.
448,820 applications were submitted in 2005 for 17,000 med school spots.
While this statement simply provides two raw numbers, it suggests a rate: 17,000 out of 448,820, or about 4%, which is intended to strike the viewer as alarmingly small. That implied rate, however, is incorrect, as the “whole” is the number of applications submitted to med schools, whereas the “part” is the number of people applying, not the number of applications. (Most applicants apply to more than one med school; the average is about 13 applications per person.) Attempting to sketch and label pie charts that depict these numbers highlights the mismatch between them, as seen in Figure 5.
Figure 5. Outcomes for med school applications and applicants.
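The mismatch between the implied rate and a correctly matched rate is easy to compute. The 13-applications-per-person average is the approximate figure quoted above; treating the 17,000 spots as accepted applicants is a simplification for illustration.

```python
# The flyer's two raw numbers:
applications = 448_820
spots = 17_000

# Approximate average from the discussion above:
apps_per_person = 13

# The rate the flyer implies: spots as a "part" of applications.
implied_rate = spots / applications          # mismatched part and whole

# Matching the part to the correct whole: people, not applications.
applicants = applications / apps_per_person  # roughly 34,500 people
actual_rate = spots / applicants

print(f"Implied acceptance rate: {implied_rate:.0%}")
print(f"Rate per applicant:      {actual_rate:.0%}")
```

Under these assumptions, the alarming implied 4% becomes roughly one acceptance for every two applicants, a very different picture.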
From the New York Times.
“According to the Bureau of Justice, 1 in every 1,000 people is raped or sexually assaulted on land each year; on cruise ships, there is only one alleged incident of sexual assault for every 100,000 passengers.”8
These statistics were provided by the president of the trade association for the cruise industry, as evidence that cruise ships are “safe.” The reader is being asked to compare the relative safety of cruise ships with that of land. Two (not-to-scale) pie charts demonstrate this in Figure 6.
Figure 6. Assault rates in U.S. and on cruise ships.
The data provided match the style of the comparison, and the assault-victim wedge for cruise ships is substantially smaller than the wedge for land, presented in a manner that makes the two rates sound directly comparable. Precise labeling of the two charts, however, highlights the problem: the wedges do not carry identical labels. The “part” for the first rate is the number of people assaulted in a year. In contrast, the “part” for the second rate is the number of people assaulted while on a cruise, a much shorter time than the year in the first statistic. Two different changes make for an apples-to-pineapples comparison, where the obvious conclusion about the rate of assault owes much more to the hidden change, the time frame, than to the explicit comparison of land to cruise.
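Normalizing the two rates to a common time frame exposes the hidden change. The one-week cruise length below is an assumption for illustration, and the sketch ignores other mismatches (such as alleged versus reported incidents), but it shows how much of the apparent gap the time frame alone explains.

```python
# The two rates as quoted:
land_rate_per_year = 1 / 1_000
cruise_rate_per_trip = 1 / 100_000

# Assumption: a typical cruise lasts about one week.
cruise_weeks = 1

# If a passenger cruised all year (about 52 week-long trips), the
# per-year cruise rate would be roughly:
cruise_rate_per_year = cruise_rate_per_trip * (52 / cruise_weeks)

print(f"Land:   {land_rate_per_year * 1000:.2f} per 1,000 per year")
print(f"Cruise: {cruise_rate_per_year * 1000:.2f} per 1,000 per year")
```

Under this assumption, the hundredfold gap in the quoted figures shrinks to roughly a factor of two once both rates cover the same time period.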
From the Washington Post.
When President Clinton tried to tackle health care in 1994, it represented 14 percent of our GDP, and 38 million Americans were uninsured. Now, the nation spends 16 percent of its GDP on health, and about 44 million of us are uninsured.9
There are two valid comparisons here: the increasing rate of spending on health care, and the increasing number of people uninsured. The unstated claim, however, is that we are spending more on health care, but people are less likely to be insured – or perhaps fewer people are insured. That confusion is the crux of the problem with this example. The increasing number of people without insurance is being contrasted with the increasing percentage of GDP spent on health care. Tracking down the population figures for these two years reveals that the percentage of people without health insurance barely changed.
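Tracking down those population figures takes only a moment. The totals below are approximate U.S. population values (assumed here: roughly 260 million in 1994 and 293 million a decade later); only the uninsured counts come from the op-ed.

```python
# Uninsured counts from the op-ed; population totals are approximate
# (assumed here for illustration).
uninsured_1994, pop_1994 = 38e6, 260e6
uninsured_later, pop_later = 44e6, 293e6

rate_1994 = uninsured_1994 / pop_1994
rate_later = uninsured_later / pop_later

print(f"Uninsured rate: {rate_1994:.1%} then, {rate_later:.1%} now")
```

The raw count of uninsured people grew by six million, yet the rate moved by less than half a percentage point; the scary-sounding count comparison dissolves once both numbers are expressed as percentages of their respective wholes.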
The framework discussed here is a first step towards developing sophisticated critical thinking skills about comparisons. While the data in two-way tables can be accurately presented with pie charts, not all rates follow the simple part-and-whole model, and the choice of an appropriate comparison population can dramatically change the conclusions. Discussions of the relative safety of flying versus driving, for example, could involve calculations of accidents or deaths per miles traveled, or, for a very different conclusion, per trip made. Nevertheless, the pie chart and the part-and-whole language help to develop students’ initial intuition for the language used to compare rates. That basic conceptual understanding serves as a solid foundation for discussing the more complicated issues that come from digging deeper into the data.
These skills make up one component of the larger network of concepts related to teaching rates, fractions, and percentages from a quantitative literacy perspective. Schield (2008) discusses making comparisons as one of several related quantitative skills, and Lutsky (2008) lays out ten key questions to ask about the source and meaning of such statistics. Working with percentage change, which allows the writer to specify how much more likely one outcome is than another, is a very closely related topic that is frequently covered, and it is listed as an essential topic in the MAA’s “Quantitative Reasoning for College Graduates: A Complement to the Standards” (MAA 1998, appendix B).
Remarkably, the simple skill of describing these comparisons is rarely explicitly taught. Unlike more traditionally “mathematical” skills, almost no calculations are required, and expertise can only be demonstrated with descriptive sentences as opposed to numerical answers. Teaching comparisons need not be time consuming. The basics could certainly be covered in one or two class meetings, and then used and revisited any time students attempted quantitative writing.
My gratitude to Len Vacher and the reviewers of an earlier version of this paper and to the editor, whose comments greatly improved it.
Bennett, Jeffrey and William Briggs. 2005. Using and understanding mathematics: A quantitative reasoning approach, 3rd edition. Boston: Pearson.
Best, Joel. 2001. Damned lies and statistics. Berkeley: University of California Press.
Grawe, Nathan D., Neil S. Lutsky, and Christopher J. Tassava. 2010. A rubric for assessing quantitative reasoning in written arguments. Numeracy 3(1): Article 3. http://dx.doi.org/10.5038/1936-4660.3.1.3 (accessed Nov. 30, 2013).
Kaminski, Jennifer A., Vladimir M. Sloutsky, and Andrew F. Heckler. 2008. The advantage of abstract examples in learning math. Science 320 (5875): 454-455. http://dx.doi.org/10.1126/science.1154659
Lutsky, Neil S. 2008. Arguing with numbers: Teaching quantitative reasoning through argument and writing. In Calculation vs. context: Quantitative literacy and its implications for teacher education, ed. Bernard L. Madison and Lynn A. Steen, 59-74. Washington, DC: Mathematical Association of America.
Madison, Bernard L. 2012. If only math majors could write…. Numeracy 5(1): Article 6. http://dx.doi.org/10.5038/1936-4660.5.1.6 (accessed Nov. 30, 2013).
———, and Shannon W. Dingman. 2010. Quantitative reasoning in the contemporary world, 2: Focus questions for the numeracy community. Numeracy 3(2): Article 5. http://dx.doi.org/10.5038/1936-4660.3.2.5 (accessed Nov. 30, 2013).
Mathematical Association of America. 1998. Quantitative reasoning for college graduates: A complement to the Standards. Committee on the Undergraduate Program in Mathematics.
Schield, Milo. 2006. Statistical literacy survey analysis: Reading tables and graphs of rates and percentages. International Conference on Teaching Statistics. http://www.statlit.org/pdf/2006schieldicots.pdf (accessed Nov. 30, 2013).
Schield, Milo. 2008. Quantitative literacy and school mathematics: Percentages and fractions. In Calculation vs. context: Quantitative literacy and its implications for teacher education, ed. Bernard L. Madison and Lynn A. Steen, 87-107. Washington, DC: Mathematical Association of America.
1 For detailed statistics on drunk driving, see http://www-nrd.nhtsa.dot.gov/Pubs/811016.PDF (accessed Nov. 30, 2013).
2 See http://www.boston.com/news/local/articles/2008/02/10/few_bad_apples/ for a graphic illustrating the article “Accidents waiting to happen” by Matt Carroll and Connie Paige http://www.boston.com/news/local/articles/2008/02/10/accidents_waiting_to_happen in the Feb 10, 2008 Boston Globe (both sites accessed Nov. 30, 2013).
3 “Deadliest for Walkers: Male Drivers, Left Turns” by Michael Grynbaum, in the August 16, 2010 New York Times: http://www.nytimes.com/2010/08/17/nyregion/17walk.html (accessed Nov. 30, 2013).
4 “Cracks in the Future,” by Bob Herbert, in the Oct 3, 2009 New York Times: http://www.nytimes.com/2009/10/03/opinion/03herbert.html (accessed Nov. 30, 2013).
5 “It’s time to drop the college-for-all crusade” by Robert Samuelson in the May 27, 2012 Washington Post: http://www.washingtonpost.com/opinions/its-time-to-drop-the-college-for-all-crusade/2012/05/27/gJQAzcUGvU_story.html (accessed Nov. 30, 2013).
6 “Scandal in Silicon Valley: Why the Ellen Pao Suit Isn’t Helping Women in Tech,” by Amy Tennery, in the June 5, 2012 Time: http://business.time.com/2012/06/05/scandal-in-silicon-valley-why-the-ellen-pao-suit-isnt-helping-women-in-tech/ (accessed Nov. 30, 2013).
7 No other statistics were provided to justify the statements. One version of the article can be found at http://www.royalsociety.org.nz/2000/08/08/health-cribdeath/ (accessed Nov. 30, 2013).
8 “Mystery at Sea: Who Polices the Ships?” by Christopher Elliot, in the Feb 26, 2006 New York Times: http://travel2.nytimes.com/2006/02/26/travel/26crime.html (accessed Nov. 30, 2013).
9 “Once the Stimulus Kicks In, the Real Fight Begins” by Robert Reich, in the Feb 1, 2009 Washington Post: http://www.washingtonpost.com/wp-dyn/content/article/2009/01/30/AR2009013003116.html (accessed Nov. 30, 2013).
Jessica Polito is a Lecturer in the Quantitative Reasoning Program at Wellesley College. Her research interests include student attitudes towards quantitative reasoning, numeracy in the media, and elementary math education.
A version of this article appeared in Numeracy: Advancing Education in Quantitative Literacy. The author retains copyright of this material under a Creative Commons Attribution-NonCommercial 4.0 License.