Teaching Bits: A Resource for Teachers of Statistics

Journal of Statistics Education v.3, n.2 (1995)

Joan B. Garfield
Department of Educational Psychology
University of Minnesota
332 Burton Hall
Minneapolis, MN 55455
612-625-0337
jbg@maroon.tc.umn.edu

J. Laurie Snell
Department of Mathematics and Computing
Dartmouth College
Hanover, NH 03755-1890
603-646-2951
jlsnell@dartmouth.edu

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Joan abstracts information from the literature on teaching and learning statistics, while Laurie summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. Because we are limited in the literature we can access and the time we have to review it, we may overlook articles that belong in this column, so we encourage you to send us your reviews and suggestions for abstracts.


From the Literature on Teaching and Learning Statistics


Special Section on CATS Symposium on Modern Interdisciplinary University Statistics Education (1995)

The American Statistician, 49(1), 1-23.

This section of the February issue of The American Statistician reprints seven papers from a symposium on Modern Interdisciplinary University Statistics Education. The symposium was organized by the Committee on Applied and Theoretical Statistics (CATS) of the National Research Council and was held prior to the Joint Statistical Meetings in 1993. Presentations focused on what changes in statistics education are needed to (1) incorporate interdisciplinary training into the upper-undergraduate, graduate, and postdoctoral statistics programs, (2) bring the upper-undergraduate and graduate statistics curricula up to date, and (3) improve apprenticing of statistics graduate and postdoctoral students and appropriately reward faculty mentors. The papers in this special section are "What Industry Needs" by Jon Kettenring, "What Academia Needs" by Peter Bickel, "What the Government Needs" by N. Phillip Ross, "A Larger Perspective" by John Bailar, "Modernizing Statistics Ph.D. Programs" by John Lehoczky, and two responses to the papers by Joan Garfield and Carl Morris.


"Will Our Students Be Statistically Literate?"

by Iddo Gal (1995). Connections, National Center on Adult Literacy, May 1995, 14-15.

One key goal of adult literacy education is to empower students and enable them to become more informed citizens. Most educators of adult students briefly touch on the topic of statistics, typically teaching fragmented topics in isolation from a problem context. Gal is concerned that this practice is unlikely to contribute to students' ability to make sense of statistical information they encounter in the world around them. This column describes some interpretive skills and dispositional aspects involved in developing adults' statistical literacy and suggests ways to help adult students become statistically literate.


The Psychology of Judgment and Decision Making

by Scott Plous (1993). New York: McGraw-Hill, Inc.

This book (which is part of a series in social psychology) attempts to show how a social perspective on judgment and decision making can offer practical suggestions on how to deal with common problems in life, many of which involve the use of statistical information. Before reading the book, readers are asked to take a survey which includes many items from the research literature documenting misconceptions people have concerning chance events. One purpose for taking this survey is to prevent readers from falling prey to "hindsight biases" or the "I-knew-it-all-along effect." The book is divided into six sections: "Perception, Memory, and Context," "How Questions Affect Answers," "Models of Decision Making," "Heuristics and Biases" (which does a nice job of summarizing the research on misconceptions of probability and statistics), "The Social Side of Judgment and Decision Making," and "Common Traps."


Two American Statistical Association (ASA) publications of interest to teachers of statistics are STATS and the newly created Newsletter for the Section on Statistical Education.

Although STATS is billed as "The Magazine for Students of Statistics," it contains several articles that could be used as the basis for a classroom activity. The Spring 1995 issue includes articles on W. Edwards Deming's red bead experiment, a method of using statistics to predict college ice-hockey teams' performance, and a method used to statistically evaluate Renaissance art objects. Several regular columns include "Life and Hard Times of a Statistician," a featured student project, and "Outlier...s," a collection of interesting stories, anecdotes, jokes, and cartoons.

Two experimental issues of the Newsletter for the ASA Section on Statistical Education have been produced this year. Short articles describe current projects, conferences, events, and organizations related to teaching statistics. The second issue, distributed in July 1995, contains articles on the Statistics Education programs for the 1995 and 1996 Joint Statistical Meetings, the Isolated Statisticians group, the Undergraduate Data Analysis Contest, STATS magazine, the ASA/MAA Joint Committee on Undergraduate Statistics, The International Study Group for Research on Learning Probability and Statistics, the ASA Poster Competition, the Quantitative Literacy programs, and the new Advanced Placement Statistics Course.


Teaching Statistics


A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.

The Circulation Manager of Teaching Statistics is Peter Holmes, p.holmes@sheffield.ac.uk, Center for Statistical Education, University of Sheffield, Sheffield S3 7RH, UK.

Teaching Statistics, Summer 1995
Volume 17, Number 2

This 50th Golden Jubilee Issue is 50% larger than usual and includes all of the regular departments along with many articles on teaching statistics.

"The `Golden Egg'" by Hilary Kimber

Summary: We examine a geometrical construction of a "perfect" egg shape, comparing it with students' conceptions of the ideal shape. Dimensions of birds' eggs are investigated, to find out how egg shapes are related to natural circumstances.

"Beginning Statistics with Cuisenaire Rods" by Caroline Hollingsworth

Summary: Cuisenaire Rods provide a concrete embodiment for teaching mean, median and mode to middle schoolers. These statistical concepts are traditionally only taught abstractly, but may be better understood via manipulatives.

"Statistics and the Mathematical Processes" by Andy Begg

Summary: The increasing emphasis on what mathematicians DO as distinct from what they KNOW in the development of school mathematics curricula presents some interesting possibilities for teachers of statistics. This article looks at these processes and suggests the kind of influence that they may be having on schools.

"Using Computers to Teach Theoretical Statistics" by Lucette Carter and Mathilde Mougeot

Summary: We describe a computer-based project tested on a group of third-year students in MASS (Mathematiques Appliquees aux Sciences Sociales) who had a good general background in mathematics and had already acquired some background in probability and statistics. The idea was to run work sessions in a computer environment parallel to the classical course, with the aim of illustrating results in probability or statistics, with an emphasis on what happens when the usual convergence assumptions for the distributions fail.

"Using IT to Investigate World Statistics" by Brian Hudson

Summary: This article outlines the development of classroom materials and a number of associated datafiles involving the use of IT (Information Technology).

"A Multiple Regression Project" by Roger Johnson

Summary: The number of calories in a serving of a food item may be determined from the amount of fat, protein, and carbohydrates it contains. Students can uncover this relationship by collecting food data and then performing a multiple regression.
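Such a project can be sketched in a few lines. The food items and gram counts below are made up for illustration, with calories generated from the standard Atwater factors (roughly 9 kcal per gram of fat and 4 kcal per gram of protein or carbohydrate), so a least-squares fit with no intercept recovers those factors; students' real label data would recover them only approximately.

```python
import numpy as np

# Hypothetical nutrition-label data: grams of fat, protein, and
# carbohydrate per serving for five made-up food items.
grams = np.array([
    [10.0,  2.0, 25.0],   # e.g., a snack bar
    [ 1.0,  8.0, 12.0],   # e.g., skim milk
    [15.0, 25.0,  0.0],   # e.g., a chicken breast
    [ 5.0,  4.0, 40.0],   # e.g., a bagel
    [ 0.5,  1.0, 27.0],   # e.g., an apple
])
# Calories generated exactly from the Atwater factors 9, 4, 4.
calories = grams @ np.array([9.0, 4.0, 4.0])

# The multiple regression (no intercept) recovers the factors.
coef, *_ = np.linalg.lstsq(grams, calories, rcond=None)
print(coef)  # approximately [9. 4. 4.]
```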

"Electrifying Statistics for Your Students" by Michael Rycraft

Summary: The problem of which electricity tariff is most economical is considered in several different situations. Given the uncertainty of estimating exact usage in advance, various statistical concerns arise, many of which lend themselves to classroom demonstrations.

In addition, there are regular columns and departments such as Classroom Notes ("Asking Sensitive Questions in Surveys" by Paul Hutchinson), Data Bank ("Scoring Patterns in Rugby League" by John Croucher), Standard Errors ("Don't get t out of proportion!" by Gerald Goodall), Project Parade ("Tree Slugs" by Chris du Feu), Historical Perspective ("An Episode in the History of Measurement" by David Hand), Statistics at Work ("Setting Objectives in Scientific Studies" by Janet Riley), Practical Activities ("Chinese Dominoes Games" by Ann-Lee Wang), Problem Page ("Play Your Cards Right" by Mike Fletcher), Software Reviews ("Junior Pinpoint: A Child's Guide to Data Handling," by Alan Graham, "Pinpoint Datafiles," by Chris du Feu, and "World Development Database" by Gwen Royle), and Research Report ("Reflections on the Past 15 Years" by Joan Garfield).


Topics for Discussion from Current Newspapers and Journals


A Mathematician Reads the Newspaper

by John Paulos (1995). New York: Basic Books.

In this book, John Paulos indulges his long-time passion for reading newspapers to continue his campaign to improve the public's numeracy.

The book itself is written in the style of a newspaper, with sections on national and local news, medical news, living styles, and so on. The mathematics is woven into the news story so as to make the reader almost unaware that it is there. Paulos shows that mathematics and related areas such as psychology and philosophy are lurking everywhere in your friendly newspaper.

For example, there is a nice discussion of Kolmogorov complexity woven into remarks about why presidents who oversimplify do better than those who overanalyze and why some issues (for example, the savings and loan scandal or derivatives) are too complex to be compressed into your daily newspaper.

A discussion of the random walk and the stock market was a favorite of reviewers, but our favorite is Paulos' discussion of how to liven up the society page by using incidence diagrams.

While reviewers have widely praised the book, some have exposed their own misunderstandings of mathematics in their reviews. In our opinion, the best review of the book appeared in the Philadelphia Inquirer in the book review section on April 30, 1995. It was written by Charles Seife, who is currently a graduate student in mathematics at Yale.

Like Paulos' previous books, A Mathematician Reads the Newspaper is written in a lively style and is fun to read. It reads like a newspaper, but here all the stories are interesting.


"Schools Bring Math to Life by Teaching Statistics"

by David Leonhardt. The Washington Post, 19 June 1995, A1.

Leonhardt reports that there is a quiet revolution occurring in mathematics teaching. Probability and statistics teaching at grade levels kindergarten through 12 is on the rise. The appeal stems from the fact that the subjects are practical and relevant; teachers argue that probability and statistics will help students become skeptical consumers of the deluge of information presented to society. They say that the shift toward the practical subject of statistics is unmistakable and has helped convince students that mathematics and real life intersect. The trend also reflects the long-growing desire among parents and business executives that math instruction become more relevant to students and the jobs they will eventually seek.

This revolution began in 1989 when the National Council of Teachers of Mathematics issued a national recommendation urging all schools to teach more hands-on math. The effect of this revolution is evident throughout the country's school systems. In Fairfax County, Virginia, statistics composes 25% of the middle school math curriculum. Washington, D.C., public school instructors teach kindergartners about poll taking and second graders about probability. Leonhardt describes a third grade class in which the students carried out an experiment equivalent to estimating the proportion of each color of M&M's candy in a bag. More typically, middle school and high school teachers start with descriptive statistics and then discuss graphs, probability distributions, and correlation.

Within the next two years, an advanced placement test in statistics will be offered. Educators expect the test to lead hundreds of high schools to offer statistics classes. The article also notes the obvious problem of training teachers for this revolution.

Leonhardt remarks, "Like any curriculum change, this one causes debate. In some schools the emphasis on statistics means that fewer students will progress to calculus and be ready for upper-level mathematics and science in college." This is one reason that some schools are teaching statistics in the middle schools.


"Propensity to Abuse--Propensity to Murder?"

by Jon F. Merz and Jonathan P. Caulkins (1995). Chance, 8(2), 14.

"When Batterer Turns Murderer"

by I. J. Good (1995). Nature, 375(6532), 541.

Both authors discuss the same problem suggested by recent comments by Alan Dershowitz, a well-known Harvard law professor who is on the O. J. Simpson defense team. Dershowitz has mentioned in the media several times that the fact that O. J. Simpson is reported to have abused Nicole Simpson does not imply that he is guilty of her murder. For example, he commented on Larry King's television program:

"Nobody dismisses spousal abuse. It's a serious and horrible crime in this country. The statistics demonstrate that only one-tenth of one percent of men who abuse their wives go on to murder them. And therefore, it's very important for that fact to be put into empirical perspective, and for the jury not to be led to believe that a single instance, or two instances of alleged abuse necessarily means that the person then killed. That just doesn't follow."

While Dershowitz estimates the probability that a man would murder his wife given that he abuses her to be 1/1000, both authors believe the jury would be more interested in the probability that the husband is guilty of the murder of his wife given that he abused his wife and his wife was murdered. (Here wife is interpreted in the obvious generalized sense.)

They both solve this problem by using Bayes' theorem, but their solutions differ in an interesting way. Let A be the event that a man has abused his wife, M that the wife is murdered, and G that the man is guilty of murdering his wife.

Merz and Caulkins write the desired odds ratio as

P(G|A,M)/P(~G|A,M) = P(G|M)/P(~G|M) * P(A|G,M)/P(A|~G,M)

and Good writes it as

P(G|A,M)/P(~G|A,M) = P(G|A)/P(~G|A) * P(M|G,A)/P(M|~G,A).

Consequently, they need different kinds of information to answer the question.

Merz and Caulkins say that of the 4936 women who were murdered in 1992, about 1430 were killed by their husband or boyfriend, giving an estimate of .29 for P(G|M). In a newspaper article, Dershowitz stated that "It is, of course, true that, among the small number of men who do kill their present or former mates, a considerable number did first assault them."

Merz and Caulkins interpret "a considerable number" to be 1/2. This gives them P(A|G,M) = .5. Finally, they assume that the probability of a wife's being abused by her husband, given that she was murdered by someone else, is the same as the probability of a randomly chosen wife's being abused. They say that this has been estimated to be .05. This gives an odds ratio of 4.08, with corresponding probability .81 of a husband's being guilty, given that he has abused his wife and she has been murdered.

Good needs first to estimate P(G|A). He starts with Dershowitz's estimate of 1/1000 that the abuser will murder his wife. He assumes the probability is at least 1/10 that this will happen in the year in question. Thus P(G|A) is at least 1/10,000. Of course P(M|G,A) = 1, so he is left only with estimating P(M|~G,A). For this he says that there are about 25,000 murders a year in the U.S. population of 250,000,000. Thus he estimates P(M|~G,A) to be 1/10,000. This gives him an odds ratio of 1 with corresponding probability .5 for the husband being guilty, given that he had abused his wife and she was murdered.

Since they had to make some rather arbitrary assumptions to obtain their estimates, it is not surprising that they get quite different answers. Either way, the true probability is surely much larger than the .001 figure that Dershowitz evidently hoped the jury in the O. J. Simpson case would take to be the relevant probability.
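Both calculations can be chained through the odds form of Bayes' theorem directly from the numbers quoted above; this minimal sketch just reproduces the arithmetic.

```python
# Merz and Caulkins: condition on the murder first.
p_G_given_M = 1430 / 4936          # wives murdered by husband/boyfriend, 1992
prior_odds  = p_G_given_M / (1 - p_G_given_M)
likelihood  = 0.5 / 0.05           # P(A|G,M) / P(A|~G,M)
odds_mc     = prior_odds * likelihood
prob_mc     = odds_mc / (1 + odds_mc)
print(round(odds_mc, 2), round(prob_mc, 2))   # about 4.08 and 0.8

# Good: condition on the abuse first.
p_G_given_A  = 1 / 10_000          # Dershowitz's 1/1000, scaled to one year
prior_odds_g = p_G_given_A / (1 - p_G_given_A)
likelihood_g = 1.0 / (1 / 10_000)  # P(M|G,A) / P(M|~G,A)
odds_good    = prior_odds_g * likelihood_g
prob_good    = odds_good / (1 + odds_good)
print(round(prob_good, 2))         # about 0.5
```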


"Screening Mammography and Public Health Policy--The Need for Perspective"

by C. J. Wright and C. B. Mueller (1995). The Lancet, 346(8966), 29-31.

The authors review the studies on the effects of screening for breast cancer and conclude that screening is not good public policy for any age group. (Screening for breast cancer is currently recommended for women over 50.)

They observe that there has been a lot of publicity about early studies that showed a 30% relative reduction in breast cancer for women over 50, but very little attention paid to newer studies that showed no significant benefit to any group. In addition, they claim that little attention has been paid to the costs and possible harmful effects of screening.

The authors say that randomized prospective trials showed that the number of women who must be screened to achieve one fewer death per year ranged from 7000 to 63,000. About 5% of screening mammograms are positive or suspicious, and of these, 80-93% are false positives, causing unnecessary concern and further procedures, including surgery. False reassurance by negative mammography occurs in 10-15% of women with breast cancer that will manifest clinically within a year. They estimate the mean annual cost per life saved at around $1.2 million, which is consistent with other estimates.
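The arithmetic behind the false-positive figures is easy to check; the cohort size below is arbitrary, chosen only to make the counts concrete.

```python
screened  = 100_000                 # hypothetical screening cohort
positives = 0.05 * screened         # about 5% positive or suspicious

false_pos_low  = 0.80 * positives   # 80-93% of positives are false
false_pos_high = 0.93 * positives

# Positive predictive value: chance a positive result reflects cancer.
ppv_low  = 1 - 0.93
ppv_high = 1 - 0.80

print(int(positives), int(false_pos_low), int(false_pos_high))
# 5000 positives, of which 4000 to 4650 are false
print(ppv_low, ppv_high)  # so a positive mammogram means cancer only 7-20% of the time
```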

The authors discuss the outcomes of a 1985 study that showed a significant 30% reduction in breast cancer mortality, but no reduction in overall mortality. They present a graph analyzing the outcome of the trial in terms of population benefit rather than relative mortality reduction, giving a striking example of the effect of different ways of presenting data.


"Beyond All Reasonable DNA"

by J. Cohen and I. Stewart (1995). The Lancet, 345(8965), 1586-1588.

This article begins with a quote from Sherlock Holmes: "Once you have eliminated the impossible, then whatever remains, however improbable, must be the truth." The authors suggest that Holmes has swept under the rug the issue of "beyond a reasonable doubt" that juries must confront. They suggest that in the case of DNA evidence there is an apparent paradox: "A jury that would accept without qualms an error probability of one in a thousand, such as an honest mistake by a key eyewitness, is unwilling to accept a probability of one in a trillion once its attention is drawn to the statistical nature of the evidence."

The authors give a detailed discussion of three concerns about DNA fingerprinting: (a) statistical non-uniformity in human populations, (b) technical error by forensic scientists, and (c) misunderstanding of the meaning of statistical claims.

In the discussion, they show how these issues are regarded differently by scientists, expert witnesses, and jurors. They remark that "lawyers have shamelessly exploited jurors' mathematical naivete."

The authors remark that "the fact that an individual has confessed to a crime may in some circumstances increase the probability of his or her innocence." They cite "The Interrogator's Fallacy" by R. A. J. Matthews (Bull. Inst. Math. Appl. (1995), 31, 3-5) for this assertion.

They point out that DNA fingerprinting is really quite new and has come a long way toward being accepted in a very short time. They are confident that it will become as firmly established as fingerprinting is now.


"Keeping Up"

by Daniel Seligman. Fortune Magazine, 10 July 1995, 211.

Daniel Seligman calls himself Mr. Statistics in this column. Mr. Statistics says that he was asked about the wager the King makes with Laertes over the outcome of his duel with Hamlet. This wager appears in the final act of Shakespeare's Hamlet. The court flunky presents to Hamlet the terms of the proposed match with Laertes:

The King, sir, hath laid, sir, that in a dozen passes between yourself and him, Laertes shall not exceed you three hits. He hath laid on twelve for nine.

Mr. Statistics reports that the usual interpretation is that there will be twelve "passes" or bouts, each ending with the first "hit." If Laertes is to win the bet with the King, his total hits must exceed Hamlet's by more than three--for example, eight to four.

Mr. Statistics says that he found, from a computer simulation of 100,000 duels, that for a three-point spread to be fair, Hamlet would have to have only a 38% chance of winning a bout.

A more detailed discussion of this wager and its mathematical aspects can be found in the very interesting article "The Odds on Hamlet" by Evert Sprinchorn. This article originally appeared in the Columbia Forum (1964), VII(4), 41, and was reprinted in The American Statistician (1970), 24(5), 14-17.

Sprinchorn does not think the above interpretation is very reasonable. He cannot imagine that the King has such a low opinion of Hamlet's dueling skills. He argues instead for the interpretation that to win the match a dueler has to get three hits in a row. With this interpretation, if the two contestants are equally matched, he shows that the probability that Laertes will win is .443, which is very close to the probability .429, corresponding to the King's odds of 9 to 12.
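Under the first interpretation, the fairness claim can be checked exactly rather than by simulation: for Laertes' total to exceed Hamlet's by more than three, he needs at least 8 of the 12 hits, and that binomial tail probability comes out very close to one half when Hamlet wins each bout with probability 0.38. A sketch under that interpretation:

```python
from math import comb

def p_laertes_wins_bet(p_hamlet, bouts=12, margin=3):
    """Probability Laertes' hits exceed Hamlet's by more than `margin`."""
    p = 1 - p_hamlet                       # Laertes wins a single bout
    # Laertes needs k hits with k - (bouts - k) > margin, i.e. k >= 8 here.
    need = (bouts + margin) // 2 + 1
    return sum(comb(bouts, k) * p**k * (1 - p)**(bouts - k)
               for k in range(need, bouts + 1))

print(round(p_laertes_wins_bet(0.38), 3))  # about 0.496, nearly a fair bet
```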


"Pseudo-Opinion Polls: SLOP or Useful Data?"

by Dan Horvitz, Daniel Koshland, Donald Rubin, Albert Gollin, Tom Sawyer, and Judith M. Tanur (editor) (1995). Chance, 8(2), 16-25.

Science occasionally runs "fax-in-your-answer" polls. The results of such polls have been published in Science with the comment that, while they were not statistically randomized polls, they do provide valuable information about how an admittedly self-selected group felt about an issue. One example of such a poll in Science was the reporting of 200 responses to questions relating to the 1993 special issue on women in science. One result given was the proportion of those responding who felt that there is a female style for doing science. The results were not given by gender. (Only 30 men responded to the poll.)

A number of (self-selected) statisticians wrote to the journal to complain that such polls were useless and a bad example for Science to set. The editor Dan Koshland wrote an editorial defending the polls. The issue was discussed at the 1994 meeting of the American Association for the Advancement of Science by the authors of this article. This article is an account of this discussion.

Koshland again defended the polls. He pointed out that Gallup considers a 60% response to a survey to be a good response, so even those polls might have an element of self-selection. He argued that those who wrote in about the women's issue had strong feelings about the questions asked, and the magazine valued their opinions even though they were obviously self-selected.

Rubin concludes that such polls are okay for internal use but argues that, since Science would not publish other researchers' results based on such polls, they should not publish the results of their polls of this type. Gollin reviews the history of polling and also concludes that such self-selected polls should be discouraged. Congressman Tom Sawyer describes his own use of such polls and defends their use.

The participants had a lively discussion and, in the end, agreed to disagree.


"The Law of Averages"

by Ann E. Watkins (1995). Chance, 8(2), 28-32.

I confess that I never really believed that people think that a coin owes some heads if it has come up tails several times in a row. After reading this article I am prepared to believe they do. Watkins has found a wonderful collection of newspaper quotes to show the many variations on misconceptions of the law of averages.

She starts with the "it is due" interpretation.

Announcer Chick Hearn notes that Perkins had made the last six out of six free throws and concludes that the law of averages works for the opposition.
Dear Abby: My husband and I just had our eighth child. Another girl. Even the doctor told me that the law of averages were in our favor 100 to 1.

The next version is that things will average out over a moderate length of time.

The law of averages is what baseball is all about. It is the leveling influence of the long season. A .250-hitter may hit .200 or .300 for a given period, but he will eventually level off at .250.

Watkins observes that this interpretation is even blessed by the definition of the "law of averages" in the Oxford American Dictionary.

Other interpretations discussed are that everyone or almost everyone is average, rare events should not happen in succession, and if we wait long enough even impossible events can occur.
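The "it is due" misconception lends itself to a direct empirical test: in a long simulated sequence of fair coin flips, the flips that immediately follow a run of five tails are no more likely to be heads than any others. A minimal sketch:

```python
import random

random.seed(1)  # fixed seed for reproducibility
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect every flip that immediately follows a run of five tails.
# If the coin "owed" heads after such a run, these flips would favor
# heads; in fact they behave like any other fair flip.
after_runs = [flips[i] for i in range(5, len(flips))
              if not any(flips[i - 5:i])]
rate = sum(after_runs) / len(after_runs)
print(len(after_runs), round(rate, 3))  # the rate stays close to 0.5
```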

[Editor's note: Bernoulli, the author of the law of large numbers, did not have such a pessimistic view of his readers. Before presenting his proof he writes:

"Further, it cannot escape anyone that for judging in this way about any event at all, it is not enough to use one or two trials, but rather a great number of trials is required. And sometimes the stupidest man--by some instinct of nature per se and by no previous instruction (this is truly amazing)--knows for sure that the more observations of this sort that are taken, the less the danger will be of straying from the mark." (J. Bernoulli, "The Art of Conjecturing IV," trans. Bing Sung, Technical Report No. 2, Harvard University, Dept. of Statistics, 1966.)

If you want to see what life in New York would be like if there were no law of averages, read "The Law" by Robert M. Coates. This article appeared in The New Yorker in the 1940's and was reprinted in The World of Mathematics by James R. Newman (1956), Vol. 4, New York: Simon and Schuster, p. 2268.]


"Ask Marilyn"

by Marilyn vos Savant. Parade Magazine, 9 July 1995, p. 15.

In her February 26, 1995, column, a reader poses the following question:

"I'm flying over the China Sea in a single-engine plane. The same route is being flown by my buddy in a twin-engine plane. The engines are made by different companies, but they're the same in all other respects, such as age, condition and inherent reliability. It is known that the twin-engine plane cannot maintain flight on a single engine. Our destination is hours away. Which plane has a higher probability of going down because of engine failure?"

Marilyn says the single-engine plane is safer, claiming that, all other factors being equal, the twin-engine plane is twice as likely to go down.

In a later column, a reader pointed out that her answer could lead to a probability bigger than one. Marilyn defended her answer, saying the probability is small that an engine would fail.
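The reader's point can be made precise. If each engine fails independently with probability p during the flight, a twin-engine plane that needs both engines goes down with probability 1 - (1 - p)^2 = 2p - p^2: very close to twice p when p is small, as Marilyn assumed, but never greater than 1.

```python
def p_twin_down(p):
    """Twin goes down if either of two independent engines fails."""
    return 1 - (1 - p) ** 2   # = 2p - p**2, slightly less than 2p

for p in (0.001, 0.1, 0.6):
    print(p, p_twin_down(p), p_twin_down(p) / p)

# For small p the ratio is essentially 2, matching Marilyn's answer;
# for large p it falls below 2, and the probability never exceeds 1.
```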

Now she reports a letter from the Deputy Director of the Center for Defense Information complaining that her answer is misleading since no manufacturer is permitted to sell a two-engine aircraft that requires both engines for flight. He is worried that her column will convey too pessimistic a picture of aviation safety.

Marilyn responds that she had to answer the original question in which the twin-engine plane cannot fly with a single engine. She said that such "if-then" problems appear in her column because they are good mental exercises and they are entertaining.

