Teaching Bits: A Resource for Teachers of Statistics

Journal of Statistics Education v.3, n.3 (1995)

Joan B. Garfield
Department of Educational Psychology
University of Minnesota
332 Burton Hall
Minneapolis, MN 55455
612-625-0337
jbg@maroon.tc.umn.edu

J. Laurie Snell
Department of Mathematics and Computing
Dartmouth College
Hanover, NH 03755-1890
603-646-2951
jlsnell@dartmouth.edu

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Joan abstracts information from the literature on teaching and learning statistics, while Laurie summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. Because of limits on the literature we have access to and the time we have to review it, we may overlook articles that belong in this column; we therefore encourage you to send us your own reviews and suggestions for abstracts.


From the Literature on Teaching and Learning Statistics


"Statistics Education Fin de Siecle"

by David S. Moore, George W. Cobb, Joan Garfield, and William Q. Meeker (1995). The American Statistician 49(3), 250-260.

This paper grew out of a session at the 1993 Joint Statistical Meetings that focused on imagining the state of statistics education at the end of this century. David Moore (serving as provocateur) raised challenging questions in three areas of statistics education: the role of technology, new ways of helping students learn, and teaching in institutions of higher education. Acknowledging that "higher education faces an environment of financial constraints, changing customer demands, and loss of public confidence," Cobb, Garfield and Meeker took turns as primary and secondary responders to these questions. Audience reactions to these concerns are also summarized.


"Confessions of a Coin Flipper and Would-Be Instructor"

by Clifford Konold (1995). The American Statistician 49(2), 203-209.

Konold's extensive research on students' probabilistic reasoning is well known and has provoked many instructors to follow his recommendation to have students first make predictions about random events and then test them by performing experiments or computer simulations. Konold's "ProbSim" software, described in this paper, was originally designed to facilitate this learning activity by providing an easy and graphical way for students to set up probability models to generate simulated data. In this article Konold recounts a surprising experience he had while trying out this instructional approach with a student over a series of individual tutoring sessions, an experience that led him to rethink and test his own beliefs about coin flipping. The paper concludes with some practical recommendations for teachers of statistics who use simulations with students to help them overcome misconceptions related to probability.


1994 Proceedings of the Section on Statistical Education

American Statistical Association, 732 North Washington Street, Alexandria, VA 22314.

Each year the ASA Section on Statistical Education publishes papers presented in their sessions at the Joint Statistical Meetings. The papers in this volume were presented at the meetings held in Toronto, Canada, in August, 1994.

The volume includes papers from five invited paper sessions on the following topics:

I. Robust Regression in Practice

II. Improving Teaching of Graduate Level Statistics Service Courses

III. Short Courses: Challenge, Issues and Educational Value

IV. The Adopt-A-School Project: Successful Models for K-12 Outreach

V. The First Day of Class

There are contributed papers from sessions on the following topics:

I. Demonstrating Statistical Concepts Using Computers, Graphics and Geometry

II. Improved Methods for Statistical Instruction, Teacher Training, and Evaluation

III. Inconsistencies in Current Practice: Handling Interactions Between Fixed and Random Effects

IV. Statistical Education for Business and Industry

V. Programs and Techniques for Teaching Data Analysis and Interpretation

VI. Statistical Consulting

VII. Teaching Statistics: Problems and Solutions

VIII. Training and Consulting for Industry: What's Missing?

The proceedings conclude with seven contributed papers from poster sessions.


Teaching Statistics


A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.

The Circulation Manager of Teaching Statistics is Peter Holmes, p.holmes@sheffield.ac.uk, Centre for Statistical Education, University of Sheffield, Sheffield S3 7RH, UK.

Teaching Statistics, Autumn 1995
Volume 17, Number 3

"Apparent Decline as a Sign of Improvement? or, Can Less Be More?" by Sangit Chatterjee and James Hawkes

Summary: Improvement of a system need not be the usual growth expected of dynamic systems. Decreasing variability, an important index of improvement, is discussed. The disappearance of 0.400 hitters from professional baseball can be better understood under such an assumption. Other examples illustrating the apparent paradox are also mentioned.

"Probability, Intuition, and a Spreadsheet" by Alan Graham

Summary: Probability simulations are a useful way of helping students to challenge their intuitions about chance events. However, tossing dice and coins can be slow and messy and may mask underlying long-run patterns. This article provides examples of probability simulations on a spreadsheet which overcome some of these difficulties.

"Primary Data" by Andrew Bramwell

Summary: This article gives a rare insight into the way in which statistical thinking may be introduced in the primary school classroom.

"Secondary Students' Concepts of Probability" by Richard Madsen

Summary: Students develop concepts of probability without formally studying the discipline and some of their concepts are at variance with those taught in the classroom. A survey of 200 students in five schools in Missouri was undertaken in an attempt to learn about pre-conceptions. The results are discussed in this article.

"Arm-waving Mathematics: Sound, if Not Rigorous" by Ken Brewer

Summary: This article suggests some techniques, herein referred to as "arm-waving", which utilise analogies, graphs and examples to illustrate the contribution of mathematics to statistical concepts without a rigorous treatment of the mathematics used.

"Statistical Tools and Statistical Literacy: The Case of the Average" by Iddo Gal

Summary: This article is intended to serve as a starting point for a dialogue regarding the goals of teaching students about averages and how to assess their emerging knowledge.

In addition to these articles, this issue includes the columns Standard Errors, Software Review, Data Bank, and Computing Corner.


Topics for Discussion from Current Newspapers and Journals


"Bordeaux Wine Vintage Quality and the Weather"

by Orley Ashenfelter, David Ashmore, and Robert Lalonde (1995). Chance, 8(4), 7-14.

If you want to drink good wine you can buy new wine and let it mature in your cellar or you can buy older wine that has matured in some dealer's cellar. To decide which is the better strategy it is helpful to know the answers to questions like: Does the price of mature wine reflect the quality of the wine? Presumably the answer to this question is yes, because eventually the quality of the wine is known, and the price reflects this knowledge. Other natural questions might be: Is the price of new wine a good predictor of the price after it has matured? If not, what is a good predictor? This article tries to answer such questions.

The authors begin by providing the 1990-1991 London auction prices of red wines from six of the best known Chateaux (vineyards) that were produced in the years from 1960 to 1969. These years were chosen because by 1990 the wines should be fully mature and their quality known. For a given Chateau, there is wide variation in these prices through the years and, for a given year, there is wide variation in the prices between Chateaux.

Using regression techniques, the authors show that the prices of the wines when new are not good predictors of their prices when mature. On the other hand, weather conditions are very good predictors. Great vintages for Bordeaux wines correspond to years in which August and September were dry, the growing season was warm, and the previous winter was wet. Ashenfelter uses this fact to estimate the value of new wines and provides these estimates in a newsletter he distributes called "Liquid Assets: The International Guide to Fine Wines."
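
As a rough sketch of the kind of regression described above, one could fit (log) auction price on the weather variables mentioned. The numbers and variable names below are invented for illustration; they are not the authors' data or their fitted model.

    import numpy as np

    # Hypothetical vintages (NOT Ashenfelter's data): log auction price,
    # growing-season temperature (C), August-September rain (mm), and
    # previous-winter rain (mm).
    log_price    = np.array([4.2, 3.1, 4.8, 3.5, 4.0, 3.3])
    temp         = np.array([17.1, 16.0, 17.6, 16.4, 16.9, 16.2])
    harvest_rain = np.array([60.0, 170.0, 40.0, 130.0, 90.0, 150.0])
    winter_rain  = np.array([600.0, 450.0, 700.0, 500.0, 650.0, 480.0])

    # Ordinary least squares with an intercept column.
    X = np.column_stack([np.ones_like(temp), temp, harvest_rain, winter_rain])
    coeffs, *_ = np.linalg.lstsq(X, log_price, rcond=None)
    print("intercept, temperature, harvest rain, winter rain:", coeffs)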

Professor Ashenfelter is a Princeton economist who is widely quoted in newspapers on weightier matters, but his newsletter also makes the news occasionally. This article provides some of the more humorous remarks made by well-known wine critics on the use of statistics to assist in judging wines.


"Picturing an L.A. Bus Schedule"

by Howard Wainer (1995). Chance, 8(4), 44.

Howard Wainer edits a column in Chance called "Visual Revelations." His column provides wonderful examples for classroom discussions of the use of graphics. This month he considers a question from the first National Adult Literacy Survey conducted in 1992. This question gives the appropriate L.A. bus schedule and asks how long you would have to wait for the next bus on a Saturday afternoon if you missed the 2:35 bus leaving Hancock and Buena Ventura, going to Flintridge and Academy. The schedule is typical of those we have all struggled with: columns of outbound times and inbound times, remarks about buses that run Monday through Friday only, and so on.

Wainer suggests that we should make a general-purpose plot of the bus data, and then see how it serves to answer a variety of questions, including the one on the quiz. His choice is to plot the time of day on the horizontal axis and the various bus stops on the vertical axis. A change of scale suggests itself; after this change is made we have a plot that makes it easy to see regularities in the way the buses run. The cyclic nature of the graph suggests that there is a single bus going back and forth on the route considered, making a round trip in just under two hours. The graph provides easy answers to a variety of questions, including the one on the survey.
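
A minimal sketch of the kind of plot Wainer describes, with invented stop positions and times (the actual survey schedule is not reproduced here):

    import matplotlib.pyplot as plt

    # Hypothetical data: one bus shuttling between three stops, with time of
    # day (hours) on the horizontal axis and the stops on the vertical axis.
    stop_names = ["Hancock/Buena Ventura", "Midpoint", "Flintridge/Academy"]
    stop_index = [0, 1, 2, 1, 0, 1, 2, 1, 0]
    times      = [13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0]

    plt.plot(times, stop_index, marker="o")
    plt.yticks(range(len(stop_names)), stop_names)
    plt.xlabel("Time of day (hours)")
    plt.title("Back-and-forth pattern of a single bus (hypothetical data)")
    plt.show()

The zigzag trace makes the cyclic structure of the schedule visible at a glance, which is exactly the regularity Wainer's plot exploits.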


"Fuzzy Logic: Great Hope or Grating Hype?"

by Michael Laviolette (1995). Chance, 8(4), 15-19.

The author of this article feels that many problems currently being solved by fuzzy set theory could be equally well solved using probability theory. To illustrate this he considers the following simple application of fuzzy set theory to control theory.

You want an air-conditioning controller to make a motor run at speed y when the temperature is x. You only have a vague feeling about when the room is cool, just right, or warm. Fuzzy logic suggests associating these labels with appropriate intervals of temperatures. These sets are called fuzzy sets and are allowed to overlap. For example, suppose you assign the interval from 50 to 70 as the "cool" set, 60 to 80 as "just right," and 70 to 90 as "warm." Then the temperature 65 is in both the "cool" set and the "just right" set. For each fuzzy set you define a membership function that assigns a value between 0 and 1 to each member of the set. For example, for the cool interval from 50 to 70, you might make the membership function increase linearly from 0 to 1 as the temperature goes from 50 to 60 and decrease linearly from 1 to 0 as the temperature goes from 60 to 70. Then 60 is a really cool temperature, but 65 is only .5 cool.

Similarly, you can determine fuzzy sets and membership functions corresponding to intervals of speed that you consider slow, medium, and fast.

We associate "cool" with the motor being on "slow," "just right" with it being on "medium," and "warm" with it being on "fast." This restricts how we make the correspondence between temperatures and speeds, but at the same time creates some conflicts. For example, the temperature 65 is in both the "cool" and the "just right" temperature sets, so it should correspond to a point in either the "slow" or "medium" set or possibly both. The temperature and speed membership functions determine, by fuzzy logic, a new speed fuzzy set and membership function for the temperature 65. When the temperature is 65, the controller sets the speed equal to the average speed calculated using this membership function.
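
A minimal sketch of such a controller follows, using the triangular temperature membership functions described above. The speed intervals and the weighted ("centroid") average used to pick a single speed are assumptions made for illustration, not Laviolette's exact construction.

    import numpy as np

    def tri(x, lo, mid, hi):
        # Triangular membership: 0 at lo and hi, 1 at mid, linear in between.
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (mid - lo) if x <= mid else (hi - x) / (hi - mid)

    # Temperature fuzzy sets; the article specifies the "cool" triangle, and the
    # other two are assumed to have the same symmetric shape.
    temp_sets = {"cool": (50, 60, 70), "just right": (60, 70, 80), "warm": (70, 80, 90)}

    # Speed fuzzy sets: the article gives no numbers, so these are invented.
    speed_sets = {"slow": (0, 25, 50), "medium": (25, 50, 75), "fast": (50, 75, 100)}

    # Rules: cool -> slow, just right -> medium, warm -> fast.
    rules = {"cool": "slow", "just right": "medium", "warm": "fast"}

    def fuzzy_speed(temperature):
        speeds = np.linspace(0, 100, 401)
        combined = np.zeros_like(speeds)
        for t_label, s_label in rules.items():
            # Clip each rule's speed set at the degree to which the current
            # temperature belongs to the corresponding temperature set.
            degree = tri(temperature, *temp_sets[t_label])
            member = np.array([min(degree, tri(s, *speed_sets[s_label])) for s in speeds])
            combined = np.maximum(combined, member)
        # Defuzzify with a weighted (centroid) average of the combined set.
        return np.sum(speeds * combined) / np.sum(combined)

    print(fuzzy_speed(65))   # 65 is .5 "cool" and .5 "just right"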

For a detailed description of how this is done, consult the author's longer article (Technometrics (1995), 37(3), 249-261). The explanation in the Chance article is rather brief, and a key figure (the last part of Figure 2) is incorrect.

In the probabilistic approach to the problem, the membership functions are replaced by conditional probabilities. We determine, subjectively or otherwise, probabilities of the form "the probability that the room is perceived as cool given that the temperature is 65" and probabilities of the form "the probability that the machine runs at a particular speed given that it is running at medium speed." These probabilities, combined with the rules associating temperature sets with speed sets, allow you to compute the expected speed for a given temperature x. The controller then sets the speed equal to y, where y is this expected speed with respect to the conditional probabilities.
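
A correspondingly minimal sketch of the probabilistic version, with invented conditional probabilities and representative speeds (the article gives no numerical values):

    # Invented values for illustration: P(perceived label | temperature = 65).
    p_label_given_temp = {"cool": 0.5, "just right": 0.5, "warm": 0.0}

    # Representative (conditional expected) speeds for each label, also invented.
    expected_speed_given_label = {"slow": 25.0, "medium": 50.0, "fast": 75.0}

    rules = {"cool": "slow", "just right": "medium", "warm": "fast"}

    # Expected speed at 65 degrees: average the label speeds, weighted by the
    # probability that the room is perceived in each category.
    speed = sum(p * expected_speed_given_label[rules[label]]
                for label, p in p_label_given_temp.items())
    print(speed)   # 37.5 with these hypothetical numbers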

Laviolette's article in Technometrics includes long discussions by workers in fuzzy set theory describing their feelings about the relationship between probability and fuzzy sets.


Luck: The Brilliant Randomness of Everyday Life

by Nicholas Rescher (1995). New York: Farrar, Straus & Giroux.

We teach and write about chance as if we know what it is. Yet we talk about luck, good and bad, all the time and seldom ask what luck is and how it relates to chance. In this book the philosopher Nicholas Rescher attempts to remedy this situation.

Rescher's use of the term "luck" is consistent with the definition in the Oxford English Dictionary: "the fortuitous happening of an event favorable or unfavorable to the interest of a person." Thus luck combines a chance event with an effect on an individual. We have good luck if the chance event helps us and bad luck if it hurts us.

Rescher writes: "Recognizing the prominent role of sheer luck throughout the role of human affairs, this work will address such questions as: What is luck? How does it differ from fate and fortune? What should our attitude toward lucky and unlucky people be? Can we expect to control or master luck? Are people to be held responsible for their luck? Should there be compensation for bad luck? Can luck be eliminated in our lives?"

The only question for which you will find a definitive answer is: "Can luck be eliminated from our lives?" The answer is no! You may be disappointed on the first reading of this short book because you don't get enough answers. However, you will start asking your own questions such as: Is there a law of large numbers for luck? You will find yourself discussing the meaning of luck with your colleagues and students. Perhaps from this you will get lucky and discover what luck really is.


"Divine Authorship? Computer Reveals Startling Word Pattern"

by Jeffrey B. Satinover (1995). Bible Review, 11(5), 28.

This article reviews the research of three statisticians, Doron Witztum, Eliyahu Rips, and Yoav Rosenberg, published in Statistical Science (1994, 9(3), 429-438). These authors claim to show that the book of Genesis contains information about events that occurred long after Genesis was written, and that this finding cannot be accounted for by chance.

To show this, the authors chose 32 names from the Encyclopedia of Great Men of Israel and formed word pairs (w, w'), where w is one of the names, and w' is a date of birth or date of death of the person with name w. We say a word w is "embedded" in the text if its letters appear in the text at positions corresponding to an arithmetic sequence (not counting spaces). For example, the word "has" is embedded in the sentence "The war is over," since the letters h, a, and s occur in the sentence separated in each case by two letters. The authors show that the names and dates they chose appeared in Genesis (which is not itself surprising) with the names nearer their matching dates than could be accounted for by chance (p = .00002).
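
As an illustration of the "embedded word" definition only (not the authors' full proximity analysis), the following sketch checks whether a word's letters occur in a text, ignoring spaces, at positions forming an arithmetic progression:

    def is_embedded(word, text):
        # True if the letters of `word` occur in `text` (letters only, case
        # ignored) at equally spaced positions.
        letters = [c for c in text.lower() if c.isalpha()]
        w = word.lower()
        n, k = len(letters), len(w)
        for start in range(n):
            for skip in range(1, n):
                positions = [start + i * skip for i in range(k)]
                if positions[-1] >= n:
                    break
                if all(letters[p] == w[i] for i, p in enumerate(positions)):
                    return True
        return False

    print(is_embedded("has", "The war is over"))   # True: h, a, s two letters apart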

Satinover asks: "What was the purpose of encoding all this information into the text?" He answers his own question with: "Some would say it is the Author's signature."

At the suggestion of a referee, the authors tried the same tests on other Hebrew works and even Tolstoy's War and Peace translated into Hebrew. They did not find any similarly unlikely events in these controls.

When the results were published in Statistical Science, the editors commented that the referees doubted this was possible but could not find anything wrong with the statistical analyses. They published it so the rest of us could try to discover what is going on.

The authors first announced their results in the Journal of the Royal Statistical Society A (1988, 151(1), 177-178), while commenting on an article "Probability, Statistics and Theology" by D. J. Bartholomew. After this announcement, a public statement was made by well-known mathematicians including H. Furstenberg at Hebrew University and Piatetski-Shapiro at Yale that these results "represented serious research carried out by serious investigators."

The article gives a nice description of this work and how it has been received. Evidently, responses so far have fallen into two categories: a priori acceptance and a priori rejection, the former by believers and enthusiasts and the latter by scientists who say that no amount of evidence would be convincing.


"Breast Cancer Study a First"

by Robert Cooke. Newsday, 15 November 1995, A36.

This article reports on the first study that used a probability sample to study risk factors for breast cancer. The researchers considered three factors generally thought to be risk factors for breast cancer: not having a baby or waiting until after age 19 to have one, having a moderate or high income, and having a family history of breast cancer.

The study involved 7,508 women between ages 25 and 74, and began in 1971. By 1987 when the study ended, 193 women had developed breast cancer. The results of the study suggested that 41% of the risk was linked to the three factors considered.

The article states that it was estimated that 29% of the breast cancer cases were attributable to not having a baby or waiting until after age 19. An additional 19% were linked to having a moderate or high income, and 9% were linked to an inherited predisposition for breast cancer. It is interesting to think about what this really means.
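
Figures like these are usually read as population attributable fractions: the share of cases that would not have occurred if a factor were removed, computed from the factor's prevalence and relative risk. A minimal sketch with invented numbers (not the study's prevalences or relative risks) is given below; note that attributable fractions for overlapping factors need not add up to a single combined figure.

    def attributable_fraction(prevalence, relative_risk):
        # Levin's population attributable fraction: p(RR - 1) / (1 + p(RR - 1)).
        excess = prevalence * (relative_risk - 1.0)
        return excess / (1.0 + excess)

    # Invented example: 30% of women exposed to a factor that doubles risk.
    print(attributable_fraction(0.30, 2.0))   # roughly 0.23, i.e. 23% of cases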


"$4.1 Million Awarded in Implant Case; Dow Chemical Facing Prospect of More Suits"

by Jay Mathews. The Washington Post, 29 October 1995, A5.

Last spring, Dow Corning filed for bankruptcy as a result of lawsuits brought by women who alleged their health had been ruined by silicone breast implants. Fearing they would be denied compensation, some women sought to sue the parent company, Dow Chemical. A Nevada jury has now ruled that Dow Chemical must pay an Elko, Nevada, woman $3.9 million in damages. This is the first time the parent company has been held responsible for damage allegedly caused by the implants.

Health complaints have ranged from chronic fatigue and muscle pain to connective tissue disorders and rheumatic diseases, although a series of scientific studies have been unable to establish any links to the implants. The plaintiff's attorneys argued that Dow Chemical had done studies of other uses of silicone in industry and agriculture and knew of problems that should have been made public. Dow Chemical's attorneys denied this claim. They maintained that the plaintiff's symptoms were consistent with traumatic stress disorder and fibromyalgia unrelated to the implants, and that she sought medical attention for the implants only after seeing an attorney.


"Proof of a Breast Implant Peril is Lacking, Rheumatologists Say"

by Gina Kolata. The New York Times, 25 October 1995, C11.

Mere days before the verdict in the above article was announced, the American College of Rheumatology issued a formal statement saying that there was no evidence that silicone breast implants cause the diseases attributed to them, and that the FDA and the courts should stop acting on the basis of anecdotal evidence. (In 1992, the FDA imposed a moratorium on use of the devices until the alleged health risks were investigated.)

The article notes that, since an estimated one million American women have received implants, it would be expected by chance alone that thousands would become ill with connective tissue and rheumatic diseases. Some doctors disagree with these conclusions and have testified in court that the devices cause a new type of auto-immune disorder. But Dr. Sam Ruddy, the departing president of the American College of Rheumatology, said that there was no scientific evidence to support this claim. Instead, there are just "collections of cases with no controls."

For a more technical article on this subject, see "Silicone Breast Implants and the Risk of Connective-Tissue Diseases and Symptoms" by J. Sanchez-Guerrero, et al. (1995), New England Journal of Medicine, 332, 1166-1170.


"In Scientific Studies, Seeking the Truth in a Vast Gray Area"

by Lena Williams. New York Times, 11 October 1995, C1.

This article reports on a one-day meeting of epidemiologists and journalists in Boston to try to find solutions to public confusion caused by contradictory recommendations on medical issues. There was plenty of blame to go around: Scientists tend to overstate their findings to get attention or grants or both. Journalists add to the problem by focusing on the most controversial or titillating aspects of medical research. In addition, the public is eager to find quick fixes to medical problems.

A recent study linking moderate weight gain in middle-aged women to an increased risk of death was held up as an example of the problem. The issues in this study were complicated, and many accounts did not give enough details to explain how the findings depended on factors like race and eating patterns. In addition, the report that the study's author serves as an adviser to two companies that make diet pills created doubts about the study.

The scientists reviewed some of the reasons that their findings may not be accurate: biases, inaccurate reporting by subjects, and other methodological problems. They agreed that they use too much jargon; they present journalists with the almost impossible task of learning about results and explaining them in a non-technical way with only a few days' study. They recommended that articles be released to journalists weeks in advance, rather than days in advance.


"Keno Is as Popular in Delis as in Bars"

by Ian Fisher. New York Times, 17 October 1995, B6.

New York has a new game called Quick Draw, a form of Keno. It is outpacing earnings projections by almost 20%, but some of the top-selling outlets are convenience stores and other stores where alcohol is not sold. This article discusses concerns about where and how the game is being played. The fact that you can play a new game every five minutes suggests that it may become addictive for some players.

To play Quick Draw, you specify a set of numbers chosen from the first 80 integers. You can have from one to ten numbers in your set. The computer then picks 20 numbers at random from the integers from 1 to 80. You can bet 1, 2, 3, 4, 5, or 10 dollars. You are paid off according to how many numbers are in both your set and the set chosen by the computer. The payoffs are chosen in such a way that your expected loss, no matter how many numbers you choose, is about 40%.
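
The chance of matching k of your chosen numbers follows a hypergeometric distribution (20 winning numbers drawn from 80). The sketch below computes these probabilities; since the abstract does not list the payoff table, the expected-loss figure of about 40% is not re-derived here.

    from math import comb

    def match_probability(n_picked, k_matched, pool=80, drawn=20):
        # P(exactly k of the player's n numbers are among the 20 drawn).
        return (comb(drawn, k_matched) * comb(pool - drawn, n_picked - k_matched)
                / comb(pool, n_picked))

    # Distribution of the number of matches when the player picks 10 numbers.
    for k in range(11):
        print(k, round(match_probability(10, k), 4))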

Donald Trump and others tried to stop this game on the grounds that it was not a lottery as defined by the State Constitution and thus not exempt from New York's general prohibition against gambling. A judge ruled that "the game contained all the essential features of a lottery: i.e., consideration for chances, represented by numbers drawn at random, and a prize for the winning numbers. A lottery agent inserts the player's picks into a computer terminal--the player does not. Nor does the machine eject anything of value as would a slot machine--only a bet slip used by the player to compare the numbers to those drawn and displayed on the video screen." Some pretty fine distinctions are being made here.


"Ask Marilyn"

by Marilyn vos Savant. Parade Magazine, 15 October 1995, 13.

A reader writes:

I've heard that when playing cards, when you're dealt a pair, it increases the odds that your opponent is dealt a pair, too. Is this true? If so, how?

Marilyn says it's true and illustrates with a counting argument, assuming that you and your opponent are each dealt two-card hands. A pair in any of the 13 denominations can be obtained in C(4,2) = 6 ways, by choosing a pair of suits. Marilyn demonstrates this by explicitly listing the combinations. If you hold a pair, you have eliminated five of your opponent's opportunities for pairs, since there remains only one way for her to get a pair in the same denomination as you (there remain six options for any other denomination). On the other hand, if you don't hold a pair, you reduce to C(3,2) = 3 the number of ways she could get a pair in either of the two denominations you hold. This is a total loss of six opportunities, which is one more than the five she loses when you hold a pair. Her chances for a pair are indeed better when you hold a pair!
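
Marilyn's counting argument can be verified by brute-force enumeration of two-card hands from a standard 52-card deck, as in the sketch below.

    from itertools import combinations

    # A card is a (rank, suit) pair; a two-card hand is a pair if ranks match.
    deck = [(rank, suit) for rank in range(13) for suit in range(4)]

    def opponent_pair_prob(my_hand):
        # P(the opponent's two cards form a pair), given my two cards are removed.
        remaining = [c for c in deck if c not in my_hand]
        pairs = total = 0
        for a, b in combinations(remaining, 2):
            total += 1
            pairs += (a[0] == b[0])
        return pairs / total

    print(opponent_pair_prob([(0, 0), (0, 1)]))   # I hold a pair: about 0.0596
    print(opponent_pair_prob([(0, 0), (1, 0)]))   # no pair held: about 0.0588

The slight edge (73 versus 72 favorable hands out of 1225) matches the five-versus-six counting in the paragraph above.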

For more problems like this one, see "Do Good Hands Attract?" by S. Gudder (1981), Mathematics Magazine, 54(1), 13-16.


"Mortality Associated With Moderate Intakes of Wine, Beer, or Spirits"

by Morten Gronbaek et al. (1995), British Medical Journal, 310(6988), 1165-1169.

A number of studies have shown a U-shaped curve for the relative risk of mortality as a function of alcohol intake for both men and women. This article reports the results of a large study carried out in Denmark to assess the effects of different types of alcoholic drinks on the risk of death from all causes and from heart attacks, taking into account sex, age, socioeconomic conditions, smoking habits, and body mass index.

The study followed 13,285 subjects (6051 men and 7234 women) between ages 30 and 79 from 1976 to 1988. The authors found that beer intake had little effect on the relative risk of mortality. Intake of spirits also had little effect up to 3 to 5 drinks daily, at which point there was a significant increase in the relative risk of mortality. On the other hand, the relative risk as a function of wine intake dropped continuously, having its lowest value for 3 to 5 drinks daily. Even drinking wine only occasionally seemed to help.

This article was the basis of a segment on the television show 60 Minutes on November 5, 1995, on the benefits of wine in the prevention of heart disease. This was the second such discussion 60 Minutes has had. Their segment called the "French Paradox," shown four years ago, is generally credited in the wine business with causing an upsurge in red wine sales that continues today.
