Teaching Bits: A Resource for Teachers of Statistics

Journal of Statistics Education v.5, n.1 (1997)

Joan B. Garfield
Department of Educational Psychology
University of Minnesota
332 Burton Hall
Minneapolis, MN 55455
612-625-0337

jbg@maroon.tc.umn.edu

William P. Peterson
Department of Mathematics and Computer Science
Middlebury College
Middlebury, VT 05753-6145
802-443-5417

wpeterso@panther.middlebury.edu

This column features "bits" of information sampled from a variety of sources that may be of interest to teachers of statistics. Joan abstracts information from the literature on teaching and learning statistics, while Bill summarizes articles from the news and other media that may be used with students to provoke discussions or serve as a basis for classroom activities or student projects. We realize that, given the limits of the literature we have access to and the time we have to review it, we may overlook some potential articles for this column, and we therefore encourage you to send us your reviews and suggestions for abstracts.


From the Literature on Teaching and Learning Statistics


"The Evolution with Age of Probabilistic Intuitively Based Misconceptions"

by Efraim Fischbein and Ditza Schnarch (1997). Journal for Research in Mathematics Education, 28(1), 96-105.

This article reports the results of a study of the stability of probabilistic misconceptions across different age levels. The subjects in the study were students in grades 5, 7, 9, and 11, as well as college students. All subjects were given the same 7-item test consisting of probability problems, each related to a particular misconception (e.g., representativeness, the conjunction fallacy, availability). Percentages of students who selected responses identified as representing a misconception were compared across the five groups, and trends across age levels were noted. The researchers were surprised to find three different outcomes: one misconception appeared stable across all age levels, some grew stronger with age, and some grew weaker. The problems on the test are included in the article.


"Teaching Survey Sampling"

by Ronald S. Fecso, William D. Kalsbeek, Sharon L. Lohr, Richard L. Scheaffer, Fritz J. Scheuren, and Elizabeth A. Stasny (editor) (1996). The American Statistician, 50(4), 328-340.

An invited panel at the 1995 Joint Statistical Meetings led to the writing and publication of this article. Panel participants discussed what should be taught in a course on survey sampling and how such a course should be taught. The reason for the panel (and article) was to address many changes that have occurred in the field, such as new areas of research on survey sampling and new technology available to analyze data gathered in large-scale surveys. The paper is organized according to each participant's comments. Some highlights include Lohr's Top Ten List of Mistakes in Sampling, Scheuren's Fishbone Diagram of an Introductory Sampling Course, and Fecso's Survey Pyramid for Building a Healthy Survey.


"Using Graphics and Simulation to Teach Statistical Concepts"

by Mervyn G. Marasinghe, William Q. Meeker, Dianne Cook, and Tae-sung Shin (1996). The American Statistician, 50(4), 342-351.

This paper describes some activities involving simulation and dynamic graphics that were developed to help students better understand abstract statistical concepts. Modules were designed for several topics (e.g., sampling distributions, confidence intervals, and central limit theorem), and include a three- to six-page lesson/activity, as well as a software component. These modules also provide exercises and questions for students to answer and suggestions for the course instructor. This article provides examples of a few modules, including screens demonstrating the use of graphics and simulation. All materials are available on the World Wide Web (the address is listed at the end of the article).


"Multimedia for Teaching Statistics: Promises and Pitfalls"

by Paul F. Velleman and David S. Moore (1996). The American Statistician, 50(3), 217-225.

Multimedia materials are now becoming available for statistics educators to use in their courses. These materials combine text, sound, animation, graphics, and sometimes video and computing software. The authors describe many advantages that multimedia technology offers teachers and students, but also outline some potential pitfalls. They suggest some principles to guide the use and/or design of multimedia, and encourage us to keep in mind the importance of the human factor in helping students learn statistics.


"A Problem-Solving Approach to Teaching Business Statistics"

by Steven C. Hillmer (1996). The American Statistician, 50(3), 249-256.

In an attempt to make business statistics more relevant to future managers, a new type of course was designed and taught, based on problem solving. A five-step problem-solving framework is described which was used to structure the course. A course outline is also provided.


"Easy Implementation of Writing in Introductory Statistics Courses"

by Arnold J. Stromberg and Subathra Ramanathan (1996). The American Statistician, 50(2), 159-163.

There are very few articles published on the use of writing in statistics courses, and this paper provides an important contribution to this area. The authors describe their efforts to include writing in their courses and suggest techniques that worked for them as well as for their students. Building on their university's Writing Across the Curriculum Program, they adopted several strategies targeted at different reasons for poor writing. A helpful set of Guidelines for Constructing Writing Assignments is included, as well as instructions for a writing project and a peer evaluation of the project.


The Assessment Challenge in Statistics Education

eds. Iddo Gal and Joan Garfield (1997). The Netherlands: International Statistical Institute and IOS Press.

This is a collection of 19 papers, many written collaboratively, by over 30 statistics educators from six different countries. The chapters provide details on many different assessment approaches (e.g., concept maps, portfolios, projects, multiple-choice tests), focusing on primary through graduate educational levels. The book is organized into four main sections. The first section outlines curricular goals and general assessment frameworks and issues. The second section includes chapters that deal with assessing conceptual understanding of statistical ideas, and the third (and largest) section contains papers that each present an innovative model for classroom assessment. The last section focuses on unique issues related to assessing the understanding of probability.


Six articles on teaching statistics appear in a recent volume of Communications in Statistics: Theory and Methods (1996, 25(11)). This issue of the journal contains the proceedings from the Institute of Mathematical Statistics Central Regional Meeting: A Meeting in Honor of Robert V. Hogg.

"Independent Student Projects in Undergraduate Engineering Statistics and Quality Control Courses"

by Stephen Vardeman, pp. 2633-2646.

This paper provides a rationale for using projects in engineering statistics and quality control courses, and provides suggested topics as well as detailed guidelines given to students.

"Using Student Projects in an Introductory Course for Liberal Arts Students"

by Thomas L. Moore, pp. 2647-2661.

The author has used projects in so many of his statistics courses that he has compiled a list of interesting findings, all involving data on student characteristics or other aspects of his college. Moore describes why he finds projects to be important learning activities for students and provides the "nuts and bolts" of assigning and monitoring student projects.

"Elementary Statistics Laboratory"

by John Spurrier, Don Edwards, and Lori Thombs, pp. 2663-2673.

A statistics laboratory, based on the science laboratory model, was developed and taught by these authors. (They have also published a book with details on the individual lab activities). This paper describes the goals for the lab, some lab activities, and details on the lab facilities and equipment. A list is provided of the statistical concepts embedded in each lab activity.

"A Future for Statistics Education -- We WILL Do More with Less"

by Robert B. Miller, pp. 2839-2851.

Recognizing the increased demand for statistics courses and the decreased funding available to teach them, the author suggests ways to do more (e.g., teach more students, and teach them well) with smaller budgets. He encourages statistics educators to apply their research and evaluation skills to examine the effectiveness of different teaching strategies under different resource constraints.

"Teaching a Chance Course"

by J. Laurie Snell, pp. 2853-2862.

This paper describes the NSF-funded Chance project: its goals, what has been accomplished, and what resources it has made available to statistics educators.

"Assessing Student Learning in the Context of Evaluating a Chance Course"

by Joan Garfield, pp. 2863-2873.

Different approaches to evaluating the impact of the Chance course on students are described, including the development of instruments used to assess students' attitudes and beliefs about statistics, their reasoning about chance events, and their ability to read and critique articles in the news.


Teaching Statistics


A regular component of the Teaching Bits Department is a list of articles from Teaching Statistics, an international journal based in England. Brief summaries of the articles are included. In addition to these articles, Teaching Statistics features several regular departments that may be of interest, including Computing Corner, Curriculum Matters, Data Bank, Historical Perspective, Practical Activities, Problem Page, Project Parade, Research Report, Book Reviews, and News and Notes.

The Circulation Manager of Teaching Statistics is Peter Holmes, ph@maths.nott.ac.uk, RSS Centre for Statistical Education, University of Nottingham, Nottingham NG7 2RD, England.


Teaching Statistics, Spring 1997
Volume 19, Number 1

"Understanding Conditional Probability" by Stephen Tomlinson and Robert Quinn

This article offers a new approach to teaching the difficult concept of conditional probability.

"Hey - What's the Big Idea?" by Alan Graham

A "big idea" in statistics can fail to register with students when it is obscured by finer details of lesson tasks or activities. Based on the graphics calculator, this article describes a lesson designed to introduce the Central Limit Theorem.

"Data Handling: An Introduction to Higher Order Processes" by Jane M. Watson and Rosemary A. Callingham

An activity is described which allows students with a range of abilities to become involved in data analysis and informal inference while working in self-selected small group environments.

"Composing Mozart Variations with Dice" by Zsofia Ruttkay

This article describes a musical game devised by W. A. Mozart in which dice are used to select randomly from a number of possible arrangements of each bar in a Viennese minuet.

"Strike It Lucky" by Mike Fletcher

This article analyses the decision-making processes of contestants in a popular TV quiz show.


Topics for Discussion from Current Newspapers and Journals


"Wild Cards in Poker Make the Game Less Challenging by Far"

Interview by Richard Harris. National Public Radio: All Things Considered, 29 November 1996.

Gadbois, in a paper in Mathematics Magazine (October 1996, Vol. 69, No. 4, 283-285) and Emert, in a joint paper with Dale Umbach in last summer's issue of Chance (1996, Vol. 9, No. 3, 17-22) report their independent discovery of a curious result: When you play poker with wild cards, it is not possible to rank the hands to ensure that the less likely hands are more valuable! (The results were also summarized by Ivars Peterson in Math Horizons, November 1996, p. 6). This should be of interest to players who enjoy wild-card games, as well as to probability/statistics teachers who like to use the example of counting poker hands.

To illustrate, suppose you are playing five-card draw poker with two jokers added to the deck as wild cards. A player holding a true pair and one joker will naturally declare this to be three-of-a-kind. However, this causes a problem, because there are now more ways to get three-of-a-kind than two-pair. Moreover, the situation cannot be remedied by decreeing that two-pair shall henceforth beat three-of-a-kind. A player holding the hand just described can now choose to pair the joker with one of the singletons in his hand, and thus declare the hand as two-pair. So the higher-ranked hand (now two-pair) once again becomes the more likely one. There are a number of analogous ranking dilemmas. In fact, with wild cards it is harder to get a "bust" hand than a pair, so perhaps a bust should beat a pair. How many players would go along with this?
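
The counting argument is easy to check by simulation. Below is a rough sketch in Python (our own illustration, not code from either paper); it deliberately ignores straights, flushes, and higher hands, and asks only whether a dealt hand can be declared two-pair or three-of-a-kind when the jokers are treated as wild:

    import random
    from collections import Counter

    # A 54-card deck: four copies of each of 13 ranks plus two jokers.
    # Suits are ignored, since only pair-type declarations are examined.
    DECK = list(range(13)) * 4 + ["JOKER", "JOKER"]

    def can_declare(hand):
        """Return (trips_ok, two_pair_ok), treating jokers as fully wild."""
        wilds = hand.count("JOKER")
        counts = sorted(Counter(c for c in hand if c != "JOKER").values(),
                        reverse=True)
        trips_ok = counts[0] + wilds >= 3
        two_pair_ok = (len(counts) >= 2 and
                       (counts[1] >= 2 or                   # two natural pairs
                        (counts[0] >= 2 and wilds >= 1) or  # pair plus a wild on a singleton
                        wilds >= 2))                        # wilds on two different singletons
        return trips_ok, two_pair_ok

    def tally(trials, trips_beat_two_pair):
        """Count declarations when each player declares the higher-ranked option."""
        trips = two_pair = 0
        for _ in range(trials):
            t, p = can_declare(random.sample(DECK, 5))
            if t and p:   # the hand can be read either way: claim the better one
                t, p = (True, False) if trips_beat_two_pair else (False, True)
            trips += t
            two_pair += p
        return trips, two_pair

    random.seed(1)
    print("standard ranking (trips beat two pair):", tally(100_000, True))
    print("after the decree (two pair beats trips):", tally(100_000, False))

Under the standard ranking the simulation declares three-of-a-kind far more often than two-pair, and after the decree the imbalance simply reverses, which is exactly the dilemma described above.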

It turns out that such problems were already anticipated by the famous card-playing expert John Scarne in his 1949 classic "Scarne on Cards." There he proposed -- and then rejected -- some ways to patch up the game with one wild card. Emert and Umbach considered a more sophisticated fix based on the definition of an "inclusion frequency" for each type of hand, which acknowledges the possibility that a given five-card hand can be declared as different types. The inclusion frequency is the number of different five-card hands that can be declared as this type. They compare inclusion frequencies for one-joker, two-joker, and deuces-wild poker. One interesting finding: a flush beats a full house in each of these situations, and with deuces wild it also beats four-of-a-kind!

You can listen to the NPR interview at

http://www.realaudio.com/rafiles/npr/password/nc6n2901-8.ram

by downloading the free RealAudio Netscape plug-in.


The following two articles are on the Consumer Price Index. There are excellent introductions to the CPI at the level of introductory statistics classes in David Moore's Statistics: Concepts and Controversies (Freeman, 4th edition just out) and in Jessica Utts' recently published Seeing Through Statistics (Duxbury, 1996).

"A Single Number Puts the Economy in a New Light"

by Steven Pearlstein. Washington Post, 11 December 1996, A1.

In early winter 1995, Federal Reserve Board Chairman Alan Greenspan testified before the Congress that he thought the Consumer Price Index (CPI) substantially overstated the rate of growth in the cost of living. His testimony generated a great deal of discussion, including the following comment by House Speaker Gingrich: "We have a handful of bureaucrats who, all professional economists agree, have made an error in their calculations. If they can't get it right in the next 30 days or so, we zero them out, we transfer the responsibility to either the Federal Reserve or the Treasury and tell them to get it right."

Of course, things aren't quite that simple, and a panel chaired by Stanford economist Michael J. Boskin was set up to study the accuracy of the CPI. The panel's final report, "Toward a More Accurate Measure of the Cost of Living," was released at the end of last year. The panel concluded that, in its present form, the CPI is not a true cost-of-living index (a fact long recognized by the Bureau of Labor Statistics, which produces it). They pointed to a number of biases that occur when it is used as such a measure, and estimated that these have caused the CPI to overstate true cost-of-living increases by about 1.1 percentage points per year. If the CPI is not modified, this bias is expected to persist.

Pearlstein points out that if the estimate of inflation has been about 1% too high for the last 20 years, then the apparent stagnation in the economy since 1973, as measured by Gross Domestic Product, is magically transformed into an economy that has grown at a respectable average rate of 3 to 4 percent per year. Projecting this into the future, the Treasury will be collecting more in tax revenue, Social Security payments will increase at a slower rate, and the federal budget deficit over the next decade will look only about half as big as it does at the moment.

Some economists have observed that this line of reasoning leads to seemingly nonsensical projections. For example, it would imply that, in today's dollars, the typical family in 1960 was making about $16,000, putting them just above the poverty line. And by the year 2030 a typical family will be making about $90,000, again measured in today's dollars. On the other hand, the conclusion from the current CPI that inflation-adjusted wages are declining does not make sense either, since surveys show Americans are spending a larger proportion of their money on luxury items and increasingly expensive medical procedures.
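
To see why a bias of roughly one percentage point a year matters so much, here is a small compounding calculation (an illustration only, not the Boskin panel's or Pearlstein's own arithmetic):

    # If measured inflation exceeds true inflation by about 1.1 points a year,
    # any inflation-adjusted series is distorted by a factor that compounds.
    bias = 0.011
    for years in (10, 20, 36, 70):
        print(f"over {years:2d} years the cumulative distortion factor is {(1 + bias) ** years:.2f}")

Over 20 years the factor is about 1.24, and over the 70 years separating 1960 from 2030 it exceeds 2, which is why a seemingly small correction to the CPI paints such a different picture of long-run income growth.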

Lawrence Katz, an economist at Harvard University and until recently the Labor Department's top economist, remarks that there is little doubt that the CPI, which tracks the price of a relatively fixed basket of consumer goods, gives too high an estimate of inflation. This is because it ignores improvements in the quality of goods, the introduction of new goods, and the substitution of one good in the basket for another not in the basket. However, he feels that it is "logically impossible" that this bias has been as high as 1.1% over a long time period.

"Is the CPI Accurate? Ask the Federal Sleuths Who Get the Numbers"

by Christina Duff. The Wall Street Journal, 16 January 1997, A1.

Echoing Katz' comments above, this article poses the following key question: Does the CPI properly account for the times when consumers substitute less expensive goods (e.g., chicken for beef) when prices rise? Or when they buy a computer that is similarly priced yet much more powerful than what was available a few years before?

About 300 Bureau of Labor Statistics (BLS) employees are responsible for gathering the data that go into the monthly CPI estimates. The article chronicles some of the challenges faced by several of these workers. One of them, Sabina Bloom, travels 900 miles each month to visit some 150 sites, where she collects data on price changes. The sites are selected through BLS surveys indicating popular stores and categories of purchases. One such category might be "women's tops." Mrs. Bloom's job is then to interview a storekeeper to identify an item, size, and style (short- or long-sleeved, tank top or turtleneck, etc.) for the comparison.

Measuring discounts accurately is no easy matter. It is not uncommon to find sales of the form "save 45%-60% when you take an additional 30% off permanently reduced merchandise -- discounts taken at register." When an exact item cannot be found, price-takers must use their judgment to find a substitute. In many cases, the price-takers must rely on the memory of local experts. For example, while the price of a bacon, lettuce and tomato sandwich in a certain restaurant may not have changed, the number of strips of bacon may have been reduced to compensate for a rise in the price of bacon.

An additional problem with the CPI is that the master list of categories to be priced is updated only once every ten years. This means that such items as cellular phones, for example, are too new to be included. Even seemingly "standard" items like television sets are problematic. How much of an observed price increase is due to inflation, and how much is due to quality improvements, such as adding stereo sound, cable capability or power efficiencies?


"Judge Rules Breast Implant Evidence Invalid"

by Gina Kolata. The New York Times, 19 December 1996, A1.

In 1993, the Supreme Court considered how to determine what scientific evidence should be allowed in the courts. Rather than giving specific criteria, the court ruled that Federal Judges should use their judgment, based on the ways that scientific theories are evaluated, to determine the admissibility of scientific evidence. Federal District Court Judge Robert E. Jones, who has been overseeing breast implant cases in Oregon, recently put this ruling to the test.

Jones assembled a panel of "disinterested scientists," asking them to survey the scientific evidence that had been submitted by the plaintiffs. He then held a four-day meeting in which 12 experts for the plaintiffs and the defense were questioned by lawyers for both sides, the court, and the panel of scientists. The panel then submitted its assessment of the plaintiffs' scientific evidence. Acting on this assessment, Judge Jones ruled that the evidence was not of sufficient scientific validity to be presented to the court, and he dismissed 70 pending cases.

Jones' ruling is now being appealed. If upheld, it will be a serious blow to the thousands of breast implant cases across the country awaiting trial. In previous cases, women have won awards as high as $25 million by claiming that silicone leaking from the implant devices caused diseases ranging from classic autoimmune disorders to a new (allegedly silicone-induced) disease with symptoms of fatigue, headaches, and muscle aches and pains. One major manufacturer of implants, Dow Corning Corporation, was forced into bankruptcy by the settlements.

The story of how scientific evidence was ignored in these previous implant cases is told in a book described by the New York Times ("Notable Books of the Year 1996," 8 December 1996, Section 7, p. 1.):

Science on Trial: The Clash of Medical Evidence and the Law in the Breast Implant Case. By Marcia Angell. (Norton, $27.50.) An accessible, passionate indictment of the ignorance, opportunism and social indifference that enriched lawyers and a few plaintiffs, though the available scientific evidence was against them.

"The Age of Unreason: Welcome to the Factual Free-for-All"

by Kurt Andersen. The New Yorker, 3 February 1997, p. 40.

Andersen's essay laments the decline in the standards of evidence in journalism and popular discourse. There is no longer consensus about facts, he says, nor is there faith that truth will emerge from them. He cites a number of examples from major news stories of the past year. We don't know who was responsible for the epidemic of arson attacks on black churches, nor is there agreement that there even was an epidemic. We still don't know who was responsible for the pipe bomb at last summer's Olympics, nor what caused the explosion of TWA Flight 800. In the absence of hard evidence, we are presented with disputes over how the stories were reported and who believes them.

Andersen worries that if a story is scary enough, it will often be picked up by the media with its premises unchecked. His prime example is the panic over missing children. In the years following the 1979-81 murders of 23 children in Atlanta, the media regularly published stories about the numbers of children abducted each year by strangers. Estimates ran from 20,000 to 50,000 to more than 100,000. Andersen says that in 1984, as a reporter for Time magazine, he decided to do some checking. Noting that the low end estimate would correspond to 12 abductions per week in New York City alone, he called a half dozen urban police departments around the country ("more or less at random") and asked how many abduction cases they had had in the last year. Zero, one, or two were typical answers. From this, Andersen figures the national number was probably only in the hundreds. It now turns out, according to the current Washington Monthly, that the 50,000 figure was invented by the father of Adam Walsh during interviews in the weeks following Adam's 1981 abduction and murder in Florida.
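
Andersen's spot check is simple arithmetic. Here is a sketch of the calculation (the population figures are rough mid-1980s values we have assumed; the article reports only the final "12 per week"):

    # Scale the low-end national estimate down to New York City by population share.
    us_pop = 236_000_000        # assumed mid-1980s US population
    nyc_pop = 7_200_000         # assumed mid-1980s New York City population
    national_estimate = 20_000  # the low-end figure for abductions per year
    nyc_per_year = national_estimate * nyc_pop / us_pop
    print(f"NYC share: about {nyc_per_year:.0f} per year, or {nyc_per_year / 52:.0f} per week")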

In the second half of the article, Andersen explains how the World Wide Web, where everyone with a home-page becomes a publisher, has become a prime medium for the dissemination of pseudo-facts. He cites examples of postings, on sites he finds as professional-looking as those of the prominent news media, which propound theories that HIV is a by-product of a US biowarfare program or that Flight 800 was downed by a rift in the space-time continuum. This is certainly food for thought for those of us actively using the Web in our courses. How should we teach our students to separate the serious reporting from the fantastic quantities of junk?


"Women's Grim Question: Why?"

and

"A Sense of Menace"

by Richard Saltus. The Boston Globe, 5 January 1997, A1, and 6 January 1997, A1.

This is a two-part in-depth report on the latest breast cancer statistics. Much of the news is grim. In 1997, about 186,000 American women will be diagnosed with breast cancer. A woman's lifetime risk of getting the disease is 1/8 (12.5%) if she lives past 85. Breast cancer is the leading cause of death for women between the ages of 40 and 55.

"Breast cancer is the biggest unsolved health problem in the United States," says Walter Willett of the Harvard School of Public Health, a leading epidemiologist who has spent years studying the risk factors for breast cancer. There has been some recent progress in genetic research. One in ten women who get the disease have inherited mutations of the BRCA1 or BRCA2 gene, which apparently leads to cancer. But, for the other nine out of ten victims, crucial genes are somehow being damaged during the victim's lifetime. Some researchers suspect that chemicals with estrogen-like effects that are found in DDT, pesticides, plastics, and other compounds may be responsible. Others believe that an increase in a woman's lifetime exposure to her own reproductive hormones, such as estrogen, can increase her risk of cancer. Early menarche and delaying childbirth increase the body's exposure to estrogen. Estrogen stimulates the breast cells to divide and proliferate and with every additional cell division comes a chance for errors to creep into the genetic code.

The article presents a number of interesting data graphics. One shows a twenty-year trend of increasing breast cancer incidence (cases per 100,000), even as the death rate (per 100,000) remains relatively constant. On a related plot for all types of cancer, it is interesting to note that in the mid-1980s the lung cancer death rate surpassed the breast cancer death rate. While age is known to be an important risk factor for breast cancer, another graphic tries to sort out whether breast cancer risk is rising for younger women.


"A Professor Divides his Class in Two to Test Value of On-line Instruction"

by Kelly McCollum. The Chronicle of Higher Education, 21 February 1997, A23.

Jerald Schutte, a sociology professor at California State University at Northridge, has run an experiment to assess the value of on-line instruction. He randomly divided his statistics class into two groups. One half took the course in a traditional classroom setting. The other half completed a web-based course, in which problems were assigned by e-mail and students collaborated in small groups, consulting with the professor only through on-line "chat rooms." These virtual students went to the classroom only for the midterm and final exams, on which they outperformed the traditional students by an average of 20%.

One of the virtual students commented that she appreciated not having to feel intimidated by other students in the classroom. Schutte concurs, noting that it is easier to ask questions in the relative anonymity of the chat rooms. Another student, however, commented that the workload was daunting. Indeed, Schutte expressed surprise that none of the virtual students had dropped the course in the face of the increased load.


"Vaccine is Blamed in 125 Polio Cases"

by Tim Friend. USA Today, 31 January 1997, A1.

This note will be of interest to those of us who use the Salk Vaccine Trial as a prime example of experimental design. According to the Centers for Disease Control in Atlanta, nearly all US cases of polio since 1980 were due to vaccinations. Of 133 confirmed cases from 1980-1994, 125 were associated with administration of the oral vaccine. A panel convened last fall concluded that a rate of 7-8 cases a year was unacceptable and recommended a new immunization regimen.

Since 1980, children have received three doses of oral vaccine by age 2. The new recommendation calls for two injections by four months of age, followed by two oral doses of weakened virus between the ages of 1 and 6. The article says that "the change in policy should eliminate the risk of vaccine induced polio." However, it will increase the annual cost of immunization by $14.7 million.


"Coping with Public Perception"

by Sylvia Adcock. Newsday, 4 February 1997, B22.

Air travel has been growing at nearly 6% a year, and some experts expect that the number of flights worldwide will double or triple over the next twenty years. The accident rate, measured in plane crashes per million flights, has been stable for the last ten years. Even if this rate stays unchanged -- in other words, if current safety levels are maintained -- we may in the future be facing one plane crash a week. Would the public tolerate such a figure? The article suggests that under this scenario it would be difficult to convince anyone that air travel is the safest mode of transportation.
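
The "double or triple" projection is just compound growth at work, as a one-line check shows (an illustration, not the article's own calculation):

    # Traffic growing at roughly 6% a year doubles in about 12 years
    # and roughly triples over the article's 20-year horizon.
    for years in (12, 20):
        print(f"after {years} years: {1.06 ** years:.1f} times today's traffic")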

The aircraft manufacturer Boeing predicts that there will be a "major hull loss" accident (a crash that effectively "totals" an airplane) every seven to ten days as soon as 2005. Boeing's vice president for safety worries that this could erode public confidence in air travel to the point that industry growth might stop.

Arnold Barnett of MIT points out that the right reason to work on cutting down the accident rate is because accidents are inherently horrible, not because of public perception. He adds that most of the increased number of crashes will be occurring outside the US, where they will receive little attention from the US media, and thus may not panic the US public.

Note: As of late February, the FAA had begun making airline safety information available on its website:

http://nasdac.faa.gov/internet/

However, there will be no overall ranking of airlines. Interested individuals will have to sort through the data on their own. The welcome page mentions that the site has been experiencing "extraordinarily high ... volume." It will be interesting to see what effect this has on public perception.


"Daily Millions Beats Odds: No One Wins -- 5-Month Losing Streak Puzzles Even Statisticians"

by Pat Doyle. Star Tribune, 7 February 1997, 1B.

The Daily Millions lottery was started nearly five months ago. At the time of this article, 34 million tickets had been sold without a single $1 million jackpot being won. The article states that one would expect 3 or 4 winners by now, and it puts the probability of having no winners in this period at 1/38.

The Daily Millions lottery is run by the Multi-State Lottery Association which also runs the Powerball and Tri-West Lotto lotteries. Here is their description of the Daily Millions.

Every night we draw six balls out of three different drums. One drum contains red balls, the second drum contains white balls and the third contains blue balls. Two balls are drawn from each drum. The balls in each drum range from number 01 to 21. Players win by matching 2, 3, 4, 5, or 6 of the numbers drawn. A match occurs when you have the correct color and number for a given ball. The Grand Prize (won by matching all six balls drawn) is paid in cash. Match 5 pays $5,000; Match 4 pays $100; Match 3 pays $5 and Match 2 pays $2.

The jackpot is $1 million. Unlike other lotteries, winners do not have to share the prize with others having the same winning numbers. The one exception is when there are more than 10 winners for the jackpot. In this case the winners share a $10 million prize.

Match      Win            Probability of win

  6        $1 million     1/9,261,000
  5        $5,000         1/81,236.8
  4        $100           1/12,498
  3        $5             1/98.6682
  2        $2             1/11.1781

The Daily Millions was invented to give a lottery where you do not have to share the jackpot with other winners. You also are about six times more likely to win the Jackpot in the Daily Millions lottery than in the Powerball lottery. These two factors were expected to boost lottery sales. However, the article reports that the fact that no one was hitting the jackpot was putting a damper on Daily Millions sales, which slumped from $3.75 million in its first week to $1.23 million in the week ending February 1.
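
The article's figures are easy to reconstruct from the game description above. A short calculation (a sketch; the 34 million figure is the article's rounded ticket count):

    from math import comb

    # Two balls are drawn, without regard to order, from 21 in each of three drums.
    per_drum = comb(21, 2)            # 210 possible pairs per drum
    jackpot_odds = per_drum ** 3      # 210^3 = 9,261,000, matching the table above
    p = 1 / jackpot_odds

    tickets = 34_000_000              # tickets sold so far, per the article
    expected_winners = tickets * p    # about 3.7 expected jackpot winners
    p_no_winner = (1 - p) ** tickets  # chance that none of them hit

    print(f"jackpot odds: 1 in {jackpot_odds:,}")
    print(f"expected winners so far: {expected_winners:.1f}")
    print(f"P(no winner yet) = {p_no_winner:.4f} (about 1 in {1 / p_no_winner:.0f})")

This works out to roughly a 1-in-39 chance of a five-month shutout; the 1/38 quoted in the article presumably reflects the exact ticket count rather than the rounded 34 million.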

Epilog: On Saturday, February 8 -- the day after the article ran -- Daily Millions had its first jackpot winner!

More information about this lottery can be found from the Multi-State Lottery Association web page, where the organizers give the following table for the total number of winners and the payouts since the lottery started up. The data below cover the period through March 2. Note that there have now been two jackpot winners.

Levels               Winners               Payout

Match 6                  2                $2,000,000
Match 5                490                $2,450,000
Match 4             20,553                $2,055,300
Match 3            396,459                $1,982,295
Match 2          3,505,653                $7,011,306

TOTAL                                    $15,498,901

"Some Systematic Biases of Everyday Judgment"

by Thomas Gilovich. Skeptical Inquirer, March/April 1997, pp. 31-35.

The article describes a number of tests that psychologists have designed to study fallacies in everyday human judgments.

The first test concerns the "Compared to What?" problem. Many statistics are meaningful only when compared against a control group or baseline. But according to Gilovich: "The logic and necessity of control groups ... is often lost on a large segment of even the educated population."

An example appeared in Discover magazine in 1986, where it was stated that 90% of airplane-wreck survivors had formed a mental plan of escape before the wreck occurred. The magazine recommended that airline passengers be sure to know where all the exits and emergency exits are located and form an escape route. But the survey never asked how many of those who did not survive had also planned escape routes, so it is impossible to tell whether forming an escape route really increases one's chances of surviving a plane wreck. Another figure holds that 30% of all infertile couples who adopt a child eventually conceive a child or children of their own. Without knowing how often infertile couples who do not adopt eventually conceive, however, the figure says nothing about whether adopting makes conception more likely. Similarly, if a patient's cancer goes into remission after mental imagery, would the cancer have gone into remission without the mental imagery, or did the imagery truly have an effect?

A second bias is called the "Seek and Ye Shall Find" problem. When people test a hypothesis or theory, they often look more closely at the results which prove them correct. A test for this problem reads as follows:

"Imagine that you serve on a jury of an only-child sole-custody case following a relatively messy divorce. The facts of the case are complicated by ambiguous economic, social, and emotional considerations, and you decide to base your decision on the following few observations. To which parent would you award sole custody of the child?

A: Average income, average health, average working hours, reasonable rapport with the child, relatively stable social life.
B: Above-average income, minor health problems, lots of work-related travel, very close relationship with the child, extremely active social life."

Most respondents chose parent B. However, when the question was reworded as "To which parent would you deny custody of the child?," most also answered B. Parent B has both clear advantages and clear disadvantages. When the first version was asked, people looked for advantages (such as the close relationship with the child and the above-average income) and found them in parent B. When the second version was asked, people looked for disadvantages (such as the minor health problems and the work-related travel) and again found them in parent B. Apparently, people set out to confirm whatever outcome the question suggested. As in this situation, many people simply try to confirm their hypotheses and look only at the positive evidence.

The third problem is the "Selective Memory Problem." People tend to remember events that agree with what they expected to happen. One test of this phenomenon presented a group of college students with a "diary" allegedly written by another student who said she was interested in the prophetic nature of dreams. In the diary, she recorded each night's dreams and kept a record of significant events in her life. For half of the dreams, there were corresponding events that could be interpreted as that dream coming true. For example, a dream in which she saw "lots of people being happy" was later followed by a professor cancelling a final exam, "which produced cheers through the class." After reading the diary, the participants were able to recall more of the dreams that were somehow fulfilled than those that were not.

