William P. Peterson
Middlebury College
Journal of Statistics Education Volume 9, Number 1 (2001)
Copyright © 2001 by the American Statistical Association, all rights reserved.
This text may be freely shared among individuals, but it may not be republished in any medium without express
written consent.
by David Leonhardt, The New York Times, November 12, 2000, "Week in Review," p. 3.
by George Johnson, The New York Times, November 19, 2000, Sect. 4, p. 3.
by John Allen Paulos, The New York Times, November 22, 2000, p. A27.
These three articles comment on how close the presidential election was. They all echo the theme that the final difference was in some sense within the "margin of error" of our electoral process.
The first draws an analogy with Olympic sports competitions. In Sydney last summer, American swimmers Anthony Ervin and Gary Hall Jr. shared the gold medal in the 50-meter freestyle when each was timed at 21.98 seconds. Splitting times any finer was declared to be unfair after the 1972 Games, where a 400-meter race was decided by mere thousandths of a second. The article declares that "for all meaningful statistical purposes, the Florida vote was a tie." A difference of 300 votes out of six million is 1 in 20,000, or one-tenth the size of the smallest timing difference that could have separated swimmers Ervin and Hall. The article concludes with an (unexplained) estimate of the chance that another race this close will occur in the next century: "the probability is just a handful out of a million." For comparison, the article notes that this is considerably smaller than the 1-in-1000 chance scientists recently gave for an asteroid colliding with the Earth in 2071. (For more on the asteroid, see the Washington Post reports later in this column.)
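To make the comparison concrete, here is a quick back-of-the-envelope check in Python (a sketch: the six-million count is the article's round figure for the Florida vote, and the hundredths-of-a-second resolution is an assumption reflecting how Olympic swimming has been timed since the 1972 controversy):

    # Relative margin in Florida: 300 votes out of roughly six million cast.
    vote_margin = 300 / 6_000_000          # 1/20,000, or 0.005%

    # Swimming times are reported to hundredths of a second, so the smallest
    # resolvable gap in a 21.98-second race is about 0.01 / 21.98.
    swim_resolution = 0.01 / 21.98         # roughly 1/2,200

    print(vote_margin / swim_resolution)   # about 0.11 -- one-tenth as large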
The second article compares the election with opinion polling, where we are accustomed to seeing margin of error statements. When the margin of error is larger than the difference measured in a poll, the difference could be attributed to chance variation. The article therefore suggests that the winner of the presidential election was effectively chosen at random. Sources of "random" variation in the election included confusing ballots, machine errors in reading the punch cards, and legal decisions on which recounts to accept. Confusion arising from the "butterfly" ballot was widely publicized. Industry spokesmen stated that the accuracy of the card readers ranged from 99% to 99.9%. This sounds impressive, but in absolute terms it potentially represents tens of thousands of errors statewide, far more than the margin of victory. One downside of the Electoral College, according to the article, is that it "magnifies" such chance errors. For these reasons, the election might ultimately be a less accurate reflection of the will of the people than a statistically well-designed poll would be. "If we trusted statistics over counting," writes the author, "we could dispense with elections and just go with the polls."
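The scale of the machine-error problem is easy to check; a minimal sketch, again taking the article's round figure of six million ballots cast in Florida:

    ballots = 6_000_000
    for accuracy in (0.99, 0.999):     # the range quoted for the card readers
        errors = ballots * (1 - accuracy)
        print(f"accuracy {accuracy:.1%}: roughly {errors:,.0f} misread ballots")
    # accuracy 99.0%: roughly 60,000 misread ballots
    # accuracy 99.9%: roughly 6,000 misread ballots

Either figure dwarfs the 300-vote margin of victory.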
In the last article, John Allen Paulos (well known as the author of Innumeracy) also argues that the Florida election results amount to a tie. The difference in the certified tally is smaller than the errors obviously present: the tens of thousands of ballots disqualified for double voting, the anomalous total for Buchanan in Palm Beach, the disputed absentee ballots, and so on. The title of the article comes from Paulos' analogy that "measuring the relatively tiny gap in votes between the two candidates is a bit like measuring the lengths of two bacteria with a yardstick. The Florida electoral system, in particular, is incapable of making such fine measurements." Paulos concludes that we might as well toss a coin to declare a winner.
The New York Times on the Web maintains a site on the 2000 Election at
http://www.nytimes.com/pages/politics/elections/index.html
There you can find an interactive guide to the Florida vote, including data maps by county and a timeline of the Bush margin through the various recounts.
By now, there have been many detailed statistical analyses of the election results. In our last edition of "Teaching Bits," we provided links to some of them. Here is the URL of another source, which includes brief annotations describing the statistical methods used.
http://www.bestbookmarks.com/election/#links
Or, if you would like to try your own analysis, you can download the election figures from the Florida Department of State:
http://enight.dos.state.fl.us/Report.asp?Date=001107
The Washington Post, November 7, 2000, p. A9.
On September 29, astronomers using the Canada-France-Hawaii telescope in Hawaii discovered an object in space moving on a "near-earth orbit." According to researchers at NASA's Jet Propulsion Laboratory in California, the object, named 2000 SG344, might be a small asteroid or part of a discarded rocket. They estimated that the object's trajectory would bring it within 3.8 million miles of Earth in the year 2030. Given uncertainties in the exact orbit, they calculated a 1-in-500 chance that the object would actually hit the Earth.
The first article quotes Donald Yeomans, the manager of NASA's Near-Earth Object program, as saying that 2000 SG344 had the best chance of hitting Earth of any object detected to date. He added that "if future observations show the impact probability is increasing rather than decreasing as we expect, then we'll have to make some decisions as to whether we should mount some mitigating campaign." As it turned out, the second article reported that the estimated probability of a collision in 2030 had been rather drastically adjusted downward--to zero! Additional observations of the orbit showed that the object would come no closer than 2.7 million miles to the Earth in 2030, so there is no chance of a collision then. However, there is now an estimated 1-in-1000 chance of a collision in 2071.
by Linda Kulman, US News & World Report, November 13, 2000, pp. 68-72.
This article summarizes a number of cases where the news media have presented conflicting dietary advice based on the results of epidemiological research. Prominent examples include coffee, eggs, butter/margarine, salt, and fiber. Part of the problem is that the public doesn't understand the process by which hypotheses are tested and revised before a scientific consensus is reached. According to Harvard epidemiologist Walter Willett, "If things didn't shift, it would be bad science." David Murray of the Statistical Assessment Service (STATS) sees a conflict between the timetable of careful research and the deadlines faced by news reporters. He says, "While science is contingent and unfinished, journalists want something definitive. They impose closure by the deadline. Out of that, the public thinks they are always changing direction." Nevertheless, says science writer Gina Kolata of the New York Times, the blame does not rest solely with reporters. She points out that scientists are themselves quite enthusiastic about their own findings: "They say, 'I myself am taking folic acid.' I used to feed off their enthusiasm. But when you see one [study] after another fall, I've become much more of a skeptic." Another issue is the source of funding for the studies. The article cites research, partially funded by the Chocolate Manufacturers Association, which found that certain compounds in chocolate may be good for the arteries.
The author of the article lists a number of questions that critical readers should consider when evaluating a health study.
For explanation of the last item, the article quotes Dr. Marcia Angell, the former editor of the New England Journal of Medicine: "The breakthroughs are in the first paragraph and the caveats are in the fifth."
by Thomas H. Maugh II, The Los Angeles Times, December 20, 2000, p. A1.
A cellular phone places a source of radio waves against the user's head, and over the years there has been public speculation that this might increase the risk of brain cancer. Two recent case-control studies have found no such risk.
The first of these studies was published in the December 20 issue of the Journal of the American Medical Association (Joshua E. Muscat et al., Handheld cellular telephone use and risk of brain cancer. JAMA, 20 Dec. 2000, Vol. 284, No. 23, pp. 3001-3007). The study involved 469 men and women, ages 18 to 80, who were diagnosed with brain cancer between 1994 and 1998. They were compared with 422 people who did not have brain cancer but were matched to the cancer patients by age, sex, race, years of education, and occupation. The second, similar, study appeared in the New England Journal of Medicine (Peter D. Inskip et al., Cellular-telephone use and brain tumors. NEJM, 11 Jan. 2001, Vol. 344, No. 2, pp. 79-86). This study identified 782 patients who were diagnosed with brain tumors between 1994 and 1998 and compared them with 799 people who were admitted to the same hospitals for conditions other than cancer. Neither study found an increased risk of brain cancer among those who used cell phones over a period of two or three years.
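Because a case-control design fixes the numbers of cases and controls in advance, results from such studies are conventionally reported as odds ratios rather than as direct risk estimates. Here is a minimal sketch of the calculation in Python; the 2x2 counts are hypothetical, invented for illustration, and are not figures from either paper:

    import math

    # Hypothetical 2x2 table (NOT data from the JAMA or NEJM studies):
    # rows are cases/controls, columns are cell-phone users/non-users.
    a, b = 120, 349      # cases:    users, non-users
    c, d = 110, 312      # controls: users, non-users

    odds_ratio = (a * d) / (b * c)

    # Approximate 95% confidence interval on the log scale (Woolf's method).
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se)
    print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")

An odds ratio near 1, with a confidence interval that covers 1, is the pattern behind a "no increased risk" conclusion.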
Researchers cautioned that data on heavy, long-term cell-phone use are not yet available. However, a large study now underway in Europe should provide such evidence. That study will not be concluded until 2003.
by Steven A. Holmes, The New York Times, December 29, 2000, p. A1.
Figures from the 2000 Census have been announced, giving the US population as 281,421,906. This is about 5.6 million more than the estimate of 275,843,000 that the Census Bureau made on October 1. Debate continues as to whether statistical adjustment would give more accurate figures. Republicans argue that the higher-than-expected total shows that efforts to improve traditional counting, including an advertising campaign to encourage compliance, have paid off. Kenneth Blackwell of Ohio, who co-chairs a board that monitors the Census, said: "We may have a situation where the differential undercount is wiped out." But Census Director Kenneth Prewitt was more cautious, commenting that "There is no way I can tell you today that these numbers are accurate. We are going to work these data backwards and forwards to find out how accurate we are, and then we're going to tell you."
Last year the Supreme Court ruled that statistically adjusted data could not be used for apportioning seats in the House of Representatives. Thus the overall impact on the Congress to be seated in 2003 is already known. A total of 12 seats will shift in the reapportionment, with ten states losing seats and eight states gaining (for example, New York will lose two and California will gain one). On the other hand, the article reports that the Court did not rule on whether states could use statistical data when redrawing their own congressional districts. Census officials are expected to announce in February whether they believe that sample survey data from some 300,000 households should be used for this purpose. We can expect further political debate when the announcement comes.
by Atul Gawande, The New Yorker, 8 January 2001, pp. 50-53.
"Coins and Confused Eyewitnesses: Calculating the Probability of Picking the Wrong Guy"
by John Allen Paulos, "Who's Counting," ABCNEWS.com, 1 February 2001. http://www.abcnews.go.com/sections/scitech/WhosCounting/whoscounting.html
These stories highlight the difficulties in the use of police lineups for identifying criminals. The main point is that the ability of eyewitnesses to correctly identify a suspect depends critically on such factors as the makeup of the lineup, whether the suspects are presented sequentially or simultaneously, and the information provided to the witness--in particular, whether the actual culprit is in the lineup. Of particular concern is the false positive rate; that is, the chance that an eyewitness will incorrectly implicate a suspect. The New Yorker article states that "in a study of sixty-three DNA exonerations of wrongfully convicted people, fifty-three involved witnesses making a mistaken identification, and almost invariably they had viewed a lineup in which the actual perpetrator was not present."
The New Yorker article summarizes the findings of Gary Wells, a psychologist at Iowa State University, who has extensively studied the eyewitness problem. His web page
http://psych-server.iastate.edu/faculty/gwells/homepage.htm
includes links to many resources on the subject. There is also an on-line demonstration where you can try to pick a suspect out of a photo lineup after viewing a security camera videotape.
Paulos' piece for ABCNEWS.com illustrates the problem via a thought experiment on coin-flipping. You are presented with a "lineup" of three pennies, and are told that two are fair and one has a 75% chance of coming up heads. You previously observed that one of these pennies came up heads three times in a row. If you identify this one as the "culprit," what is the chance you are right? Using Bayes' theorem, Paulos shows that the answer is 63%. He notes that this is not out of line with actual experience in police lineups, where "the probability of a correct identification...is frequently as low as 60 percent." He adds that "what's worse, innocents in the lineup are picked up to 20 percent or more of the time..."
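Paulos' 63% figure is a direct application of Bayes' theorem and is easy to verify; here is a short sketch in Python (the uniform prior over the three pennies is the natural reading of the setup):

    # Three pennies, each equally likely a priori to be the one that was flipped:
    # two fair (P(heads) = 1/2) and one "culprit" biased toward heads (P = 3/4).
    prior_biased, prior_fair = 1/3, 2/3
    like_biased = 0.75 ** 3      # P(three heads | biased penny) = 0.422
    like_fair = 0.5 ** 3         # P(three heads | fair penny)   = 0.125

    posterior = (prior_biased * like_biased) / (
        prior_biased * like_biased + prior_fair * like_fair)
    print(f"P(biased penny | three heads) = {posterior:.2f}")    # 0.63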