Teaching Bits: A Resource for Teachers of Statistics

Topics for Discussion from Current Newspapers and Journals

William P. Peterson
Middlebury College

Journal of Statistics Education Volume 11, Number 2 (2003), jse.amstat.org/v11n2/peterson.html

Copyright © 2003 by the American Statistical Association, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent.


"Large Trial Finds AIDS Vaccine Fails to Stop Infection."

by Andrew Pollack with Lawrence K. Altman, New York Times, February 24, 2003, p. A 1.

The first large-scale trial of an AIDS vaccine has ended with disappointing results. The vaccine, Aidsvax, was developed by the biotechnology company VaxGen. The company had hoped to have the vaccine approved for use sometime in 2004.

The experiment involved 5400 participants, of whom 5100 were sexually active gay men and 300 were women judged to be at high risk of infection. About 2/3 were assigned to the treatment group and received seven vaccinations over a three-year period. The control group received placebo injections. Overall, 5.7% of the treatment group became infected during the study period, compared with 5.8% of the control group. This difference is not statistically significant. A controversy ensued, however, when VaxGen reported that it had found significant effects in certain subgroups of the some 500 minority participants. Among African-Americans and Asians, the treated participants had a 3.7% infection rate compared with 9.9% for the controls.

VaxGen's subgroup analysis was criticized by outside experts. These objections were summarized in a recent issue of Science (Cohen, J., "Vaccine results lose significance under scrutiny," Science, 299, March 7, 2003, p. 1495). The article quotes John Moore of Cornell University as saying that "blacks and Asians lumped together is biological rubbish. They [VaxGen] might as well do a subgroup analysis on signs of the Zodiac." John Gurwith, the VaxGen scientist who directed the trial, reported that VaxGen had done nine comparisons of subgroups based on race. A Bonferroni correction would therefore change the p-value that VaxGen computed for the black subgroup from 0.02 to 0.18.

A chart accompanying the Times article presents the following data from the trial.

                                    Total                Infected at end        Percentage infected
Group                           Placebo   Vaccine      Placebo     Vaccine      Placebo     Vaccine
All subjects                      1,679     3,330           98         191         5.8%        5.7%
White and Hispanic                1,508     3,003           81         179         5.4%        6.0%
Black, Asian, other combined        171       327           17          12         9.9%        3.7%
Black                               111       203            9           4         8.1%        2.0%
Asian                                20        53            2           2        10.0%        3.8%
Other minorities                     40        71            6           6        15.0%        8.5%
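
For classroom discussion, the subgroup comparison and the Bonferroni adjustment are easy to sketch in Python. The article does not say which test VaxGen used, so the helper below (based on Fisher's exact test) is our own, and its p-values need not match the reported 0.02 exactly.

    # Reproducing the flavor of the subgroup analysis with scipy's Fisher exact test.
    from scipy.stats import fisher_exact

    def infection_test(placebo_total, placebo_infected, vaccine_total, vaccine_infected):
        """Two-sided Fisher exact test comparing infection rates in the two arms."""
        table = [[placebo_infected, placebo_total - placebo_infected],
                 [vaccine_infected, vaccine_total - vaccine_infected]]
        _, p = fisher_exact(table)
        return p

    # Figures from the chart above.
    p_all = infection_test(1679, 98, 3330, 191)   # all subjects
    p_black = infection_test(111, 9, 203, 4)      # black subgroup

    # Bonferroni correction for the nine racial subgroup comparisons VaxGen reported making.
    p_black_adjusted = min(1.0, 9 * p_black)

    print(f"All subjects:        p = {p_all:.3f}")
    print(f"Black subgroup:      p = {p_black:.3f}")
    print(f"Bonferroni adjusted: p = {p_black_adjusted:.3f}")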


"In 'Social Dilemmas,' We Tend to Cooperate."

by Carey Goldberg, Boston Globe, March 18, 2003, p. C23.

A recent study challenges the traditional notion that rational economic behavior is based entirely on self-interest. The Globe gives a quick outline of the experiment, but the details are clearer in a recent Nature article (Fehr, E., and Rockenbach, B., "Detrimental effects of sanctions on human altruism," Nature, 422, March 13, 2003, pp. 137-140).

Subjects were put into pairs, in which one participant played the role of "investor" and the other played the "trustee." Both were initially given $10. The investor could choose to transfer any portion of her $10 to the trustee, with both parties understanding that the transferred amount would be tripled. The investor would also tell the trustee what amount she wanted back. If the investor transferred the full $10 and the trustee returned $20, then each would make $10 on the deal (in general, both parties benefit equally if the trustee returns 2/3 of the tripled amount it receives). Obviously, a selfish trustee could return zero. On the other hand, an untrusting investor could choose to transfer zero, in which case no one would gain. Such situations are characterized here as "social dilemmas." As it turned out, 19 of 24 trustees paid back a positive amount. The game was played only once, so these results cannot be directly attributed to expected future rewards.
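
The payoff arithmetic is easy to check with a short sketch. The function below is ours, and the dollar figures follow the Globe account; the experiment reported in Nature was actually conducted in laboratory money units.

    def trust_game_payoffs(transfer, payback, endowment=10, multiplier=3):
        """Gains of (investor, trustee) relative to their initial $10 endowments."""
        investor = endowment - transfer + payback
        trustee = endowment + multiplier * transfer - payback
        return investor - endowment, trustee - endowment

    print(trust_game_payoffs(10, 20))   # (10, 10): full transfer, 2/3 of the tripled amount returned
    print(trust_game_payoffs(10, 0))    # (-10, 30): a selfish trustee keeps everything
    print(trust_game_payoffs(0, 0))     # (0, 0): an untrusting investor sends nothing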

A second version of the game, conducted with different subjects, allowed the investor to threaten a $4 fine if the trustee failed to return the requested amount. When the fine was threatened, the average paybacks were lower than in the original game; when investors chose not to threaten a fine, the average paybacks were higher than in the original. As described by lead researcher Ernst Fehr, we tend to respond positively to an expression of trust, but negatively to the distrust exhibited by a sanction.

The Globe quotes Fehr as explaining that while the US might well have needed to confront the notoriously uncooperative Saddam Hussein, our announced willingness to act without UN cooperation was an expression of distrust that hurt relations with our allies.


"What are the Odds?"

by Rose Simone, The Record (Kitchener-Waterloo, Ontario), March 29, 2003, p. H1.

The subtitle here reads "Chance and risk are part of everyone's life, from the casino gambler to the hospital patient trying a new drug. Mathematicians can help us determine the probability that things will happen, but beyond that, our world remains full of uncertainty." The article was inspired by a lecture at the University of Waterloo by Simon Singh. Singh is the author of a number of books on mathematical topics, including Fermat's Enigma (Bantam Books, 1998) and The Code Book (Anchor Books, 2000). More about his books and lectures is available at his Web site.

The present article is an essay on the ubiquity of chance phenomena. It includes a discussion of various gambling issues, including card counting in blackjack, roulette systems, and lottery games. Pointing out that the same kind of mathematics arises in health studies, it goes on to describe the difficulties that arose last summer when people struggled to interpret the risks and benefits of hormone replacement therapy. The article concludes with a philosophical discussion of the Uncertainty Principle and the deep question of whether probability is an essential feature of the universe.


"Quake Scientists Predict Big One Likely by 2032; Bay Area Fault Study Estimates 62% Chance of Deadly 6.7 Tremblor."

by David Perlman, San Francisco Chronicle, April 22, 2003, p. A1.

A group of more than one hundred earthquake experts from federal and state agencies, universities, and earthquake engineering firms has collaborated to study Northern California's earthquake risk. Their work combines the latest seismic data with state-of-the-art computer models to predict earthquake damage. The headline announces the main finding: there is a 62% probability of a magnitude 6.7 or greater quake striking the area within the next 30 years.
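
The study's models are far more elaborate, but as a rough classroom exercise one can ask what constant annual probability, assuming independence from year to year (our simplifying assumption, not the study's method), would yield a 62% chance over 30 years.

    # Back-of-the-envelope conversion: find the annual probability q with 1 - (1 - q)**30 = 0.62.
    p30 = 0.62
    q = 1 - (1 - p30) ** (1 / 30)
    print(f"Equivalent annual probability: {q:.3f}")   # about 0.03, i.e. roughly 3% per year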

The Chronicle quotes David P. Schwarz of the U.S. Geological Survey as saying

Our new data now is much more sophisticated and much more robust, but our results must still reflect many uncertainties. So we have to accept a broad error range, which could be anywhere from 38 percent to 87 percent, considering many different earthquake theories - which poses a big uncertainty. What we do know for sure, however, is that any big earthquake in the Bay Area will produce damaging ground motions over broad areas, and at substantial distances from the source of the quake.

The U.S. Geological Survey maintains an Earthquake Hazards Web site, where you can find technical summaries, news releases, and data graphics from the probability study. There is also a link to a Webcast of a 70-minute lecture entitled "Bay Area Earthquake Probabilities," which was sponsored by the University of California at Berkeley's Seismological Lab.


"The Bush Doctrine: How Many Wars are in Us?"

by John Vasquez, Newsday (New York), April 27, 2003, p. A26.

Vasquez is the author of The War Puzzle (Cambridge University Press, 1993), in which he proposes to investigate scientifically the causes and results of war. In the present article, he summarizes some of his arguments as they apply to the Bush Administration's approach to confronting terrorism.

Vasquez emphasizes the distinction between "preemptive" and "preventive" wars. The former are intended to undermine an imminent attack by striking first. The latter are undertaken to prevent a hostile regime from even developing the capability to mount an attack. The current US approach to terrorism seems to require a series of preventive strikes. Having just succeeded in Iraq, we have recently issued warnings to both Syria and Iran. Vasquez warns that we cannot count on victory in every such engagement. He cites one statistical study of wars fought from 1816 to 1945, which found that the state that was wealthier and lost a smaller percentage of its population prevailed 84% of the time. While the US seems to enjoy these advantages, Vasquez wonders if anyone is paying attention to the 16% downside.
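
Purely as a discussion exercise, suppose the historical 84% figure were treated as the probability that the favored side prevails in each of a series of independent engagements (a strong assumption not made in the article). The chance of winning every one falls off quickly:

    p_win = 0.84
    for k in (1, 2, 3, 5):
        print(f"{k} war(s): P(win all) = {p_win ** k:.2f}")
    # 1: 0.84, 2: 0.71, 3: 0.59, 5: 0.42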

Furthermore, we cannot count on every conflict being decided quickly. Vasquez cites studies of the Korean and Vietnam Wars to show that at each point where US casualties increased by factors of 10 (from tens to hundreds to thousands), public support dropped significantly. Falling below a 50% approval rating has obvious political ramifications. Vasquez notes that the time it takes to reach this point depends on the level of public support at the start of the operation. He predicts that a prolonged series of wars will steadily diminish the public's will to fight.


"What Some Much-Noted Data Really Showed About Vouchers"

by Michael Winerip, New York Times, May 7, 2003, p. B12.

"Report Defends Vouchers But Fails to Quell Debate"

by Sam Dillon, New York Times, June 13, 2003, p. A29.

In the late 1990s, a large randomized study on the effect of school vouchers was conducted in New York City. Some 20,000 students had applied for $1400 vouchers to attend private schools in the city. Funding was limited, so a lottery was used to select 1300 recipients. Another 1300 applicants were selected as controls for the study, and they remained in public schools. Results from the study were announced in the summer of 2000 by Harvard professor Paul E. Peterson, who said that vouchers had produced a significant improvement in the performance of black school children. This experiment drew wide media coverage because vouchers had become an issue in the 2000 presidential campaign, with George W. Bush in favor and Al Gore opposed. Peterson went on to publish a book, The Education Gap: Vouchers and Urban Schools (Brookings Institution Press, 2002), which he co-wrote with his Harvard colleague William G. Howell.

However, Peterson's original partner for the study, the Princeton firm Mathematica, actually disagreed with his conclusions. Their reservations were reported several weeks after Peterson's initial announcement, but at that time they received much less attention. It turns out that while five grade levels had been studied, gains were observed only for fifth graders. Also, it was unclear why gains reported for the 519 blacks in the study did not show up among whites and Hispanics. According to the Times, there had been no plan prior to the study to separate the results by race. In the interest of scientific disclosure, Mathematica made the full dataset available to other researchers.

Economist Alan Krueger of Princeton examined the data himself, and found that the racial breakdown had considered only the race of the mother. Thus the child of a black mother and a white father was counted as black, while the child of a white mother and black father was counted as white. Including the latter category would have added 214 blacks to the sample. Another 78 blacks had been omitted because their background data were incomplete. Looking at the larger sample of 811 blacks, Krueger found no significant advantage for vouchers. He is quoted in the first article above as saying "This appeared to be high-quality work, but it teaches you not to believe anything until the data are made available."

But the second article shows that the debate is not yet over. Peterson and Howell are quoted there dismissing Krueger's approach as "rummaging theoretically barefoot through data in the hopes of finding desired results." Krueger, for his part, maintained that "my conclusion after reviewing all the data is that these results are just not very robust."


"What's in a Name.com?"

by Richard Morin, Washington Post, May 11, 2003, p. B5.

A recent study suggests that beleaguered dot-com companies might be able to revive their stock values simply by dropping the dot-com from their names. P. R. Rau, a management professor at Purdue University, followed the stock prices of 150 publicly traded companies that made such a change between June 1, 1998 and August 31, 2001. To qualify, a company's original name had to be an Internet-style name ending in .com, .net, or .web. For example, Zap.com, a California manufacturer of electric bicycles, shortened its name to Zap. Overall, the 48 .com firms in the sample averaged a 17% gain in stock price in the two days after the change and a 29% gain in the first month. The article reports that there was also a case-control aspect to the study: each of the name-changers was matched with a company having similar products and a similar financial profile. The stocks of the name-changers significantly outperformed those of the comparison group.

In previous work, conducted during the technology boom, Rau had investigated the effect of adding a dot-com extension and found a similar improvement in stock values. Furthermore, companies that added the dot-com during the good times and later dropped it actually benefited twice.


"A Mathematician Crunches the Supreme Court's Numbers"

by Nicholas Wade, New York Times, June 24, 2003, p. F3.

Lawrence Sirovich of the Mount Sinai School of Medicine in New York City is an expert on visual pattern recognition. He recently applied these skills to an analysis of voting patterns in the Supreme Court (Sirovich, L., "A pattern analysis of the second Rehnquist U.S. Supreme Court," Proceedings of the National Academy of Sciences, 100, June 24, 2003, pp. 7432-7437). His data were drawn from the 468 decisions handed down by the court since the appointment of Justice Stephen Breyer in 1994. The same nine justices have served throughout this period.

The study is purely mathematical. It does not consider political orientations or the legal reasoning underlying the justices' opinions. A Court decision is represented simply as a vector of nine +1's and -1's. Each dimension represents a justice, with the +1 or -1 indicating whether that justice voted with the majority or minority. Since there must be more +1's than -1's in such a description, there are 2^8 = 256 possible vectors. Sirovich applies two kinds of analysis to these data: an information-theoretic measure and a singular value decomposition.
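
The count is easy to verify by brute force (a quick check of the combinatorics, not part of Sirovich's analysis):

    # Count length-9 vectors of +1's and -1's in which the +1's strictly outnumber the -1's.
    from itertools import product

    count = sum(1 for votes in product((+1, -1), repeat=9) if sum(votes) > 0)
    print(count, 2 ** 8)   # 256 256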

For the information-theoretic approach, Sirovich identifies two extreme court models. In the "Omniscient Court," the justices always unanimously reach the best decision. Since there is only one possible outcome, the Shannon information is I = 0 bits. In the "Platonic Court," each justice sees equally compelling arguments for both sides of each case and independently reaches a conclusion. Thus there are 256 equally likely outcomes, and a decision gives I = 8 bits of information. Sirovich interprets (I + 1) as the "effective number of ideal (platonic) justices." By this measure, he reports that the Court's rulings over the last nine years correspond to the action of 4.86 ideal justices.
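
The calculation can be sketched as follows. The decision counts below are invented purely for illustration (Sirovich's 4.86 comes from the actual 468 decisions), and the helper function is ours.

    import numpy as np

    def effective_ideal_justices(pattern_counts):
        """Shannon information (in bits) of the decision-pattern distribution, plus one."""
        p = np.asarray(pattern_counts, dtype=float)
        p = p / p.sum()
        p = p[p > 0]                        # patterns that never occurred contribute nothing
        info_bits = -(p * np.log2(p)).sum()
        return info_bits + 1

    # Hypothetical frequencies: many unanimous decisions, one common 5-4 split,
    # and a scattering of twenty other patterns.
    counts = [180, 100] + [8] * 20
    print(f"Effective ideal justices: {effective_ideal_justices(counts):.2f}")
    # The limiting cases check out: a single pattern gives 1, and 256 equally
    # likely patterns give 9.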

For the second approach, Sirovich computed the singular value decomposition (SVD) of the 468 by 9 matrix whose rows record the decisions. Geometrically, the decision vectors lie in nine-dimensional Euclidean space. The SVD reveals that the voting patterns can be effectively described using just two vectors. One is close to a unanimous decision, and the other is close to the most frequently observed 5 to 4 split. (In fact, the latter was the vote that ended the recount in the 2000 presidential election: Justices Kennedy, O'Connor, Rehnquist, Scalia and Thomas in the majority, with Justices Breyer, Ginsburg, Souter and Stevens dissenting.) Each justice's voting record can be closely approximated by a fixed linear combination of these two vectors.
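
A synthetic example gives the flavor of the computation. The decision matrix below is generated from two assumed patterns (a unanimous vote and a 5-4 split) plus random vote flips; it is not the actual Court record.

    import numpy as np

    rng = np.random.default_rng(0)
    unanimous = np.ones(9)
    split_5_4 = np.array([+1, +1, +1, +1, +1, -1, -1, -1, -1])

    rows = []
    for _ in range(468):
        base = unanimous if rng.random() < 0.4 else split_5_4
        flips = np.where(rng.random(9) < 0.1, -1, 1)   # flip a few votes at random
        rows.append(base * flips)
    decisions = np.array(rows)                          # a 468 by 9 matrix of +1's and -1's

    singular_values = np.linalg.svd(decisions, compute_uv=False)
    print(np.round(singular_values, 1))
    # The first two singular values dwarf the rest: two voting patterns capture
    # most of the structure, as Sirovich found for the real data.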

Sirovich used the same tools to analyze the Warren Court for two time periods during which no seats changed, 1959-1961 and 1967-1969. The record for the first period corresponded to 5.16 ideal justices, slightly more than the Rehnquist Court. However, the SVD again described the voting pattern with just two vectors, corresponding to a unanimous decision and a 5-4 split. The mathematics alone does not provide reasons for the similarity. Sirovich suggested that it may be attributable to the kinds of cases that ultimately reach the Supreme Court, or that the 5-4 pattern might reflect the way a majority forms in practice.


William P. Peterson
Department of Mathematics and Computer Science
Middlebury College
Middlebury, VT 05753-6145
USA
wpeterson@middlebury.edu

