Issues in Assessing Conceptual Understanding in Probability and Statistics

Clifford Konold
University of Massachusetts

Journal of Statistics Education v.3, n.1 (1995)

Copyright (c) 1995 by Clifford Konold, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.


Key Words: Instruction; Pre-post testing; Student intuitions.

Abstract

Research has shown that adults have intuitions about probability and statistics that, in many cases, are at odds with accepted theory. The existence of these strongly-held ideas may explain, in part, why learning probability and statistics is especially problematic. One objective of introductory instruction ought to be to help students replace these informal conceptions with more normative ones. Based on this research, items are currently being developed to assess conceptual understanding before and after instruction.

1. Introduction

1 For the past 15 years, I have been part of a research group investigating student understandings of various probabilistic and statistical concepts. We have focused primarily on concepts where the intuitions of adults are at odds with accepted probability or statistical theory. These have included interpretations of probability (Konold 1989; Konold 1991; Konold, Pollatsek, Well, Lohmeier, and Lipson 1993), randomness (Falk and Konold 1994; Konold, Lohmeier, Pollatsek, Well, Falk, and Lipson 1991), conditional probability (Pollatsek, Well, Konold, Hardiman, and Cobb 1987), the law of large numbers (Well, Pollatsek, and Boyce 1990), and conceptions of the mean (Pollatsek, Lima, and Well 1981). The kinds of data we have looked at have included simple responses on questionnaire items, complex responses to problems posed in individual interviews, and, most recently, the results of instruction in short tutoring sessions.

2 During the last five years I have been trying to apply our research findings to the development of software and curricula. In the process, I have found myself struggling with the question: How do we determine whether curricula are having the intended effect on students' thinking? One major assumption I make is that an effective course in probability ought to alter students' intuitive understandings about probability. In this paper, I summarize three major findings of our research that I think have important implications for assessment of intuitive understanding. These are that (1) students come into our courses with some strongly-held yet basically incorrect intuitions, (2) these intuitions prove extremely difficult to alter, and (3) altering them is complicated by the fact that a student can hold multiple and often contradictory beliefs about a particular situation. I do not make explicit recommendations here about what type of instruction might be most effective in altering such beliefs. As I will admit, assessments of my own instruction have not demonstrated the effects I had hoped for, and it would take more chutzpah than I have to launch into a series of recommendations about teaching after having shared these unimpressive results. However, some of the arguments I offer about the nature of knowledge and learning are suggestive of instructional approaches we should stress (and others we should de-emphasize) if we want to affect how students think (as opposed to how they respond on exams).

2. Major Findings

2.1. Students Have Theories or Intuitions About Probability and Statistics Prior to Instruction, Many of Which Are at Odds With Conventional Thinking

3 In a number of studies beginning in the early 70's, Tversky and Kahneman showed that the typical adult was not a statistician in miniature, that the reasoning of the statistical novice was fundamentally different from that of the expert. They posited a number of judgment heuristics that guide everyday reasoning under uncertainty. For example, consider the following problem from Kahneman and Tversky (1972):

All families of six children in a city were surveyed. In 72 families the exact order of births of boys and girls was G B G B B G. What is your estimate of the number of families surveyed in which the exact order of births was B G B B B B?

4 Assuming that boy and girl births are equally likely, the correct answer is that the two sequences are equally likely. Thus 72 is the best estimate of the frequency of BGBBBB within the families sampled. However, over 80% of Kahneman and Tversky's subjects gave estimates of fewer than 72. (Note that if answers were affected by subjects' knowledge that the probability of the birth of a boy is slightly higher than that of a girl, this would have driven estimates in the opposite direction, toward values larger than 72.) Kahneman and Tversky (1972) argued that this answer is based on reasoning according to a "representativeness heuristic." According to this heuristic, the likelihood of drawing a particular sample is judged by considering how similar the sample is to the parent population -- much like judging kinship based on parent-child resemblances. There are two general ways in which GBGBBG is considered highly representative of births. First, in contrast to BGBBBB, it has an equal number of boys and girls, better reflecting the fact that the individual outcomes are equally likely (or very nearly so). Second, in contrast to a sequence such as BBBGGG, it appears more haphazard and hence indicative of the randomness of sex determination.
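To make the equal-likelihood claim concrete: under the equal-probability assumption, each exact order of six births has probability (1/2)^6 = 1/64, regardless of its pattern. The short simulation below (a sketch in Python, not part of the original study) illustrates the point; both target sequences turn up about equally often among simulated six-child families.

import random

# Sketch only (not from the original study): assume boy and girl births are
# equally likely and simulate many six-child families.
random.seed(1)

def simulate_family():
    """Return one random birth order for a six-child family, e.g., 'GBGBBG'."""
    return "".join(random.choice("BG") for _ in range(6))

n_families = 1_000_000
counts = {"GBGBBG": 0, "BGBBBB": 0}
for _ in range(n_families):
    order = simulate_family()
    if order in counts:
        counts[order] += 1

# Each exact order has probability (1/2)**6 = 1/64, so both counts should be
# close to n_families / 64 (about 15,625).
print(counts)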

5 I have been following a somewhat different line of inquiry, investigating how novices interpret probabilities. For example, I have looked at what information college-age students derive from a probabilistic weather forecast. In an interview study (Konold 1989), several subjects interpreted the forecast "70% chance of rain today" as the prediction "It's going to rain today." I have termed reasoning of this form the "outcome approach." Asked for a probability of some event, people reasoning according to the outcome approach do not see their goal as specifying probabilities that reflect the distribution of occurrences in a sample, but as predicting results of a single trial.

6 Given the desire for predictions, outcome-oriented individuals translate probability values into yes/no decisions. A value of 50% is interpreted as total lack of knowledge about the outcome, leaving one no justification for making a prediction. As one subject volunteered in Konold (1989, p. 68):

If [the forecaster] said 50/50 chance, I'd kind of think that was strange ... that he didn't really know what he was talking about ....

7 Values sufficiently above or below 50% are encoded as "yes" or "no" predictions, respectively, as illustrated in this subject's response to the question of the meaning of the number in the phrase "70% chance of rain" (Konold 1989, p. 66).

Well, it tells me that it's over 50%, and so, that's the first thing I think of. And, well, I think of the half-way mark between 50% and say 100% to be like, well, 75%. And it's almost that, and I think that's a pretty good chance that there'll be rain.

8 This decoding of probability values into decisions is illustrated in Figure 1. In this illustration, all values in the white area on the right are mapped onto a "yes" decision, those in the gray area onto "don't know" and those in the white area on the left onto "no." Given this understanding of probability values, outcome-oriented individuals will assert that the forecaster has erred if it fails to rain on the day for which the 70% forecast was made.





Figure 1. The Outcome Approach of Mapping Probability Values onto Predictions.
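For readers who prefer a concrete rendering of this mapping, the small function below (a hypothetical sketch in Python; the cutoff values are illustrative, not estimated from the interview data) encodes the outcome-approach reading of a forecast probability as a flat prediction.

def outcome_approach_decision(p_percent, band_halfwidth=10):
    """Schematic model of the mapping in Figure 1 (illustrative only).

    Values near 50% are read as "don't know"; values sufficiently above or
    below 50% are read as flat "yes"/"no" predictions. The width of the
    "don't know" band is hypothetical, not estimated from data.
    """
    if p_percent >= 50 + band_halfwidth:
        return "yes"
    if p_percent <= 50 - band_halfwidth:
        return "no"
    return "don't know"

# An outcome-oriented listener hears "70% chance of rain" simply as "it will rain."
print(outcome_approach_decision(70))  # yes
print(outcome_approach_decision(50))  # don't know
print(outcome_approach_decision(30))  # no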


9 Below is a related item that we have recently used to assess the impact of instruction on the outcome approach (Konold and Garfield 1993, adapted from Falk 1993, problem 5.1.1, p. 111).

Weather problem. The Springfield Meteorological Center wanted to determine the accuracy of their weather forecasts. They searched their records for those days when the forecaster had reported a 70% chance of rain. They compared these forecasts to records of whether or not it actually rained on those particular days.
The forecast of 70% chance of rain can be considered very accurate if it rained on:
a. 95% - 100% of those days.
b. 85% - 94% of those days.
c. 75% - 84% of those days.
d. 65% - 74% of those days.
e. 55% - 64% of those days.

10 We have administered this item at the beginning of a number of statistics courses and workshops. Figure 2 shows the frequency of responses to the various options by 119 students.





Figure 2. Frequency of Pre-Instruction Responses of 119 Students to the Weather Problem.


11 I should point out that this problem can be used to create lively discussion about the meaning of probability among a group of experts. One of the reviewers of this article, for example, objected to the use of the informal word "chance" in place of the technical term "probability," and also to expressing the probability as a percent. Indeed, one could argue that if the probability value given by the forecaster was meant only to communicate the probability of rain on the next day, it should not be expressed in percentage form. We chose to express the probability as a percent and to use the word "chance" because those are common forms of expression in forecasts, and also to suggest that forecast accuracy might be legitimately assessed by looking at performance over time.

12 We cannot judge the accuracy of a probabilistic weather forecast for a particular day by noting whether or not it rained on that day. We can, however, determine whether a forecaster is accurate in assigning probabilities to forecasts by looking at long-term performance. A forecaster is calibrated when, roughly p% of the time, events to which he or she assigns a probability of p% actually occur (cf. Lichtenstein, Fischhoff, and Phillips 1981, pp. 306-307). Roughly 32% of the students selected the correct range, 65% - 74%, which includes within it the normative value of 70%. The modal response (36%) was option a, the highest range. This response, which is in accordance with the outcome approach, suggests that these students expect rain to occur nearly all of the time when it has been forecast with a 70% probability. For these students, there seems to be little quantitative information conveyed in a probability value -- a probability of 80% would seem to communicate no additional strength of belief over one of 70%.
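A calibration check of this kind is easy to express in code. The sketch below (in Python; the forecast records are invented for illustration) computes, for days given a 70% forecast, the proportion on which it actually rained.

# Illustrative sketch only; the records below are invented.
# Each record pairs a forecast probability (percent) with whether it rained.
records = [
    (70, True), (70, True), (70, False), (70, True), (70, False),
    (70, True), (70, True), (70, False), (70, True), (70, True),
    (30, False), (30, True), (90, True),
]

rained_on_70_days = [rained for prob, rained in records if prob == 70]
observed_rate = 100 * sum(rained_on_70_days) / len(rained_on_70_days)

print(f"70% forecasts: rained on {observed_rate:.0f}% of {len(rained_on_70_days)} days")
# A well-calibrated forecaster would show an observed rate near 70%.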

13 Performance on these and similar items demonstrates that many students come into a course on probability and statistics with some strongly-held intuitions about probability and statistics that conflict with what they are about to be "taught." Even so, some would probably argue that we should ignore these intuitions and continue teaching much as we currently do, on the belief that we can "overwrite" student beliefs with more appropriate ones. This brings me to our second finding.

2.2. Students' Theories Are Difficult to Alter

14 I have heard teachers on the first day of class admonish students, "Forget everything you know about x." They hope for the situation depicted in Figure 3, where the information conveyed by the teacher will be accurately encoded by the students.





Figure 3. Representation of View of Learning in Which Information to Be Learned is Copied Into the Mind of the Learner.


15 The "forget-everything" request implies an understanding that prior knowledge can influence what one learns, but reveals a failure to accept that there is no alternative. When presented new information, we have no other option than to relate it to what we already know -- there is no blank space in our minds within which new information can be stored so as not to "contaminate" it with existing information. Learning in the classroom involves students weaving selected and interpreted teacher outputs into an existing fabric of knowledge. In this way, learning is both limited and, at the same time, made possible by prior knowledge. This "constructive" view of learning, as depicted in Figure 4, explains the frequent gap between what students report and what we, as teachers, thought we clearly communicated.





Figure 4. Representation of View of Learning in Which Information to Be Learned is Always Interpreted by and Integrated With the Existing Knowledge of the Learner.


16 My research objective has been to try to determine from student outputs, as depicted in the bubble on the right, the form of their theories, as shown in the schematic of the student's head on the left. A useful model of student thinking allows one to interpret and understand the logic of students' statements that might otherwise seem bizarre.

17 We have a variety of data suggesting that these intuitions are persistent and, to this point, survive our best teaching efforts. Figure 5, for example, compares the pre- and post-instruction responses of the 119 students to the weather problem.




Figure 5. Frequency of Pre- (Black) and Post- (Gray) Instruction Responses of 119 Students to the Weather Problem.


2.3. A Student Can Hold Multiple and Often Contradictory Beliefs About a Particular Situation

18 One of the instructional approaches we have been testing involves having students articulate their theories and then put them to the test, for example, by conducting computer simulations to determine the relative likelihood of various outcomes of some event and comparing these to their expectations. One of the difficulties we have encountered in using this approach is that students have multiple, and often conflicting, perspectives from which they can reason; furthermore, they can switch among these perspectives in reasoning about a single situation. Consider the following problem:

Coin problem: Part 1. Which of the following sequences is most likely to result from flipping a fair coin 5 times?
a. H H H T T
b. T H H T H
c. T H T T T
d. H T H T H
e. All four sequences are equally likely.
Coin problem: Part 2. Listed below are the same sequences of H's and T's as listed above. Which of the sequences is least likely to result from flipping a fair coin 5 times?
a. H H H T T
b. T H H T H
c. T H T T T
d. H T H T H
e. All four sequences are equally unlikely.

19 When we first administered Part 1 of this problem to undergraduates, we expected that the majority of subjects would apply the representativeness heuristic and thus answer b. However, roughly 70% of the subjects correctly responded e, that the sequences were equally likely. We became suspicious of the reasoning underlying their answer after reading the accompanying written justifications, some of which were of the form "Any of the sequences could occur." Of course this is true, but it is not the same as the claim that the sequences are all equally likely.

20 The next time we administered this problem we included Part 2, which appeared on the same page as Part 1. In several administrations of this two-part item, slightly over half of the subjects who selected e on Part 1 did not select e on Part 2; rather, they indicated that one of the other sequences is "least likely." Clinical interviews with 20 subjects using this two-part item (Konold et al. 1993) suggest that this inconsistency results from subjects' applying different perspectives to the two parts of the problem. In Part 1, many subjects think they are being asked, in accordance with the outcome approach, to predict which sequence will occur. Because each sequence has such a small probability of occurring, these subjects select answer e. They do not, however, mean by this that the sequences have the same probability, but rather that they cannot rule out the occurrence of any of them. In Part 2, many of these same subjects now apply the representativeness heuristic, suggesting, for example, that the sequence THTTT is less likely because it contains an excess of T's. We speculate that the change in perspective from the outcome approach to the representativeness heuristic is cued by the fact that we normally do not make predictions about the non-occurrence of events (e.g., which horse will not win the race). Because they cannot readily apply the prediction scheme of the outcome approach to Part 2 of the problem, they revert to the representativeness heuristic.
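A simulation of the kind described in the instructional approach above makes the normative answer to both parts concrete: each specific five-flip sequence has probability (1/2)^5 = 1/32, regardless of its pattern. The sketch below (in Python; it is not the material actually used with students) estimates the relative frequency of the four listed sequences.

import random

# Sketch only (not the actual course materials): estimate how often each
# specific five-flip sequence occurs when a fair coin is flipped 5 times.
random.seed(2)

targets = ["HHHTT", "THHTH", "THTTT", "HTHTH"]
counts = {seq: 0 for seq in targets}
n_trials = 320_000

for _ in range(n_trials):
    flips = "".join(random.choice("HT") for _ in range(5))
    if flips in counts:
        counts[flips] += 1

# Each sequence has probability (1/2)**5 = 1/32, so every count should be
# near n_trials / 32 (about 10,000), whatever its pattern of H's and T's.
for seq in targets:
    print(f"{seq}: {counts[seq] / n_trials:.4f}")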

21 The results on the coin problems also illustrate the danger of drawing conclusions about the state of a person's knowledge on the basis of performance on a single item, or, for that matter, a series of items.

3. Some Caveats

22 In stressing the difficulty of altering various student conceptions, I certainly do not mean to discourage attempts to teach probability and statistics concepts. Rather I want to suggest that we do more than we currently do to determine the effects of instruction on whatever concepts we hope our instruction fosters. Had I not been testing for conceptual development, I am sure I would feel the curriculum I have designed was magnificent. Designing a set of assessment items has forced me to articulate my major objectives and to evaluate the effectiveness of the materials I am designing. I hope the curriculum is improving, and I shall find out. While I have been disappointed by the lack of change on the items, I have developed more confidence in the items, if for no other reason than that they are not easily taught to.

23 Let me emphasize that students in the classes I have assessed do reasonably well on in-class tests and quizzes that are given soon after the relevant topics are introduced and typically include items that resemble those they have been working on. But I do not regard most teacher-generated quizzes and tests as measures of conceptual change. In courses I teach, I avoid including on quizzes and tests items that are very similar to the ones I use pre- and post-instruction, because if I did so, I would undermine the usefulness of these items in measuring real rather than apparent conceptual change. I have no doubt that I could, in two weeks, get students to perform at high levels on the items I have presented here if that were my sole objective. I would do this by giving students many similar problems and discussing with them the correct solution.

24 I also should point out that the concepts I have been researching and trying to assess are those which are particularly problematic for students. This is not a sufficient reason to regard these concepts as central objectives in a course on probability and statistics. Certainly there are many concepts and skills that students are learning that the items I have been using do not address, and I would not recommend that the narrowly conceived items I have discussed become a general indicator of the success of some instruction. More generally, I worry about how items meant to assess achievement or readiness can begin to drive a curriculum, too often in ways that are detrimental to conceptual change. I would prefer to learn nothing about how students' cognitions might be changing during instruction than, in the process of trying, to undermine my instructional objectives. Indeed, I think many classroom quizzes and tests do just this. In trying to assess competencies at the most rudimentary level, or not to overwhelm students with problems that require legitimate problem solving and reasoning, the teacher inadvertently leads students to believe that routine skills and memorized formulae are the important stuff.

Acknowledgements

A version of this paper was presented at the Annual Meeting of the American Statistical Association in San Francisco, CA, August 1993. This research was supported by funding from the National Science Foundation (Grant #MDR-8954626). The opinions expressed here are the author's and not necessarily those of the Foundation.


References

Falk, R. (1993), Understanding Probability and Statistics: A Book of Problems, Wellesley, MA: AK Peters.

Falk, R., and Konold, C. (1994), "Random Means Hard to Digest," Focus on Learning Problems in Mathematics, 16, 2-12.

Kahneman, D., and Tversky, A. (1972), "Subjective Probability: A Judgment of Representativeness," Cognitive Psychology, 3(3), 430-453.

Konold, C. (1989), "Informal Conceptions of Probability," Cognition and Instruction, 6, 59-98.

Konold, C. (1991), "Understanding Students' Beliefs About Probability," in Radical Constructivism in Mathematics Education, ed. E. v. Glasersfeld, Amsterdam: Kluwer, pp. 139-156.

Konold, C., and Garfield, J. (1993), "Statistical Reasoning Assessment. Part 1: Intuitive Thinking," unpublished manuscript, SRRI, University of Massachusetts, Amherst, MA.

Konold, C., Lohmeier, J., Pollatsek, A., Well, A.D., Falk, R., and Lipson, A. (1991), "Novice Views on Randomness," in Proceedings of the Thirteenth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, 1, ed. R.G. Underhill, Blacksburg: Virginia Polytechnic Institute and State University, pp. 167-173.

Konold, C., Pollatsek, A., Well, A.D., Lohmeier, J., and Lipson, A. (1993), "Inconsistencies in Students' Reasoning About Probability," Journal for Research in Mathematics Education, 24, 392-414.

Konold, C., Well, A.D., Pollatsek, A., and Lohmeier, J. (1993), "Teaching the Law of Large Numbers via Dynamic Sampling," in Proceedings of the Fifteenth Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, 1, eds. J.R. Becker and B.J. Pence, San Jose: San Jose State University, pp. 299-305.

Lichtenstein, S., Fischhoff, B., and Phillips, L.D. (1981), "Calibration of Probabilities: The State of the Art to 1980," in Judgment Under Uncertainty: Heuristics and Biases, eds. D. Kahneman, P. Slovic, and A. Tversky, New York: Cambridge University Press, pp. 306-334.

Pollatsek, A., Lima, S., and Well, A.D. (1981), "Concept or Computation: Students' Misconceptions of the Mean," Educational Studies in Mathematics, 12, 191-204.

Pollatsek, A., Well, A.D., Konold, C., Hardiman, P., and Cobb, G. (1987), "Understanding Conditional Probabilities," Organizational Behavior and Human Decision Processes, 40, 255-269.

Well, A.D., Pollatsek, A., and Boyce, S.J. (1990), "Understanding the Effects of Sample Size on the Variability of the Mean," Organizational Behavior and Human Decision Processes, 47, 289-312.


Clifford Konold
Scientific Reasoning Research Institute
University of Massachusetts
Amherst, MA 01003
konold@srri.umass.edu

TERC
2067 Massachusetts Ave.
Cambridge, MA 02140
cliff_konold@qm.terc.edu
