Journal of Statistics Education Volume 14, Number 2 (2006), jse.amstat.org/v14n2/johnson.html
Copyright © 2006 by Marianne Johnson and Eric Kuennen all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor.
Key Words: Determinants of student performance; Introductory collegiate statistics; Mathematical skills.
The importance of mathematical skills to student performance in other quantitative disciplines, however, is widely recognized. Studies have found that high Scholastic Aptitude Test (SAT) or American College Test (ACT) mathematics scores, or having taken calculus, have a significant and beneficial effect on student grades in economics courses (Anderson, Benjamin, and Fuss 1994; Durden and Ellis 1995; Ely and Hittle 1990; Johnson and Kuennen 2004). Further, Ballard and Johnson (2004) find that mastery of very basic mathematics concepts, of the kind covered in remedial or developmental mathematics courses, is positively and statistically significantly related to student success in introductory economics. Grillo, Latif, and Stolte (2001) find mathematics skills are important for pharmacology students, and Ely and Hittle (1990) document the importance of math skills for finance majors.
We realize that while advanced statistics is very much a mathematical discipline, introductory statistics is generally considered not to be a mathematics course, and the amount of mathematics used in the course can vary widely among instructors. Some instructors require students to compute certain statistical measures by hand, maintaining that doing the calculations themselves strengthens students' understanding of the meaning of the statistic; others rely on calculators or software packages to do the calculations, stressing instead the student's ability to correctly interpret the results.
Naturally an instructor who requires a large amount of technical computation in statistics will find mathematical skills to be an important factor in students' success, but the importance of mathematics skills may go beyond merely the ability to do the calculations; it may also influence the ability to analyze data, reason quantitatively, and interpret the results of numerical computations. For example, while only arithmetic is needed to compute a standard deviation, other basic mathematics skills, such as understanding ratios, may be important to understanding what the standard deviation measures, when to apply it, and how to interpret the result, whether the computation is done by hand or by pressing a key on a calculator or computer. Hence, regardless of the level of computational rigor required by the instructor, basic mathematics skills may be an important determinant of student success in introductory statistics.
The purpose of this study is to identify the types of mathematics skills most associated with student success in an introductory business statistics course. We include a range of measures of mathematics skills, including the math courses students have taken and student scores on the mathematics and science portions of the ACT exam. We also measure math skills directly, through student scores on a quiz of very basic mathematical skills, such as the ability to calculate the slope of a line or the area of a triangle, or to divide by a fraction. (see End Note 1)
2. Course Description and Objectives
“Economics and Business Statistics” is a 200-level (sophomore) university course in introductory statistical analysis
taught by the Economics Department. Three different economics professors participated in this study. The course is
designed primarily for business majors; however students of other majors also enroll in the course, most notably journalism
majors. There is a mathematics prerequisite of precalculus-level skills demonstrated by either completing precalculus
(college algebra) with a grade of C or better, or placing sufficiently highly on the university's math placement exam taken
by entering freshmen. Class sizes average roughly 35 students.
3. Design and Methodology
The catalog description states only that the course will cover “descriptive methods, probability and inference, regression and correlation, index numbers, and time series.” As is typical of many universities, instructors have considerable freedom in designing course content, choosing texts, and developing their own evaluative procedures to meet the course objectives. A comparison of the course syllabi indicates that the three professors covered the same topics, and all three had course objectives that included the ability to apply appropriate statistical techniques to analyze data, and the ability to interpret the results of statistical calculations.
Professor 1's graded material included weekly multiple-choice quizzes and three multiple-choice exams. Professors 2 and 3 required bi-weekly homework assignments involving pure technical computations, analysis and explanation, and some computer exercises. In addition, each section had different textbooks and relied more or less heavily on computer software (Minitab) as part of the course. All three professors agree with the categorization that Professor 1 is “high math,” Professor 2 is “low math” and Professor 3 is “medium to medium-low math.” These categorizations refer to the complexity of the mathematics used in lectures (e.g., Professor 1 would frequently use calculus in lectures) and the amount and complexity of numerical calculation on exams and homework. Professor 2 did not require students to compute any statistics by hand, and Professors 2 and 3 placed greater emphasis on verbal description and explanation of statistical techniques and results than Professor 1.
While it naturally complicates our study to have three different professors, each with different teaching styles, course structures, and levels of mathematical content, we view this as a strength of our study. Since we are interested in whether good mathematics skills are important determinants of student performance in statistics, we seek to know if the results are robust across teaching methodologies and course structures. If so, our results would be more widely applicable to other instructors and at other universities.
The data for the independent variables were gathered from a 26-question survey given on the first day of class. Thus the sample population of this study consists of 292 individuals who participated in the survey and completed the course; these students are a subset of the 388 students enrolled in the participating sections. Since some of the students did not participate in the survey, there is a possibility of bias in our estimates if those who filled out the survey were systematically different from those who did not (Chan, Shum, and Wright 1997; Douglas and Sulock 1995). We return to this point below. The survey consisted of questions dealing with demographics, motivation, and previous math experience. We attempt to control for motivation, attendance, and ability by using variables generated from the survey or provided by the university. These variables include students' official university GPAs, official ACT scores broken down by subject field, and student reported hours spent studying, working, and in other activities. (see End Note 2) The summary statistics for the variables are reported in Table 1. As is evident from Table 1, the students in this sample are predominantly white, male, and sophomores. Nearly all students were taking statistics because it is required for their major.
Table 1. Summary Statistics*

| Variable | Mean | Std. Dev. |
| --- | --- | --- |
| Class Required for Major | | |
| English is Native Language | | |
| Weekly Hours Work for Pay | 14.42 | 12.18 |
|   Work more than 0 but less than or equal to 20 hours/week | | |
|   Work more than 20 but less than or equal to 30 hours/week | | |
|   Work more than 30 hours/week | | |
| Weekly Hours in an Extracurricular Activity | 4.49 | 6.00 |
|   Participate more than 0 but less than or equal to 10 hours/week | | |
|   Participate more than 10 but less than or equal to 20 hours/week | | |
|   Participate more than 20 hours/week | | |
| Weekly Hours Study for All Courses | 11.54 | 7.36 |
| Official University GPA | 2.80 | 0.52 |
| ACT English Score | 22.05 | 3.32 |
| ACT Reading Score | 21.56 | 3.53 |
| Grade in Course | 2.81 | 1.03 |
*We collected additional information that turned out not to be significantly related to course performance. Variables included parental education levels, the grade students expected to earn in the course, sleeping habits, and self-reported attendance. In addition, we define “minority” = 1 if the student reported being nonwhite and minority = 0 if the student reported his or her race as white.
In this part of the country, students routinely take the ACT exam instead of the SAT exam. The ACT Exam is a multiple-choice test taken by many high-school students seeking entrance to college. It is largely equivalent to the SAT exam, but is divided into four areas, rather than two. The ACT exam contains sections on English, reading comprehension, mathematics, and science. We are particularly interested in student scores on the mathematics and science parts of the exam, as the tested skills and knowledge seem related to those most used in introductory statistics. The mathematics portion of the exam contains questions on pre-algebra and elementary algebra, intermediate algebra, geometry, and trigonometry. The science portion of the exam asks students to interpret data, read and analyze graphs, tables, and scatter plots, to comment on experimental design and the interpretation of experimental results, and to compare and evaluate conflicting viewpoints and hypotheses.
The background information gathered on students in this study is consistent with previous work in statistics and other related fields. Utts et al. (2003) identify grade point average (GPA), class standing, and gender as important confounding variables. They also include an expectations inventory and a computer-literacy inventory identifying the level of technology skills in students as control factors in their covariance analysis. Krieg and Uyar (1997, 2001) include similar control variables.
Three different professors participated in the study. Professors gave identical exams in each of their own sections, but exams differed across professors and across semesters. Thus, while not ideal, we choose to use “Grade in Course” as our dependent variable, rather than more instructor-dependent measures such as point total, and control for instructor-specific effects with dummy variables. We argue “Grade in Course” is additionally relevant since it is the variable of interest to the students, and has been used extensively in other studies of statistical learning and assessment (see Krieg and Uyar 1997; Krieg and Uyar 2001).
The summary statistics for the dependent variable, “Grade in Course” are also reported in Table 1. At this university, course grades are given out as 4.0, which is equivalent to an A, 3.5, which is equivalent to a B+, 3.0 which is equivalent to a B, etc. Roughly 20% of students received an A, across all 10 sections. Students most commonly earned a B, though over 10% of students earned a grade lower than a C.
Table 2. Mathematics Quiz

Answer the following mathematics questions to the best of your ability. Please do not use a calculator.

1. Solve the following system of equations for x: x = y - 6, y = 10. (a) -60 (b) 10/6 (c) 3 (d) 4 (e) -4. 3.01% of students answered this question incorrectly.

2. Solve the following system of equations for x: y = 2x + 3, y = 3x. (a) 0 (b) 3 (c) 3/5 (d) -3/2 (e) none of the above. 15.11% of students answered this question incorrectly.

3. Suppose that x = a/b. Then if a = 6 and b = 2, solve for x. (a) 12 (b) 8 (c) 3 (d) 4 (e) 1/3. 2.47% of students answered this question incorrectly.

4. Suppose that x = a/b. Then if x = 4 and b = 2, solve for a. (a) 1/2 (b) 2 (c) 4 (d) 8 (e) 16. 7.65% of students answered this question incorrectly.

5. Suppose that x = a/b. Then if x = 4 and a = 8, solve for b. (a) 1 (b) 2 (c) 32 (d) 4 (e) 1/2. 22.13% of students answered this question incorrectly.

6. Perform the following division: (1/2) ÷ (2/3). (a) 3 (b) 3/2 (c) 3/4 (d) 4/3 (e) 1/3. 38.40% of students answered this question incorrectly.

7. Find the area of the right triangle with legs a = 3 and b = 4 and hypotenuse c = 5. The area of the triangle is: (a) 3 (b) 4 (c) 6 (d) 12 (e) 25. 31.68% of students answered this question incorrectly.

8. The coordinates of point A are (1,2) and the coordinates of point B are (2,4). Find the slope of the line through A and B. (a) 1/2 (b) 1 (c) -1 (d) 2 (e) -2. 23.90% of students answered this question incorrectly.

9. The coordinates of point C are (1,4) and the coordinates of point D are (5,2). Find the slope of the line through C and D. (a) 1/2 (b) -1/2 (c) 2 (d) -2 (e) 5/4. 30.58% of students answered this question incorrectly.

10. Suppose you want to carpet a rectangular room that is 6 feet by 12 feet. Carpet costs $10 per square yard. Note that 1 yard = 3 feet. How much does it cost to carpet the room? (a) $720 (b) $2160 (c) $240 (d) $80 (e) $8. 57.14% of students answered this question incorrectly.

11. The fraction 13/38 is approximately (a) 0.15 (b) 0.25 (c) 0.35 (d) 0.45 (e) 0.55. 27.90% of students answered this question incorrectly.

12. The square root of 100,000 is about (a) 30 (b) 100 (c) 300 (d) 1,000 (e) 3,000. 77.07% of students answered this question incorrectly.

13. In a group of 900 voters, two-thirds said they would vote for the incumbent in the race for Governor. How many of the 900 voters said they would vote for the incumbent? (a) 200 (b) 300 (c) 330 (d) 600 (e) 660. 9.89% of students answered this question incorrectly.

14. In 1997, a total of 3,000 students were enrolled at Moo University. In 1998, the corresponding figure was 3,300. What is the percent increase in the number of students from 1997 to 1998? (a) 1% (b) 3% (c) 10% (d) 30% (e) 33%. 33.70% of students answered this question incorrectly.

15. What is 80% of 60? (a) 24 (b) 36 (c) 40 (d) 48 (e) 50. 17.46% of students answered this question incorrectly.
The mean score on the math quiz is 11.1 out of 15. As indicated in Table 2, some 22% of the students could not solve for b, given that x = 4 and a = 8. Further, 38% of the students could not divide 1/2 by 2/3; nearly 32% of the students could not find the area of a right triangle; and between 24% and 31% of the students could not find the slope of a line, depending on whether the line slopes upward or downward.
While these exact mathematical skills may not be used directly in statistical calculations, these results suggest that a significant number of students would likely have difficulty not only in performing statistical calculations, but also in understanding or interpreting them. For example, a student who cannot compute areas will likely struggle with manipulating standard normal probabilities, a student who does not understand fractions or division may have difficulty understanding means or standard deviations, and a student who cannot find the slope of a line will likely be unable to correctly interpret the slope in a linear regression.
We also include measures of the mathematics courses students have taken. At our university, students with sufficiently low scores on a university math-placement exam are required to take a remedial-math course. (In our sample, 16.8% of the students report facing this requirement. The university reports that, on average, roughly 21% of students are required to take remedial mathematics.) We also asked whether students took a calculus or business calculus course. In our regressions, we use a dummy variable for whether the student was required to take remedial math or if they had taken some form of calculus. The variables are summarized in Table 3.
Thus, we have several distinct measures of quantitative ability: (1) student scores on the Math ACT, (2) student scores on the Science ACT, (3) the score on the math quiz administered early in the semester, (4) whether the student has taken calculus or business calculus, and (5) whether the student had been required to take remedial math.
Table 3. Mathematics Background Variables

| Variable | Mean | Std. Dev. |
| --- | --- | --- |
| Calculus or Business Calculus | | |
| ACT Mathematics Score | 21.46 | 2.98 |
| ACT Science Score | 22.43 | 2.74 |
| Most Recent Math Course | | |
|   Taking a math course this semester | | |
|   Took a math course last semester | | |
|   Took a math course within the last year | | |
|   Took a math course two or more years ago | | |
| Math Quiz Score | 11.10 | 2.31 |
|   15 correct | | |
|   5 or fewer correct | | |
For students enrolled in the course, but who did not complete the survey, we have university-provided information about their GPA and ACT scores. Comparing these students to the survey sample, we find that the missing students have GPAs that are on average 0.10 lower than survey students (p-value < 0.05). The missing students also scored statistically significantly lower on their Math ACT than did survey students (p-value < 0.01). However, there were no statistically significant differences for English, Reading, or Science ACT scores. In addition, the missing students earned lower grades in introductory statistics compared to students who completed the survey. Thus, since poorer students were more likely to miss class the day the survey was completed, and since poorer students are also more likely to have problems with basic mathematics skills, as indicated by their Math ACT scores, we argue that if these students were included in the sample, our results would actually be strengthened. (see End Note 4)
An additional issue with our data is that for some students, we do not have values for their ACT scores. We are missing ACT scores for 94 students because transfer students and some special scholarship students are not required to provide an ACT score for admission to this university. We replace the missing ACT scores with predicted ACT scores from a regression of ACT score on a vector of student demographic and academic explanatory variables.
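A minimal sketch of this kind of regression imputation, using a single predictor and invented numbers (the paper regresses ACT score on a full vector of demographic and academic variables):

```python
import numpy as np

def impute_with_regression(y, x):
    """Fill missing values of y with fitted values from an OLS regression
    of y on x, estimated on the observations where y is present."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(x, dtype=float)])
    observed = ~np.isnan(y)
    # OLS coefficients from the non-missing subsample
    beta, *_ = np.linalg.lstsq(X[observed], y[observed], rcond=None)
    filled = y.copy()
    filled[~observed] = X[~observed] @ beta
    return filled

# Hypothetical example: two students missing a Math ACT score, predicted from GPA
gpa = [3.2, 2.5, 3.8, 2.9, 3.1, 2.2]
act = [24.0, 20.0, 28.0, np.nan, 25.0, np.nan]
filled = impute_with_regression(act, gpa)
```

One caveat of this approach, worth noting, is that imputed values have less variance than true scores, which can attenuate standard errors slightly.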
While one might expect the different measures of mathematics ability to be highly correlated, the actual correlation coefficients are surprisingly modest.(see End Note 5) As expected, there are positive correlations among the math-quiz score, GPA, and scores on the math and science portions of the ACT exam, though the value of the correlation coefficients never exceed 0.30. Student GPA and math quiz score are most highly correlated with the grade earned in introductory statistics (r = 0.50 and r = 0.22, respectively), though scores on the math and science portions of the ACT exam are also positively related to course grade (r = 0.14; r = 0.16, respectively). These results indicate that mathematics skills are complex and not easily represented by a single measure.
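The pairwise correlations described here are straightforward to compute; the snippet below uses a few invented records with hypothetical column names to illustrate the calculation:

```python
import pandas as pd

# Hypothetical records mimicking the ability measures discussed in the text
df = pd.DataFrame({
    "gpa":          [3.1, 2.4, 3.6, 2.9, 2.2, 3.3, 2.7, 3.0],
    "math_quiz":    [12, 9, 14, 11, 8, 13, 10, 12],
    "act_math":     [23, 19, 27, 22, 18, 25, 21, 23],
    "course_grade": [3.5, 2.0, 4.0, 3.0, 1.5, 3.5, 2.5, 3.0],
})
# Pairwise Pearson correlations among the ability measures and the grade
corr = df.corr()
print(corr["course_grade"].round(2))
```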
We estimate an ordered probit model with latent variable

y* = x′β + ε

where the vector x contains k explanatory variables, β is a k × 1 vector of coefficients, and the error ε | x follows a standard normal distribution. Let α_1 < α_2 < … < α_7 be a set of seven undetermined threshold parameters. We define our dependent variable y (Grade in Course) by y = g_j if α_{j-1} < y* ≤ α_j, where g_1 < g_2 < … < g_8 are the eight possible course grades, with α_0 = -∞ and α_8 = +∞.

Given this, we can derive the series of response probabilities determining y given the explanatory variables x:

P(y = g_j | x) = Φ(α_j - x′β) - Φ(α_{j-1} - x′β)

where Φ is the cumulative normal distribution function.
The parameters of this model can be estimated using a maximum likelihood function. The procedure is simple using statistical packages such as Stata, which contain an ordered probit estimation command. See also Greene (2000, pp. 875-879).
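As an illustration of the likelihood being maximized, here is a minimal sketch of an ordered probit fit on synthetic data, written with numpy and scipy rather than Stata; the single regressor, the three-category outcome, and all numbers are invented for the example (the paper's model has eight grade categories and many covariates):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, y, X):
    """Negative log-likelihood of an ordered probit: latent index X @ beta,
    cutpoints kept strictly increasing via exp() increments."""
    k = X.shape[1]
    beta = params[:k]
    alpha = np.cumsum(np.concatenate(([params[k]], np.exp(params[k + 1:]))))
    xb = X @ beta
    # P(y = j | x) = Phi(alpha_j - x'beta) - Phi(alpha_{j-1} - x'beta)
    cdf = np.column_stack([np.zeros(len(xb))]
                          + [norm.cdf(a - xb) for a in alpha]
                          + [np.ones(len(xb))])
    probs = np.diff(cdf, axis=1)
    return -np.sum(np.log(probs[np.arange(len(y)), y] + 1e-12))

# Synthetic data: one regressor with true coefficient 1, three ordered categories
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 1))
y = np.digitize(X[:, 0] + rng.normal(size=400), [-0.5, 0.5])
res = minimize(neg_loglik, np.zeros(3), args=(y, X), method="BFGS")
beta_hat = res.x[0]
```

In practice one would rely on a packaged estimator (e.g., Stata's ordered probit command, as the text notes); the sketch is only meant to make the likelihood concrete.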
Table 4. Ordered Probit Regression Results*

| Explanatory Variable | Estimated Coefficient | Standard Error | z-value |
| --- | --- | --- | --- |
| Hours Work per Week | 0.00 | 0.01 | 0.20 |
| Hours Study per Week | 0.02 | 0.01 | 1.65* |
| Math ACT Score | -0.01 | 0.02 | -0.53 |
| Science ACT Score | 0.05 | 0.03 | 1.90* |
| Reading ACT Score | -0.03 | 0.02 | -1.36 |
| English ACT Score | 0.00 | 0.02 | 0.20 |
| Math Quiz Score | 0.09 | 0.03 | 2.73*** |
| N, R-squared (End Note 7) | 292, 0.16 | | |

* Ordered probit estimation. Significance levels are indicated as: * = 10%, ** = 5%, and *** = 1%. Professor 3 serves as the comparison category. “Sophomore” = 1 if the student reported being a sophomore, and = 0 if the student reported any other university class status. “Minority” = 1 if the student reported a racial/ethnic category other than white, and = 0 if the student reported being white or Caucasian.
We find that the most important determinants of student performance are GPA, student score on the science portion of the ACT exam, score on the math quiz, student gender, and professor. All of these measures are significant at the ten-percent level or better. We also find that the coefficients on whether a student is a sophomore, hours reported studying per week, and student score on the reading portion of the ACT exam are marginally significant (p-value < 0.18). Professor 1 gave significantly higher grades and Professor 2 gave significantly lower grades than Professor 3. Since no difference was found across semesters in professor grading, the semester dummy variable was dropped from the final regression specification.
Holding all other variables constant at their means, we can evaluate how a particular explanatory variable influences the probability that a student earns a higher course grade. Women are likely to earn significantly higher grades than men. While the estimated coefficient on minority indicates that these students performed more poorly in the course, the result is not statistically significant. Students taking the course as sophomores are expected to do better than students taking the course out of sequence (p-value < 0.16). The number of weekly hours that students report working at paid jobs has a negative but statistically insignificant effect on course grade. (see End Note 8) On the other hand, the reported number of hours spent studying has a strong, positive impact on performance in the class (p-value < 0.10): a student who studies 20 hours a week has a 0.17 higher probability of earning a higher grade than a student who studies only 10 hours per week. And, as expected, GPA is positively and highly significantly correlated with statistics course grade, suggesting that the best predictor of student academic performance is past academic performance.
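Mechanically, such probability comparisons come from shifting the latent index and re-evaluating the normal CDF. The sketch below illustrates the calculation with the Table 4 coefficient on study hours and an invented cutpoint; the 0.17 figure in the text comes from the full estimated model, with all cutpoints and covariates at their estimated values:

```python
from scipy.stats import norm

# Coefficient on study hours from Table 4 (0.02 per hour); the value of
# alpha_j - x'beta at the sample means is invented for this illustration.
beta_study = 0.02
cutpoint_minus_xb = 0.0
# Probability of clearing the grade threshold at 10 vs. 20 study hours
p_10 = 1 - norm.cdf(cutpoint_minus_xb - beta_study * 10)
p_20 = 1 - norm.cdf(cutpoint_minus_xb - beta_study * 20)
delta = p_20 - p_10   # increase in the probability of the higher grade
```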
The estimated coefficient on the mathematics portion of the ACT exam is negative, though far from statistically significant. The coefficient on the reading portion of the ACT exam is also negative, and marginally significant (p-value < 0.17). While the estimated coefficient on the English portion of the exam is positive, it is statistically insignificant. However, student score on the science portion of the ACT exam has a positive and statistically significant coefficient (p-value < 0.06). Thus, out of all the skills and knowledge tested by the ACT exam, those tested in the science portion are most closely related to those useful in introductory statistics.
We find it surprising that having taken calculus or business calculus is not related to course performance, and whether the student has taken remedial mathematics is only marginally significant. We hypothesize that this result is due to the fact that all students who wish to enter the College of Business at our university are required to take both this introductory statistics course and business calculus. Thus the homogeneity in the student reported values obscures any positive effect that calculus skills might have for students taking introductory statistics. Further, because this is a non-calculus based introductory statistics course, calculus skills may be less beneficial than solid basic math skills.
The math-quiz score is positively and significantly related to student performance (p-value < 0.01). This suggests that very basic math skills may be more important than previously recognized. Our coefficients indicate that, all else equal, a student who answers all 15 math questions correctly is likely to earn a half to full letter grade higher in the course. While we might expect this to be the case for professors who rely on heavily mathematical presentation or technical homework exercises, the result holds even when controlling for which professor the student had, as “Professor” is included as an explanatory variable in the regression. Thus, in each instructor's course, holding constant all the other explanatory variables, there is a positive relationship between math skills and course performance. The extent of this positive relationship varies across the professors, but it is consistently significant.
The variables that pass the Chow test are gender, sophomore, work, GPA, remedial mathematics, and English ACT score, and so we add interaction terms that measure the effect of these variables for each professor. For example, to measure the effect of being female in the various professors' classes, we add the interaction terms female*professor 1 and female*professor 2. For a discussion of the use of interaction terms with dummy variables, see Suits (1957, pp. 548-551).
The remaining variables in our estimation - minority, hours study per week, whether a student has taken calculus, math ACT score, science ACT score, reading ACT score, and mathematics quiz score - do not pass the Chow test, so the estimated coefficients for these variables do not differ significantly across professors. Hence, for these variables we do not add interaction terms. The results of the regression are shown in Table 5.
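Constructing the professor-specific interaction terms is mechanical; a sketch with hypothetical column names, using Professor 3 as the omitted comparison category as in the text:

```python
import pandas as pd

# Hypothetical roster: professor identifier and a female dummy
df = pd.DataFrame({
    "professor": [1, 1, 2, 2, 3, 3],
    "female":    [1, 0, 1, 0, 1, 0],
    "gpa":       [3.2, 2.8, 3.5, 2.6, 3.0, 2.9],
})
# Professor dummies, with Professor 3 as the omitted comparison category
for p in (1, 2):
    df[f"prof{p}"] = (df["professor"] == p).astype(int)
    # Interaction: the differential effect of being female in professor p's sections
    df[f"female_x_prof{p}"] = df["female"] * df[f"prof{p}"]
print(df[["female_x_prof1", "female_x_prof2"]])
```

The total effect for a given professor is then the base coefficient plus that professor's interaction coefficient, which is how the combined figures discussed below are formed.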
Table 5. Ordered Probit Regression Results with Interaction Terms*

| Explanatory Variable | Estimated Coefficient | Standard Error | z-value |
| --- | --- | --- | --- |
| English ACT Score | -0.01 | 0.04 | -0.23 |
| Math ACT Score | -0.01 | 0.03 | -0.55 |
| Science ACT Score | 0.05 | 0.03 | 1.86* |
| Reading ACT Score | -0.02 | 0.02 | -0.83 |
| Math Quiz Score | 0.10 | 0.03 | 2.84*** |
| N, R-squared (End Note 7) | 292, 0.22 | | |

* Ordered probit estimation. Significance levels are indicated as: * = 10%, ** = 5%, and *** = 1%.
The interaction terms created between the professor dummy variables and gender, sophomore, work, GPA, remedial mathematics, and English ACT score show the relative importance of these variables for each of the three professors. Consider the example of the gender (female) variable. Recall that in the original regression (Table 4), women did significantly better overall. In Table 5, we can see to what extent being female is valued in each professor's course. Since Professor 3 serves as the comparison category in the regression, the coefficient on “female” of 0.53 indicates that in Professor 3's classes being female contributes positively and significantly to the course grade. In Professor 2's class being female has an even stronger effect, with a total coefficient of 0.53 + 0.50 = 1.03. However, in Professor 1's course, being female contributed negatively to course grade (0.53 - 0.77 = -0.24).
The results from the interaction-term specification of the model show that some characteristics or skills are valued more highly in some professors' courses than in others. But more important are the skills that are consistently valued across the three professors to approximately the same degree: science skills, as measured by the science portion of the ACT exam, and the math skills assessed by our basic mathematics quiz. This result has important implications for the teaching of statistics. If certain skills are consistently useful for students, regardless of the course format or teaching style of the professor, then we should pay more attention to developing these skills before or while students take the course.
To analyze these basic mathematics skills further, we estimate another specification of the original model that, in addition to the total math-quiz score, includes each math question individually. That is, this regression includes a set of fifteen dummy variables, one for each question on the math quiz. Questions 2, 4, 6, 10, and 12 are independently significant at the ten-percent level or better. These questions deal with very basic concepts in arithmetic, algebra, and geometry, including manipulating simple systems of equations, working with ratios, dividing fractions, solving a two-step word problem to find the area of a rectangle, and estimating square roots. Further, questions 5, 9, and 15 are marginally significant (p-value < 0.19). Thus, our regression results indicate that a variety of measures of quantitative skill have important effects on student achievement, including measures of some extremely basic skills. The results also point to the particular types of skills that are important for introductory statistics.
No grade was attached to the math quiz, so some questions arise as to student motivation on the quiz. We examine the reliability of student scores on the math quiz by comparing them to other measures of student performance, such as their grade in the course, their GPA, and their ACT scores. Using Cronbach's Alpha as a test of reliability, we find a scale reliability coefficient of 0.669 for test items including math quiz score, GPA, mathematics ACT score, science ACT score, and course grade. This suggests that the math quiz score is largely consistent with other measures of student academic performance, and can be taken as a reliable measure of student mathematics ability. (see End Note 9)
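Cronbach's alpha for a set of k items is k/(k - 1) × (1 - sum of item variances / variance of the item total). A self-contained sketch with invented standardized scores for the five measures named in the text:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical standardized scores: math quiz, GPA, ACT math, ACT science, grade
scores = np.array([
    [ 0.8,  0.6,  0.9,  0.7,  0.8],
    [-0.5, -0.3, -0.6, -0.2, -0.4],
    [ 1.2,  0.9,  1.1,  1.0,  1.3],
    [-1.0, -0.8, -0.7, -1.1, -0.9],
    [ 0.1,  0.2, -0.1,  0.0,  0.1],
])
alpha = cronbach_alpha(scores)
```

With strongly correlated items, as in this invented example, alpha is close to 1; the 0.669 reported in the text reflects the more modest correlations in the actual sample.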
We also perform sensitivity testing, following subjective cleaning of the data, removing the students for whom we have reason to believe that their math quiz score may be unreliable. We examine several regressions similar to that in Table 4, but dropping students with seemingly inconsistent values. We drop (1) students who scored a 7 or less on the math quiz, but earned a 3.0 or better in the course, (2) students who scored 7 or less on the math quiz, but earned the mean or higher on the composite ACT exam, and (3) students who scored 7 or less on the math quiz, but were not identified as needing remedial math work when they entered the university. This meant dropping roughly 16 students in each of the three situations examined. In all three cases, our results are stronger for the selected subsample. The coefficients on Science ACT and math quiz score rise and become more significant.
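The subsample construction amounts to filtering on a joint condition. For instance, rule (1), sketched with invented records and hypothetical column names:

```python
import pandas as pd

# Hypothetical records: math-quiz score and course grade
df = pd.DataFrame({
    "quiz":  [5, 12, 7, 14, 6],
    "grade": [3.5, 3.0, 2.0, 4.0, 1.0],
})
# Drop students with a low quiz score (<= 7) but a high course grade (>= 3.0),
# mirroring subsample (1) in the text
clean = df[~((df["quiz"] <= 7) & (df["grade"] >= 3.0))]
```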
That student scores on the ACT science exam are positively and significantly related to course grade in introductory statistics is perhaps not surprising, since the ACT science exam tests skills such as interpreting graphs, tables, and scatterplots, and includes questions on experimental design, interpreting experimental results, and comparing alternative viewpoints and hypotheses. These same abilities are a large part of what we require of students in introductory statistics.
It is also not controversial that quantitative skills are important to success in introductory statistics. However, it is especially informative that very basic mathematics skills are among the most important indicators of student success in a course where many of the skills directly assessed (such as analyzing data with descriptive statistics, hypothesis testing, or linear regression) are not necessarily of a basic-skills nature. In contrast, we find that neither taking calculus nor the ACT mathematics score (which measures higher mathematics skills such as algebra, geometry, and trigonometry) has a significant effect on course performance.
Moreover, we find that the importance of the basic math quiz score, as well as the science ACT score, is consistent across the three professors in the study to approximately the same degree, even though the three professors had differing teaching styles and course emphases. This means that basic mathematics skills are an important determinant of student success in elementary statistics regardless of the level of mathematics presented, or the relative emphasis on computation versus interpretation by the instructor. That we find the basic math quiz score an important factor even for the “low math” professor, who de-emphasized calculations and stressed interpretation of results, indicates that the importance of mathematics skills may go beyond merely the ability of students to “do the math”: mathematics skills may also help students to analyze and reason quantitatively, and to understand and interpret statistical measures. That our results are robust across three different teaching methodologies and course structures indicates that this study may be widely applicable to other instructors and at other universities.
Ballard, C. L. and Johnson, M. (2004), “Basic Math Skills and Performance in Introductory Microeconomics,” Journal of Economic Education, 35, 3-23.
Becker, W. (1987), “Teaching Statistical Methods to Undergraduate Economic Students,” American Economic Review, 77, 18-23.
Bresnock, A. E., Graves, P. E., and White, N. (1989), “Multiple-Choice Testing: Question and Response Position,” Journal of Economic Education, 20, 239-245.
Chan, K., Shum, C., and Wright, D. (1997), “Class Attendance and Student Performance in Principles of Finance,” Financial Practice and Education, 7, 58-65.
Cohn, E. (1972), “Students' Characteristics and Performance in Economic Statistics,” Journal of Economic Education, 3, 106-111.
Douglas, S. and Sulock, J. (1995), “Estimating Educational Production Functions with Correction for Drops,” Journal of Economic Education, 26, 101-112.
Ely, D. and Hittle, L. (1990), “The Impact of Math Background on Performance in Managerial Economics and Basic Finance Courses,” Journal of Financial Education, 16, 59-61.
Garfield, J., Hogg, B., Schau, C., and Whittinghill, D. (2002), “First Courses in Statistical Science: The Status of Educational Reform Efforts,” Journal of Statistics Education [Online], 10(2).
Greene, W. (2000), Econometric Analysis, 4th Ed., Upper Saddle River, NJ: Prentice Hall.
Grillo, J. A., Latif, D. A., and Stolte, S. K. (2001), “The Relationship Between Preadmission Indicators and Basic Mathematics Skills at a New School of Pharmacy,” Annals of Pharmacotherapy, 35, 167-172.
Hillmer, S. (1996), “A Problem-Solving Approach to Teaching Business Statistics,” The American Statistician, 50, 249-256.
Johnson, M. and Kuennen, E. (2004), “Delaying Developmental Mathematics: The Characteristics and Costs,” Journal of Developmental Education, 28, 24-30.
Krieg, R. and Uyar, B. (1997), “Correlates of Student Performance in Business and Economics Statistics,” Journal of Economics and Finance, 21, 65-74.
Krieg, R. and Uyar, B. (2001), “Student Performance in Business and Economics Statistics: Does Exam Structure Matter?” Journal of Economics and Finance, 25, 229-241.
Magel, R. (1996), “Increasing Student Performance in Large Introductory Statistics Classes,” The American Statistician, 50, 51-56.
Maxwell, N., and Lopus, J. (1994), “The Lake Wobegon Effect in Student Self-Reported Data,” American Economic Review, 84, 201-205.
Park, K. and Kerr, P. (1990), “Determinants of Academic Performance: A Multinomial Logit Approach,” Journal of Economic Education, 21, 101-111.
Stromberg, A. and Ramanathan, S. (1996), “Easy Implementation of Writing in Introductory Statistics Courses,” The American Statistician, 50, 159-163.
Suits, D. B. (1957), “Use of Dummy Variables in Regression Equations,” Journal of the American Statistical Association, 52, 548-551.
Utts, J., Sommer, B., Acredolo, C., Maher, M., and Matthews, H. (2003), “A Study Comparing Traditional and Hybrid Internet-Based Instruction in Introductory Statistics Classes,” Journal of Statistics Education [Online], 11(3).
Ward, B. (2004), “The Best of Both Worlds: A Hybrid Statistics Course,” Journal of Statistics Education [Online], 12(3).
Williams, M. L., Waldauer, C. , and Duggal, V. G. (1992), “Gender Differences in Economics Knowledge: An Extension of the Analysis,” Journal of Economic Education, 23, 219-231.
Wooldridge, J. M. (2002), Econometric Analysis of Cross Section and Panel Data, Cambridge, MA: MIT Press.
Department of Economics
University of Wisconsin Oshkosh
Oshkosh, WI 54901
Department of Mathematics
University of Wisconsin Oshkosh
Oshkosh, WI 54901