M. Ryan Haley, Marianne F. Johnson and Eric W. Kuennen
University of Wisconsin Oshkosh
Journal of Statistics Education Volume 15, Number 3 (2007), jse.amstat.org/v15n3/kuennen.html
Copyright © 2007 by M. Ryan Haley, Marianne F. Johnson and Eric W. Kuennen all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor.
Key Words: business statistics; introductory statistics; education research; student gender
Studies have yielded highly mixed results as to differences in male and female student performance in statistics courses; the role that professors play in these differences is even less clear. In this paper, we consider the impact of professor and student gender on student performance in an introductory business statistics course taught by economics faculty. Using a sample of 535 students, we find, after controlling for academic and mathematical background, that students taught by a professor of the opposite gender fare significantly worse than students taught by a professor of the same gender. The presence of this gender effect highlights the importance of pursuing sound, gender-neutral pedagogical practices in introductory statistics education.
A broad range of studies has documented gender-specific learning differences in science, mathematics, statistics, finance, and economics. In the majority of these disciplines, female students generally perform less well than their male counterparts. While this pattern is less evident in statistics, such results have been consistently documented in economics. An interesting confluence occurs at universities where statistics courses (business statistics courses, more specifically) are taught by economics faculty: how do gender-specific effects play out when statistical concepts are taught by economics faculty? We investigate this specific issue, and in so doing seek to add to the more general literature concerning the interaction between student and professor gender. Thus, unlike many existing studies, our analysis contains two sources of gender difference. The first is the effect student gender has on performance in business statistics, and the second concerns how professor gender interacts with student gender to impact course performance. Because our analysis involves statistics as taught by economists, we draw from gender-based studies in both disciplines.
Our inquiry builds on a unique and rich sample of 535 introductory business statistics students compiled from multiple sections across three semesters and four professors (two male and two female). While explanations for differences in academic performance by gender are extremely varied and sometimes even controversial, one particularly interesting area of research is the commingling of student and professor gender. Unlike biological or sociocultural explanations for gender differences in academic performance, how professor gender (or the interaction of student and professor gender) influences student performance is something that individual professors, departments, or institutions might more easily attempt to address.
The remainder of the paper is organized as follows. The next section describes some of the explanations given for differences in course performance based on gender and examines some of the relevant literature. Section three presents a description of the business statistics course offered at our university. Section four details the data set and frames the design of our study and the methods used in our analysis. Section five explains how we measure student performance. The estimation results are presented and interpreted in section six, and in section seven we collect our summary remarks and conclusions.
Gender-specific learning differences are evident even at an early age; for example, the National Assessment of Educational Progress (NAEP) documents that among nine-year-olds, males perform better than females on math and science tests but score lower in reading. These gaps widen (and become statistically significant) by age thirteen, and persist through secondary schooling (Dee 2007) and into the college years, influencing choices of majors.
Thus, despite trends suggesting that men are enrolling in and graduating from college at lower rates than women, men continue to outperform and/or outnumber women in fields such as mathematics, the sciences, engineering, finance, and economics (Freeman 2004). This pattern does not prevail in statistics, wherein women comprise a higher proportion of undergraduate majors and degree earners than in mathematics, engineering, or economics (Scheaffer and Stasny 2004).
Studies concerning the effect of gender on performance in statistics courses deliver, as a whole, mixed results (Alldredge and Brown 2006; Brooks 1987; Buck 1985; Cochran 2005; Hilton and Christensen 2002; Krieg and Uyar 2001; Schram 1996). Schram's (1996) meta-analysis of gender differences in applied statistics concludes that male-female performance is sensitive to the type of statistics course, the department offering the course, and how course grades are determined (i.e., exams, writing assignments, and/or homework). In general, women outperform men in statistics courses offered by business departments; however, the bulk of Schram's analysis is based on statistics taught in education and psychology courses. Johnson and Kuennen (2006) show that female students outperform male students in an introductory business statistics course, while Harraway (2002) finds no gender difference in performance in an introductory biostatistics course in New Zealand. Buck (1985) hypothesizes that gender effects may enter into statistics students' performance in several different ways: (1) male students may tend to monopolize the in-class attention of professors, (2) female students may be more sensitive to role-model effects, (3) professors may have gender-specific performance expectations, or (4) gender may meaningfully impact academic confidence or math skills. However, in an examination of her psychology statistics students (at both the introductory and advanced undergraduate levels), Buck finds no significant differences in performance across genders.
When introductory business statistics classes are taught by economics faculty (as is common in many business schools), the literature on gender and performance in economics courses can be drawn into the analysis of gender-specific learning effects in statistics. It has been consistently demonstrated that women perform less well than men in introductory economics courses (Ballard and Johnson 2005; Dynan and Rouse 1997; Robb and Robb 1999; Siegfried 1995). Many possible explanations for this have been advanced, including lecture and testing pedagogies, the lack of female role models, and sociocultural norms (Jensen and Owen 2000; Robb and Robb 1999). There is some evidence that the gender gap in performance in economics courses abates at the more advanced level, though women remain underrepresented in the discipline (Borg and Stranahan 2002; Dynan and Rouse 1997; Williams, Waldauer, and Duggal 1992).
There are a small number of studies that, like our own effort here, seek to unravel not only the effect of student gender on class performance, but also the effect professor gender has on gender-specific student performance. While Robb and Robb (1999) find evidence that males outperform females in economics courses, they find no evidence that professor gender impacts gender-specific student performance in economics courses. In contrast, in a nationwide study using National Longitudinal Survey data, Dee (2007) finds that eighth-grade student performance in reading and mathematics is highly sensitive to whether students have same-gender teachers.
A number of hypotheses have been advanced to explain how professor-student gender interaction occurs and why it is worthy of study. First, it may be the case that male and female professors have differing preconceived biases affecting how they interact and communicate with male and female students in the classroom (Dee 2007; Sadker and Sadker 1995). Second, some researchers point to a Pygmalion Effect, wherein professors unknowingly communicate different performance expectations to male and female students, which students then fulfill (Buck 1985; Jones and Dindia 2004). A third possibility is the role-model effect, wherein students respond positively to same-gender professors (Buck 1985; Dee 2007; Jensen and Owen 2000; Robb and Robb 1999). A fourth hypothesis is that professors use teaching pedagogies, examples, and stories that are inherently more appealing to same-gender students, which then predisposes same-gender students to better course performance (Jensen and Owen 2000; Robb and Robb 1999).
Previous empirical studies of gender and performance in statistics courses have focused on student gender, failing to take into account the potential importance of professor gender and professor-student gender interaction. Our objective in this paper is to look carefully at the potential existence of these effects in 16 sections of introductory business statistics taught by two male and two female economics professors at our own university.
Our business statistics course is a 200-level (sophomore) course geared toward business and economics majors; however, students from other majors (e.g., journalism, sociology, and geography) enroll in the course with some regularity. The minimum mathematics prerequisite is business calculus (with a grade of C or better) or its equivalent. "Equivalent" ways to meet the math requirement include taking one semester of standard calculus or scoring sufficiently high on the university's math placement exam. The typical class size is approximately 40 students. The catalog description states that the course will cover "descriptive methods, probability and inference, regression and correlation, index numbers, and time series."
To identify the impact of professor gender on gender-specific student performance, we included four different professors in our study: two male and two female. Because we want to see whether the effect of professor gender prevails across teaching methodologies and professor-specific course structures, having multiple professors of each gender is advantageous. However, we also recognize that we must control for professor-specific practices that may unduly influence the gender effect. A detailed, topic-by-topic comparison of course content revealed that all four professors covered nearly identical material, and a comparison of their syllabi revealed that they also had coincident course objectives; these observations were not surprising given administrative pressures to structure the class around the course's catalog description. Despite these commonalities, exams consisted of a professor-specific mix of multiple-choice, problem-solving, and essay questions. In addition, each professor used their own mix of homework, computer assignments, quizzes, and exams to determine the final grade.
We control for these sources of heterogeneity by standardizing our dependent variable, course grade, by professor; this process is described in detail at the end of section five.
The data set for this study was collected, in part, using a 26-question survey given on the first day of class in each of 16 sections (during the Spring 2004, Fall 2004, and Spring 2005 terms), and, in part, using official university records. The data set consists of 547 individuals who participated in the survey and completed the course; these students are a subset of the 630 students originally enrolled in the participating sections. The survey contained questions on student demographics, family background, motivation, and previous math experience, and is available from the authors by request. University and Institutional Review Board (IRB) approval was granted prior to conducting the study.
The breakdown of students by professor and professor gender is provided in Table 1. Of our sample, 40.3% of enrolled students were female, and roughly 50% of all students had a female professor. Students were fairly evenly distributed by gender across the professors.
Table 1. Students by Professor and Professor Gender

| Professor | Total Number of Students | Percentage of Male Students | Percentage of Female Students |
|---|---|---|---|
| Professor 1 (Male) | 192 | 61.5 | 38.5 |
| Professor 2 (Female) | 206 | 59.7 | 40.3 |
| Professor 3 (Female) | 79 | 62.0 | 38.0 |
| Professor 4 (Male) | 70 | 51.4 | 48.6 |
| Female Professors | 285 | 60.4 | 39.7 |
| Male Professors | 262 | 58.8 | 41.2 |
| Total | 547 | 59.7 | 40.3 |
We attempted to control for student motivation, attendance, and ability by using variables generated from the survey or provided by the university. These variables include students' official university grade point average (GPA) and official collegiate entrance exam scores (ACT exam) broken down by subject: Mathematics, Reading, English, and Science. Also included are student-reported hours spent studying per week and hours spent working per week. While our focus is the interaction of student and professor gender on course performance, our inclusion of these variables was motivated by previous research in statistics education (Hilton and Christensen 2002; Krieg and Uyar 2001; Utts, et al. 2003). The summary statistics for these variables are reported in Table 2.
Table 2. Summary Statistics for Demographic Variables by Student Gender^{+}

| Variable | Overall | Female | Male | Test of Means or Proportions (p-values) |
|---|---|---|---|---|
| Gender | — | 40.29% | 59.71% | — |
| Percent Minority | 3.96% | 3.57% | 4.22% | 0.70 |
| Percent with Female Professor | 52.10% | 51.13% | 52.76% | 0.71 |
| Grade in Course | 2.78 (1.03) | 2.89 (0.98) | 2.71 (1.07) | 0.04 |
| GPA | 2.82 (0.51) | 2.91 (0.50) | 2.76 (0.52) | < 0.005 |
| ACT English | 21.30 (3.64) | 21.38 (3.69) | 21.25 (3.62) | 0.67 |
| ACT Math | 22.10 (3.26) | 22.10 (3.34) | 22.11 (3.20) | 0.99 |
| ACT Science | 22.51 (2.80) | 21.92 (3.04) | 22.90 (2.55) | < 0.005 |
| ACT Reading | 22.97 (3.73) | 21.82 (3.83) | 22.07 (3.67) | 0.44 |
| Expected Grade in Course | 3.31 (0.51) | 3.30 (0.51) | 3.31 (0.51) | 0.83 |
| Percent Took Remedial Math | 14.75% | 10.71% | 17.47% | 0.03 |
| Math Quiz Score | 11.03 (2.20) | 10.78 (2.17) | 11.20 (2.20) | 0.03 |
| Age | 20.93 (3.00) | 20.67 (3.14) | 21.10 (2.89) | 0.10 |
| Hours Work | 14.18 (11.87) | 14.66 (11.07) | 13.86 (12.38) | 0.43 |
| Hours Study | 11.18 (6.88) | 11.75 (6.80) | 10.79 (6.91) | 0.11 |

^{+} We define minority = 1 if the student reported being nonwhite and minority = 0 if the student reported his or her race as white. The test of means (or proportions) tests whether the mean (or proportion) for females is different from the mean (or proportion) for males. Where appropriate, standard deviations are reported in parentheses beside the corresponding sample mean.
Table 2 contains t-tests of differences in means (or z-tests of differences in proportions) between male and female students over a number of variables. In our sample, women have significantly higher GPAs than men and earned significantly higher grades in introductory business statistics.
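As a concrete illustration, the proportion comparisons in Table 2 can be reproduced with a standard two-proportion z-test. The sketch below applies it to the "Percent Took Remedial Math" row; the group sizes (220 women, 327 men) are assumptions back-calculated from the 547-student sample and the 40.29% female share, not figures reported by the authors.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two sample proportions."""
    # Pooled proportion under the null hypothesis of equal proportions
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via the error function)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 10.71% of ~220 women vs. 17.47% of ~327 men took remedial math (Table 2);
# the counts are assumed from the 547 total and the 40.29% female share.
z, p = two_proportion_z(0.1071, 220, 0.1747, 327)
```

The resulting p-value is close to the 0.03 reported in Table 2, consistent with the difference being significant at the 5% level.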
There are several previously established results regarding gender differences in students that we were particularly eager to examine with our sample. First, a number of studies suggest that men score higher on standardized tests of science (Dee 2007). We found that, on average, men scored significantly higher than women on the ACT Science exam; this is true for both the general population at our university and for our sample. Both the men and women in our sample outperform their counterparts in the general university population on this portion of the ACT exam; however, the increase for the women was larger. The science portion of the ACT is of special interest to us, as this exam asks students to interpret data, graphs, tables, and experimental results, and to comment on experimental design, conflicting viewpoints, and hypotheses; these skills compare favorably to those needed in introductory business statistics (Johnson and Kuennen 2006).
Second, previous studies show that women tend to outperform men on standardized tests of English (Dee 2007), and indeed, in the general population at our university, women do on average perform better than men on the English portion of the ACT exam. In our sample, however, we find no significant difference in mean ACT English scores between males and females. This is due to the fact that, while both the men and women in our sample have, on average, higher ACT English scores than their counterparts in our general university population, it is the men who exhibit the larger and more significant increase in mean ACT English scores.
Third, it has been documented that women often perform less well in mathematical courses and on standardized mathematics exams (Dee 2007; Fennema, et al. 1990; Johnson and Kuennen 2006; Stage and Kloosterman 1995). We attempt to control for mathematical background with a variety of variables, including whether a student was required to take remedial mathematics, student scores on the Mathematics portion of the ACT exam, and student scores on a quiz of very basic math skills. The men in our sample did statistically significantly better on the test of basic math skills given on the first day of class, despite the fact that significantly more men than women were required to take remedial math. While in our general university population males do have significantly higher mean ACT Math scores than women, we find no significant difference between the male and female ACT Math scores in our sample. This is due to the fact that the men in our sample have, on average, lower (though not significantly lower) ACT Math scores than the males in our general university population, while the women in our sample have, on average, significantly higher ACT Math scores than their counterparts in the general university population.
To summarize, in our sample, as well as in the general university population, the men outperformed the women on the ACT Science exam. However, the men in our sample are stronger on the ACT English exam, compared to men in the general population, resulting in no significant difference between the men and women in our sample in terms of English skills. Furthermore, the women in our sample are stronger on the ACT Mathematics exam, compared to women in the general population, resulting in no significant difference between the men and women in our sample in terms of Mathematics skills. We will revisit these observations when discussing our estimation results.
We are missing data on roughly 15% of the students who were initially enrolled in the surveyed sections of the introductory business statistics course. We attribute the missing observations to the following causes: failing to complete the survey accurately (e.g., including neither a name nor a student ID number), dropping the course, taking an incomplete, refusing to participate in the survey, or missing class on the survey date. We recognize that self-selection and drop rates are serious data concerns (Chance and Garfield 2002); however, we have not corrected for this relatively small percentage of nonresponse. Previous studies (such as those detailed in Schram 1996) suggest doing so is generally irrelevant when identifying gender-specific differences in statistics. Moreover, we have no reason to believe that there is a systematic pattern along gender lines that would meaningfully influence the results obtained from the students who did participate.
An additional issue is that our data set does not include ACT scores for 186 students. This occurred primarily among students who transferred from twoyear colleges, and thus were not required to provide an ACT score for admission to our university. Following standard practice for missing data, we replaced the missing ACT scores with predicted ACT scores from a regression of ACT score on a vector of student demographic and academic explanatory variables.
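The regression-based imputation of missing ACT scores can be sketched as follows. The simulated data, the variable names, and the choice of predictors below are illustrative assumptions, not the authors' exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the student records: GPA, math-quiz score,
# and an ACT subscore that is missing for some students (coded as NaN).
gpa  = rng.normal(2.8, 0.5, 200)
quiz = rng.normal(11.0, 2.2, 200)
act  = 10 + 3.0 * gpa + 0.3 * quiz + rng.normal(0, 2, 200)
act[:40] = np.nan            # ~20% missing, as for transfer students

# Fit OLS on the complete cases, then predict the missing scores.
obs = ~np.isnan(act)
X = np.column_stack([np.ones(obs.sum()), gpa[obs], quiz[obs]])
beta, *_ = np.linalg.lstsq(X, act[obs], rcond=None)

X_miss = np.column_stack([np.ones((~obs).sum()), gpa[~obs], quiz[~obs]])
act_imputed = act.copy()
act_imputed[~obs] = X_miss @ beta    # fill gaps with fitted values
```

One caveat of this approach is that imputed values carry no residual noise, which slightly understates the variance of the imputed subsample.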
Measuring student learning is very difficult, largely because of the subjective nature of assessment. Students may not be able to effectively demonstrate their understanding of material under different testing or evaluation processes (Chance and Garfield 2002). For example, women, non-native English speakers, and minorities have been shown to perform more poorly on multiple-choice exams, controlling for other factors. We seek to document the interactive effect between student and professor gender on final course grade, rather than on a measure of student learning. Thus, we chose to use "Grade in Course" as our dependent variable, rather than more professor-dependent measures such as point total. We argue "Grade in Course" is additionally relevant because it is the variable of interest to the students (Johnson and Kuennen 2006; Krieg and Uyar 2001).
At our university, course grades are based on the 4-point scale, with half grades, as follows: 4.0 or A, 3.5 or AB, 3.0 or B, 2.5 or BC, 2.0 or C, 1.5 or CD, 1.0 or D, and 0.0 or F. Roughly 20% of students received an A, across all 16 sections. Students most commonly earned a B, though more than 12% of students earned a grade lower than a C. The explanatory variables most highly correlated with the grade earned in introductory statistics are student GPA, math-quiz score, and science ACT exam score (r = 0.53, r = 0.28, and r = 0.25, respectively). Women earned significantly higher grades in the course, on average, than did men. However, if we compare the grade earned in introductory business statistics to students' overall GPA, we find that women can expect to do about the same in statistics as in their usual course performance, whereas men can expect to do worse.
The dependent variable “grade” is actually a standardized course grade, which we use to account for differences in grading and exams across professors. If a student had professor A, then this student’s standardized course grade is constructed by taking this student’s grade in the course, subtracting professor A’s mean grade, and then dividing by the standard deviation of professor A’s grades. This new standardized grade variable now has a mean of zero and a variance equal to one.
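This within-professor standardization can be sketched in a few lines; the professor labels and grades below are made up for illustration:

```python
import numpy as np

# Toy data: each student's professor and raw course grade (illustrative values)
profs  = np.array(["A", "A", "A", "B", "B", "B"])
grades = np.array([4.0, 3.0, 2.0, 3.5, 3.0, 2.5])

# z-score each grade within its professor's grade distribution
stgrade = np.empty_like(grades)
for p in np.unique(profs):
    m = profs == p
    stgrade[m] = (grades[m] - grades[m].mean()) / grades[m].std()
```

By construction, each professor's standardized grades have mean zero and unit variance, so differences in grading leniency and rigor across professors are removed before estimation.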
We use this approach, instead of a series of professor-specific dummy variables, to control for professor-specific grading effects. The variation arising from differences in pedagogical approaches, differences in grading, and rigor of grading across professors is encapsulated in the standardized grade variable. This approach allows us to focus more easily on student and professor gender dummy variables and student-professor gender interaction terms. Grove, Wasserman, and Grodner (2006) also use standardized course grades as a dependent variable. An alternative approach would be to use the raw grade earned in the course as the dependent variable and include the above-mentioned gender dummy variables along with professor-specific dummy variables. This would yield equivalent regression results.
Statistical representations of grade formation (and even student learning) have a long history. Bloom (1976) developed what has come to be known as the "education production function" wherein students "produce" knowledge or grades with a variety of "inputs," including intelligence, effort, and various personal characteristics. Hanushek (1979) provided a methodological rationale for this statistical approach to measuring learning. Hanushek suggests basing the model to be estimated on theory, rather than utilizing a stepwise approach. Specifying a model of grade determination based on educational theory has several advantages over a stepwise regression approach. First, the results of such a model will be more comparable with the existing literature. Second, stepwise regression would fail to capture variables that are only statistically significant in interaction with other variables (such as our gender variables), meaning that important relationships could easily be lost. In this tradition, we estimate the following relationships using Ordinary Least Squares:
stgrade = α + Σ_{k=1}^{K} β_{k}X_{k} + γ_{1}stugen + γ_{2}profgen + γ_{3}(stugen × profgen) + u,  (1)

in which the student's standardized grade (stgrade) depends on an intercept term, α, and on k = 1, …, K explanatory variables, X_{k}, with coefficients β_{k}. In addition, we suggest that the student's grade depends on three gender variables: dummy variables for the student's gender (stugen) and the professor's gender (profgen), and the interaction of the two (stugen × profgen), whose coefficients are indicated by γ_{1}, γ_{2}, and γ_{3}, respectively. The error term u is presumed to have zero mean and constant variance. The specific explanatory variables used are presented in the text and tables of the following section. By standardizing course grade, we generate a continuous random variable, thereby making OLS a logical estimation tool. We report OLS estimates in this paper because of their more intuitive interpretation. All regressions are tested for fit and specification; heteroskedasticity-corrected (robust) standard errors are reported.
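To illustrate the model in equation (1), the sketch below simulates data with an opposite-gender penalty and recovers the coefficients by ordinary least squares. All magnitudes, the single control variable, and the sample size are invented for illustration; they are not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical data: one control (GPA) plus the gender dummies of eq. (1)
gpa     = rng.normal(2.8, 0.5, n)
stugen  = rng.integers(0, 2, n)      # 1 = female student
profgen = rng.integers(0, 2, n)      # 1 = female professor
inter   = stugen * profgen           # stugen x profgen interaction

# Simulated standardized grade with an opposite-gender penalty
stgrade = (-3.0 + 1.0 * gpa - 0.3 * stugen - 0.3 * profgen
           + 0.6 * inter + rng.normal(0, 0.5, n))

# OLS via least squares: columns are [const, GPA, stugen, profgen, inter]
X = np.column_stack([np.ones(n), gpa, stugen, profgen, inter])
coef, *_ = np.linalg.lstsq(X, stgrade, rcond=None)
# coef approximates [alpha, beta_gpa, gamma_1, gamma_2, gamma_3]
```

With a moderate sample, the estimated γ coefficients recover the simulated penalty and interaction reasonably closely, mirroring the structure (though not the values) of the Table 3 regressions.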
Consistent with previous studies, we find that race, university GPA, and basic math skills are among the most significant predictors of course grade in introductory business statistics; the regression results are reported in Table 3. Because the standard deviation of grades for all four professors in our study is approximately one grade point (measured on a fourpoint scale), it is convenient to interpret the estimated coefficients in our regressions as the effect of the variables on the course grade, measured in grade points.
Table 3. Determinants of Statistics Course Grade (Standard Errors Reported in Parentheses)^{+}

| Explanatory Variables | Regression 1 | Regression 2 | Regression 3 | Regression 4 |
|---|---|---|---|---|
| University GPA | 1.015 (0.076)*** | 1.016 (0.076)*** | 1.014 (0.075)*** | 0.717 (0.106)*** |
| ACT Math Score | 0.013 (0.014) | 0.013 (0.014) | 0.010 (0.013) | 0.007 (0.020) |
| ACT Science Score | 0.012 (0.017) | 0.011 (0.017) | 0.018 (0.017) | 0.032 (0.026) |
| ACT English Score | 0.009 (0.011) | 0.010 (0.011) | 0.009 (0.011) | 0.034 (0.015)** |
| ACT Reading Score | 0.017 (0.012) | 0.017 (0.012) | 0.018 (0.012) | 0.015 (0.015) |
| Math Quiz Score | 0.053 (0.017)*** | 0.054 (0.018)*** | 0.053 (0.017)*** | 0.068 (0.024)*** |
| Remedial Mathematics | 0.047 (0.100) | 0.048 (0.101) | 0.047 (0.099) | 0.007 (0.097) |
| Minority | −0.434 (0.172)** | −0.432 (0.172)** | −0.383 (0.170)** | −0.470 (0.167)*** |
| Sophomore | 0.062 (0.070) | 0.059 (0.071) | 0.060 (0.070) | 0.017 (0.069) |
| Female | 0.033 (0.073) | 0.035 (0.073) | −0.293 (0.099)*** | −0.223 (0.099)** |
| Female Professor | — | −0.041 (0.068) | −0.292 (0.087)*** | 0.403 (0.607) |
| Female × Female Prof. | — | — | 0.621 (0.136)*** | 0.476 (0.140)*** |
| Female Professor × GPA | — | — | — | 0.602 (0.148)*** |
| Female Professor × Math Quiz | — | — | — | 0.028 (0.033) |
| Female Professor × Math ACT | — | — | — | 0.005 (0.026) |
| Female Professor × Science ACT | — | — | — | 0.022 (0.039) |
| Female Professor × English ACT | — | — | — | −0.057 (0.021)*** |
| Female Professor × Reading ACT | — | — | — | 0.010 (0.023) |
| Constant | −3.209 (0.317)*** | −3.193 (0.318)*** | −3.243 (0.314)*** | −3.54 (0.451)*** |
| F | 28.74 | 26.06 | 26.59 | 19.88 |
| Number of Observations | 535 | 535 | 535 | 535 |

^{+} Minority = 1 if a student reports being Black, Hispanic, Asian, or Native American and = 0 if a student reports their race as white. Sophomore = 1 if a student reports being a sophomore at the university and = 0 otherwise. Male students in a male professor's course comprise the baseline comparison category. Numbers in parentheses are standard errors. We use "*" to denote significance at the 10% level, "**" to denote significance at the 5% level, and "***" to denote significance at the 1% level. We report heteroskedasticity-robust standard errors for all regressions.
For example, in Regression 1, the coefficient on the minority dummy variable indicates that a minority student is predicted to earn a grade that is 0.434 standardized grade points (nearly half a grade point) lower than a nonminority student. A one-point increase in GPA is associated with roughly a one-point increase in course grade. Further, for each additional question a student answered correctly on the math quiz, the expected course grade increases by roughly 0.05 grade points.
Because we are interested in how student and professor gender may influence student performance in the course, we consider several ways in which these variables could influence course grade. In the first regression (Table 3, column 2), student gender alone is not a significant contributor to student performance in introductory business statistics. The estimated coefficient on "female" is positive, indicating that women are predicted to perform slightly better than men, though this result is not significant. In our second regression specification (Table 3, column 3), we add the gender of the professor as an explanatory variable, but find that professor gender is also not a significant contributor to student performance. Students are predicted to do only slightly worse in the female professors' classes, and again, this coefficient is not significant. These results are generally consistent with existing studies (e.g., some of those described in the literature review section above) that find student gender has a minimal impact on course performance in introductory statistics.
The results of using this student and professor gender interaction term appear in the third regression specification in Table 3 (column 4). It is crucial to note, however, that these regression results cannot be interpreted immediately, but rather must be used in tandem with the definitions of the gender dummy variables to construct the impact of the gender-based interaction terms. To calculate the interaction effect, we proceed as follows: a male student in a male professor's course would have values of 0 for "Female", 0 for "Female Professor", and therefore a value of 0 for "Female × Female Professor." Thus his combined student-professor gender coefficient would be calculated as
γ_{1}(0) + γ_{2}(0) + γ_{3}(0) = 0.  (2)

Similarly, a male student in a female professor's course would have values of 0 for "Female", 1 for "Female Professor", and therefore a value of 0 for "Female × Female Professor." His combined student-professor gender coefficient would be calculated as

γ_{1}(0) + γ_{2}(1) + γ_{3}(0) = γ_{2} = −0.292.  (3)
In contrast, a female student in a male professor's course would have values of 1 for "Female", 0 for "Female Professor", and therefore a value of 0 for "Female × Female Professor." Her combined student-professor gender coefficient would be calculated as
γ_{1}(1) + γ_{2}(0) + γ_{3}(0) = γ_{1} = −0.293.  (4)
Finally, a female student in a female professor's course would have values of 1 for "Female", 1 for "Female Professor", and therefore a value of 1 for "Female × Female Professor." Thus her combined student-professor gender coefficient would be calculated as
γ_{1}(1) + γ_{2}(1) + γ_{3}(1) = γ_{1} + γ_{2} + γ_{3} = −0.293 − 0.292 + 0.621 = 0.036.  (5)
In Table 4 we summarize the estimated combined interaction coefficients, where, again, a male student in a male professor's course is our baseline comparison category. Male students in a female professor's course are predicted to perform 0.292 standardized grade points (nearly a third of a letter grade) worse than male students in a male professor's course. Similarly, female students in a male professor's course can expect to do 0.293 standardized grade points worse than a male in a male professor's course. Both of these results are statistically significant at the 1% level. The last result in Table 4 implies that female students in a female professor's class perform no better (in a statistical sense) than a male student in a male professor's class. In summary, students perform significantly worse when learning introductory business statistics from an economics professor of the opposite gender.
Table 4. Impact of Professor Gender on Student Performance

| Student-Professor Gender | Combined Coefficient (Standard Error) |
|---|---|
| Male Student in a Male Professor's Course | 0.000 |
| Male Student in a Female Professor's Course | −0.292 (0.087)*** |
| Female Student in a Male Professor's Course | −0.293 (0.099)*** |
| Female Student in a Female Professor's Course | 0.036 (0.100) |
It is tempting, but incorrect, to conclude that the coefficients 0.292 and 0.293 (from Table 4) are redundant; in fact, they quantify distinct interaction effects. Because we have professors and students of both genders, we have four distinct professor-student pairings. As is standard practice with dummy variables, one of the four pairings must serve as the baseline (in our case, male professor and male student) against which the remaining three pairings are compared; this is transparent in Table 4, where the first entry (male student with male professor) delivers no effect, by construction. Additionally, the comparable magnitude of the second (0.292) and third (0.293) results in Table 4 is not without precedent; Dee's (2007) gender-based study of 8^{th} graders' performance on standardized mathematics and reading exams contains a similar result.
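The four combined coefficients in Table 4 follow mechanically from the three Regression 3 gender estimates, as in equations (2) through (5). A small sketch, using the point estimates with the signs implied by the surrounding text:

```python
# Regression 3 gender estimates: Female, Female Professor, and their interaction
g1, g2, g3 = -0.293, -0.292, 0.621

def combined(stugen, profgen):
    """Combined student-professor gender effect for a given dummy pattern."""
    return g1 * stugen + g2 * profgen + g3 * stugen * profgen

effects = {
    "male student / male prof":     combined(0, 0),  # baseline, zero by construction
    "male student / female prof":   combined(0, 1),  # gamma_2
    "female student / male prof":   combined(1, 0),  # gamma_1
    "female student / female prof": combined(1, 1),  # gamma_1 + gamma_2 + gamma_3
}
```

The same-gender pairing nets out to 0.036, statistically indistinguishable from the baseline, while each cross-gender pairing carries a penalty of roughly 0.29 standardized grade points.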
In an effort to further analyze the cross-gender effect found in the previous section, we return to the variables in our study that show significant differences by student gender (refer back to Table 2): university GPA, score on the basic math skills quiz, and score on the science portion of the ACT exam. The first two of these are also highly associated with performance in introductory business statistics, as shown in our regressions. However, only university GPA and English ACT score show a significant difference by professor gender.
By adding interaction terms between professor gender and GPA, ACT subscores, and the math quiz score, we can compare the effects that these factors have on course grade in the male and female professors' courses, even while controlling for student gender; see Table 3, column (5). Two of these interaction terms are statistically significant. First, students with higher GPAs receive a greater return from female professors. For students in a female professor's course, a one-point increase in GPA is associated with a 1.319 increase in course grade, whereas in a male professor's course, a one-point increase in GPA is associated with only a 0.717 increase. (The effect in a female professor's course is the sum of the GPA coefficient and the GPA-by-female-professor interaction coefficient, 0.717 + 0.602 = 1.319, whereas the effect in a male professor's course is the GPA coefficient alone, 0.717.)
Note that both coefficients are statistically significant and positive; GPA is still a significant and positive contributor to course grade in male professors’ classes. However, student GPA is an even stronger contributor in the female professors’ classes.
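The marginal-effect arithmetic behind these two numbers can be made explicit. The 1.319 and 0.717 figures are from the text above; the 0.602 interaction coefficient is inferred here as their difference (1.319 - 0.717) and should be read as illustrative of the calculation, not as a value quoted from Table 3.

```python
# Combined marginal effect of a one-point GPA increase: main effect plus
# a professor-gender interaction term (interaction value inferred as
# 1.319 - 0.717 from the effects reported in the text).
beta_gpa = 0.717             # GPA main effect (male professors' courses)
beta_gpa_x_femprof = 0.602   # GPA x female-professor interaction (inferred)

effect_male_prof = beta_gpa
effect_female_prof = beta_gpa + beta_gpa_x_femprof

print(effect_male_prof, round(effect_female_prof, 3))  # 0.717 1.319
```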
The other significant difference is in English ACT score. For students in a male professor's course, English ACT is a statistically significant and positive contributor to course grade: ten additional English ACT points are associated with roughly a third of a grade-point increase. For female professors, however, the contribution of English ACT to course grade is significantly lower than in the male professors' classes, to the point that English ACT has no statistically significant effect on course grade in the female professors' classes.
We have examined the effect of student and professor gender on performance in introductory business statistics courses taught by economics faculty. In doing so, we controlled for academic and personal background, including pre-existing skills in areas relevant to statistical analysis, e.g., math and reading skills. We find that students taught by a professor of the opposite gender fare significantly worse than students taught by a professor of the same gender. In addition, the size of this effect is relatively large, amounting to roughly three-tenths of a grade point – easily enough to alter many students' grades by a half-letter. By revisiting some of the possible explanations given for cross-gender effects in Section 2, we can gain some insight into the results of our study.
There has been some research showing that males perform better on multiple-choice formats and females on essay formats, and that males perform better when course grade is determined largely by exams while females perform better when course grade is determined in large part by homework (Dynan and Rouse 1997, Robb and Robb 1999, Walstad and Robson 1997). Recall that in our study we have two professors of each gender; Table 5 summarizes the composition of exams for each professor, as well as the overall percentage of the course grade determined by exams. We ran separate regressions for each professor and examined the coefficient on the dummy variable indicating whether the student was female. The cross-gender effect is still evident in these professor-specific regressions, with the most significant differences occurring for Professor 1 and Professor 3. However, we cannot attribute the cross-gender effect solely to the use of multiple-choice exams or to the weight given to exams in the course grade. For example, female Professor 3 used multiple-choice exams for a large portion of the course grade, yet female students still significantly outperformed male students in her class, all else equal.
Table 5. Exam Composition and Contribution to Course Grade, by Professor

Professor              Exam Composition                      Exam Contribution to Course Grade    Coefficient for Female (Std. Err.)
Professor 1 (Male)     Multiple Choice                       90%                                  -0.279 (0.123)**
Professor 2 (Female)   Written Problems                      60%                                   0.179 (0.119)
Professor 3 (Female)   Multiple Choice & Written Problems    75%                                   0.322 (0.162)**
Professor 4 (Male)     Written Problems                      70%                                  -0.043 (0.200)
Another question often raised is whether male professors tend to reward "male" skills and female professors tend to reward "female" skills when grading. We find no evidence that such differences can explain the cross-gender effect in this study. While the males in our sample scored significantly higher than the females on the basic math quiz and on the science portion of the ACT exam, neither of these variables demonstrated statistically significant differences in their contribution to course grade by professor gender. Conversely, the grades given by male professors seem to more highly reward the skills measured by ACT English scores, but we found no significant difference in ACT English scores by student gender.
The only variable for which we found a statistically significant difference for both the students and the professors was university GPA. Our data reveals that the female students have significantly higher university GPAs, and that overall (that is, even when controlling for student gender), students receive a higher return on the skills associated with a high GPA in courses with female professors. This could explain why female students do better in female professors’ courses, except that it is not clear exactly which skills are associated with a higher university GPA.
Our data reveal no salient explanation for the professor-student gender effect, suggesting that there may be more subtle explanations for the differences. Candidate explanations include the role-model effect, which submits that students may be inspired by instructors of the same gender to perform better in courses, and the Pygmalion effect, which suggests that some students may perform better than others simply because their instructors expect them to (the expectations may be conveyed either overtly or subconsciously). However, both of these effects are extremely difficult to document (Canes and Rosen 1995; Robb and Robb 1999).
The professor-student gender effect observed in our study raises some interesting pedagogical and institutional issues. Given the sizes of the estimates reported in this study, professors have a substantial impact on the relative performance of male and female students enrolled in their courses, in ways the professors may not realize. Academic departments with an unbalanced gender distribution among the faculty who teach statistics may be adversely impacting students of a particular gender. Because a wide variety of professors teach statistics at the university level – statisticians, mathematicians, psychologists, and economists, among others – it is important to examine teaching and learning in statistics across these different fields to see if any trends or patterns can be consistently observed (Jolliffe 2003).
In addition to institutionlevel considerations, the results of this study suggest that departments may wish to consider how student and professor genders interact to influence the learning environment. Departments could pay special attention to the course structure and skills they value when designing courses and grading schemes. Professors themselves may also seek to more carefully consider the skills rewarded in the courses. Further, while we cannot make any general claims based on our data, anecdotal evidence in the statistics education literature suggests that professors may indeed choose methods of presentation, stories, and examples that are potentially more attractive to their own gender.
Ultimately, however, this study only documents student-professor gender effects; it does not identify the root causes of student sensitivity to opposite-gender professors. Nor does it comment on the likely outcomes of hiring more professors of a particular gender, on single-gender classrooms, or on single-sex institutions. Instead, the results of the study suggest that the gender interactions of students and professors are important, and that it would be beneficial to better understand why this interaction occurs. We suggest that this may be a fruitful line for future inquiry.
Ballard, C., and Johnson, M. (2005), "Gender, Expectations, and Grades in Introductory Microeconomics at a US University," Feminist Economics, 11, 95-122.
Bloom, B. (1976), Human Characteristics and School Learning, New York: McGraw-Hill Book Company.
Borg, M., and Stranahan, H. (2002), "Personality Type and Student Performance in Upper-Level Economics Courses: The Importance of Race and Gender," Journal of Economic Education, 34, 3-14.
Brooks, C. (1987), "Superiority of Women in Statistics Achievement," Teaching of Psychology, 14, 45.
Buck, J. (1985), "A Failure to Find Gender Differences in Statistics Achievement," Teaching of Psychology, 12, 100.
Canes, B., and Rosen, H. (1995), "Following in Her Footsteps? Women's Choices of College Major and Faculty Gender Composition," Industrial and Labor Relations Review, 48, 486-504.
Chance, B., and Garfield, J. (2002), "New Approaches to Gathering Data on Student Learning for Research in Statistics Education," Statistics Education Research Journal, 1, 38-44.
Cochran, J. (2005), "Can You Really Learn Basic Probability by Playing a Sports Board Game?" The American Statistician, 59, 266-272.
Dee, T. (2007), "Teachers and the Gender Gaps in Student Achievement," Journal of Human Resources, 42, 528-554.
Dynan, K., and Rouse, C. (1997), "The Underrepresentation of Women in Economics: A Study of Undergraduate Economics Students," Journal of Economic Education, 28, 350-368.
Fennema, E., Peterson, P., Carpenter, T., and Lubinski, C. (1990), "Teachers' Attributions and Beliefs about Girls, Boys, and Mathematics," Educational Studies in Mathematics, 21, 55-69.
Freeman, C. (2004), Trends in Educational Equity of Girls and Women: 2004, NCES 2005-016, U.S. Department of Education, National Center for Education Statistics, Washington, DC: U.S. Government Printing Office.
Grove, W., Wasserman, T., and Grodner, A. (2006), "Choosing a Proxy for Academic Aptitude," Journal of Economic Education, 37, 131-147.
Hanushek, E. (1979), "Conceptual and Empirical Issues in the Estimation of Education Production Functions," Journal of Human Resources, 14, 351-388.
Harraway, J. (2002), "Factors Affecting Performance in a University Service Course on Biostatistics: An Update," International Conference on Teaching Statistics, ICOTS6, http://www.stat.auckland.ac.nz/~iase/publications/1/4i1_harr.pdf.
Hilton, S., and Christensen, H. (2002), "Evaluating the Impact of Multimedia Lectures on Student Learning and Attitudes," International Conference on Teaching Statistics, ICOTS6, http://www.stat.auckland.ac.nz/~iase/publications/1/6f3_hilt.pdf.
Jensen, E., and Owen, A. (2000), "Why are Women Such Reluctant Economists? Evidence from Liberal Arts Colleges," American Economic Review, 90, 466-470.
Johnson, M., and Kuennen, E. (2006), "Basic Math Skills and Performance in an Introductory Statistics Course," Journal of Statistics Education, 14, online: http://jse.amstat.org/v14n2/johnson.html
Jolliffe, F. (2003), "Towards a Database of Research in Statistical Education," Statistics Education Research Journal, 2, 47-58.
Jones, S., and Dindia, K. (2004), "A Meta-Analytic Perspective on Sex Equity in the Classroom," Review of Educational Research, 74, 443-471.
Krieg, R., and Uyar, B. (2001), "Student Performance in Business and Economics Statistics: Does Exam Structure Matter?" Journal of Economics and Finance, 25, 229-241.
Robb, R., and Robb, A. (1999), "Gender and the Study of Economics: The Role of Gender of the Instructor," Journal of Economic Education, 30, 3-19.
Sadker, M., and Sadker, D. (1995), Failing at Fairness: How Our Schools Cheat Girls, New York: Touchstone.
Scheaffer, R., and Stasny, E. (2004), "The State of Undergraduate Education in Statistics: A Report from the CBMS 2000," The American Statistician, 58, 265-271.
Schram, C. (1996), "A Meta-Analysis of Gender Differences in Applied Statistics Achievement," Journal of Educational and Behavioral Statistics, 21, 55-70.
Siegfried, J. (1995), "Trends in Undergraduate Economics Degrees: A 1993-1994 Update," Journal of Economic Education, 26, 282-287.
Stage, F., and Kloosterman, P. (1995), "Gender, Beliefs, and Achievement in Remedial College-Level Mathematics," Journal of Higher Education, 66, 294-311.
Suits, D. (1957), "Use of Dummy Variables in Regression Equations," Journal of the American Statistical Association, 52, 548-551.
Utts, J., Sommer, B., Acredolo, C., Maher, M., and Matthews, H. (2003), "A Study Comparing Traditional and Hybrid Internet-Based Instruction in Introductory Statistics Classes," Journal of Statistics Education, 11, online: http://jse.amstat.org/v11n3/utts.html
Walstad, W., and Robson, D. (1997), "Differential Item Functioning and Male-Female Differences on Multiple-Choice Tests in Economics," Journal of Economic Education, 28, 155-171.
Williams, M., Waldauer, C., and Duggal, V. (1992), "Gender Differences in Economic Knowledge: An Extension of the Analysis," Journal of Economic Education, 23, 219-231.
M. Ryan Haley
Department of Economics
University of Wisconsin  Oshkosh
800 Algoma Blvd.
Oshkosh, WI 54901
haley@uwosh.edu
Marianne Johnson
Department of Economics
University of Wisconsin  Oshkosh
800 Algoma Blvd.
Oshkosh, WI 54901
johnsonm@uwosh.edu
Eric Kuennen, Corresponding Author
Department of Mathematics
University of Wisconsin  Oshkosh
800 Algoma Blvd.
Oshkosh, WI 54901
kuennene@uwosh.edu