Thomas E. Love
Case Western Reserve University
Journal of Statistics Education v.8, n.1 (2000)
Copyright (c) 2000 by Thomas E. Love, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.
Key Words: Cooperative learning; Management education; Projects; Regression; Short courses; Teams.
An approach used to assess project team work in a condensed (half-term) elective course is discussed. The instructor's evaluation method signals appropriate course goals to students. The scheme described encourages student groups to prepare presentations that will be attractive to people who will evaluate their work in the real world. Colleague comments determine one-half of each student's course grade. Students are randomly selected to lead the presentations, ensuring that all students are thoroughly involved in the process (including assessment). A report on the projects (and comments) completed by Masters of Business Administration (MBA) students at a midwestern school of management is provided, along with the inventory used to assess each team's work.
1 Many management recruiters now require prospective employees to analyze complex business scenarios in teams as part of the interview process. Candidates who come across as poor analysts often pay insufficient attention to several issues under the domain of modern statistics. Effective managers use data well. They are in the habit of plotting data. They can translate real-world problems into key questions that can be illuminated with data. Perhaps most of all, they ably communicate their results and recommendations.
2 With this in mind, educators need to encourage students to practice the skills most directly related to modern management practice. Effective electives must do more than provide tools for more advanced work in other functional areas. The development of lifelong statistical and analytical reasoning skills is pivotal and has been a focus of curricular reform. For examples, see Smith (1998) and Bradstreet (1996), among many others.
3 Sowey (1995) argued persuasively that statistics "learning that lasts" is a realistic and relevant goal for educators, and that the demonstration of practical usefulness is what motivates students to learn and retain statistical ideas. Sowey suggested that appropriate assignments are those in which
... students play the active role in formulating a verbally-stated problem in statistical terms, and must struggle with ill-conditioned data, decide for themselves the most appropriate technique to apply, cope with unanticipated analytical obstacles, and finally write a professional report on their investigations. (par. 35)
4 Similar conclusions have arisen from the Making Statistics More Effective in Schools of Business (MSMESB) conferences. See Easton, Roberts, and Tiao (1988) and Roberts (1994) for more details. An essential finding is that, for students confronting real-world business problems, there is no substitute for direct, hands-on experience with using data to make decisions.
5 Since the fitting of standard statistical models is now trivially easy, there is a clear need for more detailed instruction in statistical thinking. This implies a focus on understanding and forecasting in various data settings, on measuring how well models fit data, and on making meaningful inferences when assumptions fail. Love (1998) discussed a full-semester course that attempts to meet these goals. In this paper, a half-semester elective (or mini-course) in regression analysis is described that meets some of these challenges. The most important goal for the mini-course, stated in the syllabus and at the first session, is that students learn to understand and communicate the results of a statistical analysis.
6 Of special interest here is an assessment approach that encourages students to behave as they will in real-life decision-making scenarios. Students establish relevant and interesting research questions relating to a problem of interest, procure data to help answer the questions, pose other questions, and finally communicate their results to an audience of their peers.
7 The fact that virtually all MBA students are uninterested in statistics as a career mandates attention to marketing and flexible scheduling for elective courses. Mini-courses can be an attractive approach, as they allow time-pressed students to diversify their curriculum. A Fall 1997 survey at the Weatherhead School of Management (Weatherhead) concluded that roughly three times as many MBA students were interested in taking a half-course elective in statistics as were willing to consider a full course.
8 The compressed schedule of a mini-course makes thoughtful evaluation of student work a challenge, but it perhaps provides a more accurate reflection of common management practice. Managers need to be agile -- performing well in rapidly changing settings. For instance, in oral presentations, they must react well to unanticipated questions from the floor. Generally, they need to convince an audience (in a short time frame) of the importance of their conclusions and the value of their recommendations.
9 Several authors have discussed the effective assessment of student performance in statistics classes. Garfield (1994) described a "mismatch" between traditional methods of evaluating student performance and desirable outcomes. Hubbard (1997) and Cobb (1993) illustrated the strong relationship between student learning and methods of assessment. In consulting, statisticians recognize that measure specification is extremely important, especially when the measure is used for assessment of employees. Students will adjust their learning strategies in light of our assessment methods, and learning goals must therefore be clearly reflected in the manner in which instructors evaluate them. Assessment methods directly signal the instructor's values.
10 The context for this assessment scheme is a mini-course for second-year MBA students at Weatherhead. Students entered the course with a reasonably good understanding of correlation and simple regression, but limited exposure to multiple regression. Most students in the group attended school full-time.
11 The mini-course consisted of five weekly meetings, each two and one-half hours long, followed two to three weeks later by a full afternoon of project presentations. This schedule first exposed students to regression applications related to several functional areas of management. Next, project teams applied their learning to a problem and datasets of their own choosing and finally communicated their results in an oral presentation and a written executive summary.
12 Case studies illustrated the value of broad regression concepts. In each session, students were exposed to problems that motivated new ideas. Specific topics covered were residual analysis and diagnostics, transformations, model selection (best subsets and stepwise approaches), model validation (through split samples and forecasting analysis), modeling autocorrelation, as well as a review of some hypothesis testing, partial correlation, and prediction results. Students used Chapters 12-14 from Hildebrand and Ott (1998) for background material and a few examples.
13 The first class meeting took place in a computer lab, where students performed several analyses on instructor-provided mini-cases in real time using Statistix, SPSS, and Excel software. The remaining four sessions used a regular classroom equipped with a computer cart for the instructor. The instructor used PowerPoint slides, Statistix, and SPSS to provide focus for the sessions, obtain graphical and numerical analyses, and test out approaches suggested by the students. A computer training room housed the project presentations. All project teams used PowerPoint in their presentations. The teams used a variety of statistical software, including some programs not used in class.
14 Each student team met with the instructor several times outside of class to discuss problems with their data and analyses and to describe their planned outline for the presentation. E-mail was an effective vehicle for fast two-way feedback on smaller issues. The instructor acted as a facilitator, providing advice on the use of software and statistical tools, and encouraging each student to develop his or her own understanding of each new problem.
15 The Appendix gives a set of instructions for developing research questions and project proposals. The spirit of these instructions owes much to Roberts (1991). Chapter 12 of Frees (1996) and Chapter 17 of Hildebrand and Ott (1998) have also proven useful in steering students toward more effective communication.
16 In the MBA curriculum at Weatherhead, course projects are nearly always done in teams. As Garfield (1993) pointed out, businesses are eager to hire people who can work collaboratively to solve problems. These course teams were self-selected groups of two to four people. Typically, a team formed as a result of one student's suggesting a researchable problem that sparked the interest of others. Keeler and Steinhorst (1995) and Giraud (1997) presented evidence of the advantages of small groups in developing effective cooperative learning in introductory courses.
17 Project teams foster cooperative learning. Because the team project constituted 70% of the mini-course grade, students were well-motivated to make a serious effort to do the best possible job. Occasionally, colleagues suggested that an assessment scheme based substantially on group work could lead to situations where one student (often the most confident) does all of the work. Several steps were taken to ensure that all students in the group were involved in all stages of the project.
18 After receiving feedback on their initial proposal, all team members met with the instructor to respond to any comments. The instructor was careful to involve all students in this process, asking questions, challenging their arguments, and encouraging them to express their perspectives on each new aspect of the problem. Fifteen percent of the course grade was based on class participation (another 15% was based on three case studies, and the remainder on the project), and these sessions allowed for a more careful assessment of effort in this area. In the Fall 1998 session, two sections of the mini-course were offered, with 16 and 14 students, respectively. With such small groups in each section, it was possible for almost everyone to contribute ideas at each class session.
19 At each meeting, the instructor informally asked one group member to describe the contributions made by each member of the group, including themselves. In project settings where there are multiple tasks to be performed, students gravitate toward their areas of greatest comfort and familiarity. One typical team arrived with a plan of action: one student would prepare the slides, another would write the executive summary, while the other two would handle the bulk of the data collection and analysis. The students were encouraged instead to participate in all facets of the project, and this led to a series of team meetings consolidating the approaches and ideas of a diverse group. On completing the project, the group expressed their gratitude for being forced to take on unfamiliar roles.
20 As noted previously, the most critical aim for the mini-course was that students understand and communicate the results of a statistical analysis. To assess this accurately, an oral presentation combined with a written executive summary seemed most effective and relevant to their career goals. There are some tradeoffs to consider. Assigning a lengthy paper on top of the case studies, readings, and an oral presentation was deemed too heavy a workload for a half-semester offering. With five team presentations in the first section, and four more in the second, it was impossible to allow more than 30 minutes per team, including time for the raters to complete an assessment instrument thoughtfully.
21 It seemed crucial to ensure that all students on a team be prepared to give the entire presentation. Random selection of speaker order appears to be an effective way to achieve this goal. The instructor was prompted to try this approach by a successful similar experience reported by Hopfe and Taylor at the Iowa MSMESB conference in June 1998. To help students buy in to this idea, it was helpful to point out that in business, any member of a project team can be called on to give a presentation at a moment's notice, for instance, in case of illness.
22 The details of this random selection may be of interest. The students were asked to prepare a 20-minute talk that could be divided into four pieces -- an introduction of one to two minutes in length, two longer sections of eight to nine minutes each, and a conclusion of one to two minutes. These sections were all keyed to individual slides, smoothing transitions. None of the nine teams experienced serious difficulty in conforming to this format.
23 At the start of each presentation, each team member drew a card numbered one through four. The students receiving cards one and two gave the two main parts of the presentation (in that order), and students three and four gave the introduction and conclusion (in that order). For groups with three students, one student did both the introduction and conclusion, and for the one group with two students, the project was divided in half, with speaking order determined at random.
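The card draw described above amounts to drawing a random permutation of the team. A minimal sketch in Python (names and function are illustrative; the course used physical cards, not software):

```python
import random

# Presentation parts keyed to cards 1-4, as described above.
PARTS = ["first main section", "second main section", "introduction", "conclusion"]

def assign_speaking_parts(team):
    """Simulate the card draw: each member draws one card without
    replacement, and card k maps to PARTS[k - 1]."""
    cards = random.sample(range(1, len(team) + 1), len(team))
    return {member: PARTS[card - 1] for member, card in zip(team, cards)}

# Illustrative four-person team (hypothetical names).
team = ["Ana", "Ben", "Carla", "Dev"]
print(assign_speaking_parts(team))
```

For a three-person team, the student who draws the unused card would take both the introduction and the conclusion, as in the course.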
24 In their course evaluations, several students remarked that this was an unusual but effective method of ensuring that all students were familiar with all parts of the presentation and analysis. Two students remarked that they were glad to have the opportunity to explain the parts of the analysis they had completed to the rest of the group, and then to quiz them on it to ensure that they were ready to give the talk.
25 Nine projects were completed in the Fall 1998 mini-course. Three of the projects studied problems directly related to the management of specific companies. In each of these cases, one of the team's members had a prior or current affiliation with the company. One team studied the selling processes of an industrial gas manufacturer and distributor of welding and cutting hardgoods. In particular, the goal of the project was to develop an understanding of the interrelationships between a variety of predictors under the control of marketing managers and the gross margin of various branches.
26 Another team attempted to study the question of market prioritization for a large manufacturer with a substantial retail business. The interest here was in developing a scoring scheme for potential new branch locations on the basis of a careful look at current retail distribution sites, and a wide range of demographic and market share information.
27 A third team studied four possible explanations of variation in profitability within a large network of branches of a distributor of industrial products. Of the four theories posited by members of the firm's management team, proxies for activity-based costing and efficiency metrics showed more direct relationships with branch profit margins than did measures of scale and of product mix.
28 Three teams studied longitudinal data on financial indices, generally quite successfully. One team studied globalization's effect on volatility in a large number of international stock markets. They were able to develop a series of useful models for a control group of relatively stable markets (U.S., U.K., Japan, and Germany) and two groups of "emerging markets" (including larger economies such as China, Brazil, and Mexico and smaller markets such as Indonesia, Malaysia, and Thailand). The two other teams working in this area generated forecasting tools for, respectively, the Standard & Poor's 500 index, and the 90-day U.S. Treasury Bill. They made use of interesting sets of predictors and drew some intriguing, though not likely to be profitable, conclusions.
29 The other three teams studied issues somewhat outside the traditional management curriculum. One team attempted to construct an understandable meta-analysis describing recent research findings regarding colon and prostate cancers. Though unsuccessful in obtaining useful patient-level data, the team supplemented this work with several models describing cancer rates in 46 nations and in the 50 states on the basis of various demographic characteristics.
30 Finally, two teams studied issues related to sports. One group gave an entertaining presentation on what might reasonably be expected from the expansion Cleveland Browns National Football League franchise in terms of on-field performance. To do this, they created a series of models describing the most recent 21 expansion teams in football, baseball, hockey, and basketball. While the inferential structure was a bit shaky, the students were able to pose some key questions that appear to be closely related to the eventual success of the teams. The other group studied the World Cup Soccer tournament, attempting to model the ranking of nations in the tournament on the basis of demographic characteristics and results in prior international competition.
31 The instructor began constructing a presentation assessment tool by asking students to suggest items that would be interesting and relevant, for course participation credit. The final inventory combined some of these items with some standard "minute paper" questions and some specific areas of special interest to the professor in light of desirable student outcomes. Garfield (1994) made several suggestions for providing more detailed feedback to students than just an overall grade, including the use of several categories (like "understands the problem" or "describes an effective solution") that are mirrored in the inventory. The resulting items are focused primarily on analysis and evaluation objectives, as defined in Bloom's (1956) taxonomy.
32 Gal and Ginsburg (1994) advocated the use of both Likert-type items and open-ended questions in exploring students' feelings about statistics. It seems reasonable to include both types of items in the assessment process when students are exposed to colleagues' presentations. Weatherhead students are exposed to combined inventories in course evaluations. The instructor actively encouraged raters to explain their responses to the Likert items in the comments section, and several students took advantage of this opportunity.
33 Educational assessment is an area that has been extensively studied. Erwin (1991) provided useful information on establishing objectives for outcome assessment and on the design of new methods in light of institutional needs. Gronlund (1965) provided a thorough discussion of the issues involved in using peer appraisal and self-report in evaluating learning and development. In creating the inventory, the instructor was also influenced by Payne's (1974) discussion of affective and performance-based assessment and by the literature on outcome orientation in education, particularly as discussed in Boyatzis et al. (1995).
34 For each team, 10 to 16 raters completed an evaluation. The raters included all students in that section who were not on the team, a graduate student in operations research serving as teaching assistant, and guests, including students from the other section and a marketing professor who sat in on the course. Also, the instructor completed an evaluation, but these scores were kept separate from the rest of the group.
35 The inventory is one page long, with open-ended questions and comment opportunities on one side and a series of ten Likert-scale items on the other. The front side of the inventory requires answers to the five open-ended questions listed below, with ample space for raters to give a complete response. It was imperative to provide diagnostic information about where presentation and analytic skills were most in need of improvement in the eyes of the eventual target audience. To that end, the questions consider the impression left by the presentation, rather than solely the details of the analysis. On completing the course, students received a copy of the comments from all raters, with the instructor's comments indicated.
36 For the following ten items, the raters were asked to provide a rating on a six-point Likert scale where 1 = Strongly Disagree and 6 = Strongly Agree. The main motivation for using an even number of response categories was to force respondents "in the middle" to provide some indication of which way they were leaning. Sudman and Bradburn (1982) and Sommer and Sommer (1997) provided background and suggestions on the use of forced-choice Likert scales in attitudinal surveys. Mean item scores for the groups ranged from a low of 4 to a high of 5.81. A total of 118 ratings were received for the nine teams. No scores of 1 were given for any question, and the modal score was 5 for all questions (across teams).
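Summary statistics of the kind reported above (mean and modal item scores) are straightforward to compute. A quick sketch in Python, using hypothetical ratings rather than the actual Fall 1998 data:

```python
from statistics import mean, mode

# Hypothetical ratings for one team on one item, on the 1-6
# forced-choice scale (illustrative values, not the course data).
ratings = [5, 5, 6, 4, 5, 6, 5, 4, 5, 5, 6, 5]
print(f"mean = {mean(ratings):.2f}, mode = {mode(ratings)}")  # → mean = 5.08, mode = 5
```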
37 Finally, as a validation check for the methodology, and to try to capture the possibility that an important element in the evaluation of the presentation was missing from the item set, raters specified an overall grade for the presentation (A+ through F). These results were then converted to the usual four-point scale.
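The letter-to-number conversion can be made explicit. A sketch, assuming a conventional mapping (the paper does not give the exact increments, so values such as A- = 3.7 are assumptions):

```python
# Assumed mapping of letter grades to the four-point scale; capping
# A+ at 4.0 and using +/- steps of 0.3 is conventional, not from the paper.
GRADE_POINTS = {
    "A+": 4.0, "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "D-": 0.7,
    "F": 0.0,
}

def mean_rater_grade(letter_grades):
    """Convert each rater's letter grade and average on the four-point scale."""
    return sum(GRADE_POINTS[g] for g in letter_grades) / len(letter_grades)

print(mean_rater_grade(["A", "A-", "B+"]))  # (4.0 + 3.7 + 3.3) / 3
```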
38 In general, the scheme worked well. The scores provided by the raters closely matched the presentation grades of the instructor. The instructor's and raters' rankings of the presentations matched in all but two of the cases. The scores shown in Table 1 combine the sum of the ten Likert scale responses (70% of the weight) and the mean rater grades (20% of the weight) with a score based on the instructor's subjective assessment of the comments in response to the open-ended questions (10% of the weight). Spearman's rank correlation coefficient is .85, which is statistically significant at the .01 level.
Table 1. Fall 1998 Scores and Rankings
Group | Peer Score | Peer Rank | Instructor Score | Instructor Rank
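The weighting scheme and the rank-correlation check described in paragraph 38 can be sketched as follows. The rescaling of each component to a common range is an assumption for illustration (the paper reports only the weights), and the team scores below are hypothetical:

```python
def combined_score(likert_sum, mean_grade, comment_score):
    """Weighted peer score: 70% Likert sum (ten items, max 6 each),
    20% mean rater grade (four-point scale), 10% instructor's reading
    of the open-ended comments (here assumed to be scored out of 10).
    Rescaling each component to [0, 1] before weighting is an assumption."""
    return (0.70 * likert_sum / 60.0
            + 0.20 * mean_grade / 4.0
            + 0.10 * comment_score / 10.0)

def spearman(x, y):
    """Spearman's rank correlation for two equal-length lists with no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical peer and instructor scores for five teams.
peer = [52.1, 48.3, 55.0, 50.2, 46.8]
instructor = [50.0, 47.5, 56.0, 49.0, 48.0]
print(spearman(peer, instructor))  # → 0.9
```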
39 Students rated the Cleveland Browns expansion problem third best overall, and the study of four explanations of profitability for a distributor of industrial products sixth, while the instructor had those two rankings reversed. What disagreements there were tended to focus on the tenth question, relating to audience involvement. The raters gave the Browns group the highest score of all nine teams (5.64) on question 10, while the instructor gave them only a 5. The industrial products team scored a 6 from the instructor on this dimension, but only a 4.40 from the raters.
40 Fifty percent of the course grade was based on the average presentation ratings provided by fellow students, the teaching assistant, and guests. Ten percent of the course grade was based on the instructor's evaluation of the presentation, 10% on the instructor's rating of the written executive summary and appendix, and the remaining 30% was based equally on class participation and three case study reports. In general, the students were somewhat more lenient than the instructor in grading their peers on the overall (A+ through F) scale, but their individual Likert scores were quite close to those of the instructor.
41 The most important part of the evaluation for the teams was the reaction of the raters in the open-ended comments section. Here, there was clear value added for the student teams in hearing comments from the raters. All students in the class rated the other project teams, and they were encouraged to ask questions during the presentations. While friendly, the students were clearly willing to ask tough questions and to try to poke holes in the sloppier parts of the arguments presented by the teams. A few words from the instructor at the beginning of the presentation sessions on the desirability of a cooperative atmosphere achieved the desired effect. As Smith (1998) noted, audiences tend to go easier on struggling speakers and to press more confident speakers harder.
42 This is a common question on minute papers, where a speaker is trying to determine whether the take-home message of a talk has come across effectively. Several raters interpreted this question as requesting a statement of the best part of the presentation. For example, for the industrial parts distributor team, one rater wrote that "It was really easy to understand their analysis. I really liked that they brought the data back to a cause -- meaning -- the data was a tool to support the rest of their theories and ideas."
43 Another rater suggested that "this was an interesting study on a real life situation, which might then be used as an appraisal tool, though it's not clear how." A third mentioned that "the data showed the causes coming out and affecting the R2 to a high degree. The manager should be able to get a good idea as to where to get the most bang for the buck." In the Likert scale items, this team scored well overall, but relatively poorly on audience involvement and take-away value.
44 Again, this is a common minute paper question, and one used frequently at Weatherhead in core statistics classes. Again looking at the comments for the industrial parts distributor, students captured a number of ideas. Unlike most groups, this team did not produce a single slide listing key questions until the end of the presentation (when they provided answers). Several raters remarked on this: "How they initially laid out the issues was confusing," "The problem definition and research questions were not that clear," and "I didn't feel like I fully knew the extent of the question they were studying" were typical comments.
45 Other students complained about the choice of final model: "They kind of breezed over the model validation part -- maybe more time could have been spent on model interpretation, meanings of numbers," and "It's hard to see the justification for their final model."
46 The third question, asking for comments on the presentation itself, was clearly the area where students felt most confident in their opinions. The comments were full of detailed, helpful criticisms of speaking styles, presentation formatting, "handoffs" between speakers, and presentation structure. One team made the decision to mirror the presentation style of the firm it was studying, which meant using a color and font scheme in PowerPoint that proved difficult to see in the computer lab. Raters were relentless in their criticisms of that choice, especially in light of some gorgeous and enlightening graphics work by the preceding project team. A few selected comments: "The oral presentation was good -- the slides could have been improved. While the language was concise, some of the visuals were difficult to read, even for the presenter. A handout was a must!" "Didn't read from notes!! Good looking slides with handouts available. Glad that the choice of output was well placed on the slides -- also really enjoyed the circling of important results." "Too many slides? For this type of presentation (20 minutes) -- 29 slides is a lot for us to absorb at only about 45 seconds per slide -- better off with about 15 slides."
47 When reacting to the statistical methods used, students focused their comments on three main issues. These were the presentation of predictors (especially the motivation behind transformations and details of data collection), residual analysis (often the identification and investigation of outliers as possible opportunities to learn more from the data), and model validation. Students were critical of the groups who either did no validation of their models on new data or discussed it only briefly.
48 Most of the project teams were surprised at the level of difficulty they experienced in selecting a series of useful predictors to use in their models, and groups that made their case well were rewarded with nearly universally positive feedback, including high scores on the "solved problems" question.
49 For the most part, raters used this opportunity to reiterate a suggestion they made in their answers to one of the previous four items. It was interesting to see that the groups with the strongest projects tended to receive more focused, detailed suggestions here, while the weaker presentations elicited mostly comments about data collection. Most students seemed to feel that the most challenging and important aspect of the projects was the collection of useful, interesting data, and the general consensus was clearly that the weaker presentations were mostly a result of weaker datasets.
50 While it is impossible to define a relevant comparison group, there is some evidence to suggest that students appreciated the approach to evaluation described here. When the issue was first broached in class, students were wary of being graded by colleagues. However, several students remarked after completing the course that this made them alter the kind of talk they would have given otherwise. In particular, several teams struggled with the style of the presentation, although there was no clear evidence of students putting style over substance in developing their problem solutions.
51 Most of the students were full-time second-year MBAs actively involved in job-hunting while taking the course. Three students came back after participating in complex scenario analyses as part of the corporate interviewing process, and mentioned that the demands of the project had served them well in that setting. Many students described their ability to communicate analytical results as substantially improved by the course, due in large part to the need to ensure that the rest of their project team was capable of presenting all of the work.
52 There were some other clear benefits to this approach. The instructor felt confident that students were more accurately and appropriately evaluated than in previous courses. The teaching assistant mentioned that she strongly preferred this activity, as she felt she had some stake in the performance of student teams. Several students from the first section volunteered to attend the grading sessions for the second group, as they felt the feedback they had received was useful, and that there was much to be learned simply from the process of evaluating others.
53 A missing element was a formal self-evaluation. While the instructor was careful to encourage each group to assess their progress at each team meeting, students did not rate their own project team. A reasonable suggestion might be to require such an evaluation from each group in advance of the final presentation, helping the group to see differences between their own perceptions of their strongest and weakest points, and those of their colleagues.
54 Weatherhead now offers mini-courses in categorical data analysis and time series analysis in place of the usual full-semester course combining the two. These mini-courses use an approach to assignments and assessment very similar to that described here. Many more MBA students than usual (including a majority of students in the original mini-course) enrolled in the new double-mini course. Student evaluations were very strong (4.8 and 4.9 ratings on a 1 to 5 scale) for the original regression mini-course in Fall 1998 and for the two pieces of the double-mini course in Spring 1999, and the instructor was a finalist for the 1998-99 Weatherhead Teaching Award.
The author would like to thank two referees and the editor for their helpful comments.
Students will work in groups of two to four on their final project. Each group will be responsible for writing a proposal, collecting data, performing appropriate analyses, giving an oral presentation, and preparing some key elements of a final report. Additionally, each group will consult with me regularly during the course to provide updates on the project's status and to ask specific questions. The project should be an application of regression techniques, combined with some sort of exploratory analysis.
There are an enormous number of available research problems, and I do not intend to place any restrictions on the type of data you collect, save that it should be amenable to the methods we will illustrate in class and in the readings. You should choose something of personal interest, and your plan of action should move through the rough steps of tentative model identification, statistical model fitting, diagnostic checking of fit and assumptions, refitting alternative models, and prediction. Try to design a study that will yield something meaningful.
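The modeling cycle described above (tentative identification, fitting, diagnostic checking, refitting, prediction) can be sketched in a few lines of code. The example below is purely illustrative and not from any student project: the variable names, data, and coefficients are invented assumptions, and ordinary least squares via NumPy stands in for whatever software a team might actually use.

```python
import numpy as np

# Hypothetical data: predicting apartment rent from square footage and
# building age. All names and numbers here are illustrative assumptions.
rng = np.random.default_rng(0)
n = 50
sqft = rng.uniform(500, 1500, n)
age = rng.uniform(0, 40, n)
rent = 300 + 0.8 * sqft - 2.0 * age + rng.normal(0, 50, n)

# Steps 1-2: tentative model identification and fitting (ordinary least squares)
X = np.column_stack([np.ones(n), sqft, age])   # intercept + two predictors
beta, *_ = np.linalg.lstsq(X, rent, rcond=None)

# Step 3: diagnostic checking -- residuals should center near zero,
# and R-squared summarizes the quality of the fit
fitted = X @ beta
resid = rent - fitted
ss_tot = np.sum((rent - rent.mean()) ** 2)
r_squared = 1 - np.sum(resid ** 2) / ss_tot

# Step 4: refit an alternative (reduced) model and compare fit
X_reduced = X[:, :2]                            # drop the 'age' predictor
beta_r, *_ = np.linalg.lstsq(X_reduced, rent, rcond=None)
resid_r = rent - X_reduced @ beta_r
r_squared_reduced = 1 - np.sum(resid_r ** 2) / ss_tot

# Step 5: prediction for a new case (1000 sq ft, 10 years old)
new_case = np.array([1.0, 1000.0, 10.0])
predicted_rent = new_case @ new_case * 0 + new_case @ beta

print(r_squared, r_squared_reduced, predicted_rent)
```

In practice a team would supplement the numeric summaries with residual plots before deciding whether to refit, but the control flow mirrors the steps listed in the assignment.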
In the proposal, you should include a problem definition and general plan of attack, preferably with some preliminary data and data analysis. An effective project proposal provides a working title for the project, poses the problem in general terms, and briefly provides some context motivating your interest. It then describes a series of specific research questions that you'd like to be able to answer; brainstorms about the potential effect of the independent variables you've chosen to study, giving some insight into why these were suggested; and describes any pre-data-collection insight into the direction of the hypotheses you plan to test. With my guidance, as necessary, you are responsible for deciding what problem you want to tackle, how to establish relevant and realistic research questions, what data you will need, and how to get them.
Bloom, B. (ed.) (1956), Taxonomy of Educational Objectives. Volume 1: Cognitive Domain, New York: McKay.
Boyatzis, R. E., Cowen, S. S., Kolb, D. A., and Associates (1995), Innovation in Professional Education, San Francisco: Jossey-Bass.
Bradstreet, T. E. (1996), "Teaching Introductory Statistics Courses So That Nonstatisticians Experience Statistical Reasoning," The American Statistician, 50, 69-78.
Cobb, G. (1993), "Reconsidering Statistics Education: A National Science Foundation Conference," Journal of Statistics Education, [Online], 1(1). (https://www.amstat.org/v1n1/cobb.html)
Easton, G. E., Roberts, H. V., and Tiao, G. C. (1988), "Making Statistics More Effective in Schools of Business," Journal of Business and Economic Statistics, 6, 247-260.
Erwin, T. D. (1991), Assessing Student Learning and Development, San Francisco: Jossey-Bass.
Frees, E. W. (1996), Data Analysis Using Regression Models: The Business Perspective, Englewood Cliffs, NJ: Prentice-Hall.
Gal, I. and Ginsburg, L. (1994), "The Role of Beliefs and Attitudes in Learning Statistics: Towards an Assessment Framework," Journal of Statistics Education, [Online], 2(2). (https://www.amstat.org/v2n2/gal.html)
Garfield, J. B. (1993), "Teaching Statistics Using Small-Group Cooperative Learning," Journal of Statistics Education, [Online], 1(1). (https://www.amstat.org/v1n1/garfield.html)
Garfield, J. B. (1994), "Beyond Testing and Grading: Using Assessment to Improve Student Learning," Journal of Statistics Education, [Online], 2(1). (https://www.amstat.org/v2n1/garfield.html)
Giraud, G. (1997), "Cooperative Learning and Statistics Instruction," Journal of Statistics Education, [Online], 5(3). (https://www.amstat.org/v5n3/giraud.html)
Gronlund, N. E. (1965), Measurement and Evaluation in Teaching, New York: Macmillan.
Hildebrand, D. K., and Ott, R. L. (1998), Statistical Thinking for Managers (4th ed.), Pacific Grove, CA: Duxbury.
Hubbard, R. (1997), "Assessment and the Process of Learning Statistics," Journal of Statistics Education, [Online], 5(1). (https://www.amstat.org/v5n1/hubbard.html)
Keeler, C. M. and Steinhorst, R. K. (1995), "Using Small Groups to Promote Active Learning in the Introductory Statistics Course: A Report from the Field," Journal of Statistics Education, [Online], 3(2). (https://www.amstat.org/v3n2/keeler.html)
Love, T. (1998), "A Project-Driven Second Course," Journal of Statistics Education, [Online], 6(1). (https://www.amstat.org/v6n1/love.html)
Payne, D. A. (1974), The Assessment of Learning: Cognitive and Affective, Lexington, MA: D. C. Heath & Co.
Roberts, H. V. (1994), "Reflections on Making Statistics More Effective in Business Schools (MSMESB)," in Proceedings of the Business and Economic Statistics Section, American Statistical Association, pp. 316-318.
Smith, G. (1998), "Learning Statistics by Doing Statistics," Journal of Statistics Education, [Online], 6(3). (https://www.amstat.org/v6n3/smith.html)
Sommer, B., and Sommer, R. (1997), A Practical Guide to Behavioral Research, New York: Oxford University Press.
Sowey, E. (1995), "Teaching Statistics: Making It Memorable," Journal of Statistics Education, [Online], 3(2). (https://www.amstat.org/v3n2/sowey.html)
Sudman, S., and Bradburn, N. M. (1982), Asking Questions, San Francisco: Jossey-Bass.
Thomas E. Love
Department of Operations Research & Operations Management
Weatherhead School of Management
Case Western Reserve University
Cleveland, Ohio 44106-7235