The Best of Both Worlds: A Hybrid Statistics Course.

Barbara Ward
Belmont University

Journal of Statistics Education Volume 12, Number 3 (2004), jse.amstat.org/v12n3/ward.html

Copyright © 2004 by Barbara Ward, all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the authors and advance notification of the editor.


Key Words: Distance education; Online vs. traditional; Statistics education; Web enhanced

Abstract

This study compares students’ performance and attitudes in a hybrid (blend of online and face-to-face) model of Elementary Statistics and a traditional (face-to-face) model of the same course. Performance was measured by test, quiz, project, and final exam grades. Attitude was measured by the results of a course survey administered at the end of the semester. Both models of the course required the same textbook and statistical computer package, were taught by the same instructor, and had similar demographic characteristics such as gender, major, and classification. Significant differences were found in an extra credit grade, composed of points earned on interactive worksheets, and in attitudes toward the course. There was no significant difference in students’ performance as measured by grades. The value of hybrid courses as a viable option in distance education and their potential benefits to students and the educational institution are discussed.

1. Introduction

Numerous empirical studies have compared online course delivery to its traditional counterpart. In online courses, a good deal of the instruction takes place through Web based components such as chat rooms, threaded discussion groups, Internet activities, videos or slides of course materials, and links to resources. The online segment of a course tends to be asynchronous, allowing students to work on their own schedules in different locations. In traditional courses the professor and students are in the same location learning together at the same time, and the instruction is face-to-face. A considerable portion of recent research concludes that students in online courses perform as well as those in traditional courses when performance is compared using pretest and posttest scores and grades (Dutton, Dutton, and Perry 1999; Schulman and Sims 1999; Miller, Cohen, and Beffa-Negrini 2001; Stephenson 2001; Tucker 2001; Yablon and Katz 2001; Utts, Sommer, Acredolo, Maher, and Matthews 2003). Online courses also compare favorably on overall course satisfaction (Navarro and Shoemaker 2000; Ryan 2000; Ashkeboussi 2001; Brace-Govan and Clulow 2001; Gagne and Shepherd 2001; MacGregor 2001; Johnson 2002). Support for the “No Significant Difference Phenomenon” (Russell 1999a) abounds, as is evident from the number of comparative studies reported on the website with the same name. Russell (1999b) also maintains a “Significant Difference Phenomenon” website, although there are fewer reported cases and most report a significant difference in only a few measured variables.

On the other hand, empirical studies that compare demographic characteristics have revealed, in some settings, a wide disparity between students who enroll in online courses and those who enroll in traditional courses. A comparative study at a community college concluded that the typical online learner is female, Caucasian, between twenty-six and fifty-five years of age, works full-time as a professional, and has more education and a higher family income than her traditional counterpart (Halsne and Gatta 2002). Other studies at state universities found that their online students are actually traditional students who sign up for the course because of scheduling or logistical conflicts (Ashkeboussi 2001; Utts et al. 2003). Online courses are often collaborations between universities and local industry (Stephenson 2001). Many are created to serve the educational needs of a particular population of students, such as continuing education (MacGregor 2001) or graduate degrees (Ellis 2000; Ashkeboussi 2001; Gagne and Shepherd 2001).

In a survey of traditional and distance learning higher education members, the National Education Association found that many distance learning students do not fit the previously established stereotype of older, part-time students who live far from campus and work full-time. The study found an even mix of students over and under twenty-five years of age, and of full-time and part-time students. It also found that distance learning classes are not large, completion rates are high, and students live within one hour of campus (NEA 2000).

In many private and public universities, direct contact with students is an important part of teaching. Hybrid courses offer students the convenience of online technology and the comfort of personal contact with the professor. But, do performance and attitude differ for hybrid and traditional courses? Utts et al. (2003), in a study comparing traditional and hybrid models of Elementary Statistics, found no difference in students’ performance, but a slightly less positive attitude in the hybrid model. In the present study there was no significant difference in students’ performance as measured by grades, but hybrid students were less likely to complete extra credit assignments and appeared to have a more positive attitude.

2. The Case for the Hybrid Course

Merisotis and Phipps (1999, p. 31), in the second and third of three broad implications derived from a review of recent research, stated, “Second, it seems clear that technology cannot replace the human factor in higher education”, and, “Third, although the ostensible purpose of much of the research is to ascertain how technology affects student learning and student satisfaction, many of the results seem to indicate that technology is not nearly as important as other factors, such as learning tasks, learner characteristics, student motivation, and the instructor.” Young (2002, p. A33), in a report on hybrid course models, states, “Hybrid courses and hybrid degree programs promise the best of both worlds, offering some of the convenience of all-online courses without the complete loss of face-to-face contact.” It appears that the solution to the absence of the human factor and the need to address the diversity of student characteristics in fully online courses is a hybrid model: a course in which students can take advantage of all the technical opportunities offered by an online environment, yet at the same time have face-to-face contact with the professor and social involvement with classmates (Brown 2001; Carnevale 2002; Oblender 2002; Young 2002).

Many different types of hybrid courses are being taught. At Stevens Institute of Technology, class is held in a computer lab where students complete online activities while the instructor is present (Levine 2002). In a second case, students have minimal contact with the instructor, attending class only for an orientation session and the final exam (MacGregor 2001). In an introductory statistics course, online students had three face-to-face meetings with the professor during the course to solve problems that were not resolved through electronic contact (Yablon and Katz 2001). Ohio State University teaches a hybrid introductory statistics course in which students may select from an offering of online and in-class activities that include lectures, Web based activities, videos, training modules, discovery laboratories, reviews, and projects (Rensselaer Polytechnic Institute 2001). As in the present study of Elementary Statistics, the University of Central Florida offers hybrid courses that meet fifty percent online and fifty percent in the classroom. Hybrid courses also offer potential benefits to both students and the educational institution.

The present study addresses the following questions:

  1. Is there a significant difference in performance as measured by grades for students in a hybrid model and a traditional model of Elementary Statistics?
  2. Is there a significant difference in number of extra credit assignments completed for students in a hybrid model and a traditional model of Elementary Statistics?
  3. Is there a significant difference in number of students who formed groups to work on the final project in a hybrid model and a traditional model of Elementary Statistics?
  4. Is there a significant difference in attitudes as measured by a course survey for students in a hybrid model and a traditional model of Elementary Statistics?

3. Purpose of the Study

The purpose of the study was to compare students’ performance and attitudes in a hybrid (blend of online and face-to-face) model and a traditional (face-to-face) model of Elementary Statistics.

4. Methods

4.1 Data

The study was conducted spring semester 2002 at a private university. The students were required to take the course and nearly all were first year Business Administration or Music Business majors. The sample consisted of 78 students, 56 enrolled in two sections of traditional Elementary Statistics and 22 enrolled in one section of hybrid Elementary Statistics. All subjects in the study were conventional, full-time students between 18 and 22 years of age. As in the reported studies, students were not randomly assigned to the two models, but all sections were scheduled at approximately the same time of day so that the distribution of demographic characteristics would be similar. At the time of the study, online courses were relatively new at the university, so most students who enrolled in the hybrid course did not realize it was online enhanced. They merely signed up for the section because the time best fit their schedules. Both hybrid and traditional models required the same textbook and statistical computer package. Both models had the same instructor, grading rubric, covered the same material, had identical syllabi and weekly schedules, and required a linear regression project and report.

Outliers were identified as students who withdrew before the end of the semester. One traditional student and three hybrid students did not complete the course and consequently were eliminated from the final data analyses. The final data analyses comparing performance, extra credit, and working in groups were conducted on 19 hybrid students and 55 traditional students. However, the outliers were included in the comparisons of retention rates, gender, classification, and major.

4.2 Description of Course Models

The traditional model of the course met in a classroom for three fifty-minute sessions a week. Four class sessions (a total of 200 minutes) were scheduled in the computer lab where students analyzed class-generated data using a statistical computer package or worked on the final project. Class time was utilized for various activities consisting of lecture, answering questions, interactive worksheets, collaborative problem sessions, calculator activities, tests, and quizzes. The traditional students were given lab exercises, worksheets and problem sets that were identical to the documents available on the course Web page.

The hybrid class met once a week for seventy-five minutes, during which the instructor answered questions on problems or worksheets and administered tests and quizzes. Two class sessions (a total of 150 minutes) were scheduled in the computer lab, where the students had activities identical to those of the traditional class. An emphasis was placed on students coming to class having learned the material on their own using the textbook and tools from the course Web page. No new material was presented in class. The course Web page included course policies and hints for success, a daily study calendar, links to real data and statistics resources, a bulletin board for posting data and comments, a chat room, and course content modules. The course content modules included links to daily activities suggested by the study calendar. Activities included interactive worksheets, applet demonstrations of statistical concepts, review sheets and solutions, practice tests and solutions, computer labs, links to suggested readings on the Web, and PowerPoint reviews of the textbook material. The applet demonstrations, suggested readings, and PowerPoint reviews were also made available to the traditional class.

Communication between instructor and students took place during scheduled office hours, before and after class, by telephone, and by email. Online office hours were scheduled for the hybrid class via an Internet chat room. Scheduled office hours and email were the most utilized forms of communication for both hybrid and traditional students.

5. Data Analysis

5.1 Demographic Comparison

Chi-square tests were conducted at the .05 level of significance to check for similarity of demographic characteristics before omitting outliers. Categories were combined to satisfy the test assumption that expected counts in all cells are greater than or equal to 5. There appeared to be no significant relationship between course model and gender (chi-square = .33, df = 1, p = .57). The same seemed to be true for academic major when comparing business and non-business majors (chi-square = 2.1, df = 1, p = .15). When classification was compared, students were divided into freshmen, sophomores, and upper class students. The test indicated no significant relationship between classification and course model (chi-square = .065, df = 2, p = .97).
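
As a minimal sketch of how such a test of independence can be run (the original statistical package is not named in the paper, and the gender counts below are hypothetical because the full demographic breakdown is not reported), SciPy's chi-square test of independence also returns the expected counts needed to verify the cells-of-at-least-5 assumption:

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical counts (rows: hybrid, traditional; columns: female, male);
    # the actual gender breakdown is not reported in the paper.
    table = np.array([[12, 10],
                      [28, 28]])

    chi2, p, df, expected = chi2_contingency(table, correction=False)
    print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.2f}")
    # The assumption used above: every expected cell count is at least 5;
    # otherwise categories are combined before testing.
    print("all expected counts >= 5:", bool((expected >= 5).all()))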

Fisher’s Exact Test was used to compare retention rates. It appeared that the difference in retention rate for the two models was not significant (p = .066). Three students withdrew from the hybrid model before the end of the semester resulting in a retention rate of 86%. In the traditional model only one student withdrew resulting in a retention rate of 98%. Hybrid student comments such as “meeting once a week for a statistics class is very hard”, and “[the instructor] did not have much time to teach”, indicated that some students in the hybrid model would have preferred more scheduled class time. The students who withdrew from the hybrid model were all freshmen. Exit interviews indicated that the main reason for withdrawal was the unexpected nontraditional aspect of the hybrid class. The student who withdrew from the traditional model did so because of scheduling.
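
The retention comparison can be reproduced from the counts given above (3 of 22 hybrid students and 1 of 56 traditional students withdrew). A minimal sketch with SciPy, assuming a two-sided test, returns the reported p-value of about .066:

    # Fisher's Exact Test on the withdrawal counts reported above.
    from scipy.stats import fisher_exact

    #         withdrew  completed
    table = [[3, 19],    # hybrid: 3 of 22 withdrew
             [1, 55]]    # traditional: 1 of 56 withdrew

    odds_ratio, p = fisher_exact(table, alternative="two-sided")
    print(f"two-sided p = {p:.3f}")   # about .066, matching the value reported above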

5.2 Comparing Performance

Students in both course models were allowed the same amount of time to take similar tests and quizzes in class and took the same final exam. Project instructions and grading rubrics were the same for both models. Student performance was measured by six dependent variables:
  1. Quiz Grade
  2. Test 1 Grade
  3. Test 2 Grade
  4. Test 3 Grade
  5. Project Grade
  6. Final Exam Grade

Each grade was equally weighted and worth 100 points. The students’ final averages were calculated by taking the total of these six grades plus extra credit points then dividing by six. Extra credit is described and analyzed in Section 5.3.

Multivariate Analysis of Variance (MANOVA) was conducted at the .05 level of significance to determine the effect of course model on performance as measured by the six grades. A limitation of the study was the small sample size of hybrid students. Prior to the MANOVA test, data were examined for outliers and fulfillment of multivariate test assumptions: normality, linearity, and homoscedasticity.

Before examining multivariate normality, univariate normality of each dependent variable (grade) for each treatment (model) was assessed at the .05 level of significance. Histograms and Kolmogorov-Smirnov tests for normality revealed that three dependent variables, Project Grade, Quiz Grade, and Test 1 Grade, were negatively skewed. The data for these three variables were reflected and a log transformation was applied: NewX = log(101 - X). When reexamined with univariate normality tests, none of the transformed variables showed a significant departure from normality. The original variables were replaced with the transformed variables in the subsequent MANOVA analysis.
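
A minimal sketch of this reflect-and-log transformation and the normality recheck follows. The grades are hypothetical, a base-10 log is assumed (the choice of base does not affect normality), and statsmodels' Lilliefors routine stands in for a Kolmogorov-Smirnov test with estimated mean and standard deviation:

    # Reflect a negatively skewed 0-100 grade and apply NewX = log(101 - X),
    # then recheck normality. The grades below are hypothetical.
    import numpy as np
    from statsmodels.stats.diagnostic import lilliefors   # KS test with estimated parameters

    grades = np.array([98, 97, 95, 94, 92, 90, 88, 85, 82, 76, 70, 58])   # hypothetical, skewed left

    new_x = np.log10(101 - grades)   # reflection turns the long left tail into a right tail; the log pulls it in

    ks_stat, p_value = lilliefors(new_x, dist="norm")
    print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")   # a p-value above .05 leaves normality plausible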

The assumption of multivariate normality implies that in addition to univariate normality, all linear combinations of dependent variables are normally distributed. Bivariate scatterplots of all possible pairs of the six grades for each course model were examined and found to be convincingly elliptical, showing that linear combinations of pairs of dependent variables were approximately normally distributed and correlated. A correlation table showed a strong linear relationship at the .05 level of significance between the dependent variables for each model except Project Grade. Even though Project Grade had a moderate to weak linear correlation with some other grades for each model, it was retained in the MANOVA analysis. Multivariate normality was further supported by mound shaped histograms of residuals for each of the six grades.
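
A minimal sketch of these pairwise checks, using synthetic stand-in grades with illustrative column names (the individual student data are not published here), can be built from pandas' correlation table and scatterplot matrix:

    # Pairwise scatterplots and a correlation table for six synthetic grade
    # columns of one course model, as an informal check of the pairwise relationships.
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    from pandas.plotting import scatter_matrix

    rng = np.random.default_rng(0)
    cols = ["quiz", "test1", "test2", "test3", "project", "final_exam"]   # illustrative names
    ability = rng.normal(80, 8, size=(19, 1))                             # shared student-level component
    hybrid = pd.DataFrame(ability + rng.normal(0, 5, size=(19, 6)), columns=cols)

    print(hybrid.corr().round(2))                              # correlation table
    scatter_matrix(hybrid, figsize=(8, 8), diagonal="hist")    # all pairwise scatterplots
    plt.show()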

Even though the assumption of homoscedasticity is usually met when multivariate normality is assumed, it was further supported by visually examining the MANOVA Residuals Versus Fitted Values plots for each of the six grades. Each residual plot showed about the same spread of residuals for both models, indicating that the variability in fitted values for each dependent variable is approximately the same for hybrid and traditional models.

MANOVA results indicated at the .05 level of significance that course model does not significantly affect performance (F = .87, p = .52) as measured by the six grades. This F value for Wilks’ Lambda had 6 degrees of freedom in the numerator (one for each of the six dependent variables under the hypothesis of no difference between the two models) and 67 degrees of freedom in the denominator (due to sampling error). Since there was no multivariate significance in the performance of students in the hybrid and traditional models, subsequent univariate tests for model differences in individual grades were not conducted.
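
For reference, a one-way MANOVA of this form can be sketched with statsmodels; the column names and data below are synthetic placeholders for the transformed grades. With two groups and six dependent variables, the exact F for Wilks’ Lambda has 6 and N - 7 degrees of freedom, which gives the 6 and 67 quoted above for N = 74:

    # One-way MANOVA of six (placeholder) grade columns on course model.
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(1)
    cols = ["quiz", "test1", "test2", "test3", "project", "final_exam"]   # illustrative names

    def synthetic_grades(n):
        ability = rng.normal(80, 8, size=(n, 1))               # shared student-level component
        return np.clip(ability + rng.normal(0, 5, size=(n, len(cols))), 0, 100)

    grades = pd.DataFrame(np.vstack([synthetic_grades(19), synthetic_grades(55)]), columns=cols)
    grades["model"] = ["hybrid"] * 19 + ["traditional"] * 55

    fit = MANOVA.from_formula(
        "quiz + test1 + test2 + test3 + project + final_exam ~ model", data=grades)
    print(fit.mv_test())   # the Wilks' lambda row for model reports the exact F(6, 67) here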

5.3 Extra Credit Grade

Students’ willingness to work beyond course requirements was measured by the Extra Credit Grade. Students in both models received one point of extra credit for each interactive worksheet that was completed and handed in on time. The Extra Credit Grade, the total number of accumulated points divided by the total number of possible points, represented the percentage of worksheets the student completed and turned in. The hybrid students had to print the worksheets from the Web page, while hard copies were given to the traditional students. The traditional students often started the worksheets in class, but were responsible for completing them on their own. Extra Credit was not a required grade for the course; therefore, it was not included in the MANOVA set of dependent variables.

Kolmogorov-Smirnov tests for normality indicated that Extra Credit Grade was normally distributed in the traditional model and positively skewed in the hybrid model as illustrated in Figure 1. The hybrid students’ Extra Credit Grade median was 28 and the traditional students’ Extra Credit Grade median was 68. The majority of hybrid students did not make the effort to complete and turn in extra credit assignments. A Mann-Whitney test for medians conducted at the .05 level of significance indicated that the median Extra Credit Grade of the hybrid model was significantly less than the median Extra Credit Grade of the traditional model (Mann-Whitney W = 2338.5, p < .001).
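
A minimal sketch of this comparison with SciPy follows, using hypothetical extra-credit percentages (the individual grades are not published) and a directional alternative; note that SciPy reports the Mann-Whitney U statistic rather than the Minitab-style W quoted above:

    # Mann-Whitney test that the hybrid median Extra Credit Grade is less than
    # the traditional median. The percentages below are hypothetical.
    from scipy.stats import mannwhitneyu

    hybrid_extra = [0, 0, 5, 10, 20, 25, 28, 30, 40, 55, 70]           # hypothetical
    traditional_extra = [30, 45, 55, 60, 62, 65, 68, 70, 75, 85, 95]   # hypothetical

    u_stat, p = mannwhitneyu(hybrid_extra, traditional_extra, alternative="less")
    print(f"U = {u_stat}, one-sided p = {p:.4f}")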



Figure 1. Extra Credit


The contradictory results between Performance (no difference) and Extra Credit Grade (significant difference) could mean that the hybrid students had less motivation to complete work on their own when the activity was not a requirement of the course. Hybrid students’ comments such as, “We had to learn and work through at least one chapter, if not more, per week”, and “It was sometimes hard to keep up on homework assignments”, seemed to indicate that the amount of independent work required outside of class had an adverse effect on the extra credit effort.

5.4 Working in Groups

Students were required to complete a linear regression project proposal and report using current data generated by performing an experiment, conducting a survey, or searching online. The students could work alone or in groups of two or three people. Because the hybrid class met together fewer times per week than the traditional class, it was anticipated that fewer hybrid students would work in groups, since they would not know each other as well as students in the traditional model. Working in groups was measured by the number of students who formed groups to complete the final project. Since work on the final project was done during the last two weeks of the semester, students in both models had ample time to form groups.

In the hybrid model, 12 students formed groups of two or three people to complete the final project and in the traditional model 34 students formed similar groups. Remaining students in both models worked alone on the final project. A chi-square test conducted at the .05 level of significance indicated no significant relationship between the number of students who worked in groups and course model (chi-square = .01, df = 1, p = .92).
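
This result can be reproduced from the counts above (12 of 19 hybrid and 34 of 55 traditional students worked in groups). A minimal sketch with SciPy, run without the Yates continuity correction so that it matches the uncorrected statistic quoted above:

    # Chi-square test of independence between course model and group formation,
    # using the counts reported above.
    from scipy.stats import chi2_contingency

    #          group   alone
    table = [[12, 7],     # hybrid
             [34, 21]]    # traditional

    chi2, p, df, expected = chi2_contingency(table, correction=False)
    print(f"chi-square = {chi2:.2f}, df = {df}, p = {p:.2f}")   # about .01 and .92, as reported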

5.5 Attitudes

The course survey administered at the end of the semester consisted of ten positively worded questions, with responses measured on a five-point Likert scale from 1 (strongly disagree) to 5 (strongly agree). The survey sample consisted of 16 students in the hybrid model and 52 students in the traditional model. Three hybrid and three traditional students were absent on the day the survey was administered. The average Likert score for each of the ten questions was calculated for both models. A higher average Likert score implies a more positive attitude on that question. Figure 2 compares the average scores for each question for the hybrid and traditional models (see the Appendix for the complete questions). The graph seems to indicate that the two models responded similarly across questions, but the traditional model’s average score tended to be lower on every question.

There was a positive correlation between the average Likert scores by question for the two models (r = .67, p = .04), indicating a comparatively constant difference in average responses across the questions. A paired t-test conducted at the .05 level of significance, comparing the ten question averages of the hybrid model to those of the traditional model, showed that the mean average score for the hybrid model was greater (indicating a more positive attitude) than the mean average score for the traditional model (t = 6.17, df = 9, p < .01). The mean average Likert score by question was 4.22 with a standard deviation of .33 for the hybrid model and 3.67 with a standard deviation of .35 for the traditional model.
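
A minimal sketch of this by-question analysis follows; the per-question averages are hypothetical placeholders, since the values plotted in Figure 2 are not tabulated in the text:

    # Correlation and paired t-test across the ten per-question average scores.
    from scipy.stats import pearsonr, ttest_rel

    hybrid_means = [4.5, 4.4, 4.0, 3.9, 4.8, 4.1, 4.7, 4.5, 4.3, 4.0]        # hypothetical
    traditional_means = [4.0, 3.9, 3.6, 3.5, 4.1, 3.3, 3.4, 3.9, 3.8, 3.6]   # hypothetical

    r, p_corr = pearsonr(hybrid_means, traditional_means)
    t, p_t = ttest_rel(hybrid_means, traditional_means)
    print(f"r = {r:.2f} (p = {p_corr:.2f}); paired t = {t:.2f}, df = 9, p = {p_t:.4f}")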



Figure 2. Average Likert Scores by Questions


Figure 3 shows a dot plot comparison of the average Likert score for each student for both models. There appeared to be several disgruntled students in the traditional model. A two sample t-test conducted at the .05 level of significance indicated a significant difference in mean average Likert scores of students for the two models (t = 2.91, df = 29, p < .01). The mean average score by student was 4.22 with a standard deviation of .63 for the hybrid model and 3.67 with a standard deviation of .75 for the traditional model.
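
With 16 and 52 students, a pooled two-sample test would have 66 degrees of freedom, so the 29 degrees of freedom reported above are consistent with an unequal-variance (Welch) test, which the following sketch assumes; the per-student averages are simulated placeholders drawn to match the reported means and standard deviations:

    # Welch two-sample t-test on simulated per-student average Likert scores.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(2)
    hybrid_students = np.clip(rng.normal(4.22, 0.63, size=16), 1, 5)        # simulated placeholders
    traditional_students = np.clip(rng.normal(3.67, 0.75, size=52), 1, 5)   # simulated placeholders

    t, p = ttest_ind(hybrid_students, traditional_students, equal_var=False)
    print(f"Welch t = {t:.2f}, p = {p:.4f}")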



Figure 3. Average Likert Scores by Student


The results of individual Mann-Whitney tests to compare medians of each question for the two course models are presented in Table 1. A more conservative .01 level of significance was used to analyze individual tests in this multiple-test comparison, resulting in an experiment-wise error rate less than .10. It appears that there were differences in median attitude scores for questions measuring: Knowledge of Instructor (p = .003), Presentation of Subject Matter (p = .006), Academic Motivation (p = .001), and Overall Rating of Instructor (p = .008). In each comparison the hybrid class had a higher median attitude score indicating a more positive attitude on these particular questions. Written comments like, “I did not spend enough time studying”, seemed to indicate that this difference in median attitude could be a result of hybrid students taking more accountability for their performance on the required components of the course. There was no evidence to support differences in median attitude scores at the .01 significance level for questions measuring: Course Content (p = .032), Course Organization (p = .013), Course Requirements (p = .114), Grading and Evaluation (p = .237), Attitude Toward Students (p = .114), and Overall Rating of Course (p = .233). It should be noted that the scores of individual questions are not summated Likert Scores and do not possess a normal distribution. The reported p-values are not adjusted for ties, thus are more conservative if ties are present. For complete questions see the Appendix.


Table 1: Comparison of Median Attitude Scores for Individual Questions

Question Title                        Median Score           Mann-Whitney W   p-value
                                      Hybrid (Traditional)
 1. Course Content                    4.5 (4.0)                  681.0         0.032
 2. Course Organization               4.5 (4.0)                  707.0         0.013
 3. Course Requirements               4.0 (4.0)                  636.0         0.114
 4. Grading and Evaluation            4.0 (4.0)                  602.0         0.237
 5. Knowledge of Instructor           5.0 (4.0)                  744.0         0.003
 6. Presentation of Subject Matter    4.0 (3.0)                  727.0         0.006
 7. Academic Motivation               5.0 (3.5)                  775.5         0.001
 8. Attitude Toward Students          5.0 (4.0)                  636.0         0.114
 9. Overall Rating of Instructor      4.5 (4.0)                  720.5         0.008
10. Overall Rating of Course          4.0 (4.0)                  603.0         0.233
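
The experiment-wise error bound behind the .01 per-test level in Table 1 can be sketched directly: with ten tests each conducted at the .01 level, the family-wise error rate is at most 1 - (1 - .01)^10, or about .096, under independence, and at most 10 × .01 = .10 by the Bonferroni bound regardless of dependence.

    # Family-wise error rate for ten tests each conducted at the .01 level of significance.
    alpha, k = 0.01, 10
    print(f"bound assuming independent tests: {1 - (1 - alpha) ** k:.3f}")   # about .096
    print(f"Bonferroni bound: {k * alpha:.2f}")                              # .10 under any dependence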


6. Conclusion

Results of the present study indicate that there were no significant differences in students’ performance in a hybrid model and a traditional model of Elementary Statistics when grades on quizzes, tests, project, and final exam were compared.

A significant difference was found in the Extra Credit Grade. The hybrid model median was less than the traditional model median, suggesting that hybrid students did not take as much extra initiative beyond the course requirements as traditional students. Student comments seemed to indicate that the hybrid students had so much independent work that they were less likely to “go the extra mile” when the assignment was optional.

There was no significant difference in the number of students who chose to work in groups on the final project.

There were significant differences on a Likert scaled attitude survey in mean average scores by student for hybrid and traditional models as well as mean average scores by question for the two models. For both of these measures, the hybrid model appeared to have a more positive attitude. When individual survey questions were compared, there appeared to be no differences in median attitude scores for questions asking about Course Content, Course Organization, Course Requirements, Grading and Evaluation, Attitude Toward Students, and Overall Rating of Course. The hybrid model appeared to have a more positive (higher) median attitude score for survey questions asking about Knowledge of Instructor, Presentation of Subject Matter, Academic Motivation, and Overall Rating of Instructor. Hybrid students’ comments seemed to indicate that the difference in attitude is a result of their taking more accountability than traditional students for their performance on required components of the course.

Much research has been conducted on a myriad of undergraduate, graduate, and professional courses at urban and suburban, public and private universities and community colleges. As in the present study, many of the online classes that were the subjects of research were actually hybrid courses in which students met with the professor several times. While there appears to be no difference in the performance and attitude of online students and traditional students in some settings, there can be differences in their demographic characteristics. However, in institutions that do not require true “distance” education, where students are able to attend an occasional face-to-face class, a hybrid model of the course makes sense. When the benefits of online learning are combined with the versatility and personal contact of a traditional setting, the instructor, educational institution, and students have the “best of both worlds.”


Appendix: Course Survey Questions

  1. Course Content: The course presented a comprehensive body of information.
  2. Course Organization: The course organization was defined and implemented.
  3. Course Requirements: The assignments and activities contributed to my learning the subject matter.
  4. Grading and Evaluation: Test questions and/or evaluation procedures were fair and related to the subject matter.
  5. Knowledge of the Instructor: The instructor knew the subject matter well.
  6. Presentation of Subject Matter: The instructor communicated the course content in a clear and effective manner.
  7. Academic Motivation: The instructor set high standards and motivated me toward academic achievement.
  8. Attitude Toward Students: The instructor was interested in (me) and helpful with my academic progress.
  9. Overall Rating of Instructor: This instructor was an effective teacher.
  10. Overall Rating of the Course: This course was a valuable learning experience.


References

Ashkeboussi, R. (2001), “A Comparative Analysis of Learning Experience in a Traditional vs. Virtual Classroom Setting,” Academic Exchange Quarterly, 5 (4), 133-138.

Brace-Govan, J., and Clulow, V. (2001), “Comparing Face-to-Face With Online: Learners' Perspective,” Academic Exchange Quarterly, 5(4), 112-117.

Brown, D. G. (2001), “Hybrid Courses Are Best,” Syllabus Magazine, 15(3), 22.

Carnevale, D. (2002), “Online Students Don’t Fare as Well as Classroom Counterparts, Study Finds,” The Chronicle of Higher Education, 48 (27), 38.

Dutton, J., Dutton, M., and Perry, J. (1999), “Do Online Students Perform as Well as Lecture Students?,” North Carolina State University [Online]. (www4.ncsu.edu/unity/users/d/dutton/public/research/online.pdf)

Ellis, K. (2000), “A Model Class (Concord University Online Law School),” Training, 37 (12), 50.

Gagne, M., and Shepherd, M. (2001), “A Comparison Between a Distance and a Traditional Graduate Accounting Class,” T. H. E. Journal [Online], 28(9), 58-65. (www.thejournal.com/magazine/vault/A3433.cfm)

Halsne, A. M., and Gatta, L. A. (2002), “Online Versus Traditionally-delivered Instruction: A Descriptive Study of Learner Characteristics in a Community College Setting,” Journal of Distance Learning Administration [Online], 5(1).
(www.westga.edu/%7Edistance/ojdla/spring51/halsne51.html)

Johnson, M. (2002), “Introductory Biology Online: Assessing Outcomes of Two Student Populations,” Journal of College Science Teaching, 31(5), 312-317.

Levine, L. (2002), “Using Technology to Enhance the Classroom Environment,” T. H. E. Journal [Online], 29(6), 16-19. (www.thejournal.com/magazine/vault/A3819.cfm)

MacGregor, C. (2001), “A Comparison of Student Perceptions in Traditional and Online Classes,” Academic Exchange Quarterly, 5(4), 143-148.

Merisotis, J. P., and Phipps, R. A. (1999), “What’s the Difference? A Review of Contemporary Research on the Effectiveness of Distance Learning in Higher Education,” Washington, D. C.: The Institute for Higher Education Policy, 31.

Miller, B., Cohen, N. L., and Beffa-Negrini, P. (2001), “Factors for Success in Online and Face-to-Face Instruction,” Academic Exchange Quarterly, 5(4), 4-10.

Navarro, P., and Shoemaker, J. (2000), “Performance and Perceptions of Distance Learners in Cyberspace,” American Journal of Distance Education, 14(2), 15-35.

The National Education Association and Abacus Associates (2000), A Survey of Traditional and Distance Learning Higher Education Members, Washington, D. C.: The National Education Association.

Oblender, T. E. (2002), “A Hybrid Course Model: One Solution to the High Online Drop-Out Rate,” Learning & Leading with Technology, 29(6), 42-46.

Rensselaer Polytechnic Institute (2001), Press Release, The Pew Grant Program in Course Redesign at the Center for Academic Transformation, Rensselaer Polytechnic Institute [Online]. (www.rpi.edu/web/News/press_releases/2001/cat.html)

Russell, T. L. (1999a), No Significant Difference Phenomenon [Online]. (www.nosignificantdifference.org/nosignificantdifference/)

----- (1999b), Significant Difference Phenomenon [Online]. (www.nosignificantdifference.org/significantdifference/)

Ryan, R. C. (2000), “Student Assessment Comparison of Lecture and Online Construction Equipment and Methods Classes,” T. H. E. Journal [Online], 27(6), 78. (www.thejournal.com/magazine/vault/A2596.cfm)

Schulman, A. H., and Sims, R. L. (1999), “Learning in an Online Format versus an In-Class Format: An Experimental Study,” T. H. E. Journal [Online], 26(11), 54-56. (www.thejournal.com/magazine/vault/A2090B.cfm)

Stephenson, W. R. (2001), “Statistics at a Distance,” Journal of Statistics Education [Online], 9(3). (jse.amstat.org/v9n3/stephenson.html)

Tucker, S. (2001), “Distance Education: Better, Worse, Or As Good As Traditional Education?,” Journal of Distance Learning Administration [Online], 4(4). (http://www.westga.edu/~distance/ojdla/winter44/tucker44.html)

Utts, J., Sommer, B., Acredolo, C., Maher, M. W., and Matthews, H. R. (2003), “A Study Comparing Traditional and Hybrid Internet-Based Instruction in Introductory Statistics Classes,” Journal of Statistics Education [Online], 11(3). (jse.amstat.org/v11n3/utts.html)

Yablon, Y. B., and Katz, Y. J. (2001), “Statistics Through the Medium of the Internet: What Students Think and Achieve,” Academic Exchange Quarterly, 5(4), 17-22.

Young, J. R. (2002), “‘Hybrid’ Teaching Seeks to End the Divide Between Traditional and Online Instruction,” The Chronicle of Higher Education, 48(28), A33-A34.


Barbara B. Ward
Department of Mathematics and Computer Science
Belmont University
1900 Belmont Blvd.
Nashville, TN 37212-3758
U.S.A.
wardb@mail.belmont.edu

