Daniel R. Jeske, Scott M. Lesch and Hongjie Deng
University of California, Riverside
Journal of Statistics Education Volume 15, Number 3 (2007), jse.amstat.org/v15n3/jeske.html
Copyright © 2007 by Daniel R. Jeske, Scott M. Lesch and Hongjie Deng all rights reserved. This text may be freely shared among individuals, but it may not be republished in any medium without express written consent from the author and advance notification of the editor.
Key Words: Statistical Consulting, Graduate Education, Bradley-Terry Model, Multiple Comparisons
The Department of Statistics at the University of California at Riverside formally established a Statistical Consulting Collaboratory in the Fall of 2003. Agreeing with Carter, Scheaffer and Marks (1986), the first priority of the Collaboratory is to contribute effectively to the academic objectives of the Statistics Department. The Collaboratory is uniquely positioned to do this through the development and application of statistical methods to real-world problems. Specific contributions the Collaboratory is making include: 1) curriculum material for the department’s graduate-level statistical consulting class that addresses traditional pedagogical objectives [see, for example, Hertzberg, Clark and Brogan (2000), Taplin (2003), Johnson and Warner (2004), and Birch and Morgan (2005)], 2) curriculum material that both reinforces and broadens student knowledge in statistical methodology, 3) consulting opportunities for undergraduate and graduate students, 4) research opportunities that can develop into PhD dissertation topics, and 5) resume-building activities for students through publication opportunities and industry internships made available through the Collaboratory client network.
The importance of skills on the nontechnical side of consulting has been discussed elsewhere [see, for example, Boen and Zahn (1982), Kirk (1991), Derr (2000)]. While these skills are essential, an equally (arguably more) important skill is broad technical expertise that enables choosing a correct analysis on the basis of informed judgments. A distinguishing characteristic of the Collaboratory is its ability to promote a statistical consulting pedagogy that goes beyond pragmatic solutions to consulting problems by exploring additional statistical methodology that is related to the client problem. Through this influence, the Collaboratory enhances the students’ ability to select appropriate methodology for a given problem. Moreover, it cultivates a curiosity and a self-sufficiency, which are attributes Russell (2001) discusses as crucial for a statistical consultant. In this paper, a case study is described that illustrates how the Collaboratory achieves these objectives. While case studies on statistical consulting have appeared in other work [see, for example, Tweedie (1998) and Cabrera and McDougall (2002)], our case study is different in that it specifically highlights how the Collaboratory influenced curriculum for the statistical consulting class. For example, references are given throughout to exercises included in Appendix A that were used as class assignments. The exercises by themselves are interesting and could be used as a supplement for a variety of statistics classes.
In the remainder of this Section, a more detailed overview of the mission of the Collaboratory is provided, and the specific consulting problem is introduced. In Sections 2-4, the major tasks associated with the consulting problem are discussed and the opportunities they provided to enhance graduate student training are highlighted. Within each of these sections, the statistical methods used to solve the consulting problem are first introduced and then accounts of the educational benefits generated by the consulting problem are detailed. The paper concludes with discussion in Section 5.
Clients of the Statistical Consulting Collaboratory include professors, graduate students and University administrators. Clients to date have been affiliated both with UC Riverside and also other local universities. In addition, the Collaboratory attracts industry clients through both personal networking and referrals. In the framework of Does and Zempleni (2001), the Collaboratory is a hybridization of a noncommercial and commercial consulting unit, though to date the Collaboratory does not aggressively market itself to off-campus clients. The Collaboratory is directed by a tenured faculty member and employs a full-time Associate Director with an M.S. degree in Statistics. While the Director position is ultimately responsible for all of the activities of the Collaboratory, his/her primary emphasis is to create and nurture opportunities within the Collaboratory that make it look and operate like an academic unit. The primary role of the Associate Director is to lend his/her technical consulting skills to projects, but other significant responsibilities include supervising Collaboratory Research Assistants (CRAs) and managing some administrative aspects of the Collaboratory. The Collaboratory has typically supported 2-3 CRAs during the academic year with partial research assistantships. During the summer months a larger number of opportunities for part-time employment are available. While the majority of the CRAs are graduate students in Statistics, undergraduate students from both Statistics and other departments (e.g., Computer Science, Business and Mathematics) within the University have made contributions to some of the industry client projects along the lines of database construction and development of customized data processing routines.
The Director and Associate Director hire CRAs based on the level of their experience with applied statistics and/or computer skills, their demonstrated work ethic, and their interest for gaining experience with statistical consulting.
Projects that are taken on by the Collaboratory loosely fall into two categories: Service or Collaboration. Service describes projects that utilize standard statistical methods, both well-known and less well-known to the clients. Collaboration describes projects where there is some aspect of novelty either in the development or application of statistical methodology. To fund the support that is provided to students working in the Collaboratory, fees are assessed for service projects. While the University provides the salary and benefits for the Associate Director, the fees are also intended to cover miscellaneous expenses such as software licenses and office supplies. Some projects start out as service but evolve into collaboration. When the transition to collaboration occurs and it is reasonable to expect the research could eventually be published in a Statistics journal, fees no longer apply. In the best collaborative relationships, a joint grant proposal would also be submitted.
The Statistics Department at UC Riverside has a mandatory three-quarter class on Statistical Consulting for both MS and PhD graduate students. The Consulting Class (CC) is taught by the Collaboratory Director and usually has 10-15 graduate students enrolled who are at least in the second year of their program. A great majority of the material covered in the CC is related to Collaboratory projects. Client visitations provide opportunities for the students to gain experience listening to clients and eliciting information that helps formulate objectives for the projects. Students are assigned to work on consulting projects independently and also in small groups. Lectures provide the students the necessary background they need to complete tasks associated with the projects. Throughout the duration of their work on the projects, students schedule meetings with the Director and/or Associate Director for additional direction and advice. Typically, students will have at least one interim meeting with the client before delivering a final presentation to them. The Director formulates homework exercises relating to each of the projects being addressed in the class. Appendix A contains the exercises that were extracted from the project being presented in this case study. The CC is a letter-grade class, and includes a final exam that covers the statistical methodology relating to the consulting projects that were discussed during the quarter.
In order to respect proprietary issues, the client in this case study is referred to as Organization X. Organization X was an off-campus client that had $t$ competing plans for a product’s architecture, which in this paper will be indexed by the integer values 1 to t. With t alternative plans, there are $\binom{t}{2}$ pairs of plans, and a panel of independent judges was enlisted to evaluate each pair of plans. Individual judges could only serve on one panel, a logistics constraint that arose due to the fact the judges were animals. Let $n_{ij}$ denote the number of judges in the panel that compares the $(i,j)$ pair of plans.
Each judge within a panel was able to express a preference as to which of the two plans is preferable. Prior to meeting with the Collaboratory, the data were analyzed by Organization X to construct approximate confidence intervals for the probabilities that a given plan is preferable to another plan. Let $\pi_{ij}$ denote the probability that plan i is preferred over plan j when a judge compares the two plans. Assuming that the judges within a panel are a random sample from the targeted population for the product, the number of expressed preferences for plan i when it is compared with plan j, say $Y_{ij}$, follows a binomial distribution with trial parameter $n_{ij}$ and success probability $\pi_{ij}$. The $Y_{ij}$ observations from an Organization X experiment with $t=5$ are shown in Table 1. (The numbers enclosed by parentheses in Table 1 represent expected cell frequencies and will be discussed in Section 3.1.) The intent was to use 30 judges for each panel. However, only 29 judges participated in the $(1,2)$, $(3,4)$ and $(3,5)$ panels. Approximate confidence intervals for $\pi_{ij}$ were constructed by the client from the formula $\hat{\pi}_{ij} \pm z_{\alpha/2}\sqrt{\hat{\pi}_{ij}(1-\hat{\pi}_{ij})/n_{ij}}$, where $\hat{\pi}_{ij} = Y_{ij}/n_{ij}$ [see, for example, Mendenhall, Beaver and Beaver (2006)].
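As a concrete illustration of the client's interval (a sketch, not the client's actual code), the Wald formula above applied to the panel comparing plans 1 and 2, where 20 of 29 judges preferred plan 1:

```python
import math

def wald_ci(y, n, z=1.96):
    """Approximate 95% Wald confidence interval for a binomial proportion:
    pi_hat +/- z * sqrt(pi_hat * (1 - pi_hat) / n)."""
    p = y / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Panel comparing plans 1 and 2 in Table 1: Y_12 = 20 of n_12 = 29 judges
lo, hi = wald_ci(20, 29)  # -> approximately (0.52, 0.86)
```

Intervals of this form were the extent of the client's pre-Collaboratory analysis; they quantify individual preference probabilities but stop short of a ranking procedure.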
                          Plan j
Plan i      1            2            3            4            5
  1         --           20 (23.4)    22 (20.3)    20 (16.6)     1 (2.7)
  2          9 (5.6)     --            6 (9.9)      7 (6.8)      1 (0.7)
  3          8 (9.8)     24 (20.0)    --            8 (10.8)     2 (1.3)
  4         10 (13.4)    23 (23.2)    21 (18.2)    --            3 (2.2)
  5         29 (27.2)    29 (29.3)    27 (27.7)    27 (27.8)    --

Table 1. Observed and Estimated Expected Cell Frequencies for the Organization X Experiment
While Table 1 is the only data set provided to the Collaboratory by Organization X, they in fact do many experiments of the same type. The main goal of Organization X was to learn about the most appropriate type of analyses for data sets of this kind so that they could perform future analyses themselves. Organization X expressed specific interest in using the data to rank order the plans and to identify which plans were the “best” in a statistically significant way. Clearly, the confidence intervals they computed stop short of a formal ranking procedure. Organization X also expressed an interest in exploring the quality of their experimental design. In particular, they wanted to know if an alternative design could be employed that utilized fewer panels yet still provided enough information to adequately compare the alternative plans. A design that utilizes fewer panels would be attractive from the standpoint that it would be simpler to manage. The client noted that any proposed alternative design needed to adhere to the constraint that no judge could be asked to compare more than two plans.
Organization X requested a proposal from the Collaboratory that outlined tasks and deliverables associated with their stated goals. During the winter quarter of 2005, the objectives of the consulting problem were introduced to a CC, along with the existing mode of data analysis being carried out by Organization X. The students were first asked to participate in a brainstorming discussion about what could be done for this client. To guide the discussion and provide some relevant technical background, a detailed introduction to the Bradley-Terry (1952) model for analyzing a paired comparison experiment was provided to the students. The students were then asked on a homework assignment to individually write a summary of this discussion and identify open issues concerning the proposed analysis techniques.
The Director then led a class discussion on basic proposal writing concepts and workload estimation techniques and used the submitted homework assignments to develop a draft of the proposal. Although the Director ultimately wrote the proposal, the students were able to observe firsthand what this activity involves. For most of the students, it was their first exposure to the challenge of setting a realistic project schedule that includes milestones and estimated costs. The students also observed the importance of performing background technical work (i.e., acquiring fundamental knowledge about Bradley-Terry models) that can help make a proposal more compelling to a client.
After the proposal was submitted to Organization X, an iterative feedback loop with the client began and the CC was kept abreast of the proposal progress. In one instance, the client requested customized software that would automate the proposed analysis methods to the extent they could import their data into one computer program and get every aspect of their analysis as the output. In a sense, the request was for an expert system, which was more than the Director wanted to commit to. As an alternative, it was suggested to the client that the analyses be done with off-the-shelf statistical software packages such as R or SAS, but not necessarily with one program and not necessarily without some human oversight. The students observed this decision-making process, and were also exposed to some of the important issues that arose during the proposal negotiating process. For example, they saw that it is acceptable, and even necessary, to declare some requests beyond the scope of the project. Furthermore, they gained a better appreciation for why a proposal must contain well organized, clearly defined tasks and how to avoid the pitfall of being too vague or overextending when writing the scope of work.
The final proposal was ultimately approved by the client in late Spring, 2005. The remaining project work described in this paper began in the late summer of 2005 when a CRA began working on the estimation analyses.
Because ranking the plans based on evaluations from judges is of interest, it is natural to think of the Bradley-Terry (1952) modeling framework. As discussed in the previous section, the design used by Organization X is not the classic case where each judge evaluates every one of the $\binom{t}{2}$ pairs of plans. Nevertheless, the key idea associated with the Bradley-Terry model can be used in conjunction with a logistic regression model for the independent observations $\{Y_{ij}\}$. [Readers wanting a refresher on logistic regression are referred to Dobson (2002) or Agresti (2002).] In particular, suppose the plans have true (fixed and unobservable) merits $\lambda_1, \ldots, \lambda_t$ and suppose the following link function is assumed for $\pi_{ij}$:

$\log\!\left(\frac{\pi_{ij}}{1-\pi_{ij}}\right) = \lambda_i - \lambda_j .$   (1)
It follows from equation (1) that $\pi_{ij} = e^{\lambda_i-\lambda_j}/(1+e^{\lambda_i-\lambda_j})$, so that if the ith and jth plans have the same merit then $\pi_{ij} = 1/2$, and otherwise larger $\lambda_i - \lambda_j$ contrasts imply larger $\pi_{ij}$ values. It is also clear that it is not the values of the $\lambda_i$ that are important, but only their relative differences $\lambda_i - \lambda_j$. In fact, only the differences $\lambda_i - \lambda_j$ are identifiable in this model. The link function (1) may look a little peculiar for logistic regression contexts, but looks more familiar when written as $\log[\pi_{ij}/(1-\pi_{ij})] = x_{ij}'\lambda$, where $\lambda = (\lambda_1, \ldots, \lambda_t)'$ and $x_{ij}$ is the $t \times 1$ vector with $+1$ in the ith position and $-1$ in the jth position.
It follows that the likelihood function for $\{\pi_{ij}\}$ based on the $\{Y_{ij}\}$ is

$L \propto \prod_{i<j} \pi_{ij}^{Y_{ij}} (1-\pi_{ij})^{n_{ij}-Y_{ij}}$ ,   (2)

where constants of proportionality have been neglected. Using equation (1), an equivalent representation of the likelihood in terms of $\lambda$ is

$L(\lambda) \propto \prod_{i<j} \frac{e^{Y_{ij}(\lambda_i-\lambda_j)}}{\left(1+e^{\lambda_i-\lambda_j}\right)^{n_{ij}}}$ .   (3)
Since only the differences $\lambda_i - \lambda_j$ are identifiable, an identifiability constraint is required when seeking the maximizing values $\hat{\lambda}_1, \ldots, \hat{\lambda}_t$ from equation (3). The constraint $\lambda_1 = 0$ is used by the R package, $\lambda_t = 0$ is used by SAS, and $\sum_{i=1}^{t}\lambda_i = 0$ is used in some of the Bradley-Terry model literature. Strauss (1992) discusses how standard logistic regression software packages can be used to compute the $\hat{\lambda}_i$.
Once the $\hat{\lambda}_i$ have been obtained, maximum likelihood (ML) estimates of the $\pi_{ij}$ are obtained as $\hat{\pi}_{ij} = e^{\hat{\lambda}_i-\hat{\lambda}_j}/(1+e^{\hat{\lambda}_i-\hat{\lambda}_j})$. The scores used for ranking the alternative plans are $\hat{S}_i = \frac{1}{t-1}\sum_{j \ne i}\hat{\pi}_{ij}$, $i = 1, \ldots, t$. The ranking scores are ML estimates of the average preference probability of each plan when it is pairwise compared to the other $t-1$ plans.
The R code provided in Appendix B was used with the data $\{Y_{ij}\}$ shown in Table 1 to arrive at the $\hat{\lambda}_i$ values for the client data set. Table 2 shows these values along with the estimated ranking scores $\hat{S}_i$ that are derived from the estimated pairwise preference probabilities $\hat{\pi}_{ij}$ shown in Table 3. The values $\hat{S}_i$ are simply the row means of Table 3. A preliminary conclusion, based on the ranking scores, is that Plan 5 is the most favored and that Plan 2 is the least favored.
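The arithmetic from the $\hat{\lambda}_i$ to Tables 2 and 3 can be sketched in a few lines (Python here for illustration; the project itself used R, and the signs on the $\hat{\lambda}_i$ below are inferred from the ranking order under the $\lambda_1 = 0$ constraint):

```python
import numpy as np

# Fitted merits from Table 2, plans 1-5, with lambda_1 fixed at 0
lam = np.array([0.0, -1.437, -0.731, -0.213, 2.302])

# pi_hat[i, j] = P(plan i preferred over plan j) under the Bradley-Terry link
diff = lam[:, None] - lam[None, :]
pi_hat = 1.0 / (1.0 + np.exp(-diff))

# Ranking score of plan i: average of pi_hat[i, j] over the other t - 1 plans
t = len(lam)
scores = (pi_hat.sum(axis=1) - 0.5) / (t - 1)  # subtract the diagonal pi_ii = 1/2
```

The row means `scores` reproduce the $\hat{S}_i$ column of Table 2 and put the plans in the order 5, 1, 4, 3, 2.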
Plan    $\hat{\lambda}_i$    $\hat{S}_i$    Ranking
 1         0.000               0.53           2
 2        -1.437               0.19           5
 3        -0.731               0.36           4
 4        -0.213               0.48           3
 5         2.302               0.94           1

Table 2. Ranking Analysis of Alternative Plans
                  Plan j
Plan i      1       2       3       4       5
  1         --      0.81    0.67    0.55    0.09
  2         0.19    --      0.33    0.23    0.02
  3         0.33    0.67    --      0.37    0.05
  4         0.45    0.77    0.63    --      0.07
  5         0.91    0.98    0.95    0.93    --

Table 3. Estimated Pairwise Preference Probabilities $\hat{\pi}_{ij}$
The client expressed a preference for using R since it is freeware, and through internet searching the CRA identified the R function BTm (Appendix B) to facilitate the ranking analysis shown in Table 2. Understanding how to use BTm in connection with the notation and parameterization presented in Bradley and Terry (1952) was not a trivial task, and provided the CRA with an appreciation for how to link theory to packaged statistical software.
By this time the 2005 fall quarter had begun and a new CC was available to participate in the work. The CRA prepared a draft of slides that summarized the ML analysis and, in preparation for a client meeting, presented them to the CC for peer review. Through this experience, both the CRA and the CC learned the importance of practicing presentations while preparing for a client meeting. In particular, the students learned how to assemble material that would answer client questions and also teach the client about statistical methods relevant to their problem. Especially in academic consulting environments, teaching is a strong element of the client-consultant relationship.
The students in the CC were asked to write their own Newton-Raphson algorithm in R to verify the ML analysis carried out by the CRA. The primary purposes of this assignment were to have the students confront the issue that only the differences $\lambda_i - \lambda_j$ are identifiable in the model, and to show them more clearly the necessity of a constraint such as $\lambda_1 = 0$ on the solution to the likelihood equations. In addition, the assignment reviewed a fundamental numerical optimization technique, and asked the students to think through the details of implementing the technique in a programming language. For a few students, this was their first experience with the practical details of implementing an optimization technique. The assignment also asked the students to check and compare the computational results across two software packages (R and SAS).
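A minimal version of this assignment can be sketched as follows (Python for illustration; the class used R and SAS). It runs Newton-Raphson on the log of likelihood (3), imposing $\lambda_1 = 0$ by updating only the remaining merits:

```python
import numpy as np

# Observed counts from Table 1: Y[i][j] = # judges preferring plan i over plan j
Y = np.array([[0, 20, 22, 20, 1],
              [9, 0, 6, 7, 1],
              [8, 24, 0, 8, 2],
              [10, 23, 21, 0, 3],
              [29, 29, 27, 27, 0]], dtype=float)
n = Y + Y.T  # panel sizes n_ij

def fit_bt(Y, n, iters=25):
    """Newton-Raphson for the Bradley-Terry log-likelihood, lambda_1 = 0."""
    t = Y.shape[0]
    lam = np.zeros(t)
    for _ in range(iters):
        pi = 1.0 / (1.0 + np.exp(lam[None, :] - lam[:, None]))  # pi_ij
        grad = (Y - n * pi).sum(axis=1)        # score: d loglik / d lambda_i
        W = n * pi * (1.0 - pi)                # binomial weights
        H = W - np.diag(W.sum(axis=1))         # Hessian of the log-likelihood
        step = np.linalg.solve(H[1:, 1:], grad[1:])  # lambda_1 held fixed
        lam[1:] -= step
    return lam

lam_hat = fit_bt(Y, n)  # reproduces the merit estimates reported in Table 2
```

Because the comparison graph is connected, the Hessian restricted to $\lambda_2, \ldots, \lambda_5$ is nonsingular and the iteration converges quickly from the starting value of all zeros.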
The literature associated with the Bradley-Terry model frequently expresses the likelihood shown in equation (3) in terms of rank quantities $r_{ijk}$, where $r_{ijk}$ is the rank (1 or 2, with 1 corresponding to “preferred”) of the ith plan when it is compared to the jth plan by the kth judge. Exercise 1 in Appendix A was used to guide the students through a translation that connects equation (3) to the classic notation used for the Bradley-Terry model.
The logistic regression model that uses the Bradley-Terry link function is a reduction of a saturated model that has a separate binomial parameter for each of the $\binom{t}{2}$ panels. The likelihood function for the saturated model would simply be equation (2) without the assumed link function given by equation (1). The ML estimates under the saturated model are easily seen to be $\tilde{\pi}_{ij} = Y_{ij}/n_{ij}$. A goodness-of-fit test [see, for example, Dobson (2002) or Agresti (2002)] can be made using the deviance statistic $D = 2\sum_{i \ne j} Y_{ij}\log(Y_{ij}/\hat{E}_{ij})$, where $\hat{E}_{ij} = n_{ij}\hat{\pi}_{ij}$ is the expected cell frequency under the fitted model. Under the null hypothesis that the logistic regression model with the Bradley-Terry link function is an adequate reduced model, $D$ follows a chi-square distribution with $\binom{t}{2} - (t-1)$ degrees of freedom. For the Organization X data, $D = 10.4$. With $t = 5$ the null degrees of freedom for the chi-square distribution are $10 - 4 = 6$, and hence the p-value for model adequacy is 0.11, suggesting the reduced model offers an adequate fit.
A visual way to illustrate the adequacy of the reduced model is to compare the observed and expected cell frequencies corresponding to Table 1. The numbers in Table 1 that are in parentheses are the expected cell frequencies $\hat{E}_{ij} = n_{ij}\hat{\pi}_{ij}$ according to the fitted model, and the model adequacy is again reflected by the closeness of the observed and expected cell frequencies.
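The deviance computation can be sketched directly from the observed counts and the fitted merits of Table 2 (the signs on the merits are inferred from the ranking order). The chi-square tail probability has a closed form for even degrees of freedom, so no statistics library is needed:

```python
import math
import numpy as np

Y = np.array([[0, 20, 22, 20, 1],
              [9, 0, 6, 7, 1],
              [8, 24, 0, 8, 2],
              [10, 23, 21, 0, 3],
              [29, 29, 27, 27, 0]], dtype=float)
n = Y + Y.T
lam = np.array([0.0, -1.437, -0.731, -0.213, 2.302])  # Table 2 merits

pi_hat = 1.0 / (1.0 + np.exp(lam[None, :] - lam[:, None]))
E = n * pi_hat  # expected cell frequencies (the parenthesized values in Table 1)

mask = ~np.eye(5, dtype=bool)
D = 2.0 * np.sum(Y[mask] * np.log(Y[mask] / E[mask]))  # deviance statistic

df = 10 - 4  # C(5,2) panels minus (t - 1) free merit parameters
# For even df = 2k: P(chi2 > D) = exp(-D/2) * sum_{j<k} (D/2)^j / j!
x = D / 2.0
p_value = math.exp(-x) * sum(x**j / math.factorial(j) for j in range(df // 2))
```

This reproduces the deviance of about 10.4 on 6 degrees of freedom and the p-value of 0.11 quoted in the text.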
The $\hat{S}_i$ scores (shown in Table 2) provide a ranking of the alternative plans, but do not by themselves give an indication as to which of the differences $\lambda_i - \lambda_j$ are nonzero. Holm’s (1979) sequential Bonferroni (SB) procedure was used to determine which of the estimated differences, $\hat{\lambda}_i - \hat{\lambda}_j$, are significantly different from zero in a statistical sense. Table 4 shows the ML estimates of each contrast, their asymptotic standard errors, the z-scores for the hypotheses $H_0: \lambda_i - \lambda_j = 0$, and the corresponding unadjusted and SB adjusted p-values. Significance levels of 5% (*) and 1% (**) are also indicated in the table.
Contrast    ML Est.   Std. Error   z-score   Unadjusted p-value   SB Adjusted p-value
λ1 - λ2      1.437      0.294        4.88        <0.0001             <0.0001 **
λ1 - λ3      0.731      0.269        2.71         0.0067              0.0269 *
λ1 - λ4      0.213      0.264        0.81         0.4180              0.4180
λ5 - λ1      2.302      0.416        5.53        <0.0001             <0.0001 **
λ3 - λ2      0.706      0.280        2.52         0.0117              0.0352 *
λ4 - λ2      1.223      0.288        4.24        <0.0001              0.0001 **
λ5 - λ2      3.739      0.450        8.31        <0.0001             <0.0001 **
λ4 - λ3      0.517      0.267        1.94         0.0524              0.1048
λ5 - λ3      3.033      0.431        7.03        <0.0001             <0.0001 **
λ5 - λ4      2.516      0.420        5.98        <0.0001             <0.0001 **

Table 4. Sequential Bonferroni Multiple Comparison Procedure
It can be seen from Table 4 that the only contrasts that are not significantly different from zero at the 5% level are those comparing plans 1 and 4 and plans 3 and 4. Figure 1 shows the grouping of the plans based on the 5% significance level, with the usual interpretation that plans that are connected by a line are not significantly different.
Figure 1. Multiple Comparison Groupings of Plans (5% Significance Level)
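Holm's adjustment can be reproduced directly from the z-scores in Table 4 (the assignment of z-scores to plan pairs below is inferred from the contrast estimates; two-sided p-values come from the standard normal distribution):

```python
import math

# z-scores from Table 4, keyed by the plan pair each contrast compares
z = {"1-2": 4.88, "1-3": 2.71, "1-4": 0.81, "1-5": 5.53, "2-3": 2.52,
     "2-4": 4.24, "2-5": 8.31, "3-4": 1.94, "3-5": 7.03, "4-5": 5.98}
p = {c: math.erfc(abs(v) / math.sqrt(2)) for c, v in z.items()}  # 2*(1 - Phi(|z|))

# Holm: sort p ascending, multiply the k-th smallest by (m - k),
# k = 0, 1, ..., then enforce monotonicity with a running maximum.
m = len(p)
adj, running = {}, 0.0
for k, (c, pv) in enumerate(sorted(p.items(), key=lambda kv: kv[1])):
    running = max(running, (m - k) * pv)
    adj[c] = min(1.0, running)

not_significant = {c for c, pv in adj.items() if pv > 0.05}
# -> the pairs (1,4) and (3,4), matching the groupings in Figure 1
```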
A detailed discussion of the goodness-of-fit test for generalized linear models, with particular emphasis on how it applies to the consulting problem, was provided in the CC. Exercise 2 in Appendix A was assigned to ensure the students understood how to compute the degrees of freedom associated with the null distribution of the goodness-of-fit statistic and how to interpret the results. The time spent discussing the goodness-of-fit test set a good example for the students of how consultants need to pay attention to the adequacy of models presented to their clients, as they are typically the only ones in a position to make such evaluations.
The sequential Bonferroni procedure was not the first method considered for doing the multiple comparison test of the pairwise contrasts. Instead, an alternative method was developed in the CC based on the fact that under the null hypothesis $H_0: \lambda_1 = \cdots = \lambda_t$ the $Y_{ij}$ are independently distributed binomial random variables with parameters $(n_{ij}, 1/2)$. Hence, the null distribution of the maximum absolute standardized contrast, say $Q$, can be determined to an arbitrary precision via Monte Carlo simulation, as it depends only on the sample sizes $n_{ij}$. Two plans would be declared different if and only if their standardized contrast exceeds $q_{.05}$, where $q_{\alpha}$ denotes the upper $\alpha$ percentile of the null distribution of Q. Students in the CC were asked to develop a simulation algorithm to estimate $q_{.05}$ for the client data set. The motivation for this exercise was to reinforce the role and usefulness of simulation studies when solving applied problems.
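A minimal version of the class simulation can be sketched as follows, under the assumption (made here for illustration) that Q is the maximum of the ten absolute standardized panel statistics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Panel sizes n_ij from Table 1, in lexicographic pair order
n_panels = np.array([29, 30, 30, 30, 30, 30, 30, 29, 29, 30])

# Under H0 every pi_ij = 1/2, so each Y_ij is Binomial(n_ij, 1/2)
sims = 20000
Y = rng.binomial(n_panels, 0.5, size=(sims, 10))

# Standardized panel statistics and their maximum over the ten panels
Z = np.abs(Y - n_panels / 2.0) / np.sqrt(n_panels / 4.0)
Q = Z.max(axis=1)

q05 = np.quantile(Q, 0.95)  # Monte Carlo estimate of the 5% critical value
```

Because the null distribution depends only on the $n_{ij}$, the simulation can be run once and reused for any data set collected under the same design.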
The multiple comparison procedure based upon Q has the property that the probability of at least one false positive under the complete null hypothesis $H_0: \lambda_1 = \cdots = \lambda_t$ is exactly .05, and as such would seem to offer something stronger than other, more conservative procedures. However, it turns out that while this method exhibits weak control of the Type I familywise error rate, it does not exhibit strong control. The distinction between weak and strong control for a multiple comparison procedure [see, for example, Westfall and Young (1993), Romano and Wolf (2005)] is very important, but is not well known. The consulting problem provided a natural context to expose the concept in a lucid and accessible manner. Exercise 3 in Appendix A guided the students through this learning process.
The experimental design used by Organization X was balanced in the sense that all 10 pairs of plans were evaluated by a panel of judges. Organization X expressed an interest in knowing if there was a viable alternative to running a panel for each of the pairs of plans, while at the same time still being able to rank the plans and assess significance. The CRA suggested an alternative “cyclic” design that employs only four panels comparing the following plan pairs: $(1,2)$, $(2,3)$, $(3,4)$ and $(4,5)$. Within the environment of Organization X, the cyclic design would be significantly simpler to manage. If the Bradley-Terry link function is assumed to hold for all 10 pairs of plans, then all the contrasts $\lambda_i - \lambda_j$ ($i < j$) remain estimable with the cyclic design.
One way to compare the balanced and cyclic designs is to evaluate and compare the power of the likelihood ratio test of $H_0: \lambda_1 = \cdots = \lambda_5$ under each design. For the balanced design, the full likelihood is $L_B(\lambda) \propto \prod_{i<j}\pi_{ij}^{Y_{ij}}(1-\pi_{ij})^{n_{ij}-Y_{ij}}$, with $\pi_{ij}$ given by equation (1). For the cyclic design, the full likelihood $L_C(\lambda)$ is the analogous product restricted to the panels $(1,2)$, $(2,3)$, $(3,4)$ and $(4,5)$. The likelihood ratio test (LRT) statistics for the two designs are $\Lambda_B = 2[\log L_B(\hat{\lambda}) - \log L_B(0)]$ and $\Lambda_C = 2[\log L_C(\hat{\lambda}) - \log L_C(0)]$, respectively, where $\hat{\lambda}$ denotes the ML estimate under the respective design and $0$ denotes the null value $\lambda_1 = \cdots = \lambda_5 = 0$. In both cases, the null distribution of the LRT is approximately chi-square with 4 degrees of freedom.
Power for the balanced design was computed for the case where each of the 10 panels had 30 judges (the approximate panel sizes utilized by Organization X) and the power of the cyclic design was computed for the case where the 4 panels each had 75 judges (i.e., 300 judges for both designs). For a given alternative $\lambda$, power for the balanced design was computed by: 1) simulating 1000 data sets consisting of observations $Y_{ij}$ that are independent binomial random variables with trial parameter equal to 30 and success parameters equal to $\pi_{ij} = e^{\lambda_i-\lambda_j}/(1+e^{\lambda_i-\lambda_j})$, and 2) computing the fraction of the data sets for which $\Lambda_B$ is greater than $\chi^2_{.05}(4) = 9.49$. For the same alternatives, power for the cyclic design was computed by: 1) simulating 1000 data sets consisting of observations $Y_{12}$, $Y_{23}$, $Y_{34}$ and $Y_{45}$ that are independent binomial random variables with trial parameter equal to 75 and respective success parameters equal to $\pi_{12}$, $\pi_{23}$, $\pi_{34}$ and $\pi_{45}$, and 2) computing the fraction of the data sets for which $\Lambda_C$ is greater than $\chi^2_{.05}(4) = 9.49$.
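The cyclic half of this simulation is especially simple to sketch, because the four free contrasts saturate the four panels and $\Lambda_C$ has a closed form (Python here for illustration; the class work was done in R). The run below uses the null case $\lambda = 0$, so the estimated "power" is the size of the test, corresponding to the first row of Table 5:

```python
import numpy as np

rng = np.random.default_rng(7)
n, sims = 75, 4000
crit = 9.488  # upper 5% point of the chi-square distribution with 4 df

# Null case: every pi_ij = 1/2 for the four cyclic panels
Y = rng.binomial(n, 0.5, size=(sims, 4))

def xlogx(y):
    """y * log(y) elementwise, with 0 * log(0) taken as 0."""
    return np.where(y > 0, y * np.log(np.maximum(y, 1.0)), 0.0)

# Saturated fit gives pi_hat = Y/n per panel, so
# Lambda_C = 2 * sum over panels of [Y log(Y/(n/2)) + (n-Y) log((n-Y)/(n/2))]
lrt = 2.0 * (xlogx(Y) + xlogx(n - Y) - n * np.log(n / 2.0)).sum(axis=1)
power = (lrt > crit).mean()  # should be close to the nominal .05
```

Power under a nonzero alternative is obtained the same way, replacing the success probability 0.5 with the $\pi_{12}$, $\pi_{23}$, $\pi_{34}$, $\pi_{45}$ implied by that alternative.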
Results of the power simulations are shown in Table 5 for 14 different alternatives $\lambda$. It can be seen that for 12 of the alternatives considered, the balanced design has considerably more power than the cyclic design. For these 12 alternatives, the loss of information from using fewer panels is not compensated for by using more judges in each panel. For the alternatives in rows 8 and 9 the cyclic design has higher power. Higher power for the cyclic design in these two cases occurs because a higher proportion of the nonzero contrasts are cyclic and hence the larger panel sizes are able to wield a bigger impact. In the latter alternative, for example, four of the six nonzero contrasts are cyclic. Unfortunately for the cyclic design, its advantage for specific alternatives cannot be exploited in the absence of a priori information about $\lambda$.
Alternative      Power
                 Balanced Design    Cyclic Design
 1                   .06                .05
 2                   .11                .08
 3                   .18                .10
 4                   .18                .078
 5                   .54                .38
 6                   .74                .34
 7                   .78                .38
 8                   .54                .65
 9                   .75                .95
10                   .95                .77
11                   .99                .75
12                   .99                .80
13                   .12                .074
14                   .55                .37

Table 5. Power Comparison of the Balanced and Cyclic Designs
The precision of contrast estimates can also be used to assess the sensitivity of competing designs. For a fixed alternative $\lambda$, the standard errors of all 10 pairwise contrasts $\hat{\lambda}_i - \hat{\lambda}_j$ were estimated under both the balanced and cyclic designs using an additional simulation study. Table 6 shows estimated standard errors from 1000 simulated data sets. As might be expected, in the balanced design all contrasts exhibit the same standard error while the same is not true for the cyclic design. In the cyclic design, the standard error of a contrast depends on how many panels have to be utilized in order to estimate the contrast. For example, contrasts comparing the plans (1,2), (2,3), (3,4) or (4,5) are estimated with the highest precision since these contrasts are directly estimable from the panels that were run. Contrasts comparing (1,3), (2,4) and (3,5) have less precision because they require utilizing two of the panels that were run. For example, the contrast estimate $\hat{\lambda}_1 - \hat{\lambda}_3$ can be viewed as $(\hat{\lambda}_1 - \hat{\lambda}_2) + (\hat{\lambda}_2 - \hat{\lambda}_3)$, where the two terms come from panels that were actually run in the cyclic design. Similarly, contrasts comparing (1,4) and (2,5) require utilizing three of the panels that were run and the contrast comparing (1,5) requires utilizing all four of the panels that were run. The fact that the cyclic design does not estimate all contrasts with equal precision complicates how a practitioner would go about assigning labels to the plans.
Contrast      Estimated Standard Errors
              Balanced Design    Cyclic Design
λ1 - λ2           .23                .24
λ1 - λ3           .23                .33
λ1 - λ4           .24                .42
λ1 - λ5           .24                .48
λ2 - λ3           .23                .24
λ2 - λ4           .23                .34
λ2 - λ5           .24                .41
λ3 - λ4           .24                .24
λ3 - λ5           .24                .33
λ4 - λ5           .24                .24

Table 6. Monte Carlo Standard Error Estimates for the Pairwise Contrasts under a Fixed Alternative
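The pattern in Table 6 can be checked with a back-of-the-envelope delta-method calculation rather than a simulation. The sketch below assumes panels of 75 judges with preference probabilities near 1/2, so the numbers are close to, but not exactly, the simulated entries:

```python
import math

# One cyclic panel of n judges: the delta method gives
# Var(logit(p_hat)) ~ 1 / (n * p * (1 - p)); take p near 1/2.
n = 75
se_one = math.sqrt(1.0 / (n * 0.5 * 0.5))  # about .23, cf. the .24 entries

# A contrast spanning k adjacent panels is a sum of k independent
# single-panel contrasts, e.g. lambda_1 - lambda_3 =
# (lambda_1 - lambda_2) + (lambda_2 - lambda_3), so its SE scales as sqrt(k).
se_chain = {k: math.sqrt(k) * se_one for k in (1, 2, 3, 4)}
```

The resulting values of roughly .23, .33, .40 and .46 for chains of one to four panels track the cyclic-design column of Table 6 closely.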
A particularly nice aspect of this case study is that the client was interested in receiving advice on how to design future experiments. This was ideal for demonstrating to the CC that statistical consulting involves not only data analysis, but experimental design as well. The cyclic design is a minimal design in the sense that there is no design with fewer panels that can still estimate all 10 pairwise contrasts. Exercise 4 in Appendix A asks the student to derive the ML estimates for all of the contrasts under the cyclic design (closed-form expressions exist). The student is also asked to examine the consequences of the minimal nature of the design with respect to the goodness-of-fit of the Bradley-Terry link function.
Power and precision were introduced as criteria to compare the balanced and cyclic designs, and it was pointed out how a fair comparison between the two should have the same number of judges utilized for each case. The Director proposed the set of alternatives with consideration to their implied values for the $\pi_{ij}$. Students in the CC conducted the power comparison by individually taking one of the alternatives $\lambda$ and developing their own R program to obtain Monte Carlo estimates of the power for each design. The experience the students gained while working on the power study further reinforced their programming and simulation skills.
The CRA’s presentation at a client meeting comparing the balanced and cyclic designs was very well received. In fact, the client ranked it as one of the most insightful aspects of the entire project, as it quantitatively justified the balanced design. Additionally, the power and precision metrics were shown to be a useful way to compare alternative designs if and when balanced designs become uneconomical (e.g., when t is large enough to make $\binom{t}{2}$ panels unmanageable).
The role of triangulation analyses (i.e., using two or more methods to verify results) in statistical consulting cannot be emphasized enough. The case study provided an opportunity to show the CC how to be creative in checking the validity of statistical analyses. In particular, a linear model approximation was developed in order to cross-check the ML analysis results, as well as the simulated power and precision estimates. The linear model approximation facilitates a nice link between the consulting problem and statistical methods that students should be very familiar with, and as such opened the door for a number of interesting homework assignments (see Exercises 5-8 in Appendix A).
Throughout the project, the CRA was responsible for preparing slide presentations that were shared with the client at multiple client meetings. While the Director and/or Associate Director were always present at client meetings, the CRA gave the presentation and always had the first opportunity to answer client questions. This was valuable experience for the CRA, as was the process of preparing and rehearsing for the meetings.
Students in the CC typically work on 2-4 different projects at the same time, and the case study presented in this paper is illustrative of how they get involved in a class project. Other types of projects they work on are individual consulting and small group consulting. The motivation for project multiplexing is that it gives the students experience juggling projects and managing competing deadlines within the same course. While 2-4 simultaneous projects may be light by real world standards, it does provide the students a glimpse of what is to come if they were to take a job as a consulting statistician. The workload experience in the CC differs somewhat from the CRA experience, where for a CRA the simultaneous project load is usually capped at two. The rationale for the difference is that CRAs usually “own” their client projects, whereas in the CC the responsibility for some of their projects is shared within a team environment.
The case study described in this paper illustrates how the Statistical Consulting Collaboratory at UC Riverside not only functions to solve client problems, but also significantly enhances the department's ability to teach students statistical consulting skills. The exercises in Appendix A are illustrative of the intentional effort made in the CC to go beyond a pragmatic solution to the consulting project for the client and extract from it additional enriching technical material for the students.
The work tasks associated with the case study enhanced the training of students in three different quarters of the CC, and in addition provided a unique set of experiences for the CRA. Table 7 provides a timeline summary for the major activities. It can be seen that the technical part of the work all transpired over a 12-month period. While 12 months may seem like a long time, the client understood the primary mission of the Collaboratory and was satisfied with incremental progress reports on the various facets of the analyses. Equally important, the timeframe of the project was also dictated by the client’s own pace for being available to receive, digest and provide feedback on the reported progress.
Table 7. Timeline summary of the major project activities.

Academic Quarter | CC Involved? | CRA Involved? | Principal Activities
Winter 2005 | Yes | Yes | Proposal writing and submission.
Spring 2005 | No | No | Proposal reviewed by Organization X and project cost negotiated. Proposal eventually approved.
Summer 2005 | No | Yes | ML analysis of Bradley-Terry model with R program. Multiple client meetings with Organization X.
Fall 2005 | Yes | Yes | Goodness-of-fit test, analysis of cyclic design, power and precision study, linear model analyses. Client meeting with Organization X.
Winter 2006 | No | No | Organization X reviews results and uses R code to analyze new data sets on their own.
Spring 2006 | Yes | No | Multiple comparisons analyses. Weak vs. strong control for multiple comparison methods. Final client meeting with Organization X. Client is billed for the work.
The case study that was presented here is a favorite example of projects handled by the Collaboratory due to the interest level that the Bradley-Terry model elicited from the students. The benefits to the CCs and CRAs reported here are based on firsthand experiences, as the last two authors are former students of the CC and former CRAs as well. The popularity of this project was substantiated by student course evaluations and an increase in the number of students wanting to participate in the Collaboratory as a CRA.
A number of other Collaboratory projects have similarly been amenable to the process of merging the consulting aspects of a Collaboratory project with the educational objectives of the CC. Examples include the application and development of a change-point algorithm for tracking reliability metrics in a data network, the use of Classification and Regression Tree (CART) methodology (and software) to predict success or failure in freshman chemistry classes, and the use of partial least squares for analyzing chemical spectroscopy data. Not every project that comes to the Collaboratory can be integrated into the CC the way our case study was. For example, projects with very short timelines may need a more direct and efficient effort. However, many projects that cannot be worked with CC involvement in “real time” can still have one or more of their aspects incorporated retrospectively at a later date.
Appendix B
The R code (Version 2.1.1) shown below provides the ML estimates and the variance-covariance matrix of _{} based on the Organization X data shown in Table 1. The key function in the R program is BTm, which fits Bradley-Terry models using the identifiability constraint _{}. Lines 4-8 of the R program create a data matrix of the _{} values. The R structure ‘plan.dat.txta’, created in line 9, formats the _{} according to the requirements of the BTm function. Since _{}, the BTm function only returns _{}, and therefore line 14 is necessary to insert a zero for the first coordinate of _{}. Lines 18-19 similarly append a row and column of zeros to the variance-covariance matrix of _{}, which is returned in line 17.
The BTm function is contained in a library named ‘BradleyTerry’ that needs to be invoked as shown in line 2. Prior to invoking the ‘BradleyTerry’ library, two packages need to be downloaded from CRAN (refer to http://www.r-project.org/) and installed into the local R environment. The two packages are the bias-reduced logistic regression (brlr) package and the Bradley-Terry models (BradleyTerry) package. After downloading the .zip files of these packages to a local hard drive, they can be installed from the ‘Packages’ menu in the R window by selecting the option to “Install packages from local zip files.” The brlr package should be installed first, and then the BradleyTerry package.
Acknowledgements
We would like to thank reviewers and editors of our manuscript for many helpful comments that significantly improved the focus and presentation of our case study. We would also like to thank Theodore Younglove for some helpful discussions pertaining to the consulting aspects of the project.
References

Agresti, A. (2002), Categorical Data Analysis, 2nd edition, Wiley-Interscience, New York.

Birch, J. B. and Morgan, J. P. (2005), "TA Training at Virginia Tech: A Stepwise Progression," The American Statistician, Vol. 59, pp. 14-18.

Boen, J. R. and Zahn, D. A. (1982), The Human Side of Statistical Consulting, Lifelong Learning, Belmont, CA.

Bradley, R. A. and Terry, M. E. (1952), "Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons," Biometrika, Vol. 39, pp. 324-345.

Cabrera, J. and McDougall, A. (2002), Statistical Consulting, Springer, New York.

Carter, R. L., Scheaffer, R. L. and Marks, R. G. (1986), "The Role of Consulting Units in Statistics Departments," The American Statistician, Vol. 40, pp. 260-264.

Derr, J. (2000), Statistical Consulting: A Guide to Effective Communication, Duxbury, Pacific Grove, CA.

Dobson, A. J. (2002), An Introduction to Generalized Linear Models, 2nd edition, Chapman and Hall, Boca Raton, FL.

Does, R. J. M. M. and Zempleni, A. (2001), "Establishing a Statistical Consulting Unit at Universities," Kwantitatieve Methoden, Vol. 67, pp. 51-63.

Hertzberg, V. S., Clark, W. S., and Brogan, D. J. (2000), "Developing Pedagogical and Communications Skills in Graduate Students: The Emory University Biostatistics TATTO Program," Journal of Statistics Education, Vol. 8, No. 3.

Holm, S. (1979), "A Simple Sequentially Rejective Multiple Test Procedure," Scandinavian Journal of Statistics, Vol. 6, pp. 65-70.

Johnson, H. D. and Warner, D. A. (2004), "Factors Relating to the Degree to Which Statistical Consulting Clients Deem Their Consulting Experiences to be a Success," The American Statistician, Vol. 58, pp. 280-286.

Kirk, R. E. (1991), "Statistical Consulting in a University: Dealing With People and Other Challenges," The American Statistician, Vol. 45, pp. 28-33.

Mendenhall, W., Beaver, R. J. and Beaver, B. M. (2006), Introduction to Probability and Statistics, Thomson Brooks/Cole, Belmont, CA.

Romano, J. P. and Wolf, M. (2005), "Exact and Approximate Stepdown Methods for Multiple Hypothesis Testing," Journal of the American Statistical Association, Vol. 100, pp. 94-108.

Russell, K. G. (2001), "The Teaching of Statistical Consulting," in Probability, Statistics and Seismology: A Festschrift for David Vere-Jones, pp. 20-26, edited by D. J. Daley, Applied Probability Trust, Sheffield, UK.

Strauss, D. (1992), "The Many Faces of Logistic Regression," The American Statistician, Vol. 46, pp. 321-327.

Taplin, R. H. (2003), "Teaching Statistical Consulting Before Statistical Methodology," Australian and New Zealand Journal of Statistics, Vol. 45, pp. 141-152.

Tweedie, R. (1998), "Consulting: Real Problems, Real Interactions, Real Outcomes," Statistical Science, Vol. 13, pp. 1-29.

Westfall, P. H. and Young, S. S. (1993), Resampling-Based Multiple Testing, John Wiley & Sons, New York.
Daniel R. Jeske
Department of Statistics
University of California
Riverside, CA 92521
U.S.A.
daniel.jeske@ucr.edu
Scott M. Lesch
Department of Statistics
University of California
Riverside, CA 92521
U.S.A.
slesch@ussl.ars.usda.gov
Hongjie Deng
Department of Statistics
University of California
Riverside, CA 92521
U.S.A.
hdeng001@student.ucr.edu