Is There a Correlation Between Scholarly Productivity, Scholarly Influence and Teaching Effectiveness in American Law Schools? An Empirical Study
BENJAMIN BARTON
University of Tennessee, Knoxville - College of Law
--------------------------------------------------------------------------------
7/1/06
Abstract:
This empirical study attempts to answer an age-old question in legal academia: whether scholarly productivity helps or hurts teaching. The study is of unprecedented size and scope. It covers every tenured or tenure-track faculty member at 19 American law schools, a total of 623 professors. The study gathers four years of teaching evaluation data (calendar years 2000-03) and creates an index of teaching effectiveness.
This index was then correlated against five different measures of research productivity. The first three measure each professor's productivity for the years 2000-03. These productivity measures include a raw count of publications and two weighted counts. The scholarly productivity measure weights scholarly books and top-20 or peer-reviewed law review articles above casebooks, treatises, or other publications. By comparison, the practice-oriented productivity measure weights casebooks, treatises, and practitioner articles at the top of the scale. There are also two measures of scholarly influence: one is a lifetime citation count, and the other is a count of citations per year.
These five measures of research productivity cover virtually any definition of research productivity. Combined with four years of teaching evaluation data, the study provides a powerful measure of both sides of the teaching-versus-scholarship debate.
The study correlates each of these five research measures against the teaching evaluation index, both for all 623 professors pooled and for each individual law school. The results are counter-intuitive: there is no correlation between teaching effectiveness and any of the five measures of research productivity. Given the breadth of the study, this finding is quite robust. The study should prove invaluable to anyone interested in the priorities of American law schools, and to anyone interested in the interaction between scholarship and teaching in higher education.
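In concrete terms, the core of the study is a set of simple correlations. Here is a minimal sketch in Python of that computation; the data file, column names, and exact index construction are my own assumptions, not Barton's actual code or data:

    # Minimal sketch of the study's core computation (pandas assumed).
    # The CSV file and column names are hypothetical stand-ins for
    # Barton's data, which is not public in this form.
    import pandas as pd

    df = pd.read_csv("faculty.csv")  # one row per professor

    measures = [
        "raw_pub_count",       # raw publication count, 2000-03
        "scholarly_weighted",  # books and top-20 articles weighted highest
        "practice_weighted",   # casebooks and treatises weighted highest
        "citations_lifetime",  # lifetime citation count
        "citations_per_year",  # citations per year
    ]

    # Pearson correlation of each measure with the teaching index,
    # pooled across all professors.
    for m in measures:
        print(m, round(df["teaching_index"].corr(df[m]), 3))

    # The same correlation, school by school.
    for school, g in df.groupby("school"):
        print(school, round(g["teaching_index"].corr(g["raw_pub_count"]), 3))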
As valuable and interesting as this finding is, I tend to doubt that the relationship between scholarship and teaching can be empirically measured.

1. How do you define what counts as "good" scholarship? Mere volume? Even with the weighting that Barton did (i.e., counting certain types of articles or books more heavily than others), I'm skeptical that volume is an appropriate measure.
What about the law professor who writes 10 tedious, inflated, and not-very-enlightening articles? Is that professor "better" at scholarship than a professor who writes one brilliant law review article that is massively influential? I'd say that the brilliant but terse scholar is "better" than the merely logorrheic one, but Barton's methodology would give the logorrheic professor 10 times as many points, as the toy example below shows.
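The weights below are invented for illustration (they are not Barton's actual weights), but any linear per-publication scheme has the same property:

    # Toy illustration: any per-publication weighting scheme scales
    # with volume. The weights are invented, not Barton's actual ones.
    WEIGHTS = {"top20_article": 3, "book": 3, "other_article": 1}

    prolific = ["other_article"] * 10  # ten tedious, inflated pieces
    brilliant = ["top20_article"]      # one massively influential article

    def score(pubs):
        return sum(WEIGHTS[p] for p in pubs)

    print(score(prolific))   # 10
    print(score(brilliant))  # 3 -- the better scholar scores lower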
"But what about citation counts?" Yes, Barton did use citation counts in another phase of the analysis. I agree that this measure is better than mere productivity. Still, a citation count is problematic for several reasons. Many scholars might cite Bork's famous First Amendment article from the 1970s, but only because they view Bork as emblematic of a view that they reject. A citation count overestimates the skill of professors who write in trendy areas of the law, and underestimates the scholarly achievement of professors who may have written brilliant analyses of ERISA, or the taxation of international business structures, etc. Because citation counts can vary for reasons that are not really related to scholarly ability, there might be hidden correlations here.
More broadly, I'm not sure that the "quality" of legal scholarship can feasibly be reduced to a number. Legal scholarship comes in many different varieties, and I don't think there is any uncontested standard of "quality" that all of it is trying to meet.
2. How do you measure teaching effectiveness? Barton did it by looking at student evaluations, even while admitting the shortcomings of this approach:
I also am aware that the use of teaching evaluations as a proxy for teaching effectiveness is somewhat controversial. There are studies, both within law schools and higher education in general, that show that teaching evaluations have biases, including biases based on race (Smith 1999), gender (Farley 1996), and even physical attractiveness (O’Reilly 1987). Other studies have shown that student teaching evaluations are positively correlated with other measures of teaching effectiveness, including peer reviews and output studies, suggesting at least that student measures track other alternative measures (Bok 2004).

Barton defends the use of student evaluations on the ground that there isn't any better source of data. That may be so, but again, I'm not sure that this data is all that reliable. Is a "good" teacher one who covers a certain amount of material in a clear fashion, but leaves most of the class bored and uninterested in pursuing the subject any further after the exam? Is a "good" teacher one who challenges and pushes the students to learn more, to think more deeply, etc.? (As Barton acknowledges, such a teacher might get lower ratings from those students who are essentially lazy.) Is a "good" teacher one who has a powerful, charismatic, and entertaining personality, such that he or she gets good ratings no matter what the students actually learn? Can a teacher be "good" if he is brilliantly quirky, such that half the students love him and half hate him? Is it better for a teacher to impart the maximum amount of information, or to teach the students one big idea that shapes their thinking long after the details of the class material are forgotten?
Many law faculty members have nevertheless argued to me that teaching evaluations are little more than a popularity contest. Some have even argued that teaching effectiveness is inversely correlated with teaching evaluations, since students tend to give high marks to easy professors of little substance while ranking professors who challenge them comparatively lower.
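For what it's worth, one standard way to build a teaching index from raw evaluations (I'm assuming this method, not reporting Barton's documented procedure) is to standardize ratings within each course and year before averaging. Note that this controls for course-to-course differences but does nothing about the popularity effects described above:

    # One plausible way to construct a teaching index from raw
    # evaluations (an assumption about method, not Barton's documented
    # procedure): z-score ratings within each course and year, then
    # average across a professor's courses.
    import pandas as pd

    evals = pd.read_csv("evaluations.csv")  # hypothetical file and columns

    evals["z_rating"] = evals.groupby(["course", "year"])["rating"].transform(
        lambda s: (s - s.mean()) / s.std()
    )
    teaching_index = evals.groupby("professor")["z_rating"].mean()
    print(teaching_index.sort_values(ascending=False).head())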
UPDATE: I should also point out that this study, for what it's worth, is entirely consistent with my previous posts. That is, it says nothing whatsoever about my main point, i.e., that the mere act of teaching a few classes a year could improve a professor's scholarship (as compared, say, to a think-tank scholar who never leaves his office). Barton doesn't look at whether scholars are more productive or more often cited if they (a) teach sometimes vs. (b) never teach at all.
So what? Is there some rule that you can't point out problems with a study unless you yourself simultaneously come up with better data?