Sunday, August 30, 2015

How Many Seagulls Does It Take to Create a Pooptastrophe? Updating Counting Crows



One way to know how many crows are on a telephone wire is to count them. In fact, it may be the only objective way to know how many crows are on the wire. But what do you know about the crows once counted? You know how many crows there are. Do you know whether they are wonderful for people to look at? Are they tormenting some poor kitty cat? Really, all you know is the number of crows. What about the impact of the crows? You would laugh if I told you that counting them tells you anything about their impact, importance, usefulness, or whatever.

In the most recent effort to rescue citation counting as a measure of the importance of legal scholarship from being completely disregarded by all but a group of people who teach law, the gang at St. Thomas has published another work that purports to measure the impact of scholarship by counting the number of citations, regardless of what a work is cited for. The jump from counting to impact is a hard one, since there appears to be no separate definition of impact. In the tautological world of citation counting, counting equals impact and impact equals counting. And impact is reserved for citations by other legal scholars only. Citations by courts (citation and impact being all the same thing to counters) are irrelevant. Your school could have 27 Supreme Court citations and 13 citations in the Bosco State Law Review and still be ranked lower in impact than a school with 0 Supreme Court citations and 14 Bosco State citations.
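To make the arithmetic concrete, here is a minimal sketch of what a scholars-only count does with the example above. This is my own illustration with the hypothetical numbers from that example, not the St. Thomas authors' method or code:

```python
# A minimal sketch (an illustration, not the study's code) of a
# scholarly-citations-only count: judicial citations are simply dropped.

schools = {
    "School A": {"supreme_court": 27, "bosco_state_law_rev": 13},
    "School B": {"supreme_court": 0,  "bosco_state_law_rev": 14},
}

def scholarly_count(cites):
    # Only citations in law reviews count as "impact"; courts are ignored.
    return cites["bosco_state_law_rev"]

ranking = sorted(schools, key=lambda s: scholarly_count(schools[s]), reverse=True)
print(ranking)  # ['School B', 'School A'] -- 14 beats 27 + 13
```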

I thought initially that the authors were pretty good at counting, but now I am not so sure. For example, if you are the editor of a book of, say, 30 articles, you (and your school) will be cited every time one of the articles is cited, regardless of your contribution, even if it was strictly administrative. If you are an editor or coauthor of a preexisting treatise or book of any kind, you are cited and, thus, have had an impact whether or not you had anything to do with the material that is cited. In fact, I am beginning to think cite counting may be the worst measure of impact.

In some ways, the St. Thomas effort may reflect too much time at the counting punch bowl. First, in an effort to show that scholarship does not interfere with teaching, they cite studies showing the absence of a negative correlation between writing and teaching "quality," which is measured by -- you guessed it -- student evaluations. My personal hunch is that writing does not detract from teaching, but I wonder if they missed the numerous studies showing that teaching evaluations by students are rarely correlated with actual learning. What I took from their analysis is that writing was consistent with being a good entertainer in the classroom.

Second, as a demonstration of their objectivity, they select the most recent five-year period completely arbitrarily. As best I can tell, this is done for ranking the schools as well as individual faculty. I am not sure it makes sense for ranking schools. For individual faculty, it reduces the impact of oldsters like me. In a sense this makes individual rankings more current, although one wonders what a difference of one or two citations means without regard for judicial citations or the nature of the citation -- in text, an aside, etc. (This is all the less excusable since Westlaw now includes cites by scholars, judges, administrative agencies, and even documents filed with courts.)

The problem with simply counting arises more importantly when they include new lateral hires who have written nothing or very little at their new schools. What does this mean exactly? The only thing it can mean is that the school that was left has less scholarly impact (but how could it?) even though every citation, or nearly every one, is to work produced at that school. At the same time, the new school gets impact credit even though that particular scholar has written nothing (and may never write anything) at the school to which the impact is now assigned -- so much for the accuracy of counting and ranking the scholarly impact of a school or even its current faculty.

Since they must defend the citadel of counting, they are obliged to take on a recent study by Amy Mashburn and myself attempting to determine not counts (which are highly correlated with where you teach, where you published, and where you went to school) but whether you were cited for anything that seemed to influence another author. I guess you might call this "actual impact," as opposed to the "faith-based impact" on which citation counting rests. No doubt our effort is fair game, as is any subjective effort. And, as you would also expect from legal scholarship cheerleaders (all of whom are on the team and almost all of whom are law teachers), we are accused of being too conservative in our labeling. For example, if someone quoted someone else as saying "the common law is complex," we did not regard the author of that incisive statement as having a significant impact on the citing author. We were even criticized for selecting a sample composed of the articles most likely to be cited and to have an impact. That critic, who was uncomfortable with our findings, actually suggested a sample of articles that would have produced an even worse outcome in terms of "impact."
In our work we were particularly worried about hearsay and appeals to authority. For example, how about this: "Citation counts objectively measure impact," with the following footnote: "See David L. Schwartz & Lee Petherbridge, The Use of Legal Scholarship by the Federal Courts of Appeals: An Empirical Study, 96 CORNELL L. REV. 1345, 1354 (2011) (saying, in a study of citations of legal scholarship in court decisions, “measuring the use of legal scholarship by measuring citations in opinions has the benefit of being a fairly objective measure”); Arewa, Morris & Henderson, supra note 7, at 1011 (referring to “objective criteria such as citation counts and the Social Science Research Network (SSRN) downloads” for peer review of faculty scholarship, although acknowledging these “are not perfect measures either”)."

This is from the St. Thomas article. In a citation count, these citations will be as important as ones noting works the authors actually grappled with. But what are we to make of these citations? Did they influence the St. Thomas authors? I doubt it. Instead, it looks like an appeal to authority without any real examination of whether the "authority" is "authoritative." It also represents, unfortunately, a norm in legal scholarship, and also why it gets so little attention outside a small world. What it means is "you should believe me because I found someone who agrees with me." But who knows if they knew what they were talking about? It is, at best, a substitute for real research. Indeed, when my coauthor and I tracked down some citations, those cited were citing a third party who in turn cited someone farther down the line. That amounts to a hearsay appeal to authority.

The Mashburn/Harrison work has received a fair amount of attention and, surprisingly, most of it has been favorable. (In their hearts, law professors know.) In that article we challenged defenders of counting to redo our study. The St. Thomas group did not do this, opting instead to decide it cannot be right because -- well, just because. Moreover, no one challenging our work has opted to prove we are wrong by selecting an article and indicating how each cited work was influential or at least did not fall into the hearsay category.

The bad news is that the authors may be right. Citation may equal impact. If so, given the research methods widely used by law professors, we are in even bigger trouble.

And, finally, and this time I mean it: suppose citations do equal scholarly impact. One law professor influences another, and so on. There is hardly anything useful going on unless that impact is felt somewhere outside their closed group. Unless you assess that, you have no meaningful measure of anything.



12 comments:

Michael Risch said...

1. I totally agree for the most part
2. I think the St. Thomas study also limits itself to only "good" journals, right? So impact is also "subjective" in that sense
3. Despite my agreement, I can see the point of others about the "me, too" or the "someone else said it" cites. Petherbridge and Schwartz don't just SAY it's objective - they explain why. Now, you might disagree with that methodology, but citing someone who comes up with a methodology that appears to be objective and explains why isn't just a "someone said it" cite; it's a reliance (for good or for bad) on knowledge development by others. And that development (whether right or wrong) had an impact on how the later folks viewed the world.

Of course, you have to disentangle that from your example of non-new statements (citing others) where it's turtles all the way down. But if the evaluation is of impact, a citation to a completely fabricated statement is still impactful even if wrong.

Anonymous said...

You are apparently unaware that citation studies are standard measures of scholarly impact in the natural and social sciences. What distinguishes law is that it's the only field where some professors rant against their use, unaware of how common they are in other fields. You should venture out and talk to your colleagues in other disciplines sometime.

Jeffrey Harrison said...

I am not sure I understand your reasoning, Anonymous. Does it mean counting is a way to assess the impact of scholarship? Or just that if other disciplines are screwed up, law should be too? In any case, citations in law are far more numerous than in other disciplines (I was in one of those disciplines), and many are just there to cover for a lack of the actual research that does go on in other fields.

Jeffrey Harrison said...

Michael, all I would say is that a great deal of legal scholarship is representative of confirmation bias. I wonder how hard they looked for works that did not confirm their bias.

Anonymous said...

Professor Harrison, I was responding to your claim that the Sisk study is an attempt to "rescue citation counting from being completely disregarded by all but a group of people who teach law." This is false and reflects your ignorance of other fields. You should try responding to the substance of Sisk's rejoinder to you and Mashburn.

Jeffrey Harrison said...

I see your point now. Mine was that no one outside of law teaching takes legal research and the way it is evaluated seriously. I am not disagreeing that other disciplines count citations. I think the big distinction is that law professors wildly overcite because they have little in the way of actual data or analysis to back up their claims. When there is massive overciting simply to bulk up a work, counting citations seems meaningless to me.

Jeffrey Harrison said...

Anonymous, thanks. I have updated the post to deal with the issue. I am happy that you agree with the rest of the piece.

Anonymous said...

I think citation count studies are useful to bring some attention to productive scholars who would otherwise not receive as much attention because they are not at top-ranked schools or are relatively new professors. You can see a study with someone new listed in the ranking, and then decide to check out their work. It doesn't tell you that the person's work is good: it just tells you that it is getting some attention, which means it might be worth looking at if you've missed it. In that sense, rankings make the system a small amount more meritocratic. Not perfectly meritocratic, but a small bit more than it would be otherwise. The problem is that the rankings are updated often enough that the lists don't change much from one update to the next, and just seeing a list with the same people on it isn't all that useful.

Bob

Unknown said...

In response to "(Perhaps a better measure is the sum of judicial and scholarly citations.)" I'd like to point out that a straight sum doesn't work. I've looked at this in a paper I presented at the JOTWELL conference last year. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2506633.

The central problem is that the number of citations per year from law reviews to other law reviews so dwarfs the number of citations from cases (or anything else) to law reviews that including the cases makes no difference to an ordinal ranking system where the total citations from both are added. You can see this happen in the Washington and Lee system for ranking law reviews.

A better system would weight case citations and then add them. Ideally, the cases would be weighted by the influence of the citing court as well (which is entirely practicable with WestlawNext). I'm working on such a system for a more built-up version of the paper, but I haven't had time to devote to it recently (I have a job). Broadly, as your study shows, no citation counting system works all that well, but if we are going to have one (and that's unavoidable as a practical matter), we may as well have one that incorporates the rest of the profession!
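For what it's worth, here is a minimal sketch of the weighting idea described in the comment above. The court tiers, the weight values, and the two hypothetical journals are placeholders invented for illustration, not numbers from the paper:

```python
# A minimal sketch of a weighted citation score. Law review citations count
# at face value; case citations are up-weighted, with the weight varying by
# the influence of the citing court. All weights are illustrative placeholders.

COURT_WEIGHTS = {"scotus": 50.0, "circuit": 20.0, "district": 5.0}
LAW_REVIEW_WEIGHT = 1.0

def straight_sum(law_review_cites, case_cites_by_court):
    # The naive approach: every citation counts the same, so the far more
    # numerous law-review-to-law-review citations swamp everything else.
    return law_review_cites + sum(case_cites_by_court.values())

def weighted_score(law_review_cites, case_cites_by_court):
    score = LAW_REVIEW_WEIGHT * law_review_cites
    for court, n in case_cites_by_court.items():
        score += COURT_WEIGHTS[court] * n
    return score

# Hypothetical journals: A has more law review cites, B more judicial ones.
a_lr, a_cases = 900, {"scotus": 0, "circuit": 2, "district": 10}
b_lr, b_cases = 850, {"scotus": 3, "circuit": 6, "district": 3}

print(straight_sum(a_lr, a_cases), straight_sum(b_lr, b_cases))      # 912 862
print(weighted_score(a_lr, a_cases), weighted_score(b_lr, b_cases))  # 990.0 1135.0
```

Under the straight sum the journal with more law review citations wins no matter what the courts do; once case citations are weighted, the ordinal ranking can flip.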

Anonymous said...
This comment has been removed by a blog administrator.
Fred said...

" For example, how about this: "Citation counts objectively measure impact," with the following footnote:"See David L. Schwartz & Lee Petherbridge, The Use of Legal Scholarship by the Federal Courts of Appeals: An Empirical Study, 96 CORNELL L. REV. 1345, 1354 (2011) ( saying in study of citations of legal scholarship in court decisions, “measuring the use of legal scholarship by measuring citations in opinions has the benefit of being a fairly objective measure”); Arewa, Morris & Henderson, supra note 7, at 1011 (referring to “objective criteria such as citation counts and the Social Science Research Network (SSRN) downloads” for peer review of faculty scholarship, although acknowledging these “are not perfect measures either”)."

This is from the St. Thomas article. In a citation count, these citations will be as important as ones noting works the authors actually grappled with. But what are we to make of these citations. Did they influence the St. Thomas authors? I doubt it. Instead it looks like an appeal to authority without any real examination of whether the "authority" is "authoritative." At also represents, unfortunately, a norm in legal scholarship and, also why it gets so little attention outside a small world. What it means is "you should believe me because I found someone who agrees with me."But who knows if they know what they were talking about? It is, at best, a substitute for real research. Indeed, when my coauthor and I tracked down some citations, those cited were citing a third party who also cited someone farther down the line. That amounted to a hearsay appeal to authority.
"


Isn't this how legal research works? I can't think of any other way someone could make the statement "Citation counts objectively measure impact" without citing as they did. How else could it be possible to prove that statement? I don't think there is an empirical way of doing so. Maybe this means the statement shouldn't be made in the first place; I don't know...

Jeffrey Harrison said...

Perhaps by looking at how the works are actually used when cited.