Tuesday, April 24, 2018

Platform Failure

The big topic in antitrust these days is two-sided (or multi-sided) platforms. These are services like OpenTable, Amazon, and both Scholastica and ExpressO. They serve two markets and reduce the transaction costs of having those markets interact. For example, in the case of OpenTable, diners do not want to call several restaurants to find a table, and restaurants would prefer not to spend hours on the telephone, so they join the OpenTable listings.

In the case of Scholastica and ExpressO, writers of law review articles would prefer not to print out, address, and mail articles to law reviews and then wait for postcard acknowledgments. Law reviews, in turn, benefit from a standardized method of receiving submissions. Given the lower cost of submitting, they are likely to receive more submissions as long as they are signed onto the system.

Here, though, reducing transaction costs means reducing quality and, in a sense, raising barriers to entry. In the olden days, professors sent out articles by mail, perhaps a batch of ten, then a batch of ten more as they worked down the rankings. I explained this several years ago to an editor at a very low-ranked law review, and he asked, given the paucity of submissions the review had received, when the review could expect authors to finally send drafts. Yes, there was a time when a 50th- or 60th-ranked review might get only a smattering of manuscripts.

Recently I asked a colleague how many reviews he had submitted his latest article to. The answer was 90. I myself recently submitted a piece to 99 reviews. Why? Thanks to the platforms, the cost in terms of time and dollars is inconsequential, especially if your school picks up the tab.

But what does this mean? Law reviews now receive, in many cases, thousands of submissions. They are swamped. Do they read each one? Can they even begin to read each one? Of course not. This means the already flawed process by which articles are selected is made even worse: editors rely on institutional authority more than ever. They count the citations of those submitting articles, and they consider the schools at which authors teach as well as those from which they graduated. It is even harder for authors who depend on substance to gain entry to the elite reviews. There is no known correlation between the quality of a work and institutional authority. There is, however, a known correlation between citations and the rank of the review, as well as the rank of the schools professors graduated from and teach at. If citations are correlated with reading, then articles are read on the basis of who wrote them, not the quality of the ideas.

Can you blame the editors? Hardly; what are they to do? Can you blame the authors? I do not see why. If everyone else is submitting to 60 reviews, it's best to do likewise.
