With the start of Larry Wasserman’s blog (for those who don’t know him: he’s a renowned statistician and machine learning researcher) I also had a look at his homepage. It turns out that he is a proponent of post-publication peer review. To be precise, he published a short essay in which he gives arguments for why the current peer review process should be abolished. He mainly argues that the quality of the output of the current system is poor and that the system is unnecessarily exclusive. He therefore proposes free, unreviewed publication with subsequent quality control, which essentially corresponds to post-publication peer review.
I generally agree with his criticism and hope that his proposal becomes reality at some point. He notes in his conclusion:
When I criticize the peer review process I find that people are quick to agree with me. But when I suggest getting rid of it, I usually find that people rush to defend it.
I have done the first part, but now, instead of defending the current peer review process directly, I’ll try to illuminate the reasons why people find it hard to turn away from it.
In my view, the main function of peer review is filtering: the good into the pot, the bad into the crop. But instead of making only binary decisions, the established system of differently valued journals essentially implements a rating for papers. At least this is, I think, what people implicitly have in mind, even though it has long been argued that the impact of a journal says nothing about the value of an individual paper published in it. Most people probably know about the flaws of this evaluation process, but they accept the errors in return for at least a rough, implicit rating of quality.
Therefore, any system that replaces the current peer review process has to implement a rating for individual papers. On this blog I discuss why scientists may be reluctant to publicly criticize (or even explicitly rate) a paper. Then again, the rating could simply consist of counts of positive mentions (cf. Facebook likes). This goes in the direction of bibliometrics, an apparently old field that tries to quantitatively analyze scientific literature and that has become more important in the internet age. While a few people seem to work on it, I have so far not seen a convincing, i.e., comparable and robust, metric at the level of an individual paper. I’m confident, though, that we will get there at some point.
There is one dimension that is usually neglected in this discussion: time. The current peer review process is relatively quick. It may take a year in some cases before a paper gets published, but then it immediately has the mentioned implicit rating based on the prestige of the journal. Usually it’s even faster. In post-publication peer review, the value of a paper may stabilize only very slowly, depending on who promotes it initially. For example, citation counts may only become meaningful after two to five years. This poses a problem in the practical world of science, where the next, short-term job of a young researcher depends on their output and how it is valued.
The issue of time is particularly prominent in the evaluation of conference submissions. For example, at NIPS the evaluation process for submitted papers takes only about three months, after which final decisions are made. Can a post-publication peer review process converge to a stable evaluation within three months?
Finally, there is an additional function of peer review which I have not mentioned so far: confidential feedback. Many scientists don’t want to publish half-baked research and try to make their publications (and the research therein) as good as they can before anyone else, especially the public, sees them. In the best case, a closed pre-publication peer review then acts as an additional sanity check which prevents potential mistakes from becoming public and, therefore, saves the authors from the embarrassment of publicly admitting to a mistake (just think about the disrepute currently associated with having to publish an erratum, let alone retract a paper). Nobody likes to make mistakes, and often we like admitting to one even less.
In conclusion, I do agree with most criticism of the current peer review process, but I also believe that scientists won’t readily change to a new process unless it implements the functions I discussed here. In particular, such a new process needs to provide a timely, but also accurate and stable, evaluation of the presented research. In my opinion, post-publication peer review (or, indirectly, bibliometrics) cannot currently provide these functions, but it may in the future. What remains are the social constraints of the scientists: the political reasons that make individual scientists reluctant to openly criticize the work of others or to make and admit to mistakes. I have the impression that these constraints are deeply rooted in human nature and, hence, difficult to overcome. If such a feat can be achieved at all, then only through concerted action of the whole scientific community, which would need to adjust how research, contributions to discussions, and mistakes are evaluated.
2 thoughts on “Larry Wasserman, post-publication peer review and the neglect of time and politics”
Is there another argument that, with the peer review system, there is at least some control of the quality of reviewers (although many people would have you believe there is none!)? Without this, a paper could have its "score" manipulated (surreptitiously or maliciously) by authors, colleagues, friends, or adversaries.
Of course, if the metric only depends on objective things such as citation counts, this is less of a problem, but it is still fallible. For instance, a large network of allied researchers (such as from the new booming PRC research field) could work together to produce an enormous web of co-cited articles with virtually no real content.
There are a lot of things wrong with the current peer review process. But, in my opinion, there is one that is particularly annoying: clannishness. As someone who is involved with the process both in PC roles and as an author, and despite well-meaning efforts to be objective about reviews, I know that there are always a few papers that get pushed just below the threshold of acceptability because 'he is not like us'. Often, some fraction of these papers are precisely the ones that have long-term impact. Note that there is no foul play here, in that reviewers do critique the arguments as scientists, but there is also nontrivial subjectivity in how impact and significance are judged, which is, after all, what separates the 'better' venues from the rest. Couple this with all the other social phenomena in academia and you get a system that is quite far from optimal.
A more public publishing model, where a paper is checked for correctness but otherwise let through, may well allow for a higher-quality long-term debate about alternate 'research programs', without handicapping people who happen not to be judged favorably in beauty-contest settings. The point is that publishing is not only about competition, which is all we seem to talk about in these debates. The point of publishing, in the end, is to contribute to the long, slow conversation that is science!