Scientists find themselves in a difficult situation: on the one hand, they have to motivate their research as compellingly as possible in order to publish well and reach the largest possible audience. In practice, this means highlighting the main benefits of the research while leaving out distracting details. On the other hand, the advancement of science also depends on the replicability of experiments and on clarity about the limitations of the chosen approach. This calls for detailed descriptions of experiments and a critical view of the presented research.
Especially in younger fields, where a lack of well-supported knowledge allows many alternative hypotheses to coexist, scientists have to convince fellow scientists that their hypothesis is true and that their direction of research is viable. This often leads to an emphasis on motivation and a neglect of detail and critical reflection in research papers, which makes it harder to evaluate the real scope and impact of the presented work. In other words, you sometimes first have to decode a research paper before you understand what is really going on.
This brings us to the big question of research evaluation: How can you tell that the research presented in a paper is good (technically correct, interesting, …)? Currently, science relies on peer review, a system which has frequently been criticised, e.g., because it is conservative in the sense that new ideas have a harder time getting through, or because your direct competitors judge your work. Personally, I do not think that we will be able to get around peer review. After all, there will only ever be a few peers who can properly understand, and therefore judge, what a scientist does.
But the peer review system may be improved, and there is an ongoing debate about how this may be achieved (see, e.g., the NIPS conference debate on future publication models). In particular, new web technologies provide interesting new opportunities to do this. One step in this direction has been the switch to open access publishing, in which any reader can freely access research papers over the internet (see, e.g., the Public Library of Science, BioMedCentral, or the Frontiers journals). A further step is to build a social network around research publishing, as attempted by Mendeley. But new forms of publishing alone do not necessarily improve research evaluation. Analogous to open access, there are now calls for “open evaluation”, which makes the review process public. For example, Frontiers in Computational Neuroscience is preparing a special topic on Beyond open access: visions for open evaluation of scientific papers by post-publication peer review (as of 21/05/2011), which collects proposals for how this may best be realised. Also, Nikolaus Kriegeskorte proposes open post-publication peer review as the future of scientific publishing.
A simple form of open evaluation is commenting on a research paper on the corresponding article/journal website. Nature has allowed commenting only since March 2010, but others, like BioMedCentral (2002) or PLoS (2003), implemented it much earlier. The problem is that scientists seem reluctant to publicly comment on others’ research papers. A Nature analysis of comments on scientific articles published in the journal PLoS One showed that only roughly 12% of research articles had comments from people other than the editors or authors of the articles themselves.
There are some good reasons why a scientist would not want to publicly comment on or review a research paper. First of all, you might (perhaps unwillingly) offend colleagues by critically commenting on their work, which may backfire when they review your next research article or proposal. Second, there is not much benefit for the commentator. For example, you cannot (as of yet) add published comments to your publication list. So why would you go through the extra work of writing a (critical) comment or a non-anonymous review for a journal, if all you personally gain is the fear of having offended someone whose goodwill you may need in the future?
Well, I believe that science thrives on open debate. Call me an idealist, but in the end we are all working together to broaden the horizon of knowledge. Constructive criticism can provide new perspectives on a piece of work and help to improve it. If there is a place in our society for honest discourse about the pros and cons of a proposal, then where should it be, if not in the search for the truths of this world?
It is important that criticism is factual and objective, and that such criticism is not misinterpreted as an attack on the personality of the author of a piece of research. And here lies the difficulty. Of course, everyone who has ever received a negative review knows how devastating it can be. After all, that piece of research may have taken months, or even years, to prepare, and then a random outsider comes along and says that it’s all worthless? No question, just as there is bad-quality research, there are bad-quality reviews. But, honestly, aren’t a few of those negative reviews quite reasonable when reconsidered with some distance, and aren’t such comments helpful for improving your work?
This brings me to the content of this blog. The majority of posts will be critical reviews of published research papers. They usually consist of a short summary of the paper under consideration and end with a critical conclusion reflecting my views on the paper at the time of reading. These reviews were originally written for my own benefit and relate the content of a research paper particularly to my own interests. Nevertheless, as I have accumulated quite a few of them over the last few years, I decided to contribute to open research evaluation by publishing my views on these papers. I do not claim to provide the last word on a subject with these reviews (see also the Disclaimer), but I always try my best to give a fair, technically correct and objective view, and I am sincerely sorry when I do not succeed in doing so.
I think that open peer review is a step forward for science. A few details still need to be worked out, but I like the general idea and hope that it develops further.
PS: I found out that Noah Wardrip-Fruin coined the term “blog-based peer review”. Although this sounds pretty much like what I do in this blog, he went quite a few steps further by letting the public comment on the manuscript of a book, which he published piece by piece on a blog.