The process of publishing has been the same for a very long time: you submit your paper, you receive your reviews (generally three), you are happy or not (often depending on whether your paper is accepted), and that's it. If you don't know what I'm talking about, I invite you to have a look at this hilarious blog post: http://matt.might.net/articles/peer-fortress/
I see three major problems, or risks, with the current reviewing process:
- The opinion of three people, although not worthless, cannot be considered statistically significant. Many rejected papers are simply resubmitted and get a totally different outcome. We call that the lottery effect ;-)
- The review can be a false negative: it says the paper is bad while that is not necessarily the case; maybe (some) reviewers acted in bad faith or were simply not expert enough to fully understand the idea.
- The review can be a false positive: it says the paper is wonderful while it is not really outstanding. Sometimes reviewers who have not read the paper thoroughly prefer to err on the positive side (note that, in case of doubt, I prefer this attitude). I won't even mention reviewing a friend's paper, which can also happen.
I think social networking tools (or tools inspired by them) could greatly improve the transparency and quality of the reviewing process. Here are some ideas:
- Why not make the reviews public and attach them to the paper on the conference website? This would push reviewers to write the best reviews they can. Reviews are very valuable information, so why not share them with the readers?
- Why not allow commenting on the reviews, just like on a blog? If a reader disagrees with a review or has a different opinion, they can say so and share it with others. When I buy a book on Amazon, I first read the reviews written by other readers. Why don't we have similar information attached to articles?
- Why not add "like" and "tweet" buttons next to the paper? Today scientists compare themselves based on the h-index (something I discovered a few months ago when I joined academia). You may like a paper even if you don't cite it. I recently had a discussion with Jean-Charles Régin about the h-index. In our field, it seems easier to be cited when you work on applications rather than on algorithmic ideas such as global constraints. Does that mean the latter have less impact? Not sure... But it is always nice, when starting an article, to cite some related applications first. The numbers of "likes" and "tweets" might also be interesting if scientists really want to compare ;-) My opinion is that whatever the comparison measure is, it will quickly become biased at some point. Before the h-index, it was simply the number of publications that mattered. Why do you think that is no longer the case ;-)?
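Since the h-index keeps coming up, here is a minimal sketch of how it is computed, in case you have never looked it up: an author has index h if h of their papers each have at least h citations. The function name and the sample citation counts below are just my own illustration.

```python
def h_index(citations):
    """Largest h such that the author has h papers
    with at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        # While the rank-th most-cited paper still has >= rank citations,
        # the h-index can be raised to that rank.
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: the fourth paper has only 3 citations
```

Note how one hugely cited paper (25 citations above) barely moves the index: the measure rewards sustained citation counts, which is exactly why it penalises niche algorithmic work that is admired more often than it is cited.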
Maybe having some of these ideas implemented would make our CP conferences more fun and interactive.
I also believe it would foster collaboration between scientists. If you have a discussion with someone about a technical point, new ideas may emerge, which is very beneficial for our field. This is what we try to do with posters, isn't it?
On the day of the conference, we could already display the information and comments published on social networks alongside the article. I see one major obstacle: are today's reviewers ready to make their reviews public and expose them to public comments?
CP conferences are well behind OR conferences such as INFORMS in their use of social networks. If I remember well, this was part of the program of some people recently elected to the ACP, so I hope it will change soon.