Double-blind reviewing
I've attended a few academic conferences now, but this is the first time I've been on the organizing committee for one. Part of my responsibility is to help coordinate the double-blind peer review process, and it's been quite an eye-opener for me.
When an academic submits a conference paper, it is anonymized: they don't put their name on it, and they are supposed to remove anything else that might identify them. The programme chair (or a track chair, in a larger multi-track conference) sends out the papers to be reviewed. The reviewers are recruited from anyone who has submitted a paper this year, anyone who had a paper accepted last year, and any colleagues or students of the organizers. Each paper is reviewed by three or more reviewers, each of whom writes comments to go back to the authors and makes a recommendation on whether the paper should be accepted. The programme chair then assesses all the reviews and decides which papers get accepted and which don't. Regardless of the outcome, the authors get to see what the reviewers wrote, but they never find out who their reviewers were.
For CHINZ 2009, we have now received all the reviews and are in the process of deciding which papers get accepted and which don't. When the reviewers unanimously agree to accept or reject, the decision is easy. The more difficult cases fall into two categories. The first is when all of the reviewers rated a paper as fairly average. Because the rating scale is fairly coarse, it's hard to know where to draw the line: we can only accept a limited number of papers, and finding a clear separation between the ones we accept and the ones we reject is not always easy.
The other difficult cases are when the reviewers wildly diverge: for instance, two reviews say the paper is a 'clear accept' while the third says it is a 'clear reject', or vice versa. In those cases, we have to read the reviews and decide how much weight to give each reviewer. Sometimes a reviewer isn't very familiar with the topic and reports low confidence in their review, so we can put less emphasis on their recommendation. And some people just seem to be excessively harsh or excessively generous; one person recommended every paper they reviewed as a 'clear accept', even when the other reviewers thought it was a load of rubbish.
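Just to make the idea concrete (this is purely an illustration, not how the CHINZ reviews are actually scored), here's a minimal sketch of confidence-weighted averaging, assuming each reviewer gives a numeric recommendation and a self-reported confidence; all the names and numbers below are made up:

```python
# Hypothetical illustration: weighting review scores by reviewer confidence.
# The scores, confidence values, and scales are invented for this example.

def weighted_score(reviews):
    """Average the recommendation scores, weighting each by the
    reviewer's self-reported confidence (1 = low, 5 = high)."""
    total_weight = sum(conf for _, conf in reviews)
    return sum(score * conf for score, conf in reviews) / total_weight

# Each tuple is (recommendation score, reviewer confidence),
# where 5 = clear accept and 1 = clear reject.
paper_a = [(5, 4), (5, 3), (1, 1)]   # two confident accepts, one low-confidence reject
paper_b = [(3, 4), (3, 4), (3, 3)]   # uniformly average ratings

print(f"Paper A: {weighted_score(paper_a):.2f}")  # 4.50 - leans towards accept
print(f"Paper B: {weighted_score(paper_b):.2f}")  # 3.00 - stays in the middle
```

In practice, of course, we read the actual review text rather than just crunching numbers, but it captures the intuition of discounting a low-confidence outlier.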
It never really occurred to me that the people making these kinds of marginal calls were just academics like me. I guess, as with many other things in life, I've always assumed the people making the decisions were vastly more knowledgeable than me.