> Hi Marco,
>
>> Maybe somewhat off topic, but I don't see the equation more
>> review(er)s => fairer selection process.
>
>> My understanding is that more review(er)s => more noise, more variance,
>> larger grey area, more random selection.
>
> But the authors would be able to rebut bad reviews. The ability to refute bad reviews is a major advantage of the proposed system. The TPC mentor can ultimately judge the worth of a review.
A number of conferences have had rebuttals for years; the general experience is that they greatly increase the work for the authors (and the reviewers), but rarely change the result, among other problems. Based on this experience, some conferences have abandoned them.
>
>> Add to this that the review(er)s are now "auto-assigned" (by reviewers
>> that voluntarily review a paper). The noise can only increase:
>> non-experts attracted by a paper will only provide modest and light
>> reviews at best.
>
> These non-expert reviews will only get as much weight as they deserve. On the other hand, expert, high-quality voluntary reviews can significantly improve the overall review process. I think it is overly pessimistic to assume that voluntary reviews would be of poor quality; they could just as well be very good.
We already have lead TPCs, e.g., in Infocom. In general, I think it would be helpful in this discussion if the participants were to inform themselves about the spectrum of conference review techniques that are in use. Speaking from EDAS experience, the creativity of conference chairs in inventing new twists on the basic review model is astounding - but the variation in outcome seems rather modest (and, unfortunately, attempts at measurement rather than speculation are few).
>
>> Yes this happens also now, but all the claiming and paper assignment
>> phases should guarantee a good match between expertise and fairness, etc.
>
> In my view, there are many people (including myself) who have suffered from bad, unfair reviews over the years and feel helpless against the system. The existing system does not meet the fairness expectations of many people.
As long as there are people involved and as long as the number of papers to review is large, this seems hard to avoid. I think we *can* do a better job of evaluating reviewers, so that the lazy, incompetent, unhelpful and hostile reviewers are removed from the pool. But I suspect all of us have reviewed papers and missed the point, or have been wrong, in either direction, about the value of the work.
>
>> At last, I fully agree that the extra time that can be devoted to the
>> voluntary reviews is very limited...
>
> This may be true for many people. But there could still be many who would cherish the opportunity to review papers that interest them.
I agree that better opportunities for volunteers would be useful. The Transactions on Networking is in the process of setting up a system for volunteering, so that there's a database of reviewer candidates for editors to draw on, as one more source.
>
>> Just check _when_ reviews are submitted now, _how_ many reviews are still
>> missing, etc., etc.
>
> A major reason for this, in my view, is the inability of the current system to tap the reviewer pool properly.
That's one point I tend to agree with - the "pick from the people I know" mechanism scales badly, particularly as the topic breadth increases and the geographic spread of the discipline gets larger.
>
>> This system may possibly work for small numbers (<50). It will never scale to hundreds of submissions.
>
> That is really an assumption that needs to be tested.
The best thing to do is to run a conference and try it. Non-blind reviewing, for example, was tried (for Global Internet) and the results of the experiment were published. They were not definitive, but interesting.
Henning
_______________________________________________
Tccc mailing list
Tccc@lists.cs.columbia.edu
https://lists.cs.columbia.edu/cucslists/listinfo/tccc