2010-08-16

Re: [Tccc] IETF model? Re: Different community, similar problems? (Henning Schulzrinne)

I'd like to add my 2c. For one year we ran the CSET workshop with an option
for authors to rate reviewers (with the reviewers' names blinded). Even
though we publicized this and even noted it in the reject/accept
notifications, no ratings were submitted. This year we added an option for
reviewers to rate each other's reviews - again, no takers.

Jelena

On Mon, Aug 16, 2010 at 2:20 PM, Joe Touch <touch@isi.edu> wrote:

>
>
> On 8/16/2010 11:48 AM, Henning Schulzrinne wrote:
> ...
> >> That is really an assumption that needs to be tested.
> >
> > The best thing to do is to run a conference and try it. Non-blind
> > reviewing, for example, was tried (for Global Internet) and the results
> > of the experiment were published. They were not definitive, but
> > interesting.
>
> Speaking now as a member of the steering committee of GI, I'll suggest
> this:
>
> - if you think you have a better way to ru(i)n a conference, please
> do so with one *you* develop and offer to shepherd for many years
>
> This particular experiment was indeed written up, except for the part
> about the number of papers submitted, which dropped by 50% and took
> *years* to recover. This happened in a period when no other workshop or
> conference reported a similar effect, i.e., it wasn't just an 'economic
> downturn'.
>
> Conferences are more than just 'this year'; they are multi-year events
> that take many years to build a reputation. Playing around with how the
> conference is run has consequences - not just for the year of the
> experiment, but many years after.
>
> My experience is that virtually every "experiment" in how to run a
> conference consists of a mechanism intended to address a problem that
> either doesn't exist, that the mechanism doesn't actually solve, or that
> isn't worth solving.
>
> For open reviews, the problem was "reviewers are mean". The result was
> that most authors felt the reviews were 'nicer' - not that they were more
> informative, more useful, or more detailed; on those metrics they were
> perhaps even worse (that wasn't measured, BTW).
>
> For other mechanisms, the main point appears to be guarding against some
> sort of impropriety by relieving the chair of the responsibility of
> checking *every* received review (in some places via TPC tiers; in
> others, double-blind reviewing was used to address this).
>
> Overall, if you want to play with these mechanisms, yes, sure. But at
> least try to run it as a real experiment (with a control group run the
> conventional way in the same year), and report *all* the results.
>
> Joe
_______________________________________________
Tccc mailing list
Tccc@lists.cs.columbia.edu
https://lists.cs.columbia.edu/cucslists/listinfo/tccc
