2010-08-12

Re: [Tccc] Different community, similar problems? (Henning Schulzrinne)

Part of the problem is that we're making conferences serve many functions, without necessarily taking a systems view of what we're trying to accomplish or clearly appreciating that some of the goals may not be easy to optimize jointly:

(1) Spread new ideas: while this might still work for certain focused workshops, at large conferences a large fraction of the papers fall outside the audience's immediate interest, and only a handful of people from the relevant sub-community are likely to listen to the presentation (if they are not reading email...);

(2) Get feedback on new work: given the limited opportunity for true discussion, this is probably not very effective on average. Most questions asked are requests for clarification or (worse) "but you should have cited my [vaguely related] work".

(3) Evaluate research and researchers: given the lag time of citations and the limited visible (product, real-world) impact of most papers, we substitute "has been accepted at a conference with an acceptance rate of 20% or below" for "this person is doing quality research". We may pretend that this is not so, but I've been on too many tenure and dissertation committees to believe otherwise. Yet even at SIGCOMM, a significant number of papers get zero citations, so our impact predictor is far from perfect.

Any measurement process has noise, and my general perception is that our processes work reasonably well around the 25-30% mark, i.e., a different set of reviewers and TPC members would arrive at very similar conclusions. But we know from shadow-TPC experience, among other data points, that things get pretty noisy below that. There are also built-in biases: "large" systems are likely to be more impressive than small ones, for example, so there is a bias towards larger research projects. Sometimes, material that would be unlikely to pass muster in a core conference of its own area gets accepted because it is interestingly different - SIGCOMM has had this problem with wireless papers, in my opinion.

Because the system is "pass/fail", it also encourages an optimization process of just trying to clear the bar. Given the need to weed out 80-90% of the papers, the TPC naturally tends to pay less attention to formalities and presentation, and (with the occasional exception of shepherding) we have no real process in place to ensure that review comments are reflected in the final version. This knowledge doesn't exactly encourage comments that improve presentation or content. (Unlike in a journal, asking for another experiment or more data is pretty pointless for a conference submission, where the turnaround time between acceptance and camera-ready is 4-6 weeks at best.) Thus, even at our "best" conferences, many of the published papers are pretty sloppy - from typos, bogus citations and grammar issues to more serious problems. And once a paper has been accepted at a highly selective conference, it rarely appears again in a journal or another conference, so many papers are frozen in an "almost, but not quite, finished" state. Particularly for systems papers, this often makes it difficult to reproduce the results or the system itself, assuming that anybody tries.

Except for tweaks ("slides" are now PowerPoint) and the automation of the review process, the basic conference model hasn't changed in 20 years, so it might well be time to at least reflect on the objectives and see how we can improve things. I think our primary objective should be making overall research progress on important problems in science and engineering, rather than just turning the publishing system into a sports league.

Henning

On Aug 12, 2010, at 4:14 PM, Joe Touch wrote:

> Hi, Mark et al.,
>
> Again, speaking as an individual.
>
> On 8/12/2010 12:33 PM, Mark Allman wrote:
>> Well, quick math says that if the bottom 50% don't show up then you at
>> least get to double this 10%, right?
>
> Overall, FWIW, this is key to the point I was trying to make.
>
> My broader reason for raising this is:
>
> We can't continue, IMO, to claim that "networking" or even CS as a whole
> is different enough from other fields that our meetings always have <10%
> acceptance rates. We can't continue to claim that "Sigcomm is really like
> a journal".
>
> Ultimately, we play in broader enterprises - tenure evaluations,
> performance reviews, etc. - that span disciplines. We've been beating
> our heads against a 'system' that assumes how many journal vs.
> conference vs. workshop publications are reasonable across the board.
>
> Yes, ultimately, we're all headed for some evolution of this model,
> e.g., one based on electronic publishing. But that's not what's driving
> the current dissonance. IMO, it's time to consider whether life would be
> better if we just accepted a more conventional idea of what a conference is.
>
> Joe


_______________________________________________
Tccc mailing list
Tccc@lists.cs.columbia.edu
https://lists.cs.columbia.edu/cucslists/listinfo/tccc
