On 8/12/2010 8:57 AM, Mark Allman wrote:
>
> Joe-
>
>> IMO, we need to determine whether there's something about our work
>> that truly is unique and warrants <10% conference accept rates and
>> long review timelines - and make that case on its own merit (not just
>> "that's how it's always been"),
>
> I'm not sure I even understand your point here Joe. Two things ...
>
> - Conferences have long review timelines? I don't agree. I mean,
> 3--4 months is pretty much all I ever wait for conferences (less for
> workshops, more for journals). That doesn't strike me as
> particularly long or onerous.
Depends on the conference. Infocom is around 4 months. Sigcomm was
closer to 5. The point isn't that it's longer than journals, it's that
the process can be "long".
That's quite a lot longer than the gap between the CFP and the
submission deadline. Granted, many people know when cyclical CFPs are
coming out, but the presumption is that it takes 2-3x longer to process
reviews than to write the paper. That seems a bit upside down to me.
> - As for the rate... it's driven by the conference style (see Victor's
> nice post on this) and the denominator. I just finished chairing
> IMC (a pretty top-tier venue for measurement work). We had an
> accept rate of 22% (47 / 211 submissions). Is that good? Bad? I
> dunno.
That's hard to judge as a raw number. The better question is the accept
rate after quick rejects, i.e., after excluding papers so badly out of
scope that they didn't warrant a review (e.g., promotional product
descriptions sent to a technical meeting).
...
> But, what I know is that the 47 papers fill a full three day
> single track conference---i.e., the time allotted. (Note: the
> papers are a mix of long (14 page) and short (6 page) papers and get
> differing amounts of presentation time at the conference.) Last
> year's conference also filled three full days, but had a higher
> accept rate because the number of submissions was lower. The 2009
> conference went to three full days (from 2.5) in an effort to
> increase the numerator. Conferences can also go multi-track. Etc.
> But, these sorts of decisions have cons, as well. So, to me, the
> rate really says very little by itself (e.g., as cited on people's
> vitae) and we spend far too much time obsessing about the accept
> rate.
So basically when you buy shoes, you show up and ask what size they have
in stock and wear the ones that fit the best? :-)
I.e., IMO, you pick the papers you want, not the papers that fit. Yes,
that means adjusting the meeting schedule as a result - adjusting the
number of tracks, adding panels if useful, or just adjusting the meeting
dates accordingly.
That's NOT what many conferences with low accept rates have chosen to do.
Again, I'm asking us to ask ourselves whether only 10% of our papers are
really conference quality, or whether we're picking 10% to make the
shoes fit.
> And one more thing on hyper-criticality:
>
> - Half the papers received at most of the conferences and workshops I
> have been on the PC of are junk of some form (writing, technical
> methodology, analysis, etc.). And, many more simply don't measure
> up to close to the bar. I don't condone reviewers being rude or
> biased or whatnot in their comments. But, let's also not pretend
> that of IMC's 211 submissions there were 211 good papers and
> oh-isn't-it-a-shame that we couldn't accept all of them. That just
> isn't the case. The input to the process is not somehow beyond
> reproach. A data point is that going into the IMC PC meeting a few
> weeks ago we were still considering 72 papers---i.e., about
> one-third of the submissions.
Some of the current definition of "the bar" is based on the 10% we
accepted at past meetings. Many people might look at the next 10% of
Sigcomm submissions and say, e.g., "that's not a Sigcomm paper" - but
that judgment is colored by the artificially high bar set in the past.
The questions for a conference paper should be:
- is it reasonably correct (not perfect)?
- would it be useful to discuss, for both authors and attendees?
Right now the bar at <10% meetings is "can I find a reason to reject
it?". That's not a conference paper criterion, IMO.
> And, BTW, I try very hard to look for the good in all these junk
> papers and encourage authors. I think many times there are indeed
> interesting / novel / useful nuggets in these papers, but these are
> not well explored and I am hopeful that they can be with a bit more
> elbow grease and so I try to be helpful. So, it isn't like I think
> these "junk" papers are somehow beyond repair or doomed to "junk"
> status forever. But, we should face facts that we see a lot of
> cruddy papers and perhaps one of the things we should try to do as a
> community is have a little more shame in terms of what we send out.
Don't get me wrong about the general quality of submissions - yes, there
are a lot of papers that ought to have been caught, whether by the
co-authors, the department/organization, or colleagues, before they
went out the door. But that's a different issue, and not one that I
think results in <10% accept rates at large meetings.
Joe
_______________________________________________
Tccc mailing list
Tccc@lists.cs.columbia.edu
https://lists.cs.columbia.edu/cucslists/listinfo/tccc