12

Is there any research/study/survey/dataset that looked at how often the reviewers correctly guess the identity of the authors when the review is double-blind?

I am aware that the answer is likely field-dependent, or even publication venue dependent. I am mostly interested in computer science, machine learning and natural language processing, but curious about other fields as well.

ff524
Franck Dernoncourt

5 Answers

17

Conferences in programming languages are moving toward double-blind reviewing, and since the idea is debated, people are collecting some evidence, including actual peer-reviewed papers.

In particular, even when reviewers believe they guessed the authors, it turns out they are sometimes/often incorrect, as profmartinez’s answer suggests. In the citations you’ll find actual numbers; I won’t attempt a literature review myself.

Links

Wrzlprmft
Blaisorblade
10

After many program committees, I've come to the conclusion that we grossly overestimate our ability to guess authors based on the content of a double-blind submission.

profmartinez
  • I agree with this. One journal I review for "unblinds" on acceptance - on one where I was near certain, I was wrong, and I missed the work of someone I know personally. – Fomite Feb 11 '16 at 20:40
8

I consider it my duty as a reviewer not to get distracted by irrelevant data like the authors' identities, so I just review the paper as it is. I'm sure that with a bit of guesswork (and a fast search or two) I'd be able to identify most of the authors, but I deliberately try not to do so.

Most areas are really closely knit: you'd be able to identify a colleague by mannerisms in writing, the approach chosen, and the results used (and papers cited), and you could take a good shot at identifying students of colleagues, at least up to their advisor.

vonbrand
7

The following paper presents findings from a recent investigation at three major Software Engineering and Programming Languages conferences (namely, ASE, OOPSLA and PLDI 2016).

Claire Le Goues, Yuriy Brun, Sven Apel, Emery Berger, Sarfraz Khurshid, Yannis Smaragdakis: Effectiveness of Anonymization in Double-Blind Review. CoRR abs/1709.01609 (2017)

During the review process, the reviewers were urged to provide a guess if they thought they knew an author of the given paper.

On the percentage of papers where a guess was made:

For the three conferences, 70%–86% of reviews were submitted without guesses, suggesting that reviewers typically did not believe they knew or were not concerned with who wrote most of the papers they reviewed.

On the correctness of guesses:

When reviewers did guess, they were more likely to be correct (ASE 72% of guesses were correct, OOPSLA 85%, and PLDI 74%). However, 75% of ASE, 50% of OOPSLA, and 44% of PLDI papers had no reviewers correctly guess even one author, and most reviews contained no correct guess (ASE 90%, OOPSLA 74%, PLDI 81%).

On the effect of reviewer expertise on guessing:

We conclude that reviewers who considered themselves experts were more likely to guess author identities, but were no more likely to guess correctly.

On the effect of (correct and incorrect) guesses on paper acceptance:

We observed different behavior at the three conferences: ASE submissions were accepted at statistically the same rate regardless of reviewer guessing behavior. [...] OOPSLA and PLDI submissions with no guesses were less likely to be accepted (p <= 0.05) than those with at least one correct guess. PLDI submissions with no guesses were also less likely to be accepted (p <= 0.05) than submissions with all incorrect guesses.

Summary:

We find that 74%–90% of reviews contain no correct guess and that reviewers who self-identify as experts on a paper’s topic are more likely to attempt to guess, but no more likely to guess correctly.

lighthouse keeper
7

Probably every time. Anyone who admitted to breaking the blinding, though, would probably face some serious negative repercussions, so I wouldn't expect anyone to speak up or respond to a survey that would allow a good study to be done. I've never deliberately broken the blinding on a double-blind review, but I've certainly received papers that were improperly blinded, where the true authors were obvious just from looking at the title page, so I had to send them back to the editor/program chair as unreviewable.

I know of no studies about this.

Bill Barth
  • Thanks. "I wouldn't expect anyone to speak up or respond to a survey that would allow a good study to be done." -> sure the study could be retrospective, i.e. once the actual review process is over. – Franck Dernoncourt Jan 27 '16 at 17:22
  • @FranckDernoncourt, even if promised anonymity, would you answer such a survey truthfully if you had managed or tried to break the blinding? – Bill Barth Jan 27 '16 at 17:26
  • I agree there would be some bias. Many surveys encounter this kind of issue, e.g. https://www.amstat.org/Sections/Srms/Proceedings/papers/1996_162.pdf , I guess that's life :) – Franck Dernoncourt Jan 27 '16 at 17:37
  • Guessing the authors' identity is so normal that a PC surveyed his reviewers to see if their guesses turned out to be correct. – Blaisorblade Feb 11 '16 at 20:20
  • @Blaisorblade, is that true? I never try to guess since I respect the ideal of double-blind review in attempting to reduce conscious and unconscious bias against underrepresented groups in paper review. – Bill Barth Feb 11 '16 at 21:11
  • @BillBarth: "Trying to guess" is indeed discouraged, but that's different from "guess" (as in, reading the N-th paper on topic X associated with researcher Y, and automatically guessing this paper also comes from Y). I certainly meant the latter, which is hard to avoid. References in my answer; in particular, the PC survey is here: https://www.cs.umd.edu/~mwh/papers/popl12recap.pdf. And this FAQ from our conferences distinguishes "guess" from "seeking out information on the authors' identity". – Blaisorblade Feb 11 '16 at 21:28