27

How do reviewers/referees make sure that the author has actually done the work as described in the paper and that the reported results were achieved?

Do they ask for the code or the tools that you have used? Will they test the output to make sure that the results are not faked?

I know that reviewers with 20+ years of experience in the specific field can figure out whether the results are real or fake, or whether they are even achievable, but it is still easy for the writer to fake results without any oversight.

For example, in CS machine learning, you can extract an extra feature from the texts and claim that you got better classification results than others have before, when in reality you did not. So how do reviewers make sure that your work is sincere?

Krebto
  • I don't think this is a duplicate. The linked paper asks specifically about data manipulation; this question asks about more general dishonesty. (For example: How do we know that the authors' code is consistent with the paper's description of the algorithm?) – JeffE Apr 05 '17 at 13:51
  • @JeffE same thought, I don't even know how they voted to close the question... – Krebto Apr 05 '17 at 14:02
  • Of course you're free to accept whichever answer you want. But it's weird that you'd pick one that doesn't answer your question and lags tenfold in votes behind the most-voted one. It seems you're more interested in validating your frustration with peer review (a specific case maybe) than genuinely interested in an answer. – Cape Code Apr 05 '17 at 16:11
  • @CapeCode Lagging tenfold in votes behind the most-voted answer doesn't make it a wrong answer. Anyway, as you said, I am free to accept any answer. For instance, it would be more interesting if you asked why the community is voting to close a question as a duplicate when in reality it is not! – Krebto Apr 05 '17 at 18:07

4 Answers

43

They don't. Reviewing papers is volunteer work that has to be done besides the regular job, so no more than half a day is spent on reading the paper and writing the review. That is enough to filter out obvious scams, but for more sophisticated fraud we rely on people trying to use the results once published and finding out that they do not work. The threat of subsequent sanctions (and, for most, the internalized honor code) is hoped to prevent most fraud.

Maarten Buis
  • Relying on people to approve someone's research fraud is not an effective method to detect scams, and most probably an author who does this will have a plan B; and if this author is a Ph.D. student who has already gained his degree, it will not affect him directly as much as it will affect his/her supervisor... – Krebto Apr 04 '17 at 11:21
  • I don't understand your comment. A review won't approve anyone, as it is blind and we don't know who wrote it. The only thing being evaluated is the paper. – Maarten Buis Apr 04 '17 at 11:28
  • Are you suggesting that the penalties for getting caught with fraud are not severe? If that is the case, then you are simply wrong. – Maarten Buis Apr 04 '17 at 11:30
  • Peer review is not intended to detect fraud, so the fact that it does not is not a bad thing. – Maarten Buis Apr 04 '17 at 11:32
  • How is it simply wrong? If the penalties are not severe enough, then we will find more and more research papers that are scams... – Krebto Apr 04 '17 at 11:34
  • It is simply wrong because they are severe. – Maarten Buis Apr 04 '17 at 11:35
  • Severe for whom? The student who already got his doctoral degree and is already working in industry, or the professor who was the student's main supervisor? – Krebto Apr 04 '17 at 11:40
  • @Krebto: There are consequences for the advisor, obviously, and s/he is certainly in a position to be able to tell if the work done by the student is genuine. They would get the fallout, and that is how it should be, because their role, in part, is ensuring the integrity of their students. Besides, if they really committed a serious fraud, I'm guessing the word would get out of academia. I doubt integrity (and a reputation for integrity) is something irrelevant to an industry researcher. – tomasz Apr 04 '17 at 11:59
  • On a different note, regarding not spending more than half a day reading the paper: I think this depends on the discipline. If someone did a full math paper review in half a day, I would say that they most likely did an extremely sloppy job (assuming the paper is not total bunk). – tomasz Apr 04 '17 at 12:01
  • @Krebto There have been cases of PhD degrees being revoked later when fraud was discovered. This obviously also has pretty severe consequences for the job situation of the person. – Tobias Kildetoft Apr 04 '17 at 12:18
  • @Krebto, I think you make some valid points, and obviously you have some concerns about the validity of peer review as part of the scientific process. I don't really know what you expect to get out of this question though - Maarten has answered your question accurately and succinctly. Arguing that it's not good enough isn't going to change the reality. – zelanix Apr 04 '17 at 12:46
  • One should note, though, that most papers are never thoroughly read after they are published. They are cited once or twice in the introductory section of a paper (author X has also used method Y), but probably nobody dug into the details of proofs or reproduced any experimental results. – J Fabian Meier Apr 05 '17 at 06:46
  • "has to be done besides the regular job" I'd say that reviewing is part of the regular job. – Dirk Apr 05 '17 at 13:12
  • @MaartenBuis A review won't approve anyone, as it is blind and we don't know who wrote it — Not in all areas. Theoretical CS conferences do not use double-blind reviewing. (Theory conference submissions are much more often incorrect than dishonest.) – JeffE Apr 05 '17 at 13:45
13

When I review papers, as an example, I have limited time between other tasks that must be done. I generally do two 'read-throughs', each taking about a couple of hours (depending on the length of the paper).

Aside from checking for consistency between sections (e.g. are the results accurately mentioned in the abstract, etc.), I check:

  • that the method by which the data were obtained is clear
  • that the data are clearly and descriptively displayed
  • for evidence of some data validation (important in my field)
8

In software conferences, it is becoming a trend to hold artifact evaluation sessions, in which the produced data or software is provided and evaluated against the claims made in the submitted paper. While the process is currently optional, it does increase confidence in the claims made by papers that have undergone it.

5

There are notorious wrongful claims that peer review failed to catch, for the exact reason mentioned in the question: reviewers don't, and most of the time cannot, check the validity of the data.

Cold fusion is an example where hundreds of peer-reviewed manuscripts got published.

The Hendrik Schön scandal is another case, in which a lot of fabricated data got published in very high-impact journals.

Burak Ulgut