
Disclaimer: This is the third incarnation of my question because the previous two have met with silence.

Adapted to this site:

For a scientific study to be accepted as "truth", the standard process is peer review. When the subject is something like math, where you can follow through all the steps and recreate the logic in your head, that works great.

But what about cases when that's not an option - for example, research into the efficacy of a vaccine or a new drug? After Phase 3 trials the pharmaceutical company releases the data, which is then scrutinized by (I'd expect) both peer researchers from rival companies and government agencies. But how can they detect that it's not, in whole or in part, a lie? There is, after all, a huge financial motivation for the company to lie. Even when the reviewers are highly motivated to find any traces of fraud, what is the actual process by which they do it, if they can't replicate the study itself and compare the results?

Richard Erickson
Vilx-
  • Did you consider that there is also a huge financial reward for not lying? Being the company known for producing a drug with (very) harmful side effects is not going to sit well with shareholders, since the company's value could take a big hit. – Jeroen Jan 27 '21 at 08:40
  • @Jeroen - True, but only if you're found out. And there are other ways to cheat - for example, not by masking harmful side effects but by exaggerating efficacy. That's a lot harder to detect. – Vilx- Jan 27 '21 at 08:49
  • Determining "truth" is not part of peer review. https://academia.stackexchange.com/q/148027/13240 – Anonymous Physicist Jan 27 '21 at 09:31
  • Exaggerating efficacy (saying 85% rather than 80%) is much less harmful than masking harmful side effects. But others will eventually replicate the study when they compare their new drug to the company in question's drug. – Ian Sudbery Jan 27 '21 at 10:11
  • I can't answer your question, because I'm not a pharmacologist or experimental researcher, but two important points are that (1) reviewers can check the research design and the appropriateness of the methods used in a study without performing a replication, and (2) reviewers rely on rough heuristics and experience to perform a "smell test" - if something "smells fishy", they can ask for more evidence. This is more of an art than a science. In any case, peer review is no silver bullet against fraud. – henning Jan 27 '21 at 10:34
  • Even subsequent experiments may fail to find a notably wrong value. See the Timeline of measurements of the electron's charge. – iBug Jan 27 '21 at 18:21
  • Since vaccines are of particular interest right now: it would involve a huge amount of work and malice to fabricate a Phase 3 study. And what is there to gain? The malice would be detected in Phase 4 (i.e., in the real world), and then the company would be on the hook for damages, which can be quite the expense. Ask Grünenthal whether Contergan was a good investment. – Christian Jan 27 '21 at 18:30
  • "For a scientific study to be accepted as 'truth' the standard process is peer review." You should never take a single study as "truth"; your response should be "that's interesting". Taking single studies as truth is why science journalism, particularly around diet and food science, is a complete mess of contradicting info. When a whole bunch of studies, preferably ones that look at the question from different angles, start agreeing with each other, that's when you start to have some claim on having found some truth. – eps Jan 27 '21 at 18:33
  • @Christian - "It would involve a huge amount of work and malice to fabricate a Phase 3 study." How so? At the end of the day it's just a stack of papers. Skip the phase, produce fake papers. Sure, there are a few people involved, but my gut feeling says you should be able to keep the number of conspirators below 20. Am I wrong? Although I agree that real-world results would expose the fraud, that would be MUCH later. – Vilx- Jan 27 '21 at 19:59
  • @eps - I agree. Unfortunately this isn't always feasible - such as in the aforementioned Phase 3 drug studies. Though, of course, after the drug is deployed you can slowly start gathering real-world statistics. – Vilx- Jan 27 '21 at 20:06
  • @Vilx- Making a huge stack of papers internally consistent is already a lot of work even when you don't lie. And for a study with 40,000 people you need more than 20 people. – Christian Jan 27 '21 at 21:52

3 Answers


I take issue with your first statement, that for a "...study to be accepted as 'truth' the standard process is peer review". The purpose of peer review is not to judge what is true or not, but to evaluate (simply put) whether a study is well conducted, uses adequate methods, acknowledges relevant previous research, and whether the conclusions are supported by the data and analysis. Another way of putting this might be that peer review is about validation of the claims made, not validation of truth. Some other thoughts and guidelines on the purpose of peer review can be found in PNAS's scope and responsibilities (see "Peer Reviewer Instructions" and "Reviewer responsibilities") and in a relatively clear statement on the scope of peer review from The Royal Society (see "Reviewing instructions").

In a scientific context, "truth" is something that follows from repeated studies that confirm previous results, and it rests on a network of theory and observation that is (to a large extent) congruent. Generally, however, I would say that it is more appropriate to define the scientific method as a way to search for "truth" than as a method to determine what is "true" (opinions will probably vary on this, though).

When it comes to cheating, especially with regard to data, the possibilities for detecting it during peer review are limited, even in the ideal situation where the data have been made fully available. If researchers, for example, fully fabricate data or tamper with raw data, this will not be caught in peer review, since only the modified data will be available. Add to this that the time available for peer review is very limited, so a full statistical re-analysis of the data is not possible. This is also one of the reasons why replicated studies, and other studies with supporting evidence, are needed before results are accepted as "true".

fileunderwater
  • Does that mean that peer review does not validate data in any way? For example, if I conducted a study of the average age of people passing my house and concluded that all 1000 of them were 27 years old and named "John" - that would obviously be nonsense. But peer review would let that pass, assuming everything else was in order? – Vilx- Jan 27 '21 at 09:04
  • No; in your paper you would need to describe the methodology, which in your scenario (almost surely) has a huge flaw, and that would be detected in the peer review. As written in the answer, your study will not have been "well-conducted" and will not have used "adequate methods". – cheersmate Jan 27 '21 at 09:18
  • @Vilx- In many fields, it's standard practice that raw data are supposed to be made publicly available. But reviewers are under no obligation to check them. Some fields have introduced dedicated "artifact evaluation" processes following the actual peer review. – lighthouse keeper Jan 27 '21 at 09:38
  • @Vilx-: Most likely (not sure about medicine, but other subjects), if you omitted the fact that you only asked people who are 27 years old, people would not realize it unless the result seemed really silly (as in this case). Indeed, there are many "false" studies out there (sometimes out of malice, sometimes because of lacking knowledge). In the case of the Covid vaccination, however, the study gets "replicated" in the sense that failures will be detected when the population is vaccinated. – user111388 Jan 27 '21 at 09:42
  • @cheersmate - Oh, no, the methodology description would be perfect. It's just that I wouldn't do any of it and would simply invent all the raw data. – Vilx- Jan 27 '21 at 09:59
  • @Vilx- This varies a lot between scientific fields, so it is hard to give a general answer. Overall, peer review is not a one-size-fits-all process, and the areas covered by peer review therefore vary between fields, journals and individual researchers/peer reviewers. However, yes, in some fields there are attempts to also review data. The possibility of doing this has become a bit better with time (again, with huge differences between fields) due to mandates to openly publish data sets. [cont'd...] – fileunderwater Jan 27 '21 at 10:14
  • [cont'd] However, fully reviewing data and the statistical analysis of those data is a huge undertaking, and the normal peer-review process doesn't have time for this. Remember, the normal time set aside for peer review (generally done unpaid, by volunteers) is a couple of hours to sometimes a couple of days, so the possibility to fully review data is limited, even when full data sets are available. I should also mention that I didn't really notice your question title at first, and mostly wrote my answer on the issue of validating "truth". The issue of cheating is difficult, [cont'd...] – fileunderwater Jan 27 '21 at 10:17
  • [cont'd] and only some aspects can really be caught by peer review, even in the ideal case. If researchers, for example, fully fabricate data or tamper with raw data, this will not be caught in peer review, even when full data sets have been made openly available. Hence the need for replicated studies and other studies with supporting evidence to approach something that resembles "truth". – fileunderwater Jan 27 '21 at 10:19
  • For an interesting view on peer review, truth and correctness, read Gelman's blog: https://statmodeling.stat.columbia.edu/?s=peer+review – Ethan Bolker Jan 27 '21 at 16:36
  • @Vilx- Well-crafted fraud is indeed extremely hard to detect and usually requires someone involved coming forward. As other answers have said, the strong disincentive is that if you do it long enough you will most likely be caught and your reputation ruined. – eps Jan 27 '21 at 18:42

Quite simply, it doesn't and it can't. Further, the aim of peer review is not to detect fraud. Peer review can answer the questions:

  1. Does this study answer an interesting question that has not already been answered elsewhere?
  2. Does the study use the correct methodology to answer the question? Are there flaws or gotchas in the implementation? For example, I am currently having a back-and-forth over the most appropriate way to remove a particular sort of bias from a data set. These are the sort of subtleties that a non-expert reader might not be able to detect.
  3. Are the conclusions drawn supported by the data and analysis provided? Are there subtle reasons why what the authors claim doesn't follow? For example, it might take an expert infectious disease epidemiologist to tell where a particular interpretation of results about Covid falls victim to the Texas Sharpshooter Fallacy.

Journals can and sometimes do detect particularly egregious cases of fraud (like some categories of image manipulation), and reviewers will hopefully catch cases where authors are being evasive, cherry-picking data, ignoring flaws in the data, or suggesting things that don't quite follow, but outright lies are more or less impossible to catch.

This is why generally things don't become accepted as truth on the basis of just a single study. While outright replications are rare, future studies will use previous studies as starting points, and if those previous studies are incorrect it will become apparent as the house of cards built on them doesn't stand up.

In fact, we rarely accept anything as TRUTH. Science doesn't find truth, and all papers are wrong. Instead, science as a whole, averaged over everything, asymptotically approaches truth; on a small scale it is not a smooth approach but a random walk - a biased one, to be sure, more two steps forward and one back than the opposite.
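
As a toy illustration of that claim (a sketch of my own, not part of the original answer), here is a simulation in which each new study moves the field's estimate toward the truth with probability 2/3 and away from it with probability 1/3: noisy at every small scale, yet convergent on average. All parameters are invented for illustration.

```python
# A biased random walk toward "truth": two steps forward, one back.
# Toy illustration only; the parameters are arbitrary.
import random

def biased_walk(truth=10.0, steps=1000, step_size=0.1, p_toward=2/3):
    """Each study nudges the estimate toward `truth` with prob. p_toward."""
    estimate = 0.0
    for _ in range(steps):
        toward = 1 if random.random() < p_toward else -1
        estimate += toward * step_size * (1 if truth > estimate else -1)
    return estimate

random.seed(42)
print(biased_walk())  # hovers near 10.0, never smoothly "arriving" at it
```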

This is why breaking into a new field can be difficult. You need to absorb the complete milieu of the field. You need to get a feel for what the field as a whole believes, rather than what an individual paper says. That's not to say the field is always right and the individual paper wrong, but siding with the field will make you right more often than wrong.

Anyon
Ian Sudbery
  • "All papers are wrong" - I would adapt this statement for mathematics, where it's probably not a good way of putting it. Even if true statements in mathematics ultimately depend on an agreement of the community, mathematical knowledge is typically orders of magnitude more stable than in any other topic. – Captain Emacs Jan 27 '21 at 10:36
  • Yes. This applies more to empirical disciplines. – Ian Sudbery Jan 27 '21 at 11:25

TL;DR: Finding outright fraud is not the job of peer review; it is not difficult to cheat in a publication and it is not easy in general to discover it. However, fraud in important work will ultimately be found out. Fraud in unimportant work may linger for a while because nobody will bother to use or reproduce the results.

Peer review can rarely identify fabricated data directly (there are exceptions: see the case of Jan Hendrik Schön, where graphs were identically reproduced in different contexts, or cases where image manipulation can be clearly established).
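
As a rough sketch of how such duplication might be flagged in data (my own illustration; the Schön duplications were actually spotted by readers comparing published figures by eye): supposedly independent experiments should not share identical noise, so correlating the high-frequency residuals of two curves is one crude check.

```python
# Crude duplicated-noise check: independent measurements should have
# uncorrelated residual noise. Illustration only; the data are synthetic.
import numpy as np

def shared_noise_correlation(y1, y2, window=5):
    """Correlate the residuals of two series after moving-average smoothing."""
    kernel = np.ones(window) / window
    r1 = y1 - np.convolve(y1, kernel, mode="same")
    r2 = y2 - np.convolve(y2, kernel, mode="same")
    return np.corrcoef(r1, r2)[0, 1]

rng = np.random.default_rng(0)
noise = rng.normal(0, 0.1, 100)
exp_a = np.linspace(0, 1, 100) + noise
exp_b = np.linspace(0, 2, 100) + noise  # same noise reused: suspicious
print(shared_noise_correlation(exp_a, exp_b))  # close to 1.0, a red flag
```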

However, note that fabricating data is the ultimate scientific crime, even worse than plagiarism. If the question is important, you waste other researchers' valuable time and direct them away from other, more productive lines of work.

Furthermore, if the question is important, you will be found out. It may take time, but you will be found out. This is how science works: it makes mistakes, results are foggy, but the fog clears at some point. If you have ever fabricated data, you will have a very hard time ever being believed again - actually, I would venture so far as to say you will never be believed again. No one wants to waste their time on work by someone who is not just sloppy (as in the Cold Fusion case), which is bad enough, but has actively misled their peers.

If the question is unimportant, and one is out of the eye of scientific scrutiny, then one may survive in the system for a while (there have been cases where whole careers were built on this over long periods); but then, what's the point? What's a charlatan without an audience?

Peer review is mostly a sanity check for the coarsest omissions, mistakes, or really clumsy fakes - but discovering the latter is not its purpose. Given the above incentives not to lie, peer review assumes that the authors have given their best shot at being truthful, and it tries to catch honest mistakes; another role is evaluating the quality of the research (which is often very subjective and may have a latency of decades before it becomes more "objectively" evaluable).

[Addendum: One major class of issues could theoretically be discovered by peer review in the same way as vote tampering, namely via statistics such as Benford's law. However, unlike in voting, where results matter immediately and on a large scale, peer reviewers do not typically invest the time to run detailed evaluations of whether the statistics have been tampered with. Scientific work is not treated as adversarially as vote manipulation or intelligence work would be, and it would be a huge waste of time to do so, as there is enough to do with the exploration of the unknown.]
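
To make the Benford's-law idea concrete, here is a minimal sketch (mine, not part of the addendum) of the kind of first-digit screening a reviewer could in principle run. The reported_values are invented for illustration; a handful of numbers has no statistical power, and real screening would need large data sets.

```python
# First-digit screening against Benford's law with a chi-squared test.
# Illustration only: tiny samples like this prove nothing either way.
import math
from scipy.stats import chisquare  # assumes SciPy is available

def leading_digit(x):
    """First significant digit of a nonzero number."""
    x = abs(x)
    while x >= 10:
        x /= 10
    while x < 1:
        x *= 10
    return int(x)

def benford_test(values):
    digits = [leading_digit(v) for v in values if v != 0]
    observed = [sum(1 for d in digits if d == k) for k in range(1, 10)]
    # Benford's law: P(first digit = d) = log10(1 + 1/d)
    expected = [len(digits) * math.log10(1 + 1 / d) for d in range(1, 10)]
    return chisquare(observed, f_exp=expected)

reported_values = [123.4, 87.2, 19.5, 234.0, 41.7, 156.3, 98.1, 12.9]
print(benford_test(reported_values))  # a very low p-value invites questions
```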

Captain Emacs
  • I see. I wonder, though, whether companies are not more resilient against bad reputation than individuals. Sure, if it later comes to light that data was falsified, then there will be fines, maybe even someone jailed, difficulties on the stock market... but the company will carry on. And the general public has a short memory. Scandals are a dime a dozen these days. A few years later things will be back to normal. – Vilx- Jan 27 '21 at 12:29
  • @Vilx- That's a different question and has nothing to do with typical peer-review issues. There is a reason why you have to disclose company links as a reviewer. Reputational damage can still be very expensive, even if not destructive. Business follows different laws from academia. – Captain Emacs Jan 27 '21 at 13:21
  • True, true. Well... as you have probably gathered, I'm really looking for the answer to the question "what makes the test results of a vaccine vendor trustworthy?" And I thought it was the same peer-review process at play here, even though the "peers" in this case are government overseers. Guess not. My search continues. – Vilx- Jan 27 '21 at 13:55
  • No system, peer review or anything else, can prevent a concerted attempt at outright lies. Things that argue against complete invention: a very large number of people are involved in these trials, and they would ALL need to keep their mouths shut. And often these trials are not carried out by the companies themselves but by contract research organizations, which are paid the same irrespective of the outcome of the trial - and they really would be wiped out by such a scandal, which would have to involve 1000s of people never saying anything. – Ian Sudbery Jan 27 '21 at 14:06
  • @IanSudbery Indeed. The OP is to some extent justified in the abstract question of safety approval, though probably not in the concrete one of the vaccines, as the details of the issues around the approval of the Boeing 737 MAX show. In a vaccine approval, however, the damage created by knowingly withholding/faking facts is so much more expensive than even that - not just in terms of immediate loss of lives, but also in the reputational damage to vaccination campaigns as a whole - that I expect the full force of the law and governmental power to be unleashed at the perpetrators. – Captain Emacs Jan 27 '21 at 14:14
  • One only has to look at the large fraction of drugs that fail to make it through the approvals process to see that this is at least not ubiquitous. – Ian Sudbery Jan 27 '21 at 14:54
  • @Vilx- It's pretty hard to do what you are talking about in large companies, simply because of the bureaucracy and number of people involved, even for smaller decisions. The incentives are more aligned when you are talking about small startups whose future may depend on a product being approved, and there are fewer people involved who can put the brakes on or call foul. A perfect example is Theranos - there's no way Holmes could have pulled that off as a manager in some giant pharma company. And if anything, brands are FAR more image-conscious these days than before social media. – eps Jan 27 '21 at 19:01
  • @IanSudbery - Citation very much desired? :) – Vilx- Jan 27 '21 at 20:07
  • @IanSudbery - Also - 1000s of people? Well, if you actually do the study, yes. But for merely producing the right paperwork, shouldn't it be enough with just a few people? – Vilx- Jan 27 '21 at 20:08
  • Citation for which bit? That most trials are actually conducted by CROs, or that a large proportion of drugs fail trials? – Ian Sudbery Jan 28 '21 at 00:43
  • 1000s might have been an exaggeration - 100s, perhaps. You couldn't just pretend to do a trial without actually doing it. What are all those people going to say when they find out they've supposedly been involved in a trial for the last 3 years that they've never heard of? I mean, you might just about get away with inventing numbers and putting them in a table for an academic journal, but not for a medical regulator - you'd need the details of all the centers involved, the enrolling physicians, the trial nurses, the consent forms... – Ian Sudbery Jan 28 '21 at 00:50