My question might be a stretch given my lack of formal experience with writing and publishing research papers, but one thing I've noticed in my line of work, and in reading published papers, is that the methodology seems to take a back seat to the 'meat' of a paper: the analysis and discussion. Given the relative ease of
- Hosting a repository of the scripts used to analyze the data
- Hosting a database of the aforementioned data
and given how common it is within computer science (and other fields) to publish reproducible results that others can replicate in order to confirm or refute them:
Why do papers omit such a relatively simple, yet far-reaching, component of published work?
Why isn't the work behind the analysis packaged in such a way that it can be reproduced by simply running a script?
When I asked a PhD coworker this question, the response was that "one can always ask the author for details". But given the sheer volume of published work, requesting and receiving verifiable and, in most cases, runnable copies would be extremely difficult.
In my mind, this would be as simple as linking a paper's data methodology to a GitHub repository listing the software used, the analysis scripts, and a link to the dataset in question. The reader would (if they had the necessary software to run it) download the script, download the data, run the script, and see all of the results and figures from the published paper.
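To make concrete what I have in mind, here is a minimal sketch of the kind of single entry-point script such a repository could ship with. Everything in it is hypothetical: the dataset URL, the column names, and the figure are placeholders I invented, not taken from any actual paper.

```python
# reproduce.py -- hypothetical entry point for regenerating a paper's results.
# Assumes the repository also pins exact package versions (e.g. a requirements.txt).
import urllib.request
from pathlib import Path

import pandas as pd
import matplotlib.pyplot as plt

DATA_URL = "https://example.org/archive/study-data.csv"  # placeholder archive link
DATA_FILE = Path("data/study-data.csv")
FIG_DIR = Path("figures")


def fetch_data() -> pd.DataFrame:
    """Download the published dataset once, then reuse the local copy."""
    if not DATA_FILE.exists():
        DATA_FILE.parent.mkdir(parents=True, exist_ok=True)
        urllib.request.urlretrieve(DATA_URL, DATA_FILE)
    return pd.read_csv(DATA_FILE)


def main() -> None:
    df = fetch_data()
    # Re-run the paper's (hypothetical) analysis and regenerate its Figure 1.
    summary = df.groupby("condition")["outcome"].mean()
    FIG_DIR.mkdir(exist_ok=True)
    summary.plot(kind="bar")
    plt.savefig(FIG_DIR / "figure1.png")
    print(summary)


if __name__ == "__main__":
    main()
```

The reader would then run `python reproduce.py` and compare the regenerated figure and numbers against those printed in the paper.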
I think I am missing something in my understanding of how the process works, hence my question here on Academia SE.
I found some existing questions whose answers I believed addressed mine, and deleted this question:
- Why are papers accepted even if they don't release code or data to allow reproducibility?
- How to get the data to reproduce a published result?
But upon further reflection, I realized that not all research is created equal. Gathering, compiling, cleaning, and analyzing data is a completely different methodology from, for example, wet-lab work or fieldwork. Given the differences in cost and replicability across research methods, is there a point where the cost and difficulty are so high that it becomes impossible (or so incredibly costly that very few can afford it) to disprove an accepted (regardless of its actual validity) concept or theory?
An analogy would be a patent troll: no matter how wrong the patent holder is, the cost of litigation is so high and the process so convoluted that the original inventor has no choice but to acquiesce and relinquish his or her rights.
Applied to academia, the analogy would be that because the cost of disproving a claim is so high, no one is willing to put in the effort to disprove it.
If that is the case, is "too costly to prove or disprove" accepted as a reason not to believe the findings of a research project?
Or should one (as suggested by one of the answers) simply accept that publications are produced in good faith? But if we take into account published 'junk science' that is touted by some as truth, where does that leave the standard?
If I were a citizen reading a study in a prestigious publication, by reputable authors, that required millions of dollars' worth of resources to produce, should I take it to be inherently true?