Context
In light of the perishability of external resources, as illustrated in https://xkcd.com/1909/, I was wondering whether it is desirable to ensure that a single PDF, including its research code, is reproducible as a stand-alone artifact.
This question pertains to articles produced with LaTeX that use code as part of their research. Often, this code is hosted in an external repository, which is convenient but may one day become inaccessible.
Sometimes, the code used in the research is included in the appendices of the article PDF, which keeps it available for as long as the PDF exists. However, a scientist who wants to reproduce the results, or build on the presented work, must then manually copy and paste the code and assemble it into a runnable form (fixing line endings and the like). This could be made easier in the following way (among undoubtedly many others):
One could embed the code and data files in the PDF itself and add a short reproducibility appendix containing a small script that extracts them from the PDF, reproducing the research (and the report).
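On the embedding side, LaTeX already supports this via standard packages. A minimal sketch with the `embedfile` package is shown below; the filenames are hypothetical and stand in for whatever code and data the article actually uses.

```latex
% Preamble of the article:
\usepackage{embedfile}

% Anywhere in the document body: attach the simulation code and the
% data file as PDF-level embedded attachments.
\embedfile{pancake_sim.py}
\embedfile{pancake_data.csv}
```

The resulting attachments travel with the PDF and can be saved from most PDF viewers, or extracted in bulk with a tool such as poppler's `pdfdetach -saveall paper.pdf`. The `attachfile2` package offers a similar, annotation-based alternative.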
However, I have never found an article with that level of reproducibility. I would expect it to save considerable time for people reproducing the work of others, since the task is made so much easier.
Example
Suppose someone publishes an article describing how a certain type of pancake can be baked. To illustrate the method, code simulating pancake baking is written, and a particular pancake-baking data file (4 kB) is created to drive the simulation. The results of the simulation are plotted in the report. The code is published at somerepositoryhost.com/somerepository
Fast-forward 10 years: the original author has died, the repository is no longer hosted, and the particular pancake-baking data file is no longer available. Yet person V wants to verify whether that particular pancake could indeed be baked according to the presented method. Person V should still be able to rewrite the code from the method section and reproduce the results presented in the paper. However, that is far more work than running a single command that does it all (provided person V possesses hardware that can still run the code).
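That "single command" could be an extraction script shipped in the reproducibility appendix itself. Below is a minimal sketch in Python; the function name is mine, and it deliberately assumes the simplest case: an unencrypted PDF whose attachments are stored as plain FlateDecode-compressed /EmbeddedFile streams (not inside object streams). A robust tool such as poppler's `pdfdetach` would be preferable in practice.

```python
import re
import zlib

# Matches a stream whose dictionary declares /Type /EmbeddedFile.
# Naive by design: assumes an unencrypted PDF whose attachment
# streams are FlateDecode-compressed and not packed into object streams.
EMBEDDED_STREAM_RE = re.compile(
    rb"/EmbeddedFile[^>]*>>\s*stream\r?\n(.*?)endstream",
    re.DOTALL,
)

def extract_embedded_files(pdf_bytes: bytes) -> list[bytes]:
    """Return the decompressed payload of every embedded-file stream."""
    payloads = []
    for match in EMBEDDED_STREAM_RE.finditer(pdf_bytes):
        # A decompress object tolerates the end-of-line bytes that
        # trail the stream data (they end up in .unused_data).
        payloads.append(zlib.decompressobj().decompress(match.group(1)))
    return payloads
```

Person V would then run something like `extract_embedded_files(open("paper.pdf", "rb").read())` to recover the code and data files, and execute them directly.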
By lowering the amount of work required to reproduce the results, one would make it easier both to verify articles and to build upon them.
Question
Why is this level of reproducibility not standard, at least for LaTeX-compiled articles that rely on neither large external input datasets nor custom computing platforms?
Note
This is not to suggest that this is the best, or even a perfect, way to guarantee reproducibility of scientific articles that rely on code. It is merely one approach that could speed up reproduction, and ease building upon such work, once externally referenced code and/or data resources have decayed.