As the author of that page I feel that some clarification is necessary:-
Am I running a totally different version of the programs?
Yes. You're running the IID test. You need to run `ea_non_iid`. What you've run assumes that the data sample is IID within a p = 0.01 certainty. It then calculates the min-entropy of the dataset from the maximum symbol probability ($H_\infty = -\log_2 p_{max}$). That's easy. `ea_non_iid` attempts to measure $H_\infty$ of correlated data. That's hard.
The reason 90B is pretty useless (and never used) is that the code assumes uniformly distributed data. To be honest, no one really knows what the authors were thinking. [Insert appropriate conspiracy theory, but I draw your attention to the Federal Information Security Modernization Act (FISMA) of 2014, 44 U.S.C. § 3551 et seq., which is referenced on page 3 of 90B.]
Other than a few laboratory binary entropy sources, most generate some form of non-uniform distribution. You can get really weird ones depending on how the source is sampled and packed into bytes. That site has examples, and this is another one from a current project's source:-

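The packing effect is easy to quantify. As a hypothetical illustration (the 0.7 bias is made up for the example), take a binary source with P(1) = 0.7 whose bits are packed eight to a byte:

```python
import math

# Hypothetical biased binary source: P(1) = 0.7, bits assumed independent.
p1 = 0.7
h_bit = -math.log2(max(p1, 1 - p1))   # per-bit min-entropy ≈ 0.515

# Packed 8 bits per byte, the most likely byte is 0xFF with
# probability 0.7**8, so the per-byte min-entropy is:
h_byte = -math.log2(p1 ** 8)          # ≈ 4.12 bits, not 8

# Equivalently, min-entropy is additive over independent bits:
assert abs(h_byte - 8 * h_bit) < 1e-9
```

So even a mildly biased bit source yields bytes carrying barely half the entropy a uniform byte would, and the byte-level histogram looks nothing like uniform.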
The site also says to not trust anything on that site. Do your own research and have a look at:-
John Kelsey, Kerry A. McKay and Meltem Sönmez Turan, *Predictive Models for Min-Entropy Estimation*; and Joseph D. Hart, Yuta Terashima, Atsushi Uchida, Gerald B. Baumgartner, Thomas E. Murphy and Rajarshi Roy, *Recommendations and illustrations for the evaluation of photonic random number generators*.
This is an extract:-

You can see that in some cases $H_\infty$ is underestimated sixfold. Their various predictors are not very good. From experience and research I trust the LZ78Y compression predictor the most, but even that should be treated with caution. This is consistent with my own testing as shown.
John Kelsey is one of the 90B authors and so he criticises himself!
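For the curious, the idea behind the LZ78Y predictor can be sketched quite briefly. This is a heavily simplified illustration of the concept, not the SP 800-90B §6.3.10 algorithm: it omits the standard's dictionary-growth rules, confidence-interval correction and local-predictability term, and the 1/256 floor assumes 8-bit symbols:

```python
import math
from collections import defaultdict

def lz78y_sketch(seq, B=16, max_dict=65536):
    """Simplified LZ78Y-style predictor: learn next-symbol counts for
    contexts up to length B, predict the most frequent follower, and
    turn the prediction accuracy into a min-entropy bound."""
    D = defaultdict(lambda: defaultdict(int))
    correct = predictions = 0
    for i in range(1, len(seq)):
        # Predict seq[i] from the longest previously seen context.
        for j in range(min(B, i), 0, -1):
            ctx = tuple(seq[i - j:i])
            if ctx in D:
                predictions += 1
                if max(D[ctx], key=D[ctx].get) == seq[i]:
                    correct += 1
                break
        # Record seq[i] as the follower of every context ending at i.
        for j in range(1, min(B, i) + 1):
            ctx = tuple(seq[i - j:i])
            if len(D) < max_dict or ctx in D:
                D[ctx][seq[i]] += 1
    p = correct / predictions if predictions else 0.0
    p = max(p, 1 / 256)  # floor at blind guessing for 8-bit symbols
    return -math.log2(p)
```

A predictor like this catches structure the histogram-based IID estimate cannot see: a perfectly alternating 0,1,0,1,... stream has a flat byte histogram but is completely predictable, so its min-entropy per symbol is zero.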