I found a potentially incorrect application of machine learning validation methods in a paper recently published in Nature Energy, which is a top energy journal (impact factor above 50). The authors used k-fold cross-validation on a forecasting model for gasoline demand under COVID-19, built from Google mobility time-series data and COVID-19 case data. They claimed there is no overfitting issue because both the training and test cases reach an R-squared above 0.8. That seems wrong to me: for a forecasting model on time-series data, a time-series validation scheme (training only on the past and testing on the future) should be used instead of k-fold cross-validation, since standard k-fold lets the model train on observations that come after the test fold. Using k-fold here would effectively be cheating. Please correct me if I am wrong.
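To make the point concrete, here is a minimal sketch (not the paper's actual code, just an illustration assuming scikit-learn and a hypothetical daily series) showing how standard k-fold mixes future days into the training folds, while a time-series split never does:

```python
import numpy as np
from sklearn.model_selection import KFold, TimeSeriesSplit

# Hypothetical daily time series (e.g., mobility / demand observations).
n_days = 30
X = np.arange(n_days).reshape(-1, 1)

# Standard k-fold shuffles observations across folds, so the training data
# contains days that come *after* the test fold -> future information leaks in.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    print("k-fold   train max day:", train_idx.max(), "| test days:", sorted(test_idx))

# TimeSeriesSplit only ever trains on the past and tests on the future,
# which is the appropriate backtesting scheme for a forecasting model.
tss = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tss.split(X):
    print("ts-split train days: 0..%d | test days: %d..%d"
          % (train_idx.max(), test_idx.min(), test_idx.max()))
```

With the k-fold output you will see training indices larger than the test indices in every fold, which is exactly the leakage that can inflate the test R-squared.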
I contacted the editor, and further evaluation has been ongoing since then. However, the editor does not promise to take proper editorial action because I am not willing to disclose my personal information. Is that standard practice? Could anyone offer suggestions on this? How can I report this potential misconduct in the correct way while remaining anonymous? Thanks a lot for your advice!