
I am training a neural network and using 10-fold cross-validation to measure performance. I have read a lot of documentation and forum posts saying that the set of weights to save or checkpoint is the one that results in the lowest val_loss, not the highest val_accuracy, since the former usually leads to higher testing accuracy.

Out of curiosity, I checkpointed both the highest-val_accuracy and the lowest-val_loss weights during training. I found that, for some folds, I get better testing accuracy from the highest-val_accuracy weights than from the lowest-val_loss weights. So during cross-validation I chose whichever set of weights gave the higher testing accuracy, regardless of whether it came from the highest val_accuracy or the lowest val_loss, and then averaged the resulting testing accuracies across the 10 folds (a sketch of this setup is below).
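
To make the setup concrete, here is a minimal sketch of what I am doing, assuming a Keras classifier compiled with `metrics=["accuracy"]`; `build_model`, `X`, `y`, and the checkpoint filenames are placeholders, not my actual code:

```python
# Sketch of the dual-checkpoint 10-fold CV described above.
# Assumes build_model() returns a compiled Keras model with
# metrics=["accuracy"]; X and y are the full dataset.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras

def run_cv(build_model, X, y, n_splits=10, epochs=50):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    fold_scores = []
    for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
        model = build_model()
        # Checkpoint both criteria during training on this fold.
        cb_loss = keras.callbacks.ModelCheckpoint(
            f"fold{fold}_best_loss.keras", monitor="val_loss",
            mode="min", save_best_only=True)
        cb_acc = keras.callbacks.ModelCheckpoint(
            f"fold{fold}_best_acc.keras", monitor="val_accuracy",
            mode="max", save_best_only=True)
        model.fit(X[train_idx], y[train_idx], validation_split=0.2,
                  epochs=epochs, callbacks=[cb_loss, cb_acc], verbose=0)
        # Evaluate both checkpoints on the held-out test fold and
        # keep the better testing accuracy (the step I am asking about).
        accs = []
        for path in (f"fold{fold}_best_loss.keras",
                     f"fold{fold}_best_acc.keras"):
            m = keras.models.load_model(path)
            _, acc = m.evaluate(X[test_idx], y[test_idx], verbose=0)
            accs.append(acc)
        fold_scores.append(max(accs))
    # Average the per-fold testing accuracies across the 10 folds.
    return np.mean(fold_scores)
```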

Is my methodology valid?

