The whole idea of a validation set is that the model never sees this data during training, so you get an unbiased estimate of the model's performance. Based on this estimate you then choose the best hyperparameters for the model. The problem is that tuning hyperparameters is itself a form of training, so with optimized hyperparameters the model starts to overfit to the validation set. That is why, to measure the real accuracy of the model, you need yet another portion of the data that the model has never seen in any way: the test set.
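For concreteness, here is a minimal sketch of such a three-way split. The dataset and the use of scikit-learn's `train_test_split` are my own assumptions, since the question does not specify a library:

```python
# A minimal sketch of a train / validation / test split, assuming scikit-learn
# and the bundled diabetes dataset (both are illustrative choices, not from the question).
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)

# First hold out the test set; it is only touched once, at the very end.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Then split the rest into training data and a validation set used for hyperparameter tuning.
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.25, random_state=0)
```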
Therefore, in your case, when you do not have any hyperparameters, you can simply split the data into a training set and a test set.
If you want a more reliable estimate of your model's accuracy, it is better to use cross-validation instead of a single train/test split. Neural networks usually do not use full cross-validation because it multiplies the computation time (by 5 for 5-fold cross-validation).
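As a sketch, k-fold cross-validation with scikit-learn (again an assumed library and dataset) looks like this; note that it refits the model once per fold, which is where the extra compute comes from:

```python
# A minimal sketch of 5-fold cross-validation; the model is fit 5 times,
# hence roughly 5x the cost of a single train/test split.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
scores = cross_val_score(LinearRegression(), X, y, cv=5)  # one score per fold
print("mean R^2 across folds:", scores.mean())
```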
With hyperparameters, the ideal approach is nested (double) cross-validation: an inner loop playing the role of the validation set and an outer loop playing the role of the test set. This is too expensive computationally for most models, so it is mostly used with very cheap ones, like ridge regression.
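A minimal sketch of nested cross-validation, assuming scikit-learn: the inner `GridSearchCV` stands in for the validation set (hyperparameter tuning) and the outer loop stands in for the test set (performance estimate). The `alpha` grid is an arbitrary illustrative choice:

```python
# Nested (double) cross-validation sketch: each of the 5 outer folds runs
# a full inner 5-fold grid search, so the number of fits multiplies quickly.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_diabetes(return_X_y=True)

inner = GridSearchCV(Ridge(), param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]}, cv=5)
outer_scores = cross_val_score(inner, X, y, cv=5)  # outer loop = unbiased estimate
print("nested-CV R^2:", np.mean(outer_scores))
```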
Also, very few models that I know of have no hyperparameters at all, and those that do not usually perform worse than the ones that do. Ridge regression is often better than plain linear regression, and neural networks whose depth (number of layers) is tuned tend to perform better than ones with a fixed, untuned architecture.
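If you want to check the ridge-vs-linear claim on your own data, a quick comparison could look like this (assuming scikit-learn; `RidgeCV` tunes its regularization strength `alpha` by internal cross-validation):

```python
# Compare plain linear regression against ridge regression with a tuned alpha.
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression, RidgeCV
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True)
for name, model in [("linear", LinearRegression()),
                    ("ridge (tuned alpha)", RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]))]:
    print(name, cross_val_score(model, X, y, cv=5).mean())
```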
Is the checking for overfitting separate from tuning the hyperparameters?
Yes. – Mohammad Athar Feb 28 '20 at 17:55