How best to obtain uncertainty depends on the model. If you use Bayesian optimization (there are good Python packages for it), for example, you get a covariance matrix along with your expected values, and so you inherently get a measure of uncertainty. In that case, you can make predictions about the underlying function of your data, and the (co-)variance provides the level of uncertainty, as shown by the width of the green bands around the line below:

The red points show where we have sample data; notice that there is none near e.g. X = 4 and X = -1, which is why the uncertainty there is high: the 95% confidence interval is very wide.
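
A minimal sketch of that idea, using scikit-learn's GaussianProcessRegressor (the model underlying most Bayesian-optimization packages); the sample data below is invented purely for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# A handful of noisy samples; note there is no data near X = 4 or X = -1.
X_train = np.array([[-3.0], [-2.0], [0.0], [1.0], [2.5], [5.0]])
y_train = np.sin(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_train, y_train)

# predict() can return the posterior standard deviation alongside the mean,
# which is the per-point uncertainty.
X_test = np.linspace(-5, 7, 200).reshape(-1, 1)
mean, std = gp.predict(X_test, return_std=True)

# Approximate 95% confidence interval: wide wherever there is no nearby data.
lower, upper = mean - 1.96 * std, mean + 1.96 * std
```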
If you use a standard deep neural network to perform classification, for example, there is no inherent measure of uncertainty. All you really have is your test accuracy, which tells you how well the model performs on held-out data. I cannot remember where it is explained, but I believe it is not actually feasible to interpret the class prediction values themselves as uncertainties.
For example, if you are predicting cat or dog for an image, and the two classes receive (normalized) logit values of [0.51, 0.49] respectively, you cannot assume this means very low certainty.
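
A small, hypothetical illustration of why that single output is only a point estimate: the logits below are invented, but they show that independently trained copies of the same network can produce different "probabilities" for the same image, and no single forward pass tells you how spread out those values are.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

# Invented raw logits for [cat, dog] from three independently trained copies
# of the same network, all looking at the same image.
logits_per_model = [np.array([0.04, 0.00]),
                    np.array([1.20, 0.10]),
                    np.array([-0.50, 0.60])]

probs = np.array([softmax(z) for z in logits_per_model])
print(probs[0])            # ~[0.51, 0.49] -- looks "uncertain" on its own
print(probs.mean(axis=0))  # average prediction across the copies
print(probs.std(axis=0))   # the spread is the information a single pass lacks
```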
model.predict_proba() will give you the expected probability, but for a given probability you might have a higher variance (in other words, a higher uncertainty). – Tanguy Jun 16 '21 at 21:31
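
One way to see the distinction made in that comment, sketched with a scikit-learn classifier (the dataset and model choice are just for illustration): predict_proba() returns an expected probability, while the spread of that probability across a bootstrap ensemble gives a rough measure of its uncertainty.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
x_new = X[:1]  # one point we want a prediction (and an uncertainty) for

rng = np.random.default_rng(0)
probs = []
for _ in range(100):
    idx = rng.integers(0, len(X), len(X))          # bootstrap resample
    clf = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    probs.append(clf.predict_proba(x_new)[0, 1])   # P(class 1) for this resample

probs = np.array(probs)
print(probs.mean())  # expected probability, comparable to predict_proba()
print(probs.std())   # spread across resamples: two points with the same mean
                     # probability can have very different uncertainties
```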