I'm training a classifier and I want to collect incorrect outputs for a human to double-check.
The output of the classifier is a vector of probabilities for the corresponding classes, for example [0.9, 0.05, 0.05].
This means the probability of the current object being class A is 0.9, while it is only 0.05 for class B and 0.05 for class C.
In this situation, I think the result has high confidence, since A's probability dominates B's and C's.
In another case, [0.4, 0.45, 0.15], the confidence should be low, since A's and B's probabilities are close.
What's the best formula to calculate this confidence?
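For illustration only, here is a minimal Python sketch of two common ways such a confidence score could be defined (these are assumptions on my part, not a specific answer): the margin between the top two probabilities, and one minus the normalized entropy of the probability vector. The function names are hypothetical.

```python
import numpy as np

def margin_confidence(probs):
    """Gap between the largest and second-largest probability (large gap = high confidence)."""
    top2 = np.sort(np.asarray(probs, dtype=float))[-2:]
    return top2[1] - top2[0]

def entropy_confidence(probs):
    """1 minus the entropy of the distribution, normalized so the result lies in [0, 1]."""
    probs = np.asarray(probs, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return 1.0 - entropy / np.log(len(probs))

print(margin_confidence([0.9, 0.05, 0.05]))   # 0.85  -> high confidence
print(margin_confidence([0.4, 0.45, 0.15]))   # 0.05  -> low confidence
print(entropy_confidence([0.9, 0.05, 0.05]))  # ~0.64
print(entropy_confidence([0.4, 0.45, 0.15]))  # ~0.08
```

Both measures agree with the intuition in the examples above: [0.9, 0.05, 0.05] scores high, [0.4, 0.45, 0.15] scores low.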
> 0.85 in the correct class is a confident prediction, anything between 0.3 and 0.85 is low confidence, and anything beneath 0.3 is wrong. – Recessive Mar 03 '20 at 05:46