I understand the need for explainability in AI. However, I am unsure what is actually meant by 'making AI explainable'.
What needs to be explainable? Is it the output of a model? Does it refer to the model itself? Does it refer to the user interface of the tool that the AI is a part of? Is it all of the above? If so, what is not included in Explainable AI?
What do we strive for when making AI 'explainable'? Are there commonly cited definitions of AI explainability that go beyond 'understanding how a decision was made'?