I have been using these three metrics independently for a while, but I am trying to figure out whether they are actually three separate things (with similar-looking definitions/names) or whether there is some underlying connection between them.
mAP - Mean Average Precision: the mean, over classes or queries, of Average Precision (AP), where AP summarizes the Precision-Recall curve as the average precision over all decision thresholds (i.e., the area under the curve) 1 2
MAP@k - Mean (over all queries/users) of AP@k, which is average precision computed over only the top-k predictions 3 4 5
Macro-Precision - Unweighted average of per-class precision 6 7
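To make the comparison concrete, here is a minimal toy sketch of how I understand each metric is computed (my own illustrative implementations, not taken from the linked sources; function names and signatures are my choice):

```python
def average_precision(y_true, scores):
    """AP for one binary problem: rank items by score, then average the
    precision at each rank where a relevant item appears (this equals the
    area under the Precision-Recall curve under the step interpolation)."""
    ranked = sorted(zip(scores, y_true), key=lambda t: -t[0])
    hits, precisions = 0, []
    for i, (_, relevant) in enumerate(ranked, start=1):
        if relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / max(hits, 1)

def ap_at_k(relevant, ranked_preds, k):
    """AP@k for one query: same idea, but only the top-k predictions count."""
    hits, precisions = 0, []
    for i, p in enumerate(ranked_preds[:k], start=1):
        if p in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / min(len(relevant), k) if relevant else 0.0

def map_at_k(queries, k):
    """MAP@k: mean of AP@k over all queries/users.
    `queries` is a list of (relevant_set, ranked_predictions) pairs."""
    return sum(ap_at_k(rel, preds, k) for rel, preds in queries) / len(queries)

def macro_precision(y_true, y_pred):
    """Macro-precision: unweighted mean of per-class precision."""
    classes = sorted(set(y_true) | set(y_pred))
    per_class = []
    for c in classes:
        true_for_pred_c = [t for t, p in zip(y_true, y_pred) if p == c]
        per_class.append(
            sum(1 for t in true_for_pred_c if t == c) / len(true_for_pred_c)
            if true_for_pred_c else 0.0)
    return sum(per_class) / len(classes)
```

The way I read it, the common thread is that the first two average *precision* over a ranking and then average again over classes/queries, while macro-precision averages a single precision value over classes with no ranking involved.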
It would be great to hear thoughts on how to reconcile these concepts. Please do check the links for more details on what I am referring to, since nomenclature varies from place to place.