In my opinion the essentials are:
Linear algebra. It comes up explicitly in, for example, neural nets, but it's also used implicitly all over the place, e.g. in solving regression (there's a short NumPy sketch of this further down).
Calculus, but focus on differentiation rather than integration. Be sure to cover partial differentiation and finding the minima and maxima of functions of several variables, because this is used in...
Optimization. Understand gradient descent at the very least, and what can go wrong, such as getting stuck in a local minimum, when the function you're optimizing isn't convex (or concave, if you're maximizing); a short sketch of this follows below.
Probability and statistics. You definitely need the basics: what random variables are, means and expectations, and measures of spread such as standard deviation. It's also useful to understand linear regression, and the different perspective provided by Bayesian approaches (a tiny Bayesian example is sketched below). The danger of going too far into statistics is that a large part of it is concerned with hypothesis testing, which isn't very helpful for machine learning.
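To make the linear algebra point concrete, here's a minimal sketch (assuming NumPy and some made-up synthetic data) of how ordinary least squares regression comes down to solving a small linear system, the least-squares normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])  # design matrix: intercept column + one feature
true_w = np.array([2.0, -3.0])                             # "true" weights for the synthetic data
y = X @ true_w + rng.normal(scale=0.1, size=100)           # noisy targets

# Solving the least-squares problem is pure linear algebra; lstsq is more
# numerically stable than forming the matrix inverse explicitly.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # should be close to [2.0, -3.0]
```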
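And here is a small, illustrative sketch of the optimization point: gradient descent on a non-convex function of one variable. The function, step size and starting points are arbitrary choices of mine, picked only to show that different starting points can land in different local minima.

```python
def f(x):
    return x**4 - 3 * x**2 + x          # a simple non-convex function with two local minima

def grad_f(x):
    return 4 * x**3 - 6 * x + 1         # its derivative, found with basic calculus

def gradient_descent(x0, lr=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)             # step downhill along the negative gradient
    return x

# Different starting points converge to different local minima of the same function;
# one of them is noticeably worse than the other.
for x0 in (-2.0, 2.0):
    x_final = gradient_descent(x0)
    print(f"start={x0:+.1f} -> x={x_final:+.3f}, f(x)={f(x_final):+.3f}")
```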
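Finally on the probability side, a tiny sketch of the basics and of the Bayesian perspective mentioned above. The numbers (a coin showing 7 heads in 10 flips, a uniform Beta(1, 1) prior) are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random variable, its expectation (mean) and its spread (standard deviation),
# estimated from samples.
samples = rng.normal(loc=5.0, scale=2.0, size=10_000)
print("sample mean:", samples.mean())       # close to the true mean, 5.0
print("sample std: ", samples.std(ddof=1))  # close to the true standard deviation, 2.0

# Bayesian perspective: start from a uniform Beta(1, 1) prior on a coin's
# probability of heads, observe 7 heads and 3 tails, and update to a
# Beta(1 + 7, 1 + 3) posterior: a whole distribution over the unknown
# parameter rather than a single point estimate.
heads, tails = 7, 3
alpha, beta = 1 + heads, 1 + tails
print("posterior mean of P(heads):", alpha / (alpha + beta))  # 8/12, about 0.667
```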
All in all, try to get the basics of these areas sorted, but don't go too far because:
It's perfectly possible to be a successful practitioner of machine learning without an in-depth understanding of the maths.
Once you have enough of the basics, it's better to take a "just in time" approach to learning, because the underlying maths is so wide and deep that you couldn't possibly be an expert in all of it even if you wanted to.
Finally, if you haven't already, take Andrew Ng's machine learning course on Coursera. It's a great overview of the area, and covers just enough maths to avoid glossing over important issues, but not so much that it's inaccessible if you're willing to put the effort in.