It is common to formulate optimization problems as minimization rather than maximization problems, and by multiplying the objective function by $-1$ you can transform one into the other:
$$\max_{w} \log{L(w)} \Leftrightarrow \min_{w} -\log{L(w)}$$
So to maximize the log-likelihood you minimize the negative log-likelihood. Basically it just comes down to conventions in optimization theory.
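As a minimal sketch of this equivalence (assuming NumPy/SciPy and a made-up Gaussian example with known unit variance), maximizing the log-likelihood of the mean $w$ is done in practice by handing the negative log-likelihood to a standard minimizer:

```
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=1000)   # hypothetical data

def neg_log_likelihood(w):
    # Gaussian negative log-likelihood with known unit variance, up to a constant
    return 0.5 * np.sum((x - w) ** 2)

result = minimize(neg_log_likelihood, x0=np.array([0.0]))
print(result.x, x.mean())  # the minimizer recovers the MLE, i.e. the sample mean
```

The minimizer knows nothing about likelihoods; flipping the sign is all it takes to use it for maximum-likelihood estimation.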
Moreover, since $L(w) \in [0,1]$, its logarithm $\log{L(w)}$ will be less than or equal to $0$ (note that $\log{0}$ is not defined). Accordingly, $\max_{w} \log{L(w)}$ means maximizing a negative number, which is, at least to me, less intuitive than minimizing a positive number.
The more interesting part is actually the log-transformation itself, which increases the numerical stability of your calculations: it turns the product of many (typically small) likelihood terms into a sum of their logarithms and thereby reduces the risk of underflow.
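To see the underflow problem concretely, here is a quick illustration (assuming NumPy; the probabilities are made up): multiplying a thousand small probabilities underflows to zero in double precision, while the sum of their logs remains perfectly representable:

```
import numpy as np

p = np.full(1000, 0.01)    # a thousand hypothetical probabilities
print(np.prod(p))          # 0.0 -- the product underflows in float64
print(np.sum(np.log(p)))   # about -4605.17 -- the log-likelihood is fine
```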