It doesn't matter. All that changes is the sign of $\lambda^*$, where $(x^*,y^*,\lambda^*)$ is the critical point. Dealing with maximization doesn't change it either. You can see this because regardless of how you formulate the method, you still have
$$\nabla L(x,y,\lambda) = \begin{bmatrix} f_x(x,y) \pm \lambda g_x(x,y) \\ f_y(x,y) \pm \lambda g_y(x,y) \\ \pm (g(x,y)-c) \end{bmatrix} = 0.$$
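For a quick concrete check (an example of my own choosing, not part of the original question), take $f(x,y)=x+y$ with the constraint $x^2+y^2=1$, i.e. $g(x,y)=x^2+y^2$ and $c=1$:
$$\begin{aligned}
L_- = f-\lambda(g-c):&\quad 1-2\lambda x=0,\ \ 1-2\lambda y=0,\ \ x^2+y^2=1\ \ \Longrightarrow\ \ (x^*,y^*,\lambda^*)=\pm\left(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}}\right),\\
L_+ = f+\lambda(g-c):&\quad 1+2\lambda x=0,\ \ 1+2\lambda y=0,\ \ x^2+y^2=1\ \ \Longrightarrow\ \ (x^*,y^*,\lambda^*)=\pm\left(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}},-\tfrac{1}{\sqrt{2}}\right).
\end{aligned}$$
Both conventions produce the same critical points $(x^*,y^*)=\pm\left(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}}\right)$; only the sign of $\lambda^*$ flips.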
What follows is more advanced material. It isn't really needed for Calc III, but it is worth knowing if you're interested in going deeper into optimization.
There are advantages to picking one convention over the other. For instance, consider the case where $g$ is convex (and not constant) and we are minimizing $f$. Then $A=\{ x : g(x) \leq c \}$ is convex (hence connected), with $\partial A=\{ x : g(x)=c \}$. Provided the constraint is active, i.e. the minimum of $f$ over $A$ is attained on $\partial A$, that minimum occurs at the minimum of $L(x,\lambda)=f(x)-\lambda (g(x)-c)$ over $A \times [0,\infty)$. We can see this because for $(x,\lambda) \in A \times [0,\infty)$ we have $g(x)-c \leq 0$, so the term $-\lambda (g(x)-c)$ is nonnegative: it is a penalty for being inside $A$ rather than on $\partial A$, since it makes $L$ bigger and we are minimizing $L$.
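To make the penalty picture concrete, here is how it plays out in the same toy example as above (again my own choice of $f$, $g$, $c$): with $f(x,y)=x+y$, $g(x,y)=x^2+y^2$, $c=1$, the set $A$ is the closed unit disk, and
$$\begin{aligned}
&\text{at the interior point }(0,0):\quad g(0,0)-c=-1,\quad L(0,0,\lambda)=f(0,0)+\lambda\ \geq\ f(0,0)\quad\text{for all }\lambda\geq 0,\\
&\min_{(x,y)\in A,\ \lambda\geq 0} L(x,y,\lambda)\;=\;\min_{(x,y)\in A}\,(x+y)\;=\;-\sqrt{2},\quad\text{attained at }\left(-\tfrac{1}{\sqrt{2}},-\tfrac{1}{\sqrt{2}}\right)\in\partial A.
\end{aligned}$$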
The opposite convention is natural for a maximization problem with a concave constraint function $g$: the maximum of $f$ on $\partial A$ will occur at the maximum of $L(x,\lambda)=f(x)+\lambda (g(x)-c)$ over $A \times [0,\infty)$, since now the term $+\lambda (g(x)-c)$ is nonpositive inside $A$, which makes $L$ smaller while we are maximizing $L$.
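In the same toy example, this mirrored convention works out to
$$\max_{(x,y)\in A,\ \lambda\geq 0}\bigl(x+y+\lambda(x^2+y^2-1)\bigr)\;=\;\max_{(x,y)\in A}\,(x+y)\;=\;\sqrt{2},\quad\text{attained at }\left(\tfrac{1}{\sqrt{2}},\tfrac{1}{\sqrt{2}}\right)\in\partial A,$$
since the term $\lambda(x^2+y^2-1)$ is $\leq 0$ on the disk and is largest (namely $0$) either at $\lambda=0$ or on the boundary.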
This perspective is useful; it is the origin of the class of numerical optimization methods called interior-point methods. It also suggests various interpretations of the Lagrange multipliers themselves, such as the "shadow prices" of economics.
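As a quick illustration of the shadow-price reading, using the same toy example and the $L=f-\lambda(g-c)$ convention, the multiplier measures how fast the optimal value changes as the constraint level $c$ moves:
$$\max_{x^2+y^2=c}\,(x+y)=\sqrt{2c},\qquad \left.\frac{d}{dc}\sqrt{2c}\,\right|_{c=1}=\frac{1}{\sqrt{2}}=\lambda^*,$$
which is exactly the $\lambda^*$ found at the maximizer in the example above.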