I know this question was asked some time ago, but I just stumbled upon it myself wondering the same thing, so I will post what I have found out in case it is helpful to someone else.
I think the answer you seek is in Boyd and Vandenberghe's Convex Optimization (freely available on Boyd's website: http://stanford.edu/~boyd/cvxbook/) in sections 5.1-5.5 (especially, see subsection 5.5.5). Below are my key takeaways from that text.
Let
\begin{align}
\min \;\;\;\; &f_0(x)\\
\text{s.t.} \;\;\;\; &f_i(x) \leq 0 \;\;\;\; i = 1,\ldots,m \tag{1} \label{eq:1}\\
&h_i(x) = 0 \;\;\;\; i = 1,\ldots,p,
\end{align}
where $x \in \mathbb{R}^n$, be the optimization problem (not necessarily convex) under consideration. The Lagrangian of this problem is
$$
L(x, \lambda, \nu) = f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x).
$$
The Lagrangian dual is
$$
g(\lambda,\nu) = \inf_{x \in \mathcal{D}} \left( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right),
$$
where $\mathcal{D} := \left( \bigcap_{i=0}^m \operatorname{dom} f_i \right) \cap \left( \bigcap_{i=1}^p \operatorname{dom} h_i \right)$. The Lagrangian dual problem, in turn, is
\begin{align}
\max \;\;\;\; &g(\lambda,\nu)\\
\text{s.t.} \;\;\;\; &\lambda \succcurlyeq 0.
\end{align}
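To make the dual function concrete, here is a small numerical sketch using a toy problem of my own (not from the book): minimize $f_0(x) = x_1^2 + x_2^2$ subject to $f_1(x) = 1 - x_1 \leq 0$. Minimizing the Lagrangian over $x$ by hand gives $g(\lambda) = \lambda - \lambda^2/4$, and we can check this against a numerical minimization with SciPy:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (my own example, not from the book):
#   minimize f0(x) = x1^2 + x2^2  subject to  f1(x) = 1 - x1 <= 0.
# Lagrangian: L(x, lam) = x1^2 + x2^2 + lam * (1 - x1).

def g(lam):
    """Dual function: numerically minimize the Lagrangian over x for fixed lam."""
    L = lambda x: x[0]**2 + x[1]**2 + lam * (1.0 - x[0])
    return minimize(L, x0=np.zeros(2)).fun

# By hand: the minimizer is x = (lam/2, 0), giving g(lam) = lam - lam**2 / 4.
for lam in [0.0, 1.0, 2.0, 3.0]:
    assert abs(g(lam) - (lam - lam**2 / 4)) < 1e-6
```

For a nonconvex problem, this inner minimization could get stuck in a local minimum, so treat this as a sanity check for a nice problem rather than a general recipe.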
Note that
\begin{align}
\sup_{\lambda \succcurlyeq 0, \; \nu} L(x,\lambda,\nu)
&= \sup_{\lambda \succcurlyeq 0, \; \nu} \left( f_0(x) + \sum_{i=1}^m \lambda_i f_i(x) + \sum_{i=1}^p \nu_i h_i(x) \right) \\
&=
\begin{cases}
f_0(x) & \text{if } f_i(x) \leq 0, \; i = 1,\ldots,m, \text{ and } h_i(x) = 0, \; i=1,\ldots,p \\
\infty & \text{otherwise}.
\end{cases}
\end{align}
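A quick numerical sanity check of this case distinction, again on a toy problem of my own (minimize $x_1^2 + x_2^2$ subject to $1 - x_1 \leq 0$): at a feasible point the supremum over $\lambda \succcurlyeq 0$ is attained at $\lambda = 0$ and equals $f_0(x)$, while at an infeasible point $L$ grows without bound in $\lambda$.

```python
import numpy as np

# Toy problem (my own example): minimize f0(x) = x1^2 + x2^2
# subject to f1(x) = 1 - x1 <= 0 (no equality constraints).
f0 = lambda x: x[0]**2 + x[1]**2
f1 = lambda x: 1.0 - x[0]
L = lambda x, lam: f0(x) + lam * f1(x)

lams = np.linspace(0.0, 50.0, 501)

# Feasible point: f1(x) = -1 <= 0, so sup_{lam >= 0} L(x, lam) is attained
# at lam = 0 and equals f0(x).
x_feas = np.array([2.0, 0.0])
assert max(L(x_feas, lam) for lam in lams) == f0(x_feas)

# Infeasible point: f1(x) = 1 > 0, so L(x, lam) is unbounded above in lam.
x_infeas = np.array([0.0, 0.0])
assert L(x_infeas, 1e9) > 1e8
```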
This means that
$$
p^\star := \inf_x \sup_{\lambda \succcurlyeq 0, \; \nu} L(x, \lambda, \nu)
$$
is the optimal value of the primal problem \eqref{eq:1}. Note that the infimum is taken over all $x$ without restriction, since the constraints of \eqref{eq:1} are already enforced by the inner supremum. Moreover, by definition,
$$
d^\star := \sup_{\lambda \succcurlyeq 0, \; \nu} \inf_x L(x,\lambda,\nu)
$$
is the optimal value of the dual problem.
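For a toy convex problem of my own (minimize $x_1^2 + x_2^2$ subject to $1 - x_1 \leq 0$, with primal solution $x^\star = (1, 0)$ and $p^\star = 1$, worked out by hand), we can compute $d^\star$ numerically and observe that it matches $p^\star$. This is only a sketch using SciPy's generic solvers, not a general-purpose method:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

# Toy convex problem (my own example): minimize f0(x) = x1^2 + x2^2
# subject to 1 - x1 <= 0. Primal solution by hand: x* = (1, 0), p* = 1.

def g(lam):
    """Dual function g(lam) = inf_x L(x, lam), computed numerically."""
    L = lambda x: x[0]**2 + x[1]**2 + lam * (1.0 - x[0])
    return minimize(L, x0=np.zeros(2)).fun

# d* = sup_{lam >= 0} g(lam): maximize g by minimizing -g on a bounded interval.
res = minimize_scalar(lambda lam: -g(lam), bounds=(0.0, 10.0), method='bounded')
d_star, p_star = -res.fun, 1.0

# Weak duality always guarantees d* <= p*; here the problem is convex and
# strictly feasible (Slater's condition), so strong duality gives d* = p*.
assert abs(d_star - p_star) < 1e-4
```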
Let us now assume that strong duality holds, so that $d^\star = p^\star$. Furthermore, assume that both the primal and dual optimal values are attained, i.e., that there exist a feasible $x^\star$ and a feasible $(\lambda^\star,\nu^\star)$ such that $f_0(x^\star) = p^\star$ and $g(\lambda^\star,\nu^\star) = d^\star$. Then
\begin{align}
f_0(x^\star)
&= g(\lambda^\star,\nu^\star) \\
&= \inf_x \left( f_0(x) + \sum_{i=1}^m \lambda_i^\star f_i(x) + \sum_{i=1}^p \nu_i^\star h_i(x) \right) \\
&\leq f_0(x^\star) + \sum_{i=1}^m \lambda_i^\star f_i(x^\star) + \sum_{i=1}^p \nu_i^\star h_i(x^\star) \\
&\leq f_0(x^\star)
\end{align}
(the first inequality holds because the infimum of a function is at most its value at $x = x^\star$; the second because $\lambda_i^\star \geq 0$ and $f_i(x^\star) \leq 0$ for each $i$, while $h_i(x^\star) = 0$; see page 242 in Boyd and Vandenberghe's book for details). Since the chain starts and ends with $f_0(x^\star)$, the inequalities are in fact equalities. From this, we can conclude that $x^\star$ solves the problem $\inf_x L(x,\lambda^\star,\nu^\star)$. Finally, assume that $\inf_x L(x,\lambda^\star,\nu^\star)$ has a unique solution (which happens, e.g., if $L(x,\lambda^\star,\nu^\star)$ is a strictly convex function of $x$). Then $x^\star$ is that solution, and therefore
$$
x^\star = \arg\min_x L(x,\lambda^\star,\nu^\star).
$$
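As a final sanity check, here is a sketch of this recovery on a toy problem of my own: minimize $x_1^2 + x_2^2$ subject to $1 - x_1 \leq 0$, whose primal solution is $x^\star = (1, 0)$ and whose dual optimum is $\lambda^\star = 2$ (both worked out by hand). Since $L(x, \lambda^\star)$ is strictly convex in $x$, minimizing it recovers $x^\star$:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem (my own example): minimize f0(x) = x1^2 + x2^2 s.t. 1 - x1 <= 0.
# By hand: primal solution x* = (1, 0), dual solution lam* = 2.
lam_star = 2.0
L_star = lambda x: x[0]**2 + x[1]**2 + lam_star * (1.0 - x[0])

# L(x, lam*) is strictly convex in x, so its minimizer is unique and,
# by the argument above, must coincide with x*.
x_rec = minimize(L_star, x0=np.zeros(2)).x
assert np.allclose(x_rec, [1.0, 0.0], atol=1e-4)
```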