In rectangular coordinates, one can define the divergence of a vector field $\vec{V}$ as the following limit
$$ \text{Div } V = \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^3} \oint_{\partial C_{\epsilon}(p)} \vec{V} \cdot d \vec{A} \: \: \: \: (1), $$
where $C_{\epsilon}(p)$ is the cube of side length $\epsilon$ with opposite corners at $p$ and $p+\epsilon(1,1,1)$.
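As a sanity check (assuming $\vec{V}$ is $C^1$), a first-order Taylor expansion of the flux through the three pairs of opposite faces of the cube shows that $(1)$ recovers the usual coordinate formula:
$$ \lim_{\epsilon \rightarrow 0} \frac{1}{\epsilon^3} \oint_{\partial C_{\epsilon}(p)} \vec{V} \cdot d \vec{A} = \partial_x V^x + \partial_y V^y + \partial_z V^z. $$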
In the setting of Riemannian manifolds we have the formula
$$ \text{Div } V = \frac{1}{\sqrt{|g|}} \partial_i \left( \sqrt{|g|} V^i \right). \: \: \: \: (2)$$
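For instance, in spherical coordinates $(r, \theta, \phi)$ on $\mathbb{R}^3$ (with $V^i$ the components in the coordinate basis, not the orthonormal one) we have $\sqrt{|g|} = r^2 \sin \theta$, and $(2)$ reduces to
$$ \text{Div } V = \frac{1}{r^2} \partial_r \left( r^2 V^r \right) + \frac{1}{\sin \theta} \partial_\theta \left( \sin \theta \, V^\theta \right) + \partial_\phi V^\phi. $$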
By the divergence theorem (for manifolds) one also obtains a flux-limit characterization of $\text{Div } V$ analogous to $(1)$. My question is how to deduce $(2)$ from such a definition.
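Concretely, the definition I have in mind is something like
$$ \text{Div } V (p) = \lim_{\epsilon \rightarrow 0} \frac{1}{\text{Vol}_g \left( B_\epsilon(p) \right)} \oint_{\partial B_\epsilon(p)} \langle V, \nu \rangle \, dA_g, $$
where $B_\epsilon(p)$ is a small geodesic ball and $\nu$ is the outward unit normal (I expect any reasonable family of shrinking regions would do just as well).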
I'd like to have an answer which doesn't use differential forms or Lie derivatives.