Fix a point $(a,b)\in \Bbb{R}^n\times\Bbb{R}^n$ and let $(h,k)\in\Bbb{R}^n\times\Bbb{R}^n$. Then, by bilinearity of the inner product,
\begin{align}
g\left((a,b)+(h,k)\right)&=g(a+h,b+k)\\
&=g(a,b)+g(a,k)+g(h,b)+g(h,k)
\end{align}
Define $L:\Bbb{R}^n\times\Bbb{R}^n\to\Bbb{R}$ as $L(h,k):=g(a,k)+g(h,b)$. Then, $L$ is a linear transformation and the above equation shows that
\begin{align}
g((a,b)+(h,k))-g(a,b)=L(h,k)+g(h,k)
\end{align}
We can show the remainder term $g(h,k)$ is small by using the Cauchy-Schwarz inequality (here $\|(h,k)\|$ denotes, say, the Euclidean norm $\sqrt{\|h\|^2+\|k\|^2}$ on the product, so that $\|h\|,\|k\|\leq \|(h,k)\|$):
\begin{align}
\frac{|g(h,k)|}{\|(h,k)\|}\leq \frac{\|h\|\cdot\|k\|}{\|(h,k)\|} = \frac{\|h\|}{\|(h,k)\|}\cdot \|k\|\leq 1\cdot \|k\|=\|k\| \leq \|(h,k)\|
\end{align}
and clearly the RHS (and thus the LHS) approaches $0$ as $(h,k)\to (0,0)$. Thus, the inner product $g$ is differentiable at $(a,b)$ with $Dg_{(a,b)}=L : (h,k)\mapsto g(a,k)+g(h,b)$.
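If a numerical check is reassuring, here is a minimal numpy sketch (the vectors, step sizes, and variable names are my own choices, purely for illustration) confirming that the remainder is $o(\|(h,k)\|)$ along a shrinking family of increments:
```python
import numpy as np

# A quick numerical sanity check (my own, not part of the argument) of
# Dg_{(a,b)}(h,k) = <a,k> + <h,b>: the rescaled remainder should shrink like t.
rng = np.random.default_rng(0)
n = 5
a, b = rng.standard_normal(n), rng.standard_normal(n)
h, k = rng.standard_normal(n), rng.standard_normal(n)

for t in (1e-1, 1e-2, 1e-3, 1e-4):
    increment = np.dot(a + t*h, b + t*k) - np.dot(a, b)     # g((a,b)+(th,tk)) - g(a,b)
    linear = np.dot(a, t*k) + np.dot(t*h, b)                 # L(th, tk)
    size = np.sqrt(np.dot(t*h, t*h) + np.dot(t*k, t*k))      # ||(th, tk)||
    print(t, abs(increment - linear) / size)                 # ~ O(t)
```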
If you were to represent the linear transformation $Dg_{(a,b)}=L$ as a $1\times (2n)$ matrix relative to the standard basis $\{(e_1,0),\dots, (e_n,0),(0,e_1),\dots, (0,e_n)\}$ on $\Bbb{R}^n\times\Bbb{R}^n$, where $e_i=(0,\dots, 1,\dots, 0)\in\Bbb{R}^n$ has the $1$ in the $i$-th slot, and the basis $\{1\}$ on $\Bbb{R}$, then we get
\begin{align}
[L]&=
\begin{pmatrix}
L(e_1,0) & \cdots &L(e_n,0)&L(0,e_1)&\cdots &L(0,e_n)
\end{pmatrix}\\
&=
\begin{pmatrix}
b_1 & \cdots & b_n & a_1 & \cdots &a_n
\end{pmatrix}.
\end{align}
Here $L(e_i,0)=g(a,0)+g(e_i,b)=b_i$ and $L(0,e_i)=g(a,e_i)+g(0,b)=a_i$.
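To double-check the ordering of the entries, one can approximate the partial derivatives numerically; the following is only a rough finite-difference sketch (the tolerances and names are mine):
```python
import numpy as np

# Sketch (illustrative only): approximate the 1 x 2n Jacobian of
# g(x, y) = <x, y> at (a, b) by central differences and compare it with
# the row vector (b_1, ..., b_n, a_1, ..., a_n).
rng = np.random.default_rng(1)
n, eps = 4, 1e-6
a, b = rng.standard_normal(n), rng.standard_normal(n)

def g(z):                       # z is (x, y) packed into a single 2n-vector
    return np.dot(z[:n], z[n:])

z0 = np.concatenate([a, b])
jac = np.array([(g(z0 + eps*e) - g(z0 - eps*e)) / (2*eps) for e in np.eye(2*n)])
print(np.allclose(jac, np.concatenate([b, a])))  # expected: True
```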
Generalization.
As a side remark, the calculation of the derivative of $g$ is not special to the inner product on $\Bbb{R}^n$. More generally, if you take any finite-dimensional real vector spaces $V,W,X$ and a bilinear (not necessarily symmetric) mapping $g:V\times W\to X$, then an almost identical calculation shows that $g$ is differentiable at every point $(a,b)\in V\times W$ and that for all $(h,k)\in V\times W$, $Dg_{(a,b)}(h,k)=g(a,k)+g(h,b)$. The only subtlety in the more general setting is showing that the remainder term $g(h,k)$ is small; for this, one uses the fact that every bilinear map between finite-dimensional normed spaces is bounded, in the sense that there exists a $C>0$ such that for all $(h,k)\in V\times W$ we have $\|g(h,k)\|_X\leq C\|h\|_V\cdot \|k\|_W$.
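To see the bilinear statement in action on a map that is neither symmetric nor scalar-valued, here is a small sketch using the outer product $g(v,w)=vw^{T}$ as the bilinear map (this particular choice of $g$ is mine, purely for illustration):
```python
import numpy as np

# Illustration of the bilinear case with V = R^n, W = R^m, X = R^{n x m} and
# g(v, w) = v w^T (outer product); the example itself is my own choice.
rng = np.random.default_rng(2)
n, m = 3, 4
a, h = rng.standard_normal(n), rng.standard_normal(n)
b, k = rng.standard_normal(m), rng.standard_normal(m)

g = lambda v, w: np.outer(v, w)
for t in (1e-1, 1e-2, 1e-3):
    remainder = g(a + t*h, b + t*k) - g(a, b) - (g(a, t*k) + g(t*h, b))
    size = np.sqrt(np.dot(t*h, t*h) + np.dot(t*k, t*k))
    print(t, np.linalg.norm(remainder) / size)  # ~ O(t)
```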
This actually works more generally for multilinear maps:
If $V_1,\dots, V_n,W$ are finite-dimensional real/complex normed vector spaces and $g:V_1\times \cdots\times V_n\to W$ is a multilinear map, then $g$ is differentiable at every point $a=(a_1,\dots, a_n)\in V_1\times \cdots \times V_n$, and for every $h=(h_1,\dots, h_n)\in V_1\times\cdots \times V_n$, we have
\begin{align}
Dg_a(h)&=\sum_{i=1}^ng(a_1,\dots, a_{i-1}, h_i, a_{i+1},\dots, a_n)
\end{align}
The intuitive way of thinking about this theorem is that multilinear maps are like "products", so $g(x_1,\dots, x_n)$ means the "product with respect to $g$" of the elements $x_1\in V_1,\dots, x_n\in V_n$. The derivative of a product is then very simple: differentiate each factor one at a time, keeping all the others fixed, so symbolically:
\begin{align}
D(x_1\cdots x_n)&= \sum_{i=1}^nx_1\cdots Dx_i\cdots x_n\tag{$*$}
\end{align}
where, as I mentioned, the "product" $\cdot$ is really taken with respect to $g$, and here $Dx_i$ really means the derivative of the canonical projection function $x_i:V_1\times\cdots \times V_n\to V_i$, so $(*)$ written more properly is
\begin{align}
Dg&=\sum_{i=1}^ng(x_1,\dots, Dx_i, \dots, x_n)
\end{align}
which is a suppressed way of saying that if we calculate derivatives at a point $a=(a_1,\dots, a_n)$ and apply it to the displacement $h=(h_1,\dots, h_n)$, then
\begin{align}
Dg_a(h)&=\sum_{i=1}^ng(x_1(a),\dots, (Dx_i)_a(h),\dots, x_n(a))\\
&=\sum_{i=1}^ng(a_1,\dots, h_i, \dots, a_n),
\end{align}
which is of course what I wrote in the highlighted block above. Hopefully this multitude of ways of presenting the product rule (and the way to interpret the extremely condensed notation) is helpful.
Examples
As some examples, suppose $V=M_{n\times n}(\Bbb{R})$ and $g:V\times V\to V$ is matrix multiplication: $g(A,B)=A\cdot B$. This is a bilinear operation, so for any $(A,B),(h,k)\in V\times V$, we have
\begin{align}
Dg_{(A,B)}(h,k)&=g(A,k)+g(h,B)=A\cdot k+ h\cdot B
\end{align}
Notice that here the ordering of the factors matters, since matrix multiplication is not commutative. The condensed way of writing this equation is
\begin{align}
D(A\cdot B)&=A\cdot (DB) + (DA)\cdot B
\end{align}
(or sometimes people use a lower case $d$, so $d(AB)=A\cdot dB + (dA)\cdot B$).
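If a numerical check of this formula is helpful, here is a quick sketch in the same spirit as before (the random matrices and names are my own choices, only the formula is being tested):
```python
import numpy as np

# Sanity check of Dg_{(A,B)}(h,k) = A k + h B for g(A, B) = A B.
rng = np.random.default_rng(3)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
h, k = rng.standard_normal((n, n)), rng.standard_normal((n, n))

for t in (1e-1, 1e-2, 1e-3):
    remainder = (A + t*h) @ (B + t*k) - A @ B - (A @ (t*k) + (t*h) @ B)
    print(t, np.linalg.norm(remainder) / t)  # equals t * ||h @ k||, so it shrinks like O(t)
```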
For another example, let $V=\Bbb{R}^n$ and consider the determinant as a multilinear function of the columns of a matrix $\det:V^n\to\Bbb{R}$. Then,
\begin{align}
D(\det)_A(H)&=\sum_{i=1}^n\det(a_1,\dots, h_i,\dots, a_n)
\end{align}
where $a_1,\dots, a_n$ and $h_1,\dots, h_n$ denote the columns of $A$ and $H$ respectively, i.e. $A=(a_1 \quad \cdots \quad a_n)$ and $H=(h_1\quad \cdots \quad h_n)$.
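As with the previous examples, this can be verified numerically; the following is only a rough sketch (random $A$ and $H$ of my own choosing), comparing the sum of column-replaced determinants with a difference quotient of $\det$ along the direction $H$:
```python
import numpy as np

# Sketch: compare D(det)_A(H) = sum_i det(a_1, ..., h_i, ..., a_n) with a
# symmetric difference quotient of det along the direction H.
rng = np.random.default_rng(4)
n = 4
A, H = rng.standard_normal((n, n)), rng.standard_normal((n, n))

column_sum = 0.0
for i in range(n):
    Ai = A.copy()
    Ai[:, i] = H[:, i]              # replace the i-th column of A by that of H
    column_sum += np.linalg.det(Ai)

t = 1e-6
difference_quotient = (np.linalg.det(A + t*H) - np.linalg.det(A - t*H)) / (2*t)
print(column_sum, difference_quotient)  # should agree to several digits
```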