
I realize that derivatives with respect to symmetric matrices have been covered well in prior questions. Still, I find that the numerical results do not agree with what I understand to be the theory.

Minka states (page 4) that for a symmetric matrix $\Sigma$ we have the following result

$$\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I),$$ stemming from the constraint that $d\Sigma$ must be symmetric. This is in contrast to the result $$\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1}$$ if $\Sigma$ were not constrained to be symmetric.

I checked this using finite differences, and my result agrees with the latter, $\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1}$, rather than the former, $\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I)$. Also, this paper by Dwyer (1967, see Table 2) gives the answer as $\Sigma^{-1}$ without mentioning any correction for symmetry. Why, then, is the former correction for the symmetry constraint given? What is going on here?

Here is the code and output (notice that the off-diagonal entries do not match the symmetry-corrected formula):

> library(numDeriv)
> 
> A <- matrix(c(4,.4,.4,2), 2, 2)
> q <- nrow(A)
> 
> f <- function(A) log(det(A))
> 
> matrix(grad(f, A), q, q)
            [,1]        [,2]
[1,]  0.25510204 -0.05102041
[2,] -0.05102041  0.51020408
> 2*solve(A)-diag(diag(solve(A)))
           [,1]       [,2]
[1,]  0.2551020 -0.1020408
[2,] -0.1020408  0.5102041
> solve(A)
            [,1]        [,2]
[1,]  0.25510204 -0.05102041
[2,] -0.05102041  0.51020408

1 Answer


Consider $B$, a symmetric matrix of dimension $2$; $B=\begin{pmatrix}x&y\\y&z\end{pmatrix}\in S_2$ may be identified with the row vector $[x,y,z]$.

Now let $f:A\in S_2\cap GL_2 \mapsto \log(\det(A))$. Its derivative is $Df_A:H\in S_2\mapsto \operatorname{tr}(HA^{-1})=\langle H,A^{-1}\rangle$, where $\langle\cdot,\cdot\rangle$ is the standard scalar product on matrices. Then $H=\begin{pmatrix}h&k\\k&l\end{pmatrix}$ is identified with $[h,k,l]$ and $A^{-1}=\begin{pmatrix}a&b\\b&c\end{pmatrix}$ with $[a,b,c]$.
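Spelling out the trace with these $2\times 2$ blocks (an intermediate step) shows where the factor of $2$ on $k$ comes from:

$$\operatorname{tr}(HA^{-1})=\operatorname{tr}\begin{pmatrix}ha+kb&hb+kc\\ka+lb&kb+lc\end{pmatrix}=(ha+kb)+(kb+lc)=ha+2kb+lc.$$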

Hence $Df_A(H)=ha+2kb+lc=[a,2b,c][h,k,l]^T$.

Finally $\nabla f(A)=[a,2b,c]$, which is identified with $\begin{pmatrix}a&2b\\2b&c\end{pmatrix}=2A^{-1}-A^{-1}\circ I$.

i) Note that the derivative is a linear map and the gradient is its associated matrix; more precisely, it is the transpose of that associated matrix (a vector).

ii) About your example, with $A=\begin{pmatrix}p&q\\q&r\end{pmatrix}$: the call matrix(grad(f, A), q, q) does not see that $a_{1,2}=a_{2,1}$ and treats the problem as having $4$ variables, whereas there are only three.

In fact, the derivatives with respect to $p,q,r$ are $0.2551020, -0.1020408, 0.5102041$. More generally, you must multiply by $2$ the derivatives with respect to $a_{i,j}$ when $i\not= j$ and keep the diagonal ones unchanged, as the sketch below illustrates.
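For instance, a minimal check along these lines, reusing the matrix from the question and numDeriv (f_sym is a helper name introduced here, not from the original posts):

library(numDeriv)

A <- matrix(c(4, .4, .4, 2), 2, 2)

# log-determinant as a function of the three free entries [p, q, r]
# of a symmetric 2x2 matrix
f_sym <- function(v) {
  S <- matrix(c(v[1], v[2], v[2], v[3]), 2, 2)
  log(det(S))
}

grad(f_sym, c(A[1, 1], A[1, 2], A[2, 2]))
# should be close to 0.2551020 -0.1020408 0.5102041,
# i.e. the entries of 2*solve(A) - diag(diag(solve(A)))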

  • Thank you for the answer. However, I am not following the last 4 lines. In particular, where is 2lb coming from? why not also 2kc? In the last line why do you switch to ∇(f) rather than Df? Sorry I think I am missing something. Also, why is the finite difference result different (see original question)? – jds Nov 21 '18 at 15:11
  • Yes, I switched $k$ and $l$. For the other questions, cf. my edit. –  Nov 21 '18 at 18:49
  • It can be identified with the vector [x, y, z], or just as well [x, 3y, z], but this identification stretches the off-diagonal directions. This is not necessarily disastrous in practice, as the zeroes are in the same location as in the real derivative. Just symmetrizing the normal matrix derivative is the right thing. See https://arxiv.org/abs/1911.06491 for the history and analysis of what went wrong and https://saturdaygenfo.github.io/posts/symmetric-gradients/ for why it matters in practice. – wnoise Jan 19 '22 at 16:27