I realize that derivatives with respect to symmetric matrices have been well covered in prior questions. Still, I find that the numerical results do not agree with my understanding of the theory.
Minka states (page 4) that for a symmetric matrix $\Sigma$ we have the following result:
$$\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I),$$ stemming from the constraint that $d\Sigma$ must be symmetric. This contrasts with $$\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1},$$ which would hold if $\Sigma$ were not constrained to be symmetric.
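For reference, my understanding is that both expressions trace back to the same differential identity,
$$d\log|\Sigma| = \operatorname{tr}\!\left(\Sigma^{-1}\, d\Sigma\right),$$
and the disagreement seems to be about how a gradient is read off this trace when $d\Sigma$ is restricted to symmetric perturbations.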
I checked this using finite differences, and my result agrees with the latter, $\frac{d\log|\Sigma|}{d\Sigma} = \Sigma^{-1}$, rather than the former, $\frac{d\log|\Sigma|}{d\Sigma} = 2\Sigma^{-1} - (\Sigma^{-1}\circ I)$. Moreover, this paper by Dwyer (1967, see Table 2) gives the answer as $\Sigma^{-1}$ without mentioning any correction for symmetry. Why, then, is the correction for the symmetry constraint given? What is going on here?
Here is the code and output (notice that the off-diagonal entries of the symmetry-corrected formula do not match the numerical gradient):
> library(numDeriv)
>
> A <- matrix(c(4, .4, .4, 2), 2, 2)
> q <- nrow(A)
>
> f <- function(A) log(det(A))
>
> # numerical gradient, treating all q^2 entries of A as free
> matrix(grad(f, A), q, q)
            [,1]        [,2]
[1,]  0.25510204 -0.05102041
[2,] -0.05102041  0.51020408
> # Minka's symmetry-corrected formula
> 2*solve(A) - diag(diag(solve(A)))
           [,1]       [,2]
[1,]  0.2551020 -0.1020408
[2,] -0.1020408  0.5102041
> # the unconstrained result
> solve(A)
            [,1]        [,2]
[1,]  0.25510204 -0.05102041
[2,] -0.05102041  0.51020408
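For completeness, here is a sketch of the symmetry-aware check I would try next, parametrizing the matrix by its lower triangle so that a single parameter moves both $\Sigma_{ij}$ and $\Sigma_{ji}$ at once (sym_from_lower is my own helper, and this is only a sketch of the idea, not code from Minka or Dwyer):

library(numDeriv)

# Rebuild a symmetric matrix from its lower triangle (diagonal included)
sym_from_lower <- function(theta, q) {
  S <- matrix(0, q, q)
  S[lower.tri(S, diag = TRUE)] <- theta
  S + t(S) - diag(diag(S))
}

A <- matrix(c(4, .4, .4, 2), 2, 2)
q <- nrow(A)
theta <- A[lower.tri(A, diag = TRUE)]  # free parameters: 4, 0.4, 2

g <- grad(function(th) log(det(sym_from_lower(th, q))), theta)

# Put the gradient back in matrix form; each off-diagonal parameter
# now picks up contributions from both (i,j) and (j,i)
G <- matrix(0, q, q)
G[lower.tri(G, diag = TRUE)] <- g
G

If the symmetry correction refers to this kind of constrained perturbation, I would expect the off-diagonal entries here to come out as $2[\Sigma^{-1}]_{ij}$, matching Minka's formula, while the diagonal stays $[\Sigma^{-1}]_{ii}$. Is that the right way to reconcile the two results?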