Let $A=[a_{ij}(x)]$ be a nonsingular matrix-valued function with inverse $A^{-1}=B=[b_{ij}(x)]$.
I am trying to use the chain rule to justify $\dfrac{\partial}{\partial x^i} (\log|\det A|)=\dfrac{(\operatorname{cof}A)_{rs}}{\det A} \dfrac{\partial a_{rs}}{\partial x^i}=b_{sr} \dfrac{\partial a_{rs}}{\partial x^i}$, with summation over the repeated indices $r,s$ understood.
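As a sanity check (separate from the proof I'm after), here is a minimal numerical test of this identity; the $2\times 2$ matrix function of a single parameter $t$ and the names `A`, `dA` below are made up purely for illustration:

```python
import numpy as np

# Hypothetical 2x2 matrix-valued function A(t) of one parameter t,
# together with its entrywise derivative dA/dt, chosen only for this check.
def A(t):
    return np.array([[np.exp(t), np.sin(t)],
                     [t**2,      2.0 + np.cos(t)]])

def dA(t):
    return np.array([[np.exp(t),  np.cos(t)],
                     [2.0 * t,   -np.sin(t)]])

t, h = 0.7, 1e-6

# Left side: centered finite difference of log|det A| at t.
lhs = (np.log(abs(np.linalg.det(A(t + h))))
       - np.log(abs(np.linalg.det(A(t - h))))) / (2 * h)

# Right side: b_{sr} da_{rs}/dt summed over r and s, i.e. tr(A^{-1} dA/dt).
rhs = np.trace(np.linalg.inv(A(t)) @ dA(t))

print(lhs, rhs)  # the two numbers agree to about 1e-9
```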
The solutions just say the proof follows by noting the expansion of the determinant by rows, $$\det A=\sum_{r=1}^n a_{ir} (\operatorname{cof} A)_{ir}$$ for any fixed $1 \leq i \leq n$, and then using the chain rule.
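Concretely, for $n=2$ and $i=1$ this expansion reads $$\det A=a_{11}(\operatorname{cof}A)_{11}+a_{12}(\operatorname{cof}A)_{12}=a_{11}a_{22}-a_{12}a_{21},$$ since $(\operatorname{cof}A)_{11}=a_{22}$ and $(\operatorname{cof}A)_{12}=-a_{21}$.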
The proof I have, from "Schaum's Outline of Tensor Calculus" (page 106), is as follows.
By the chain rule (the book writes $A_{rs}$ for the cofactor $(\operatorname{cof}A)_{rs}$ of $a_{rs}$), $$\frac{\partial}{\partial x^i} (\log |\det A|)=\frac{1}{\det A} \frac{\partial}{\partial x^i} (\det A)=\frac{1}{\det A} \frac{\partial (\det A)}{\partial a_{rs}} \frac{\partial a_{rs}}{\partial x^i}=\frac{(\operatorname{cof}A)_{rs}}{\det A} \frac{\partial a_{rs}}{\partial x^i}=b_{sr} \frac{\partial a_{rs}}{\partial x^i}.$$
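The step $\frac{\partial(\det A)}{\partial a_{rs}}=(\operatorname{cof}A)_{rs}$, with the entries treated as independent variables, can at least be verified symbolically; a minimal sketch with `sympy`, taking $n=3$ for concreteness:

```python
import sympy as sp

n = 3
# Matrix of independent symbols a_{rs}; the dependence of the entries on x
# plays no role in the partial derivative d(det A)/d(a_rs).
a = sp.Matrix(n, n, lambda r, s: sp.Symbol(f"a{r+1}{s+1}"))
detA = a.det()

# Check d(det A)/da_{rs} = (cof A)_{rs} for every entry (sympy uses 0-based indices).
for r in range(n):
    for s in range(n):
        assert sp.expand(sp.diff(detA, a[r, s]) - a.cofactor(r, s)) == 0

print("d(det A)/da_rs = (cof A)_rs holds for all r, s with n =", n)
```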
Where does $\det A=\sum_{r=1}^n a_{ir} (\operatorname{cof} A)_{ir}$ come into it?