
$$A=\begin{pmatrix}1 & -a_1 & -a_1 &\cdots & -a_1\\ -a_2 & 1 &-a_2 & \cdots &-a_2\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ -a_{N-1} & -a_{N-1} & \cdots& 1 & -a_{N-1}\\ -a_N & -a_N & \cdots & -a_N & 1 \end{pmatrix}.$$

Where $a_i\geq0\;\forall\; i\in\{1, \cdots, N\}$ and $$\sum\limits_{i=1}^{N}\dfrac{a_i}{a_i+1}<1.\quad (1)$$
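For concreteness, here is a minimal numerical sketch (not from the paper; the values of $a_i$ are made up) that builds $A$ and evaluates the left-hand side of condition $(1)$:

```python
# Minimal sketch with made-up values of a_i: build the matrix A above and
# evaluate the left-hand side of condition (1).
import numpy as np

a = np.array([0.1, 0.2, 0.3, 0.05])   # hypothetical a_i >= 0
N = a.size

# Row i of A has 1 on the diagonal and -a_i in every other column.
A = -np.outer(a, np.ones(N))
np.fill_diagonal(A, 1.0)

lhs = np.sum(a / (a + 1.0))           # left-hand side of (1)
print(lhs, lhs < 1.0)                 # ~0.536, True for this choice of a
```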

  • EDIT 1: Condition $(1)$ is supposed to guarantee that the inverse exists.

  • EDIT 2: In fact, there is no formula given for $A^{-1}$. The problem is to find $P_i$ in the following equation: $$P_i-a_i\sum\limits_{j\neq i}^{N}P_j=\alpha a_i\;\forall\;i\in\{1, \cdots, N\}.$$ This is equivalent to $AP=b$, and hence $P=A^{-1}b$, where $b=[\alpha a_1, \alpha a_2, \cdots, \alpha a_N]^{\mathrm{T}}$ and $P=[P_1, P_2, \cdots, P_N]^{\mathrm{T}}$. The authors say that $P_i$ is given by: $$P_i=\dfrac{\alpha}{1-\sum\limits_{j=1}^{N}\dfrac{a_j}{1+a_j}}\dfrac{a_i}{1+a_i}.$$ (A numerical check of this claim is sketched just below.)
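A quick numerical check of the claimed formula against a direct solve of $AP=b$ (a sketch only; the values of $a_i$ and $\alpha$ below are made up):

```python
# Sketch: compare the claimed closed form for P_i with a direct solve of A P = b.
# The values of a and alpha are made up for illustration.
import numpy as np

a = np.array([0.1, 0.2, 0.3, 0.05])               # hypothetical a_i, satisfying (1)
alpha = 2.0
N = a.size

A = np.diag(1.0 + a) - np.outer(a, np.ones(N))    # 1 on the diagonal, -a_i elsewhere
b = alpha * a

P_solve = np.linalg.solve(A, b)                   # numerical solution of A P = b
P_claim = alpha / (1.0 - np.sum(a / (1.0 + a))) * a / (1.0 + a)

print(np.allclose(P_solve, P_claim))              # True
```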

This matrix appears in a paper: the authors state that its inverse $A^{-1}$ exists when $(1)$ is satisfied. I do not know how to invert it.

How did they get $P$ without computing $A^{-1}$?

Thank you very much.

Jika
  • Its inverse doesn't necessarily exist. – JPi May 02 '14 at 15:00
  • What are the methods used to invert it? – Jika May 02 '14 at 15:01
  • There are many, including various decompositions, etc. – JPi May 02 '14 at 15:03
  • @JPi And after the edit, how to invert it? – Jika May 02 '14 at 15:06
  • Is it supposed to have a "nice" inverse? (by nice, I mean a simple formula for the $i,j$ element of $A^{-1}$). By inverting $A^T$ using Gauss pivoting, it looks like the pivot coefficients are constant in each column, but apart from that, it's not "obvious" to find a formula. – Jean-Claude Arbaut May 02 '14 at 15:12
  • In fact, there is no formula given for $A^{-1}$. The problem is to find $P_i$ in the following equation: $$P_i-a_i\sum\limits_{j\neq i}^{N}P_j=\alpha a_i\;\forall\;i\in\{1, \cdots, N\}.$$ This is equivalent to $AP=b$ and hence $P=A^{-1}b$. They said that $P_i$ is given by: $$P_i=\dfrac{\alpha}{1-\sum\limits_{j=1}^{N}\dfrac{a_j}{1+a_j}}\dfrac{a_i}{1+a_i}.$$ @Jean-ClaudeArbaut see the edit also. – Jika May 02 '14 at 15:34

1 Answer


The matrix $A$ can be written in the form $$ A=(I+\mathrm{diag}(a))-ae^T, $$ where $a=[a_1,\ldots,a_N]^T$, $\mathrm{diag}(a)$ denotes the diagonal matrix with the entries of $a$ on the diagonal and $e^T=[1,\ldots,1]$. For inversion, use the Sherman-Morrison formula: $$ A^{-1}=(I+\mathrm{diag}(a))^{-1}+\frac{(I+\mathrm{diag}(a))^{-1}ae^T(I+\mathrm{diag}(a))^{-1}}{1-e^T(I+\mathrm{diag}(a))^{-1}a}. $$
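A quick numerical sanity check of this expression (a sketch with arbitrary test values of $a$, using NumPy):

```python
# Numerical check of the Sherman-Morrison expression above (test values of a are arbitrary).
import numpy as np

a = np.array([0.1, 0.2, 0.3, 0.05])
N = a.size
e = np.ones(N)

Binv = np.diag(1.0 / (1.0 + a))                 # (I + diag(a))^{-1}
A = np.diag(1.0 + a) - np.outer(a, e)           # the matrix A from the question

# Binv is symmetric, so outer(Binv @ a, Binv @ e) equals B^{-1} a e^T B^{-1}.
A_inv_sm = Binv + np.outer(Binv @ a, Binv @ e) / (1.0 - e @ Binv @ a)
print(np.allclose(A_inv_sm, np.linalg.inv(A)))  # True
```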

The matrix is invertible provided that $$ e^T(I+\mathrm{diag}(a))^{-1}a\neq 1 \quad\Leftrightarrow\quad \sum_{i=1}^N\frac{a_i}{a_i+1}\neq 1. $$

So if $b=\alpha a$, then $$ P=A^{-1}b=\alpha A^{-1}a=\frac{\alpha(I+\mathrm{diag}(a))^{-1}a}{1-e^T(I+\mathrm{diag}(a))^{-1}a}. $$ From here, you can get that its $i$th component $P_i$ is $$ P_i=\frac{\alpha[(I+\mathrm{diag}(a))^{-1}a]_i}{1-e^T(I+\mathrm{diag}(a))^{-1}a}=\frac{\alpha\frac{a_i}{a_i+1}}{1-\sum_{j=1}^N\frac{a_j}{a_j+1}}. $$
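In particular, $P$ can be computed from $a$ and $\alpha$ alone, without ever forming the $N\times N$ matrix $A$ or its inverse; a sketch (again with made-up values):

```python
# Sketch: P from the final formula, using only elementwise arithmetic on a;
# no N-by-N matrix (and no explicit inverse) is ever formed. Values are made up.
import numpy as np

a = np.array([0.1, 0.2, 0.3, 0.05])   # hypothetical a_i
alpha = 2.0

Binv_a = a / (1.0 + a)                # components of (I + diag(a))^{-1} a
P = alpha * Binv_a / (1.0 - Binv_a.sum())

print(P)                              # P_i = alpha*(a_i/(1+a_i)) / (1 - sum_j a_j/(1+a_j))
```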

  • Very nice, to resort to Sherman-Morrison! – Jean-Claude Arbaut May 02 '14 at 15:42
  • $$A^{-1}a=(I+\mathrm{diag}(a))^{-1}a+\dfrac{\cdots}{\cdots}a.$$ Where did $(I+\mathrm{diag}(a))^{-1}a$ go? – Jika May 02 '14 at 16:04
  • @Jika I'm not sure what you mean. If you are asking why the formula for $A^{-1}$ has two terms while the formula for $A^{-1}a$ has only one, it's because in $A^{-1}a$ you can factor out the vector $(I+\mathrm{diag}(a))^{-1}a$ times something like $1+\frac{x}{1-x}=\frac{1}{1-x}$. You can of course factor out $(I+\mathrm{diag}(a))^{-1}$ (to the left or right) in the formula for $A^{-1}$ already. – Algebraic Pavel May 02 '14 at 16:13
  • @PavelJiranek I understand. Thank you very much for your help. (+10) – Jika May 02 '14 at 16:44