
I found a curious matrix $$T = \begin{bmatrix}1&2&1\\1&0&-1\\1&-2&1\end{bmatrix}$$ This matrix (or actually $\frac 1 2 T$) performs

  1. Local mean value (integral) estimation.
  2. Local derivative estimation by central difference.
  3. Local second-order derivative estimation by central difference (but with half the step length compared to 2).

Yet if we calculate its inverse, we find:

$$T^{-1} = \frac 1 4 T$$ or, perhaps more interestingly:

$$\left(\frac 1 2 T\right)^{-1}=\frac 1 2 T$$ The matrix and its inverse are the same!
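For anyone who wants to reproduce this, here is a minimal numerical check (assuming NumPy):

```python
import numpy as np

T = np.array([[1, 2, 1],
              [1, 0, -1],
              [1, -2, 1]])

# (T/2) squared should be the identity, i.e. T/2 is its own inverse
print((T / 2) @ (T / 2))                      # identity matrix
print(np.allclose(np.linalg.inv(T), T / 4))   # True
```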

Is this a coincidence? What properties or choices of estimators give rise to this curious behavior? Is this a property of differential and integral operators in general, or just a consequence of this specific choice of how to estimate them?

mathreadler
I suspect you're on to something. I'm curious what people would have to say. I think if you fleshed this out with more detail, you might get more responses.

    For example, you probably want to show explicitly that you are multiplying matrix T by values of some function at 3 different spots, perhaps represented in a vector.

The series of derivatives forming a matrix reminds me of the Wronskian, but I don't think it's applicable.

    The Jacobian also comes to mind. That might make more sense combined with Difference Equations.

    – TurlocTheRed Nov 27 '18 at 21:07
  • In what sense does that matrix $T$ do any of the things you mention? There seems to be a lot of assumed context, in particular what the vector space over which $T$ supposedly acts is. – FShrike Jan 22 '23 at 23:00
  • Rows 2 and 3 are taught in many beginners' calculus courses as approximations to the first and second derivatives of smooth functions. Row 1 is popular in engineering, perhaps for its Fourier transform's nice properties. Compare the Fejér kernel to the Dirichlet kernel. A flat [1,1,1] or [1,1] would be Dirichlet. – mathreadler Jan 23 '23 at 17:06
  • It is a really interesting question, but I think it could help to mention that you think of the $3$-vectors $\mathbf{u}$ you are multiplying with $T$ as consisting of function values $u_{i+1},u_i,u_{i-1}$ at discrete space steps. Not everyone comes from a numerical maths background. Also: perhaps it helps to consider that $\frac{1}{2}T$ is an involutory matrix, and that all involutory matrices $A$ are related to projections $P$ by $\frac{I-A}{2}=P$, which is easy to show. A matrix $P$ is a projection when $P^2=P$. – Kurt G. Jan 24 '23 at 04:55
  • @KurtG. Yes, it could be function values at these discrete steps, or local integral approximations around those points if the function itself is not smooth enough. Hmm, that is interesting, but when I study $P$ I don't manage to find anything. The eigenvector of the projection with eigenvalue $1$ is $(1,-1,-1)$, but it does not make me much wiser. The 4-point Hadamard-Walsh transform is unitary and has the same property, the fourth row being the third-order differential approximator $(1,-1,1,-1)$. – mathreadler Jan 24 '23 at 22:42
  • The only eigenvalues of projections $P$ are $1$ and $\color{red}{0}$. If I had an answer to your question above I would have written it. My comment tried to suggest what buzzwords you might want to add in an edit to attract the attention of smarter and more experienced people. – Kurt G. Jan 25 '23 at 02:19
  • The matrix $\tfrac{1}{2\sqrt{2}}\begin{bmatrix}1&3&3&1\\1&1&-1&-1\\1&-1&-1&1\\1&-3&3&-1\end{bmatrix}$ is also its own inverse. – Alex K Jan 26 '23 at 13:33

1 Answer


Alas, I should have done more searching at the start before deriving an answer from scratch!

It turns out this has already been investigated.

For an arbitrary dimension, these matrices are known as Krawtchouk (Kravchuk) matrices; their entries are values of the Kravchuk polynomials.

Their involutory property, $$\left(K^{(n+1)}\right)^2=2^nI,$$ is stated and proved in, for instance, Feinsilver and Schott (2010), On Krawtchouk Transforms.
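As a sanity check, the identity is easy to verify numerically. The sketch below (assuming NumPy; the helper name `krawtchouk` is ours) builds the matrix from the coefficient formula $T^{(n+1)}_{i,j}=[x^{n-j}](1+x)^{n-i}(x-1)^i$, which is equivalent to the closed form of Lemma 2 below, and squares it:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def krawtchouk(n):
    """(n+1)x(n+1) matrix with T[i,j] = [x^(n-j)] (1+x)^(n-i) (x-1)^i."""
    T = np.zeros((n + 1, n + 1), dtype=int)
    for i in range(n + 1):
        # coefficients of (1+x)^(n-i) * (x-1)^i, lowest degree first
        p = P.polymul(P.polypow([1, 1], n - i), P.polypow([-1, 1], i))
        for j in range(n + 1):
            T[i, j] = int(round(p[n - j]))
    return T

for n in range(1, 9):
    K = krawtchouk(n)
    assert np.array_equal(K @ K, 2**n * np.eye(n + 1, dtype=int))
print("(K^{(n+1)})^2 = 2^n I verified for n = 1..8")
```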


Original Answer

This is a collaborative effort with @MykolaPochekai. We generalise the matrix $T$ to an arbitrary dimension.

We begin by defining the $(n+1)\times(n+1)$ matrix $T^{(n+1)}$ whose elements satisfy the following constraints \begin{align}T^{(n+1)}_{i,0}=1,&\quad T^{(n+1)}_{i,n}=(-1)^i&&\forall\,0\le i\le n\\T^{(n+1)}_{0,j}=\binom nj,&\quad T^{(n+1)}_{n,j}=(-1)^j\binom nj&&\forall\,0\le j\le n\\T^{(n+1)}_{i,j}+T^{(n+1)}_{i+1,j}&=2T^{(n)}_{i,j}&&\forall\,0<i,j<n\end{align} for each $n>1$. We use the convention that $\binom ab=0$ whenever $b>a$. The last constraint can be interpreted as approximating a local $n$th-order derivative by midpoint, as the recurrence-based construction sketched below illustrates.
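These constraints determine the matrices recursively. Here is a minimal Python sketch (assuming NumPy, and assuming the recurrence also extends to the boundary rows and columns, which the closed form of Lemma 2 confirms; the helper name `build_T` is ours):

```python
import numpy as np
from math import comb

def build_T(n):
    """Build T^{(n+1)} from T^{(2)} = [[1,1],[1,-1]] using the constraints."""
    T = np.array([[1, 1], [1, -1]])                   # T^{(2)}
    for m in range(2, n + 1):                         # grow T^{(m)} -> T^{(m+1)}
        S = np.zeros((m + 1, m + 1), dtype=int)
        S[0] = [comb(m, j) for j in range(m + 1)]     # first row: binomials
        S[:, m] = [(-1) ** i for i in range(m + 1)]   # last column: signs
        for i in range(m):
            # recurrence  S[i,j] + S[i+1,j] = 2 T[i,j]
            S[i + 1, :m] = 2 * T[i, :m] - S[i, :m]
        T = S
    return T

print(build_T(2))   # recovers the 3x3 matrix T from the question
```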

Theorem 1. For each $n>1$, the matrix $2^{-n/2}T^{(n+1)}$ is an involution.

The method of proof is as follows:

  • deriving a closed form for each entry of $T^{(n+1)}$ (Lemma 2);

  • showing that all off-diagonal entries of $(T^{(n+1)})^2$ are zero (Proposition 5);

  • showing that all main-diagonal entries of $(T^{(n+1)})^2$ equal $2^n$ (Proposition 6).


Using a bivariate generating function, it is possible to derive a closed form for an arbitrary element $T^{(n+1)}_{i,j}$.

Lemma 2. For every $0<i,j<n$, $$T^{(n+1)}_{i,j}=(-1)^i\sum_{k=0}^i\binom ik\binom{n-k}j(-2)^k.$$

Proof. As the algebra is rather tedious, we provide a very brief sketch; further details can be found in this MSE answer. Define $$f(x,y)=\sum_{n\ge1}\sum_{i=0}^nT^{(n+1)}_{i,j}x^{n-1}y^i$$ where the indices are chosen so that the terms disappear in the base index. Applying the double summation to the recurrence relation $T^{(n+1)}_{i,j}+T^{(n+1)}_{i+1,j}=2T^{(n)}_{i,j}$ yields $$(y+1-2xy)f(x,y)=\sum_{n\ge1}\binom njx^{n-1}=\frac{x^{j-1}}{(1-x)^{j+1}}$$ after significant simplification. To extract the coefficient $[x^{n-1}y^i]$, we expand the denominator into two power series \begin{align}f(x,y)&=x^{j-1}\sum_{m\ge0}\binom{m+j}jx^m\cdot\sum_{\ell\ge0}(2x-1)^\ell y^\ell\end{align} so that \begin{align}T^{(n+1)}_{i,j}&=[x^{n-j}]\sum_{m\ge0}\binom{m+j}jx^m\cdot\sum_{k=0}^i\binom ik2^kx^k(-1)^{i-k}.\end{align} Taking $m+k=n-j$ gives us the desired result. $\quad\square$
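A quick spot-check of the closed form against the $3\times3$ matrix from the question (a minimal Python sketch; the helper name `T_closed` is ours, and `math.comb(a, b)` already returns $0$ for $b>a$, matching our convention):

```python
from math import comb

def T_closed(n, i, j):
    """Closed form of Lemma 2."""
    return (-1) ** i * sum(comb(i, k) * comb(n - k, j) * (-2) ** k
                           for k in range(i + 1))

assert [[T_closed(2, i, j) for j in range(3)] for i in range(3)] \
    == [[1, 2, 1], [1, 0, -1], [1, -2, 1]]
```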

Remark 3. There appears to be a circular, hyperbolic pattern in the distribution of the signs of the entries. I have asked about it here.

Corollary 4. $T^{(n+1)}$ diagonalises the exchange matrix: we have $T^{(n+1)}J_{n+1}\left(T^{(n+1)}\right)^{-1}=P$, where $J_{n+1}$ is the $(n+1)\times(n+1)$ exchange (anti-diagonal) matrix and $$P=\begin{pmatrix}1&&&&\\&-1&&&\\&&1&&\\&&&\ddots&\\&&&&(-1)^n\end{pmatrix}.$$
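The conjugation identity is easy to confirm numerically; this sketch checks the equivalent form $T^{(n+1)}J_{n+1}=PT^{(n+1)}$, which avoids an explicit inverse (helper name `T_mat` is ours):

```python
import numpy as np
from math import comb

def T_mat(n):
    """T^{(n+1)} from the closed form of Lemma 2."""
    return np.array([[(-1) ** i * sum(comb(i, k) * comb(n - k, j) * (-2) ** k
                                      for k in range(i + 1))
                      for j in range(n + 1)] for i in range(n + 1)])

for n in range(1, 7):
    T = T_mat(n)
    J = np.fliplr(np.eye(n + 1, dtype=int))          # exchange matrix J_{n+1}
    P = np.diag([(-1) ** i for i in range(n + 1)])   # alternating-sign diagonal
    assert np.array_equal(T @ J, P @ T)              # same as T J T^{-1} = P
```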

Proposition 5. For all $0\le i,j\le n$, we have $T^{(n+1)}_{i,j}=(-1)^iT^{(n+1)}_{i,n-j}$ (rows) and $T^{(n+1)}_{i,j}=(-1)^jT^{(n+1)}_{n-i,j}$ (columns). Consequently, by the symmetry of the rows and columns, $(T^{(n+1)})^2_{i,j}=\sum\limits_{r=0}^nT^{(n+1)}_{i,r}T^{(n+1)}_{r,j}$ vanishes for all $i\ne j$.

Proof. From Lemma 2, we write the second binomial coefficient using the coefficient operator, $\binom{n-k}{n-j}=[x^{n-j}](x+1)^{n-k}$. This gives the following direct proof. \begin{align}T^{(n+1)}_{i,n-j}&=(-1)^i\sum_{k=0}^i\binom ik[x^{n-j}](x+1)^{n-k}(-2)^k\\&=(-1)^i[x^{n-j}](x+1)^n\sum_{k=0}^i\binom ik\left(-\frac2{x+1}\right)^k=(-1)^i[x^{n-j}](x+1)^n\left(\frac{x-1}{x+1}\right)^i\\&=(-1)^i[x^{-j}]\left(1+\frac1x\right)^n\left(\frac{1-1/x}{1+1/x}\right)^i\\&=(-1)^i[x^j](1+x)^n\left(\frac{1-x}{1+x}\right)^i=(-1)^i[x^j](1+x)^n(-1)^i\sum_{k=0}^i\binom ik\left(-\frac2{1+x}\right)^k\\&=(-1)^i\cdot(-1)^i\sum_{k=0}^i\binom ik[x^j](1+x)^{n-k}(-2)^k\\&=(-1)^iT^{(n+1)}_{i,j}.\end{align} Similarly, proving $T^{(n+1)}_{i,j}=(-1)^jT^{(n+1)}_{n-i,j}$ is equivalent to showing that $$[x^j](x+1)^i(x-1)^{n-i}=(-1)^j[x^j](x+1)^{n-i}(x-1)^i$$ which can be verified by direct comparison. $\quad\square$
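Both symmetries can be spot-checked directly from the closed form (minimal sketch; the helper name `T` is ours):

```python
from math import comb

def T(n, i, j):
    """Closed form of Lemma 2."""
    return (-1) ** i * sum(comb(i, k) * comb(n - k, j) * (-2) ** k
                           for k in range(i + 1))

for n in range(1, 7):
    for i in range(n + 1):
        for j in range(n + 1):
            assert T(n, i, j) == (-1) ** i * T(n, i, n - j)  # row symmetry
            assert T(n, i, j) == (-1) ** j * T(n, n - i, j)  # column symmetry
```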

Proposition 6. For all $0\le i\le n$, we have $(T^{(n+1)})^2_{i,i}=2^n$.

Proof. By definition, \begin{align}(T^{(n+1)})^2_{i,i}&=\sum\limits_{r=0}^nT^{(n+1)}_{i,r}T^{(n+1)}_{r,i}\\&=\sum_{r=0}^n\sum_{k=0}^i\sum_{\ell=0}^r(-1)^{i+r}(-2)^{k+\ell}\binom ik\binom r\ell\binom{n-k}r\binom{n-\ell}i.\end{align} In Combinatorial identity: $2^N=\sum_{m=0}^N\sum_{r=0}^n\sum_{s=0}^m(-1)^{n+m}(-2)^{r+s}\binom nr\binom ms\binom{N-r}m\binom{N-s}n$ for all $0\le n\le N$, @MarkoRiedel proves that this indeed equals $2^n$ for all $0\le i\le n$. $\quad\square$
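The triple sum itself can also be checked numerically for small $n$ (minimal sketch; the helper name `diag_entry` is ours):

```python
from math import comb

def diag_entry(n, i):
    """The triple sum from the proof of Proposition 6."""
    return sum((-1) ** (i + r) * (-2) ** (k + l)
               * comb(i, k) * comb(r, l) * comb(n - k, r) * comb(n - l, i)
               for r in range(n + 1)
               for k in range(i + 1)
               for l in range(r + 1))

assert all(diag_entry(n, i) == 2 ** n
           for n in range(1, 8) for i in range(n + 1))
```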

  • I will need to read it more thoroughly later, but it seems very nice! It includes approximations of all differential operators up to order $n$. Yes, I also noticed the zero crossings, very similar to the Fourier transform, when I looked at the Hadamard-Walsh 4-point transform. – mathreadler Jan 27 '23 at 08:13
  • I buy the closed-form for $T_{i,j}^{(n+1)}$ and will play around with the diagonal. Meantime, could you please explain the remark about the placement of negative signs? Isn't the DFT a complex matrix? Even looking at real/imaginary parts of the $4\times 4$ case, I don't see it.

    Overall, I would be fine to award the bounty to this answer.

    – Integrand Jan 27 '23 at 22:04
  • @Integrand Please see revised with a plot of the distribution of signs. mathreadler, thanks for the interesting question. – ə̷̶̸͇̘̜́̍͗̂̄︣͟ Jan 28 '23 at 00:59
  • @mathreadler These are called Kravchuk polynomials; I have edited my answer. – ə̷̶̸͇̘̜́̍͗̂̄︣͟ Feb 09 '23 at 00:27