
Why does the matrix $\left(\sum\limits_{k = -N}^{N}\exp(2\pi \textbf{i} k\frac{i-j}{M})\right)_{i,j=1,\ldots,M}$ have rank $2N+1$ if $2N+1 \leq M$?

This problem appeared while linearizing an algorithm trying to calculate integrals of the form $\int_{\mathcal{M}} K(x,y)f(x) \, dx$ where $K(x,y)$ is a nonnegative-definite kernel and $\mathcal{M}$ is a (nice enough) manifold in $\mathbb{R}^d$ for some $d \in \mathbb{N}$.

Now, since $K(x,y) = \sum\limits_{k = -N}^{N}\exp(2\pi \textbf{i} k(x-y))$ is such a kernel, the derivation dictates that we must calculate $(K(x_i,x_j))_{i,j = 1,\ldots,M}$ for a chosen set of training points $x_i$. Choosing the rather nice "grid points" $x_i := \frac{i-1}{M}$, the matrix above pops out immediately. Since I was asked to check that this matrix has rank $2N+1$ for $M$ large enough, I set out to do just that -- but I got nowhere, whether I used the definition of the rank (the dimension of the image) or tried to express every column/row in terms of $2N+1$ distinct columns/rows.
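For what it's worth, a quick numerical check (a minimal NumPy sketch, not a proof) is consistent with the claimed rank for small $N$ and $M$:

```python
import numpy as np

# Minimal sketch (assuming NumPy): build K_{ij} = sum_{k=-N}^{N} exp(2*pi*i*k*(x_i - x_j))
# for the grid points x_i = (i-1)/M and compare its rank with 2N+1.
N, M = 3, 10                                   # example with 2N+1 <= M
x = (np.arange(1, M + 1) - 1) / M              # x_i = (i-1)/M
diff = x[:, None] - x[None, :]                 # matrix of x_i - x_j
K = sum(np.exp(2j * np.pi * k * diff) for k in range(-N, N + 1))
print(np.linalg.matrix_rank(K), 2 * N + 1)     # prints: 7 7
```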

I would be glad if anybody could help me out on this small detail to justify my writeup (on the approximation quality of said algorithm) using this example :).

TheOutZ

1 Answer


With $\omega_k = \exp(2 \pi i k/M)$ we have $$ X = \left( \sum_{k=-N}^{N} \omega_k^{p-q}\right)_{p, q=1, \ldots, M} = \sum_{k=-N}^{N} a_k a_{-k}^T, $$ where the $a_k$ are the vectors $$ a_k = (\omega_k, \omega_k^{2}, \ldots, \omega_k^{M})^T \in \Bbb C^M $$ for $k = -N, \ldots, N$.
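To check the factorization entrywise, note $\omega_{-k} = \omega_k^{-1}$, so $$ \left(a_k a_{-k}^T\right)_{p,q} = \omega_k^{\,p}\,\omega_{-k}^{\,q} = \omega_k^{\,p}\,\omega_k^{-q} = \omega_k^{\,p-q}, $$ and summing over $k = -N, \ldots, N$ gives exactly the entries of $X$.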

It has been shown here that $X$ has rank $2N+1$ if $a_{-N}, \ldots, a_N$ are linearly independent.
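In case that link is not at hand, one way to see the implication: write $A = \bigl(a_{-N} \mid \cdots \mid a_N\bigr) \in \Bbb C^{M \times (2N+1)}$ and let $P$ be the $(2N+1) \times (2N+1)$ permutation matrix that reverses the column order (pairing $a_k$ with $a_{-k}$); then $$ X = \sum_{k=-N}^{N} a_k a_{-k}^T = A\,P\,A^T. $$ If the $a_k$ are linearly independent, then $A$ has full column rank, so $P A^T$ has rank $2N+1$, and since $A$ is injective, $\operatorname{rank} X = \operatorname{rank}\bigl(P A^T\bigr) = 2N+1$.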

To show this linear independence, consider the $(2N+1) \times M$ matrix whose rows are $a_{-N}^T, \ldots, a_N^T$. It is a submatrix of the $M \times M$ matrix $$ Y = \left(\omega_k^{q} \right)_{k=-N, \ldots, M-1-N,\; q=1, \ldots, M}, $$ which agrees, up to multiplying each row by a nonzero factor, with the Vandermonde matrix $V$ of the $M$ pairwise distinct roots of unity $\omega_{-N}, \ldots, \omega_{M-1-N}$ (that is, of $1, \omega, \ldots, \omega^{M-1}$ with $\omega = \exp(2\pi i/M)$, in some order). That Vandermonde matrix has a non-zero determinant since its nodes are pairwise distinct. It follows that $Y$ has maximal rank $M$; in particular its rows are linearly independent, and consequently so are the vectors $a_{-N}, \ldots, a_N$.
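Written out, the relation between $Y$ and $V$ is just a row scaling: with $V = \left(\omega_k^{\,q-1}\right)_{k=-N, \ldots, M-1-N,\; q=1, \ldots, M}$, $$ Y = \operatorname{diag}(\omega_{-N}, \ldots, \omega_{M-1-N})\, V, \qquad \det Y = \Bigl(\prod_{k=-N}^{M-1-N} \omega_k\Bigr) \det V \neq 0. $$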

Martin R
  • I'm going to go through your argument tomorrow, but since it looks sound I'm going to give you the benefit of confidence and award the checkmark straight away. – TheOutZ Nov 13 '22 at 21:51