$\newcommand{\M}{\mathcal{M}}$Recently I encountered a certain class of matrices whose determinants behave in an interesting manner. Define $\M(n,k)$ for positive integers $n,k$ with $k\leq n$ to be the real $n\times n$ matrix with $1$s on the diagonal, $1$s in the $k-1$ entries to the right of the diagonal in each row, and $0$s everywhere else. Note that if there are fewer than $k-1$ entries to the right of the diagonal, then the $1$s wrap around to the leftmost columns. For example: $$\M(4,2)=\begin{bmatrix}1&1&0&0\\0&1&1&0\\0&0&1&1\\1&0&0&1\end{bmatrix}\quad\text{and}\quad\M(3,1)=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}.$$
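For concreteness, here is a minimal sketch of how such a matrix can be generated (the helper name `build_M` and the use of NumPy are just illustrative choices, not part of the definition):

```python
import numpy as np

def build_M(n, k):
    # Row i of M(n, k) has 1s in columns i, i+1, ..., i+k-1,
    # with column indices taken modulo n -- this is the wrap-around rule.
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(k):
            A[i, (i + j) % n] = 1
    return A

print(build_M(4, 2))  # reproduces the first example above
```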
At first, I believed that $\M(n,k)$ would always be nonsingular when $k<n$, but this turned out to be false. The smallest example where this conjecture fails is $\M(4,2)$, whose determinant is $0$. After more numerical testing, I have been led to the following new conjecture:
The determinant $\det\M(n,k)$ is $0$ if and only if $\gcd(n,k)>1$. If $n,k$ are coprime, then $\det\M(n,k)=k$.
I have tested all pairs $k\leq n$ up to $n=9$ by computer and found no counterexamples. Does anyone have an idea how this conjecture might be proven?
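A check along these lines takes only a few lines of code; here is a sketch of the kind of test I mean (SymPy for exact integer determinants; the helper name `build_M` is illustrative):

```python
from math import gcd
from sympy import Matrix

def build_M(n, k):
    # entry (i, j) is 1 exactly when j is within k-1 cyclic steps to the right of i
    return Matrix(n, n, lambda i, j: 1 if (j - i) % n < k else 0)

for n in range(1, 10):
    for k in range(1, n + 1):
        d = build_M(n, k).det()                 # exact integer determinant
        expected = k if gcd(n, k) == 1 else 0
        assert d == expected, (n, k, d)
print("conjecture holds for all 1 <= k <= n <= 9")
```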
Here are some partial results. It is trivial that $\det\M(n,n)=0$ for $n\geq 2$ (the matrix is all $1$s), and that $\det\M(n,1)=1$ (the identity). I can prove the result for $k=2$ as well: let $U(n)$ be the $n\times n$ matrix with all entries $0$ except a $1$ in the bottom-left corner. Expanding $\det\M(n,2)$ along its last row gives the recurrence $$\det\M(n,2)=\det\big(\M(n-1,2)-U(n-1)\big)+(-1)^{n+1}\det\big(\M(n-1,2)^t-U(n-1)^t\big),$$ which, since a matrix and its transpose have the same determinant, simplifies to $$\det\M(n,2)=(1+(-1)^{n+1})\det\big(\M(n-1,2)-U(n-1)\big).$$ After verifying the base case $\det\M(2,2)=0$ directly, this recurrence (valid for $n\geq 3$, so that $\M(n-1,2)$ is defined) proves the claim for $k=2$ provided that $\det\big(\M(n-1,2)-U(n-1)\big)=1$; write $x(m)=\det\big(\M(m,2)-U(m)\big)$ for this quantity. It is easy to compute that $x(2)=1$, and expanding along the last row shows $x(n+1)=x(n)$, so by induction $x(n)=1$ for all $n\geq 2$.
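The recurrence and the claim $x(n)=1$ are also easy to sanity-check numerically; here is a minimal sketch (again with SymPy, with `M` and `U` as illustrative names for the matrices above):

```python
from sympy import Matrix, zeros

def M(n, k):
    # 1s on the diagonal and the next k-1 entries to the right, wrapping around
    return Matrix(n, n, lambda i, j: 1 if (j - i) % n < k else 0)

def U(n):
    # all zeros except a single 1 in the bottom-left corner
    A = zeros(n, n)
    A[n - 1, 0] = 1
    return A

for n in range(3, 10):
    x = (M(n - 1, 2) - U(n - 1)).det()          # this is x(n-1)
    assert M(n, 2).det() == (1 + (-1) ** (n + 1)) * x
    assert x == 1
```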
Unfortunately, I am at a complete loss in every case other than $k=1,2,n$. It seems that (for sufficiently large $n$), as $k$ increases from $3$ up to $\lfloor n/2\rfloor$, a proof along the same lines as my argument for $k=2$ would become increasingly complicated, to the point where it is hopeless to even attempt. (I could be wrong, of course.) Not to mention the general case of arbitrary $n$ and $k$.
Does anyone have an idea how to attack the general problem? Thanks in advance!