I'm trying to use Example 4 in Section 2.5 of Philip J. Davis's book Interpolation and Approximation (Dover 1975). The aim is to fix an error in an answer I posted last night. This gives the problem a certain urgency; otherwise I would struggle on until light dawns! (If I don't get anywhere with the book, I'll try to follow this answer by joriki instead.)
I'll translate Davis's notation into the notation I'm using for the other problem, then specialise down to the case that's needed. This is almost a case of "simple" Hermite interpolation, except that for exactly one of the $n$ interpolation points, no derivative value is specified. (The derivative is zero at all the other $n - 1$ interpolation points, which ought to simplify things.)
Let $x_1, x_2, \ldots, x_n$ be distinct real numbers, $\alpha_1, \alpha_2, \ldots, \alpha_n$ positive integers, and $N = \alpha_1 + \alpha_2 + \cdots + \alpha_n - 1$. In the case of particular interest, there is a single index $i$ such that $\alpha_i = 1$ and $\alpha_j = 2$ for all $j \ne i$, so $N = 2n - 2$.
Deviating slightly more (still not harmfully, I hope) from Davis's notation, set: \begin{gather*} w_j(x) = \prod_{\substack{k=1\\k\ne j}}^n(x - x_k)^{\alpha_k} \quad (1 \leqslant j \leqslant n), \\ l_{jm}(x) = \frac{(x - x_j)^m}{m!}w_j(x)\frac{d^{(\alpha_j-m-1)}}{dx^{(\alpha_j-m-1)}}\left[\frac{1}{w_j(x)}\right]_{x=x_j} \quad (1 \leqslant j \leqslant n; \ 0 \leqslant m < \alpha_j), \\ p(x) = \sum_{j=1}^n\sum_{m=0}^{\alpha_j-1}r_j^{(m)}l_{jm}(x), \end{gather*} so that $p$ is a polynomial of degree at most $N$, constructed from $N + 1$ given real numbers $r_j^{(m)}$. The claim is: $$ p^{(m)}(x_j) = r_j^{(m)} \quad (1 \leqslant j \leqslant n; \ 0 \leqslant m < \alpha_j). $$ This is clearly equivalent to: $$ l_{jm}^{(q)}(x_k) = \delta_{jk}\delta_{mq} \quad (1 \leqslant j, k \leqslant n; \ 0 \leqslant m < \alpha_j; \ 0 \leqslant q < \alpha_k), $$ where $\delta$ is the Kronecker delta.
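Not essential to the question, but in case a concrete object helps: below is a small sympy sketch of these formulas exactly as I have transcribed them (so any transcription error on my part is of course reproduced by the code). The node values are arbitrary placeholders I chose for experimenting. As a sanity check, in the pure Lagrange case, where every $\alpha_j = 1$, the sketch does reproduce the familiar Lagrange basis.

```python
import sympy as sp

x = sp.symbols('x')

def w(j, nodes, alphas):
    """w_j(x) = product over k != j of (x - x_k)**alpha_k."""
    factors = [(x - xk)**ak
               for k, (xk, ak) in enumerate(zip(nodes, alphas)) if k != j]
    return sp.Mul(*factors)

def l(j, m, nodes, alphas):
    """l_{jm}(x) exactly as written above: (x - x_j)^m / m! times w_j(x),
    times the (alpha_j - m - 1)-th derivative of 1/w_j evaluated at x_j."""
    wj = w(j, nodes, alphas)
    deriv_at_xj = sp.diff(1 / wj, x, alphas[j] - m - 1).subs(x, nodes[j])
    return (x - nodes[j])**m / sp.factorial(m) * wj * deriv_at_xj

# Sanity check: ordinary Lagrange interpolation (every alpha_j = 1) at three
# arbitrarily chosen nodes.  Here l_{j0}(x_k) = delta_{jk} does hold.
nodes, alphas = [0, 1, 3], [1, 1, 1]
print([[l(j, 0, nodes, alphas).subs(x, xk) for xk in nodes] for j in range(3)])
# -> [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```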
In our special problem, $r_{j}^{(1)} = 0$ for all $j \ne i$ (and $r_i^{(1)}$ isn't defined).
Making matters even simpler, we have $r_j^{(0)} = 1$ for all $j \leqslant i$, and $r_j^{(0)} = 0$ for all $j > i$. Therefore: $$ p(x) = l_{10}(x) + l_{20}(x) + \cdots + l_{i-1,0}(x) + l_{i0}(x). $$ The last term in this sum, at least, presents no problem: $$ l_{i0}(x) = w_i(x)\frac{d^{(0)}}{dx^{(0)}}\left[\frac{1}{w_i(x)}\right]_{x=x_i} = \frac{w_i(x)}{w_i(x_i)} = \prod_{\substack{j=1\\j\ne i}}^n\left(\frac{x-x_j}{x_i-x_j}\right)^2. $$ This has to satisfy $l_{i0}(x_j) = \delta_{ij}$ ($1 \leqslant j \leqslant n$) and $l_{i0}'(x_j) = 0$ ($j \ne i$), and it does, since every factor $x - x_j$ with $j \ne i$ appears squared.
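For what it's worth, continuing the sympy sketch above with the multiplicities of this special problem confirms this behaviour of $l_{i0}$; the three nodes and the choice of $i$ (the middle node) are again arbitrary placeholders.

```python
# Special multiplicities of this problem: alpha_i = 1 at one node (here the
# middle one, i = 1), alpha_j = 2 at the others.
nodes, alphas, i = [0, 1, 3], [2, 1, 2], 1
l_i0 = l(i, 0, nodes, alphas)
print([l_i0.subs(x, xk) for xk in nodes])                    # -> [0, 1, 0]
print([sp.diff(l_i0, x).subs(x, nodes[k]) for k in (0, 2)])  # -> [0, 0]
```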
The reason I am posting this question is that I cannot understand why the terms for $j \ne i$: $$ l_{j0}(x) = w_j(x)\frac{d}{dx}\left[\frac{1}{w_j(x)}\right]_{x=x_j} $$ satisfy the conditions $l_{j0}(x_k) = \delta_{jk}$ ($1 \leqslant k \leqslant n$). This does not seem evident from the definition, nor does it become any more evident when I write out the form of $l_{j0}(x)$ in the special case at hand.
In the present problem: $$ w_j(x) = (x - x_i)\prod_{\substack{k=1\\k\ne i,j}}^n(x - x_k)^2 \quad (j \ne i). $$ But it seems clearer to describe the problem in the general case, because there we have simply: $$ l_{j0}(x) = w_j(x)\left[-\frac{w_j'(x)}{w_j(x)^2}\right]_{x=x_j} = -\frac{w_j'(x_j)}{w_j(x_j)}\frac{w_j(x)}{w_j(x_j)}, $$ therefore: $$ l_{j0}(x_j) = -\frac{w_j'(x_j)}{w_j(x_j)}, $$ and this is not always equal to $1$. (We do, however, have $l_{j0}(x_k) = 0$ for all $k \ne j$.)
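Continuing the sketch once more, this time for a node with $\alpha_j = 2$ (same placeholder nodes as before), reproduces exactly this difficulty: the value at $x_j$ comes out as $-w_j'(x_j)/w_j(x_j)$, which here is $5/3$ rather than $1$, while the values at the other nodes are $0$ as expected.

```python
# A node with alpha_j = 2: take j = 0 at the same placeholder nodes.
nodes, alphas = [0, 1, 3], [2, 1, 2]
j = 0
l_j0 = l(j, 0, nodes, alphas)
wj = w(j, nodes, alphas)
print(l_j0.subs(x, nodes[j]))                    # -> 5/3, not 1
print((-sp.diff(wj, x) / wj).subs(x, nodes[j]))  # -> 5/3, i.e. -w_j'(x_j)/w_j(x_j)
print([l_j0.subs(x, xk) for xk in nodes])        # -> [5/3, 0, 0]
```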
I expect my mistake is something simple, but what is it?