
Motivation

I have the following non-homogeneous Bessel differential equation $$\frac{\mathrm{d}^2R}{\mathrm{d}r^2}+\frac1r\frac{\mathrm{d}R}{\mathrm{d}r}+\alpha^2R=J_0(\alpha r)$$

I want to find the general solution for this ODE. I know that the general solution can be written as

$$R=R_h+R_p$$

where $R_h$ is the general solution of the homogeneous ODE, known to be

$$R_h=C_1 \, J_0(\alpha r) + C_2 \, Y_0(\alpha r).$$

Next, to find the particular solution $R_p$, one can use the method of variation of parameters to build a particular solution from the homogeneous ones. However, this leads to a messy expression with hard integrals! Since both MAPLE and WOLFRAM use this technique, their results are not useful to me. I know that

$$R_p=\frac1{2 \alpha^2}\left[\alpha rJ_1(\alpha r)\right] \tag{*}$$

and it can be verified by substituting it into the ODE. In fact, I saw this in a published paper.
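For example, $(*)$ can be checked numerically. Below is a minimal sketch using SciPy's Bessel functions (the value $\alpha = 2$, the sample grid, and the finite-difference step are arbitrary test choices, not from the original problem):

```python
import numpy as np
from scipy.special import j0, j1

alpha = 2.0                    # arbitrary test value
r = np.linspace(0.5, 5.0, 50)  # keep away from r = 0, where the 1/r term is singular
h = 1e-4                       # finite-difference step

def Rp(r):
    # candidate particular solution (*): R_p = r J_1(alpha r) / (2 alpha)
    return r * j1(alpha * r) / (2 * alpha)

# central finite differences for R_p' and R_p''
d1 = (Rp(r + h) - Rp(r - h)) / (2 * h)
d2 = (Rp(r + h) - 2 * Rp(r) + Rp(r - h)) / h**2

residual = d2 + d1 / r + alpha**2 * Rp(r) - j0(alpha * r)
print(np.max(np.abs(residual)))  # ~1e-7: zero up to finite-difference error
```

The residual sits at the level of the truncation error of the difference scheme, consistent with $(*)$ solving the ODE exactly.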


Question

Is there an elegant method to obtain the particular solution given in $(*)$?

Ice Tea

2 Answers

3

Use the variation of constants method. Look for the solution in the form $$R_p = C(r)J_0(\alpha r),$$ so that $$\dfrac d{dr}R_p = \dfrac d{dr}C(r)\,J_0(\alpha r) + C(r)\dfrac d{dr}J_0(\alpha r),$$ $$\dfrac{d^2}{dr^2}R_p = \dfrac{d^2}{dr^2}C(r)\,J_0(\alpha r) + 2\dfrac d{dr}C(r)\dfrac d{dr}J_0(\alpha r) + C(r)\dfrac{d^2}{dr^2}J_0(\alpha r).$$

Substituting into the non-homogeneous equation gives $$J_0(\alpha r)\dfrac{d^2}{dr^2}C(r) + \left(2\dfrac{d}{dr}J_0(\alpha r)+\dfrac1rJ_0(\alpha r)\right)\dfrac{d}{dr}C(r) + \left(\dfrac{d^2}{dr^2}J_0(\alpha r)+\dfrac1r\dfrac d{dr}J_0(\alpha r)+\alpha^2J_0(\alpha r)\right)C(r) = J_0(\alpha r).$$ The coefficient of $C(r)$ vanishes because $J_0(\alpha r)$ solves the homogeneous equation, leaving $$J_0(\alpha r)\dfrac{d^2}{dr^2}C(r) + \left(\dfrac1rJ_0(\alpha r) + 2\dfrac d{dr}J_0(\alpha r)\right)\dfrac{d}{dr}C(r) = J_0(\alpha r).$$

Substitute $$D(r)=\dfrac{d}{dr}C(r),$$ so that $$J_0(\alpha r)\dfrac{d}{dr}D(r) + \dfrac1rJ_0(\alpha r)D(r) + 2\dfrac d{dr}J_0(\alpha r)D(r) = J_0(\alpha r).$$ Multiplying by $rJ_0(\alpha r)$ makes the left-hand side an exact derivative: $$J_0^2(\alpha r)D(r) + 2rJ_0(\alpha r)\dfrac d{dr}J_0(\alpha r)D(r) + rJ_0^2(\alpha r)\dfrac{d}{dr}D(r) = rJ_0^2(\alpha r),$$ $$\dfrac{d}{dr}\left(rJ_0^2(\alpha r)D(r)\right) = rJ_0^2(\alpha r).$$
Use the formula $\int zJ_0^2(z)\,dz = \dfrac{z^2}2\left(J_0^2(z)+J_1^2(z)\right)$ (checked symbolically at the end of this answer) and drop the constant of integration, since any particular solution will do: $$rJ_0^2(\alpha r)D(r) = \dfrac{r^2}{2}\left(J_0^2(\alpha r)+J_1^2(\alpha r)\right),$$ $$D(r) = \dfrac{r}{2}\,\dfrac{J_0^2(\alpha r)+J_1^2(\alpha r)}{J_0^2(\alpha r)},$$ which is an exact derivative: $$\dfrac d{dr}C(r) = \dfrac1{2\alpha}\dfrac{d}{dr}\left(\dfrac{rJ_1(\alpha r)}{J_0(\alpha r)}\right),$$ using $$\dfrac{dJ_0(\alpha r)}{dr}=-\alpha J_1(\alpha r),\quad \dfrac{dJ_1(\alpha r)}{dr}=\alpha\left(J_0(\alpha r)-\dfrac1{\alpha r}J_1(\alpha r)\right).$$

Integrating once more, $$C(r) = \dfrac r{2\alpha}\dfrac{J_1(\alpha r)}{J_0(\alpha r)},$$ $$\boxed{R_p = \dfrac r{2\alpha}J_1(\alpha r)}$$

The same approach works with $Y_0(\alpha r)$ in place of $J_0(\alpha r)$.
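As a sanity check, the antiderivative $\int zJ_0^2(z)\,dz = \frac{z^2}{2}\left(J_0^2(z)+J_1^2(z)\right)$ used above can be confirmed symbolically. Here is a short SymPy sketch (not part of the original derivation); it eliminates the $J_2$ that SymPy's differentiation introduces via the recurrence $J_2(z) = \frac{2}{z}J_1(z) - J_0(z)$:

```python
import sympy as sp

z = sp.symbols('z', positive=True)
J0, J1 = sp.besselj(0, z), sp.besselj(1, z)

# claimed antiderivative of z J_0(z)^2
F = z**2 / 2 * (J0**2 + J1**2)

# F'(z) - z J_0(z)^2 should vanish identically; SymPy writes J_1' in terms
# of J_0 and J_2, so eliminate J_2 with J_2(z) = 2 J_1(z)/z - J_0(z)
expr = sp.diff(F, z) - z * J0**2
expr = expr.subs(sp.besselj(2, z), 2 * J1 / z - J0)
print(sp.expand(expr))  # prints 0
```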

3

Start with the equation $$ \frac{d^2}{dr^2}J_{0}(\alpha r)+\frac{1}{r}\frac{d}{dr}J_{0}(\alpha r)+\alpha^2J_{0}(\alpha r)=0. $$ $J_0$ has an everywhere convergent power series, so you can differentiate with respect to $\alpha$ and interchange the orders of differentiation. Since $\partial_\alpha J_0(\alpha r)=rJ_0'(\alpha r)$, this gives $$ \frac{d^2}{dr^2}(rJ_{0}'(\alpha r))+\frac{1}{r}\frac{d}{dr}(rJ_0'(\alpha r))+\alpha^2(rJ_0'(\alpha r))+2\alpha J_{0}(\alpha r)=0. $$ Therefore $R(r)=-\frac{r}{2\alpha}J_{0}'(\alpha r)=-\frac{1}{2\alpha^2}(\alpha r)J_{0}'(\alpha r)$ is a solution of $$ R''(r)+\frac{1}{r}R'(r)+\alpha^2R(r)=J_0(\alpha r). $$ Since $J_0'=-J_1$, this is exactly the particular solution $(*)$: $R_p=\frac{r}{2\alpha}J_1(\alpha r)$ (see the numerical sketch below).

Disintegrating By Parts
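The heart of this answer is differentiation with respect to the parameter: $\partial_\alpha J_0(\alpha r) = rJ_0'(\alpha r) = -rJ_1(\alpha r)$. A minimal numerical sketch of that identity (using SciPy; $\alpha = 2$, the grid, and the step size are arbitrary choices):

```python
import numpy as np
from scipy.special import j0, j1

alpha, h = 2.0, 1e-6           # arbitrary test value and difference step
r = np.linspace(0.1, 5.0, 50)

# central difference in alpha: d/d(alpha) J_0(alpha r) vs. -r J_1(alpha r)
fd = (j0((alpha + h) * r) - j0((alpha - h) * r)) / (2 * h)
print(np.max(np.abs(fd + r * j1(alpha * r))))  # ~1e-9: zero up to finite-difference error
```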
  • (+1) This is the elegant answer I was looking for! :) Many, many thanks. So this technique can be used whenever the RHS of the ODE is a solution of the homogeneous ODE! :) Nice technique! Have you seen it before? :) – Hosein Rahnama Dec 10 '15 at 22:33
  • @H.R. : You're welcome. If you use fixed endpoint conditions, then the classical eigenfunction solutions always vary analytically with the eigenvalue parameter $\lambda$. There is a hidden condition on $J_0$, but the trivial condition is $J_0(\alpha r)|_{r=0}=1$ for all $\alpha$; the hidden condition has to do with an asymptotic behavior at $r=0$, and is always $0$. In fact $J_0(\sqrt{\lambda}r)$ must be an entire function of $\lambda$ because $L_0J_0(\sqrt{\lambda}r)=\lambda J_{0}(\sqrt{\lambda}r)$, which it is because of the even powers in the power series expansion. – Disintegrating By Parts Dec 10 '15 at 22:35
  • @TrialAndError Congratulations, your technique is impressive! Indeed, $$\dfrac{d^2}{dr^2}(-rJ_1(\alpha r))+ \dfrac1r \dfrac d{dr}(-rJ_1(\alpha r))+\alpha^2(-rJ_1(\alpha r)) + 2\alpha J_0(\alpha r)=0,$$ $$\dfrac{d^2}{dr^2}\left(\dfrac r{2\alpha}J_1(\alpha r)\right) + \dfrac1r \dfrac d{dr}\left(\dfrac r{2\alpha}J_1(\alpha r)\right)+\alpha^2\left(\dfrac r{2\alpha}J_1(\alpha r)\right) = J_0(\alpha r).$$ – Yuri Negometyanov Dec 11 '15 at 08:00
  • Human vs MAPLE & WOLFRAM - 2:0 ;) – Yuri Negometyanov Dec 11 '15 at 08:09
  • This is a "resonant" non-homogeneous case, so the success of that technique isn't accidental. – Yuri Negometyanov Dec 11 '15 at 08:51
  • @YuriNegometyanov: Yeah, Human is the Winner! :D – Hosein Rahnama Dec 11 '15 at 11:24
  • @YuriNegometyanov : I'm glad you think I'm a human. Machines win again. – Disintegrating By Parts Dec 11 '15 at 14:30
  • TrialAndError :) – Yuri Negometyanov Dec 11 '15 at 18:24