
I have many questions about the existence conditions that arise when solving ODEs. However, I think they share a common root (treating $y$ as a variable instead of as a function), so it seems meaningful to collect them in a single question:

(In the following points I'll present example differential equations to illustrate. My question, however, is general in nature, so the specific kind of ODE doesn't really matter.)

Simplification of functions that depend only upon the independent variable $$a(x)y'+b(x)y=f(x) \ \ \ \mathbf{[1]}$$ Dividing by $a(x)$: $$y'+\frac{b(x)}{a(x)}y=\frac{f(x)}{a(x)} \ \ \ \ \ \ \ \mathbf{[2]}$$ Now this equation is equivalent to the former except where $a(x)=0$. Now suppose that we manage to find the general integral $\varphi$ of $\mathbf{[2]}$.

Let $\mathcal{D}$ be the maximal domain on which it is meaningful to define the map $x\mapsto \varphi(x)$ with codomain $\mathbb{R}$ (a domain that in general may depend on the constant(s)).

(Let's suppose that there are no further simplifications or steps that require other existence conditions.)

Now, according to my logic, the domain on which the general-integral functions should be defined is (without other considerations):

$$(\mathcal{D}\cap\text{Dom}(a)\cap\text{Dom}(b)\cap\text{Dom}(f))\setminus a^{-1}(\{0\})$$

(Or, to be even more precise, any subset of this "maximal domain" in which each point is still an accumulation point should also be fine.)

Is this correct? If so, this gives us only the general integral of $\mathbf{[2]}$, which in general is a subset of the general integral of $\mathbf{[1]}$: am I correct?
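As a concrete sanity check (my own toy example, not from the equations above): take $a(x)=x$, so $\mathbf{[1]}$ reads $xy'=2y$. Every $y=Cx^2$ solves it on all of $\mathbb{R}$, including at $x=0$ where the divided form $y'=\frac{2}{x}y$ is not even defined, and one can even glue different constants on each half-line and still get a differentiable solution. A quick sympy sketch:

```python
import sympy as sp

x, C = sp.symbols('x C')

# Toy example: a(x) = x, so [1] is x*y' = 2*y and [2] is y' = (2/x)*y.
y = C * x**2

# y = C*x^2 satisfies the original equation x*y' - 2*y = 0 everywhere,
# including at x = 0, where the divided equation [2] is undefined.
residual = sp.simplify(x * sp.diff(y, x) - 2 * y)
print(residual)  # 0

# Gluing different constants on each half-line still gives a C^1 solution:
y_glued = sp.Piecewise((x**2, x < 0), (3 * x**2, True))
residual_glued = x * sp.diff(y_glued, x) - 2 * y_glued
# The residual vanishes at sample points on both sides and at the junction.
print([residual_glued.subs(x, x0) for x0 in (-2, -1, 0, 1, 2)])
```

So dividing by $a(x)$ really can shrink the set of solutions, exactly as the question suspects.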

Pseudo-inverses

Let's suppose we have: $$\frac{dy}{dx}=xe^{-x}(y^2+1)$$ Now comes a step that for me is completely unclear and informal, so I don't know if this may be the cause of what comes next: $$\frac{1}{y^2+1}dy=xe^{-x} dx$$ Now I integrate both sides: $$\arctan(y)=-e^{-x}(x+1)+c$$ Now my heart tells me that this is possible iff: $$-\frac\pi2<-e^{-x}(x+1)+c<\frac\pi2$$ So let's call $A_c$ the set on which this inequality is verified. Supposing $x\in A_c$: $$y=\tan(-e^{-x}(x+1)+c)$$ So the general integral is: $$\varphi_c(x)=\tan(-e^{-x}(x+1)+c)$$ This should be defined in $A_c$ and only there (or in a suitable subset), because otherwise the cancellation of $\tan$ and $\arctan$ is meaningless. But if we try for example $c=0$, then $A_0 \approx (-1.39,+\infty)$. But in reality $\varphi_0(x)=\tan(-e^{-x}(x+1))$ satisfies the ODE everywhere except at the points where $-e^{-x}(x+1)=\frac\pi2+k\pi$. Why is it so? Is there something wrong?
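A symbolic check (a small sympy sketch) confirms the puzzling observation: $\varphi_c$ satisfies the ODE identically, wherever $\tan$ is defined, with no reference to $A_c$ at all:

```python
import sympy as sp

x, c = sp.symbols('x c')

# Candidate general integral from the separation-of-variables computation.
phi = sp.tan(-sp.exp(-x) * (x + 1) + c)

# Residual of the ODE y' = x*exp(-x)*(y^2 + 1); it simplifies to zero
# identically, i.e. on the whole domain of tan, not just on the set A_c.
residual = sp.simplify(sp.diff(phi, x) - x * sp.exp(-x) * (phi**2 + 1))
print(residual)  # 0
```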

Simplification of functions that depend upon $y$

Oh lord! These are the worst! Let's suppose: $$ y'=xy \ \ \ \mathbf{[1]} $$ $$ \frac{dy}{dx}=xy$$ $$\mathbf{(strange \ and \ unclear \ step)}$$ $$ \frac{1}{y}dy=x \ dx \ \ \ \mathbf{[2]}$$

Okay, so I divided by $y$, so I should impose $y\neq 0$: but what should that mean? $y$ is a function, not a variable. Reasoning logically about what an ODE actually is, I came to the conclusion that this should mean: $y(x)\neq 0 \ \ \forall x\in \text{Dom}(y)$. That is, if there are zeroes of $y$ in $\text{Dom}(y)$, they must be removed from $\text{Dom}(y)$, because if they are included we obtain a solution of $\mathbf{[2]}$ that won't be a solution of $\mathbf{[1]}$. However: $$ \frac{1}{y}dy=x \ dx $$ $$\ln|y|=\frac{x^2}{2}+c$$ $$|y|=ce^{\frac{x^2}{2}}\ \ \ c>0$$

Now the situation gets fun. I've seen this step many times: $$y=\pm ce^{\frac{x^2}{2}}\ \ \ c>0$$ $$y= ce^{\frac{x^2}{2}} \ \ \ c\neq 0$$ This is in my opinion conceptually wrong, because we are still treating $y$ as a mere variable. In my opinion $y$ could, for example (and it's not the only option), be equal on some intervals to $3 e^{\frac{x^2}{2}}$ and on others to $-3 e^{\frac{x^2}{2}}$. Even if we require the domain of the solution to be a single interval, who assures us that we can't join two pieces in a way that keeps the function continuous and differentiable at the junction point? I don't know if it's possible in this case, but I'm speaking in general.

However: $$y=c e^{\frac{x^2}{2}}, \ c \neq 0$$ And now, according to my professor, we must recover the case that we lost in the existence condition, that is to say $y=0$, meaning we now have to consider $y$ to be the null function: What!? I don't believe that the condition $y\neq 0$ means "$y$ isn't the null function": it's really much stronger than that!! Adding the null function we get: $$y=c e^{\frac{x^2}{2}} $$ But who tells us that there isn't some other solution of $\mathbf{[1]}$ that has some zeroes but is not the null function?
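For what it's worth, sympy's `checkodesol` confirms that the one-parameter family (with $c\in\mathbb{R}$, the null function included as $c=0$) does solve $\mathbf{[1]}$; of course, a substitution check like this cannot by itself settle the completeness question raised above:

```python
import sympy as sp

x, c = sp.symbols('x c')
y = sp.Function('y')

# The ODE [1]: y' = x*y.
ode = sp.Eq(y(x).diff(x), x * y(x))

# Candidate family y = c*exp(x^2/2); c = 0 gives the null function.
candidate = sp.Eq(y(x), c * sp.exp(x**2 / 2))

# checkodesol substitutes the candidate into the ODE and simplifies;
# it returns (True, 0) when the residual vanishes identically.
ok, residual = sp.checkodesol(ode, candidate)
print(ok, residual)  # True 0
```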

Thank you in advance for your help :)

Kandinskij
    I didn't read it all but there is a theorem called Picard–Lindelöf theorem and it is really rigorous. https://en.wikipedia.org/wiki/Picard%E2%80%93Lindel%C3%B6f_theorem – Hermis14 Jan 13 '21 at 15:30
  • Thank you, but I studied that theorem and it only assures local existence and uniqueness of a solution of a Cauchy Problem. And here the nature of the question is not local. – Kandinskij Jan 13 '21 at 15:32
  • You can indefinitely extend the interval of existence by testing the Lipschitz condition. – Hermis14 Jan 13 '21 at 15:38
  • I'm sorry, but I strongly disagree that the theorem you presented solves all my problems. First of all, it requires the domain of $f$ in $y'=f(x,y)$ to be open (otherwise you cannot construct the cylinder as in the proof). Secondly, not all functions are Lipschitz with respect to $y$. Third, it simply doesn't answer many points of the question. – Kandinskij Jan 13 '21 at 15:51
  • @Eureka: It does solve at least some of your problems, since it implies (for ODEs such that the conditions are satisfied, of course) that solution curves cannot cross, and this means that if $y(x)=0$ (for all $x$) is a solution, then you know that any other solution must be nonzero everywhere. So when seeking those other solutions, it is indeed safe to divide by $y$. (But many books explain these things terribly badly, so I can understand your confusion and frustration.) – Hans Lundmark Jan 13 '21 at 16:06
  • And regarding the "unclear and informal" step where you separate the variables, that's something that has been discussed many times already on this site. See here, for example: https://math.stackexchange.com/questions/27425/what-am-i-doing-when-i-separate-the-variables-of-a-differential-equation (and the "linked questions" there). – Hans Lundmark Jan 13 '21 at 16:08
  • It's actually nothing else than "computing antiderivatives using the chain rule backwards", much like when you recognize at a glance that $2x \cos(x^2)$ is the derivative of $\sin(x^2)$, but with an as yet unknown function instead. For example, one may recognize that $y'(x) \cos(y(x))$ is the derivative of $\sin(y(x))$ with respect to $x$. And the thing you need to know, in order to recognize that, is that $\cos y$ is the derivative of $\sin y$ with respect to $y$, i.e., you need to be able to compute $\int \cos y \, dy$, and in that step you (temporarily) treat $y$ as a variable. – Hans Lundmark Jan 13 '21 at 16:46
  • To be clear, here I'm not talking about any of your examples, but about a situation where you have an ODE of the form $y'(x) \cos y(x) = f(x)$. Then you can solve it by integrating both sides with respect to $x$, which gives $\sin y(x) = F(x)+C$, where $F$ is some antiderivative of $f$. On the right-hand side you just have an integral $\int f(x) \, dx$. On the left-hand side, you also have an integral with respect to $x$, but in order to compute it (using the chain rule backwards), you need to be able to compute the integral $\int \cos y \, dy$, which is with respect to $y$. – Hans Lundmark Jan 13 '21 at 16:57
  • Thank you. I think that I understand that passage now. – Kandinskij Jan 13 '21 at 16:58
  • And if it bothers you to use $y$ (since it's already used as the name of the sought function), you might as well say that you're computing the auxiliary integral $\int \cos u \, du$, for example. – Hans Lundmark Jan 13 '21 at 16:58
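The "chain rule backwards" reading described in the comments can be checked symbolically (a small sketch using sympy's undefined functions):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# d/dx [ sin(y(x)) ] = cos(y(x)) * y'(x): the chain rule, read forwards.
lhs = sp.diff(sp.sin(y(x)), x)
rhs = sp.cos(y(x)) * sp.diff(y(x), x)
print(sp.simplify(lhs - rhs))  # 0

# So from y'(x)*cos(y(x)) = f(x) one may integrate both sides in x and
# obtain sin(y(x)) = F(x) + C; the inner step is the auxiliary integral
# of cos with respect to a temporary variable u, as the comment suggests:
u = sp.symbols('u')
print(sp.integrate(sp.cos(u), u))  # sin(u)
```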
