
Consider the ODE $$\frac{dx}{dt} = ax + b$$ where $a$ and $b$ are two parameters. The way to solve this is to divide both sides by $ax+b$ and integrate:

\begin{align*} \int \frac{\dot x}{ax+b}\,dt &= t+C \\ \frac{\log|ax+b|}{a} &= t+C \\ x(t) &= Ke^{at}-\frac ba \end{align*}

Easy enough. But I'm not sure why this approach doesn't exclude some possible solutions in its first step. Doesn't dividing by $ax+b$ immediately rule out any solution that takes the value $-\frac ba$ somewhere on the interval where it's defined? It seems like we might be losing potential solutions. So why is the above solution the general solution?

1 Answer


You're right, and this is a good issue to point out. In this case, it's straightforward to show uniqueness, though: Suppose that $x(t)$ is a solution and notice that if $y(t) = e^{-at} x(t)$, we have

\begin{align*} y'(t) &= -ae^{-at} x(t) + e^{-at} x'(t) \\ &= -ae^{-at} x(t) + e^{-at} \big(ax(t) + b\big) \\ &= be^{-at} \end{align*}

Now integrating shows what $y$ must be, and hence $x$. No division by zero at all.
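To spell out that last step (keeping the same implicit assumption $a\neq 0$ that the formula in the question already relies on): integrating $y'(t)=be^{-at}$ and undoing the substitution gives

\begin{align*} y(t) &= -\frac ba e^{-at} + K, \\ x(t) &= e^{at}y(t) = Ke^{at} - \frac ba, \end{align*}

which is exactly the family found by separation of variables. In particular, $K=0$ recovers the constant solution $x\equiv -\frac ba$ that the division step seemed to throw away.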


Sometimes one can solve an equation in a somewhat ad hoc way, as the separation of variables above does, and then simply check that the solution is valid by substitution and that it is unique by an easy argument like this one.
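For instance, the validity check here is a one-line substitution of the candidate family back into the ODE:

$$x(t) = Ke^{at}-\frac ba \;\Longrightarrow\; \dot x(t) = aKe^{at} = a\Big(x(t)+\frac ba\Big) = ax(t)+b.$$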