In general, why don't we consider the negative case when doing $u$-substitutions that involve a $u^2$? That is, when substituting back in for $x$, why does the square root take only the positive value $+\sqrt{x}$ rather than $-\sqrt{x}$ (or whatever function is being square-rooted)?
A simple example from my textbook:
Integrate $f(x)=\int \frac{1}{1+\sqrt{x}} \, dx$ using the substitution $x=(u-1)^2$.
Through some straightforward manipulation, the textbook's answer reaches:
$f(x)=2(1+\sqrt{x})-2(\ln(1+\sqrt{x}))+C$
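(Filling in the intermediate steps as I understand them, taking the positive branch $\sqrt{x}=u-1$, i.e. $u \ge 1$:)

$$
\begin{align*}
x = (u-1)^2 \quad &\Rightarrow \quad dx = 2(u-1)\,du, \qquad \sqrt{x} = u-1 \\
\int \frac{dx}{1+\sqrt{x}} &= \int \frac{2(u-1)}{u}\,du = \int \left(2 - \frac{2}{u}\right) du \\
&= 2u - 2\ln u + C = 2(1+\sqrt{x}) - 2\ln\left(1+\sqrt{x}\right) + C.
\end{align*}
$$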
Why can't the case $f(x)=2(1-\sqrt{x})-2(\ln(1-\sqrt{x}))+C$ work in this method?
Is it just that you first define $u=1+\sqrt{x}$ rather than starting from the substitution given in the question, and then derive the $x(u)$ function from that?
I thought it might be because the $x(u)$ and $u(x)$ functions must be inverses of each other, but you can still define that inverse using the negative branch, and it seemingly should still work. Any input is appreciated; am I just missing something extremely obvious?
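For what it's worth, here is a quick symbolic sanity check (using sympy, assuming that's acceptable) confirming that the textbook's answer does differentiate back to the original integrand:

```python
import sympy as sp

# Declare x positive so sqrt(x) is real and well-behaved
x = sp.symbols('x', positive=True)

# The textbook's antiderivative
F = 2*(1 + sp.sqrt(x)) - 2*sp.log(1 + sp.sqrt(x))

# Differentiate and compare against the original integrand 1/(1 + sqrt(x))
residual = sp.simplify(sp.diff(F, x) - 1/(1 + sp.sqrt(x)))
print(residual)  # 0, so F is indeed an antiderivative
```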