Although it is true that any starting point in $[1,2]$ leads to convergence for this particular problem, it is better to be safe.
Without any assumptions on the second derivative, you can still guarantee convergence using the Newt-safe algorithm, which combines bisection with Newton's method so that every iterate remains inside the current bracket.
Imagine, for the sake of example, that your initial bracket was $[0,2]$ instead of $[1,2]$, and you attempted to start Newton's method at $x_0=0$, only to find $x_1=-1\notin(0,2)$. Since the Newton step has left the bracket, there is no reason to trust it; you reject it and instead take $x_1=1$, the midpoint of the bracket. Depending on the sign of $f(1)$, you then update the bracket; in this case the new bracket becomes $[1,2]$. Retrying Newton's method from $x_1=1$, the iterates begin to converge.
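To see where $x_1=-1$ comes from: the numbers here are consistent with the running example $f(x)=x^3-x-1$ (an assumption; adjust if your $f$ differs), for which $f(0)=-1$ and $f'(0)=-1$, so the Newton step gives
$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 0 - \frac{-1}{-1} = -1.$$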
Usually what happens is that bisection "finds a good starting point for Newton's method", as in the example above, so you no longer have to worry about starting close to the root at all. Even though plain Newton's method can fail when started at $x_0<1/\sqrt3$, Newt-safe will converge even from an initial bracket as wide as $[-10,10]$.
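As a concrete sketch, here is one way the Newt-safe idea can be implemented in Python. The function name `newt_safe` and the test function $f(x)=x^3-x-1$ are illustrative assumptions, not a fixed specification; the essential logic is that each iteration attempts a Newton step and falls back to the bracket midpoint whenever that step would leave the bracket (or the derivative vanishes), while the bracket itself is shrunk by sign as in bisection.

```python
def newt_safe(f, df, a, b, tol=1e-12, max_iter=100):
    """Safeguarded Newton: Newton's method with a bisection fallback.

    Assumes f(a) and f(b) have opposite signs, so [a, b] brackets a root.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a  # initial guess; any point in [a, b] would do
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Shrink the bracket using the sign of f(x), as in bisection.
        if fa * fx < 0:
            b = x
        else:
            a, fa = x, fx
        # Attempt a Newton step; reject it if the derivative is zero
        # or the step lands outside the current bracket.
        dfx = df(x)
        x_new = x - fx / dfx if dfx != 0 else None
        if x_new is None or not (a < x_new < b):
            x_new = 0.5 * (a + b)  # fall back to the bracket midpoint
        x = x_new
    return x
```

With $f(x)=x^3-x-1$ this converges from the wide bracket $[-10,10]$ even though the loop starts at $x_0=-10$, far from the root near $1.3247$: whenever a Newton step misbehaves near the critical points $\pm1/\sqrt3$, the midpoint fallback keeps the iterate inside the (shrinking) bracket.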