Unfortunately, the main mathematical motivations to take this approach use math you perhaps haven't learned yet.
Idea 1: Linear Algebra
In a linear recurrence, we have a matrix which sends:
$$M:(A_{n},A_{n-1},\dots,A_{n-k+1})^T\to(A_{n+1},A_n,\dots,A_{n-k+2})^T$$
Then the values $r$ are eigenvalues of $M$. Knowing the eigenvalues of $M$ makes it much easier to compute matrix powers $M^n$.
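To make that concrete, here's a small numpy sketch (using the Fibonacci recurrence $A_{n+1}=A_n+A_{n-1}$ with $A_0=0$, $A_1=1$ as the example):

```python
import numpy as np

# Companion matrix sending (A_n, A_{n-1})^T to (A_{n+1}, A_n)^T
# for the Fibonacci recurrence A_{n+1} = A_n + A_{n-1}.
M = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# The eigenvalues are the roots r of x^2 - x - 1.
eigvals, S = np.linalg.eig(M)          # M = S @ diag(eigvals) @ S^{-1}

# Once M is diagonalized, powers are cheap: M^n = S @ diag(eigvals^n) @ S^{-1}.
n = 10
Mn = S @ np.diag(eigvals**n) @ np.linalg.inv(S)

# Apply to the starting vector (A_1, A_0) = (1, 0) to read off (A_11, A_10).
print(Mn @ np.array([1.0, 0.0]))       # approximately (89, 55)
```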
Idea 2: Generating Functions
Alternatively, you can look at it in terms of generating functions. For example, if $A_{n+1}=A_n+A_{n-1}$, we get a power series called the generating function:
$$a(x)=\sum_{n=0}^{\infty} A_nx^n$$
It turns out the recurrence relation can be rewritten as $(1-x-x^2)a(x)=A_0+(A_1-A_0)x$ -- the other terms cancel out. So $$a(x)=\frac{A_0+(A_1-A_0)x}{1-x-x^2}$$
Edit: Here's the "why."
We can see this as:
$$\begin{align}
a(x)&=A_0&+&A_1x&+&A_2x^2&+&\dots&+&A_nx^n+\dots\\
xa(x)&=&\,&A_0x&+&A_1x^2&+&\dots&+&A_{n-1}x^n +\dots\\
x^2a(x)&=&\,&\,&\,&A_0x^2&+&\dots&+&A_{n-2}x^n+\dots
\end{align}$$
Subtracting the second and third from the first gives us:
$$(1-x-x^2)a(x)=A_0+(A_1-A_0)x+(A_2-A_1-A_0)x^2+\dots+(A_n-A_{n-1}-A_{n-2})x^n+\dots$$
But note that $A_2-A_1-A_0=0$, and likewise for all the later terms $A_n-A_{n-1}-A_{n-2}$.
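If you want to watch the cancellation happen, here is a small sympy sketch (the starting values $A_0=3$, $A_1=7$ are arbitrary choices of mine, just to show the argument doesn't depend on Fibonacci's $0,1$):

```python
import sympy as sp

x = sp.symbols('x')
N = 12

# Any sequence satisfying A_{n+1} = A_n + A_{n-1}; starting values are arbitrary.
A = [3, 7]
for _ in range(N):
    A.append(A[-1] + A[-2])

# Truncated generating function a(x) = A_0 + A_1 x + ... + A_{N-1} x^{N-1}.
a = sum(A[k] * x**k for k in range(N))

# Multiply by (1 - x - x^2); every coefficient past the x term cancels,
# except near degree N where the truncation interferes.
product = sp.expand((1 - x - x**2) * a)
truncated = sum(product.coeff(x, k) * x**k for k in range(N - 1))
print(sp.expand(truncated))   # A_0 + (A_1 - A_0)*x, here 3 + 4*x
```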
At first glance, knowing that $(1-x-x^2)a(x)=A_0+(A_1-A_0)x$ doesn't seem to give us much, but if we factor $1-x-x^2=(1-r_1x)(1-r_2x)$ we can use something called "partial fractions" to write:
$$\frac{a+bx}{1-x-x^2} = \frac{\alpha_1}{1-r_1x} + \frac{\alpha_2}{1-r_2x}$$
And we know the power series of these two terms: $$\frac{\alpha_i}{1-r_ix}=\sum_{n=0}^{\infty} (\alpha_ir_i^n)x^n$$
So we get, from adding those two power series, $A_n=\alpha_1r_1^n+\alpha_2r_2^n.$
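For the Fibonacci numbers themselves ($A_0=0$, $A_1=1$, so the numerator is just $x$), this is exactly Binet's formula. Here's a sympy sketch of that computation (the $\alpha_i$ below are worked out by hand from the partial fraction step):

```python
import sympy as sp

x, n = sp.symbols('x n')

# Fibonacci: A_0 = 0, A_1 = 1, so a(x) = x / (1 - x - x^2).
gf = x / (1 - x - x**2)

# 1 - x - x^2 = (1 - r1 x)(1 - r2 x) with r1, r2 the golden ratio and its conjugate.
r1, r2 = (1 + sp.sqrt(5)) / 2, (1 - sp.sqrt(5)) / 2

# Partial fractions give alpha1 = 1/sqrt(5), alpha2 = -1/sqrt(5), so
# A_n = alpha1*r1^n + alpha2*r2^n (Binet's formula).
closed_form = (r1**n - r2**n) / sp.sqrt(5)

# Compare with the power-series coefficients of the generating function.
coeffs = sp.Poly(sp.series(gf, x, 0, 8).removeO(), x).all_coeffs()[::-1]
print(coeffs)                                                  # [0, 1, 1, 2, 3, 5, 8, 13]
print([sp.expand(closed_form.subs(n, k)) for k in range(8)])   # same list
```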
In both cases, we run into a problem when $r_1=r_2$: partial fractions doesn't work as above, and the matrix is (usually) not "diagonalizable." We then get a different answer, but even so, the base case $r^n$ is still an important one.
The case when $r_1=r_2$
In the simple case when there is only one root, we get a partial fraction decomposition like:
$$\frac{\alpha}{1-rx} + \frac{\beta}{(1-rx)^2}$$
(With a recurrence of higher degree, of course, the general terms are of the form $\frac{\alpha}{(1-rx)^k}$, but I'm sticking with the simplest recurrences.)
Then we use that:
$$\frac{1}{(1-rx)^2} = \sum_{n=0}^{\infty} (n+1)r^nx^n$$
So in some way $(n+1)r^n$ is the "more natural" choice, but since $A_n=r^n$ and $B_n=(n+1)r^n$ both satisfy our recurrence, their difference $B_n-A_n=nr^n$ satisfies it as well.
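Here is a quick sympy sketch checking that all three of $r^n$, $(n+1)r^n$, and $nr^n$ satisfy the double-root recurrence $A_{n+1}=2rA_n-r^2A_{n-1}$ (the one whose characteristic polynomial is $(x-r)^2$); it's a verification, not a derivation:

```python
import sympy as sp

n, r = sp.symbols('n r')

def residual(A):
    # A_{n+1} - 2r*A_n + r^2*A_{n-1}; this is zero iff A satisfies the recurrence.
    expr = A.subs(n, n + 1) - 2*r*A + r**2 * A.subs(n, n - 1)
    return sp.expand(sp.powsimp(expr))

for A in (r**n, (n + 1) * r**n, n * r**n):
    print(A, "->", residual(A))   # all three residuals print 0
```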
The direct computational reason is as follows: if $p(x)$ is divisible by $(x-r)^n$, then $p'(x)$ is divisible by $(x-r)^{n-1}$.
Let's take a specific example, $p(x)=x^2-2rx+r^2=(x-r)^2$. Now define $p_n(x)=x^np(x)=x^{n+2}-2rx^{n+1}+r^2x^n$, which is also divisible by $(x-r)^2$. Taking derivatives, we see that:
$$0=p_n'(r)=(n+2)r^{n+1}-2r(n+1)r^n +r^2\cdot nr^{n-1}$$
If $A_n=(n+1)r^n$ then this means that:
$$0=A_{n+1}-2rA_n + r^2A_{n-1}$$
In general, if your polynomial has a root $r$ that repeats $k$ times, then you can use $A_n=q(n)r^n$ where $q(n)$ is any polynomial of degree at most $k-1$.
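As a sanity check on that general statement, here's a sympy sketch for a hypothetical triple root ($k=3$): the recurrence with characteristic polynomial $(x-r)^3$ is $A_{n+1}=3rA_n-3r^2A_{n-1}+r^3A_{n-2}$, and any quadratic $q(n)$ should work:

```python
import sympy as sp

n, r, a, b, c = sp.symbols('n r a b c')

# q(n) r^n with q an arbitrary polynomial of degree k - 1 = 2.
A = (a*n**2 + b*n + c) * r**n

# Residual of A_{n+1} = 3r A_n - 3r^2 A_{n-1} + r^3 A_{n-2},
# the recurrence whose characteristic polynomial is (x - r)^3.
residual = A.subs(n, n + 1) - 3*r*A + 3*r**2*A.subs(n, n - 1) - r**3*A.subs(n, n - 2)
print(sp.expand(sp.powsimp(residual)))   # 0, for every choice of a, b, c
```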
Aside: From the linear algebra perspective, a repeated root means the matrix is not diagonalizable, so you are stuck using Jordan Normal Form, a more general approach. Exponentiation is still relatively easy for a matrix in Jordan Normal Form, but you get polynomials in $n$ for some of the terms.
For example, when $A_{n+1}=2rA_{n}-r^2A_{n-1}$ the matrix is:
$$M=\begin{pmatrix}2r&-r^2\\1&0\end{pmatrix}=S\begin{pmatrix}r&1\\0&r\end{pmatrix}S^{-1}$$
For some invertible matrix $S$. So:
$$M^n = S\begin{pmatrix}r&1\\0&r\end{pmatrix}^nS^{-1}=S\begin{pmatrix}r^n&nr^{n-1}\\0&r^n\end{pmatrix}S^{-1}$$
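Here's a small sympy sketch of that decomposition, picking the concrete value $r=3$ (so the recurrence is $A_{n+1}=6A_n-9A_{n-1}$):

```python
import sympy as sp

r = sp.Integer(3)
# Companion matrix for A_{n+1} = 2r*A_n - r^2*A_{n-1},
# sending (A_n, A_{n-1})^T to (A_{n+1}, A_n)^T.
M = sp.Matrix([[2*r, -r**2],
               [1,    0   ]])

# Jordan decomposition M = S J S^{-1}; J is the 2x2 Jordan block with eigenvalue r.
S, J = M.jordan_form()
print(J)                              # Matrix([[3, 1], [0, 3]])

# Closed form for the block's n-th power, compared against M**n directly.
n = 10
Jn = sp.Matrix([[r**n, n*r**(n - 1)],
                [0,    r**n        ]])
print(S * Jn * S.inv() - M**n)        # zero matrix
```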