$$\lim\limits_{x\to a}f'(x) = \lim\limits_{x\to a}\frac{f(x)-f(a)}{x-a}$$
This claim assumes that $f'$ is necessarily continuous (what we might write as $f \in \mathcal{C}^1(D)$ for the appropriate domain $D$). This is not always the case; there are standard counterexamples, such as the function
$$f(x) := \begin{cases}
x^2 \sin(1/x), & x \ne 0 \\
0, & x = 0 \end{cases}$$
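To spell out why this serves as a counterexample (a standard computation, sketched here for completeness): for $x \ne 0$ the product and chain rules give
$$f'(x) = 2x\sin(1/x) - \cos(1/x),$$
while at the origin the definition of the derivative and the squeeze theorem give
$$f'(0) = \lim_{h \to 0} \frac{h^2 \sin(1/h) - 0}{h} = \lim_{h \to 0} h \sin(1/h) = 0.$$
So $f$ is differentiable everywhere, yet $\lim\limits_{x \to 0} f'(x)$ does not exist, since $2x\sin(1/x) \to 0$ while $\cos(1/x)$ oscillates between $-1$ and $1$; in particular, $f'$ is not continuous at $0$.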
$$\lim\limits_{x\to a}(f(a)+(x-a)f'(x))=\lim\limits_{x\to a}f(x)$$
Claiming this, furthermore, makes certain assumptions about limit laws that need not hold. In particular, what you're doing is more or less claiming
$$\lim_{x \to a} \frac{f(x)-f(a)}{x-a} = \frac{\displaystyle \lim_{x \to a} f(x)-f(a)}{\displaystyle \lim_{x \to a}x-a} \tag{1}$$
and then multiplying by the denominator to get
$$\left( \lim_{x \to a} f'(x) \right) \left( \lim_{x \to a} (x-a) \right) = \lim_{x \to a} \Big( f(x) - f(a) \Big) \tag{2}$$
and then splitting the latter limit to get
$$\left( \lim_{x \to a} f'(x) \right) \left( \lim_{x \to a} (x-a) \right) = - f(a) + \lim_{x \to a} f(x) \tag{3}$$
and then adding $f(a)$ to both sides,
$$f(a)+ \left( \lim_{x \to a} f'(x) \right) \left( \lim_{x \to a} (x-a) \right) = \lim_{x \to a} f(x) \tag{4}$$
and then recombining the left hand side all under the same limit:
$$ \lim_{x \to a} \Big( f(a) + f'(x) (x-a) \Big) = \lim_{x \to a} f(x) \tag{5}$$
Quite a few rules for limits were broken along the way.
In $(1)$, we can only claim that
$$\lim_{x \to a} \frac{f(x)}{g(x)} = \frac{\displaystyle \lim_{x\to a} f(x)}{\displaystyle \lim_{x\to a} g(x)}$$
when all three limits involved exist (as finite numbers, to be clear: $\pm \infty$ is not a case where we say a limit exists here), and the bottom limit of the fraction is nonzero. A problem arises: the bottom limit goes to zero in $(1)$.
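For a minimal illustration of why the nonzero-denominator hypothesis matters (unrelated to the particular $f$ above):
$$\lim_{x \to 0} \frac{x}{x} = 1, \qquad \text{yet} \qquad \frac{\displaystyle \lim_{x \to 0} x}{\displaystyle \lim_{x \to 0} x} = \frac{0}{0}$$
is not even defined, so the quotient law simply does not apply there.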
In $(2)$, you just multiplied both sides by $0$ as a result; $(2)$ is identical to saying $0=0$, so no actual information of note may be derived.
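To see why nothing survives such a step: multiplying any two (possibly unequal) numbers by $0$ produces a true equality, e.g.
$$3 \cdot 0 = 5 \cdot 0,$$
even though $3 \ne 5$, so one cannot "divide the $0$ back out" to recover the original claim.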
In $(3)$, I would be hesitant to say you can split the limit up like that. Like the above rule, for any $\alpha,\beta \in \mathbb{R}$, we can only say
$$
\lim_{x \to a} \Big( \alpha f(x) + \beta g(x) \Big) =
\alpha \lim_{x \to a} f(x) + \beta \lim_{x \to a} g(x)$$
when all three limits exist. Does the limit as $x \to a$ of $f(x)$ necessarily exist? One should do more work to justify this claim.
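As a small example of how the splitting can fail when the individual limits are not known to exist:
$$\lim_{x \to 0} \left( \frac{1}{x} + \left( -\frac{1}{x} \right) \right) = 0,$$
even though neither $\lim\limits_{x \to 0} \frac{1}{x}$ nor $\lim\limits_{x \to 0} \left( -\frac{1}{x} \right)$ exists, so "the sum of the limits" is meaningless here.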
$(5)$ raises many of the same concerns as the previous steps (with a product law playing the role the quotient law played earlier), just applied in reverse.
Granted, for a restricted class of functions, your equality does hold, and we have the more general observation mentioned in Golden_Ratio's answer that
$$\lim_{x \to a} \Big( f(a) + g(x) \Big) = \lim_{x \to a} f(x)$$
when $g(x) \xrightarrow{x\to a} 0$ (together with the continuity of $f$ at $a$, which differentiability there supplies), which raises the question: why is the formula
$$f(x) \approx f(a) + f'(a) (x-a)$$
considered a good approximation?
One can certainly handwave some details. If
$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x-a}$$
is expected to hold, then for $x$ very near $a$ (say, $x = a + \varepsilon$ for some very small $\varepsilon$ with $|\varepsilon| > 0$), the difference quotient should already be close to its limit, the derivative. That is,
$$f'(a) \approx \frac{f(a+\varepsilon) - f(a)}{a +\varepsilon - a} = \frac{f(a+\varepsilon)-f(a)}{\varepsilon}$$
But then rearranging,
$$f(a+\varepsilon) \approx \varepsilon f'(a) + f(a)$$
and since $\varepsilon = x-a$, the desired approximation arises. Of course, when concerned with rigor, one should avoid such hand-waving.
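As a quick numerical sanity check of the approximation (with numbers chosen purely for illustration): take $f(x) = \sqrt{x}$ and $a = 1$, so $f'(a) = \tfrac{1}{2}$. Then for $x = 1.02$,
$$f(a) + f'(a)(x - a) = 1 + \tfrac{1}{2}(0.02) = 1.01,$$
while $\sqrt{1.02} = 1.00995\ldots$, so the linear approximation is off by roughly $5 \times 10^{-5}$, far smaller than $x - a = 0.02$ itself.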
A past MSE post somewhat handles why the tangent-line approximation is the "best" linear approximation (since your scalings of $f'(x)$ still render the approximation linear) -- it is the only linear function (in the sense of a form $y=mx+b$) whose error tends to $0$ faster than $x-a$ tends to $0$.
Also of interest may be Taylor's theorem, particularly results tied to the remainder term and bounds on it.
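For reference, the first-order case of Taylor's theorem with the Lagrange form of the remainder (assuming $f$ is twice differentiable on an interval around $a$) reads
$$f(x) = f(a) + f'(a)(x - a) + \frac{f''(\xi)}{2}(x - a)^2$$
for some $\xi$ between $a$ and $x$, which makes the error of the tangent-line approximation explicit and bounds it by $\tfrac{1}{2}\big(\sup |f''|\big)(x - a)^2$ on that interval.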