I'm not sure exactly what counts as the right intuition here -- over $\mathbb R$ for example, a rotation through an angle that is not a multiple of $\pi$ has no real eigenvalues or eigenvectors, so linear transformations of that kind appear to be left out in the cold if you seek to understand linear maps in terms of eigenvalues.
On the other hand, if you move to an algebraically closed field such as $\mathbb C$, then every linear map has an eigenvalue (which is equivalent to the statement that every nonconstant polynomial has a root). Now an eigenvector of a linear map gives a $1$-dimensional invariant subspace: if $\alpha \colon V \to V$ and $\alpha(v_0)=\lambda_0 v_0$ with $v_0\neq 0$, then $L=\mathbb C.v_0$ is preserved by $\alpha$ and $\alpha_{|L}$ is just multiplication by $\lambda_0$. Thus on the direct sum of the eigenspaces of $\alpha$, the linear map just acts as scalar multiplication (by a possibly different scalar on each eigenspace).
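To make the contrast concrete (the specific matrix is just my illustrative choice): the rotation of the plane through a right angle has matrix
$$
R=\left(\begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right),
$$
whose characteristic polynomial $t^2+1$ has no real roots, so $R$ has no real eigenvalues; over $\mathbb C$, however, it has eigenvalues $\pm i$, since $R.(1,-i)^T = i.(1,-i)^T$ and $R.(1,i)^T = -i.(1,i)^T$.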
Algebraic multiplicity arises, however, because linear maps in two dimensions are more interesting than linear maps in dimension $1$: over $\mathbb R$ this was obvious because of things like rotations, but it remains true over $\mathbb C$, though the fact that $\mathbb C$ is algebraically closed constrains things:
If $\alpha\colon V\to V$ is a linear map where $\dim(V)=2$, then $\alpha$ has an eigenvector, $v_0$ say, with eigenvalue $\lambda_0$, and if $L=\mathbb C.v_0$, then $\alpha$ acts by scalar multiplication on $L$. Now for any $v_1 \notin L$ the set $\{v_0,v_1\}$ is a basis of $V$ and if $\alpha(v_1) = \mu_1.v_1+\mu_0.v_0$ for some $\mu_0,\mu_1 \in \mathbb C$, then
$$
\begin{split}
\alpha(v_1+c.v_0) &= \mu_1.v_1 + \mu_0.v_0 + c\lambda_0.v_0 \\
&= \mu_1.v_1 + (\mu_0+c\lambda_0).v_0
\end{split}
$$
so that $v_1+c.v_0$ is an eigenvector with eigenvalue $\mu_1$ precisely when $\mu_1.c = \mu_0+c\lambda_0$ (compare the $v_0$-coefficients with those of $\mu_1.(v_1+c.v_0)$), that is, when $c = \mu_0/(\mu_1-\lambda_0)$, which is solvable provided $\mu_1 \neq \lambda_0$. Thus $V$ has a basis of $\alpha$-eigenvectors unless $\mu_1=\lambda_0$ and $\mu_0 \neq 0$, in which case $V$ has the basis $\{u_0,u_1\}$, where $u_0=v_0$ and $u_1=\mu_0^{-1}.v_1$, with respect to which $\alpha$ has matrix
$$
\left(\begin{array}{cc} \lambda_0 & 1 \\ 0 & \lambda_0 \end{array} \right)
$$
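For a concrete instance of the generic case (the numbers are my own illustrative choice, not from the argument above): suppose that in the basis $\{v_0,v_1\}$ the map $\alpha$ has matrix
$$
\left(\begin{array}{cc} 2 & 3 \\ 0 & 5 \end{array} \right),
$$
so that $\lambda_0 = 2$, $\mu_0 = 3$ and $\mu_1 = 5$. Then $c = \mu_0/(\mu_1-\lambda_0) = 1$, and indeed $\alpha(v_1+v_0) = 5.v_1+3.v_0+2.v_0 = 5.(v_1+v_0)$, so $\{v_0, v_1+v_0\}$ is a basis of eigenvectors and $\alpha$ is diagonalisable, unlike the map with the matrix displayed just above.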
From this matrix we see that $\lambda_0$ has geometric multiplicity $1$ and algebraic multiplicity $2$. The terminology comes from considering eigenspaces and the characteristic polynomial, but the fact that the algebraic multiplicity is at least the geometric multiplicity just reflects the existence, in dimension $2$ and higher, of shear maps such as $s\colon V\to V$, given by $s(u_0)=u_0$ and $s(u_1)=u_1+u_0$. Indeed $\alpha = (\lambda_0-1)\text{id}_V + s$.
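Spelling this decomposition out in the basis $\{u_0,u_1\}$ (nothing new here, just the matrices of the maps already described) it reads
$$
\left(\begin{array}{cc} \lambda_0 & 1 \\ 0 & \lambda_0 \end{array} \right)
= (\lambda_0-1)\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)
+ \left(\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right),
$$
where the last matrix is the shear $s$. The characteristic polynomial of $\alpha$ is $(t-\lambda_0)^2$, which is where the algebraic multiplicity $2$ comes from, while the $\lambda_0$-eigenspace is just the line $\mathbb C.u_0$, giving geometric multiplicity $1$.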