The eigenspace of (a square matrix) $A$ corresponding to $\lambda$ is the collection of all vectors $\mathbf{x}$ that satisfy $A\mathbf{x}=\lambda\mathbf{x}$, or equivalently, $(A-\lambda I)\mathbf{x}=\mathbf{0}$.
The generalized eigenspace of $A$ corresponding to $\lambda$ is the collection of all vectors $\mathbf{x}$ for which there exists a positive integer $k$ such that $(A-\lambda I)^k\mathbf{x}=\mathbf{0}$.
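To make the difference between the two definitions concrete, here is the standard small example (a $2\times 2$ Jordan block):
$$A=\begin{pmatrix}\lambda & 1\\ 0 & \lambda\end{pmatrix},\qquad A-\lambda I=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix},\qquad (A-\lambda I)^2=\begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix}.$$
The eigenspace is only $\operatorname{span}\{\mathbf{e}_1\}$, but since $(A-\lambda I)^2$ annihilates every vector, the generalized eigenspace is the whole plane.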
What I'm looking for is an equivalent point of view, as follows: the equation $A\mathbf{x}=\lambda\mathbf{x}$ can be read as the ubiquitous saying that an eigenvector is a vector that is merely scaled by a factor when $A$ acts on it. By looking at all vectors $\mathbf{x}$ which satisfy $(A-\lambda I)\mathbf{x}=\mathbf{0}$, we obtain exactly those vectors.
But looking at $(A-\lambda I)^k\mathbf{x}=\mathbf{0}$, I can't see a comparably intuitive idea behind defining the generalized eigenspace this way. Sure, if we want a "simple", deconstructed picture of what our linear function does (i.e., an almost diagonal form), it works, at least over an algebraically closed field. The impression I got from my textbook is essentially that we are trying to obtain a nearly diagonal matrix, that the only way to construct such a thing is by looking at generalized eigenspaces, and that by quietly doing so we obtain what we want.
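(To spell out what I mean by "nearly diagonal": the result my textbook builds toward is that, over an algebraically closed field, the whole space decomposes as a direct sum of generalized eigenspaces,
$$V=\bigoplus_{i}\ker\big((A-\lambda_i I)^{m_i}\big),$$
where the $\lambda_i$ are the distinct eigenvalues and $m_i$ is the algebraic multiplicity of $\lambda_i$; on each summand, $A$ acts as $\lambda_i I$ plus a nilpotent part, i.e., as a block that is upper triangular in a suitable basis.)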
But that can't be the way this concept was conceived, right?
So to sum the question up: if I want to find all vectors that are merely "stretched" by a factor of $\lambda$ by my linear function, I look at the eigenspace $E=\{ \mathbf{x} \mid (A-\lambda I)\mathbf{x}=\mathbf{0}\}$. Is there a similar thought behind the generalized eigenspace $E^{k}=\{ \mathbf{x} \mid (A-\lambda I)^k\mathbf{x}=\mathbf{0}\}$? How would one arrive at this definition?
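For what it's worth, I do see the purely formal picture: these sets form an increasing chain
$$E^{1}\subseteq E^{2}\subseteq E^{3}\subseteq\cdots$$
that must stabilize in finite dimensions, and the generalized eigenspace is their union. What I'm missing is a geometric reading of the individual steps, analogous to "stretched by $\lambda$" for $E^{1}$.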