Found this one sitting around on my laptop, and realized I'd never posted it; better late than never, I guess! Here goes:
Not to put too fine a point on it, but Maisam Hedyelloo's answer does not tell the whole story. True, the eigenvalues of the matrix $A$ are the repeated pair 3, 3; this follows from the fact that the characteristic polynomial of $A$ is
$\det(A - \lambda I) = \det(\begin{bmatrix}
5 - \lambda & 2 \\
-2 & 1 - \lambda
\end{bmatrix}) = (5 - \lambda)(1 - \lambda) + 4 = \lambda^2 - 6\lambda + 9= (\lambda - 3)^2$,
which is easily seen to have the root $\lambda = 3$ of multiplicity $2$. Now consider the matrix $N$ defined by the equation
$2N = A - \lambda I = A - 3I = \begin{bmatrix}
2 & 2 \\
-2 & -2
\end{bmatrix} = 2 \begin{bmatrix}
1 & 1 \\
-1 & -1
\end{bmatrix}$,
so that
$N = \begin{bmatrix}
1 & 1 \\
-1 & -1
\end{bmatrix}$.
A simple calculation reveals that
$N^2 = 0$,
i.e., $N$ is nilpotent of degree $2$; hence $2N$ is as well: $(2N)^2 = 4N^2 = 0$. Next, we recall that for any initial vector $y(0) = y_0$, the solution to the ODE is $y(t) = e^{At}y_0$; and, based upon what we have done already, the matrix exponential $e^{At}$ is easily had. Note that, by the above, $A = 3I + 2N$, so that
$e^{At} = e^{(3I + 2N)t}$.
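Before proceeding, the claims so far are easy to check numerically; the following is a quick sketch with numpy (the matrices are transcribed from the discussion above):

```python
import numpy as np

# A and its nilpotent factor N, transcribed from the discussion above.
A = np.array([[5.0, 2.0], [-2.0, 1.0]])
N = np.array([[1.0, 1.0], [-1.0, -1.0]])

# The characteristic polynomial is l^2 - (tr A) l + det A = l^2 - 6l + 9,
# i.e. (l - 3)^2, so 3 is an eigenvalue of multiplicity 2.
print(np.isclose(np.trace(A), 6.0))            # True
print(np.isclose(np.linalg.det(A), 9.0))       # True

# N is nilpotent of degree 2; hence (2N)^2 = 4 N^2 = 0 as well.
print(np.allclose(N @ N, np.zeros((2, 2))))    # True

# The decomposition A = 3I + 2N used in what follows.
print(np.allclose(A, 3 * np.eye(2) + 2 * N))   # True
```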
At this point, the critical step is to observe that
$e^{(3I + 2N)t} = e^{3It}e^{2Nt}$;
this equation between the matrix exponentials on either side holds by virtue of the fact that the two matrices $3I$ and $2N$ commute; that is, $(3I)(2N) = (2N)(3I)$, or, if you prefer the bracket notation, $[3I, 2N] = 0$. Under these circumstances, it can be shown that the rule $e^{X + Y} = e^Xe^Y$ holds for matrices exactly as it does for ordinary real or complex numbers. In the present case, of course, the relation $[3I, 2N] = 0$ is trivially obvious, since one of the matrices in question is a scalar multiple of the identity matrix $I$; but the result is quite general. A full discussion and proof may be found in Hirsch, Smale, and Devaney's Differential Equations, Dynamical Systems, and an Introduction to Chaos, Second Edition, Elsevier Academic Press, 2004, pp. 123-130. In any event we can now conclude that
$e^{At} = e^{3It}e^{2Nt}$,
and since it is evident that $e^{3It} = e^{3t}I$, we further have
$e^{At} = e^{3t}Ie^{2Nt} = e^{3t}e^{2Nt}$,
so all we really need to do is evaluate $e^{2Nt}$; but this is easy: since $N^2 = 0$, the power series expansion of $e^{2Nt}$ is truncated after the first-degree term, hence we have
$e^{2Nt} = I + 2Nt$,
whence
$e^{At} = e^{3t}e^{2Nt} = e^{3t}(I + 2Nt)$.
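The closed form $e^{At} = e^{3t}(I + 2Nt)$ can also be confirmed numerically. The sketch below evaluates the matrix exponential by a straightforward truncation of its power series (adequate for this small matrix and a modest $t$; a library routine such as scipy's `expm` would serve equally well) and compares:

```python
import numpy as np

def expm_series(M, terms=40):
    """Matrix exponential via truncated power series; fine for small ||M||."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[5.0, 2.0], [-2.0, 1.0]])
N = np.array([[1.0, 1.0], [-1.0, -1.0]])
I = np.eye(2)
t = 0.7  # an arbitrary test time

# e^{At} = e^{3t}(I + 2Nt), as derived above.
closed = np.exp(3 * t) * (I + 2 * t * N)
print(np.allclose(closed, expm_series(A * t)))  # True

# The splitting e^{(3I+2N)t} = e^{3It} e^{2Nt} is legitimate because
# the factors commute: [3I, 2N] = 0.
split = expm_series(3 * I * t) @ expm_series(2 * N * t)
print(np.allclose(split, expm_series(A * t)))   # True
```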
It is now a relatively simple matter to write down the solution which passes through the point
$y_0 = (y_{01}, y_{02})^T$ at the time $t = 0$. Since
$I + 2Nt = \begin{bmatrix}
1 + 2t & 2t \\
-2t & 1 - 2t
\end{bmatrix}$,
we have
$y(t) = e^{At}y_0 = e^{3t}(I + 2Nt)y_0 = e^{3t}\begin{pmatrix}y_{01} + 2t(y_{01} + y_{02}) \\ y_{02} - 2t(y_{01} + y_{02})\end{pmatrix}$
$= e^{3t}\begin{pmatrix}y_{01} \\y_{02}\end{pmatrix} + (y_{01} + y_{02})te^{3t}\begin{pmatrix} 2 \\ -2 \end{pmatrix}$;
it is easy to verify, by a direct and elementary computation, that this $y(t)$ satisfies $y'(t) = Ay(t)$ with $y(0) = y_0$. Examining this equation, we observe that in the event that $y_{01} + y_{02} = 0$, i.e., $y_{01} = -y_{02}$, the solution $y(t) = e^{3t}(y_{01}, y_{02})^T = e^{3t}(y_{01}, -y_{01})^T = y_{01}e^{3t}(1, -1)^T$ is in fact of the general form suggested by Maisam Hedyelloo in his answer; but this is a very special solution, lying as it does in the unique, one-dimensional eigenspace of $A$, generated by the eigenvector $(1, -1)^T$. In general we should expect $y_{01} + y_{02} \ne 0$, in which case the solution contains a (vector) term proportional to $te^{3t}$. We can rewrite $y(t)$ thus:
$y(t) = e^{3t}\begin{pmatrix}y_{01} \\y_{02}\end{pmatrix} + (y_{01} + y_{02})te^{3t}\begin{pmatrix} 2 \\ -2 \end{pmatrix}$
$= e^{3t}\begin{pmatrix} y_{01} \\ -y_{01}\end{pmatrix} + e^{3t}\begin{pmatrix} 0 \\ y_{01} + y_{02}\end{pmatrix} + (y_{01} + y_{02})te^{3t}\begin{pmatrix} 2 \\ -2 \end{pmatrix}$
$= y_{01}e^{3t}\begin{pmatrix} 1 \\ -1\end{pmatrix} + (y_{01} + y_{02})e^{3t}\begin{pmatrix} 0 \\ 1\end{pmatrix} + 2(y_{01} + y_{02})te^{3t}\begin{pmatrix} 1 \\ -1 \end{pmatrix}$
$= (y_{01}+ 2(y_{01} + y_{02})t)e^{3t}\begin{pmatrix} 1 \\ -1\end{pmatrix} + 2(y_{01} + y_{02})e^{3t}\begin{pmatrix} 0 \\ \frac{1}{2}\end{pmatrix}$,
which exhibits the complete solution in terms of a component lying in the unique eigenspace of $A$, spanned by $(1, -1)^T$, and a component $2(y_{01} + y_{02})e^{3t}(0, \frac{1}{2})^T$ which lies in the one-dimensional subspace spanned by the generalized eigenvector $(0, \frac{1}{2})^T$. The fact that Amzoti's choice of generalized eigenvector, $(-\frac{1}{2}, 0)^T$, is different from the present one, $(0, \frac{1}{2})^T$, yet still yields a valid solution is, I believe, explained by the fact that
there is a certain latitude in the choice of the generalized eigenvector $v_2$ corresponding to the eigenvector $v_1$ via the formula $(A - \lambda I)v_2 = v_1$; for if we add a scalar multiple of $v_1$ to $v_2$, we still obtain a generalized eigenvector satisfying the equation $(A - \lambda I)v = v_1$: taking $v = v_2 + \alpha v_1$, we see that
$(A - \lambda I)(v_2 + \alpha v_1) = (A - \lambda I)v_2 + \alpha(A - \lambda I)v_1 = v_1$
by virtue of the fact that $(A - \lambda I)v_1 = 0$; if we now observe that $(\frac{1}{2}, 0)^T = (0, \frac{1}{2})^T + \frac{1}{2}(1, -1)^T$, we see that the apparent discrepancy between Amzoti's choice of generalized eigenvector and mine is resolved. (Note: Amzoti chooses $(-1, 1)^T$ for the eigenvector and $(-\frac{1}{2}, 0)^T$ for the generalized eigenvector, which is consistent with our choice of eigenvector $(1, -1)^T$, since eigenvectors are, shall we say, scale invariant; that is, $Av = \lambda v$ if and only if $A(\alpha v) = \lambda (\alpha v)$ for $\alpha \ne 0$.)
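This bookkeeping about the two choices of generalized eigenvector can likewise be checked in a few lines; again a sketch in numpy, with $v_1$ and $v_2$ as in the text:

```python
import numpy as np

A = np.array([[5.0, 2.0], [-2.0, 1.0]])
B = A - 3.0 * np.eye(2)            # A - lambda*I with lambda = 3
v1 = np.array([1.0, -1.0])         # the eigenvector
v2 = np.array([0.0, 0.5])          # the generalized eigenvector chosen here

print(np.allclose(A @ v1, 3.0 * v1))   # True: A v1 = 3 v1
print(np.allclose(B @ v2, v1))         # True: (A - 3I) v2 = v1

# Adding a multiple of v1 leaves the defining equation intact:
# (1/2, 0)^T = (0, 1/2)^T + (1/2)(1, -1)^T is another valid choice.
w = v2 + 0.5 * v1
print(np.allclose(w, [0.5, 0.0]))      # True
print(np.allclose(B @ w, v1))          # True
```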
In the development I have given here, I have used the concepts of nilpotence and the nilpotent part $2N = A - \lambda I$ of the matrix $A$ in lieu of the more conventional methodology which exploits generalized eigenvectors; such an approach makes the direct calculation of $e^{At}$ simple, as has been seen above. It is in fact the nilpotent part $2N$ of $A$ which gives rise to the generalized eigenvectors, since $(A - \lambda I)v_2 = 2Nv_2 = v_1$, and we have seen that it is the nilpotent part which gives rise to the terms of the functional form $te^{3t}$ in our solution. Such terms may also be derived via generalized eigenvectors directly, as is well known, and as has been mentioned by Amzoti in his/her answer. All of this machinery of course generalizes to higher dimensions; but the discussion of such matters would take us even further afield, and I have, for the moment, written enough. I hope these words are graced to bring with them a little of what Henry Jones, Sr., sought in Indiana Jones and the Last Crusade, and that is: illumination.
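Finally, for the skeptical, the complete solution can be verified symbolically. A brief sketch with sympy (assuming that library is available) checks that $y(t) = e^{3t}(I + 2Nt)y_0$ satisfies $y' = Ay$ and $y(0) = y_0$ for arbitrary $y_{01}, y_{02}$:

```python
import sympy as sp

t, y01, y02 = sp.symbols('t y01 y02')
A = sp.Matrix([[5, 2], [-2, 1]])
N = sp.Matrix([[1, 1], [-1, -1]])
y0 = sp.Matrix([y01, y02])

# The solution derived above: y(t) = e^{3t} (I + 2Nt) y0.
y = sp.exp(3 * t) * (sp.eye(2) + 2 * t * N) * y0

# y'(t) - A y(t) should vanish identically, and y(0) should equal y0.
print(sp.simplify(y.diff(t) - A * y))  # Matrix([[0], [0]])
print(y.subs(t, 0) == y0)              # True
```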
Fiat Lux!