$\newcommand{\var}{\operatorname{var}}\newcommand{\cov}{\operatorname{cov}}\newcommand{\E}{\operatorname{E}}$ For points $0\le t_1 < t_2<\cdots < t_n$ we have an $n\times n$ matrix $\Sigma$ of covariances $\cov (X_{t_i},X_{t_j})$.
For the moment neglecting the case where $\Sigma$ is singular, we see that this is a symmetric positive-definite matrix with real entries. In linear algebra one learns that such a matrix can be diagonalized by an orthogonal $n\times n$ matrix $G$, i.e. $GG'=G'G=I_n$ and $D=G'\Sigma G$ is a diagonal matrix (where $A'$ means the transpose of $A$). Since $\Sigma$ is positive-definite, the diagonal entries of $D$ are positive. Replacing each of them with the reciprocal of its square root, we get a positive-definite symmetric matrix $D^{-1/2}$ such that $(D^{-1/2})^2 = D^{-1} = G'\Sigma^{-1} G$. Now we give the name $\Sigma^{-1/2}$ to $GD^{-1/2}G'$, and observe that $\Sigma^{-1/2}(\Sigma^{-1/2})' = GD^{-1/2}G'GD^{-1/2}G' = GD^{-1}G' = \Sigma^{-1}$, since $\Sigma = GDG'$.
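If a concrete check helps, here is a minimal NumPy sketch of this construction (the test matrix, the dimension, and the seed are illustrative choices, not part of the argument): diagonalize $\Sigma$ with `numpy.linalg.eigh`, take reciprocal square roots of the eigenvalues, and verify the defining property of $\Sigma^{-1/2}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary symmetric positive-definite covariance matrix for illustration.
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + np.eye(4)

# eigh diagonalizes a symmetric matrix: Sigma = G @ diag(eigvals) @ G'.
eigvals, G = np.linalg.eigh(Sigma)

# Replace each (positive) eigenvalue by the reciprocal of its square root.
Sigma_inv_sqrt = G @ np.diag(1.0 / np.sqrt(eigvals)) @ G.T

# Verify: Sigma^{-1/2} (Sigma^{-1/2})' = Sigma^{-1}.
assert np.allclose(Sigma_inv_sqrt @ Sigma_inv_sqrt.T, np.linalg.inv(Sigma))
```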
What is the matrix of covariances of
$$
\Sigma^{-1/2}\begin{bmatrix} X_{t_1} \\ \vdots \\ X_{t_n} \end{bmatrix} \text{ ?} \tag 1
$$
If we take any $k\times n$ real matrix $M$, the matrix of covariances of
$$
M\begin{bmatrix} X_{t_1} \\ \vdots \\ X_{t_n} \end{bmatrix}
$$
is $M\Sigma M'$ (a $k\times k$ matrix, not $n\times n$). Hence the matrix of covariances of the vector in $(1)$ is $\Sigma^{-1/2}\Sigma(\Sigma^{-1/2})' = \Sigma^{-1/2}\Sigma\Sigma^{-1/2}$, using the symmetry of $\Sigma^{-1/2}$. If you can satisfy yourself that this is $I_n$ (hint: $\Sigma$ and $\Sigma^{-1/2}$ are both diagonalized by $G$, so they commute and the product is $\Sigma^{-1}\Sigma$), then you're where you need to be, since achieving that is the reason for all of the above.
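Here is a hedged empirical sketch of both claims, again assuming NumPy (the sample size, the particular $\Sigma$ and $M$, and the tolerances are all arbitrary): the sampled covariance of $MX$ should be close to $M\Sigma M'$, and the whitened samples $\Sigma^{-1/2}X$ should have covariance close to $I_n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, N = 3, 2, 200_000

# An arbitrary symmetric positive-definite Sigma, as before.
A = rng.standard_normal((n, n))
Sigma = A @ A.T + np.eye(n)

# N zero-mean Gaussian samples with covariance Sigma (one sample per row).
X = rng.multivariate_normal(np.zeros(n), Sigma, size=N)

# The covariance of M X should be M Sigma M' (a k x k matrix).
M = rng.standard_normal((k, n))
assert np.allclose(np.cov((X @ M.T).T), M @ Sigma @ M.T, rtol=0.05, atol=0.1)

# Whitening: Sigma^{-1/2} X should have covariance approximately I_n.
eigvals, G = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = G @ np.diag(1.0 / np.sqrt(eigvals)) @ G.T
assert np.allclose(np.cov((X @ Sigma_inv_sqrt.T).T), np.eye(n), atol=0.05)
```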
Now we have reduced the question to this: if $U_{t_1},\ldots,U_{t_n}$ are jointly Gaussian with mean zero and their matrix of covariances is $I_n$, why is that enough to determine their distribution completely? That is, why must two zero-mean Gaussian random vectors that both have $I_n$ as their matrix of covariances have identical distributions?
Call that $n\times 1$ vector $U$. Notice that for any $n\times n$ orthogonal matrix $M$, the matrix of covariances of $MU$ is $M I_n M' = I_n$. In other words, rotating does not change the distribution. Rotation-invariance implies the density depends on the coordinates only through their sum of squares $u_1^2+\cdots+u_n^2$. Add to that the assumption of Gaussianity of each component separately and you have a density of the form $\text{constant}\cdot\exp(\text{constant}\cdot(u_1^2+\cdots+u_n^2))$. Requiring the density to integrate to $1$ and each component to have variance $1$ then fixes both constants, so our assumptions determine the density completely.
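A small sketch of the rotation-invariance point (the dimension and seed are again arbitrary): the standard Gaussian density $(2\pi)^{-n/2}\exp\left(-\tfrac12(u_1^2+\cdots+u_n^2)\right)$ takes the same value at $u$ and at $Qu$ for any orthogonal $Q$, because $\|Qu\|=\|u\|$.

```python
import numpy as np

def std_gaussian_density(u):
    """Density of a zero-mean Gaussian vector with covariance I_n."""
    n = u.size
    return (2 * np.pi) ** (-n / 2) * np.exp(-0.5 * (u @ u))

rng = np.random.default_rng(2)
n = 5

# A random orthogonal matrix Q, obtained via QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# The density is unchanged by the rotation u -> Q u.
u = rng.standard_normal(n)
assert np.isclose(std_gaussian_density(u), std_gaussian_density(Q @ u))
```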