A Gaussian Process (GP) is a collection of random variables $X_t : \Omega \to \mathbb{R}$ over an index set $T$ such that for every finite collection $t_1, \ldots, t_n \in T$, the vector $(X_{t_1}, \ldots, X_{t_n})$ has a multivariate Gaussian distribution. We can associate with it the mean function $\mu : T \to \mathbb{R}$ and the covariance function $C : T \times T \to \mathbb{R}$ given by $$\mu(t) = \mathbb{E}[X_t] ~~\text{ and }~~ C(t,r) = \operatorname{Cov}(X_t, X_r).$$
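To spell out the finite-dimensional picture in standard notation (this is just unpacking the definition above): restricting $\mu$ and $C$ to finitely many indices gives exactly the parameters of the corresponding marginal,
$$(X_{t_1}, \ldots, X_{t_n}) \sim \mathcal{N}(m, \Sigma) \quad \text{with} \quad m_i = \mu(t_i), \quad \Sigma_{ij} = C(t_i, t_j),$$
so every finite-dimensional distribution of the process is determined by $\mu$ and $C$.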
The question I have is: Why is the Gaussian Process uniquely determined by (the values of) the functions $\mu$ and $C$?
Before you scream 'Duplicate' let me say that I know this question has been asked many times already, for example here and here. However, the people who asked it accepted answers that were absolutely incomplete. My question is not why a finite number of random variables $(X_1, \ldots, X_n)$ that are multivariate Gaussian distributed with mean $\mu$ and covariance matrix $\Sigma$ are uniquely determined by $\mu$ and $\Sigma$, but rather why the whole Gaussian Process is uniquely determined by its finite subsets. I even happen to think that the answer to the question
Is a Gaussian Process uniquely determined by its mean and covariance function?
is either 'no' or 'the question is not precise'. For what does it mean for two Gaussian Processes $X = (X_t)_{t \in T}$ and $Y = (Y_t)_{t \in T}$ to be 'equal', or at least 'equal in distribution'? It means that the measures induced by the total random variables $X$ and $Y$ on the product space $\prod_{t \in T} \mathbb{R} = \mathbb{R}^T$ (a possibly uncountably infinite product) are equal.
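To spell this out (a sketch in standard notation, where $P_X$ denotes the law of $X$ on the product space): the finite-dimensional distributions prescribe $P_X$ only on the cylinder sets,
$$P_X\big(\{\omega \in \mathbb{R}^T : (\omega_{t_1}, \ldots, \omega_{t_n}) \in B\}\big) = \mathbb{P}\big((X_{t_1}, \ldots, X_{t_n}) \in B\big), \qquad B \in \mathcal{B}(\mathbb{R}^n),$$
and a priori this says nothing about sets outside this collection.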
Question: Can somebody explain why the total induced measures are uniquely determined by their finite-dimensional marginals?
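If I had to guess, the intended argument is the following (a sketch; please correct me if this is exactly where the subtlety lies): the cylinder sets are closed under finite intersections, hence form a $\pi$-system, and by definition they generate the product $\sigma$-algebra,
$$\sigma\big(\{\text{cylinder sets}\}\big) = \bigotimes_{t \in T} \mathcal{B}(\mathbb{R}),$$
so Dynkin's $\pi$-$\lambda$ theorem would force two probability measures that agree on all cylinder sets to agree on the whole product $\sigma$-algebra. Is this the right route, and does it really settle the uniqueness question?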