The SVD is a generalization of the eigendecomposition. It is defined as follows.
Suppose $ A \in \mathbb{R}^{m \times n}$ (for complex matrices, the transposes below become conjugate transposes). Then
$$A = U \Sigma V^{T} $$
where $U$ and $V$ are orthogonal matrices and $\Sigma $ is an $m \times n$ diagonal matrix of nonnegative singular values. The connection to the eigendecomposition comes from forming $AA^{T}$ (for centered data, this is proportional to the covariance matrix):
$$AA^{T} = (U \Sigma V^{T}) (U \Sigma V^{T})^{T} $$
$$AA^{T} = (U \Sigma V^{T}) (V \Sigma^{T}U^{T})$$
$$ AA^{T} = U \Sigma V^{T} V \Sigma^{T} U^{T}$$
Now $VV^{T} =V^{T}V = I $
$$ AA^{T} = U \Sigma \Sigma^{T} U^{T} $$
Also, $\Sigma \Sigma^{T}$ is an $m \times m$ diagonal matrix whose nonzero entries are the squared singular values; abbreviate it as $\Sigma^{2}$ (when $m = n$, $\Sigma^{T} = \Sigma$ exactly):
$$ AA^{T} = U\Sigma^{2} U^{T}$$
The squared singular values are exactly the eigenvalues of $AA^{T}$, so with $\Sigma^{2} = \Lambda$ we recover an eigendecomposition:
$$ AA^{T} = U \Lambda U^{T}$$
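To make this concrete, here is a minimal NumPy sketch (with a randomly generated matrix of arbitrary shape) checking both the factorization and the claim that the eigenvalues of $AA^{T}$ are the squared singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # arbitrary example matrix

U, s, Vt = np.linalg.svd(A)              # full SVD: U is 5x5, Vt is 3x3

# Rebuild the rectangular diagonal Sigma and verify A = U Sigma V^T.
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)
assert np.allclose(A, U @ Sigma @ Vt)

# The nonzero eigenvalues of A A^T are the squared singular values
# (A A^T is 5x5 but has rank at most 3, so two eigenvalues are ~0).
eigvals = np.linalg.eigvalsh(A @ A.T)    # ascending order
assert np.allclose(eigvals[::-1][:3], s**2)
```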
In practice, the algorithms used to compute the SVD are closely related to those used for the eigendecomposition (though they work on $A$ directly rather than forming $AA^{T}$, which would square the condition number).
With respect to PCA, what the answer is telling you is this: first center the data by subtracting the column means (the normalization implicit in forming the covariance matrix), then take the SVD of the centered matrix. PCA keeps only the left singular vectors and the singular values, truncated to the top $k$.
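As an illustration of that recipe, here is a sketch on synthetic data (shapes and seed are made up) that centers the matrix, takes its SVD, and cross-checks the singular values against the eigenvalues of the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 4))        # hypothetical data: samples x features

# Center the data (the normalization PCA requires).
Xc = X - X.mean(axis=0)

# SVD of the centered data matrix.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 2
scores = U[:, :k] * s[:k]                # first k principal components, U_k Sigma_k
axes = Vt[:k]                            # principal axes (rows of V^T)

# Cross-check: the covariance matrix's eigenvalues equal s^2 / (n - 1).
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending
assert np.allclose(eigvals[:k], s[:k]**2 / (len(Xc) - 1))
```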
A truncated SVD looks like this:
$$A_{k} = U_{k}\Sigma_{k} V_{k}^{T} $$
which is equivalent to the sum of rank-one terms
$$A_{k} = \sum_{i=1}^{k} \sigma_{i} u_{i} v_{i}^{T} $$
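A minimal sketch of the equivalence between the matrix form and the rank-one sum (random matrix, arbitrary choice of $k$):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Matrix form of the truncation: keep the top k singular triplets.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

# Same thing as a sum of rank-one terms sigma_i * u_i * v_i^T.
A_k_sum = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(k))
assert np.allclose(A_k, A_k_sum)
```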
So, as you read there, PCA and the SVD are not the same thing: PCA is implemented via the SVD because it is simpler and more numerically stable than forming the covariance matrix explicitly. The last part states:
The product
$$ U_{k}\Sigma_{k}$$
gives us a dimensionality reduction whose columns are the first $k$ principal components; multiplying by the principal axes $V_{k}^{T}$ reconstructs the data in the original coordinates:
$$X_{k} = U_{k}\Sigma_{k} V_{k}^{T} $$
This is, again, the truncated SVD, here applied to the (centered) data matrix $X$.
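To see that $U_{k}\Sigma_{k}$ really are the scores, one can check numerically that projecting the centered data onto the principal axes gives the same result, since $XV = U\Sigma V^{T}V = U\Sigma$ (a sketch under the same synthetic-data assumptions as above):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 5))         # hypothetical data matrix
Xc = X - X.mean(axis=0)                  # centered

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3

# Scores two ways: project onto the principal axes, or read them
# off the SVD directly, since X V = U Sigma V^T V = U Sigma.
scores_proj = Xc @ Vt[:k].T
scores_svd = U[:, :k] * s[:k]
assert np.allclose(scores_proj, scores_svd)
```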