Let's define the following: $$ d_N = \det\left(\prod_{k=1}^N A(k)\right) = \prod_{k=1}^N \det(A(k))=\prod_{k=1}^N \bigl(a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k)\bigr) $$ where the entries $a_{ij}(k)\sim\mathcal{N}(0,\sigma^2)$ are i.i.d. across $i$, $j$, and $k$.
Since every entry has mean zero and all entries are independent, the expected value is given by: $$ \mathbb{E}[d_N]=0=:\mu_N $$
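(As a sanity check, here's a quick Monte Carlo sketch; numpy and the particular $N$, $\sigma$, seed, and sample size are all arbitrary choices of mine:)

```python
import numpy as np

# Draw many independent sequences A(1),...,A(N) of 2x2 Gaussian matrices
# and confirm that the sample mean of d_N hovers near mu_N = 0.
rng = np.random.default_rng(0)
N, sigma, trials = 5, 0.8, 200_000

A = rng.normal(0.0, sigma, size=(trials, N, 2, 2))  # entries a_ij(k)
d = np.prod(np.linalg.det(A), axis=1)               # one d_N per trial
print(d.mean())  # small relative to d.std(), consistent with E[d_N] = 0
```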
Next, write $a_{ij}(k)=\sigma\hat{a}_{ij}(k)$ and define $$ \hat{S}_1(k) = \hat{a}_{11}(k)\hat{a}_{22}(k),\;\;\hat{S}_2(k) = \hat{a}_{21}(k)\hat{a}_{12}(k) $$
where $\hat{a}_{ij}(k)\sim\mathcal{N}(0,1)$.
So, the variance is written:
\begin{align}
\mathbb{V}[d_N] &= \mathbb{E}[d_N^2]= \mathbb{E}\left[ \prod_{k=1}^N (a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k))^2 \right]\\
&=\prod_{k=1}^N \mathbb{E}\left[ (a_{11}(k)a_{22}(k)-a_{21}(k)a_{12}(k))^2 \right] \quad \text{(by independence across $k$)}\\
&= \prod_{k=1}^N \mathbb{E}\left[ (\sigma^2\hat{S}_1(k)-\sigma^2\hat{S}_2(k))^2 \right]\\ &= \sigma^{4N}\prod_{k=1}^N \mathbb{E}\left[ (\hat{S}_1(k)-\hat{S}_2(k))^2 \right]
\end{align}
Now for some random variable algebra. First, note that each $\hat{S}_i(k)$ is a product of two independent standard normal RVs. Such a product is not chi-squared itself (it takes negative values, for one), but by the polarization identity $XY=\frac{1}{4}\left[(X+Y)^2-(X-Y)^2\right]$, and since $(X+Y)/\sqrt{2}$ and $(X-Y)/\sqrt{2}$ are independent standard normals, it is half the difference of two independent chi-squared variables: $$ \hat{S}_i(k)\overset{d}{=}\tfrac{1}{2}\left(Q_+-Q_-\right),\;\;Q_+,Q_-\sim\chi^2_1\text{ independent} $$
Next, define $\mathfrak{D}_k= \hat{S}_1(k) - \hat{S}_2(k)$. Collecting the four chi-squared pieces gives $\mathfrak{D}_k\overset{d}{=}\tfrac{1}{2}(Q_1-Q_2)$ with $Q_1,Q_2\sim\chi^2_2$ independent. The difference between two independent gamma (here, chi-squared) variables follows a variance-gamma (generalized Laplace) distribution (see "A note on gamma difference distributions" by Bernhard Klar), which in this symmetric case reduces to a standard Laplace distribution:
$$ \mathfrak{D}_k\sim\Gamma_\mathcal{V}(\mu,\alpha,\beta,\lambda,\gamma)=\Gamma_\mathcal{V}\left(0,1,0,1,1\right) $$
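To make this less hand-wavy, here is a small numerical check of both the polarization identity and the resulting Laplace law; the seeds and sample sizes are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
x, y = rng.normal(size=(2, n))
s = x * y  # one S_hat: a product of two independent standard normals

# The polarization identity holds pointwise, up to float roundoff.
print(np.max(np.abs(s - ((x + y) ** 2 - (x - y) ** 2) / 4)))  # ~0

# D_k = S_hat_1 - S_hat_2 should match half a difference of independent
# chi^2_2 draws, i.e. a standard Laplace; compare empirical quantiles.
D = s - rng.normal(size=n) * rng.normal(size=n)
L = 0.5 * (rng.chisquare(2, n) - rng.chisquare(2, n))
ps = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(D, ps))
print(np.quantile(L, ps))  # the two quantile vectors should agree closely
```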
This also tells us that:
\begin{align}
\mathbb{E}[\mathfrak{D}_k] &= \mu + \frac{2\beta\lambda}{\gamma^2} = 0 \\
\mathbb{V}[\mathfrak{D}_k] &= \frac{2\lambda}{\gamma^2}\left( 1 + \frac{2\beta^2}{\gamma^2} \right) = 2
\end{align}
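Both moments are easy to confirm by simulation (same caveats as above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
# D_k built directly from four independent standard normals
D = rng.normal(size=n) * rng.normal(size=n) \
    - rng.normal(size=n) * rng.normal(size=n)
print(D.mean(), D.var())  # should come out close to 0 and 2
```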
So now we can complete the computation:
$$
\mathbb{V}[d_N]=\sigma^{4N}\prod_{k=1}^N \mathbb{E}\left[ \mathfrak{D}_k^2 \right]
= \sigma^{4N}\prod_{k=1}^N (\mathbb{V}[\mathfrak{D}_k] + \mathbb{E}[\mathfrak{D}_k]^2) = 2^N\sigma^{4N} =: \varsigma^2_N
$$
Hopefully I didn't make any arithmetic mistakes.
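As a check on exactly that, a quick simulation sketch (again, all parameters are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(3)
N, sigma, trials = 4, 0.9, 400_000
A = rng.normal(0.0, sigma, size=(trials, N, 2, 2))
d = np.prod(np.linalg.det(A), axis=1)
print(d.var())              # sample variance of d_N
print((2 * sigma**4) ** N)  # predicted 2^N sigma^{4N}; expect agreement
                            # within a few percent of Monte Carlo error,
                            # since d_N has fairly heavy tails
```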
Anyway, woah, so that variance can be very large: $\varsigma^2_N=(2\sigma^4)^N$ grows exponentially in $N$ when $\sigma^2>1/\sqrt{2}$ and shrinks exponentially when $\sigma^2<1/\sqrt{2}$, so everything hinges on the value of $\sigma$. Setting the $N\to\infty$ limit aside for the moment, we can bound our target using the Chebyshev inequality:
$$
P(|d_N-\mu_N|\geq \varsigma_N\kappa) \leq \frac{1}{\kappa^2}\;\;\;\implies\;\;\;
P(|d_N|\geq 2^{N/2}\sigma^{2N}\kappa) \leq \frac{1}{\kappa^2}$$
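Empirically, the actual tail seems to sit well below the Chebyshev bound; a quick sketch (parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
N, sigma, trials, kappa = 4, 0.9, 400_000, 3.0
A = rng.normal(0.0, sigma, size=(trials, N, 2, 2))
d = np.prod(np.linalg.det(A), axis=1)
varsigma = (2**0.5 * sigma**2) ** N            # varsigma_N = 2^{N/2} sigma^{2N}
print(np.mean(np.abs(d) >= kappa * varsigma))  # empirical tail probability
print(1 / kappa**2)                            # Chebyshev upper bound
```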
Maybe there is a better concentration inequality.
But, if we substitute $\sigma = \hat{\sigma}/2^{1/4}$, so that $\varsigma_N=\hat{\sigma}^{2N}$, then at least what this tells us is that: if $\hat{\sigma}<1$, then for any fixed $\epsilon>0$ we get $P(|d_N|\geq\epsilon)\leq\hat{\sigma}^{4N}/\epsilon^2\to 0$, i.e. $d_N$ converges to zero in probability, exponentially fast.
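To see that concentration kick in, one more sketch, with $\hat{\sigma}=0.9$ (the threshold $\epsilon$ and everything else are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(5)
sigma_hat = 0.9
sigma = sigma_hat / 2**0.25  # so that varsigma_N = sigma_hat^{2N}
eps, trials = 1e-3, 200_000
for N in (5, 10, 20, 40):
    A = rng.normal(0.0, sigma, size=(trials, N, 2, 2))
    d = np.prod(np.linalg.det(A), axis=1)
    print(N, np.mean(np.abs(d) >= eps))  # P(|d_N| >= eps) shrinks with N
```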