I'm doing some thesis work on lattices, but probability theory is not my strong suit and I'm not sure how to solve this problem:
I have vectors $\mathbf{a}_i, \mathbf{x} \in \mathbb{R}^n$ and scalars $b_i \in \mathbb{R}$ for $0 < i \leq k$. Each scalar and each coordinate of each vector is drawn independently from a $N(0,\sigma^2)$ distribution.
Now I have $k$ random variables:
$$X_i = \mathbf{a}_i \cdot \mathbf{x} + b_i$$
where $\cdot$ is the scalar product. I am interested in
$$Y = \sqrt{ \sum_{i = 1}^k X_i^2 } .$$
Specifically, I would like to know the probability that $Y$ exceeds a certain bound $L$.
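To have something to check any closed form against, I put together a small Monte Carlo sketch that simulates $Y$ directly and counts how often it exceeds $L$ (the values of $n$, $k$, $\sigma$ and $L$ are arbitrary placeholders I picked for testing):

```python
import numpy as np

# Arbitrary placeholder parameters, chosen just for this sketch
n, k, sigma, L = 100, 5, 1.0, 25.0
trials = 10_000
rng = np.random.default_rng(0)

# Everything i.i.d. N(0, sigma^2); A stacks the vectors a_i as rows, one (k, n) block per trial
A = rng.normal(0.0, sigma, size=(trials, k, n))
x = rng.normal(0.0, sigma, size=(trials, n))
b = rng.normal(0.0, sigma, size=(trials, k))

# X_i = a_i . x + b_i for each trial, then Y = sqrt(sum_i X_i^2)
X = np.einsum('tkn,tn->tk', A, x) + b
Y = np.linalg.norm(X, axis=1)

print("Monte Carlo estimate of P(Y > L):", np.mean(Y > L))
```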
My thinking so far:
As far as I can tell, $X_i \sim N(0, n\sigma^4 + \sigma^2)$, at least approximately, since it is the sum of $n$ products of two independent Gaussian variables plus one more Gaussian variable ($n$ is large, so we can appeal to the central limit theorem). This led me to the chi-square distribution with a non-standard sigma, as in this question here, which appears to mean $Y^2$ would follow a gamma distribution.
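At least the variance I can get exactly, without the CLT: the $n$ products $a_{ij} x_j$ are mutually independent (no two of them share a variable) and each has variance $\mathrm{E}[a_{ij}^2]\,\mathrm{E}[x_j^2] = \sigma^4$ since both factors have mean 0, so

$$\mathrm{Var}(X_i) = \sum_{j=1}^{n} \mathrm{Var}(a_{ij} x_j) + \mathrm{Var}(b_i) = n\sigma^4 + \sigma^2 .$$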
However, the $X_i$ are not independent because of the shared $\mathbf{x}$, are they? So I found this, which appears to be what I'm looking for (let me know if I'm wrong). Now I'm having trouble with the covariance matrix.
$$\mathrm{Cov}[X_i,X_j] = \mathrm{E}[X_i X_j] = \mathrm{E}[(\mathbf{a}_i \cdot \mathbf{x} + b_i)(\mathbf{a}_j \cdot \mathbf{x} + b_j)] $$
When I work out the sum I get a huge mess, but what stands out is that each term is a product of components of $\mathbf{a}_i$, $\mathbf{a}_j$, and $\mathbf{x}$, and only components of $\mathbf{x}$ can appear more than once (squared) in the same term. Since all these components are independent and have mean 0, the expected value of each term without a squared component should be 0. And even the terms that do contain a squared component vanish when $i \neq j$:
$$ \mathrm{E}[a_{i1} a_{j1} x_1^2] = \mathrm{E}[a_{i1}] \mathrm{E}[a_{j1}]\mathrm{E}[x_1^2] = 0$$
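If that reasoning is right, the same thing should happen to every term of the full expansion, since for $i \neq j$ all the factors below are mutually independent:

$$\mathrm{Cov}[X_i,X_j] = \sum_{s=1}^{n} \sum_{t=1}^{n} \mathrm{E}[a_{is}]\,\mathrm{E}[a_{jt}]\,\mathrm{E}[x_s x_t] + \mathrm{E}[b_i]\,\mathrm{E}[\mathbf{a}_j \cdot \mathbf{x}] + \mathrm{E}[b_j]\,\mathrm{E}[\mathbf{a}_i \cdot \mathbf{x}] + \mathrm{E}[b_i]\,\mathrm{E}[b_j] = 0 .$$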
So am I correct in thinking that the $X_i$:

1. can be seen as following a multivariate normal distribution?
2. are dependent but happen to have covariance 0, so that my covariance matrix $\Sigma$ is a diagonal matrix with all diagonal elements equal to $n\sigma^4 + \sigma^2$? (A quick numerical check of this is below.)
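To convince myself of point 2 numerically, here is the quick check I had in mind (placeholder parameters again, kept small so the simulation stays light): the sample covariance of $X_1$ and $X_2$ should be near 0, while the covariance of their squares should be clearly positive if they really are dependent.

```python
import numpy as np

# Smaller placeholder parameters to keep the simulation light
n, k, sigma = 30, 2, 1.0
trials = 100_000
rng = np.random.default_rng(1)

A = rng.normal(0.0, sigma, size=(trials, k, n))
x = rng.normal(0.0, sigma, size=(trials, n))
b = rng.normal(0.0, sigma, size=(trials, k))
X = np.einsum('tkn,tn->tk', A, x) + b

# Uncorrelated: this should be negligible next to Var(X_i) = n*sigma^4 + sigma^2 = 31
print("Cov(X_1, X_2):    ", np.cov(X[:, 0], X[:, 1])[0, 1])
# ...but not independent: the squares both grow with ||x||^2, so this should be clearly positive
print("Cov(X_1^2, X_2^2):", np.cov(X[:, 0] ** 2, X[:, 1] ** 2)[0, 1])
```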
Now I'm unable to entirely follow the answer in the second link I gave. $\Sigma$ is already diagonal, so that should simplify things quite a bit, but I can't tell what $\Lambda$ becomes. I think the diagonal elements become $\lambda_i = n\sigma^4 + \sigma^2$ (the eigenvalues of a diagonal matrix are just its diagonal entries, but I'm really not sure about this part)? Then the quadratic form is a linear combination of $k$ independent chi-square variables with 1 degree of freedom each. So for $X = (X_1, \ldots, X_k)$:
$$Y^2 = Q(X) = \sum_{i=1}^{k} \lambda_i Z_i^2 = (n\sigma^4 + \sigma^2) \sum_{i=1}^{k} Z_i^2, \qquad Z_i \overset{\text{iid}}{\sim} N(0,1) .$$
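If that is right, then $Y^2/(n\sigma^4 + \sigma^2)$ is chi-square with $k$ degrees of freedom, and the probability I'm after would be $P(Y > L) = P\big(\chi^2_k > L^2/(n\sigma^4 + \sigma^2)\big)$. A sketch of that computation with scipy, which I could compare against the Monte Carlo estimate above:

```python
from scipy.stats import chi2

# Same placeholder parameters as in the Monte Carlo sketch above
n, k, sigma, L = 100, 5, 1.0, 25.0
lam = n * sigma**4 + sigma**2  # the conjectured common eigenvalue lambda_i

# P(Y > L) = P(Y^2 > L^2), and Y^2 / lam would be chi-square with k degrees of freedom
print("approximate P(Y > L):", chi2.sf(L**2 / lam, k))
```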
This feels strange to me. Am I on the right path?