
Let $(X_n)$ be a sequence of i.i.d. $\mathcal N(0,1)$ random variables. Define $S_0=0$ and $S_n=\sum_{k=1}^n X_k$ for $n\geq 1$. Find the limiting distribution of $$\frac1n \sum_{k=1}^{n}|S_{k-1}|(X_k^2 - 1)$$

This problem is from Shiryaev's Problems in Probability, in the chapter on the Central Limit Theorem. It was asked on this site in 2014, but remains unanswered. I posted it yesterday on Cross Validated, and I think it's worth cross-posting it here as well.

Since $S_{k-1}$ and $X_k$ are independent, $E(|S_{k-1}|(X_k^2 - 1))=0$ and $$V(|S_{k-1}|(X_k^2 - 1)) = E(S_{k-1}^2(X_k^2 - 1)^2)= E(S_{k-1}^2)E((X_k^2 - 1)^2) =2(k-1)$$

Note that the $|S_{k-1}|(X_k^2 - 1)$ are clearly not independent. However, as observed by Clement C. in the comments, they are uncorrelated since for $j>k$ $$\begin{aligned}Cov(|S_{k-1}|(X_k^2 - 1), |S_{j-1}|(X_j^2 - 1)) &= E(|S_{k-1}|(X_k^2 - 1)|S_{j-1}|)E(X_j^2 - 1)\\ &=0 \end{aligned}$$

Hence $\displaystyle V(\frac1n \sum_{k=1}^{n}|S_{k-1}|(X_k^2 - 1)) = \frac 1{n^2}\sum_{k=1}^{n} 2(k-1) = \frac{n-1}n$ and the variance converges to $1$.

I have run simulations to get a feel for the answer:

import numpy as np
import matplotlib.pyplot as plt

n = 30000  # summation index
m = 10000  # number of samples

X = np.random.normal(size=(m, n))                  # X_1, ..., X_n for each of the m samples
sums = np.cumsum(X, axis=1)                        # S_1, ..., S_n
sums = np.delete(sums, -1, 1)                      # keep S_1, ..., S_{n-1}
prods = np.delete(X**2 - 1, 0, 1) * np.abs(sums)   # |S_{k-1}| (X_k^2 - 1) for k = 2, ..., n
samples = 1/n * np.sum(prods, axis=1)

plt.hist(samples, bins=100, density=True)
plt.show()

Below is a histogram of $10{,}000$ samples ($n=30{,}000$). The sample variance is $0.9891$, consistent with the computations above. If the limiting distribution were $\mathcal N(0,\sigma^2)$, then $\sigma$ would be $1$. However, the histogram peaks at around $0.6$, while the maximum of the $\mathcal N(0,1)$ density is $\frac 1{\sqrt{2 \pi}}\approx 0.4$. The simulations therefore suggest that the limiting distribution is not Gaussian.

It might help to write $|S_{k-1}| = (2\cdot 1_{S_{k-1}\geq 0} -1)S_{k-1}$.

It might also be helpful to note that if $Z_n=\frac1n \sum_{k=1}^{n}|S_{k-1}|(X_k^2 - 1)$, conditioning on $(X_1,\ldots,X_{n-1})$ yields $$E(e^{itnZ_n}) = E\left(e^{it(n-1)Z_{n-1}} \frac{e^{-it|S_{n-1}|}}{\sqrt{1-2it|S_{n-1}|}}\right)$$
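
For what it's worth, here is a minimal Monte Carlo sanity check of this identity (a sketch only; the sample size, seed and test point $t$ below are arbitrary choices of mine, not part of the original problem):

import numpy as np

# Sanity check (sketch) of
#   E[e^{i t n Z_n}] = E[ e^{i t (n-1) Z_{n-1}} * e^{-i t |S_{n-1}|} / sqrt(1 - 2 i t |S_{n-1}|) ],
# using the chi-square(1) characteristic function E[e^{i s X^2}] = (1 - 2 i s)^{-1/2}.
rng = np.random.default_rng(0)
n, m, t = 5, 200000, 0.7  # small n, many samples, a fixed test point t

X = rng.standard_normal((m, n))
S_prev = np.hstack([np.zeros((m, 1)), np.cumsum(X, axis=1)[:, :-1]])  # S_0, ..., S_{n-1}
terms = np.abs(S_prev) * (X**2 - 1)                                   # |S_{k-1}| (X_k^2 - 1)

lhs = np.mean(np.exp(1j * t * terms.sum(axis=1)))                     # E[e^{i t n Z_n}]
a = np.abs(S_prev[:, -1])                                             # |S_{n-1}|
rhs = np.mean(np.exp(1j * t * terms[:, :-1].sum(axis=1))
              * np.exp(-1j * t * a) / np.sqrt(1 - 2j * t * a))

print(lhs, rhs)  # the two estimates should agree up to Monte Carlo error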

[Histogram of the $10{,}000$ simulated values of $\frac1n \sum_{k=1}^{n}|S_{k-1}|(X_k^2 - 1)$]

Gabriel Romon
  • I would check the "martingale central limit theorem" if I were you (the sum is a martingale in the natural filtration, plus the variance seems to scale correctly). I do not know if Shiryaev assumes this to be known though... – Olivier Aug 21 '19 at 08:32
  • @Olivier There is such a theorem in the second volume of the textbook, but the hypotheses do not seem trivial to verify... – Gabriel Romon Aug 21 '19 at 09:16
  • Yep, even using a simpler version https://en.wikipedia.org/wiki/Martingale_central_limit_theorem (not adapted to this case, since increments are not bounded), it is not clear that the conditional variance is concentrated... – Olivier Aug 21 '19 at 13:13
  • A quick check from my phone (sorry for typos): aren't the covariances zero? It looks like you get an explicit variance of $(n-1)/n$, for what it's worth. – Clement C. Aug 24 '19 at 19:48
  • @ClementC. Yes, I agree with your computations. – Gabriel Romon Aug 24 '19 at 20:04
  • What exactly is covered in that book, prior to the exercise? The CLT, that's all? – Clement C. Aug 24 '19 at 20:21
  • @ClementC. For reference, this is Problem 3.4.14 in the problem book. The corresponding textbooks are Volume 1 and Volume 2. – Gabriel Romon Aug 24 '19 at 20:24
  • Thanks! But be careful — I'm not 100% sure such links are welcome on Math.SE (unless they are allowed by the author/publisher). – Clement C. Aug 24 '19 at 20:45
  • What would the analysis give under the simplification/mistake that the summands are correlated? I am wondering why the result could be non-Gaussian (from a teaching point of view), when the problem is at the end of a chapter on CLTs and just before another one. – Clement C. Aug 24 '19 at 22:47
  • My simulation also suggests that the limiting distribution may not be gaussian. Let us look at a similar problem that produces a non-gaussian distribution. Let us replace $|S_{k-1}|$ by $S_{k-1}$ and $X_k^2 - 1$ by $X_k$. Then the resulting sum

    $$ \frac{1}{n} \sum_{k=1}^{n} S_{k-1} X_k $$

    converges in distribution to the Ito integral

    $$ \int_{0}^{1} W_s \, \mathrm{d}W_s = \frac{W_1^2 - 1}{2}, $$

    where $(W_t)_{t\geq 0}$ is a Wiener process. Since $W_1 \sim \mathcal{N}(0, 1)$, this limiting distribution cannot be gaussian. Now I suspect that OP's problem also suffers the same issue. (A simulation sketch of this comparison appears after the comment thread below.)

    – Sangchul Lee Aug 24 '19 at 23:40
  • @JGWang The Laplace distribution with parameters $(0,1)$ has variance $2$, whereas the limiting variance should be $1$. Are you sure you're not mistaken? – Gabriel Romon Aug 30 '19 at 07:12
  • @GabrielRomon, thank you for the replication. I will check and give the details later on. – JGWang Aug 30 '19 at 09:30
  • @GabrielRomon Sorry, I made a mistake, the limiting distribution is not as I said in above comment. Please forget it. – JGWang Aug 30 '19 at 11:24
  • Does anyone know why the currently deleted answer was deleted? – Clement C. Aug 31 '19 at 04:16
  • @ClementC., My guess is that he found an error in his answer. For instance, the inequality $$\mathbb{P}\left(\sqrt{i-1}\left|X_1\right|\left|X_2^2-1\right|>n\varepsilon\right)\leq\mathbb{P}\left(\left|X_1\right|\left|X_2^2-1\right|>n\varepsilon\right)$$ in his verification of the 1st condition is false when $i \geq 2$. Still, I think the proof can be salvaged to deduce the same conclusion. – Sangchul Lee Aug 31 '19 at 05:21
  • @SangchulLee Does this limiting distribution match simulations? – Gabriel Romon Aug 31 '19 at 07:09
  • My simulation suggests that they are indeed equal. You may also try comparing two asymptotic distributions $$Z_n = \frac{1}{n}\sum_{k=1}^{n} |S_{k-1}|(X_k^2-1) \qquad\text{and}\qquad Y_n = \frac{1}{n}\left(2\sum_{k=1}^{n}S_{k-1}^2\right)^{1/2}X_k $$ for the purpose of numerical experiments by yourself. (Here, $Y_n$ converges to $\left(2\int_{0}^{1}W_s^2\,\mathrm{d}s\right)^{1/2}N$ in distribution as $n\to\infty$. But since the limiting distribution is unlikely to have a closed-form, I chose to approximate it.) – Sangchul Lee Aug 31 '19 at 07:38
  • Using Cor. 3.1 of Hall & Heyde's book or the result of Kurtz & Protter (Ann. Probab. 19 (1991), 1035-1070), the limit distribution is the one given by Sangchul Lee above: $Z=\sqrt{2}\int_0^1W(s)dB(s)$ – JGWang Aug 31 '19 at 10:24
  • @JGWang Can you expand that into a full answer? – Clement C. Sep 01 '19 at 15:29
  • @ClementC. Just verify that the conditions required in the book or paper are satisfied. – JGWang Sep 02 '19 at 01:03
  • @JGWang I don't have the book, unfortunately -- and this is also not quite how this website works. – Clement C. Sep 02 '19 at 01:54
  • Oh man, you're really missing the point. This is not for me as much as it is for the future people browsing this website (since comments are not meant to last, while answers -- full, or "hint"-type ones which provide hints and references to the reader instead of full details -- are). Anyways, please edit your comment to not link to an illegal download website, this may be slightly against the rules. @JGWang – Clement C. Sep 02 '19 at 03:11
  • @ClementC. Thank you for your comment. I deleted my former comment. I think Sangchul Lee also knows the limit distribution and is convinced of this fact, judging from Lee's comment. – JGWang Sep 02 '19 at 03:35
  • If you don't have time, @JGWang, I can write, based on your comment, a Community Answer (nobody gets reputation, but the contents can be found later by people who search the website). The point is, comments may disappear, answers stay... – Clement C. Sep 02 '19 at 03:39
  • @ClementC. Thank you. Of course you could write it. – JGWang Sep 02 '19 at 03:44
  • @ClementC. Using the result in Kurtz & Protter's paper to prove the limit distribution is easier. Using the result in Hall & Heyde's book, verifying the convergence in probability of the conditional variance is difficult. – JGWang Sep 02 '19 at 07:52
  • @ClementC. I also noticed another error in Davide Giraudo's answer, and it is more fundamental than the previous one. Indeed, if we write $I_n=\sum_{i=1}^{n}S_{i-1}^2/n^2$, then $I_m$ and $I_n$ are asymptotically independent for $n\gg m$. So $I_n$ can only converge in distribution, not in probability, and the martingale difference CLT described in Theorem 3.2 of Hall & Heyde cannot be applied directly. – Sangchul Lee Sep 02 '19 at 11:25
  • @SangchulLee What do you think about JGWang's suggestion? If this is a dead end, I'll cross-post on MathOverflow. – Gabriel Romon Sep 02 '19 at 11:40
  • @SangchulLee Indeed the first mistake was actually more a typo because I forgot the exponent $1/2$. The second one is indeed more serious. I undeleted the answer to explain that, and to save the time of someone who would like to try this approach. – Davide Giraudo Sep 02 '19 at 13:04
  • @DavideGiraudo, Now I am less certain as to whether we can apply the martingale CLT. Indeed, if the theorem could somehow have been relaxed so that condition 3 only requires convergence in distribution, then the theorem should have yielded $$\frac{1}{n}\sum_{i=1}^{n}S_{i-1}X_i\xrightarrow[n\to\infty]{?}\left(\int_{0}^{1}W_t^2\mathrm{d}t\right)^{1/2}N$$ in distribution for $W$ standard BM and $N$ independent standard normal variable, which does not seem right. (We already know that the correct limit in distribution is $(W_1^2-1)/2$.) – Sangchul Lee Sep 02 '19 at 15:27
  • @Gabriel Romon, I am not familiar with the stochastic calculus jargon, so I did some elementary level proofs in my newly posted answer. – Sangchul Lee Sep 02 '19 at 15:39
  • @SangchulLee But in this case we have the same limiting process. Moreover, there exist central limit theorems for martingales where we only have convergence in distribution of the sum of conditional variances. See for instance Theorem 2 in https://arxiv.org/pdf/1803.09100.pdf – Davide Giraudo Sep 02 '19 at 22:08
  • @DavideGiraudo, Thank you for the link. Theorem 2 in the link seems like a direct generalization of what is discussed in Chapter 3 of Hall & Heyde, especially considering the use of quantity $T_n(t)=\prod_j (1+iX_{n,j})$. Since this theorem seems to work for $X_{n,i}=\frac{1}{n}|S_{i-1}|(X_i^2-1)$ and fail for $X_{n,i}=\frac{1}{n}S_{i-1}X_i$, I am genuinely curious as to how the conditions are fulfilled or violated for each of them. Overall, a quite interesting line of questions! – Sangchul Lee Sep 02 '19 at 22:18

2 Answers


Let $(X_n)_{n\geq 1}$ be a sequence of i.i.d. standard normal variables. Let $(S_n)_{n\geq 0}$ and $(T_n)_{n\geq 0}$ be given by

$$ S_n = \sum_{i=1}^{n} X_i \qquad\text{and}\qquad T_n = \sum_{i=1}^{n} (X_i^2 - 1). $$

We will also fix a partition $\Pi = \{0 = t_0 < t_1 < \cdots < t_k = 1\}$ of $[0, 1]$. Then define

$$ \begin{gathered} Y_n = \frac{1}{n}\sum_{i=1}^{n} | S_{i-1} | (X_i^2-1), \\ Y_{\Pi,n} = \frac{1}{n} \sum_{j=1}^{k} |S_{\lfloor nt_{j-1}\rfloor}| (T_{\lfloor nt_j\rfloor} - T_{\lfloor nt_{j-1} \rfloor}). \end{gathered}$$

Ingredient 1. If $\varphi_{X}(\xi) = \mathbb{E}[\exp(i\xi X)]$ denotes the characteristic function of the random variable $X$, then the inequality $|e^{ix} - e^{iy}| \leq |x - y|$ followed by Jensen's inequality gives

\begin{align*} \big| \varphi_{Y_n}(\xi) - \varphi_{Y_{\Pi,n}}(\xi) \big|^2 &\leq \xi^2 \mathbb{E}\big[ (Y_n - Y_{\Pi,n})^2 \big] \\ &= \frac{\xi^2}{n^2}\sum_{j=1}^{k} \sum_{i \in (nt_{j-1}, nt_j]} 2 \mathbb{E} \big[ \big( | S_{\lfloor n t_{j-1} \rfloor} | - | S_{i-1} | \big)^2 \big]. \end{align*}

From the reverse triangle inequality, the inner expectation is bounded by

\begin{align*} 2 \mathbb{E} \big[ \big( | S_{\lfloor n t_{j-1} \rfloor} | - | S_{i-1} | \big)^2 \big] \leq 2 \mathbb{E} \big[ \big( S_{i-1} - S_{\lfloor n t_{j-1} \rfloor} \big)^2 \big] = 2(i-1-\lfloor nt_{j-1} \rfloor), \end{align*}

and summing this bound over all $i \in (nt_{j-1}, nt_j]$ yields

$$ \big| \varphi_{Y_n}(\xi) - \varphi_{Y_{\Pi,n}}(\xi) \big|^2 \leq \frac{\xi^2}{n^2} \sum_{j=1}^{k} (\lfloor n t_j \rfloor - \lfloor n t_{j-1} \rfloor)^2 \xrightarrow[n\to\infty]{} \xi^2 \sum_{j=1}^{k} (t_j - t_{j-1})^2. \tag{1} $$

Ingredient 2. From the Multivariate CLT, we know that

$$ \Bigg( \frac{S_{\lfloor nt_j\rfloor} - S_{\lfloor nt_{j-1}\rfloor}}{\sqrt{n}}, \frac{T_{\lfloor nt_j\rfloor} - T_{\lfloor nt_{j-1}\rfloor}}{\sqrt{n}} : 1 \leq j \leq k \Bigg) \xrightarrow[n\to\infty]{\text{law}} ( W_{t_j} - W_{t_{j-1}}, N_j : 1 \leq j \leq k ), $$

where $W$ is the standard Brownian motion, $N_j \sim \mathcal{N}(0, 2(t_j - t_{j-1}))$ for each $1 \leq j \leq k$, and all of $W, N_1, \cdots, N_k$ are independent. By the continuous mapping theorem, this shows that

$$ Y_{\Pi,n} \xrightarrow[n\to\infty]{\text{law}} \sum_{j=1}^{k} W_{t_{j-1}} N_j. $$

Moreover, conditioned on $W$, the right-hand side has normal distribution with mean zero and variance $2\sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) $, and so,

$$ \lim_{n\to\infty} \varphi_{Y_{\Pi,n}}(\xi) = \mathbb{E}\left[ \exp\bigg( -\xi^2 \sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) \bigg) \right]. \tag{2} $$

Ingredient 3. Again let $W$ be the standard Brownian motion. Since the sample path $t \mapsto W_t$ is a.s.-continuous, we know that

$$ \sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) \longrightarrow \int_{0}^{1} W_t^2 \, \mathrm{d}t $$

almost surely along any sequence of partitions $(\Pi_k)_{k\geq 1}$ such that $\|\Pi_k\| \to 0$. So, by the bounded convergence theorem,

$$ \mathbb{E}\left[ \exp\bigg( -\xi^2 \sum_{j=1}^{k} W_{t_{j-1}}^2 (t_j - t_{j-1}) \bigg) \right] \longrightarrow \mathbb{E}\left[ \exp\bigg( -\xi^2 \int_{0}^{1} W_t^2 \, \mathrm{d}t \bigg) \right] \tag{3} $$

as $k\to\infty$ along $(\Pi_k)_{k\geq 1}$.

Conclusion. Combining $\text{(1)–(3)}$ and letting $\|\Pi\| \to 0$ proves that

$$ \lim_{n\to\infty} \varphi_{Y_n}(\xi) = \mathbb{E}\left[ \exp\bigg( -\xi^2 \int_{0}^{1} W^2_t \, \mathrm{d}t \bigg) \right], $$

and therefore $Y_n$ converges in distribution to $\mathcal{N}\big( 0, 2\int_{0}^{1} W_t^2 \, \mathrm{d}t \big)$ as desired.
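
A numerical cross-check of this limit (a sketch only; the discretization level and sample sizes below are my own choices): compare samples of $Y_n$ with samples of $\sqrt{2\int_0^1 W_t^2\,\mathrm dt}\, N$, approximating the integral by the Riemann sum $\frac1{n^2}\sum_{i=1}^n S_i^2$ over the same rescaled walk.

import numpy as np

# Sketch: Y_n = (1/n) sum |S_{k-1}| (X_k^2 - 1)  vs  sqrt(2 * int_0^1 W_t^2 dt) * N,
# with the integral approximated by (1/n^2) * sum_{i=1}^n S_i^2 (Donsker rescaling).
rng = np.random.default_rng(2)
n, m = 2000, 10000  # discretization level, number of samples

X = rng.standard_normal((m, n))
S = np.cumsum(X, axis=1)
S_prev = np.hstack([np.zeros((m, 1)), S[:, :-1]])         # S_0, ..., S_{n-1}

Y = (np.abs(S_prev) * (X**2 - 1)).sum(axis=1) / n         # samples of Y_n
integral = (S**2).sum(axis=1) / n**2                      # ~ int_0^1 W_t^2 dt
mixture = np.sqrt(2 * integral) * rng.standard_normal(m)  # samples of the claimed limit law

qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(Y, qs))
print(np.quantile(mixture, qs))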

Sangchul Lee
  • In the application of the multivariate CLT, how do you guarantee that in the limiting law $W$ is independent from the $N_j$'s? Is it obvious (since the $S_j$'s and $T_j$'s are not)? – Clement C. Sep 02 '19 at 18:40
  • @ClementC., That is a good point which I did not stress but which is in fact important. A quick answer: it is because the $S_m$'s and $T_n$'s are uncorrelated. This is very specific to this problem, and in general we do not (and should not) expect independence of the limiting $W$ and $N_j$'s, such as in the case of $$\frac{1}{n}\sum_{k=1}^{n}S_{k-1}X_k \to \int_{0}^{1}W_s\,\mathrm{d}W_s.$$ – Sangchul Lee Sep 02 '19 at 21:43
  • A longer answer: The $2k$ random variables in $\mathcal{S}=\{W_{t_j}-W_{t_{j-1}},N_j:1\leq j\leq k\}$ are jointly normal, meaning that any linear combination is again normal (or equivalently, they form a $2k$-dimensional multivariate normal distribution). Also we easily find that any two distinct RVs from $\mathcal{S}$ are pairwise uncorrelated. Being jointly normal, this implies that the RVs in $\mathcal{S}$ are mutually independent. So we are critically exploiting the property of joint normality here. – Sangchul Lee Sep 02 '19 at 22:06
  • I see. Yes, that's an important point... – Clement C. Sep 02 '19 at 22:13
  • In the very first inequality, how do you get $\forall t,\; |\varphi_X(t)-\varphi_Y(t)|^2\leq E(|X-Y|^2)$? I get that $|e^{itx}-e^{ity}|\leq |t||x-y|$ implies $$|E(e^{itX})-E(e^{itY})|^2\leq E(|e^{itX}-e^{itY}|)^2\leq |t|^2 E(|X-Y|)^2\leq |t|^2 E(|X-Y|^2)$$ – Gabriel Romon Sep 03 '19 at 09:32
  • @GabrielRomon, Ah, yes, I forgot to write the factor $|\xi|^2$. Thank you for pointing this out, I fixed it now. Thankfully, this does not affect the validity of the proof as we are essentially fixing $\xi\in\mathbb{R}$ to the very end. – Sangchul Lee Sep 03 '19 at 09:34
  • Sorry for my lack of reaction, I need more time to fully digest your answer. – Gabriel Romon Sep 08 '19 at 11:49
  • @GabrielRomon, No worries. Indeed I did not word out every detail and left some to the reader, so please let me know if you have any questions. – Sangchul Lee Sep 08 '19 at 13:32

An idea is to use the following result from Martingale Limit Theory and Its Application by Hall and Heyde (Theorem 3.2):

Let $(X_{n,i},\mathcal F_{n,i})_{1\leqslant i\leqslant k_n,n\geqslant 1 }$ be a martingale difference array where $X_{n,i}\in L^2$ and $\mathcal F_{n,i-1}\subset \mathcal F_{n,i}$ for all $n$ and $i$. Suppose that there exists a random variable $\eta^2$ which is a.s. finite and such that

  1. $\max_{1\leq i\leq k_n} \left\lvert X_{n,i}\right\rvert\to 0$ in probability;
  2. $\sup_{n\geqslant 1}\mathbb E\left[\max_{1\leq i\leq k_n}X_{n,i}^2\right]$ is finite;
  3. $\sum_{i=1}^{k_n}X_{n,i}^2\to \eta^2$ in probability.

Then $\sum_{i=1}^{k_n}X_{n,i}\to Z$ in distribution, where $Z=\eta N$, with $N$ standard normal and independent of $\eta$.

However, unfortunately, I am not sure whether this works here, because the sum of the conditional variances converges in law and not in probability.

We will use this result with $\mathcal F_{n,i}=\sigma(X_j,1\leq j\leq i)$, $k_n=n$ and $X_{n,i}=\frac 1n\left\lvert S_{i-1}\right\rvert (X_i^2-1)$.

  1. For a fixed $\varepsilon>0$, $$\mathbb P\left(\max_{1\leq i\leq n} \frac 1n\left\lvert S_{i-1}\right\rvert \left\lvert X_i^2-1\right\rvert>\varepsilon \right) \leqslant \sum_{i=2}^n \mathbb P\left( \left\lvert S_{i-1}\right\rvert \left\lvert X_i^2-1\right\rvert>n\varepsilon \right).$$ Using the independence between $S_{i-1}$ and $X_i$ and the fact that $S_{i-1}$ has a normal distribution with variance $i-1$, we get that
    $$\mathbb P\left( \left\lvert S_{i-1}\right\rvert \left\lvert X_i^2-1\right\rvert>n\varepsilon \right)=\mathbb P\left( \sqrt{i-1}\left\lvert X_1 \right\rvert \left\lvert X_2^2-1\right\rvert>n\varepsilon \right)\leq \mathbb P\left( \left\lvert X_1 \right\rvert \left\lvert X_2^2-1\right\rvert>n^{1/2}\varepsilon \right),$$ and 1. holds because $\left\lvert X_1 \right\rvert \left\lvert X_2^2-1\right\rvert$ is square-integrable, so that $n\,\mathbb P\left( \left\lvert X_1 \right\rvert \left\lvert X_2^2-1\right\rvert>n^{1/2}\varepsilon \right)\to 0$.

  2. It follows from the fact that $$\mathbb E[X_{n,i}^2]=\frac 1{n^2}(i-1)\mathbb E\left[(X_1^2-1)^2\right]$$ hence we even have finiteness of $\sup_{n\geqslant 1}\mathbb E\left[\sum_{1\leq i\leq k_n}X_{n,i}^2\right].$

  3. For the third condition, it would be better to deal with conditional variances. Let $$ \delta_n:= \frac 1{n^2}\sum_{i=2}^n\left( S_{i-1}^2 (X_i^2-1)^2-\mathbb E\left[S_{i-1}^2 (X_i^2-1)^2\mid \mathcal F_{n,i-1}\right]\right). $$ Then $\delta_n$ is a sum of martingale differences and we can check that $\mathbb E\left[\delta_n^2\right]\to 0$. Therefore, we have to look at the limit in probability of $$ A_n:=\frac 1{n^2}\sum_{i=1}^nS_{i}^2\overset{\text{law}}{=} \frac 1n \sum_{i=1}^n W_{i/n}^2, $$ where $(W_t)_{t\in [0,1]}$ is a standard Brownian motion. The latter quantity converges almost surely to $\int_0^1 W(t)^2\,dt=:\eta^2$, but since the equality with $A_n$ holds only in distribution, this gives convergence of $A_n$ in law, not in probability. Indeed, there is no convergence in probability: $$A_{2n}-A_n= \frac 1{4n^2}\sum_{i=n+1}^{2n}S_i^2-\frac{3}{4n^2}\sum_{i=1}^{n}S_i^2\overset{\text{law}}{=}\frac 1{2n}\sum_{i=n+1}^{2n}W_{i/(2n)}^2-\frac{3}{2n}\sum_{i=1}^{n}W_{i/(2n)}^2,$$ which converges in distribution to $\int_{1/2}^1 W_t^2\,dt-3\int_0^{1/2} W_t^2\,dt\neq 0$, so $(A_n)$ cannot be Cauchy in probability.
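
As a numerical illustration of this last point (a sketch with arbitrary parameters of my own choosing), one can estimate $\mathbb E[(A_{2n}-A_n)^2]$ for increasing $n$ and observe that it does not tend to $0$:

import numpy as np

# Sketch: A_n = (1/n^2) * sum_{i=1}^n S_i^2. If A_n converged in probability,
# E[(A_{2n} - A_n)^2] would tend to 0; numerically it does not shrink to 0.
rng = np.random.default_rng(3)
m = 5000  # number of independent walks

for n in [100, 400, 1600]:
    X = rng.standard_normal((m, 2 * n))
    S = np.cumsum(X, axis=1)
    A_n = (S[:, :n] ** 2).sum(axis=1) / n**2
    A_2n = (S**2).sum(axis=1) / (2 * n) ** 2
    print(n, np.mean((A_2n - A_n) ** 2))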

Davide Giraudo
  • That's... non-trivial, I wonder how this problem made it into that chapter of the problem book. Out of curiosity, does that lead to unit variance? (and what is the corresponding max of the pdf?) – Clement C. Aug 26 '19 at 16:02
  • What do you mean by unit variance? – Davide Giraudo Aug 26 '19 at 16:57
  • Just, as a sanity check, that the limiting distribution does have variance 1 (since that is the limit that's expected). – Clement C. Aug 26 '19 at 17:07
  • Yes. I realized that $\mathbb E\left[(X_1^2-1)^2\right]=3-2+1=2$ and the expectation of $\eta^2$ is $\int_0^1\mathbb E[W(t)^2]dt= \int_0^1t\,dt=1/2$. – Davide Giraudo Aug 26 '19 at 18:41