
If $(X_n)$ is a sequence of random variables with $X_n\sim\mathsf{N}(\mu_n,\sigma^2_n)$ that converges in law to $X$, then $X\sim\mathsf{N}(\mu,\sigma^2)$, where $\mu=\lim\limits_{n\rightarrow\infty}\mu_n$ and $\sigma^2=\lim\limits_{n\rightarrow\infty}\sigma^2_n$.

There are quite a few posts relating to the above theorem, such as the ones here and here, and while they go into quite some detail, my analysis is quite rusty and I still have some big gaps in understanding, so I would really appreciate it if anyone could help me address the issues I have. Thanks in advance.

Let $\varphi(\cdot)$ denote the characteristic function of a random variable. Convergence in law then implies that $\varphi_{X_n}(t)=e^{it\mu_n-\frac{1}{2}t^2\sigma_n^2}\xrightarrow[n\to\infty]{}\varphi_X(t)$ for every $t$. Therefore $|\varphi_{X_n}(t)|=e^{-\frac{1}{2}t^2\sigma_n^2}\xrightarrow[n\to\infty]{}|\varphi_X(t)|$, and from this we need to infer that $(\sigma_n^2)$ and $(\mu_n)$ are convergent sequences.
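To spell out the modulus step (nothing deep, just the fact that $|e^{it\mu_n}|=1$ for real $t$ and $\mu_n$):
$$|\varphi_{X_n}(t)|=\bigl|e^{it\mu_n}\bigr|\,e^{-\frac{1}{2}t^2\sigma_n^2}=e^{-\frac{1}{2}t^2\sigma_n^2}.$$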

I fully understand why $(X_n)$ is a tight sequence (Lévy’s continuity theorem) and why tightness implies that $(\sigma_n^2)$ and $(\mu_n)$ are bounded sequences.

  1. However, I have seen some texts where, for $(\sigma_n^2)$, tightness is not used and it is just said that $(\sigma_n^2)$ has to be convergent because the limit has to be continuous. My first issue is understanding this argument, as it is implied to be quite obvious but I don't see it, so below I outline my understanding.

    • Continuity argument: As characteristic functions are (uniformly) continuous and $\varphi_X(0)=1$, $\varphi_X$ is non-vanishing in a region around zero. Then $(\sigma_n^2)$ has to be bounded, otherwise $\limsup_n \sigma^2_n=\infty$ would imply $|\varphi_{X_n}(t)|\rightarrow 0$ (along a subsequence) in a region where $\varphi_{X}(t)\neq 0$. This then permits us to apply logarithms, and for any convergent subsequence $(\sigma_{n_k}^2)$ we have $\sigma_{n_k}^2\xrightarrow[k\to\infty]{}-2\ln|\varphi_{X}(1)|$, which then implies $\sigma_{n}^2\xrightarrow[n\to\infty]{}-2\ln|\varphi_{X}(1)|=\sigma^2$. Is this correct? I am not very confident that my reasoning on why it is safe to apply logarithms, i.e. why $\varphi_X\neq 0$, is correct.
  2. Now by tightness $(\mu_n)$ must be bounded. Then for any convergent subsequence $(\mu_{n_k})$ with limit $\mu$, we have $$e^{it\mu_{n_k}}\xrightarrow[k\to\infty]{}e^{\frac{1}{2}t^2\sigma^2}\varphi_X(t)=e^{it\mu}.$$ So if $(\mu_{n_{k'}})$ is another subsequence with limit $\mu'$, then we have $e^{it(\mu-\mu')}=1$ for all $t$; taking $t=1$, this gives $\mu-\mu'=2\pi p$ for some $p\in\mathbb{Z}$, so how do we infer that $p=0$? I know in 1 a choice of $t$ is chosen so that we can be in the domain of the principal logarithm, but I'm still a bit hazy on how this uniquely determines $\mu$. Is it because this has to be true for all $t$, so that the carefully chosen $t$ which allows us to apply the principal logarithm forces the value of $\mu$ to be uniquely determined? Any further explanation is greatly appreciated.

  3. I really don't know why $\sigma^2\neq 0$. As $\sigma^2=0$ iff $|\varphi_X(1)|=1$, why is it that $|\varphi_X(1)|\neq1$?

Any feedback is welcomed, thanks.

2 Answers


$e^{-\frac 1 2 t^{2}\sigma_n^{2}} \to |\varphi_X (t)|$ for all $t$. If $t>0$ is sufficiently small then $|\varphi_X (t)| >0$ [by continuity of the characteristic function and the fact that characteristic functions take the value $1$ at $0$]. Fix such a number $t$ and use continuity of the logarithm to conclude that $\sigma_n^{2} \to -\frac 2 {t^{2}} \ln |\varphi_X (t)|$. Hence $(\sigma_n^{2})$ is convergent.
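Spelling out the limit for such a fixed $t$ (a small restatement of the step above, nothing more):
$$-\tfrac12 t^{2}\sigma_n^{2}=\ln|\varphi_{X_n}(t)|\xrightarrow[n\to\infty]{}\ln|\varphi_X(t)|,\qquad\text{so}\qquad \sigma_n^{2}\xrightarrow[n\to\infty]{}-\frac{2}{t^{2}}\ln|\varphi_X(t)|.$$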

An elementary lemma in complex analysis says that if $(c_n)$ is a sequence of real numbers such that $e^{itc_n}$ converges for all real numbers $t$, then $(c_n)$ is itself convergent. [In fact it is enough if convergence holds for all $t$ in some set of positive Lebesgue measure.]
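One way to see this when convergence holds at every real $t$ (a sketch; the details are worked out in the comments below): if $e^{itc_n}$ converges for every $t$, then $e^{it(c_n-c_m)}\to1$ as $n,m\to\infty$ for every $t$, and by bounded convergence
$$\int_{-1}^{1}e^{it(c_n-c_m)}\,dt=\frac{2\sin(c_n-c_m)}{c_n-c_m}\longrightarrow\int_{-1}^{1}1\,dt=2,$$
which forces $c_n-c_m\to0$, i.e. $(c_n)$ is Cauchy.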

Since $e^{it\mu_n}=\varphi_{X_n}(t)\,e^{t^{2}\sigma_n^{2}/2}\to\varphi_X(t)\,e^{t^{2}\sigma^{2}/2}$ for every $t$, this lemma shows that $(\mu_n)$ is also convergent.

Now, taking limits, we see that $\varphi_X (t)=\lim_n \varphi_{X_n} (t)=e^{i\mu t} e^{-t^{2}\sigma^{2}/2}$, where $\sigma^{2}=\lim \sigma_n^{2}$ and $\mu =\lim \mu_n$. Hence, by uniqueness of characteristic functions, $X$ is normal with mean $\mu$ and variance $\sigma^{2}$.

What happens when $\sigma_n \to 0$? In this case $|\varphi_X(t)|=1$ for all $t$, and this implies that $X$ is a constant random variable. Constants are considered normal with variance $0$.
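To make the degenerate case concrete (this just combines the limits already computed above): if $\sigma_n^{2}\to0$ and $\mu_n\to\mu$, then
$$\varphi_X(t)=\lim_{n\to\infty}e^{it\mu_n-\frac12 t^{2}\sigma_n^{2}}=e^{it\mu},$$
which is the characteristic function of the point mass at $\mu$, i.e. the degenerate normal $\mathsf N(\mu,0)$.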

  • The only theorem from Probability Theory you require is Levy's Continuity Theorem. No need to worry about any tightness conditions. – Kavi Rama Murthy Oct 02 '19 at 08:34
  • Thanks, can you give more a reference for that lemma or some notes on how to prove it? – user152874 Oct 02 '19 at 21:45
  • @user152874 Some hints: $e^{it(c_n-c_m)} \to 1$. Integrate from $-1$ to $1$ w.r.t. $t$ to see that $\frac {\sin (c_n-c_m)} {c_n-c_m} \to 1$. From this conclude that $c_n-c_m \to 0$ so ${c_n}$ is a Cauchy sequence. – Kavi Rama Murthy Oct 02 '19 at 23:17
  • Thanks, but still a bit stuck and confused, just to clarify do you mean $\lim_{n\rightarrow\infty}\lim_{m\rightarrow\infty}\exp[it(c_n-c_m)]=1$, and similarly for the other limits? Would really appreciate more detail. Also do the next steps employ the Residue Theorem? – user152874 Oct 03 '19 at 00:02
  • @user152874 Yes, I am taking limits w.r.t. $n$ and $m$. No, you don't require anything as sophisticated as the residue theorem. You only need elementary properties of $\sin$ and $\cos$. – Kavi Rama Murthy Oct 03 '19 at 00:13
  • Sorry, but I still can't see it. Obviously $\lim_{\xi\rightarrow 0} \frac{\sin\xi}{\xi}=1$, but I am still not sure how to show that $\lim_{n\rightarrow \infty} \frac{\sin x_n}{x_n}=1$ must imply $x_n\rightarrow 0$. The only argument I can think of is a proof by contradiction, but that has to be broken up into cases where $(x_n)$ is divergent or converges to a nonzero number, and I've not been successful. Also, are you using iterated limits, not the double limit? Thanks again for all your help. – user152874 Oct 03 '19 at 21:44
  • Also don't we need the double limit $\lim_{n,m\rightarrow \infty}|c_n-c_m|=0$, as don't we need some uniform convergence so that the double limit is the same as the iterated limits. Thanks again. – user152874 Oct 03 '19 at 21:45
  • @user152874 $1-\frac {\sin t} t \to 1$ as $t \to \infty$, and $1-\frac {\sin t} t > 0$ for all $t >0$ (it is also an even continuous function). Can you use these facts to conclude that $\frac {\sin t_n} {t_n} \to 1$ implies $t_n \to 0$? – Kavi Rama Murthy Oct 03 '19 at 23:12
  • Thank you for your patience and time. I see how out of practice my analysis is and appreciate your remedial lessons. I haven't had much time to mull over this but following your hints I am still not there.

    To verify your 1st claim: Let $f(t)=t-\sin t$. Then $f'(t)=1-\cos t\geq 0$, so as $f(0)=0$ we have $f(t)\geq 0$ for $t\geq 0$, and so for any $t>0$, $1-\frac{\sin t}{t}\geq 0$, which then also holds for $t<0$ as $1-\frac{\sin t}{t}$ is even.

    – user152874 Oct 05 '19 at 08:43
  • Secondly, the R.T.P. (required to prove) is $\frac{\sin x_n}{x_n}\rightarrow 1\Rightarrow x_n\rightarrow 0$, i.e. $(\forall\epsilon> 0)(\exists N)(n\geq N\Rightarrow |x_n|<\epsilon)$.

    So for an $\epsilon>0$ and for large enough $n$, $1-\frac{\sin x_n}{x_n}=|1-\frac{\sin x_n}{x_n}|<\epsilon$, which then implies for $x_n>0$ that $1-\epsilon<\frac{\sin x_n}{x_n}<\frac{1}{x_n}$, i.e. $0<x_n<\frac{1}{1-\epsilon}$. And I am stuck here, as this works only if $x_n>0$ and $\epsilon<1$. I tried playing around with $x_n<0$ (so $-\frac{\sin x_n}{x_n}\leq -\frac{1}{x_n}$) and $-\epsilon<1-\frac{\sin x_n}{x_n}$.

    – user152874 Oct 05 '19 at 08:44
  • Also I am still not sure whether the iterated limits and double limits coincide in this case, as don't you require the double limit $\lim_{n,m\rightarrow \infty}|c_n-c_m|=0$ to conclude it is Cauchy? Or will the iterated limits work, i.e. $\lim_{n\rightarrow \infty}\lim_{m\rightarrow \infty}|c_n-c_m|$? Also, do you have any references for the original lemma and similar results, as I would like to read over them and get more practice, and there must be a less roundabout way to show this. – user152874 Oct 05 '19 at 08:45
  • Additionally, you mentioned this is true for any set with positive Lebesgue measure, but here you required an interval, viz. $[-1,1]$, so what about a case such as the Smith–Volterra–Cantor set, which has positive measure but does not contain an interval, and is in particular nowhere dense? How would you go about it then? Any texts you can point me to that look at this? Thanks again. – user152874 Oct 05 '19 at 08:46
  • @user152874 Surely the case where the limit holds for $t$ in some set of positive measure is a little more complicated. In the present question we have convergence for every real number $t$, so I gave a simple argument where we integrate w.r.t. $t$ from $-1$ to $1$. – Kavi Rama Murthy Oct 05 '19 at 11:31
  • Thanks for getting back to me. Yes, I do realise that the argument for the general case must be different, but I am curious to see it, which is why I asked for a reference I could look up. But back to the case where it holds for every $t$: from my previous comments I am really stuck showing $\frac{\sin x_n}{x_n}\rightarrow 1\Rightarrow (\forall\epsilon> 0)(\exists N)(n\geq N\Rightarrow |x_n|<\epsilon)$. Please can you elaborate more and help me fill in the blanks? Also, can you please explain whether we need only the iterated limits or the double limit, and if so, don't we need some uniform convergence for them to coincide? – user152874 Oct 06 '19 at 05:56
  • Try to show that $\inf \{|\frac {\sin t} t-1|: t \geq \epsilon\} >0$. Call this infimum $\delta$. Then $t>0$ and $|\frac {\sin t} t-1| <\delta$ implies that $t <\epsilon$. @user152874 – Kavi Rama Murthy Oct 06 '19 at 06:02
  • Thank you, I finally see it; with your definition of $\delta$ it is obvious. But just to verify: as $g(t):=1-(\sin t)/t\rightarrow 1$, $\exists M>0$ s.t. $t>M\Rightarrow 1/2<g(t)<3/2$. So if $\epsilon>M$ then $\delta>0$. And if $\epsilon\leq M$, then as $[\epsilon,M]$ is compact and $g$ continuous, the infimum over $[\epsilon,M]$ is attained, i.e. it lies in $g([\epsilon,M])$; also $g(t)>0$ on $\mathbb{R}\setminus\{0\}$, thus $\delta>0$. Thanks again and apologies for my slowness. – user152874 Oct 07 '19 at 20:02

$1.)$

You never have any danger in applying the logarithm. You know $\sigma_n^2=-\frac{2}{t^2}\log|\varphi_{X_n}(t)|$ always makes sense for $t\neq 0$ (the characteristic function here never vanishes), and then, by mapping back and forth via $\exp(x)$, you see that for a sequence of positive numbers $a_n$, we have $a_n\to \infty$ if and only if $\log a_n\to\infty$, and $a_n\to 0$ if and only if $\log a_n\to -\infty$, so there is, indeed, nothing to worry about.

Then, applying your argument works just fine for establishing that $\sigma_n^2$ indeed doesn't go to $\infty$.
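Concretely (this just combines the two equivalences above with continuity of $\varphi_X$ at $0$): for a fixed small $t\neq0$ with $|\varphi_X(t)|>0$,
$$\sigma_n^2\to\infty\ \Longleftrightarrow\ |\varphi_{X_n}(t)|=e^{-\frac12 t^2\sigma_n^2}\to0,$$
which would contradict $|\varphi_{X_n}(t)|\to|\varphi_X(t)|>0$; and once $\log|\varphi_{X_n}(t)|$ converges to a finite limit, so does $\sigma_n^2=-\frac{2}{t^2}\log|\varphi_{X_n}(t)|$.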

$2.)$

Here, it's a lot easier to prove that the statement is an "if and only if" for weak convergence. Scheffé's lemma tells you that if $\mu_n$ and $\sigma_n^2$ are convergent, then your Gaussian sequence converges weakly, and the limit is Gaussian with the corresponding parameters (with the caveat that $\sigma^2$ could be $0$, in which case, see below).
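A sketch of the Scheffé step, assuming $\mu_n\to\mu$ and $\sigma_n^2\to\sigma^2>0$: the densities converge pointwise,
$$f_n(x)=\frac{1}{\sqrt{2\pi\sigma_n^2}}\,e^{-\frac{(x-\mu_n)^2}{2\sigma_n^2}}\longrightarrow f(x)=\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}\quad\text{for every }x,$$
so Scheffé's lemma gives $\int|f_n-f|\,dx\to0$, i.e. convergence in total variation, which in turn implies weak convergence to $\mathsf N(\mu,\sigma^2)$.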

Having shown this, you know that $\mu_n$ has a convergent subsequence, and any two convergent subsequences of $\mu_n$ must have the same limit by the above (since weak limits are unique, and we assumed that our variables converge in law, i.e. their distributions converge weakly). Now, it's a general result that if you have a sequence of real numbers $a_n$ and a real number $a$ such that every subsequence of $a_n$ has $a$ as a point of condensation, then $a_n$ converges to $a$. Combining these two results gives the desired conclusion without appealing to any complex analysis.
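A short sketch of that general result, just to fill in the step: if $a_n\not\to a$, there exist $\varepsilon>0$ and a subsequence $(a_{n_k})$ with $|a_{n_k}-a|\geq\varepsilon$ for all $k$; but then $a$ cannot be a point of condensation (accumulation point) of that subsequence, a contradiction.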

$3.)$

It is, indeed, possible that the limit $\sigma^2$ is $0$, and then the limit distribution is just a point mass (just apply Chebyshev's inequality to check that this is true for any sequence of random variables with convergent means and variances tending to $0$).
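Spelling out the Chebyshev step under those assumptions ($\mu_n\to\mu$, $\sigma_n^2\to0$): for any $\varepsilon>0$,
$$P(|X_n-\mu_n|\geq\varepsilon)\leq\frac{\sigma_n^2}{\varepsilon^2}\longrightarrow0,$$
so $X_n-\mu_n\to0$ in probability; combined with $\mu_n\to\mu$ this gives $X_n\to\mu$ in probability, and hence in distribution to the point mass at $\mu$.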