6

It's well known that, for a sequence of $n$ i.i.d. standard Gaussian random variables $X_1,\ldots,X_n$, where $X_\max=\max(X_1,\ldots,X_n)$, the following convergence result holds:

$$P\left(\lim_{n\rightarrow\infty}\frac{X_\max}{\sqrt{2\log n}}=1\right)=1$$

or, $\frac{X_\max}{\sqrt{2\log n}}\rightarrow1$ almost surely (for a proof of this convergence, see Example 4.4.1 in Galambos "Asymptotic Theory of Extreme Order Statistics").

I am wondering what happens to the following limit:

$$L=\lim_{n\rightarrow\infty}\left[\left(\frac{X_\max}{\sqrt{2\log n}}-1\right)f(n)\log(n)\right]$$ where $f(n)=o(1)$.

Is $L=0$ or infinite? Does it depend on $f(n)$? I am not sure how to deal with the indeterminate form here...
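(For intuition only, here is a small Monte Carlo sketch of the quantity in question, using the illustrative choice $f(n)=1/\log\log n$; the sample sizes and seed are arbitrary and this of course proves nothing about the limit.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch (illustration only): the normalized maximum and the
# scaled quantity from the question, with the illustrative choice
# f(n) = 1/log(log n), so that f(n) * log(n) = log(n) / log(log(n)).
for n in [10**3, 10**5, 10**7]:
    x_max = rng.standard_normal(n).max()
    ratio = x_max / np.sqrt(2 * np.log(n))
    scaled = (ratio - 1) * np.log(n) / np.log(np.log(n))
    print(f"n={n:>8}: X_max/sqrt(2 log n) = {ratio:.4f}, scaled = {scaled:+.4f}")
```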

M.B.M.
  • 5,406
  • Could you give a reference to the well known result? – Gautam Shenoy Nov 09 '13 at 06:35
  • I added the reference to the example in Galambos, where he proves this. Other books on extreme value theory have it, though many reference Galambos for the result (e.g. Example 3.5.4 in Embrechts et al., "Modelling Extremal Events for Insurance and Finance"). – M.B.M. Nov 09 '13 at 06:44
  • You'll probably need a faster rate of convergence of $f(n)$ to $0$. For $f(n)=e^{-n}$, $L=0$. But what if $f(n)$ converges really slowly, e.g., $f(n) = (\log n)^{-\epsilon}$? To show the limit exists, you'd probably need to show it converges to $0$ even when $f(n)=1$. I don't know whether the limit exists for $f(n)=1$. But I doubt it. – Will Nelson Nov 09 '13 at 07:31
  • @WillNelson I used the Borel-Cantelli lemma which you pointed out to me last week to investigate the almost sure convergence of this limit. For some reason, the answer that Did gave is inconsistent with what I found. I posted my work as an answer and am telling you about it since you seem to know this stuff well. It'd be great if you looked it over... Thanks! – M.B.M. Nov 12 '13 at 05:48

2 Answers

5

Let $M_n=\max\{X_k;1\leqslant k\leqslant n\}$ and let us first recall how the first-order asymptotics of $M_n$ is obtained. For every $x$, $$ P[M_n\leqslant x]=P[X_1\leqslant x]^n, $$ and standard estimates of the Gaussian tail show that, when $x\to\infty$, $$ P[X_1\gt x]=1/\theta(x),\qquad \theta(x)\sim x\sqrt{2\pi}\mathrm e^{x^2/2}. $$ Since $P[M_n\leqslant x]=(1-1/\theta(x))^n\approx\exp(-n/\theta(x))$, if $\theta(u_n)\ll n$ then $P[M_n\leqslant u_n]\to0$, while if $\theta(v_n)\gg n$ then $P[M_n\leqslant v_n]\to1$. This holds with $u_n=(1-\varepsilon)\sqrt{2\log n}$ and $v_n=(1+\varepsilon)\sqrt{2\log n}$, for every positive $\varepsilon$, hence $M_n/\sqrt{2\log n}$ converges in probability to $1$.
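(As a purely numerical aside, not part of the argument: one can sanity-check the tail estimate $P[X_1\gt x]\approx1/\theta(x)$ against the exact Gaussian tail, e.g. with scipy's `norm.sf`; the values of $x$ below are arbitrary.)

```python
import numpy as np
from scipy.stats import norm

# Sanity check (illustration only) of the tail estimate P[X_1 > x] ~ 1/theta(x),
# with theta(x) = x * sqrt(2*pi) * exp(x^2 / 2).
for x in [2.0, 4.0, 6.0]:
    exact = norm.sf(x)                                     # exact Gaussian tail
    approx = 1.0 / (x * np.sqrt(2 * np.pi) * np.exp(x**2 / 2))
    print(f"x={x:.0f}: exact={exact:.3e}  1/theta(x)={approx:.3e}  ratio={exact / approx:.4f}")
```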

To go further, assume that $x_n=(1+z_n)\sqrt{2\log n}$, with $z_n\to0$. Then, $$ n^{-1}\theta(x_n)\sim2\sqrt\pi\exp\left( (2z_n+z_n^2)\log n+\tfrac12\log\log n\right). $$ In particular, if $2z_n\log n=t-\tfrac12\log\log n$ for some fixed $t$, then $n^{-1}\theta(x_n)\sim\sqrt{4\pi}\mathrm e^{t}$, hence $P[M_n\leqslant x_n]\to\exp(-\mathrm e^{-t}/\sqrt{4\pi})$. This means that $$ T_n=2\log n\left(\frac{M_n}{\sqrt{2\log n}}-1\right)+\frac12\log\log n+\frac12\log(4\pi) $$ converges in distribution to a random variable $T$ such that, for every $t$, $$ P[T\leqslant t]=\exp(-\mathrm e^{-t}). $$ In particular, $$ U_n=\frac{\log n}{\log\log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)\to-\frac14\ \text{in probability.} $$

Edit: For every $n\geqslant2$, consider the random variable $$ V_n=\frac{\log n}{\log\log n}\left(\frac{X_n}{\sqrt{2\log n}}-1\right). $$ The asymptotics of the Gaussian tail used above show that, for every fixed $t$, $$ P[V_n\geqslant t]\sim\frac1{2\sqrt\pi\cdot n\cdot(\log n)^{1/2+2t}}. $$ If $t\lt1/4$, the series $\sum\limits_nP[V_n\geqslant t]$ diverges, hence the Borel-Cantelli lemma (difficult part) shows that, almost surely, $V_n\geqslant t$ for infinitely many $n$. Since $U_n\geqslant V_n$, almost surely $U_n\geqslant t$ for infinitely many $n$.
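(Again as a numerical aside, not part of the proof: the asymptotics $P[V_n\geqslant t]\sim\frac1{2\sqrt\pi\, n\,(\log n)^{1/2+2t}}$ can be sanity-checked directly against the exact Gaussian tail; the values of $t$ and $n$ below are arbitrary.)

```python
import numpy as np
from scipy.stats import norm

# Sanity check (illustration only) of P[V_n >= t] ~ 1 / (2*sqrt(pi)*n*(log n)^(1/2 + 2t)):
# V_n >= t exactly when X_n >= (1 + t*log(log n)/log n) * sqrt(2*log n).
t = 0.2
for n in [10**4, 10**6, 10**8]:
    log_n = np.log(n)
    level = (1 + t * np.log(log_n) / log_n) * np.sqrt(2 * log_n)
    exact = norm.sf(level)
    approx = 1 / (2 * np.sqrt(np.pi) * n * log_n ** (0.5 + 2 * t))
    print(f"n=10^{round(np.log10(n))}: exact={exact:.3e}  approx={approx:.3e}  ratio={exact / approx:.3f}")
```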

If $t\gt1/4$, the series $\sum\limits_nP[V_n\geqslant t]$ converges, hence the Borel-Cantelli lemma (easy part) shows that, almost surely, $V_n\leqslant t$ for every $n$ large enough. Thus, with positive probability, $V_n\leqslant t$ for every $n$, hence with positive probability $U_n\leqslant t$ for every $n$, and in particular $\limsup U_n\leqslant t$ with positive probability. Since $M_n\to\infty$ almost surely, $U_n$ asymptotically does not depend on $(X_i)_{i\leqslant k}$, for any $k$, hence $\limsup U_n$ is a tail random variable and, by Kolmogorov's zero-one law, $[\limsup U_n\leqslant t]$ has probability $0$ or $1$. Having positive probability, it has probability $1$.

Finally, $$ \limsup\limits_{n\to\infty}U_n=+\frac14\ \text{almost surely.} $$
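(A last numerical illustration: $U_n$ along a single simulated path, on a geometric grid of $n$. Finite-$n$ snapshots fluctuate and say nothing rigorous about the $\limsup$, which is an asymptotic statement; the seed and path length are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10**7

# U_n along one simulated path, evaluated on a geometric grid of n.
# Finite-n values fluctuate and prove nothing about limsup/liminf.
running_max = np.maximum.accumulate(rng.standard_normal(N))
for n in [10**3, 10**4, 10**5, 10**6, 10**7]:
    m = running_max[n - 1]
    u = (np.log(n) / np.log(np.log(n))) * (m / np.sqrt(2 * np.log(n)) - 1)
    print(f"n=10^{round(np.log10(n))}: U_n = {u:+.4f}")
```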

Did
  • 279,727
  • Thanks for the answer. I replicated your calculation until $T_n$. As I understand, to get it, you simply put $x_n=\left(1+t/(2\log n)-\frac{\log \log n}{4\log n}\right)\sqrt{2\log n}$ into $P[M_n\leq x_n]$ and move the terms until only $t$ is left on LHS of the inequality (and factor $\sqrt{4\pi}$ is gone). When I do that, I get $T_n=2\log n\left(\frac{M_n}{\sqrt{2\log n}}-1\right)+\frac{1}{2}\log \log n-\frac{1}{2}\log(4\pi)$. Constants aren't important, but I think the plus instead of minus in front of $\log \log n$ is (as that leads to convergence in probability). Did I follow you wrong? – M.B.M. Nov 11 '13 at 05:42
  • Good replication... :-) Indeed the previous version messed up some signs, the revised one should be all right. – Did Nov 11 '13 at 11:43
  • Thanks for the correction! I think the sign in front of $\frac{1}{2}\log(4\pi)$ is still reversed, but that term isn't important. Now I am wondering if this can be strengthened to almost surely... – M.B.M. Nov 11 '13 at 17:04
  • Sure about this sign? – Did Nov 11 '13 at 21:10
  • Oh, nevermind. :) I had a stray minus sign in there... – M.B.M. Nov 12 '13 at 00:26
  • I did some work on strengthening this to almost surely, and obtained a bound, which posted as an answer. I am drawing your attention to it because for some reason the bound I obtained is inconsistent with your answer (and I'd really like to learn why)... – M.B.M. Nov 12 '13 at 05:45
  • @M.B.M. Without having looked at this in much detail, how's your answer inconsistent with Did's? He basically proves convergence in probability to $-\frac{1}{4}$. Don't overlook the factor of $4$ in his final result. – Will Nelson Nov 12 '13 at 06:15
  • Aha!!! I completely overlooked the factor of 4, thanks for pointing it out. Now, if only there was a way to match the upper bound to the lower... – M.B.M. Nov 12 '13 at 06:53
-1

Following @Did's answer, take $f(n)=(\log \log n)^{-1}$ in the question's limit, so that $f(n)\log n=\frac{\log n}{\log\log n}$. Also, for consistency with the existing literature (and Did's answer), let $M_n=\max\{X_k;1\leq k\leq n\}$.

Theorem: Almost surely, $$\limsup_{n\rightarrow\infty}\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)\leq\frac{1}{4}\quad\text{and}\quad\liminf_{n\rightarrow\infty}\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)\geq-\frac{1}{4}.$$ In particular, if the limit exists, it lies in $\left[-\frac{1}{4},\frac{1}{4}\right]$ almost surely.

First we state a couple of helpful lemmas from Embrechts, Klüppelberg and Mikosch's "Modelling Extremal Events". The first follows from the Borel-Cantelli lemma; a sketch of the proof of the second is on page 170 there, with references to the full proof.

Lemma 1 (abridged Theorem 3.5.1 in Embrechts et al.): Suppose $(u_n)$ is non-decreasing. Then $\sum_{n=1}^\infty P(X_1>u_n)<\infty$ implies that $P(M_n>u_n~~\text{i.o.})=0$.

Lemma 2 (abridged Theorem 3.5.2 in Embrechts et al.): Suppose $(u_n)$ is non-decreasing and that the following conditions hold: $P(X_1\geq u_n)\rightarrow 0$ and $nP(X_1\geq u_n)\rightarrow \infty$. Then $\sum_{n=1}^\infty P(X_1>u_n)\exp[-nP(X_1>u_n)]<\infty$ implies that $P(M_n\leq u_n~~\text{i.o.})=0$.

(i.o. here means "infinitely often").

Proof: We prove the upper bound first, then the lower bound.

For the upper bound, let $u^u_n=\sqrt{2\log n}+\frac{\sqrt{2}(\frac{1}{4}+\epsilon)\log\log n}{\sqrt{\log n}}$ where $\epsilon>0$. Using the standard approximation for the Gaussian tail, $P(X_1>x)\approx\frac{1}{\sqrt{2\pi}x}e^{-x^2/2}$, we obtain: $$\begin{array}{rcl}P(X_1>u^u_n)&\approx&\frac{\exp\left[-\log n -2(\frac{1}{4}+\epsilon)\log\log n-\mathcal{O}\left(\frac{(\log\log n)^2}{\log n}\right)\right]}{\sqrt{2\pi}\left(\sqrt{2\log n}+\mathcal{O}\left(\frac{\log\log n}{\sqrt{\log n}}\right)\right)}\\ &=&\frac{1}{C_1n(\log n)^{\frac{1}{2}+2(\frac{1}{4}+\epsilon)}} \end{array}$$ where $C_1$ (which tends to $2\sqrt{\pi}$) absorbs the constants and the lower-order terms. Using the Cauchy condensation test, one can easily check that: $$\sum_{n=1}^\infty n^{-1}(\log n)^{-1-2\epsilon}<\infty$$ By Lemma 1, $P(M_n>u^u_n~~\text{i.o.})=0$. A few arithmetic manipulations yield that: $$\tag{1}P\left(\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)>\frac{1}{4}+\epsilon~~\text{i.o.}\right)=0$$
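Explicitly, the manipulation behind (1) is the rearrangement $$M_n>u^u_n\iff\frac{M_n}{\sqrt{2\log n}}>1+\left(\frac{1}{4}+\epsilon\right)\frac{\log\log n}{\log n}\iff\frac{\log n}{\log\log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)>\frac{1}{4}+\epsilon,$$ and the same rearrangement with $u^l_n$ and the reversed inequality gives (2) below.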

For the lower bound, let $u^l_n=\sqrt{2\log n}-\frac{\sqrt{2}(\frac{1}{4}+\epsilon)\log\log n}{\sqrt{\log n}}$ where $\epsilon>0$. Again using the same approximation for the Gaussian tail, we obtain: $$\begin{array}{rcl}P(X_1>u^l_n)&\approx&\frac{\exp\left[-\log n +2(\frac{1}{4}+\epsilon)\log\log n-\mathcal{O}\left(\frac{(\log\log n)^2}{\log n}\right)\right]}{\sqrt{2\pi}\left(\sqrt{2\log n}+\mathcal{O}\left(\frac{\log\log n}{\sqrt{\log n}}\right)\right)}\\ &=&\frac{(\log n)^{2\epsilon}}{C_2n} \end{array}$$ where $C_2$ (which tends to $2\sqrt{\pi}$) absorbs the constants and the lower-order terms. Clearly $P(X_1>u^l_n)\rightarrow 0$ and $nP(X_1>u^l_n)\rightarrow\infty$, so the conditions of Lemma 2 are satisfied. Now,

$$P(X_1>u^l_n)\exp[-nP(X_1>u^l_n)]=\frac{(\log n)^{2\epsilon}}{C_2 ne^{(\log n)^{2\epsilon}/C_2}}$$

Again employing the Cauchy condensation test we can show that: $$\sum_{n=1}^\infty \frac{(\log n)^{2\epsilon}}{C_2 ne^{(\log n)^{2\epsilon}/C_2}}<\infty$$ By Lemma 2, $P(M_n<u^l_n~~\text{i.o.})=0$. A few arithmetic manipulations (mirroring those behind (1)) yield that: $$\tag{2}P\left(\frac{\log n}{\log \log n}\left(\frac{M_n}{\sqrt{2\log n}}-1\right)<-\frac{1}{4}-\epsilon~~\text{i.o.}\right)=0$$ Since $\epsilon>0$ is arbitrary, combining (1) and (2) yields the desired result.

Remark: This only bounds the fluctuations: it shows that $\limsup$ is $\leq \frac{1}{4}$ and $\liminf$ is $\geq -\frac{1}{4}$ almost surely (as Did points out in the comments), so if the limit exists it lies in $\left[-\frac{1}{4},\frac{1}{4}\right]$. I wonder if one can determine whether the limit in fact converges to some fixed number $C\in\left[-\frac{1}{4},\frac{1}{4}\right]$ or "bounces around" (akin to a sine wave). Perhaps Did's convergence in probability to $-\frac{1}{4}$ can somehow be leveraged here?

M.B.M.
  • 5,406
  • Actually you show that the limsup is $\leqslant1/4$ and the liminf $\geqslant-1/4$ almost surely, not that the limit exists. – Did Nov 12 '13 at 11:02
  • Right. My language was off and I adjusted the statement of the theorem and the remark after it. Is there any way to show that this limit exists? In the standard textbook problems $\limsup$ and $\liminf$ are the same so $\lim$ exists; this problem isn't out of a textbook... but we do have the convergence in probability that you showed, perhaps that helps somehow? – M.B.M. Nov 12 '13 at 15:05
  • FWIW, the "statement of the theorem" is not "adjusted" at all and still frankly wrong. – Did Sep 15 '16 at 06:41