
Note: $\log = \ln$.

Suppose $X_1, \dots, X_n \sim \text{Pareto}(\alpha, \beta)$ are independent, with $n > \dfrac{2}{\beta}$. The Pareto$(\alpha, \beta)$ pdf is $$f(x) = \beta\alpha^{\beta}x^{-(\beta +1)}I(x > \alpha)\text{, } \alpha, \beta > 0\text{.}$$ Define $W_n = \dfrac{1}{n}\sum\log(X_i) - \log(X_{(1)})$, where $X_{(1)} = \min_i X_i$ is the first order statistic.
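For concreteness, here is a minimal simulation sketch of this setup (the values $\alpha = 2$, $\beta = 3$, the seed, and the use of NumPy are illustrative choices of mine, not part of the problem; sampling uses the inverse-CDF transform $X = \alpha U^{-1/\beta}$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, n = 2.0, 3.0, 100_000  # illustrative values, not part of the problem

# Pareto(alpha, beta) via inverse transform: if U ~ Unif(0,1),
# then X = alpha * U**(-1/beta) has survival function (alpha/x)^beta.
x = alpha * rng.random(n) ** (-1 / beta)

w_n = np.log(x).mean() - np.log(x.min())  # W_n as defined above
print(1 / w_n)                            # should be close to beta = 3
```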

I wish to show $$\sqrt{n}(W_{n}^{-1}-\beta)\overset{d}{\to}\mathcal{N}(0, v^2)$$ as $n \to \infty$ (convergence in distribution) for some $v^2$.

Here's what I've already shown:

  1. $W_n \overset{p}{\to}\beta^{-1}$.
  2. $X_{(1)} \overset{p}{\to} \alpha$.
  3. The expected value and variance of $\log(X_i)$ for each $i$, and the same for $X_{(1)}$.

It seems clear that I need to use the Delta method here, but that would require showing $$\sqrt{n}(W_n - \beta^{-1}) \overset{d}{\to}\mathcal{N}(0, \text{something})\text{.}$$ I suppose I could approach this via the Central Limit Theorem, but it isn't clear to me how. By the CLT, $$\sqrt{n}\left[\dfrac{1}{n}\sum\log(X_i) - \underbrace{\left(\beta^{-1}+\log(\alpha) \right)}_{\mathbb{E}[\log(X_i)]} \right]\overset{d}{\to}\mathcal{N}\left(0, \underbrace{\beta^{-2}}_{\text{Var}(\log(X_i))}\right)\text{,}$$ and by the continuous mapping theorem $$\log(X_{(1)}) \overset{p}{\to} \log(\alpha)\text{,}$$ but I'm stuck as to how to proceed from here.
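As a numeric sanity check on the CLT display above, here is a hedged sketch (same illustrative $\alpha$, $\beta$ as before):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, n, reps = 2.0, 3.0, 2_000, 5_000  # illustrative values

# reps independent samples of size n; each row is one replication
x = alpha * rng.random((reps, n)) ** (-1 / beta)
z = np.sqrt(n) * (np.log(x).mean(axis=1) - (1 / beta + np.log(alpha)))
print(z.mean(), z.var())  # should be near 0 and beta^{-2} = 1/9
```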

Clarinetist

2 Answers


You actually need a slightly stronger result, namely $\sqrt{n}(\log(X_{(1)}) - \log \alpha) \xrightarrow{P} 0$; I am unsure whether this actually holds, but you should be able to deduce it from your existing results.

If this is correct, then combining the CLT with Slutsky's theorem gives $$\sqrt{n}(W_n - \beta^{-1}) = \sqrt{n}\frac{1}{n} \sum \limits_{i = 1}^n \{\log(X_i) - \mathbb{E}[\log X_i]\} + \sqrt{n}(\log \alpha - \log(X_{(1)}))\xrightarrow d \mathcal{N}(0, \beta^{-2}).$$
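A quick Monte Carlo check of this limit, with illustrative parameter values (not part of the question):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n, reps = 2.0, 3.0, 2_000, 5_000  # illustrative values

x = alpha * rng.random((reps, n)) ** (-1 / beta)
w = np.log(x).mean(axis=1) - np.log(x.min(axis=1))  # W_n per replication
z = np.sqrt(n) * (w - 1 / beta)
print(z.mean(), z.var())  # variance should be near beta^{-2} = 1/9
```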

Applying the Delta-method to the function $g(x) = \frac{1}{x}$ then yields $$\sqrt{n}(g(W_n) - g(\beta^{-1})) \xrightarrow d \mathcal{N}(0, g'(\beta^{-1})^2 \beta^{-2}) = \mathcal{N}(0, \beta^2).$$
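And a matching check of the delta-method conclusion, again with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, n, reps = 2.0, 3.0, 2_000, 5_000  # illustrative values

x = alpha * rng.random((reps, n)) ** (-1 / beta)
w = np.log(x).mean(axis=1) - np.log(x.min(axis=1))
z = np.sqrt(n) * (1 / w - beta)  # the delta-method statistic
print(z.mean(), z.var())         # variance should be near beta^2 = 9
```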

Edit: As zhoraster pointed out, the convergence in the beginning of my post does hold. For a simple proof, note that from the formulae in the comments of the question we get $E[(X_{(1)} - \alpha)^2] = O(n^{-2})$, which by Chebyshev's inequality is enough to show $\sqrt{n}(X_{(1)} - \alpha) \xrightarrow d 0$. The delta-method then yields $\sqrt{n}(\log X_{(1)} - \log \alpha) \xrightarrow d 0$; since this limit is constant, the latter convergence also holds in probability.
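For a numeric illustration of that moment bound: using the standard fact that $X_{(1)}$ is itself Pareto$(\alpha, n\beta)$, the second moment works out to $E[(X_{(1)} - \alpha)^2] = \frac{2\alpha^2}{(n\beta - 1)(n\beta - 2)}$ (finite precisely when $n > 2/\beta$), so a simulation can check the $O(n^{-2})$ rate directly; the sketch below uses illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, beta, reps = 2.0, 3.0, 2_000  # illustrative values

for n in (10, 100, 1_000):
    x_min = (alpha * rng.random((reps, n)) ** (-1 / beta)).min(axis=1)
    mc = np.mean((x_min - alpha) ** 2)                       # Monte Carlo estimate
    exact = 2 * alpha**2 / ((n * beta - 1) * (n * beta - 2))  # closed form
    print(n, mc, exact, n**2 * exact)  # n^2 * E[...] stays bounded: O(n^-2)
```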

Dominik
  • The convergence does hold. For a distribution having a smooth density on $[a,+\infty)$, non-vanishing at $a$, the first order statistic is $a + O_P(n^{-1})$. A density singular at $0$ can only improve this. – zhoraster Dec 18 '16 at 20:38
  • @zhoraster I'm not very familiar with big-$O$ notation. Could you clarify this? – Clarinetist Dec 18 '16 at 20:41
  • 2
    @Clarinetist, $\delta_n = O_P(a_n)$ means that ${\delta_n/a_n}$ is bounded in probability, i.e. $$\sup_{n\ge 1} P(|\delta_n/a_n|>C)\to 0,\ C\to\infty.$$ – zhoraster Dec 18 '16 at 20:44
  • @zhoraster Hate to say that I haven't seen that definition before, but it makes sense. How would one go about proving the claim you have about the first order statistic being $a + O_{P}(n^{-1})$? I don't need anything extensive for this. I don't recall the course (w.r.t. convergence concepts) that I took covering anything other than WLLN, CLT, Slutsky, continuous mapping, and taking the MGF and letting $n \to \infty$, just so you understand where I'm at in this material. – Clarinetist Dec 18 '16 at 21:03
  • @Clarinetist I have added a proof based on the formulae in the comments of the question. – Dominik Dec 18 '16 at 21:07

Proving the convergence $\sqrt{n}(X_{(1)}-\alpha)\overset{P}{\rightarrow} 0$ (the version with logarithms then follows, since $\log$ is differentiable at $\alpha$): because $X_{(1;n)} \ge \alpha$ almost surely, $$ P\big(n|X_{(1;n)}-\alpha|>C\big) = P\big(X_{(1;n)}>\alpha+ C n^{-1}\big) = \alpha^{\beta n} (\alpha+C n^{-1})^{-\beta n} = (1+C\alpha^{-1}n^{-1})^{- \beta n}\to e^{-\alpha^{-1}\beta C},\ n\to\infty. $$ It follows that $\{n(X_{(1;n)}-\alpha),n\ge 1\}$ is stochastically bounded, hence $\sqrt{n}(X_{(1;n)}-\alpha) = n(X_{(1;n)}-\alpha)/\sqrt{n}\overset{P}{\rightarrow} 0$.
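A short Monte Carlo check of this tail limit (the values of $\alpha$, $\beta$, and $C$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta, C, reps = 2.0, 3.0, 1.0, 20_000  # illustrative values

for n in (10, 100, 1_000):
    x_min = (alpha * rng.random((reps, n)) ** (-1 / beta)).min(axis=1)
    emp = np.mean(n * (x_min - alpha) > C)   # empirical tail probability
    print(n, emp, np.exp(-beta * C / alpha))  # limit: e^{-beta*C/alpha}
```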

zhoraster