
Let $X_1,\ldots,X_n$ be independent random variables with common density $f(x;\theta,\gamma) = \frac{\theta x^{\theta-1}}{\gamma^\theta}\,1(0 \le x \le \gamma)$, and let $(S,T) = \left(\prod_{i=1}^{n-1} T_i,\,T_n\right)$, where $T_1 \le \cdots \le T_n$ are the order statistics of the sample.

Find the marginal PDF of $-\log(S)+ (n-1)\log(T) = - \sum_{i = 1}^{n-1} \log(T_i)+(n-1)\log(T_n)$, and hence the UMVUE for $\frac{1}{\theta}$.

My issue here is that I don't know exactly how to go about this exercise. Namely, do I first need to find the joint PDF of $(S,T)$ and then go on to find the marginal of the statistic above, which would involve finding yet another joint PDF?

What I mean is: to find the joint PDF of $(S,T)$ by the transformation-of-random-variables method, I would need to introduce new variables, $n-2$ of them, and then integrate them out until only $(S,T)$ remains. I would then need to do this again for $-\log(S)+ (n-1)\log(T)$, although at that stage I would only need to introduce one new transformation of $S$ and/or $T$.

It seems perhaps I don't need to do all that, or at the very least there is a more efficient way in this scenario. As for the UMVUE, I am still unpracticed with these, so I don't immediately see why solving this gives the UMVUE for $\frac{1}{\theta}$; I could use an explanation of that as well.
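
In case it is useful, here is a small Monte Carlo sketch I put together (the values of $\theta$, $\gamma$ and $n$ are arbitrary, purely for illustration) so that whatever density I derive can at least be checked numerically:

```python
import numpy as np

# Monte Carlo sketch: simulate the statistic -log(S) + (n-1) log(T) under
# f(x; theta, gamma) = theta * x^(theta-1) / gamma^theta on [0, gamma].
# Parameter values are arbitrary, for illustration only.
rng = np.random.default_rng(0)
theta, gamma_, n, reps = 2.5, 3.0, 10, 200_000

# Inverse-CDF sampling: F(x) = (x/gamma)^theta on [0, gamma], so X = gamma * U^(1/theta)
X = gamma_ * rng.uniform(size=(reps, n)) ** (1 / theta)

X_ord = np.sort(X, axis=1)               # order statistics T_1 <= ... <= T_n
S = X_ord[:, :-1].prod(axis=1)           # S = T_1 * ... * T_{n-1}
T = X_ord[:, -1]                         # T = T_n, the sample maximum
W = -np.log(S) + (n - 1) * np.log(T)     # the statistic whose marginal PDF is wanted

print(W.mean(), W.var())                 # empirical summary to compare against a derived density
```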


2 Answers


The pair $(S,T)$ is sufficient for this family of distributions, i.e. the conditional distribution of $(X_1,\ldots,X_n)$ given $(S,T)$ does not depend on $(\theta,\gamma).$

Suppose you can also show $(S,T)$ is a complete statistic. That would mean there is no function $g(S,T)$ (not depending on $(\theta,\gamma)$) such that $\operatorname E g(S,T)$ remains equal to $0$ as $(\theta,\gamma)$ changes (except of course $g=0$ a.e.).

And suppose further that you can show that $$ \operatorname E(-\log S + (n-1)\log T) = \frac 1 \theta $$ (regardless of the value of $(\theta,\gamma)$).

Then the Lehmann–Scheffé theorem applied to this situation means that $-\log S + (n-1)\log T$ is the UMVUE for $1/\theta.$

(I haven't checked the facts in the second and third paragraphs above.)

  • Okay I understand that much, I am still not quite sure if the way I outlined is the best approach to get the marginal pdf of $-\log(S) + (n-1)\log(T)$. – oliverjones Aug 07 '20 at 19:17

A sufficient statistic for $(\theta,\gamma)$ as seen here is $\left(\prod\limits_{i=1}^n X_i,X_{(n)}\right)$ or equivalently $\left(\sum\limits_{i=1}^n \ln X_i,\ln X_{(n)}\right)$. This is again equivalent to $\boldsymbol T=\left(\sum\limits_{i=1}^n (\ln X_{(n)}-\ln X_i),\ln X_{(n)}\right)$ as they are all one-to-one functions of each other (in the sense that no information about the unknown parameter is lost going from one to the other).

If you change variables to $Y_i=\ln\left(\frac1{X_i}\right)=-\ln X_i$, it turns out that $Y_i$ has density

\begin{align} f_{Y_i}(y)&=f_{X_i}(e^{-y})\left|\frac{\mathrm d}{\mathrm dy}e^{-y}\right| \\&=\frac{\theta e^{-\theta y}}{\gamma^{\theta}}\mathbf1_{y>\ln(1/\gamma)} \\&=\theta\exp\left\{-\theta\left(y+\ln \gamma\right)\right\}\mathbf1_{y>-\ln\gamma}&;\,\small \theta,\gamma>0 \end{align}

This is a two-parameter exponential distribution with location $-\ln \gamma$ and scale $1/\theta$. In other words, this means $Y_i+\ln \gamma=\ln\left(\frac{\gamma}{X_i}\right)$ is exponential with mean $1/\theta$.
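
As a quick numerical sanity check of this step (with arbitrary values of $\theta$ and $\gamma$, purely for illustration), one can simulate from $f(x;\theta,\gamma)$ and verify that $\ln\left(\frac{\gamma}{X_i}\right)$ has the mean and variance of an exponential variable with mean $1/\theta$:

```python
import numpy as np

# Numerical check of the change of variables above (theta and gamma are arbitrary,
# chosen only for illustration): ln(gamma / X_i) should be exponential with mean 1/theta.
rng = np.random.default_rng(1)
theta, gamma_ = 2.5, 3.0

# Inverse-CDF sampling: F(x) = (x/gamma)^theta on [0, gamma], so X = gamma * U^(1/theta)
X = gamma_ * rng.uniform(size=1_000_000) ** (1 / theta)
Z = np.log(gamma_ / X)                   # = Y_i + ln(gamma)

print(Z.mean(), 1 / theta)               # both approximately 0.4
print(Z.var(), 1 / theta ** 2)           # exponential: variance equals (mean)^2
```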

Noting that $Y_{(1)}=-\ln X_{(n)}$, the statistic $\boldsymbol T$ can be written as $$\boldsymbol T=\left(\sum_{i=1}^n (Y_i-Y_{(1)}),- Y_{(1)}\right)=(U,V) $$

That $\boldsymbol T=(U,V)$ is a complete statistic can be seen by comparing to this problem, since we know that $Y_1,\ldots,Y_n$ are i.i.d. $\text{Exp}\left(-\ln \gamma,\frac1{\theta}\right)$. You can see here that $U=\sum\limits_{i=1}^n (Y_i-Y_{(1)})$ has a certain Gamma distribution (this is the distribution you are asked for). To be precise, this can also be written as $2\theta U\sim \chi^2_{2(n-1)}$, as argued here. As $U$ is a function of a complete sufficient statistic, an unbiased estimator of $1/\theta$ based on $U$ is the UMVUE by the Lehmann–Scheffé theorem. This can also be done without the distribution of $U$, since one can find $E\left[U\right]=\sum\limits_{i=1}^n E\left[ Y_i\right]-nE\left[Y_{(1)}\right]$ directly in terms of $1/\theta$, as sketched below.
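
For completeness, a sketch of that last computation, using only the standard fact that the minimum of $n$ i.i.d. $\text{Exp}\left(\mu,\beta\right)$ variables is $\text{Exp}\left(\mu,\frac{\beta}{n}\right)$:

$$E[Y_i]=\frac1\theta-\ln\gamma,\qquad E[Y_{(1)}]=\frac1{n\theta}-\ln\gamma,$$

$$E[U]=\sum_{i=1}^n E[Y_i]-nE[Y_{(1)}]=n\left(\frac1\theta-\ln\gamma\right)-n\left(\frac1{n\theta}-\ln\gamma\right)=\frac{n-1}{\theta}.$$

Hence $\dfrac{U}{n-1}$ is unbiased for $1/\theta$, consistent with $2\theta U\sim\chi^2_{2(n-1)}$, whose mean is $2(n-1)$.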
