
A statistic $T(X)$ is called a complete statistic for a parameter $\theta$ if $E_{\theta}\,g(T) = 0$ for all $\theta$ implies $P_{\theta}(g(T) = 0) = 1$ for all $\theta$.

I interpret $P_{\theta}(g(T) = 0) = 1 \ \forall \theta$ as

$g(t) = 0$ for almost every $t \in T$, $\forall \theta$ (for continuous distributions)

$g(t) = 0 \ \forall t \in T$, $\forall \theta$ (for discrete distributions)

Note: for the following continuous case (uniform distribution), I've abused this notation a bit and written "$\forall t$" instead of "for almost every $t$".

In the book Statistical Inference (2nd ed.) by Casella and Berger, Example 6.2.23 proves that $T(X) = \max_i X_i$ is a complete statistic for a random sample $X_1, X_2, \ldots, X_n$ from the uniform distribution $f(x;\theta) = 1/\theta,\; 0 \leq x \leq \theta$.
We assume a function $g(t)$ satisfying $E_{\theta}\,g(T) = 0 \ \forall \theta$ and finally arrive at the condition $g(\theta) = 0 \ \forall \theta$. I've understood the proof up to here, but I couldn't understand how we can conclude from this condition that $T$ is a complete statistic.
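For reference, here is my sketch of the computation in that example that leads to this condition (assuming $g$ is regular enough, say continuous, that differentiating under the integral sign is valid):

```latex
% Density of T = max_i X_i for a sample of size n from Uniform(0, theta):
%   f_T(t) = n t^{n-1} / \theta^n,  0 \le t \le \theta.
\[
0 = E_{\theta}\, g(T)
  = \int_0^{\theta} g(t)\, \frac{n\, t^{n-1}}{\theta^n}\, dt
  \quad \forall \theta
\;\Longrightarrow\;
\int_0^{\theta} g(t)\, t^{n-1}\, dt = 0 \quad \forall \theta .
\]
% Differentiating both sides with respect to theta:
\[
g(\theta)\, \theta^{n-1} = 0 \quad \forall \theta > 0
\;\Longrightarrow\;
g(\theta) = 0 \quad \forall \theta > 0 .
\]
```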

Shouldn't we need to show that $g(t) = 0 \ \forall t \ \forall \theta$ to conclude that $T$ is a complete statistic? For example, if we consider the function $g(t) = t - \theta$, then $g(\theta) = \theta - \theta = 0 \ \forall \theta$, but that need not mean that $g(t) = 0 \ \forall t \ \forall \theta$.
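As a quick numerical illustration (a Python sketch I wrote; the helper name `expected_g` is mine) of why requiring $E_{\theta}\,g(T) = 0$ for *all* $\theta$ is so strong: a fixed function $g$ that does not depend on $\theta$ can have zero expectation at one particular $\theta$ yet fail at every other $\theta$. Since $E_{\theta}[T] = n\theta/(n+1)$, the function $g(t) = t - n/(n+1)$ kills the expectation only at $\theta = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # sample size

def expected_g(g, theta, num_samples=200_000):
    """Monte Carlo estimate of E_theta[g(T)], where T = max of n Uniform(0, theta)."""
    samples = rng.uniform(0, theta, size=(num_samples, n))
    return g(samples.max(axis=1)).mean()

# E_theta[T] = n*theta/(n+1), so g(t) = t - n/(n+1) has zero mean only at theta = 1
g = lambda t: t - n / (n + 1)
print(expected_g(g, theta=1.0))  # approximately 0
print(expected_g(g, theta=2.0))  # approximately n/(n+1) = 5/6, not 0
```

So a single fixed $g$ generally cannot satisfy the zero-expectation condition at every $\theta$ unless it vanishes on the whole support, which is exactly what completeness demands.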

I saw a similar proof here but couldn't see how $g = 0$ follows. I think there too what is shown is $g(\theta) = 0 \ \forall \theta$, and not $g(t) = 0 \ \forall t \ \forall \theta$.

I can't see where I am going wrong, or whether my interpretation above is wrong. Please help.

Anshul
  • Function $g(t)$ does not depend on $\theta$. When we substitute $T$ for $t$, we get a r.v. whose distribution depends on $\theta$. So $\mathbb P(g(T)=0)$ depends on $\theta$. We need that for every $\theta$, $g(T)=0$ a.s. If you obtain somewhere that $g(\theta)=0$ for all $\theta$, you can denote the variable as you wish: $g(x)=0 \ \forall x$, $g(t)=0 \ \forall t$, and so on. – NCh Feb 06 '19 at 14:15
  • Another way to understand $$\mathbb{P}_{\theta}(g(T)=0)=1, \ \forall\theta\in \Theta$$ is $$g\equiv 0 \quad \text{a.e.-}(\mathbb{P}_{\theta},\ \theta\in\Theta)$$ – Tan Jan 08 '21 at 02:10
