
We have that $\mathbf{X}$ is a random sample from Uniform$(\theta, \theta+1)$ and we want to find a sufficient statistic for $\theta$ and then determine whether it is minimal.

The likelihood function is given by $$ L(\mathbf{x}| \theta) = \prod_{i=1}^n \mathbf{1} [ \theta < x_i < \theta+1] = \mathbf{1} [\min (\mathbf{x}) > \theta] \mathbf{1} [\max (\mathbf{x}) < \theta+1]$$

so that by the Fisher–Neyman factorization theorem, $T(\mathbf{X}) = (m,M)$ is sufficient for $\theta$, where $m := m (\mathbf{X})$ and $M := M(\mathbf{X})$ are the minimum and the maximum of $\mathbf{X}$, respectively. Now we want to determine whether this statistic is minimal.
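Explicitly, the factorization can be sketched as follows (the names $g$ and $h$ are mine, introduced for illustration):

$$L(\mathbf{x}\mid\theta) = g(T(\mathbf{x}),\theta)\,h(\mathbf{x}), \qquad g((m,M),\theta) = \mathbf{1}[M-1 < \theta < m], \quad h(\mathbf{x}) = 1,$$

since $\mathbf{1}[\min(\mathbf{x}) > \theta]\,\mathbf{1}[\max(\mathbf{x}) < \theta+1] = \mathbf{1}[\max(\mathbf{x}) - 1 < \theta < \min(\mathbf{x})]$.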

Despite the rule of thumb (that if the dimension of the statistic is greater than the dimension of the parameter, then the statistic is not minimal), I have a hunch that this statistic is actually minimal. So we proceed from the definition:

A statistic $T$ is minimal sufficient if, for all sample points $x$ and $y$, the ratio $f_\theta(x)/f_\theta(y)$ does not depend on $\theta$ if and only if $T(x) = T(y)$. In order to skirt any indeterminacy problems, we can take the first condition to be $f_\theta (x) = k(x,y) f_\theta (y)$ for some function $k$ that does not depend on $\theta$.

It is here that I get stuck.

  • You have the condition for minimality slightly wrong. $T$ is sufficient if "$f_\theta(x)/f_\theta(y)$ does not depend on $\theta$" implies "$T(x)=T(y)$." – angryavian Dec 31 '17 at 22:41
  • Thank you, I should have written that a statistic $T$ is a minimal sufficient statistic if "..." I will edit the post to reflect my true question. – misogrumpy Dec 31 '17 at 22:51
  • I was referring to the ratio being $f_\theta(x) / f_\theta(y)$ rather than $T(x)/T(y)$, and not about changing the "if-then" to "if and only if." – angryavian Jan 01 '18 at 00:02
  • Also see https://math.stackexchange.com/questions/2116770/minimal-sufficient-statistic-of-operatornameuniform-theta-theta?noredirect=1&lq=1. – StubbornAtom Sep 18 '19 at 14:16

1 Answer


Refer to the lecture notes here on page 5.

Joint density of the sample $ X=(X_1,X_2,\ldots,X_n)$ for $\theta\in\mathbb R$ is as you say $$f_{\theta}( x)=\mathbf1_{\theta<x_{(1)},x_{(n)}<\theta+1}=\mathbf1_{x_{(n)}-1<\theta<x_{(1)}}\quad,\,x=(x_1,\ldots,x_n)$$

where $x_{(1)}=\min_{1\le i\le n}x_i$ and $x_{(n)}=\max_{1\le i\le n}x_i$.

It is clear that $T(x)=(x_{(1)},x_{(n)})$ is sufficient for $\theta$ by the Factorization theorem.

For another sample point $y=(y_1,\ldots,y_n)$, define the intervals $A_x=(x_{(n)}-1,\,x_{(1)})$ and $A_y=(y_{(n)}-1,\,y_{(1)})$.

Then the ratio $f_{\theta}(x)/f_{\theta}(y)$ takes the simple form

$$\frac{f_{\theta}(x)}{f_{\theta}(y)}=\frac{\mathbf1_{\theta\in A_x}}{\mathbf1_{\theta\in A_y}}=\begin{cases}0&,\text{ if }\theta\notin A_x,\theta\in A_y \\ 1&,\text{ if }\theta\in A_x,\theta\in A_y \\ \infty &,\text{ if }\theta\in A_x,\theta\notin A_y\end{cases}$$

Clearly this is independent of $\theta$ if and only if $A_x=A_y$, that is, if and only if $T(x)=T(y)$, which proves that $T$ is indeed minimal sufficient.
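As a quick numerical sanity check (not part of the proof), one can verify on concrete samples that the indicator density is the same function of $\theta$ exactly when the two samples share the same $(\min, \max)$; the sample values below are illustrative:

```python
import numpy as np

def f(theta, x):
    # Joint density of an iid Uniform(theta, theta+1) sample:
    # equals 1 on the interval (max(x) - 1, min(x)), and 0 otherwise.
    return float(x.max() - 1 < theta < x.min())

x  = np.array([2.10, 2.40, 2.70, 2.90])   # min 2.10, max 2.90
y1 = np.array([2.10, 2.50, 2.60, 2.90])   # same T = (min, max) as x
y2 = x + 0.05                             # shifted sample, different T

thetas = np.linspace(1.5, 2.5, 1001)
same_T      = all(f(t, x) == f(t, y1) for t in thetas)  # densities agree everywhere
different_T = any(f(t, x) != f(t, y2) for t in thetas)  # densities disagree somewhere
print(same_T, different_T)  # True True
```

When $T(x)=T(y_1)$ the two densities coincide for every $\theta$, so the ratio is constant; when $T(x)\ne T(y_2)$ there are values of $\theta$ where one indicator is $1$ and the other is $0$, so the ratio depends on $\theta$.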

Another proof using the definition of minimal sufficiency is given on page 3 of the linked notes.

As this example shows, there is no general rule of thumb for ascertaining minimal sufficiency of a statistic simply by comparing the dimension of the statistic with that of the parameter.

StubbornAtom