
I see how the MLE of the upper bound for a uniform distribution is obtained, given $n$ draws are observed.

Now I wonder the following: what is the maximum likelihood estimator of $b$, given that each of the $n$ observed draws is the minimum of two values from $U[0,b]$?

I would start with specifying the minimum distribution:

$$ F_{\min}(x;b ) = \begin{cases} 0 & \text{if } x < 0 \\ 1-(1-F(x))^2 & \text{if } 0 \le x \le b \\ 1 & \text{if } x > b \end{cases}$$

from which we can derive the pdf

$$ f_{\min}(x;b) = \begin{cases} 0 & \text{if } x < 0 \\ \frac{2(b-x)}{b^2} & \text{if } 0 \le x \le b \\ 0 & \text{if } x > b \end{cases}$$

Thus the log-likelihood function is:

$$ L(b) = \sum_{i=1}^n \log f_{\min}(x_i; b) $$

As in the standard case, we need $\hat b \ge \max_i \{x_i\}$; moreover, in the interior it has to hold that

$$ \frac{\partial L(b)}{\partial b} = \sum_{i=1}^n - \frac{2(b-x_i)}{b^3} = 0 $$

which is solved by $b = \frac{2 \sum_{i=1}^n x_i}{n} $.

Therefore I would obtain:

$$ \hat b = \max\left( \max_i \{x_i\}, \frac{2 \sum_{i=1}^n x_i}{n} \right)$$

Is this correct?

bonifaz
  • No, the probability of observing a value less than $x$ is $1$ if $x > b$. – bonifaz Dec 22 '16 at 22:05
  • Your derivative is incorrect. The correct one is $$\frac{\partial L(b)}{\partial b}=\sum_{i=1}^n\frac1{b-x_i}-\frac{2n}{b}=0$$ Unfortunately, you can't find the maximum point in a closed form, but it's easy to show that it's definitely greater than $\max{x_i}$. – Sergei Golovan Dec 22 '16 at 22:19

1 Answer


Your answer is not correct. From your (correct) specification of the density for the minimum value, the log-likelihood is, up to an additive constant $n \ln 2$ that does not affect the maximiser:

$$\ell_{\boldsymbol{x}} (b) = \sum_{i=1}^n \ln(b - x_i) - 2n \ln(b) \quad \quad \text{for } \text{ } b \geqslant \max \{ x_1, ..., x_n \}.$$

Differentiating gives the score function:

$$\frac{d \ell_{\boldsymbol{x}}}{db} (b) = \sum_{i=1}^n \frac{1}{b - x_i} - \frac{2n}{b} \quad \quad \quad \quad \text{ } \text{for } \text{ } b > \max \{ x_1, ..., x_n \}.$$

The score equation can then be written as:

$$\frac{1}{n} \sum_{i=1}^n R_i(\hat{b}) = 2 \quad \quad R_i(b) \equiv \frac{1}{1 - x_i /b} \quad \text{for } \text{ } b > x_i.$$

This equation must be solved numerically to obtain the MLE $\hat{b}$: the MLE occurs at the point where the average of the $R_i$ values equals two. Each function $R_i$ is monotonically decreasing in $b$, with limits $\lim_{\text{ }b \downarrow x_i} R_i(b) = \infty$ and $\lim_{\text{ }b \rightarrow \infty} R_i(b) = 1$. Hence the average of the $R_i$ decreases monotonically from $\infty$ down to one, so it crosses the value two exactly once: the score equation has a unique solution, and this can be obtained via standard iterative techniques.
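As a concrete sketch of such an iterative technique (my own illustration, not part of the original answer; the helper names `avg_R` and `mle_b` are made up), here is a bisection solver that exploits the fact that the average of the $R_i$ is decreasing in $b$:

```python
import numpy as np

def avg_R(b, x):
    # Average of R_i(b) = 1 / (1 - x_i / b), defined for b > max(x).
    return np.mean(1.0 / (1.0 - x / b))

def mle_b(x, tol=1e-12):
    """Solve the score equation (1/n) * sum_i R_i(b) = 2 by bisection."""
    x = np.asarray(x, dtype=float)
    m = x.max()
    lo = m * (1 + 1e-9)            # just above max(x): the average blows up here
    hi = max(2 * m, 1.0)
    while avg_R(hi, x) > 2:        # expand until the average drops below two
        hi *= 2
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if avg_R(mid, x) > 2:      # average still too large: the root lies above
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With a single observation x_1, the score equation reduces to b = 2 * x_1.
print(mle_b([3.0]))   # ≈ 6.0
```

For $n > 1$ the root has no closed form, but the bisection converges quickly since the average of the $R_i$ is strictly monotone on $(\max_i x_i, \infty)$.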

Lower bound on the MLE: Since the $R_i(\hat{b})$ are positive and average to two, each one must satisfy $R_i(\hat{b}) < 2n$ for $i = 1, \ldots, n$. This implies $1 - x_i/\hat{b} > \tfrac{1}{2n}$, i.e. $\hat{b} > \tfrac{2n}{2n-1} \cdot x_i$ for all $i = 1, \ldots, n$, which gives the lower bound:

$$\hat{b} > \frac{2n}{2n-1} \cdot \max \{ x_1, ..., x_n \}.$$

This shows that the MLE is strictly above the maximum value of the observations, and so it differs from the MLE in the case where we are observing uniform values (as opposed to the minimum of two uniform values).
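As a sanity check (my own simulation, not part of the original answer), one can simulate minima of pairs of uniforms and confirm numerically that the solved MLE lands strictly above both the sample maximum and the lower bound derived above:

```python
import numpy as np

rng = np.random.default_rng(1)
b_true, n = 10.0, 500

# Each observation is the minimum of two independent U[0, b_true] draws.
x = rng.uniform(0, b_true, size=(n, 2)).min(axis=1)

def g(b):
    # Rescaled score: zero exactly when the average of the R_i equals two.
    return np.mean(1.0 / (1.0 - x / b)) - 2.0

lo, hi = x.max() * (1 + 1e-12), 10.0 * x.max()
while g(hi) > 0:                  # ensure the upper bracket is past the root
    hi *= 2
for _ in range(200):              # plain bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
b_hat = 0.5 * (lo + hi)

print(x.max(), b_hat)             # b_hat strictly exceeds the sample maximum
assert b_hat > 2 * n / (2 * n - 1) * x.max()
```

The assertion at the end is exactly the lower bound $\hat{b} > \tfrac{2n}{2n-1} \cdot \max\{x_1, \ldots, x_n\}$, and it holds for every realised sample, not just this seed.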

Ben