
I am trying to get the maximum likelihood estimate for the parameter $p$. The distribution is the following:

$$ f(x\mid p) = \begin{cases} \frac{p}{x^2} &\text{for} \ p\leq x < \infty \ \\ 0 &\text{if not} \end{cases} $$
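
As a sanity check, this is a valid density for any $p > 0$, since $$\int_p^\infty \frac{p}{x^2}\,dx = \left[-\frac{p}{x}\right]_p^\infty = 1\text{.}$$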

The sample has size $n$.

The problem is that when I follow the procedure I know, I would write down the likelihood function and set the derivative of the log-likelihood to zero. We'd have:

$$ L(p; x) = \frac{p^n}{\prod_{i=1}^{n} x^2_i}$$ $$ \ln L(p;x) = n \ln(p) - \sum_{i=1}^{n} \ln(x_i^2) $$

Setting the derivative of the log-likelihood equal to zero:

$$ l'(p;x) = \frac{n}{p} = 0$$

And I am stuck because it has no solution for $p$. How do I evaluate this?

Thanks!

EDIT:

So in this case I can use the indicator variable to write:

$$ L(p;x) = \frac{p^n}{\prod_{i=1}^{n}x_i^2} \prod_{i=1}^{n} I_{(x_i \geq p)}$$

with one indicator factor per observation. The likelihood is non-zero only when $x_i \geq p$ for every $i$, i.e. when $p \leq \min(X_1, \ldots, X_n)$. Is that the point?
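
To convince myself, here is a minimal simulation sketch in Python (the parameter value, seed, and sample size are illustrative, not from the problem). Since the CDF is $F(x) = 1 - p/x$ for $x \geq p$, we can sample via the inverse CDF and check that the sample minimum lands just above the true $p$:

```python
import numpy as np

# Sketch: simulate from f(x|p) = p/x^2, x >= p, via the inverse CDF.
# F(x) = 1 - p/x, so X = p / (1 - U) with U ~ Uniform[0, 1).
rng = np.random.default_rng(seed=0)
p_true = 2.5  # hypothetical "true" parameter, chosen for illustration
n = 1_000

u = rng.uniform(size=n)
x = p_true / (1.0 - u)

p_hat = x.min()  # candidate MLE: the sample minimum
print(p_hat)     # slightly above p_true, since every X_i >= p
```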

  • See why differentiation is not valid for other distributions where the support depends on the parameter: https://math.stackexchange.com/questions/649678/how-do-you-differentiate-the-likelihood-function-for-the-uniform-distribution-in?noredirect=1&lq=1, https://math.stackexchange.com/questions/2944073/why-cant-we-use-calculus-in-finding-the-m-l-e-of-uniform-theta-theta?noredirect=1&lq=1. – StubbornAtom Oct 09 '18 at 20:21
  • It would be unwise to say this is the 'usual procedure' since maximum likelihood estimation is not merely about finding critical points of the likelihood function by differentiation. – StubbornAtom Oct 09 '18 at 20:28
  • Indeed, the method is about maximizing the likelihood function, not about critical points. Thanks :) – YetAnotherUsr Oct 10 '18 at 10:28
  • I should explain: I deleted my answer. The reason why is because when $0 < p < 1$, you get a very different answer than when $p > 1$, and I didn't have time to think through the details. Are you provided any restrictions on $p$? – Clarinetist Oct 10 '18 at 12:09
  • @M.Gonzalez StubbornAtom is actually correct. If you're setting the derivative equal to $0$, you are finding critical points of the likelihood function by differentiation. – Clarinetist Oct 10 '18 at 12:14
  • @Clarinetist actually I am provided that $p>0$. I hadn't thought about the interval when $0<p<1$. But in that case why wouldn't the minimum still work? – YetAnotherUsr Oct 11 '18 at 11:15
  • @M.Gonzalez Sorry, I am wrong. Check my answer, which is now back. – Clarinetist Oct 11 '18 at 11:38
  • https://math.stackexchange.com/questions/2659520/fx-theta-frac-thetax2-with-x-geq-theta-and-theta0-find-the-m?rq=1 – StubbornAtom Sep 17 '19 at 15:18

1 Answer


The usual calculus method does not work when the support of the random variable (here, $[p, \infty)$) depends on the parameter of interest (here, $p$).

In these situations, you should use indicator functions. Let $\mathbf{I}$ denote the indicator function, defined by $$\mathbf{I}(\cdot) = \begin{cases} 1, & \cdot \text{ is true} \\ 0, & \cdot \text{ is false.} \end{cases}$$ Thus, we may write $$f(x \mid p) = \dfrac{p}{x^2}\mathbf{I}(p \leq x)\text{.}$$

(Please read this other answer for details that I will leave unproven here, and for a similar problem to this one.)

Per the link I've put above, you can see that $$L(p \mid \mathbf{x}) = \prod_{i=1}^{n}\dfrac{p}{x_i^2}\mathbf{I}(p \leq x_i)=\dfrac{p^n}{\prod_{i=1}^{n}x_i^2}\mathbf{I}(p \leq x_{(1)})$$ where $x_{(1)} = \min\limits_{1 \leq i \leq n}x_i$.
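
The collapse of the product of indicators is worth spelling out: $p \leq x_i$ holds for every $i$ if and only if $p$ is at most the smallest observation, so $$\prod_{i=1}^{n}\mathbf{I}(p \leq x_i) = \mathbf{I}(p \leq x_1, \ldots, p \leq x_n) = \mathbf{I}\left(p \leq x_{(1)}\right)\text{.}$$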

Viewing this as a function of $p$, note that if $p > x_{(1)}$, then $\mathbf{I}(p \leq x_{(1)}) = 0 = L(p \mid \mathbf{x})$, which is obviously not the largest value of $L$.

Thus, assume $p \leq x_{(1)}$. Disregarding constants of proportionality with respect to $p$ (which do not affect the actual maximum likelihood estimator), we obtain $$L(p \mid \mathbf{x}) = \dfrac{p^n}{\prod_{i=1}^{n}x_i^2}\mathbf{I}(p \leq x_{(1)}) \propto p^n\text{.}$$

As long as $p > 0$, we know that $p^n$ (for $n$ fixed) is indeed a monotonically increasing function of $p$. Thus, to maximize $p^n$, we must seek the largest value of $p$. Note that to get to this point, we had to assume $p \leq x_{(1)}$. It follows that $$\hat{p}_{\text{MLE}} = X_{(1)}$$ is the maximum likelihood estimator of $p$.
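
If you want to see this numerically, here is a minimal sketch (the data values and grid are illustrative, not from the original problem): the log-likelihood $n\ln p - \sum_{i=1}^n \ln x_i^2$ increases in $p$ on $(0, x_{(1)}]$ and the likelihood is $0$ beyond $x_{(1)}$, so a grid search recovers the sample minimum.

```python
import numpy as np

# Sketch: evaluate the log-likelihood on a grid and confirm that the
# maximizer is the sample minimum. The data values are illustrative.
x = np.array([3.1, 2.7, 4.5, 2.9, 6.0])
n = len(x)

def log_lik(p):
    # log L(p | x) = n*log(p) - sum(log(x_i^2)), valid only for 0 < p <= x_(1);
    # outside that range the likelihood is 0, i.e. log-likelihood -inf.
    if p <= 0 or p > x.min():
        return -np.inf
    return n * np.log(p) - np.sum(np.log(x**2))

grid = np.linspace(0.01, 5.0, 10_000)
vals = np.array([log_lik(p) for p in grid])
print(grid[vals.argmax()], x.min())  # both approximately 2.7
```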

Clarinetist