This is my first post, so I apologize if the formatting is a little rocky.

I'm currently going through "Probability and Statistics" 4th ed by DeGroot/Schervish, and I was wondering if somebody could help me out on two related problems (7.5.10, 7.6.1).

The first question is as follows: Suppose that $ X_1, \dots, X_n $ form a random sample from a distribution for which the p.d.f. $ f(x|\theta) $ is as follows: $$ f(x|\theta) = \frac{1}{2} e^{-|x-\theta|} \text{ for } -\infty < x < \infty $$ Also, suppose that the value of $ \theta $ is unknown, for $ -\infty < \theta < \infty $. We will find the M.L.E. of $ \theta $.

The likelihood function is given by $$ f_n(\mathbf{x}|\theta) = \frac{1}{2^n} e^{-\sum_{i=1}^n |x_i - \theta|}$$ and will be maximized when $ \sum_{i=1}^n |x_i - \theta| $ is minimized. By choosing $ \theta $ to be a median value of $ x_1, \dots, x_n $, we accomplish the minimization task.

More specifically, note that the log of the likelihood function is \begin{align*} \log f_n(\mathbf{x}|\theta) &= -n \log 2 - \sum_{i=1}^n |x_i - \theta| \\ &= n \left ( - \log 2 - \frac{1}{n} \sum_{i=1}^n |x_i - \theta| \right ) \end{align*} Now, we see that the M.L.E. must minimize the sum in the log likelihood shown above. We can also see that $ \frac{1}{n} \sum_{i=1}^n |x_i - \theta| = E(|X-\theta|)$ for $ X $ having a discrete distribution assigning probability $ \frac{1}{n} $ to each of $ x_1, \dots, x_n $, and 0 everywhere else. So, by choosing $ \hat{\theta} $ to be a median of $ x_1, \dots, x_n $, this sum is minimized and hence the log likelihood is maximized. This follows from the fact that the median of a distribution minimizes the mean absolute error.
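As a quick numerical sanity check (a small NumPy sketch with a made-up sample, not part of the book's solution), the sample median should indeed maximize this log likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)                 # made-up sample, purely for illustration
x = rng.laplace(loc=1.5, scale=1.0, size=101)  # true theta = 1.5, n odd

def loglik(theta):
    # log f_n(x | theta) = -n log 2 - sum_i |x_i - theta|
    return -len(x) * np.log(2.0) - np.sum(np.abs(x - theta))

# The log likelihood is piecewise linear in theta with kinks at the data points,
# so it is enough to compare its values at the x_i themselves.
best = max(x, key=loglik)
print(best, np.median(x))                      # both print the same value
```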

The second question is to find the M.L.E. of $e^{-\frac{1}{\theta}}$, which, by the invariance property of M.L.E.s, should just be $e^{-\frac{1}{\hat{\theta}}}$.
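Just to convince myself numerically (same kind of made-up NumPy sample as above): reparametrizing by $\psi = e^{-1/\theta}$, i.e. $\theta = -1/\log\psi$ for $\theta > 0$, and maximizing the likelihood over $\psi$ directly lands on $e^{-1/\hat{\theta}}$ with $\hat{\theta}$ the sample median:

```python
import numpy as np

rng = np.random.default_rng(0)                 # made-up sample, purely for illustration
x = rng.laplace(loc=1.5, scale=1.0, size=101)  # sample median is positive here

def loglik(theta):
    return -len(x) * np.log(2.0) - np.sum(np.abs(x - theta))

# psi = exp(-1/theta)  <=>  theta = -1/log(psi); psi in (0, 1) covers theta > 0.
psi_grid = np.linspace(1e-4, 1 - 1e-4, 20001)
best_psi = psi_grid[np.argmax([loglik(-1.0 / np.log(p)) for p in psi_grid])]
print(best_psi, np.exp(-1.0 / np.median(x)))   # agree up to grid spacing
```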

My problem is that the answer in the back of the book (for the second question) is given as $\left ( \prod_{i=1}^n x_i \right)^{\frac{1}{n}}$, and I'm having trouble reconciling that with my answer. You'd think it'd be pretty straightforward, but...

Any help would be appreciated!

Chase Uyeda
  • For the second problem what is the mle for theta? It looks like it is not the same theta in the previous problem. Working backwards you would get this answer if the mle for theta is $-n/\sum \ln x_i$. – Michael R. Chernick Sep 08 '12 at 18:51
  • What exactly is your first question? For the second question, I would set $\alpha = e^{1/\theta}$, solve for $\theta$, plug that in $f(x|\theta)$ and find the MLE. – echoone Sep 08 '12 at 19:03
  • @Michael I also noticed that -- does that mean I calculated the first MLE wrong? – Chase Uyeda Sep 08 '12 at 19:13
  • @echoone Err, I didn't really have a question about the first part, I was just showing my work to see if maybe I was messing up somewhere there. I'll try your method and see if that works. – Chase Uyeda Sep 08 '12 at 19:15
  • Here is a hint: In the first problem $X_i \in \mathbb R$. Now take $n = 2$ and suppose $X_1 < 0$ and $X_2 > 0$, each of which can happen with positive probability. Now consider the "estimator" for the second question. Conclusion? :-) – cardinal Sep 08 '12 at 19:32
  • For the first distribution the mle is the median. For the second you didn't give us the likelihood. I got it by taking the answer given for $e^{-1/\theta}$ and inverting it to get the mle for $\theta$. – Michael R. Chernick Sep 08 '12 at 19:52
  • @cardinal: ... you get a complex number?? (that is if you use $\sqrt{X_1\cdot X_2}$) – Chase Uyeda Sep 08 '12 at 20:00
  • @ChaseUyeda: Yes. In other words, in the book there's a typo (or some other logical disconnect) afoot. :-) – cardinal Sep 08 '12 at 20:03
  • @Michael In the second part, you just use the same likelihood, but instead now writing $\theta$ as a function of $\psi$, where $\psi = e^{-1/\theta}$, right? And then maximizing that likelihood with respect to $\psi$ ? – Chase Uyeda Sep 08 '12 at 20:06
  • @cardinal Hmm, okay. Thanks for pointing that out! Since the given answer, as Michael said, would tell us that $\hat{\theta} = -\frac{n}{\sum_{i=1}^n \ln x_i}$, which is the MLE for a beta distribution with unknown parameter $\theta$ and 1, I wonder if that's where this is coming from?? Not sure. – Chase Uyeda Sep 08 '12 at 20:09
  • @ChaseUyeda: Here is the connection: If $X$ is exponential with rate $\theta$, then $Y=\exp(X)$ is Pareto with density $\theta y^{-(\theta + 1)}$ on $y \geq 1$ and $Z = 1/Y = \exp(-X)$ is $\mathrm{Beta}(\theta+1,1)$. :-) – cardinal Sep 08 '12 at 20:21
  • @cardinal Interesting! Thanks again for all your help :) – Chase Uyeda Sep 09 '12 at 00:43
  • https://math.stackexchange.com/questions/1678740/mle-of-double-exponential?noredirect=1&lq=1, https://math.stackexchange.com/questions/240496/finding-the-maximum-likelihood-estimator – StubbornAtom Jul 13 '19 at 16:01
  • What is the name of this distribution? – Kutsit Nov 18 '19 at 20:10

1 Answer


Here is a quick pointer on the optimization step. Your intuition for the problem seems good; it is just a matter of fleshing out the computations.

At this point, you want to maximize the log likelihood of the data given $\theta$, \begin{align*} \log f_n(\mathbf{x}|\theta) &= -n \log 2 - \sum_{i=1}^n |x_i - \theta| \\ &= n \left ( - \log 2 - \frac{1}{n} \sum_{i=1}^n |x_i - \theta| \right ), \end{align*} which is equivalent to minimizing the term $\frac{1}{n} \sum_{i=1}^n |x_i - \theta|$.

One can write this as an optimization problem:

$$\arg\max_{\theta}{\{-n\log 2 - \sum_{i=1}^n |x_i - \theta|\}}$$

Then introduce slack variables $$ \epsilon_i \geq |x_i - \theta| \quad \forall i, $$ so we can rewrite the optimization problem as $$\arg\max_{\epsilon,\theta}{\left\{-n\log 2 - \sum_{i=1}^n \epsilon_i\right\}}$$ subject to $$ -\epsilon_i \leq x_i - \theta \leq \epsilon_i \quad \forall i.$$ At the optimum each $\epsilon_i$ equals $|x_i - \theta|$, and the constraints can be rearranged as:

\begin{align*} (x_i - \theta) + \epsilon_i &\geq 0 \\ -(x_i - \theta) + \epsilon_i &\geq 0 \end{align*}

Then, go on to set up the Lagrangian and compute the answer.
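If you would rather not grind through the Lagrangian by hand, here is a minimal sketch (assuming NumPy and SciPy are available, with a made-up sample) that hands this exact linear program to scipy.optimize.linprog and recovers a sample median:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)               # made-up sample, purely for illustration
x = rng.laplace(loc=2.0, scale=1.0, size=7)  # true theta = 2, n odd
n = len(x)

# Decision variables z = (theta, eps_1, ..., eps_n); minimize sum(eps_i),
# which is the same as maximizing -n log 2 - sum(eps_i).
c = np.concatenate(([0.0], np.ones(n)))

# Constraints x_i - theta <= eps_i and theta - x_i <= eps_i, written as A_ub @ z <= b_ub.
A_ub = np.zeros((2 * n, n + 1))
b_ub = np.zeros(2 * n)
for i in range(n):
    A_ub[2 * i, 0], A_ub[2 * i, 1 + i], b_ub[2 * i] = -1.0, -1.0, -x[i]            # -theta - eps_i <= -x_i
    A_ub[2 * i + 1, 0], A_ub[2 * i + 1, 1 + i], b_ub[2 * i + 1] = 1.0, -1.0, x[i]  #  theta - eps_i <=  x_i

bounds = [(None, None)] + [(0, None)] * n    # theta free, eps_i >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

print(res.x[0], np.median(x))                # the LP solution recovers the sample median
```

For odd $n$ the minimizer is unique (the middle order statistic), so the solver returns exactly the sample median; for even $n$ any point in the median interval is optimal.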

Bryce