
Suppose $f(r\mid\theta)$ is the double exponential (Laplace) distribution with pdf

$$f(r\mid\theta) = \frac{1}{2\sigma}\exp\left(-\frac{|r-\mu|}{\sigma}\right),$$

where $\theta=(\mu,\sigma)$, $-\infty<\mu<\infty$, $\sigma>0$.

Now, given a random sample $r_1,r_2,\ldots,r_n$, find the MLE of $\theta$.

So, I can't solve this with calculus because the partial derivative of $f$ with respect to $\mu$ doesn't exist everywhere, right? Or can I? If I can, how do I justify it?

  • See here: https://math.stackexchange.com/questions/240496/finding-the-maximum-likelihood-estimator – Math1000 Oct 20 '18 at 10:59
  • @Math1000: I have seen that, but I don't understand how one can justify taking the partial derivative of $f$ with respect to $\mu$. – Adienl Oct 20 '18 at 11:02
  • I don't recommend using differentiation for deriving MLE of $\mu$ here. See the answers to this post. Also refer to https://math.stackexchange.com/questions/113270/the-median-minimizes-the-sum-of-absolute-deviations-the-l-1-norm. However, differentiation is justified for finding the MLE of $\sigma$. – StubbornAtom Oct 20 '18 at 11:26
  • Given the sample $r_1,\ldots,r_n$, denote the log-likelihood by $\ell(\mu,\sigma)$.

    Suppose you derived that $\ell(\mu,\sigma)$ is maximized for that value of $\mu$ given by $\hat\mu=\text{median}(r_1,\ldots,r_n)$ and $\hat\sigma$ (which would depend on $\hat\mu$) is the estimate of $\sigma$ you found by solving $\frac{\partial \ell}{\partial\sigma}=0$.

    To finally conclude that $(\hat\mu,\hat\sigma)$ is the MLE of $(\mu,\sigma)$, one could show that $\ell(\hat\mu,\hat\sigma)\geqslant \ell(\mu,\sigma)$ holds for all $(\mu,\sigma)$.

    – StubbornAtom Oct 20 '18 at 11:45
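To make the comment above concrete, here is a quick numerical sanity check (a sketch using NumPy; the "true" parameters and the simulated sample are made up for illustration). The closed-form candidates $\hat\mu = \text{median}(r_1,\ldots,r_n)$ and $\hat\sigma = \frac1n\sum_i|r_i-\hat\mu|$ should dominate the log-likelihood $\ell(\mu,\sigma)$ over a grid of alternative parameter values:

```python
import numpy as np

def laplace_loglik(r, mu, sigma):
    """Log-likelihood of the double exponential (Laplace) density
    f(r|theta) = exp(-|r - mu|/sigma) / (2*sigma)."""
    return -len(r) * np.log(2 * sigma) - np.sum(np.abs(r - mu)) / sigma

# Simulated sample with (made-up) true parameters mu = 1.5, sigma = 0.7
rng = np.random.default_rng(0)
r = rng.laplace(loc=1.5, scale=0.7, size=1000)

# Closed-form candidates: median for mu, then mean absolute
# deviation from the median for sigma (from solving dl/dsigma = 0)
mu_hat = np.median(r)
sigma_hat = np.mean(np.abs(r - mu_hat))

# Check l(mu_hat, sigma_hat) >= l(mu, sigma) on a grid of alternatives
best = laplace_loglik(r, mu_hat, sigma_hat)
for mu in np.linspace(mu_hat - 1.0, mu_hat + 1.0, 41):
    for sigma in np.linspace(0.1, 2.0, 41):
        assert laplace_loglik(r, mu, sigma) <= best + 1e-9
```

This only checks the inequality on a finite grid, of course; the argument that the median minimizes $\sum_i|r_i-\mu|$ (linked in the comments) is what makes $\ell(\hat\mu,\hat\sigma)\geqslant\ell(\mu,\sigma)$ hold for *all* $(\mu,\sigma)$.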

0 Answers