
Hi I am having some trouble with the following question:

Say we have random variables $X_{1},\dots,X_{n} \sim \text{Poisson}(\lambda)$, let $\lambda_{1} \gt \lambda_{0} \gt 0$ be fixed values, and suppose we want to test $H_{0}: \lambda=\lambda_{0}$ against $H_{1}: \lambda=\lambda_{1}$.

The question asks me to show that the optimal test at level $\alpha$ rejects the null hypothesis when $\bar X_n \gt c$, and to find $c$, where $\bar X_n=\frac{1}{n}(X_{1}+\dots+X_{n})$. It furthermore asks me to show that the test minimizing the sum of the type I and type II error probabilities rejects the null hypothesis when $\bar X_n \gt k$, and to find $k$.

What I have tried:

Using the likelihood ratio and the Neyman-Pearson criterion I keep getting $$(x_{1}+\dots+x_{n}) \gt {\frac{\ln\left(k\,e^{n(\lambda_{1}-\lambda_{0})}\right)}{\ln\frac{\lambda_{1}}{\lambda_{0}}}},$$ but that gives me just the sum, without the division by $n$. Can I simply divide both sides by $n$ and call that my $c$?
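
For reference, here is the full working I used (assuming the $X_{i}$ are independent, so the factorials cancel in the ratio):
$$\frac{L(\lambda_{1})}{L(\lambda_{0})}
= \frac{e^{-n\lambda_{1}}\,\lambda_{1}^{\sum x_{i}}}{e^{-n\lambda_{0}}\,\lambda_{0}^{\sum x_{i}}}
= e^{-n(\lambda_{1}-\lambda_{0})}\left(\frac{\lambda_{1}}{\lambda_{0}}\right)^{\sum x_{i}} \gt k
\quad\Longleftrightarrow\quad
\sum_{i=1}^{n} x_{i} \gt \frac{\ln k + n(\lambda_{1}-\lambda_{0})}{\ln(\lambda_{1}/\lambda_{0})}.$$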

For the next part, can I just use the result from the first part to compute whatever this best test is?

Thanks for any help

  • There are some typos in your post. Also, if you mean to show $\bar X_n$, use \bar X_n. Also, \overline X_n for $\overline X_n$. Also also, use \ln k for $\ln k$. – Em. Mar 28 '16 at 18:57

1 Answer


Perhaps look at this page for an example of how to set up the Neyman-Pearson criterion.

Maybe it will help if I illustrate the idea with a specific numerical example. Suppose $\lambda_0 = 3$ and $\lambda_1 = 10.$ Also, for simplicity, suppose we have a single observation $X.$ It makes sense to reject $H_0$ for "large" values of $X$ and to accept for "small" values. The task is to find a critical value $c$ to separate "large" from "small."

Here are partial PDF tables for $Pois(3)$ and $Pois(10)$ from R statistical software.

 i = 0:10; pdf.0 = dpois(i,3);  pdf.1 = dpois(i,10);  ratio=pdf.0/pdf.1
 round(cbind(i, pdf.0, pdf.1, ratio),3)
    ##  i pdf.0 pdf.1    ratio
    ##  0 0.050 0.000 1096.633
    ##  1 0.149 0.000  328.990
    ##  2 0.224 0.002   98.697
    ##  3 0.224 0.008   29.609
    ##  4 0.168 0.019    8.883
    ##  5 0.101 0.038    2.665
    ##  6 0.050 0.063    0.799
    ##  7 0.022 0.090    0.240
    ##  8 0.008 0.113    0.072
    ##  9 0.003 0.125    0.022
    ## 10 0.001 0.125    0.006

If we want to test at the 5% level, we need to look in the right-hand tail of the distribution $Pois(3)$ and accumulate about 5% probability (without going over). We can't use $c = 5.5$ because $P(X > 5.5\mid\lambda=3) > .05.$ But maybe we can use $c = 6.5:$ we find that $\alpha = P(X > 6.5\mid\lambda=3) \approx .0335 < .05.$ So we use $c = 6.5$ as the critical value. Then the rejection region is $\{X > 6.5\}$ and the acceptance region is $\{X < 6.5\}.$
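
These tail probabilities can also be checked directly in R (this just redoes the table computations above; ppois evaluated at a non-integer point accumulates the PMF up to the integer below it):

 1 - ppois(5.5, 3)   # P(X > 5.5 | lambda = 3) = 0.0839 > .05, so c = 5.5 is too small
 1 - ppois(6.5, 3)   # P(X > 6.5 | lambda = 3) = 0.0335 < .05, so c = 6.5 works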

Then the power of the test (probability of rejecting when $H_1$ is true) is $P(X > 6.5 | \lambda=10) \approx .87.$

 1 - ppois(6.5, 10)
 ## 0.8698586

It seems clear from looking at the table above that no choice of rejection region with $\alpha < .05$ could have larger power. We have "spent" our roughly 5% probability in the $H_0$ column of the table in a way that makes the probability of the rejection region in the $H_1$ column as large as possible (equivalently, where the ratio of the two columns is as small as possible). The Neyman-Pearson Lemma ensures that choosing the rejection region according to this ratio of probabilities is optimal.

More generally, notice that $Y = \sum_{i=1}^n X_i \sim Pois(n\lambda).$ So, as you guessed, it does not matter whether we express the critical value for $n$ observations in terms of $Y$ or of $\bar X_n = Y/n.$ You might try writing out the ratio for the case $n = 1$ first.
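
If it helps, here is a small sketch of how the critical value could be found numerically with several observations; the values $n = 5,$ $\lambda_0 = 3,$ $\lambda_1 = 10,$ and $\alpha = .05$ are only for illustration, not part of the original problem.

 n = 5;  lam0 = 3;  lam1 = 10;  alpha = .05    # illustrative values only
 m = qpois(1 - alpha, n*lam0)     # smallest integer m with P(Y <= m | H0) >= 1 - alpha
 crit = m/n                       # reject H0 when Y > m, equivalently when X-bar > crit
 round(c(m, crit, 1 - ppois(m, n*lam0), 1 - ppois(m, n*lam1)), 4)  # m, crit, attained alpha, power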

  • I have made a new question based off my original; if you have the time to take a look, that would be great. Here it is: http://math.stackexchange.com/questions/1725278/hypothesis-testing-with-poisson-rv – Quality Apr 02 '16 at 23:03