The problem is to find the distribution of $X_1 \mid M$, where $M = \max(X_1, \dots, X_n)$ and $X_1, \dots, X_n$ are i.i.d. $U(0,\theta)$ random variables. I have a complete solution but am having trouble justifying one step. We use Bayes' theorem for CDFs to get started:
$$ P(X_1 < x_1 \mid M < m) = \frac{P(M < m \mid X_1 < x_1) P(X_1 < x_1)}{P(M < m)} $$
By independence, the CDFs of $M$ and $X_1$ are $(m/\theta)^n$ and $x_1/\theta$, respectively. The CDF of $M \mid X_1$ is $(m/\theta)^{n-1}\, {\bf 1}[x_1 \leq m]$. The justification I have is that if the observed value $x_1$ is greater than $m$, then $m$ cannot be the maximum. So I put the indicator on the CDF to capture that, on the event $\{x_1 \leq m\}$, $M \mid X_1$ is just the distribution of the maximum of the remaining $n-1$ variables. So,
$$ \frac{P(M < m \mid X_1 < x_1) P(X_1 < x_1)}{P(M < m)} = \frac{(x_1/\theta) (m/\theta)^{n-1}}{(m/\theta)^n} = \frac{x_1}{m}, \qquad 0 \leq x_1 \leq m $$
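As a sanity check on the step $P(M < m \mid X_1 < x_1) = (m/\theta)^{n-1}$ for $x_1 \leq m$, it can be computed directly from the joint probability, using that $\{X_1 < x_1\} \subseteq \{X_1 < m\}$ when $x_1 \leq m$:

$$ P(M < m \mid X_1 < x_1) = \frac{P\!\left(X_1 < x_1,\ \max_{i \geq 2} X_i < m\right)}{P(X_1 < x_1)} = \frac{(x_1/\theta)(m/\theta)^{n-1}}{x_1/\theta} = \left(\frac{m}{\theta}\right)^{n-1}. $$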
Since $x_1/m$ on $[0, m]$ is the $U(0,m)$ CDF, it follows that $X_1 \mid M \sim U(0,m)$.
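As a quick numerical check (a minimal sketch; the values of $\theta$, $n$, and $m$ below are arbitrary choices, not from the problem), conditioning on the event $\{M < m\}$ by rejection sampling should leave $X_1$ approximately $U(0, m)$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, m = 1.0, 5, 0.7  # hypothetical parameter choices

# Rejection sampling: draw n i.i.d. U(0, theta) per row,
# keep only the rows where the event {M < m} occurs.
samples = rng.uniform(0, theta, size=(200_000, n))
kept = samples[samples.max(axis=1) < m]
x1 = kept[:, 0]

# Under X1 | M < m ~ U(0, m): mean should be near m/2 = 0.35
# and P(X1 < m/2) should be near 0.5.
print(round(x1.mean(), 2))
print(round((x1 < m / 2).mean(), 2))
```

The empirical mean and median-probability of the retained $X_1$ values land close to $m/2$ and $1/2$, consistent with the $U(0,m)$ conclusion.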
Is my justification for the distribution of $M\mid X_1$ correct? I believe my final answer is intuitive.
Does $P(M = m)$ make sense? Or is $P(M = m) = 0$? I thought $M$ would have a continuous distribution.
– misogrumpy Jan 05 '18 at 19:13