
Let $X_1, \dots, X_m$ be i.i.d. random variables, where each $X_i$ has the following distribution ($k$ is a fixed stopping parameter):

$$ \text{P}(X_i = j) = \begin{cases} p(1-p)^{j -1}, & (j \in [1, k - 1] )\\ (1-p)^{k - 1}, & (j = k) \end{cases} $$

Let $M = \max_{1 \le i \le m} X_i$. Could you please help me find the precise forms of $\mathbb{E}(X_i)$ and $\mathbb{E}(M)$?

I can get: $\mathbb{E}(X_i) = \sum_{j = 1}^{k - 1}j\cdot p(1-p)^{j - 1} + k\cdot (1-p)^{k - 1}$
and
$\text{P}(M \le l) = \text{P}(X_i \le l)^m$

But I cannot simplify these further. In particular, I want to know how $\mathbb{E}(M)$ depends on $k$, i.e. how $\mathbb{E}(M)$ changes as $k$ varies.
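As a sanity check on the identity $\mathbb{P}(M \le l) = \mathbb{P}(X_i \le l)^m$, here is a small Monte Carlo sketch; the values of `p`, `k`, `m`, and `l` below are illustrative, not part of the problem:

```python
import random

# Illustrative parameters (not from the question)
p, k, m, l = 0.3, 10, 5, 4

def sample_x():
    # X is the index of the first success in Bernoulli(p) trials, truncated at k
    for j in range(1, k):
        if random.random() < p:
            return j
    return k

random.seed(0)
trials = 100_000
hits = sum(max(sample_x() for _ in range(m)) <= l for _ in range(trials))

# For l <= k-1, P(X <= l) = 1 - (1-p)^l, so P(M <= l) = (1 - (1-p)^l)^m
empirical = hits / trials
theoretical = (1 - (1 - p) ** l) ** m
print(empirical, theoretical)
```

The two printed numbers should agree to within Monte Carlo error (a few parts in a thousand at this sample size).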

Fan Zhang

1 Answer


The random variable $X$ satisfies $\mathbb{P}(X>x)=(1-p)^x$ for $x=0,1,\dots, k-1$ and $\mathbb{P}(X>x)=0$ for $x\geq k$. Therefore, by the tail-sum formula $\mathbb{E}(X)=\sum_{x\geq 0}\mathbb{P}(X>x)$, its expectation is $$\mathbb{E}(X)=\sum_{x=0}^{k-1} (1-p)^x= {1\over p}-{(1-p)^k\over p}.$$
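A quick numerical check that the closed form agrees with the direct sum from the question (the values of `p` and `k` are illustrative):

```python
# Illustrative parameters
p, k = 0.3, 10

# Direct expectation from the question's sum
E_direct = (sum(j * p * (1 - p) ** (j - 1) for j in range(1, k))
            + k * (1 - p) ** (k - 1))

# Closed form via the tail-sum identity E(X) = sum_{x=0}^{k-1} P(X > x)
E_closed = 1 / p - (1 - p) ** k / p

print(E_direct, E_closed)  # the two values agree up to floating-point error
```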

For the maximum $M$ of $m$ of these, we have for $x=0,1,\dots, k-1$ $$\begin{eqnarray*} \mathbb{P}(M>x)&=&1-\mathbb{P}(M\leq x)\\&=& 1-(1-\mathbb{P}(X>x))^m\\&=&1-(1-(1-p)^x)^m. \end{eqnarray*}$$

The expectation is $$\mathbb{E}(M)=\sum_{x=0}^{k-1} \left[1-\bigl(1-(1-p)^x\bigr)^m\right],$$ but I don't think this simplifies nicely unless $m=1$. It does answer your question about $k$, though: increasing $k$ by one adds the nonnegative term $1-\bigl(1-(1-p)^{k}\bigr)^m\approx m(1-p)^{k}$, so $\mathbb{E}(M)$ is increasing in $k$ and converges geometrically fast as $k\to\infty$.
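To see how $\mathbb{E}(M)$ varies with $k$, one can evaluate the tail-sum directly; the values of `p` and `m` below are illustrative:

```python
# Illustrative parameters
p, m = 0.3, 5

def expected_max(k):
    # E(M) = sum_{x=0}^{k-1} P(M > x) = sum_{x=0}^{k-1} [1 - (1 - (1-p)^x)^m]
    return sum(1 - (1 - (1 - p) ** x) ** m for x in range(k))

# E(M) increases with k, and the increments shrink geometrically,
# so it converges quickly to a limit as k grows
for k in (1, 2, 5, 10, 20, 50):
    print(k, expected_max(k))
```

For these parameters the values plateau well before $k=50$, illustrating the geometric convergence of the increments.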