Let $X_1, \dots, X_m$ be i.i.d. random variables, each with the following distribution, where $k$ is a stopping parameter:
$$ \text{P}(X_i = j) = \begin{cases} p(1-p)^{j - 1}, & j = 1, \dots, k - 1,\\ (1-p)^{k - 1}, & j = k. \end{cases} $$
Let $M = \max_{1 \le i \le m} X_i$. Could you please help me find the precise (closed) form of $\mathbb{E}(X_i)$ and $\mathbb{E}(M)$?
So far I have
$\mathbb{E}(X_i) = \sum_{j = 1}^{k - 1} j \cdot p(1-p)^{j - 1} + k \cdot (1-p)^{k - 1},$
and, by independence,
$\text{P}(M \le l) = \text{P}(X_1 \le l)^m.$
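Here is a quick numerical sanity check of the two expressions above (Python; the values of $p$, $k$, $m$ and the seed are only illustrative):

```python
import numpy as np

# Quick sanity check of the two expressions above (p, k, m are illustrative).
p, k, m = 0.3, 10, 5
rng = np.random.default_rng(0)

# E(X_i) evaluated directly from the sum
j = np.arange(1, k)                                   # j = 1, ..., k-1
e_X = np.sum(j * p * (1 - p) ** (j - 1)) + k * (1 - p) ** (k - 1)

# Monte Carlo: X_i is a geometric variable capped at k
X = np.minimum(rng.geometric(p, size=(200_000, m)), k)
M = X.max(axis=1)

print(e_X, X[:, 0].mean())                            # sum vs. simulation
# CDF relation: P(M <= l) vs. P(X_1 <= l)^m, checked at l = 3
l = 3
print((M <= l).mean(), ((X[:, 0] <= l).mean()) ** m)
```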
But I cannot simplify these any further. I want to know the relation between $\mathbb{E}(M)$ and $k$, i.e. how $\mathbb{E}(M)$ changes as $k$ varies.
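The closest I can get is a numerical sketch: using the tail sum $\mathbb{E}(M) = \sum_{l \ge 0} \text{P}(M > l)$ together with the relation above, I can tabulate $\mathbb{E}(M)$ for a few values of $k$ (again, $p$, $m$ and the $k$ values are only illustrative):

```python
import numpy as np

def expected_M(p, k, m):
    """E(M) via the tail sum E(M) = sum_{l=0}^{k-1} P(M > l),
    with P(M <= l) = P(X_1 <= l)^m and P(X_1 <= l) = 1 - (1-p)^l for l < k
    (geometric sum of the pmf; M never exceeds k, so the sum stops at k-1)."""
    l = np.arange(k)                    # l = 0, ..., k-1
    cdf_X = 1 - (1 - p) ** l            # P(X_1 <= l)
    return np.sum(1 - cdf_X ** m)       # sum of P(M > l)

p, m = 0.3, 5                           # illustrative values
for k in (2, 5, 10, 20, 50):
    print(k, round(expected_M(p, k, m), 4))
```

Numerically, $\mathbb{E}(M)$ increases with $k$ and appears to level off for large $k$, but I would like a precise expression (or asymptotic) for this dependence.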