3

Question:

Johnson's mobile phone has a Gmail app, and the arrival time of an email, $T$, has the following (exponential) density: $$T \sim \lambda e^{-\lambda t}$$

When an email arrives at time $t$, Johnson's email software emits a beep:

$$b_{t}= \begin{cases} 1 & \text{with probability $z$}\\ 0 & \text{with probability $1-z$} \end{cases}$$

Otherwise, if there is no email, $b_{t}=0$ always holds.

Johnson waits for the email for a time $t^{*}$; he stops waiting in the following two situations.

Situation 1: If $b_{t}=1$, Johnson stops waiting.

$$t_{1}=\min\{t:b_{t}=1\}$$

Situation 2: If $b_{s}=0$ for all $s \leq t$, Johnson forms a belief at time $t$: $$P(\text{the email has arrived before time } t \mid b_{s}=0,\, s\leq t)$$ When $P(\text{the email has arrived before time } t \mid b_{s}=0,\, s \leq t)=p$, Johnson also stops waiting.

$$t_{0}=\min\{t:P(\text{the email has arrived before time } t \mid b_{s}=0,\, s\leq t)=p\}$$

Thus we can define: $$t^{*}=\min\{t_{1},t_{0}\}$$

The question is: What is Johnson's expected waiting time $E[t^{*}]$?

To help in understanding the question above, I show its extreme cases:

When $z=1$, the phone always beeps as soon as the email arrives, so the expected waiting time is just the expected arrival time: $$\frac{1}{\lambda}$$

When $z=0$, the phone never beeps whether or not the email has arrived, so after time $t$ your belief that the email has arrived is: $$1-e^{-\lambda t}$$

You check the email when your belief that it has arrived equals $p$: $$1-e^{-\lambda t^{*}}=p$$ Thus in this situation the waiting time is degenerate and always equals: $$t^{*}=-\frac{\ln{(1-p)}}{\lambda}$$

It is easy to calculate the expected waiting time in the two extreme cases above ($z=1$ and $z=0$), but for $z \in (0,1)$, what is the expected waiting time?

The answer given by the original author is: $$\tilde{t}(z)=\frac{1-(1-p)^{\frac{z}{1-z}}}{\lambda z}$$

It is easy to check that: $$\tilde{t}(1)=\frac{1}{\lambda}$$ $$\lim_{z \to 0}\tilde{t}(z)=-\frac{\ln(1-p)}{\lambda}$$ so both boundary conditions hold.
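For the second limit, expanding the exponent to first order makes the check explicit: $$\lim_{z \to 0}\frac{1-(1-p)^{\frac{z}{1-z}}}{\lambda z}=\lim_{z \to 0}\frac{1-\exp\!\left(\frac{z}{1-z}\ln(1-p)\right)}{\lambda z}=\lim_{z \to 0}\frac{-\frac{z}{1-z}\ln(1-p)+O(z^{2})}{\lambda z}=-\frac{\ln(1-p)}{\lambda}$$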

Galor
  • 61
  • The integral of $e^{-\lambda t}$ over $[0,\infty)$ is $\lambda^{-1}$, so it's only a probability density if $\lambda=1$. Was this a typo and you meant $\lambda e^{-\lambda t}$? – Nap D. Lover Jun 17 '17 at 01:31
  • Oh, sorry, I omitted the $\lambda$; $T$ follows an exponential distribution. I have modified it. – Galor Jun 17 '17 at 02:46
  • @user24930 Were you able to derive the answer? – Dhruv Kohli Jun 20 '17 at 13:09
  • @expiTTp1z0, thank you very much. Based on your answer, I corrected a small mistake and the solution now seems closer to the posted answer. – Galor Jun 21 '17 at 04:15

2 Answers

3

$$E(t^*) = E(t_1\mathbb{I}(t_1 \leq t_0) + t_0\mathbb{I}(t_1 > t_0)) = E(t_1\mathbb{I}(t_1 \leq t_0)) + t_0E(\mathbb{I}(t_1 > t_0))$$

$$\implies E(t^*) = E(t_1\mathbb{I}(t_1 \leq t_0)) + t_0P(t_1>t_0),$$ since $t_0$ is deterministic.
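Since $t_0$ is a fixed (non-random) time, the same quantity can also be written as a single integral, which gives a quick cross-check of the final result: $$E(t^*) = E[\min(t_1,t_0)] = \int_{0}^{t_0} P(t_1 > t)\, dt.$$ With $P(t_1 > t) = e^{-z\lambda t}$, derived below, this integral already yields the closed form.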

Based on the distribution of the arrival time, the number of mails arriving in a time interval of length $t$ follows a Poisson distribution, $\mathrm{Po}(\lambda t)$.

Computing $E(t_1\mathbb{I}(t_1 \leq t_0))$ and $P(t_1\leq t_0)$.

$$\begin{align} P(t_1 \leq t) &= \sum\limits_{n=1}^{\infty}P(\text{at least one beep} | n\text{ mails in time } [0,t])P(n\text{ mails in time } [0,t]) \\\\ &= \sum\limits_{n=1}^{\infty}(1-(1-z)^n)e^{-\lambda t}\frac{(\lambda t)^n}{n!} = \sum\limits_{n=1}^{\infty}e^{-\lambda t}\frac{(\lambda t)^n}{n!} - e^{-\lambda t}\sum\limits_{n=1}^{\infty}(1-z)^{n}\frac{(\lambda t)^n}{n!} \\\\ &= (1-e^{-\lambda t}) - e^{-\lambda t}(e^{(1-z)\lambda t} - 1) = 1 - e^{-z\lambda t} \\\\ P(t_1 \leq t_0) &= 1 - e^{-z\lambda t_0} \\\\ E(t_1\mathbb{I}(t_1 \leq t_0)) &= \int_{0}^{t_0} t \cdot z\lambda e^{-z\lambda t} dt = \frac{1}{z\lambda}(1-e^{-z\lambda t_0}) - t_0 e^{-z\lambda t_0}\end{align}$$
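The collapse of the two series into $1 - e^{-z\lambda t}$ can be verified numerically; a minimal sketch, using arbitrary illustrative values of $\lambda$, $z$ and $t$ (not tied to the problem):

```python
import math

lam, z, t = 1.3, 0.4, 2.0          # arbitrary illustrative values
# partial sum of  sum_{n>=1} (1 - (1-z)^n) * e^{-lam t} (lam t)^n / n!
series = sum((1 - (1 - z)**n) * math.exp(-lam * t) * (lam * t)**n / math.factorial(n)
             for n in range(1, 60))
closed = 1 - math.exp(-z * lam * t)
print(series, closed)              # the two values agree to floating-point accuracy
```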

Now we compute $t_0$,

$$\begin{align} p &= \sum\limits_{n=1}^{\infty} P(n\text{ mails have arrived in time } [0,t_0] | b_s = 0,\, \forall s \leq t_0) \\\\ p &= \sum\limits_{n=1}^{\infty}\frac{P(b_s = 0,\, \forall s \leq t_0 | n\text{ mails arrived in time } [0,t_0]) P(n\text{ mails arrived in time } [0,t_0])}{P(b_s = 0,\, \forall s \leq t_0)} \\\\ p &= \sum\limits_{n=1}^{\infty}\frac{P(b_s = 0,\, \forall s \leq t_0 | n\text{ mails arrived in time } [0,t_0]) P(n\text{ mails arrived in time } [0,t_0])}{\sum\limits_{i=0}^{\infty}P(b_s = 0,\, \forall s \leq t_0 | i\text{ mails arrived in time } [0,t_0]) P(i\text{ mails arrived in time } [0,t_0])} \\\\ p &= \frac{\sum\limits_{n=1}^{\infty}(1-z)^n e^{-\lambda t_0} \frac{(\lambda t_0)^n}{n!}}{\sum\limits_{i=0}^{\infty}(1-z)^i e^{-\lambda t_0} \frac{(\lambda t_0)^i}{i!}} = \frac{e^{-\lambda t_0}(e^{(1-z)\lambda t_0} - 1)}{e^{-\lambda t_0}(e^{(1-z)\lambda t_0})} = 1 - e^{-\lambda t_0(1-z)}\end{align}$$
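The conditional probability $p = 1 - e^{-(1-z)\lambda t_0}$ can also be checked by simulating the arrival/beep mechanism directly; a minimal Monte Carlo sketch with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, z, t0 = 1.0, 0.5, 1.2                  # arbitrary illustrative values
trials, no_beep, arrived_and_no_beep = 200_000, 0, 0
for _ in range(trials):
    n = rng.poisson(lam * t0)               # number of emails arriving in [0, t0]
    beeps = rng.random(n) < z               # each email beeps independently with prob. z
    if not beeps.any():                     # condition on hearing no beep up to t0
        no_beep += 1
        if n >= 1:
            arrived_and_no_beep += 1
print(arrived_and_no_beep / no_beep,        # empirical P(at least one email | no beep)
      1 - np.exp(-(1 - z) * lam * t0))      # closed form 1 - e^{-(1-z) lambda t0}
```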

Now we have:

$$P(t_1 \leq t_0) = 1 - e^{-z\lambda t_0} \implies P(t_1 > t_0) = e^{-z\lambda t_0}$$

$$E(t_1\mathbb{I}(t_1 \leq t_0)) = \frac{1}{z\lambda}(1-e^{-z\lambda t_0}) - t_0 e^{-z\lambda t_0}$$

$$1 - e^{-\lambda t_0(1-z)} = p \implies t_0 = -\frac{\ln(1-p)}{\lambda(1-z)} \text{ and } e^{-z\lambda t_0} = (1-p)^{\frac{z}{1-z}}$$

Now we substitute these values into

$$E(t^*) = E(t_1\mathbb{I}(t_1 \leq t_0)) + t_0P(t_1>t_0)$$

$$\implies E[t^*] = \frac{1-(1-p)^{\frac{z}{1-z}}}{\lambda z}$$
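As a numerical sanity check, here is a minimal Monte Carlo sketch (Python, with arbitrary illustrative values of $\lambda$, $z$, $p$) that simulates the arrival/beep process directly and compares the empirical mean of $t^{*}$ with the closed form above. With these illustrative values the closed form evaluates to $1.4$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, z, p = 1.0, 0.5, 0.7                        # arbitrary illustrative values
t0 = -np.log(1.0 - p) / (lam * (1.0 - z))        # belief-threshold time derived above

def one_waiting_time():
    """Simulate arrivals/beeps and return t* = min(t1, t0)."""
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam)          # next email arrival
        if t > t0:                               # belief threshold reached before any beep
            return t0
        if rng.random() < z:                     # this email triggers a beep
            return t

n_sims = 200_000
empirical = np.mean([one_waiting_time() for _ in range(n_sims)])
closed_form = (1.0 - (1.0 - p) ** (z / (1.0 - z))) / (lam * z)
print(empirical, closed_form)                    # agree up to Monte Carlo error
```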

Dhruv Kohli
  • 5,216
0

The reply is a little long for a comment, so I write it as an answer. Based on the derivation above by @expiTTp1z0, I correct a small mistake and find that the solution comes closer to the posted answer. The key step is the calculation of $p$: \begin{align} p &= \sum\limits_{n=1}^{\infty} P(n\text{ mails have arrived in time } [0,t_0] | b_s = 0,\, \forall s \leq t_0) \\ &= \frac{\sum\limits_{n=1}^{\infty}P(b_s = 0,\, \forall s \leq t_0 ,\, n\text{ mails have arrived in time } [0,t_0])}{P(b_s=0,\,\forall s \leq t_{0})} \\ &= \sum\limits_{n=1}^{\infty}\frac{(1-z)^n e^{-\lambda t_0} \frac{(\lambda t_0)^n}{n!}}{e^{-\lambda zt_{0}}} = 1-e^{-(1-z)\lambda t_0} \end{align} Then the explicit solution for $t_{0}$ can be obtained: $$t_{0}=-\frac{\ln(1-p)}{\lambda(1-z)}$$

The modification above uses the conditional-probability formula $$P(n\text{ mails have arrived in time } [0,t_0] | b_s = 0,\, \forall s \leq t_0)=\frac{P(n\text{ mails have arrived in time } [0,t_0] ,\, b_s = 0,\, \forall s \leq t_0)}{P(b_s = 0,\, \forall s \leq t_0)}$$ Furthermore, \begin{align}P(b_{s}=0,\,\forall s \leq t_{0})&=P(\text{no email arrives before $t_{0}$})+P(\text{an email has arrived without a beep before $t_{0}$})\\&=e^{-\lambda t_{0}}+(e^{-z\lambda t_{0}}-e^{-\lambda t_{0}})=e^{-z\lambda t_{0}}\end{align}

Then, following the first answer's results: $$P(t_{1} \leq t_{0})=1-e^{-z\lambda t_{0}}$$ $$P(t_{1} > t_{0})=e^{-z\lambda t_{0}}$$ $$E[t_{1}]=\frac{1}{\lambda z}$$ Using the modification results, $$1-e^{-(1-z)\lambda t_{0}}=p \Rightarrow e^{-z\lambda t_{0}}=(1-p)^{\frac{z}{1-z}}$$ Thus \begin{align}E[t^{*}]&=E[t_{1}]P(t_{1}\leq t_{0})+t_{0}P(t_{1}>t_{0})\\&=\frac{1-e^{-z\lambda t_{0}}}{\lambda z}+t_{0}e^{-z\lambda t_{0}}\\ &= \frac{1-(1-p)^{\frac{z}{1-z}}}{\lambda z}-\frac{\ln(1-p)}{\lambda(1-z)}(1-p)^{\frac{z}{1-z}}\end{align}
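A quick numerical check of the substitution $e^{-z\lambda t_{0}}=(1-p)^{\frac{z}{1-z}}$ (a minimal sketch with arbitrary illustrative values, not part of the derivation):

```python
import math

lam, z, p = 2.0, 0.3, 0.6                 # arbitrary illustrative values
t0 = -math.log(1 - p) / (lam * (1 - z))   # from 1 - e^{-(1-z) lambda t0} = p
print(math.exp(-z * lam * t0),            # e^{-z lambda t0}
      (1 - p) ** (z / (1 - z)))           # (1-p)^{z/(1-z)}; the two values match
```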

Galor
  • 61
  • How did you get the results $(1-p)^{\frac{z}{1-z}}$ right after "Using the Modification results"... – Satish Ramanathan Jun 21 '17 at 04:34
  • The solution is not clean and it seems a little botched up – Satish Ramanathan Jun 21 '17 at 04:35
  • Could you tell me the source of the problem? – Satish Ramanathan Jun 21 '17 at 04:35
  • @user24930 Thanks for looking into my answer and pointing out the mistake. I have corrected my answer. But still the derived answer doesn't match the answer mentioned in your question. Probably, we are missing something if your answer is correct. – Dhruv Kohli Jun 21 '17 at 05:57
  • @satishramanathan, $1-e^{-(1-z)\lambda t_{0}}=p \Leftrightarrow 1-p=e^{-(1-z)\lambda t_{0}} \Leftrightarrow (1-p)^{\frac{z}{1-z}}=(e^{-(1-z)\lambda t_{0}})^{\frac{z}{1-z}} \Leftrightarrow (1-p)^{\frac{z}{1-z}}=e^{-z\lambda t_{0}}$, I think this is clear – Galor Jun 21 '17 at 07:19
  • @satishramanathan, I took this problem from this paper http://jeffely.com/assets/beeps.pdf; it appears on page 5 – Galor Jun 21 '17 at 07:23
  • @expiTTp1z0, the original problem is from this paper http://jeffely.com/assets/beeps.pdf , on page 5; I took this random-beep problem from there. In fact, I tend to believe we are correct and that the paper's answer misses the second term in our solution. An easy check: let $z \to 0$; applying l'Hopital's rule to the paper's solution, we can see $$\lim_{z \to 0}\frac{1-(1-p)^{\frac{z}{1-z}}}{\lambda z}=\lim_{z \to 0}\frac{\frac{z}{(1-z)^{3}}(1-p)^{\frac{z}{1-z}}}{\lambda}=0$$The boundary condition fails in the paper's answer, but this boundary condition holds in our solution. – Galor Jun 21 '17 at 07:27
  • The above limit is $-\frac{\ln(1-p)}{\lambda}$. Your differentiation with respect to $z$ is incorrect. $\frac{\partial (1-p)^{\frac{z}{1-z}}}{\partial z} = \frac{((1-p)^{\frac{z}{1-z}})\ln(1-p)}{(1-z)^2}$. So, I still think that our answer is probably missing some crucial point. I will look into the answer again sometime. – Dhruv Kohli Jun 21 '17 at 07:43
  • Yes, the original answer satisfies the boundary condition and it is more elegant. This problem remains to be explored... – Galor Jun 21 '17 at 08:06
  • I made a correction and now we have the desired result. – Dhruv Kohli Jun 24 '17 at 12:50
  • @expiTTp1z0, excellent! Thanks very much! Both corrections concern the computation of conditional probability and conditional expectation; the calculation needs extra care. – Galor Jun 25 '17 at 14:01
  • Yes. So I guess the answer is acceptable. – Dhruv Kohli Jun 25 '17 at 14:03