
We have a system in which events happen one after another. The time interval between two consecutive events is denoted by the random variable $t_i$: the interval between the first and second events is $t_1$, the interval between the second and third events is $t_2$, and so on. The system keeps working as long as the time interval between two successive events is smaller than $\tau$; in other words, the system stops as soon as the time interval between two successive events is larger than $\tau$.

Assuming $t_n$, the interval between the $n^{th}$ and $(n+1)^{th}$ events, is the first one larger than $\tau$, we can list all the time intervals between events as follows:

$t_1,t_2,t_3,\dots t_n$

All $t_i$, $1\le i \le n$, are i.i.d. exponential random variables with expected value $\frac{1}{\lambda}$, so

$E[t_1]=E[t_2]=\dots=E[t_n]=\frac{1}{\lambda}.$

Let $f(t|n)$ denote the PDF of $t=\sum_{i=1}^{n-1}t_i + t_n$ conditional on $n$. We can then write the PDF $f(t)$ of the time between the start and the stop of the system as follows:

$f(t)=\sum_{n=1}^{\infty}f(t|n)P(n)$

in which $P(n) = (1-e^{-\lambda \tau})^{n-1}e^{-\lambda \tau}$

Now, we need to calculate the expected value of $t$. How?
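For a numerical sanity check, here is a minimal Monte Carlo sketch (Python, with illustrative values $\lambda=1$, $\tau=1$) that runs the system repeatedly, estimates $E[t]$, and compares it against the closed form $e^{\lambda\tau}/\lambda$ that comes up in the comments below:

```python
import math
import random

def simulate_total_time(lam, tau, rng):
    """Run the system once: add exponential gaps until one exceeds tau.

    Returns the total elapsed time t = t_1 + ... + t_n, where t_n is the
    first gap larger than tau (the stopping gap is included in the total).
    """
    total = 0.0
    while True:
        gap = rng.expovariate(lam)
        total += gap
        if gap > tau:
            return total

lam, tau, runs = 1.0, 1.0, 200_000
rng = random.Random(0)
estimate = sum(simulate_total_time(lam, tau, rng) for _ in range(runs)) / runs
closed_form = math.exp(lam * tau) / lam  # E[t] = e^(lambda*tau)/lambda (see comments below)
print(f"simulated E[t] = {estimate:.4f}, closed form = {closed_form:.4f}")
```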

Afterwards, I need to consider a different story. We have the same system in which events happen one after another, and the time interval between two consecutive events is denoted by the random variable $t_i$. We can list all the time intervals between events as follows:

$t_1,t_2,t_3,\dots t_n$

All $t_i$, $1\le i \le n$, are i.i.d. exponential random variables with expected value $\frac{1}{\lambda}$, so

$E[t_1]=E[t_2]=\dots=E[t_n]=\frac{1}{\lambda}.$

The system, however, keeps running as long as the time interval between the $(i-1)^{th}$ and $(i+1)^{th}$ events is less than $\tau$. In other words, the system keeps running as long as $t_1+t_2 < \tau$, $t_2+t_3 < \tau$, $t_3+t_4 < \tau$, and so on. The system stops as soon as $t_{n-1}+t_n > \tau$.

Now, how can I find the expected value of $t=\sum_{i=1}^{n-2}t_i + t_{n-1} + t_n$ conditional on $n$?
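As with the first part, a small simulation can be used to sanity-check whatever closed form we derive. A minimal Monte Carlo sketch (Python, with illustrative values $\lambda=1$, $\tau=2$) that runs the second system, records $n$ and $t$ for each run, and estimates $E[t \mid n]$ empirically by grouping the runs on $n$:

```python
import random
from collections import defaultdict

def run_second_system(lam, tau, rng):
    """Draw gaps until two consecutive gaps sum to more than tau.

    Returns (n, t): n is the number of gaps drawn (n >= 2) and
    t = t_1 + ... + t_n is the total elapsed time.
    """
    prev = rng.expovariate(lam)        # t_1
    total, n = prev, 1
    while True:
        cur = rng.expovariate(lam)     # the next gap
        total += cur
        n += 1
        if prev + cur > tau:           # stopping rule: t_{n-1} + t_n > tau
            return n, total
        prev = cur

lam, tau, runs = 1.0, 2.0, 200_000
rng = random.Random(0)
totals, counts = defaultdict(float), defaultdict(int)
for _ in range(runs):
    n, t = run_second_system(lam, tau, rng)
    totals[n] += t
    counts[n] += 1
for n in sorted(counts)[:5]:
    print(f"n = {n}: E[t | n] ~ {totals[n] / counts[n]:.4f} (from {counts[n]} runs)")
```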

  • When you say "shown by $f(t\mid n)$", do you mean "expressed in terms of $f(t\mid n)$", as you did in the first case, $f(t)=\sum_{n=1}^\infty f(t\mid n)P(n)$? (By the way, note that the spacing came out right because I used \mid instead of |.) – joriki Jun 07 '16 at 17:15
  • Your expression for $f(t\mid n)$ is wrong. What you wrote is $f(t)$ if you add $n$ independent exponential random variables. But $f(t\mid n)$ is something quite different from that; it contains the condition that $n-1$ of the variables are less than $\tau$ and one is greater, and you didn't take that into account. – joriki Jun 07 '16 at 17:18
  • As you stated, the sum of i.i.d. exponential random variables has a gamma (Erlang) distribution, and this is well known. The question is, are $t_i$ unconditionally i.i.d. exponential? And in the sum, are you conditional on the event that $t_i < \tau, i = 1, 2, \ldots, n-1$ and $t_n > \tau$? If yes then you do not have such a nice result. Or you need to clarify, anyway. – BGM Jun 07 '16 at 17:24
  • @joriki You are absolutely right. Could you please tell me how I can come up with $f(t|n)$ since it contains the condition that $n−1$ of the variables are less than $\tau$ and one is greater. – Alireza Montazeri Gh Jun 07 '16 at 17:36
  • @BGM You are absolutely right. Could you please tell me how I can come up with $f(t|n)$ since it contains the condition that $n−1$ of the variables are less than $\tau$ and one is greater. – Alireza Montazeri Gh Jun 07 '16 at 17:38
  • I'd expect the PDF for $t$ even in the first case to be quite messy. Are you sure you need the PDF? You're mixing it with expectation values, e.g. $f(t)=\sum_{n=1}^{\infty}E(t\mid n)$ doesn't make sense as far as I can tell. If you only need the expected value of $t$, that would be a lot easier to obtain. (By the way, is there a reason why you keep using | instead of \mid after I pointed out the cramped spacing?) – joriki Jun 07 '16 at 17:43
  • Well, Yup. I just need the Expectation Value. @Joriki – Alireza Montazeri Gh Jun 07 '16 at 17:44
  • Rule #1 in probability: Never try to find the probability distribution if you only need the expected value. In most cases, the expected value is far easier to compute without taking the detour through the distribution. – joriki Jun 07 '16 at 17:45
  • @joriki Absolutely right. – Alireza Montazeri Gh Jun 07 '16 at 17:46
  • @joriki I edited the question. So, it is now clear we are looking for Expected Value. – Alireza Montazeri Gh Jun 07 '16 at 17:56
  • I'm working on an answer. – joriki Jun 07 '16 at 17:56
  • I missed the fact that you want the expected value conditional on $n$, not the overall expected value of the sum. I worked out the unconditional expected value, but since you want the conditional one, I deleted it and posted a new question together with the answer. It might also be of interest in your case. – joriki Jun 08 '16 at 14:09
  • Regarding your answer on the other page, is $E(S)$ the expected value of the time until the system encounters the first interval between the $(i-1)^{th}$ and $(i+1)^{th}$ events that is larger than $\tau$? @joriki – Alireza Montazeri Gh Jun 13 '16 at 19:52
  • Yes. $S$ is what you denote by $t$ in your question. – joriki Jun 13 '16 at 19:56
  • Hi @joriki. I need to thank you again for your very nice answer. I was wondering if you could help me with the following question which is very close to the question you answered: As you remember the question you answered, the system continues working as long as $t_i<\tau$ for $1\le i < n$. The system stops when $t_n > \tau$. My new question is that what if system keeps working as long as $t_i>\tau$ for $1\le i < n$, and then the system stops when $t_n < \tau$. I really appreciate your time. – Alireza Montazeri Gh Aug 10 '16 at 18:05
  • @AlirezaMontazeriGh: And you want the unconditional expected value of the sum of the steps up to and including $t_n$? – joriki Aug 10 '16 at 18:11
  • @joriki, Yes. let me explain my whole question in a new post. – Alireza Montazeri Gh Aug 10 '16 at 18:42
  • @joriki you can find my question in here – Alireza Montazeri Gh Aug 10 '16 at 19:40

1 Answer


For the first example, all you need to do is to work out the truncated exponential distribution.

$$ \begin{align} &~ E\left[\sum_{i=1}^n T_i~\middle|~ \bigcap_{j=1}^{n-1}\{T_j \leq \tau\} \cap \{T_n > \tau\}\right] \\ =&~ \sum_{i=1}^n E\left[T_i~\middle|~ \bigcap_{j=1}^{n-1}\{T_j \leq \tau\} \cap \{T_n > \tau\}\right] \\ =&~ \sum_{i=1}^{n-1} E\left[T_i \mid T_i \leq \tau \right] + E[T_n \mid T_n > \tau] \end{align}$$

Here is the nice part of exponential distribution: Consider the conditional CDF of $T_n \mid T_n > \tau$, for $t > \tau$: $$ \Pr\{T_n \leq t \mid T_n > \tau\} = \frac {\Pr\{T_n \leq t, T_n > \tau\}} {\Pr\{T_n > \tau\}} = \frac {e^{-\lambda\tau}-e^{-\lambda t}} {e^{-\lambda\tau}} = 1 - e^{-\lambda(t - \tau)}$$

which shows that $T_n \mid T_n > \tau$ has the same distribution as $T_n + \tau$ (the shifted exponential), and this is the memoryless property.
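A quick empirical check of this memoryless step (a sketch, with illustrative values $\lambda=1$, $\tau=1$): sampling exponential variables and conditioning on exceeding $\tau$ should give a conditional mean close to $\tau + \frac{1}{\lambda}$.

```python
import random

lam, tau, samples = 1.0, 1.0, 500_000
rng = random.Random(0)
draws = [rng.expovariate(lam) for _ in range(samples)]
tail = [x for x in draws if x > tau]          # condition on T > tau
# By the memoryless property, T | T > tau is distributed as tau + Exp(lambda),
# so its mean should be close to tau + 1/lambda.
print(f"E[T | T > tau] ~ {sum(tail) / len(tail):.4f}, expected {tau + 1 / lam:.4f}")
```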

Next you may use a similar trick to work out the conditional CDF of $T_1 \mid T_1 \leq \tau$, and obtain the expectation. Or you may consider this:

$$ \begin{align} && E[T_1] &= E[T_1 \mid T_1 \leq \tau]\Pr\{T_1 \leq \tau\} + E[T_1 \mid T_1 > \tau]\Pr\{T_1 > \tau\} \\ &\Rightarrow & \frac {1} {\lambda} &= E[T_1 \mid T_1 \leq \tau] (1 - e^{-\lambda \tau}) + \left(\frac {1} {\lambda} + \tau\right)e^{-\lambda \tau} \\ &\Rightarrow & E[T_1 \mid T_1 \leq \tau] &= \frac {1} {\lambda} - \frac {\tau e^{-\lambda\tau}} {1 - e^{-\lambda\tau}} \end{align}$$

So the above expectation becomes $$ (n - 1)\left(\frac {1} {\lambda} - \frac {\tau e^{-\lambda\tau}} {1 - e^{-\lambda\tau}}\right) + \frac {1} {\lambda} + \tau = \frac {n} {\lambda} - \frac {(n-1)\tau e^{-\lambda\tau}} {1 - e^{-\lambda\tau}} + \tau$$
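As a sanity check of this conditional expectation, here is a minimal simulation sketch (Python, illustrative values $\lambda=1$, $\tau=1$) that runs the first system, groups the runs by $n$, and compares the empirical mean of $t$ with the formula above:

```python
import math
import random
from collections import defaultdict

lam, tau, runs = 1.0, 1.0, 300_000
rng = random.Random(0)
totals, counts = defaultdict(float), defaultdict(int)
for _ in range(runs):
    total, n = 0.0, 0
    while True:
        gap = rng.expovariate(lam)
        total += gap
        n += 1
        if gap > tau:                  # the system stops on the first gap larger than tau
            break
    totals[n] += total
    counts[n] += 1

def conditional_mean(n, lam, tau):
    """E[t | n] = n/lambda - (n-1)*tau*exp(-lambda*tau)/(1 - exp(-lambda*tau)) + tau."""
    e = math.exp(-lam * tau)
    return n / lam - (n - 1) * tau * e / (1 - e) + tau

for n in sorted(counts)[:4]:
    print(f"n = {n}: simulated {totals[n] / counts[n]:.4f}, formula {conditional_mean(n, lam, tau):.4f}")
```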

We can employ a similar strategy for the second part: $$ \begin{align} &~ E\left[\sum_{i=1}^n T_i~\middle|~ \bigcap_{j=1}^{n-2}\{T_j + T_{j+1} \leq \tau\} \cap \{T_{n-1} + T_n > \tau\}\right] \\ =&~ \sum_{i=1}^n E\left[T_i~\middle|~ \bigcap_{j=1}^{n-2}\{T_j + T_{j+1} \leq \tau\} \cap \{T_{n-1} + T_n > \tau\}\right] \\ =&~ E\left[T_1 \mid T_1 + T_2 \leq \tau \right] + \sum_{i=2}^{n-2} E\left[T_i \mid T_{i-1} + T_i \leq \tau, T_i + T_{i+1} \leq \tau \right] \\ &~ + E[T_{n-1} \mid T_{n-2} + T_{n-1} \leq \tau, T_{n-1} + T_n > \tau] + E[T_n \mid T_{n-1} + T_n > \tau] \end{align}$$

So we compute the conditional CDFs one by one. First, for $T_1 \mid T_1 + T_2 \leq \tau$ and $0 < t < \tau$, $$ \Pr\{T_1 \leq t \mid T_1 + T_2 \leq \tau\} = \frac {\Pr\{T_1 \leq t, T_1 + T_2 \leq \tau\}} {\Pr\{T_1 + T_2 \leq \tau\}} $$ The numerator is given by $$ \begin{align} \int_0^t \Pr\{T_2 \leq \tau - u\} \lambda e^{-\lambda u}\,du &= \int_0^t (1 - e^{-\lambda(\tau - u)}) \lambda e^{-\lambda u}\,du \\ &= 1 - e^{-\lambda t} - \int_0^t \lambda e^{-\lambda\tau}\,du \\ &= 1 - e^{-\lambda t} - \lambda t e^{-\lambda\tau} \end{align}$$ The denominator is similar; we just replace the upper integration limit $t$ by $\tau$ and obtain $1 - e^{-\lambda\tau} - \lambda \tau e^{-\lambda\tau}$ (or directly look up the CDF of the Erlang distribution). So the resulting CDF is $$ \frac {1 - e^{-\lambda t} - \lambda t e^{-\lambda\tau}} {1 - e^{-\lambda\tau} - \lambda \tau e^{-\lambda\tau}}, \quad 0 < t < \tau$$

and thus the expected value is $$ \begin{align} &~ \int_0^{\tau} 1 - \frac {1 - e^{-\lambda t} - \lambda t e^{-\lambda\tau}} {1 - e^{-\lambda\tau} - \lambda \tau e^{-\lambda\tau}} \,dt \\ =&~ \frac {1} {1 - e^{-\lambda\tau} - \lambda \tau e^{-\lambda\tau}} \int_0^{\tau} e^{-\lambda t} + \lambda t e^{-\lambda\tau} - e^{-\lambda\tau} - \lambda \tau e^{-\lambda\tau} \,dt \\ =&~ \frac {1} {1 - e^{-\lambda\tau} - \lambda \tau e^{-\lambda\tau}} \left( \frac {1} {\lambda} - \frac {1} {\lambda} e^{-\lambda \tau} + \frac {\lambda} {2} \tau^2 e^{-\lambda\tau} - \tau e^{-\lambda\tau} - \lambda \tau^2 e^{-\lambda\tau} \right) \\ =&~ \frac {1} {1 - e^{-\lambda\tau} - \lambda \tau e^{-\lambda\tau}} \left( \frac {1} {\lambda} - \frac {1} {\lambda} e^{-\lambda \tau} - \frac {\lambda} {2} \tau^2 e^{-\lambda\tau} - \tau e^{-\lambda\tau} \right) \end{align}$$ It looks tedious but manageable. The remaining terms are left to you, as you now have all the tools to work them out from this example.
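A quick numerical check of $E[T_1 \mid T_1 + T_2 \leq \tau]$ against this closed form (a sketch, illustrative values $\lambda=1$, $\tau=1$):

```python
import math
import random

lam, tau, samples = 1.0, 1.0, 500_000
rng = random.Random(0)
kept = []
for _ in range(samples):
    t1, t2 = rng.expovariate(lam), rng.expovariate(lam)
    if t1 + t2 <= tau:                 # keep only samples satisfying T_1 + T_2 <= tau
        kept.append(t1)

e = math.exp(-lam * tau)
closed_form = (1 / lam - e / lam - 0.5 * lam * tau**2 * e - tau * e) / (1 - e - lam * tau * e)
print(f"simulated E[T1 | T1+T2 <= tau] ~ {sum(kept) / len(kept):.4f}, formula {closed_form:.4f}")
```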

BGM
  • thanks for your answer @BGM. So, if I want to find $E(t)$ independent of $n$, I just need to calculate the following: $E(t) = \sum_{n=1}^{\infty} E(t|n)P(n) = \sum_{n=1}^{\infty} (\frac{n}{\lambda}-\frac{(n-1) \tau e^{-\lambda \tau}}{1-e^{-\lambda \tau}}+\tau)(1-e^{-\lambda \tau})^{n-1}e^{-\lambda \tau}$ Right? $E(t)$, however, will be $\infty$!!! – Alireza Montazeri Gh Jun 13 '16 at 18:02
  • @AlirezaMontazeriGh: Why $\infty$? The factor $(1-\mathrm e^{-\lambda\tau})^{n-1}$ provides exponential decay, so the series converges (as it must). If you're interested in the unconditional expected value $E[t]$, it can be calculated much more directly by one-step analysis:

    $$ E[t]=\frac1\lambda+\left(1-\mathrm e^{-\lambda\tau}\right)E[t]\;, $$

    and thus

    $$ E[t]=\frac{\mathrm e^{\lambda\tau}}\lambda\;. $$

    – joriki Jun 13 '16 at 20:00
  • right. I also calculated it as follows: $E(t) = \sum_{n=1}^{\infty}\frac{n}{q}p^{n-1}q-\sum_{n=1}^{\infty}\frac{(n-1)\tau q}{p}p^{n-1}q + \sum_{n=1}^{\infty}\tau p^{n-1}q = \frac{q}{\lambda p}\sum_{n=1}^{\infty}np^n - \frac{\tau q^2}{p}\sum_{n=1}^{\infty}(n-1)p^{n-1} + \tau q \sum_{n=1}^{\infty}p^{n-1} = \frac{1}{\lambda q} = \frac{e^{\lambda \tau}}{\lambda}$ in which $q = e^{\lambda \tau}, p = 1- q$ @joriki – Alireza Montazeri Gh Jun 13 '16 at 20:55
  • $q=e^{-\lambda \tau}$ – Alireza Montazeri Gh Jun 13 '16 at 21:02
  • @AlirezaMontazeriGh: Looks good (except the $q$ in the denominator in the first sum should be a $\lambda$). – joriki Jun 13 '16 at 21:17
  • @joriki So, If I keep working on the second part based on your solution in the page and find the $E(t)=\sum_{n=1}^{\infty}E(t|n)P(n)$, I will get the same answer you provided in the following page? http://math.stackexchange.com/questions/1818505/expected-sum-of-exponential-variables-until-two-of-them-sum-to-a-threshold – Alireza Montazeri Gh Jun 13 '16 at 21:55
  • @AlirezaMontazeriGh: I'm not sure what you mean by "my solution in the page". I applied one-step analysis, as in my first comment above, but more involved because it needs to be conditionalised on the previous interval. I didn't condition on $n$, as you requested and as BGM did in this post. To get $E[t]$ from $E[t|n]$, you'd need $P(n)$, which seems about as hard to determine as $E[t]$ itself. – joriki Jun 13 '16 at 22:01
  • @joriki Well, I meant BGM's answer on this page. My concern is that BGM's solution for the first part of my question finds the expected value $E(t|n)$, which is conditional on $n$. As you said in a couple of comments above and as I could calculate, $E(t)$ would be $\sum_{n=1}^{\infty}E(t|n)P(n) = \frac{e^{\lambda \tau}}{\lambda}$, which is gonna be the average time between the start and end of the system. – Alireza Montazeri Gh Jun 14 '16 at 04:49
  • @joriki For the second part of my question, I'm looking for the average time of the system as long as the time interval between the $(i-1)^{th}$ and $(i+1)^{th}$ events is less than $\tau$. Consequently, the system may stop after the $2^{nd}$, or $3^{rd}$, or $4^{th}$ event and so on. I was asking for $E(t|n)$ for the second part of my question on this page since I guessed finding $P(n)$ is not a difficult job. Otherwise, I could use $E(t) = \sum_{n=1}^{\infty}E(t|n)P(n)$ to find the average time between start and end of the system for the second part of my question. – Alireza Montazeri Gh Jun 14 '16 at 04:50
  • @joriki So, if $\frac{e^{\lambda \tau}}{\lambda}$ is the average time between the start and end of the system for the first part of my question, your solution on the other page would be the average time between the start and end of the system for the second part of my question? – Alireza Montazeri Gh Jun 14 '16 at 04:51
  • @AlirezaMontazeriGh: Yes. This shows why you should always provide the context of your original problem, rather than just the approach that you're currently using to approach it. Your question was phrased to ask for the expectation conditional on $n$, and now it seems to turn out that you were interested in the unconditional expectation after all and were merely taking the approach of obtaining it through the conditional expectation because you thought that this would be the easiest way to obtain it. That wasn't apparent from the question. – joriki Jun 14 '16 at 05:59
  • @AlirezaMontazeriGh: And the same had already happened previously when you wrote that you were looking for the entire distribution, and it turned out that in fact you weren't and this was just your current approach in trying to obtain the expected value. – joriki Jun 14 '16 at 06:01
  • @joriki I agree that I could not clearly state what I was exactly looking for. Anyway, I really appreciate your time. – Alireza Montazeri Gh Jun 14 '16 at 07:53