
Let $X = (X_t)_{t\geq 0}$ be a Markov chain with states $s_1$ and $s_2$. Suppose $X_0=s_1$. The times $X$ stays in $s_1$ before jumping to $s_2$ are independent and exponentially distributed with parameter $\mu_1$. Likewise, the times $X$ stays in $s_2$ before jumping to $s_1$ are also independent and exponentially distributed, but with parameter $\mu_2$.

Let $N_t$ be the number of jumps that $X$ makes before time $t$. So if $T_i$ is the time point where $X$ jumps for the $i$-th time, then $N_t = \sum_{i=1}^\infty 1_{T_i <t}$.

My question is, what is a ($t$-dependent) bound for $\mathbb E a^{N_t}$ for a number $a>0$?

  • I think you need the distribution of $N_t$. Then you can get the expectation as $\sum_n a^n P(N_t = n)$. I suspect the distribution involves a modified Bessel function. I would try to compute it with some renewal equations. – kevinkayaks Jul 02 '21 at 12:45
  • I'm not familiar with renewal equations, but a quick google search does show that $N_t$ is a renewal process. Could you tell me, or point me to a source that shows, how renewal equations can be used to solve for the distribution of $N_t$? – user178563 Jul 02 '21 at 13:15
  • I suggest Weiss 1994 "Aspects and applications of the random walk". Basically you need to write the probability of $N$ events in $t$ in terms of the probability that $N-1$ events occurred at any earlier time, then solve the resulting integral equations. – kevinkayaks Jul 02 '21 at 13:27
  • Could you maybe elaborate a bit more? Let $f_t(n) = P(N_t = n)$. I can get relations such as $f_t(n) = (\dots)\, f_s(n-1)$ for $s<t$, but I don't see your integrals appearing. – user178563 Jul 04 '21 at 13:26

2 Answers


Some more elaboration!

Your problem is like a continuous time two-state random walk. The key references as far as I'm concerned are Weiss 1976 (Journal of Statistical Physics) and Weiss 1994 (Aspects and Applications of the Random Walk, a monograph).

I'll change notation a bit because this is what I scribbled down. It's not a full answer and it's likely not exactly correct but it may help. I'd personally have to think hard to solve this problem.

You have two states $1$ and $2$. At any $t$ you can be in either state. The probability that you're in state $i$ at time $t$ after $N$ transitions have happened is $P_i(N,t)$. This has to link to the probability that you were in the other state $j$ when $N-1$ transitions had happened.

There are sojourn time distributions in each state, $g_i(t) = \lambda_i \exp[-\lambda_i t]$. These represent the probability density of times spent in each state. There are also related distributions characterizing the probability that a sojourn in a state lasts at least as long as $t$, $G_i(t) = \int_t^\infty g_i(\tau)\, d\tau.$

First, you need to evaluate when the transitions between states occur. Let $\omega_i(t)$ represent the probability density that a transition out of state $i$ happens exactly at time $t$. Then you can write the two renewal equations $$ \omega_i(t) = \theta_i g_i(t) + \int_0^t d\tau\, \omega_j(\tau) g_i(t-\tau).$$ The $\theta_i$ are the initial probabilities to start in state $i$, with $\theta_1 + \theta_2 = 1.$ These equations say that if you're transitioning out of state $i$ at time $t$, either you started there and are leaving for the first time (first term), or you left state $j$ at some earlier time $\tau$, entered $i$, and are leaving it now (second term).
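For what it's worth, coupled renewal equations of this type can be solved numerically by marching along a time grid and approximating the convolution with a Riemann sum. A minimal sketch (my own discretization, not from Weiss), taking $\lambda_1 = 1$, $\lambda_2 = 2$ and the chain started in state $1$:

```python
import numpy as np

# Assumed rates and initial condition (start in state 1).
lam = (1.0, 2.0)
theta = (1.0, 0.0)
T, n = 5.0, 5000
dt = T / n
t = np.arange(n) * dt

# Sojourn-time densities g_i(t) = lambda_i * exp(-lambda_i * t).
g = [l * np.exp(-l * t) for l in lam]

# March forward in time: omega_i(t_k) depends only on omega_j at earlier
# grid points, so the Volterra convolution can be accumulated step by step.
omega = [np.zeros(n), np.zeros(n)]
for k in range(n):
    for i, j in ((0, 1), (1, 0)):
        conv = dt * np.dot(omega[j][:k], g[i][k:0:-1])
        omega[i][k] = theta[i] * g[i][k] + conv
```

For exponential sojourns and $\theta = (1, 0)$ the Laplace transforms close in rational form, $\hat\omega_1 = \hat g_1/(1-\hat g_1\hat g_2)$, which inverts to $\omega_1(t) = \frac{\lambda_1\lambda_2 + \lambda_1^2 e^{-(\lambda_1+\lambda_2)t}}{\lambda_1+\lambda_2}$; the grid solution approaches this as the step size shrinks.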

Using these transition time distributions, you can link the probabilities. I think this will be $$ P_i(N,t) = \int_0^t d\tau\, \omega_i(\tau)P_j(N-1,t-\tau).$$ This chain of equations says that the probability that $N$ transitions have occurred while the system sits in state $i$ is related to the probability that the last transition occurred back at time $\tau$ when the system was in state $j$.

The distribution you need is then the probability that the system has undergone $N$ transitions regardless of what state it's in: $P(N,t) = P_1(N,t) + P_2(N,t)$.

People typically solve such coupled renewal equations with Laplace transforms. I think this formulation is not exactly correct, but it should give you the general idea of how the integrals appear in renewal approaches. In particular, I think the $G_i$ should enter the problem. I would go to Weiss, or potentially Cox 1962 "Renewal Theory", and think hard about your specific problem.

Once you have $P(N,t)$ you can compute your desired expectation $\sum_N a^N P(N,t)$.
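As a sanity check on whatever closed form comes out of the renewal approach, the expectation $\sum_N a^N P(N,t)$ can also be estimated directly by simulating the chain. A sketch (function name and parameters are my own, not from any reference):

```python
import random

def mc_expectation(a, t, mu1, mu2, n_paths=100_000, seed=1):
    """Monte Carlo estimate of E[a^{N_t}] for the two-state chain
    started in state 1 (rate mu1 out of state 1, mu2 out of state 2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        clock, state, jumps = 0.0, 1, 0
        while True:
            # Draw the exponential sojourn in the current state.
            clock += rng.expovariate(mu1 if state == 1 else mu2)
            if clock >= t:
                break          # no further jumps before time t
            jumps += 1
            state = 3 - state  # toggle 1 <-> 2
        total += a ** jumps
    return total / n_paths
```

When $\mu_1 = \mu_2 = \mu$ the jumps form a Poisson process, so $\mathbb E\, a^{N_t} = e^{\mu t(a-1)}$, which makes a convenient test case.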

kevinkayaks

I did some digging and found what I needed as a lemma in an old paper. Maybe someone will have a use for this someday, so I will just give the reference. Looking at the proof of that lemma, it looks somewhat similar to Kevin's discussion, so please accept the bounty.

The lemma is more general than I need. Let $X(t)$ be a Markov process on $\{1,\dots,n\}$ with $Q$-matrix $Q = (q_{ij})_{ij}$. Then we have the corresponding measures $P_1,\dots, P_n$ with $P_i(X(0) = i) = 1$ on the space of sample paths of $X$. We let $N(t)$ be the number of jumps $X$ makes up to time $t$.

Lemma 1 (from [1]): Let $0\leq \alpha \leq -q_{ii} \leq \beta$ for all $i$. Then, for $m\geq 0$, we have $$ P_i\left[N(t) = m\right] \leq \frac{\left(\beta t\right)^m}{m!} e^{-\alpha t}, $$ for each $t\geq 0$ and $i=1,\dots,n$.

For my situation: summing the bound over $m$ gives $$ \mathbb E_i\, a^{N(t)} \leq e^{-\alpha t}\sum_{m\geq 0} \frac{(a\beta t)^m}{m!} = e^{(a\beta - \alpha)t}, $$ so $\mathbb E\, a^{N(t)}$ is bounded by $e^{rt}$ with $r = a\beta - \alpha$, where one can take $\alpha = \min(\mu_1,\mu_2)$ and $\beta = \max(\mu_1,\mu_2)$.
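To see the bound of Lemma 1 in action: for the two-state chain, $f_i(t) = \mathbb E\big[a^{N(t)}\mid X(0)=i\big]$ satisfies the linear system $f' = Q_a f$, where $Q_a$ is $Q$ with its off-diagonal entries multiplied by $a$ (condition on the first jump to see this). A small numerical check, with rates and $a$ chosen by me purely for illustration, comparing the exact expectation against $e^{(a\beta-\alpha)t}$:

```python
import numpy as np

# Illustrative parameters (my choice): rates of the two-state chain and a.
mu1, mu2, a = 1.0, 2.0, 1.5
alpha, beta = min(mu1, mu2), max(mu1, mu2)   # as in Lemma 1

# f_i(t) = E[a^{N(t)} | X(0)=i] solves f' = Qa f, where Qa is the
# Q-matrix with off-diagonal entries scaled by a (first-jump decomposition).
Qa = np.array([[-mu1, a * mu1],
               [a * mu2, -mu2]])

def expectation(t, steps=5000):
    """RK4 integration of f' = Qa f from f(0) = (1, 1)."""
    f = np.ones(2)
    h = t / steps
    for _ in range(steps):
        k1 = Qa @ f
        k2 = Qa @ (f + h / 2 * k1)
        k3 = Qa @ (f + h / 2 * k2)
        k4 = Qa @ (f + h * k3)
        f = f + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return f  # [E_1 a^N(t), E_2 a^N(t)]

def lemma_bound(t):
    return np.exp((a * beta - alpha) * t)
```

For these parameters the exact expectation grows like $e^{\lambda_+ t}$ with $\lambda_+ \approx 0.68$ (the top eigenvalue of $Q_a$), while the bound grows like $e^{2t}$, so the lemma is far from tight here but gives a clean, explicit rate.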

[1] Griego, Richard, and Reuben Hersh. "Theory of random evolutions with applications to partial differential equations." Transactions of the American Mathematical Society 156 (1971): 405-418.