
For a finite Markov chain with one or more absorbing states there are well-known methods for determining either the probability of ending up in a specific absorbing state or the expected number of steps until absorption in any absorbing state. However, I've not found a way to determine the expected number of steps to reach a specific absorbing state, conditional on ending up there.

For instance, in the classic Drunkard's Walk, the drunkard can be absorbed at either home or the bar. What I would want to determine is how many steps it takes, on average, for a drunkard who eventually gets home to actually get there. Ideally the answer could be presented in a form like "There's an X% chance the drunkard will end up at home, taking on average Y steps, and a Z% chance the drunkard will end up at the bar after an average of W steps," or equivalent.

I've not yet found a good way to find the solution; I'm at a bit of a loss as to where to start, and if the answer exists out there, I've not found the right Google search terms to land me on it.

Thanks!

  • I think your stationary vector should give you the distribution over absorbing states... the number of steps is roughly inversely proportional to the “eigenvalue gap” between the largest and second-largest eigenvalues – mm8511 Jul 14 '18 at 23:52
  • If you look up “eigenvalue gap Markov chain” on Google, this should do the trick. – mm8511 Jul 14 '18 at 23:53

2 Answers


Once you have the probability at each point for ending up in your selected absorbing state, you can compute new transition probabilities everywhere that describe the behavior of exactly those drunkards who will eventually end up in that state.

Then compute the expected time to absorption for the new chain.
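As a minimal sketch of this first method (using the classic symmetric Drunkard's Walk as the example, with the bar at state $0$ and home at state $N$; the example and state labels are my own): compute the absorption probabilities $h_i$, reweight the transient transitions by $h_j/h_i$ (this reweighting is the Doob $h$-transform), and solve the usual linear system for the expected absorption time of the conditioned chain.

```python
import numpy as np

# Symmetric Drunkard's Walk on 0..N: state 0 = bar, state N = home,
# each interior state steps left or right with probability 1/2.
N = 5
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

trans = list(range(1, N))                      # transient states
Q = P[np.ix_(trans, trans)]                    # transient-to-transient block
I = np.eye(len(trans))

# h[i] = P(absorbed at home | start at i), from (I - Q) h = (one-step probs to home).
h = np.zeros(N + 1)
h[N] = 1.0
h[1:N] = np.linalg.solve(I - Q, P[np.ix_(trans, [N])].ravel())

# Doob h-transform: among the drunkards who make it home, the transition
# probabilities become q~_ij = q_ij * h[j] / h[i].
ht = h[trans]
Q_home = Q * ht[None, :] / ht[:, None]

# Expected steps to absorption in the conditioned chain: (I - Q~) t = 1.
t = np.linalg.solve(I - Q_home, np.ones(len(trans)))
print(f"P(home | start 1) = {h[1]:.3f}, E[steps | home, start 1] = {t[0]:.3f}")
```

For this walk the numbers match the known closed form: starting at state $1$ the drunkard reaches home with probability $1/5$, and those who do take $8$ steps on average.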


Alternatively, if there is one starting state you're interested in, first set up a system of equations that computes, for each state, the expected number of times one will encounter that state before being absorbed. With these numbers you can then find the coefficients for a system of equations that finds the expected number of steps taken so far by a uniformly selected drunkard who has just reached each specific state.

This gives you the time-to-reach for all states at once, at the expense of only working for one starting state (or probability distribution for the starting state).
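One concrete way to carry out this visit-count bookkeeping (again using the symmetric Drunkard's Walk with bar at $0$ and home at $N$ as a stand-in example): the fundamental matrix $B = (I - Q)^{-1}$ gives the expected visit counts, and since every step before absorption is a visit to some transient state, $E[T \cdot 1_{\text{home}}] = \sum_j B_{sj}\, h_j$, where $h_j$ is the probability of reaching home from $j$. Dividing by $h_s$ gives the conditional mean directly.

```python
import numpy as np

# Symmetric walk on states 0..N: 0 = bar (absorbing), N = home (absorbing).
N = 5
start = 1
Q = np.zeros((N - 1, N - 1))                   # transient states 1..N-1
for i in range(N - 1):
    if i > 0:
        Q[i, i - 1] = 0.5
    if i < N - 2:
        Q[i, i + 1] = 0.5

B = np.linalg.inv(np.eye(N - 1) - Q)           # B[s-1, j-1] = expected visits to j
h = np.arange(1, N) / N                        # P(home | at j) = j/N for this walk
p_home = h[start - 1]

# Each unit of time before absorption is a visit to some transient state, so
# E[T * 1{home}] = sum_j (expected visits to j from start) * P(home | at j).
cond_mean = B[start - 1] @ h / p_home
print(f"P(home) = {p_home:.3f}, E[steps | home] = {cond_mean:.3f}")
```

This agrees with the $h$-transform approach: from state $1$, $P(\text{home}) = 0.2$ and the conditional mean is $8$ steps.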

  • I'm really interested in this, but I can see how to carry out the computations you describe. I'm going to post a question with a concrete example, and I hope you'll indicate how to go about solving it. – saulspatz Jul 15 '18 at 00:37
  • I posted my question at https://math.stackexchange.com/questions/2852105/expected-time-till-absorption-in-specific-state-of-a-markov-chain I hope you'll have a look. I just saw there was a typo in my previous comment. I meant to say, "I can't see how..." of course. – saulspatz Jul 15 '18 at 01:20
  • Very belated update: I wrote a detailed explanation of the first method in this answer at https://math.stackexchange.com/a/4600370/1127272, including examples and sample code. – David Clyde Dec 17 '22 at 02:40

Purge the “bad” outcomes from the system: let $S_+$ be the set of “good” absorbing states and $S_-$ the set of states that don’t communicate with $S_+$. Delete $S_-$ and update the remaining transition probabilities by conditioning them on not entering $S_-$ in one step: $$p_{ij} \to {p_{ij} \over 1-\sum_{k\in S_-}p_{ik}}.$$ You need to delete more than just the “bad” absorbing states because deleting only the latter can create new recurrent states in which the system can get trapped, effectively replacing some of the deleted absorbing states with new ones. These dead-end branches also need to be pruned. The reduced system can then be analyzed as usual to find the conditional expected absorption times. (Strictly speaking, the one-step renormalization above is exact only when all surviving successors of a state are equally likely to eventually reach $S_+$; in general each step should be weighted by those hitting probabilities.)
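A sketch of this purge in code (using the symmetric Drunkard's Walk as a stand-in example; the helper function and its name are mine). Here each surviving step is weighted by the probability of eventually reaching $S_+$, which reduces to the one-step renormalization formula whenever all surviving successors of a state reach $S_+$ with the same probability:

```python
import numpy as np

def condition_on_good(P, good, bad):
    """Delete the states in `bad` (those that cannot lead to `good`) and
    condition the remaining chain on absorption in `good`, weighting each
    step by the hitting probability h of the good set.  Assumes the pruning
    already removed every state with h = 0, as described in the answer."""
    n = P.shape[0]
    keep = [s for s in range(n) if s not in bad]
    trans = [s for s in keep if s not in good]
    Q = P[np.ix_(trans, trans)]
    R = P[np.ix_(trans, list(good))]
    # h solves (I - Q) h = (one-step probabilities into the good set).
    h = np.linalg.solve(np.eye(len(trans)) - Q, R.sum(axis=1))
    Pc = Q * h[None, :] / h[:, None]           # conditioned transient block
    # Conditional expected absorption times: (I - Pc) t = 1.
    t = np.linalg.solve(np.eye(len(trans)) - Pc, np.ones(len(trans)))
    return dict(zip(trans, h)), dict(zip(trans, t))

# Drunkard's Walk on 0..5: bar = {0} is purged, home = {5} is the good set.
N = 5
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

h, t = condition_on_good(P, good={N}, bad={0})
print(h[1], t[1])
```

In this example there are no dead-end branches to prune beyond the bar itself, so `bad = {0}` suffices; in a general chain `bad` would also contain every transient state from which $S_+$ is unreachable.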

– amd