
I'm working on a problem concerning censoring of transitions in a Markov chain. For example, take a Markov chain that models a counter: it goes up or down but never stays in place. A possible censoring would be to observe only the transitions where the new number is prime, or only those transitions where the new number is even.

According to what I'm told, this results in a new Markov chain, but I can't really grasp how this changes the problem that the new Markov chain could be modeling. Do you still take into account that you somehow got into a state that allows that kind of transition, or can I completely ignore this and only consider the leftover transitions?

Wim

2 Answers


If the state space of the Markov chain is $S$ and you observe the chain only when it is in $U\subseteq S$, then the result is still a Markov chain but with different transition probabilities.

Let $(X_n)_{n\geqslant0}$ denote the original Markov chain and $T=\inf\{n\geqslant1\mid X_n\in U\}$. Then the transition probabilities of the new Markov chain are such that, for every $x$ and $y$ in $U$, $$ Q(x,y)=P(X_T=y\mid X_0=x). $$ In general, the new transition probabilities $Q(x,y)$ are a complicated functional of the transition probabilities of $(X_n)_{n\geqslant0}$ and of the disposition of $U$ in $S$.
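If it helps to see this definition in action, here is a small simulation sketch (the $4$-state chain, the set $U$, and all names in it are made up purely for illustration): it runs the original chain, keeps only its visits to $U$, and estimates $Q$ from consecutive pairs of the observed subsequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-state chain (transition matrix made up for illustration); observe it only on U = {0, 2}.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.3, 0.0, 0.3, 0.4],
              [0.2, 0.4, 0.0, 0.4],
              [0.0, 0.5, 0.5, 0.0]])
U = [0, 2]

def censored_path(x0, n_steps):
    """Run the original chain and record it only at the times it visits U."""
    x, observed = x0, []
    for _ in range(n_steps):
        x = rng.choice(4, p=P[x])
        if x in U:
            observed.append(int(x))
    return observed

# Estimate Q(x, y) = P(X_T = y | X_0 = x) from consecutive observed visits.
obs = censored_path(0, 200_000)
counts = np.zeros((4, 4))
for a, b in zip(obs, obs[1:]):
    counts[a, b] += 1
Q_hat = counts[np.ix_(U, U)]
Q_hat = Q_hat / Q_hat.sum(axis=1, keepdims=True)
print(Q_hat)  # empirical transition matrix of the censored chain on U
```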

Here is an example. Assume $(X_n)_{n\geqslant0}$ is the symmetric $\pm1$ random walk on $\mathbb Z$, with transition probabilities $P_x(X_1=x+1)=P_x(X_1=x-1)=\frac12$ for every $x$ in $\mathbb Z$. Let $U\subseteq\mathbb Z$ with $U=\{x_k\mid k\in\mathbb Z\}$ and $x_k<x_{k+1}$ for every $k$. Then, starting from $x_k$, the new Markov chain can only jump to a state $x_j$ such that $|j-k|\leqslant1$, and the transition probabilities are $$ Q(x_k,x_{k-1})=\frac1{2(x_k-x_{k-1})},\quad Q(x_k,x_{k+1})=\frac1{2(x_{k+1}-x_{k})}, $$ and $$ Q(x_k,x_{k})=1-\frac1{2(x_k-x_{k-1})}-\frac1{2(x_{k+1}-x_{k})}. $$ Hence, indexing the states by $k$, the new chain behaves like a random walk on $\mathbb Z$ with jumps from $k$ to $k-1$, $k$ or $k+1$, and, depending on the gaps $x_{k+1}-x_k$, essentially any transition probabilities of this form can arise.
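For the record, these values come from the usual gambler's-ruin computation: starting from $x_k$, the walk first steps to $x_k+1$ with probability $\frac12$, and from $x_k+1$ it hits $x_{k+1}$ before $x_k$ with probability $\frac1{x_{k+1}-x_k}$; if instead the first step is to $x_k-1$, the chain re-enters $U$ at $x_k$ or $x_{k-1}$ and cannot contribute. Hence $$ Q(x_k,x_{k+1})=\frac12\cdot\frac1{x_{k+1}-x_{k}}, $$ and symmetrically for $Q(x_k,x_{k-1})$.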

Did

As Didier points out, the transition probabilities for the new chain are complicated. But for a Markov chain on a finite state space, we can let linear algebra do the work. First partition the transition matrix $P$ as shown:

$$P= \matrix{&\hskip-15pt \matrix{&\small U&\small U^c}\cr\matrix{\small U\cr\small U^c}&\hskip-10pt\pmatrix{A&B\cr C&D}}.$$

That is, the matrix $A$ has the transitions from $U$ to $U$, the matrix $B$ has the transitions from $U$ to $U^c$, etc. The transition matrix for the new chain with state space $U$ is $$P^U=A+B(I-D)^{-1}C.$$ Here $A$ accounts for the direct transitions inside $U$, while the term $BD^nC$ accounts for the excursions that leave $U$, visit $U^c$ exactly $n+1$ times, and then return to $U$; summing the geometric series $\sum_{n\geqslant0}D^n=(I-D)^{-1}$ gives the formula.

The inverse of $I-D$ exists whenever $U$ can be reached from every state in $U^c$.
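In case a numerical check is useful, here is a minimal NumPy sketch of this computation (the $4$-state matrix $P$ and the choice of $U$ are made up for illustration):

```python
import numpy as np

# Toy 4-state transition matrix (made up for illustration); observe only U = {0, 2}.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.3, 0.0, 0.3, 0.4],
              [0.2, 0.4, 0.0, 0.4],
              [0.0, 0.5, 0.5, 0.0]])
U, Uc = [0, 2], [1, 3]

# Partition P into the blocks A, B, C, D described above.
A = P[np.ix_(U,  U)]
B = P[np.ix_(U,  Uc)]
C = P[np.ix_(Uc, U)]
D = P[np.ix_(Uc, Uc)]

# Transition matrix of the censored chain: P^U = A + B (I - D)^{-1} C.
P_U = A + B @ np.linalg.solve(np.eye(len(Uc)) - D, C)

print(P_U)
print(P_U.sum(axis=1))  # each row should sum to 1 when U is reachable from U^c
```

Using `np.linalg.solve` rather than forming $(I-D)^{-1}$ explicitly is only a numerical nicety; for an example this small either way works.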