In Chapter III, Section 6 of *Optimal Stopping and Free-Boundary Problems* (Peskir and Shiryaev), the authors rewrite an optimal stopping problem of the form $V=\sup_\tau\mathbb E\Big(M(X_\tau)+\int_0^\tau L(X_t)\,dt+\sup_{t\in[0,\tau]}K(X_t)\Big)$ in the form $V=\sup_\tau \mathbb E\,G(Z_\tau)$, where $Z_t=(I_t,X_t,S_t)$, $I_t=\int_0^t L(X_s)\,ds$ (the integral process), and $S_t=\sup_{s\in[0,t]}X_s$. They claim that $Z_t$ is Markovian (under general assumptions). I wonder what assumptions are needed for this to be true. In particular, $I_t=\int_0^t B_s\,ds$ on its own seems not to be Markov, by "Is the definite time integral of a Brownian Motion a Markov process and a martingale?"
1 Answer
It looks like the only conditions needed are that $X$ is a (càdlàg) Markov process and that $L$ is a measurable (deterministic) function. I'll give an informal explanation below.
The pair $(X_t, S_t)$ is Markov because $S_T$ depends only on $S_t$ and the path of $X$ on $[t,T]$, and knowing the path of $X$ (or $S$) on $[0,t]$ gives no more information about the path of $X$ on $[t,T]$ than knowing $X_t$ does (thanks to the Markov property of $X$).
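The key identity here, $S_T = \max\big(S_t,\ \sup_{s\in[t,T]} X_s\big)$, holds pathwise, so it can be checked numerically. A minimal sketch (my own illustration, not from the book), using a simulated Brownian path for $X$:

```python
import numpy as np

# Sketch: verify pathwise that the running maximum satisfies
# S_T = max(S_t, sup_{s in [t,T]} X_s), so S_T needs only S_t and the
# future path of X -- not the path of X before time t.
rng = np.random.default_rng(0)
n, t_idx = 1000, 400                        # grid points; index of time t
dX = rng.normal(scale=np.sqrt(1.0 / n), size=n)
X = np.concatenate([[0.0], np.cumsum(dX)])  # Brownian path on [0, 1]
S = np.maximum.accumulate(X)                # running maximum S_s

S_t = S[t_idx]
future_sup = X[t_idx:].max()                # sup of X over [t, T]
assert np.isclose(S[-1], max(S_t, future_sup))
```

The same identity holds for any càdlàg path, which is why no assumption beyond the Markov property of $X$ is needed for the pair.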
We can see that the pair $(I_t,X_t)$ is Markov by looking at the dynamics: $dI_t = L(X_t)\,dt$, while $X$ evolves according to its own (Markov) dynamics. Since the evolution of the pair $(I_t,X_t)$ depends only on the current value of $(I_t,X_t)$ (in fact only on $X_t$), the pair is Markov.
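To make the "dynamics depend only on the current state" point concrete, here is a sketch of an Euler step for the pair, assuming (purely for illustration) that $X$ solves an SDE; the particular choices of $L$, drift, and diffusion below are arbitrary:

```python
import numpy as np

# Sketch: one Euler step for the pair (I_t, X_t) with dI = L(X) dt and
# X an SDE dX = mu(X) dt + sigma(X) dW. The transition function reads
# only the current state (I, X) -- which is exactly the Markov property.
def euler_step(I, X, dt, dW, L, mu, sigma):
    return I + L(X) * dt, X + mu(X) * dt + sigma(X) * dW

rng = np.random.default_rng(1)
L = lambda x: x**2       # any measurable (deterministic) L; here L >= 0
mu = lambda x: -x        # illustrative Ornstein-Uhlenbeck drift
sigma = lambda x: 1.0
I, X, dt = 0.0, 0.5, 1e-3
for _ in range(1000):
    I, X = euler_step(I, X, dt, rng.normal(scale=np.sqrt(dt)), L, mu, sigma)
# I now approximates the integral of L(X_s) ds over [0, 1]
```

Note that `euler_step` never looks at the past path, only at the current pair.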
The issue with the example you gave of $J_t := \int_0^t B_s ds$ is that the dynamics $dJ_t = B_t dt$ do not depend only on $J_t$, so $J_t$ on its own is not Markov. However, the pair $(J_t,B_t)$ is Markov. The intuition for $J$ not being Markov is that if you just know $J_t$, you don't know the value of $B_t$. But if you knew $J_s$ for $s \in [0,t]$, you could deduce the value of $B_t$ by, for example, differentiating $J_t$. You would expect $J$ to increase in the near future if $B_t > 0$ and to decrease if $B_t < 0$, so knowing the full path up to time $t$ does give you more information about future values of $J$ than just knowing the current value of $J_t$.
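This intuition can be checked by simulation: conditional on $B_t$, the expected future increment is $\mathbb E[J_{t+h}-J_t \mid B_t] = h\,B_t$, so paths with $B_t>0$ drift up on average while $J_t$ alone cannot distinguish them. A minimal Monte Carlo sketch (my own illustration):

```python
import numpy as np

# Sketch: the future increment of J_t = int_0^t B_s ds depends on B_t,
# not just on J_t. Since E[J_{t+h} - J_t | B_t] = h * B_t, paths with
# B_t > 0 should drift up on average and paths with B_t < 0 down.
rng = np.random.default_rng(2)
n_paths, n_steps, t_steps = 20000, 200, 100   # t = 0.5, T = 1.0
dt = 1.0 / n_steps
dB = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)                     # Brownian paths
J = np.cumsum(B * dt, axis=1)                 # J_t = int_0^t B_s ds

B_t = B[:, t_steps - 1]
dJ_future = J[:, -1] - J[:, t_steps - 1]      # J_T - J_t
up = dJ_future[B_t > 0].mean()                # average drift when B_t > 0
down = dJ_future[B_t < 0].mean()              # average drift when B_t < 0
assert up > 0 > down
```

Knowing the full path (hence $B_t$) therefore predicts the sign of the future drift of $J$, while knowing $J_t$ alone does not; this is exactly the failure of the Markov property for $J$ on its own.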
