
I'm dealing with a problem in my thesis that involves proving the boundedness of a Stochastic Process. This question is related to another one I did recently, but the results found in the other one sadly didn't help me sufficiently.

Let $\text{d}X_t= f(t,X_t) \text{d}t +\text{d}W_t$ be a stochastic process defined for the times $t\ge0,$ such that $X_0=0$ and $f(t,x)<a<0$ for all $t\ge0,x>k$ for a certain constant $k$.

Prove that $$\mathbb{P}[\exists C>0: X_t<C \ \forall t\ge 0]=1.$$

In other words, I wish to show that if the function $f$ is negative when $X_t$ goes above a certain value $k$, then the stochastic process is bounded from above, because the drift part is going to be negative.

I know that $Y_t=at+W_t$ is bounded from above if $a<0$, but what I can't manage to deal with is the fact that we cannot control the moments when $X_t$ goes above $k$: every time it goes above $k$ it might reach a higher peak since the bound on $Y_t$ is not $\omega$-wise uniform, and the sequence of those peaks might be unbounded.

Here's a simulation of the SDE $\text{d}Y_t=-Y_t\text{d}t+\text{d}W_t$, which seems to fit these hypotheses: even over a rather long simulated time interval, the path appears bounded.
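For reference, here is a minimal Euler–Maruyama sketch of such a simulation (the function name and parameters are my own choices, not from the original post). It simulates many independent paths of $\text{d}Y_t=-Y_t\text{d}t+\text{d}W_t$ and reports the empirical law at the final time:

```python
import numpy as np

def euler_maruyama_ou(T=5.0, dt=1e-3, n_paths=10_000, seed=0):
    """Simulate dY_t = -Y_t dt + dW_t with Y_0 = 0 by Euler-Maruyama;
    return the n_paths simulated values of Y_T."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n_paths)
    for _ in range(int(T / dt)):
        # one Euler-Maruyama step: drift -y dt plus Gaussian increment
        y += -y * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return y

y_T = euler_maruyama_ou()
# sanity check: the stationary law of this OU process is N(0, 1/2)
print(y_T.mean(), y_T.var())
```

Each individual path looks bounded over any fixed horizon, which is exactly why such a simulation can be misleading about the behaviour as $t\to\infty$.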

EDIT: my SDE, in particular, is $$\text{d}X_t=X_t\frac{D(X_t)-S}{D(X_t)+S}\text{d}t+X_t\sigma\text{d}W_t,$$ with $D$ being $D(x)=D_0\exp(-\alpha x),$ for some $D_0>S>0$, and $X_0>0$. This SDE is equivalent to $$\text{d}Y_t=\left(\frac{D(\exp(Y_t))-S}{D(\exp(Y_t))+S}-\frac{\sigma^2}2\right)\text{d}t+\sigma\text{d}W_t,$$ thanks to the substitution $Y_t=\log(X_t)$: this second equation is the one I was referring to. Actually, $S$ is not a constant but a bounded function, which breaks the Markovianity, but I think that if the result is not true for constant $S$ then it's not true for bounded $S$ either.

Any help would be immensely appreciated.

  • Since I know how much you appreciate answers of mine: you can reduce this more general setting to the one we discussed already by using a comparison principle. You'll find it e.g. in Ikeda, Watanabe; SDEs and Diffusion Processes. – Tobsn Nov 08 '20 at 20:00
  • @Tobsn: Can you also give me some references (lecture notes or books) for that? I also want to revise a little bit. Thanks – Paresseux Nguyen Nov 09 '20 at 01:16
  • @Tobsn thanks a lot. However, how could you use it here? How to deal with the positive $f$ for $x<k$? – Riccardo Ceccon Nov 09 '20 at 07:42
  • Hm, that's an issue which I overlooked indeed. The comparison principle would certainly work in case $f\le a<0$ for all $x$, but as it stands it's more delicate. That doesn't mean it cannot be of use. Whereas the argument of Paresseux Nguyen below shows that it may not be true in general, that doesn't mean it's always wrong. In particular, in your examples the drift becomes heavily non-negative, whereas in the proof below a lower bound on the drift is assumed. Regarding your last edits, what's the sign of $S$? And is it really $D(\exp)$, i.e. a sort of double exponential? – Tobsn Nov 09 '20 at 09:10
  • Sorry, I've edited the text: $S$ is positive, so that for high values of $X$ the function $\frac{D(X)-S}{D(X)+S}$ is negative, whereas for low values of $X$ it is positive. If the equation were without the stochastic component, it would converge towards $S$. I'm quite sure about the double exponential, as the equivalent version comes from the substitution $Y_t=\log(X_t).$ :( Also, I don't really see how the proof below can be "broken", because my drift seems to be "boundable" from below, as when $X\to\infty$ we have $\frac{D(X)-S}{D(X)+S}\to-1$. – Riccardo Ceccon Nov 09 '20 at 10:08
  • Well, it depends on how free you are in choosing the parameters. To me your second equation suggests that one can always choose $\sigma$ large enough to obtain that the whole drift is negative and upper bounded by some negative constant. And then you are in the situation where a comparison principle works. – Tobsn Nov 09 '20 at 10:44
  • This is totally true. Actually, if you take $\sigma$ that large the solution goes incredibly fast to $-\infty$ (I checked experimentally). So thanks for that! Sadly, I really hoped to get something also for smaller $\sigma$. Also, do you know what the theoretical tools are to show some kind of convergence (if possible) for $\sigma\to0$ to the deterministic solution of the equation without Brownian motion? – Riccardo Ceccon Nov 09 '20 at 11:02
  • Most common tool for even quantifying small noise asymptotics would be large deviation theory, e.g. Freidlin-Wentzell theorem. – Tobsn Nov 09 '20 at 13:18
  • @RiccardoCeccon: it seems there is a much easier way to have your desired convergence when $\sigma \rightarrow 0$ – Paresseux Nguyen Nov 10 '20 at 05:40
  • @ParesseuxNguyen what would it be? I'm sorry for answering so late. What's the theoretical setup that shows this kind of convergence? – Riccardo Ceccon Nov 10 '20 at 18:46
  • @RiccardoCeccon: the convergence in your case is really strong; it implies a wide range of modes of convergence for stochastic processes. I'll post it in another post. – Paresseux Nguyen Nov 10 '20 at 20:55
  • Sure, if it's just about proving convergence there are easier ways. The point of establishing a large deviation result is that it will also prove that this convergence happens on exponential scales. If small noise asymptotics do play a role for your thesis, then LDPs are basically compulsory. – Tobsn Nov 11 '20 at 09:25

4 Answers


This might not be good news for you.
I don't think your claim holds in general; the reason your simulation seems to confirm your hunch might be the limited simulated time interval.
The main reason I came to this conclusion is the Markov nature of your SDE (when there is no $t$ in $f$).
Now, let us examine your setting together to see whether I have made any mistake.

Some assumptions I made on $f$:

  • (i) $f$ is time stationary, that is, $f(t,x)=f(x)$
  • (ii) $f(0)= 0$
  • (iii) $f(x) \ge 0$ if $x \le 0$, and $f(x) \le 0$ otherwise
  • (iv) $f$ is bounded from below by a value $b<0$, that is, $f(x) \ge b$

Let $\tau_1, \gamma_M$ denote:

  • $ \tau_1 := \inf \{ t \ge 1: X_t=0\}$ , the first time after $1$ that $X$ revisits $0$.
  • $ \gamma_M:= \inf \{ t \ge 0: X_t= M\}$ for $M>0$, the first time $X$ hits level $M$.

So the desired conclusion is that: $$\lim_{M \rightarrow +\infty} \mathbb{P}\left( \gamma_M < \infty \right) =0$$

What I will show is that, under these assumptions: $$ \mathbb{P}\left( \gamma_M < \infty \right) =1 \quad \forall M>0$$

Heuristically, under assumption (iii), $f$ acts as a restoring force that drags $X$ back to $0$, and it is not hard to prove that: $$ \mathbb{P}( \tau_1 < \infty)=1 $$ Then, we have: $$\mathbb{P}\left( \gamma_M< \infty \right) =\mathbb{P}\left( \gamma_M< \tau_1 \right)+\mathbb{P}\left( \tau_1 \le \gamma_M<\infty \right)$$ $$ \underbrace{=}_{ \text{strong Markov property}} \mathbb{P}\left( \gamma_M< \tau_1 \right)+\mathbb{P}\left( \tau_1 \le \gamma_M \right)\mathbb{P}\left( \gamma_M <\infty \right)$$ Since $\mathbb{P}(\tau_1<\infty)=1$ gives $\mathbb{P}\left( \tau_1 \le \gamma_M \right) = 1-\mathbb{P}\left( \gamma_M< \tau_1 \right)$, this is equivalent to: $$\left[ \mathbb{P}\left( \gamma_M< \infty \right) -1 \right]\mathbb{P}\left( \gamma_M< \tau_1 \right)=0$$ $$\Leftrightarrow \mathbb{P}\left( \gamma_M< \infty \right)=1$$ because the fourth assumption gives us a clear reason why $\mathbb{P}\left( \gamma_M< \tau_1 \right) >0$. Indeed, we have: $$\mathbb{P}\left( \gamma_M< \tau_1 \right) \ge \mathbb{P}\left( \gamma_M< 1 \right) \ge \mathbb{P}\left( X_1 > M \right) \ge \mathbb{P}\left( W_1+b > M \right) >0$$
**QED**
**Discussion**: So more than just control over the negativity of $f$ is needed for your desired result to hold.
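This hitting-time argument can be illustrated numerically. The drift $f(x)=-\tanh(x)$ below is a hypothetical example of mine (not from the post) satisfying assumptions (i)–(iv) with $b=-1$; over a long horizon the running maximum keeps growing, consistent with $\mathbb{P}(\gamma_M<\infty)=1$ for every $M$:

```python
import numpy as np

def f(x):
    # hypothetical drift satisfying (i)-(iv): f(0)=0, sign condition, f >= b = -1
    return -np.tanh(x)

rng = np.random.default_rng(1)
dt, n_steps = 1e-2, 200_000              # time horizon T = 2000
dW = np.sqrt(dt) * rng.standard_normal(n_steps)
path = np.empty(n_steps)
x = 0.0
for i in range(n_steps):
    # Euler-Maruyama step for dX = f(X) dt + dW
    x += f(x) * dt + dW[i]
    path[i] = x
running_max = np.maximum.accumulate(path)
# the maximum over [0, T] keeps exceeding the maximum over shorter horizons
print(running_max[n_steps // 100], running_max[-1])
```

This is only an illustration for one drift, of course, not a substitute for the proof above.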

  • Thanks a lot! The problem is: from this proof, I don't see any modifications that can be done to the SDE in order to make the result true. Like, even if the negative part was way stronger, it would work equally, as much as if the positive part was weaker. Do you agree with this? – Riccardo Ceccon Nov 09 '20 at 07:55
  • I've edited the question, if you are further interested. – Riccardo Ceccon Nov 09 '20 at 08:05
  • At this stage, any modifications that made the conclusion true could be interesting. – Riccardo Ceccon Nov 09 '20 at 08:41
  • @RiccardoCeccon: In general, as you and Tobsn have discussed, there are many ways to break my results, but given your real problem, they seem to be not much of interest now. One thing I might try is to have the zeros $x_t$ of the equation $f(t,x)=0$ converge to $-\infty$ when $t$ gets large. However, it seems that manipulation may lead to some trivial results. – Paresseux Nguyen Nov 09 '20 at 12:40
  • Here I am. Thanks again for all your help - I wish I could green-flag all of your answers! Potentially, I may ask you some more advice, but probably it will happen in another question. :) – Riccardo Ceccon Nov 15 '20 at 16:38
  • Thank you, I really appreciate that. – Paresseux Nguyen Nov 15 '20 at 16:47

This post is meant to explain why your toy example also fails, and why I think your simulation did not give a good illustration here.
Your toy example is the Ornstein–Uhlenbeck process (me too, I had a little trouble solving it until I realized, while writing this answer, how simple it is), with solution: $$ Y_t= e^{-t} \underbrace{ \int_{0}^t e^s\,dW_s}_{=: A_t}$$

Moreover, $A_t$ is just a time-changed Brownian motion: since its quadratic variation is $\int_0^t e^{2s}\,ds = \frac{e^{2t}-1}{2}$, there is a Brownian motion $B$ such that: $$ (A_t)= \left( B_{(e^{2t}-1)/2}\right)$$ and clearly, $(Y_t)$ is a.s. unbounded (e.g. by the law of the iterated logarithm applied to $B$).
Remark: the same manipulation works for a suitable SDE of the form: $$dY_t= -g(t)Y_tdt+dW_t$$ or even $$dY_t= -g(t,Y_t)Y_tdt+dW_t$$ under some conditions on $g$.
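A quick Monte Carlo sanity check of the time-change clock (my own sketch, not part of the answer): by the Itô isometry, $\mathrm{Var}(A_t)=\int_0^t e^{2s}\,ds=\frac{e^{2t}-1}{2}$, which a discretized stochastic integral reproduces:

```python
import numpy as np

rng = np.random.default_rng(0)
t, n_grid, n_paths = 1.0, 200, 50_000
dt = t / n_grid
s_mid = (np.arange(n_grid) + 0.5) * dt        # midpoint of each time step
# Brownian increments for all paths at once
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_grid))
# A_t = int_0^t e^s dW_s, discretized with the midpoint value of e^s
A_t = (np.exp(s_mid) * dW).sum(axis=1)
# Ito isometry: Var(A_1) = (e^2 - 1)/2
print(A_t.var(), (np.exp(2 * t) - 1) / 2)
```

The sample variance matches the clock $(e^{2t}-1)/2$ up to Monte Carlo error.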

@Mods: since this topic has become more of a discussion, I would like to break my long comment into multiple posts for readability. Please kindly understand.


So, let's come back to our initial equation, but with some modifications to suit our analysis of the impact of the volatility:
$$ dX_t = \left( f(t,X_t)-\frac{\sigma^2}{2} \right)dt +\sigma dW_t $$ with initial condition $X_0=x_0$.
Here are some additional assumptions we make on $f$ (which are also confirmed by our real $f$):

  • $f$ is bounded (1)
  • $f$ is decreasing in the second variable (2)
  • $f$ is locally Lipschitz (3)

Let $\tilde{X}$ denote the solution of our SDE when there is no volatility, that is, $\tilde{X}$ is the solution of the following ODE: $$d\tilde{X}_t = f(t,\tilde{X}_t)\, dt$$ with initial value $\tilde{X}_0=x_0$.
The local Lipschitz condition (3) and the boundedness of $f$ (1) guarantee the existence and uniqueness of $\tilde{X}$.
By Itô's formula, we have: $$d(X_t-\tilde{X}_t)^2=2(X_t-\tilde{X}_t)\left[ f(t,X_t)- f(t,\tilde{X}_t) \right]dt + \sigma^2 \left[ 1-\left( X_t-\tilde{X}_t\right)\right]dt+\underbrace{2\sigma (X_t-\tilde{X}_t) dW_t}_{=:dM_t}$$

Let's have some simple analysis:

  • $(X_t)$ is bounded in $L^2$ (as a direct consequence of the boundedness of $f$)
  • $(M_t)$ is a local martingale and, in fact, even an $L^2$ martingale (thanks to the $L^2$ boundedness of $(X_t)$)
  • The first coefficient of $dt$ on the RHS of the above equation is nonpositive (due to the monotonicity of $f$). So we obtain the following inequality for all $t \ge 0$:

$$g(t):= \mathbb{E}\left( (X_t-\tilde{X}_t)^2 \right) \le \sigma^2t- \int_{0}^t \sigma^2 \mathbb{E}(X_s-\tilde{X}_s)ds$$

Thus, since $|\mathbb{E}(X_s-\tilde{X}_s)| \le \frac{1+g(s)}{2}$, $$ 0 \le g(t) \le 2\sigma^2 t +\int_{0}^t \sigma^2 g(s)ds $$ So, by Grönwall's inequality, $$ 0 \le g(t) \le 2\sigma^2 t e^{\sigma^2t}$$ that is: $$ \mathbb{E}\left( X^{\sigma}_t-\tilde{X}_t\right)^2 \le 2\sigma^2 t e^{\sigma^2t}$$
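As a numerical sanity check of this bound (my own sketch, with the hypothetical bounded decreasing drift $f(x)=-\tanh(x)$ and $x_0=0$, so that $\tilde{X}\equiv 0$), the simulated mean-square error indeed sits below $2\sigma^2 t e^{\sigma^2 t}$:

```python
import numpy as np

def f(x):
    # hypothetical drift: bounded, decreasing in x, Lipschitz
    return -np.tanh(x)

sigma, T, dt, n_paths = 0.5, 1.0, 1e-3, 20_000
rng = np.random.default_rng(2)
x = np.zeros(n_paths)   # Euler-Maruyama for dX = (f(X) - sigma^2/2) dt + sigma dW
x_det = 0.0             # Euler for the ODE dX~ = f(X~) dt (here X~ stays at 0)
for _ in range(int(T / dt)):
    x += (f(x) - sigma**2 / 2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    x_det += f(x_det) * dt
mse = float(np.mean((x - x_det) ** 2))
bound = 2 * sigma**2 * T * np.exp(sigma**2 * T)
print(mse, bound)       # the empirical mse should be below the Gronwall bound
```

The bound is far from tight here, which is expected: Grönwall-type estimates trade sharpness for generality.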

The last inequality implies most of the convergences we are interested in:

  • Strong ($L^2$) convergence, in the Euler–Maruyama sense.
  • (As a consequence) finite-dimensional convergence (in law and in $L^2$).
  • Convergence in the weak topology of continuous processes (taking values in $\mathcal{C}[0,T]$); tightness comes for free thanks to the boundedness of $f$.

Discussion:

  • The local Lipschitz condition is unnecessary, as existence and uniqueness can be derived by the same method (the decreasing monotonicity of $f$ is what matters).
  • The second assumption (2) plays the central role in our analysis.
  • We can give a convergence speed if we want.
  • Some modifications can be made to obtain the same result in a more general SDE setting.
  • We can craft some sort of almost sure convergence as follows:
    Choose a sequence $(\sigma_n , n \ge 1)$ such that: $$\sum_{n \ge 1} {\sigma_n}^2 <\infty $$ (for example $\sigma_n= \frac{1}{n}$).
    Then there is a sequence of positive real numbers $(\epsilon_n , n \ge 1)$ decreasing to $0$ such that: $$ \sum_{n \ge 1} \frac{\sigma_n^2}{\epsilon_n^2} <\infty $$ Thus, by Borel–Cantelli, for all $t \ge 0$: $$ X^{\sigma_n}(t) \xrightarrow{n \rightarrow +\infty} \tilde{X}(t) \text{ almost surely}$$ Then, thanks to the boundedness of $f$ and Kolmogorov's continuity theorem, we obtain the "almost sure" convergence, that is, almost surely, $$ \forall t: X^{\sigma_n}(t) \rightarrow \tilde{X}(t)$$
  • The SDE can be viewed as an ODE driven by a Hölder-continuous noise, but I don't have enough knowledge of this field to give an adequate proof.
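The small-noise convergence can also be illustrated pathwise (my own sketch, again with the hypothetical drift $f(x)=-\tanh(x)$): driving the SDE with one fixed Brownian path and shrinking $\sigma$, the sup-distance to the deterministic solution shrinks:

```python
import numpy as np

def f(x):
    # hypothetical bounded, decreasing, Lipschitz drift
    return -np.tanh(x)

T, dt = 1.0, 1e-3
n_steps = int(T / dt)
rng = np.random.default_rng(3)
dW = np.sqrt(dt) * rng.standard_normal(n_steps)   # one fixed Brownian path

x_det = np.zeros(n_steps + 1)                     # deterministic solution
for i in range(n_steps):
    x_det[i + 1] = x_det[i] + f(x_det[i]) * dt

sup_err = {}
for sigma in (1.0, 0.5, 0.1, 0.01):
    x = np.zeros(n_steps + 1)
    for i in range(n_steps):
        # Euler-Maruyama for dX = (f(X) - sigma^2/2) dt + sigma dW, same dW for every sigma
        x[i + 1] = x[i] + (f(x[i]) - sigma**2 / 2) * dt + sigma * dW[i]
    sup_err[sigma] = float(np.max(np.abs(x - x_det)))
print(sup_err)   # sup-norm error on [0, T] shrinks with sigma
```

Reusing the same increments `dW` for every $\sigma$ mirrors the coupling used in the proof: both processes are driven by the same Brownian motion.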
  • Thanks a lot also for this! I'll read it carefully. – Riccardo Ceccon Nov 11 '20 at 08:58
  • Maybe you are right, but I cannot really see your point clearly. If you meant that the integral equals zero, then I don't really think so. It would be nice if you elaborated more on that for me. – Paresseux Nguyen Nov 16 '20 at 22:53
  • I deleted it because it was a stupid observation, I'm sorry. However, I'll exploit this situation to ask you some things: what's Euler–Maruyama strong convergence? And how would you prove the almost sure convergence for every $t$? I don't quite get how to pass from pointwise to global a.s. convergence :( – Riccardo Ceccon Nov 16 '20 at 23:36
  • Hi, maybe my term is somehow misleading or not a standard definition. By that convergence, I mean the convergence in $L^1$ of $X_T$ to $\tilde{X}_{T}$ for two stochastic processes which are solutions of SDEs driven by the same Brownian motion (when some parameter of $X$ converges to zero). As you know, this type of convergence is the one we usually encounter when applying finite-difference methods for SDEs. – Paresseux Nguyen Nov 17 '20 at 00:29
  • The a.s. convergence follows from my observation that a.s. those processes live in the same compact subset of $C[0,T]$. Perhaps my observation is not true; I'll update my post later so we can check it together. – Paresseux Nguyen Nov 17 '20 at 00:32

@Mods: please kindly understand that we are discussing.

Almost Sure Convergence

(This is the continuation of my discussion in the above post)

Let $(\sigma_n ; n \ge 0)$ and $(X^{\sigma_n} ; n \ge 0 )$ be the sequence of volatilities and their respective stochastic processes as defined above.
As we argued, we are able to show the almost sure convergence of $X^{\sigma_n}$ to $\tilde{X}$ on a countable dense subset, say $\mathbb{Q}^+$, of our time horizon $\mathbb{R}^+$; that is: almost surely, for all $t \in \mathbb{Q}^+$, we have: $$\lim_n X^{\sigma_n}_t = \tilde{X}_t$$ Now fix a real positive number $T$. We see that, for almost all $\omega \in \Omega$, the sequence of continuous functions $(X^{\sigma_n}(\omega) ; n \ge 1) \subset \mathcal{C}([0,T]) $ is equicontinuous.
Indeed, we can prove this fact either by Kolmogorov's continuity theorem (a bit overkill here) or just by observing that: $$ X^{\sigma_n}_t=x_0 +\underbrace{ \int_{0}^t \left( f(s,X^{\sigma_n}_s) -\dfrac{\sigma_n^2}{2} \right) ds }_{A^{(n)}_t} +\underbrace{\sigma_n W_t}_{B^{(n)}_t}$$

  • The part $A^{(n)}$ gives a Lipschitz continuous function, with a Lipschitz constant clearly bounded uniformly over all $\sigma_n$ (by our choice of $\sigma_n$ and the boundedness of $f$). Hence, $(A^{(n)}(\omega))$ is an equicontinuous family of functions for every $\omega$.

  • The part $B^{(n)}$ clearly gives an equicontinuous family, as it is formed by the scalar multiplication of a bounded real number $\sigma_n$ and a single continuous function shared by all $n$.

Note: the two arguments above are made with all the processes restricted to the compact subset $[0,T]$ of the time horizon.

Thus, for almost every $\omega$, the family of functions $$( X^{\sigma_n}(\omega); n \ge 1 )$$ is a relatively compact subset of $\mathcal{C}([0,T])$: from every subsequence, we can extract a further subsequence that converges uniformly to a limit function.
However, our very first result on the almost sure convergence of this sequence tells us that, if such a limit exists, the limit function must be identical to $\tilde{X}$.
So, a.s., $X^{\sigma_n}(\omega)$ converges uniformly to $\tilde{X}(\omega)$ on $[0,T]$.
Thus the conclusion $\square$.

Comments:

  • In the end, we even have the a.s. convergence of $X^{\sigma_n}$ to $\tilde{X}$ under the usual metric of $\mathcal{C}( \mathbb{R}_+)$.
  • I have been rather verbose in my arguments; in any case, the central idea is the equicontinuity of the paths when restricted to $[0,T]$.
  • Some relaxation of the boundedness of $f$ can be done (if needed). While replacing it by an upper bound for $|f|$ depending only on $t$ is easy, a bound depending also on $x$ is not as straightforward.