2

Is it true that if the integrand in the integral $$\int_{0}^{T}X\left(t\right)dY\left(t\right)$$ is deterministic (so it is the same “function” for each $\omega\in\Omega$) and continuous, then it doesn't matter what we choose as $s_{i}$ in the approximating sum $$\sum_{i}X\left(s_{i}\right)\left(Y_{t_{i}}-Y_{t_{i-1}}\right),\;\;\;s_{i}\in\left[t_{i-1},t_{i}\right]:$$ the sum will eventually converge to the same random variable $\int_{0}^{T}X\left(t\right)dY\left(t\right)$ in $L^{2}\left(\Omega\right)$? (I will refer to this approximating sum as “the sum” below.)

I mean, this is very surprising to me, because in the very famous example of $\int_{0}^{T}W\left(t\right)dW\left(t\right)$, where $W$ is a Wiener process, it does matter what the $s_{i}$ are, i.e. the sum converges to different random variables depending on the choice of $s_{i}$. I thought the only reason for converging to different limits was the fact that the integrator $W$ hasn't got finite variation. I still think this is the most important reason, but considering the statement above, it isn't the only one.
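
(Even the expected values of the approximating sums depend on the $s_{i}$; a quick computation to make this concrete: since $\mathrm{Cov}\left(W_{s_{i}},W_{t_{i}}-W_{t_{i-1}}\right)=s_{i}-t_{i-1}$ for $s_{i}\in\left[t_{i-1},t_{i}\right]$, we get $$\mathbb{E}\left(\sum_{i}W_{s_{i}}\left(W_{t_{i}}-W_{t_{i-1}}\right)\right)=\sum_{i}\left(s_{i}-t_{i-1}\right),$$ which is $0$ for the left endpoints, $T/2$ for the midpoints and $T$ for the right endpoints.)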

E.g., so far I haven't thought that there is a “reasonable” difference between a Wiener process and a Weierstrass function chosen as integrand, in terms of the integral $\int_{0}^{T}X\left(t\right)dW\left(t\right)$. I thought that in both cases $\sum_{i}X\left(s_{i}\right)\left(W_{t_{i}}-W_{t_{i-1}}\right)$ would converge to different values depending on the points $s_{i}$. According to the statement above, this is not true: choosing $X$ to be a Weierstrass function, it doesn't matter what the $s_{i}$ are, the sum converges to the same random variable. The only fair difference I see between the Weierstrass function and a Wiener process is that one of them is deterministic and the other one is a stochastic process.
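
Here is a small numerical sketch of this contrast (my own illustration, with one particular Weierstrass function), comparing the left-point and right-point sums along a single simulated Brownian path:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 2**16
t = np.linspace(0.0, T, n + 1)
dW = rng.normal(0.0, np.sqrt(T / n), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))  # one Brownian path on the grid

def weierstrass(x, a=0.5, b=13.0, terms=20):
    """Weierstrass function: continuous but nowhere differentiable."""
    k = np.arange(terms)
    return np.sum(a**k[:, None] * np.cos(b**k[:, None] * np.pi * x), axis=0)

for name, X in [("Wiener", W), ("Weierstrass", weierstrass(t))]:
    left = np.sum(X[:-1] * dW)    # s_i = t_{i-1}
    right = np.sum(X[1:] * dW)    # s_i = t_i
    print(f"{name:12s} left={left: .4f} right={right: .4f} diff={right - left: .4f}")
```

For the Wiener integrand the two sums differ by $\sum_{i}\left(W_{t_{i}}-W_{t_{i-1}}\right)^{2}\approx T$, while for the deterministic, continuous Weierstrass integrand the difference shrinks as the mesh is refined.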

So am I right to point out that there is more than one reason why the sum can converge to different values for different choices of the $s_{i}$:

1. $Y$ hasn't got finite variation,

2. $X$ is a stochastic process (so it is $\omega$-dependent)?

If it is indeed true, then what is the “intuition” behind this? Why is it so different when the integrand is deterministic rather than stochastic? In other words, why can I choose whatever $s_{i}$ I like in the case of the (deterministic) Weierstrass function as integrand, with the sum converging to the same random variable, while the choice of $s_{i}$ matters when the integrand is a (stochastic) Wiener process?

Kapes Mate
  • 1,352

2 Answers

1

Is it true that if the integrand in the...

Yes. Following the sections on stochastic integrals in Revuz-Yor or Le Gall, one builds stochastic integrals using elementary processes $H(s)=\sum_{i} H_{i} 1_{(t_{i},t_{i+1}]}(s)$ with $H_{i}\in \mathcal{F}_{t_{i}}$, defining for a martingale $M$

$$\int H(s)dM_{s}:=\sum H_{i}(M_{t_{i+1}}-M_{t_{i}})$$

and then extends to more general $L^{2}$ processes (Theorem 5.4 in Le Gall). But if $H_{i}$ is deterministic, then the filtration constraint $H_{i}\in\mathcal{F}_{t_{i}}$ is automatically satisfied, i.e. we immediately get that this sum is a martingale, and so the same proof goes through.
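
To spell out where adaptedness enters (a small computation in the spirit of those references, not a quote from them): for a deterministic integrand $H_{i}=f(s_{i})$ with any $s_{i}\in[t_{i},t_{i+1}]$ and $M=W$ a Brownian motion, the Itô isometry gives

$$\mathbb{E}\left(\left(\sum_{i}f(s_{i})\left(W_{t_{i+1}}-W_{t_{i}}\right)\right)^{2}\right)=\sum_{i}f(s_{i})^{2}\left(t_{i+1}-t_{i}\right)\longrightarrow\int_{0}^{T}f(t)^{2}dt,$$

and the middle expression is an ordinary Riemann sum of the continuous function $f^{2}$, whose limit does not depend on where the $s_{i}$ are sampled. For a random integrand, the cross terms $\mathbb{E}\left(H_{i}H_{j}\left(W_{t_{i+1}}-W_{t_{i}}\right)\left(W_{t_{j+1}}-W_{t_{j}}\right)\right)$ are only guaranteed to vanish when each $H_{i}$ is $\mathcal{F}_{t_{i}}$-measurable.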

Why is it so different when the integrand is deterministic rather than stochastic?

The key issue is the need for the martingale property and the various martingale convergence theorems, which are used to obtain the $L^{2}$-limit. If the integrand is deterministic, the choice of evaluation point has no effect. If it is random, we have to be careful: e.g. for $s_{i}=\frac{t_{i}+t_{i+1}}{2}$, we get the Stratonovich integral (Definitions of the Stratonovich integral and why the "average" definition is arguably correct).
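
Concretely, in the standard example $X=W$ the two choices give genuinely different $L^{2}$-limits,

$$\sum_{i}W_{t_{i}}\left(W_{t_{i+1}}-W_{t_{i}}\right)\rightarrow\frac{W_{T}^{2}}{2}-\frac{T}{2}\;\;\text{(Itô)},\qquad \sum_{i}W_{\frac{t_{i}+t_{i+1}}{2}}\left(W_{t_{i+1}}-W_{t_{i}}\right)\rightarrow\frac{W_{T}^{2}}{2}\;\;\text{(Stratonovich)},$$

the difference being half the quadratic variation $T$ of $W$ on $[0,T]$.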

To be clear, this is only for the $L^{2}$ convergence. For almost-sure convergence, it fails even for deterministic integrands. For example, if we interpret $\int f(t)dB_{t}$ as a Riemann-Stieltjes integral, i.e.

$$\int f(t)dB_{t}\approx \sum f(s_{i})(B_{t_{i+1}}-B_{t_{i}}),$$

for $s_i\in [t_{i},t_{i+1}]$, then this cannot converge for every continuous $f$: by the Banach-Steinhaus theorem, such convergence would imply that $B$ has finite variation.

**Th. 56 of Protter's book:**

If the sum $S_n=\sum_{i=1}^{n} f(t^{(n)}_i)\,(B_{t^{(n)}_{i+1}} - B_{t^{(n)}_i})$ converges to a limit for every continuous function $f$, then $B$ is of finite variation.

If, however, $f$ is of bounded variation, then we can define

$$\int_{0}^{T} f(t)dB_{t}:=f(T)B_{T}-f(0)B_{0}-\int_{0}^{T} B_{s}df_{s}.$$
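
As a quick illustration (a sketch of mine, with the arbitrary smooth choice $f(t)=\sin t$, which is of bounded variation on $[0,T]$), the pathwise integration-by-parts formula and the left-point Riemann-Stieltjes sum agree up to discretization error:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100_000
t = np.linspace(0.0, T, n + 1)
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate(([0.0], np.cumsum(dW)))  # one Brownian path

f = np.sin(t)        # smooth => bounded variation, with df = cos(t) dt
df = np.cos(t) * dt  # increments of f along the grid

riemann = np.sum(f[:-1] * dW)  # sum of f(t_i)(B_{t_{i+1}} - B_{t_i})
by_parts = f[-1] * B[-1] - f[0] * B[0] - np.sum(B[:-1] * df[:-1])
print(riemann, by_parts)       # the two values are close
```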

For more posts see

What is the explicit obstruction to almost sure convergence in stochastic integrals?

and

Understand better stochastic integral through a.s. convergence

Also, see rough paths for more recent developments extending stochastic integration to rougher signals. That theory explores the issue of the choice of partitions/divisions and how different choices/lifts give different types of stochastic integrals.

Thomas Kojar
  • 3,596
  • thanks for the answer. – Kapes Mate Nov 02 '23 at 18:54
  • I started to write a comment as a "sketch of proof" for my own question, but it has become so long that I decided to write another answer instead. – Kapes Mate Nov 04 '23 at 14:55
  • Can the statement above somehow be extended to a continuous integrand with bounded variation, i.e.: does it matter where we choose $s_i$ in order to reach any kind of convergence when the integrand is a continuous (stochastic) process with bounded variation? I think it does matter... but do we know any (counter)example? – Kapes Mate Dec 16 '23 at 20:36
  • @KapesMate feel free to open a new question, it seems interesting. – Thomas Kojar Dec 16 '23 at 22:04
  • Okay, you can find it here if you are interested: https://math.stackexchange.com/questions/4828901/stochastic-integral-of-continuous-integrand-with-bounded-variation. – Kapes Mate Dec 16 '23 at 23:15
1

I found a kind of “intuitive explanation” that can probably be considered a “sketch of proof”.

Let $f$ be a continuous deterministic function and $W$ a Wiener process on $\left[0,T\right]$. Let $I_{1}$ and $I_{2}$ be the corresponding approximating sums of the integral, with

$$I_{1}\dot{=} \sum_{i}f\left(t_{i}\right)\left(W_{t_{i+1}}-W_{t_{i}}\right)$$ $$I_{2}\dot{=} \sum_{i}f\left(t_{i+1}\right)\left(W_{t_{i+1}}-W_{t_{i}}\right)$$

So the integrand is evaluated at the beginning and at the end of the interval $\left[t_{i},t_{i+1}\right]$. If the two approximating sums had two different partitions, we could “pour together” the two partitions (i.e. take their common refinement), so we may consider the same partition for both approximating sums. Since I mentioned $L^{2}\left(\Omega\right)$ convergence, by definition we should prove that

$$\left\Vert I_{2}-I_{1}\right\Vert _{L^{2}\left(\Omega\right)}^{2}\dot{=}\mathbb{E}\left(\left(I_{2}-I_{1}\right)^{2}\right)\rightarrow0.$$

(This also implies that $\left\Vert I_{1}-I_{2}\right\Vert _{L^{2}\left(\Omega\right)}\rightarrow0$.) If we can prove that this difference converges to zero in this “extreme case”, when the integrand is evaluated at the two endpoints of the interval $\left[t_{i},t_{i+1}\right]$, then it shouldn't be a big deal to prove it for arbitrary evaluation points $s_{i}\in\left[t_{i},t_{i+1}\right]$ for the integrand $f$.

This difference is the following:

$$\left\Vert I_{2}-I_{1}\right\Vert _{L^{2}\left(\Omega\right)}^{2}\dot{=} \mathbb{E}\left(\left(\sum_{i}f\left(t_{i+1}\right)\left(W_{t_{i+1}}-W_{t_{i}}\right)-\sum_{i}f\left(t_{i}\right)\left(W_{t_{i+1}}-W_{t_{i}}\right)\right)^{2}\right) = \mathbb{E}\left(\left(\sum_{i}\left(f\left(t_{i+1}\right)-f\left(t_{i}\right)\right)\left(W_{t_{i+1}}-W_{t_{i}}\right)\right)^{2}\right) = \sum_{i,j}\left(f\left(t_{i+1}\right)-f\left(t_{i}\right)\right)\left(f\left(t_{j+1}\right)-f\left(t_{j}\right)\right)\mathbb{E}\left(\left(W_{t_{i+1}}-W_{t_{i}}\right)\left(W_{t_{j+1}}-W_{t_{j}}\right)\right)=\ldots$$

After expanding the square, we can pull the factors $\left(f\left(t_{i+1}\right)-f\left(t_{i}\right)\right)\left(f\left(t_{j+1}\right)-f\left(t_{j}\right)\right)$ out in front of the $\mathbb{E}\left(\ldots\right)$ parts. This is the point where we use the assumption that $f$ is deterministic.

Since the increments of a Wiener process on disjoint intervals are independent and have zero mean, only the diagonal terms $i=j$ of the double sum survive, because in case of independent variables “the expected value of the product is the product of the expected values”: for $i<j$,

$$\mathbb{E}\left(\left(W_{t_{i+1}}-W_{t_{i}}\right)\left(W_{t_{j+1}}-W_{t_{j}}\right)\right)=\underbrace{\mathbb{E}\left(W_{t_{i+1}}-W_{t_{i}}\right)}_{=0}\underbrace{\mathbb{E}\left(W_{t_{j+1}}-W_{t_{j}}\right)}_{=0}=0.$$

Using $\mathbb{E}\left(\left(W_{t_{i+1}}-W_{t_{i}}\right)^{2}\right)=t_{i+1}-t_{i}$, we can therefore continue:

$$\ldots=\sum_{i}\left(f\left(t_{i+1}\right)-f\left(t_{i}\right)\right)^{2}\left(t_{i+1}-t_{i}\right)\leq\underbrace{\left(\sup_{i}\left|f\left(t_{i+1}\right)-f\left(t_{i}\right)\right|\right)^{2}}_{\rightarrow0}\underbrace{\sum_{i}\left(t_{i+1}-t_{i}\right)}_{=T}\rightarrow0\cdot T=0.$$

The first factor converges to zero because $f$ is continuous (hence uniformly continuous) on $\left[0,T\right]$, and the second factor is simply the total length $T$ of the interval, which equals the expected quadratic variation $\mathbb{E}\left(\sum_{i}\left(W_{t_{i+1}}-W_{t_{i}}\right)^{2}\right)$ of the Wiener process. As a result, $\left\Vert I_{2}-I_{1}\right\Vert _{L^{2}\left(\Omega\right)}^{2}\rightarrow0$, as I stated. Choosing the evaluation point for the integrand indeed doesn't matter in this case.

(I've probably omitted a lot of technical but important steps in the reasoning above, e.g. interchanging $\lim$, $\mathbb{E}$ or $\sum$ without a word... but this whole train of thought is only supposed to be a sketch of proof.)
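
A quick numerical sanity check of the sketch (my own illustration, with the arbitrary continuous choice $f(t)=\sin(5t)$): the Monte Carlo estimate of $\mathbb{E}\left(\left(I_{2}-I_{1}\right)^{2}\right)$ shrinks as the partition is refined and stays below the bound $\left(\sup_{i}\left|f\left(t_{i+1}\right)-f\left(t_{i}\right)\right|\right)^{2}T$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, paths = 1.0, 5_000
f = lambda t: np.sin(5.0 * t)  # any continuous deterministic integrand

for n in (10, 100, 1000):
    t = np.linspace(0.0, T, n + 1)
    df = np.diff(f(t))                             # f(t_{i+1}) - f(t_i)
    dW = rng.normal(0.0, np.sqrt(T / n), size=(paths, n))
    diff = dW @ df                                 # I_2 - I_1 on each simulated path
    print(n, np.mean(diff**2), np.max(df**2) * T)  # MC estimate vs. the sup bound
```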

Kapes Mate
  • 1,352