
For Brownian motion, I am trying to prove that:

  • For large $n$: the expected total variation is infinite and the expected quadratic variation is finite.
  • For large $n$: the total variation equals the expected total variation, and the quadratic variation equals the expected quadratic variation.

We start with a review of Moment Generating Functions:

  • The moment generating function (MGF) of a random variable $X$ centered around some constant $c$ (where $f(x)$ is the probability density function of $X$) is the expected value of $e^{t(X-c)}$:

    $$M_{X-c}(t) = E[e^{t(X-c)}] = \int_{-\infty}^{\infty} e^{t(x-c)} f(x) dx$$

  • For the $k$-th moment of $X$ around "c", we take the $k$-th derivative (i.e., differentiate $k$ times) of the MGF:

    $$M_{X-c}^{(k)}(t) = \frac{d^k}{dt^k} M_{X-c}(t) = \frac{d^k}{dt^k} \int_{-\infty}^{\infty} e^{t(x-c)} f(x) dx$$

  • To get the $k$-th moment around some point $c$, we evaluate this derivative at $t=0$:

    $$E[(X-c)^k] = M_{X-c}^{(k)}(0)$$

As an example, for a Normal Distribution, the MGF (centered around the point $c=0$) is given by:

$$M_{X}(t) = e^{t\mu + \frac{1}{2}t^2\sigma^2}$$

  • The 4th moment of a Normal Distribution can therefore be calculated by differentiating the MGF four times:

$$M_{X}^{(4)}(t) = \frac{d^4}{dt^4} M_{X}(t) = \frac{d^4}{dt^4} e^{t\mu + \frac{1}{2}t^2\sigma^2}$$

Evaluating this at $t=0$ gives us the 4th moment:

$$E[X^4] = M_{X}^{(4)}(0)$$

$$E[X^4] = 3\sigma^4 + 6\sigma^2\mu^2 + \mu^4$$

When $\mu=0$, this becomes:

$$E[X^4] = 3\sigma^4$$
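
As a sanity check, here is a minimal SymPy sketch of this computation (the symbol names are arbitrary; it just re-derives the formulas above):

```python
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', real=True, positive=True)

# MGF of X ~ N(mu, sigma^2), centered at c = 0
M = sp.exp(t*mu + sp.Rational(1, 2)*t**2*sigma**2)

# E[X^4] = M^(4)(0): differentiate four times, then evaluate at t = 0
fourth_moment = sp.expand(sp.diff(M, t, 4).subs(t, 0))

print(fourth_moment)              # mu**4 + 6*mu**2*sigma**2 + 3*sigma**4
print(fourth_moment.subs(mu, 0))  # 3*sigma**4
```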

This result will be useful for showing later on that $$Var(X^2) = E[(X^2)^2] - [E(X^2)]^2 = E[X^4] - [E(X^2)]^2 $$

Now, we show that the (expected) Total Variation of the Brownian Motion is infinite.

First assume some Brownian Motion $B_t$ defined over the interval $[0, t]$ and partition it into $n$ subintervals. Next, define the Total Variation as:

$$f_n(W) = \sum_{k=1}^{n} \left| B_{t_k}(W) - B_{t_{k-1}}(W) \right|$$

We know from first principles that each increment $B_{t_k} - B_{t_{k-1}}$ is Normal with mean $0$ and variance $\Delta t$. However, each summand above also sits inside an absolute value sign. Therefore, each summand is actually an "absolute value of a normal" random variable.

Assume a normal random variable $X$ with $\mu=0$ and $\sigma=1$ (i.e. a standard normal). Its absolute value $|X|$ is only defined over non-negative values. Therefore, we need to "shift" the "negative mass" of the density and add it to the positive part, such that the new probability density function still integrates to $1$. Conceptually, we think of scaling the function by a factor of $2$ to respect this integration constraint:

$$f(x) = 2 \cdot \frac{1}{\sqrt{2\pi}} e^{ -\frac{x^2}{2} } = \sqrt{\frac{2}{\pi}} e^{ -\frac{x^2}{2} }, \qquad x \ge 0$$

I thought of this trick to find the value of the underlying Gaussian integral: since the density must integrate to $1$,

$$z = \int_0^\infty e^{-x^2/2} \, dx$$

$$\int_0^\infty f(x) \, dx = \sqrt{\frac{2}{\pi}} \int_0^\infty e^{-x^2/2} \, dx = 1$$

$$\sqrt{\frac{2}{\pi}} \cdot z = 1 \implies z = \sqrt{\frac{\pi}{2}}$$
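
A one-line numerical check of this value (assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import quad

# Numerically integrate e^{-x^2/2} over [0, infinity)
z, _ = quad(lambda x: np.exp(-x**2 / 2), 0, np.inf)
print(z, np.sqrt(np.pi / 2))  # both approximately 1.2533
```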

However, I am not sure how to take its expectation. Using the formulas on Wikipedia (https://en.wikipedia.org/wiki/Half-normal_distribution): if $X_n$ is a Brownian increment such that $X_n \sim N(0, \Delta t)$, then $|X_n|$ follows a half-normal distribution with

$$ E(|X_n|) = \sqrt{\frac{2 \Delta t}{\pi}}, \qquad Var(|X_n|) = \Delta t \cdot \left(1 - \frac{2}{\pi}\right) $$
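
To convince myself of these two values, here is a quick Monte Carlo sketch in NumPy (the increment length `dt` and the sample size are arbitrary illustration choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                          # arbitrary increment length
x = rng.normal(0.0, np.sqrt(dt), size=1_000_000)   # X_n ~ N(0, dt)

print(np.abs(x).mean(), np.sqrt(2 * dt / np.pi))   # E|X_n|   vs  sqrt(2*dt/pi)
print(np.abs(x).var(), dt * (1 - 2 / np.pi))       # Var|X_n| vs  dt*(1 - 2/pi)
```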

Now, since our Total Variation function $f_n$ is a sum of $n$ such terms, the expected value is multiplied accordingly (note that $\Delta t = \frac{t}{n}$):

$$E(f_n) = n \sqrt{ \frac{2 \Delta t}{\pi}} = n \sqrt{ \frac{2 \frac{t}{n}}{\pi}} = \sqrt{\frac {2nt}{\pi}} $$

Taking the limit, we can see that the expected Total Variation is infinite:

$$\lim_{{n \to \infty}} E(f_n) = \lim_{{n \to \infty}} \sqrt{\frac {2nt}{\pi}} = \infty$$

We can also compute the variance of $f_n$; since the increments are independent, it is the sum of the individual variances:

$$Var(f_n) = n\cdot \Delta t \cdot \left(1 - \frac{2}{\pi}\right) = n\cdot \frac{t}{n} \cdot \left(1 - \frac{2}{\pi}\right) = t \cdot \left(1 - \frac{2}{\pi}\right) $$
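
A small simulation sketch (the horizon `t = 1.0` and the partition sizes are arbitrary choices) suggests that a realized total variation indeed tracks $\sqrt{2nt/\pi}$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
t = 1.0  # arbitrary time horizon

for n in [10**2, 10**4, 10**6]:
    dt = t / n
    increments = rng.normal(0.0, np.sqrt(dt), size=n)  # B_{t_k} - B_{t_{k-1}}
    f_n = np.abs(increments).sum()                     # total variation over the partition
    print(n, f_n, np.sqrt(2 * n * t / np.pi))          # f_n vs E(f_n) = sqrt(2nt/pi)
```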

Next, we use the Chebyshev Inequality to argue that the actual Total Variation of the Brownian Motion stays close to the expected Total Variation:

$$P( |f_n - E(f_n)| > \epsilon) \ \leq \frac{Var(f_n)}{\epsilon^2}$$

This shows us that the probability of $f_n$ deviating from $E(f_n)$ by more than $\epsilon$ is bounded by a quantity that does not depend on $n$ (i.e. $f_n$ and $E(f_n)$ grow together). As I see it, we can combine this with the statement above, in which we proved that $E(f_n)$ becomes infinite (for large $n$), to suggest that both $E(f_n)$ and $f_n$ tend to infinity.

Now, we have argued that $f_n$ stays close to $E(f_n)$ in probability, and that both $E(f_n)$ and $f_n$ tend to infinity for large values of $n$.

Finally, we need to derive the formula for the Quadratic Variation of the Brownian Motion. This can be done by squaring each increment in the total variation sum (note that this is not the same as squaring $f_n$ itself). Writing $X_n$ for a generic increment:

$$g_n(W) = \sum_{k=1}^{n} \left[ B_{t_k}(W) - B_{t_{k-1}}(W) \right]^2 = \sum_{k=1}^{n} (X_n)^2 $$

From first principles, we know that:

$$E(X_n) = 0$$ $$Var(X_n) = E(X_n^2) - E(X_n)^2 = E(X_n^2) - 0 = E(X_n^2) = \Delta t$$

Now, we can start taking expectations of $g_n(W)$. Basically, we need to multiply $E(X_n^2)$ by $n$ (unlike the expectation of the total variation, the expectation of the quadratic variation is bounded, i.e. finite):

$$E(g_n) = n \cdot E(X_n^2) = n \Delta t = n \frac{t}{n} = t$$

Now, we compute the variance of $g_n$. This is where we use the result from the Moment Generating Function:

$$E(X_n^4) = 3 (\sigma^2)^2 = 3 [Var(X_n)]^2 = 3 (\Delta t)^2$$

$$Var(X_n^2) = E(X_n^4) - [E(X_n^2)]^2 = E(X_n^4) - (\Delta t)^2$$

$$Var(X_n^2) = 3 (\Delta t)^2 - (\Delta t)^2 = 2 (\Delta t)^2 = 2 \left(\frac{t}{n}\right)^2$$

Since the increments are independent, we can see that:

$$Var(g_n) = \sum_{k=1}^{n} Var[(X_n)^2] = n \cdot Var(X_n^2) = n \cdot \frac{2t^2}{n^2} = \frac{2t^2}{n} $$

$$\lim_{{n \to \infty}} Var(g_n) = \lim_{{n \to \infty}} \frac{{2t^2}}{n} = 0$$
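
Again, a brief Monte Carlo sketch (the number of paths and the value of `n` are arbitrary) illustrates that $g_n$ concentrates around $t$ with variance close to $2t^2/n$:

```python
import numpy as np

rng = np.random.default_rng(2)
t, n = 1.0, 1_000
dt = t / n

# Each row of squared increments sums to one realization of g_n
increments = rng.normal(0.0, np.sqrt(dt), size=(5_000, n))
g_n = (increments**2).sum(axis=1)

print(g_n.mean(), t)            # E(g_n) is close to t
print(g_n.var(), 2 * t**2 / n)  # Var(g_n) is close to 2 t^2 / n
```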

Finally, using Chebyshev's Inequality, we can see that:

$$P( |g_n - E(g_n)| > \epsilon) \ \leq \frac{Var(g_n)}{\epsilon^2}$$

This again shows us that $g_n$ converges to $E(g_n) = t$ in probability for large $n$.

Conclusion: Have I done all of this correctly?

  • There's a lot of unnecessary detail in your question. Can you edit it to include only your question and a brief attempt? – Jose Avilez Mar 15 '24 at 08:17
  • Can't you just apply the central limit theorem? Like this is straightforward with it – ioveri Mar 19 '24 at 03:10
  • I provided an answer some days ago. You may let me know whether it has been useful or not. – Amir Mar 22 '24 at 16:14

1 Answer


Here I have tried to simplify your argument and to highlight the fact that the convergence of the total or quadratic variation depends on how the convergence and the sequence of partitions are defined. Some of the convergence results can be proven using only the expectation and variance of the variation (here we focus on these, as requested in the OP), but others require more advanced results such as the continuity of Brownian motion paths or the Borel-Cantelli Lemma (the required references are provided).

Regarding your statements:

  1. For large $n$: the expected total variation is infinite and the expected quadratic variation is finite.
  2. For large $n$: the total variation equals the expected total variation, and the quadratic variation equals the expected quadratic variation.

The first one is correct, but the second one is not accurate:

  • The total variation converges to $\infty$ almost surely for any sequence of partitions whose meshes (maximum sub-interval length) tend to zero.
  • The quadratic variation converges to $T$ in $l_2$ norm for any sequence of partitions whose meshes tend to zero, and converges to $T$ almost surely for any sequence of partitions whose meshes sum to a finite number.

Distributional properties

First consider these properties of $X \sim \mathcal N (0,\sigma^2)$:

$$ \mathbb E (|X|)= \sigma \sqrt{\frac{2}{\pi}} \tag{1}$$

$$ \text{var} (|X|)=\sigma^2 \left(1- \frac{2}{\pi} \right) \tag{2}$$

$$ \mathbb E (X^2)= \text{var}(X)=\sigma^2 \tag{3}$$

$$ \text{var}(X^2)= \sigma^4 \, \text{var}(\chi^2_1)= 2 \sigma^4 \tag{4}$$

Note that the distribution of $|X|$ is called the folded normal distribution; moreover, $\frac{X}{\sigma} \sim \mathcal N(0,1)$ and $\left(\frac{X}{\sigma}\right)^2 \sim \chi^2_1$.
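
For a numerical cross-check of (1)-(4), here is a short sketch using `scipy.stats` (assuming SciPy is available; `sigma = 0.7` is an arbitrary illustration value):

```python
import numpy as np
from scipy.stats import halfnorm, chi2

sigma = 0.7  # arbitrary illustration value

# (1), (2): |X| for X ~ N(0, sigma^2) is half-normal with scale sigma
print(halfnorm.mean(scale=sigma), sigma * np.sqrt(2 / np.pi))
print(halfnorm.var(scale=sigma), sigma**2 * (1 - 2 / np.pi))

# (3), (4): (X / sigma)^2 ~ chi^2_1, so E(X^2) = sigma^2, var(X^2) = 2*sigma^4
print(sigma**2 * chi2.mean(df=1), sigma**2)
print(sigma**4 * chi2.var(df=1), 2 * sigma**4)
```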


Problem setting

Consider the partition $\mathcal P_n= \{t_0=0<t_1<\dots<t_{n-1}< t_n=T \}$ of the interval $[0,T]$, with mesh $\Delta_n =\sup_{i\in [n]} \left(\Delta^i_n:=t_i-t_{i-1}\right)$. Also, for the Brownian motion $B(t)$, define

$$D_1(B,\mathcal P_n)=\sum_{i=1}^{n}|B(t_i)-B(t_{i-1})| $$ $$D_2(B,\mathcal P_n)=\sum_{i=1}^{n}|B(t_i)-B(t_{i-1})|^2. $$

It is known that all the increments $$B(t_i)-B(t_{i-1}) \sim \mathcal N(0,\Delta^i_n), \quad i=1,\dots,n \tag{5}$$ are independent.


Total variation

Using (1) and (5), one can see that

$$\mathbb E \left [ D_1(B,\mathcal P_n) \right ]=\sum_{i=1}^{n}\mathbb E\left [ |B(t_i)-B(t_{i-1})| \right ]=\sqrt{\frac{2}{\pi}}\sum_{i=1}^{n} \sqrt{\Delta^i_n}.$$

This does not converge to any finite number. Indeed, for the uniform partition with $\Delta^i_n=\frac{T}{n}$, we have

$$\mathbb E \left [ D_1(B,\mathcal P_n) \right ]= \sqrt{\frac{2}{\pi}} n\sqrt{\frac{T}{n}},$$

which tends to $\infty$ as $n\to \infty$.
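
For illustration, the exact formula $\sqrt{\frac{2}{\pi}}\sum_{i=1}^{n} \sqrt{\Delta^i_n}$ can be evaluated on a non-uniform partition as well; the sketch below (using randomly placed partition points, an arbitrary choice for demonstration) shows $\mathbb E[D_1(B,\mathcal P_n)]$ diverging even as the mesh $\Delta_n$ shrinks:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1.0  # arbitrary horizon

for n in [10**2, 10**4, 10**6]:
    # A non-uniform partition of [0, T]: n - 1 sorted uniform points
    t_grid = np.concatenate(([0.0], np.sort(rng.uniform(0, T, size=n - 1)), [T]))
    deltas = np.diff(t_grid)                                  # the Delta^i_n
    expected_D1 = np.sqrt(2 / np.pi) * np.sqrt(deltas).sum()
    print(n, deltas.max(), expected_D1)                       # mesh shrinks, E[D_1] diverges
```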

Remark 1: Note that based on the above observation we can say that $D_1(B,\mathcal P_n)$ does not converge to any finite number in any $l_p$ norm with $p \ge 1$ (considering the definition of convergence in $l_p$ norm, we cannot say that $D_1(B,\mathcal P_n)$ converges to $\infty$ in any $l_p$ norm). However, it can be proven that for any sequence of partitions $\mathcal P_n$ with $\Delta_n \to 0$:

$$D_1(B,\mathcal P_n) \to \infty \quad \text{a.s.},$$

using more advanced results; see Proposition 1 in this note, Corollary 1.5 in this note, and the MSE answer (together with Mittens's comments following it) for three different proofs. Your attempt in the OP using Chebyshev's inequality is similar to the third proof, but requires more details.


Quadratic variation

From (3) and (5), we obtain

$$\mathbb E \left [ D_2(B,\mathcal P_n)\right ]=\sum_{i=1}^{n}\mathbb E \left [ |B(t_i)-B(t_{i-1})|^2\right ]=\sum_{i=1}^{n} \Delta^i_n= T.$$

Moreover, using (4) and (5) and considering the independence of the increments:

$$\mathbb E \left [ D_2(B,\mathcal P_n)-T \right ]^2=\text {var} \left [ D_2(B,\mathcal P_n) \right ] = \text {var} \left [ \sum_{i=1}^{n} |B(t_i)-B(t_{i-1})|^2 \right ] =\sum_{i=1}^{n} \text {var} \left (|B(t_i)-B(t_{i-1})|^2 \right)=\\ 2\sum_{i=1}^{n} (\Delta^i_n)^2 \le 2 \sum_{i=1}^{n} (\Delta^i_n) \Delta_n =2T \Delta_n.$$

From the above upper bound, we can see that for any sequence of partitions $\mathcal P_n$ with $\Delta_n \to 0$, $\text {var} \left [ D_2(B,\mathcal P_n) \right ] \to 0$, which means $D_2(B,\mathcal P_n)$ converges to $T$ in $l_2$ norm.
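
A short numerical sketch of this bound (uniform partition, arbitrary parameters; note that for the uniform partition the bound $2T\Delta_n$ is attained with equality):

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 1_000
dt = T / n  # uniform partition, so Delta_n = T / n

# 5,000 independent realizations of D_2 on the same partition
sq_increments = rng.normal(0.0, np.sqrt(dt), size=(5_000, n)) ** 2
D2 = sq_increments.sum(axis=1)

print(D2.var(), 2 * T * dt)  # empirical var(D_2) vs the bound 2*T*Delta_n
```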

Remark 2: By the Markov inequality and the Borel-Cantelli Lemma (see Corollary 1.4 in this note), it can be proven that for any sequence of partitions $\mathcal P_n, n=1,2, \dots$ with $\sum_{n=1}^\infty \Delta_n < \infty$ (which is stronger than $\Delta_n \to 0$):

$$D_2(B,\mathcal P_n) \to T \quad \text{a.s.}$$

– Amir