
Let $(W_t)$ be a standard Brownian motion, so that $W_t \sim N(0,t)$. I'm trying to show that the random variable defined by $Z_t = \int_0^t W_s \ ds$ is a Gaussian random variable, but have not gotten very far.

I tried approximating the integral by a Riemann sum: choose $\delta, M$ such that $M\delta = t$; then the integral is approximated by $$ \sum_{k=0}^{M-1} (W_{(k+1)\delta} - W_{k\delta} )\delta = \delta \sum_{k=0}^{M-1} X_k $$ where, by standard properties of Brownian motion, the $X_k$ are independent, identically distributed $N(0, \delta)$ random variables. So I find that $Z_t$ is approximated by a random variable with distribution $ N(0, M\delta^3) = N(0,t\delta^2) $. Now letting $ \delta \to 0$, I find that the variance of $Z_t$ is $0$, which does not make sense to me.
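For reference, here is a quick Monte Carlo sketch of this increment-weighted sum (the sample size, step counts and seed are arbitrary choices); its empirical variance does behave like $t\delta^2$:

```python
# Sketch: Monte Carlo check of the increment-weighted sum above; its variance
# should behave like t * delta^2.  (Sample size and seed are arbitrary choices.)
import numpy as np

rng = np.random.default_rng(0)
t, n_paths = 1.0, 10_000

for M in (10, 100, 1000):
    delta = t / M
    X = rng.normal(0.0, np.sqrt(delta), size=(n_paths, M))  # X_k ~ N(0, delta), i.i.d.
    S = delta * X.sum(axis=1)                                # delta * sum_k X_k
    print(M, S.var(), t * delta**2)                          # empirical vs. t * delta^2
```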

Any help is appreciated!


3 Answers


First of all, the (left-endpoint) Riemann sum for $\int_0^t W_s \, ds$ is given by

$$\sum_{k=0}^{M-1} W_{k \delta} \cdot (\delta (k+1)-\delta k).$$

Note that this expression does not equal

$$\sum_{k=0}^{M-1} (W_{(k+1)\delta}-W_{k \delta}) \cdot \delta.$$


Let $t_k := \delta \cdot k$, then

$$\begin{align} G_M &:= \sum_{k=0}^{M-1} W_{k \cdot \delta} \cdot (t_{k+1}-t_k) =\ldots= \sum_{k=0}^{M-1} (W_{t_{k-1}} - W_{t_k}) \cdot t_k + W_{t_{M-1}} \cdot t \\ &= \sum_{k=0}^{M-1} (W_{t_{k-1}}-W_{t_k}) \cdot (t_k-t) \end{align}$$

where $t_{-1}:=0$. Clearly, $G_M$ is Gaussian, $\mathbb{E}G_M=0$ and (using the independence of the increments)

$$\begin{align*} \mathbb{E}(G_M^2)& = \sum_{k=0}^{M-1} (t_k-t)^2 \cdot \underbrace{\mathbb{E}((W_{t_k}-W_{t_{k-1}})^2)}_{t_k-t_{k-1}} \\ &\to \int_0^t (s-t)^2 \, ds \quad \text{as} \, \, M \to \infty. \end{align*}$$

Hence, since $G_M \to Z_t$ almost surely as $M \to \infty$, we conclude that $Z_t$ is Gaussian with mean $0$ and variance $\int_0^t (s-t)^2 \, ds = \frac{t^3}{3}$ (see this question for further details).
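As a sanity check (just a sketch; path count, step size and seed are arbitrary choices), one can simulate the left-endpoint sums $G_M$ and compare their empirical mean and variance with $0$ and $t^3/3$:

```python
# Sketch: simulate G_M = sum_k W_{t_k} (t_{k+1} - t_k) and compare its empirical
# mean and variance with 0 and t^3 / 3.  (Path count, step size, seed arbitrary.)
import numpy as np

rng = np.random.default_rng(1)
t, M, n_paths = 2.0, 400, 20_000
delta = t / M

increments = rng.normal(0.0, np.sqrt(delta), size=(n_paths, M))
W_left = np.cumsum(increments, axis=1) - increments   # column k holds W_{t_k}, W_{t_0} = 0
G_M = delta * W_left.sum(axis=1)                       # left-endpoint Riemann sum

print(G_M.mean(), G_M.var(), t**3 / 3)                 # mean ~ 0, variance ~ t^3/3
```

For $t = 2$ the reported variance should be close to $8/3 \approx 2.67$.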

Remark: In fact, the statement holds in a more general setting. The random variable $Y_t := \int_0^t X_s \, ds$ is Gaussian for any (measurable) Gaussian process $(X_t)_{t \geq 0}$, see this question.
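To illustrate the more general statement, here is a small sketch (the choice of process, sizes and seed are arbitrary) that integrates a Brownian bridge, another Gaussian process, and checks that the standardized third and fourth moments of the time integral are close to those of a Gaussian:

```python
# Sketch: integrate a different Gaussian process, here the Brownian bridge
# B_s = W_s - (s/t) W_t on [0, t], and check that the time integral still
# looks Gaussian.  (Process choice, sizes and seed are arbitrary.)
import numpy as np

rng = np.random.default_rng(4)
t, M, n_paths = 1.0, 500, 10_000
delta = t / M
s = np.linspace(0.0, t, M + 1)

dW = rng.normal(0.0, np.sqrt(delta), size=(n_paths, M))
W = np.concatenate((np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)), axis=1)
bridge = W - (s / t) * W[:, [-1]]          # B_s = W_s - (s/t) W_t
Y = np.trapz(bridge, s, axis=1)            # int_0^t B_s ds, trapezoidal rule

z = (Y - Y.mean()) / Y.std()
print(np.mean(z**3), np.mean(z**4) - 3.0)  # skewness and excess kurtosis, both ~ 0
```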

saz
  • This is quite helpful, but I don't understand why when computing $\mathbb{E}(G_M^2)$ you're allowed to simply square the summand. Shouldn't there be a double sum involving cross terms and the like? I'm not seeing why those terms vanish. – Jonas Nov 24 '12 at 23:39
  • @Jonas: The factors in the cross terms are independent, aren't they? – Stefan Hansen Nov 25 '12 at 00:05
  • @StefanHansen That makes sense. Thanks! – Jonas Nov 25 '12 at 01:14
  • @StefanHansen or Saz - Can you please explain to me what happened in the dots of this equation? $$\begin{align} G_M &:= \sum_{k=0}^{M-1} W_{k \cdot \delta} \cdot (t_{k+1}-t_k) =\ldots= \sum_{k=0}^{M-1} (W_{t_{k-1}} - W_{t_k}) \cdot t_k + W_{t_{M-1}} \cdot t \\ &= \sum_{k=0}^{M-1} (W_{t_{k-1}}-W_{t_k}) \cdot (t_k-t) \end{align}$$ – Matteo Dec 07 '13 at 19:07
  • @Matteo Note that $W_{k \cdot \delta} = W_{t_k}$. Thus, $$\begin{align} \sum_{k=0}^{M-1} W_{k \cdot \delta} (t_{k+1}-t_k) &= \sum_{k=0}^{M-1} W_{t_k} \cdot t_{k+1} - \sum_{k=0}^{M-1} W_{t_k} \cdot t_k \\ &= \sum_{k=1}^{M} W_{t_{k-1}} \cdot t_{k} - \sum_{k=0}^{M-1} W_{t_k} \cdot t_k \\ &= W_{t_{M-1}} \cdot t_{M} + \sum_{k=0}^{M-1} (W_{t_{k-1}}-W_{t_k}) \cdot t_k \end{align}$$ where we used $W_{t_0}=0$ in the last step. – saz Dec 07 '13 at 20:03
  • All clear now! thanks a lot for the clarifications and prompt reply! – Matteo Dec 07 '13 at 20:12
  • @saz We have that $\int_0^t W_s(\omega)\, ds := \lim_{\operatorname{mesh}(\pi)\rightarrow 0} \sum_{i=0}^{n-1} W_{t_i}(\omega)(t_{i+1}-t_i)$ where $\pi: 0 = t_0<t_1<\ldots<t_n=t$. When you take the expectation of this, what is the justification to interchange the limit and the expectation? – Calculon Mar 18 '15 at 19:10
  • @Calculon Where have I claimed/used this? Anyway, we can apply Fubini's theorem to interchange expectation and integration: $$\mathbb{E} \left( \int_0^t W_s \, ds \right) = \int_0^t \mathbb{E}(W_s) \, ds=0.$$ – saz Mar 18 '15 at 19:37
  • @saz I thought it was implicit in your answer. But how does Fubini help when you want to compute the expectation of the square of the integral? – Calculon Mar 18 '15 at 19:39
  • @Calculon "Expectation of the square of the integral"..? Could you please be a bit more specific: Which integral are you exactly talking about? And what do you (exactly) want to know? In my answer, I have calculated $\mathbb{E}(G_M)$ and $\mathbb{E}(G_M^2)$. Since $G_M$ is a finite sum for each $M$, we don't need to interchange limit and expectation. – saz Mar 18 '15 at 19:49
  • @saz Sorry, by the expectation of the square of the integral I meant $E\left[\left(\int_0^t W_s\, ds\right)^2\right]$. I understand how you manipulate $G_M$ and such but I am having trouble understanding the link between the expectation of $G_M$ (or $G_M^2$) and the expectation of $Z_t$ (or $Z_t^2$). – Calculon Mar 18 '15 at 19:57
  • @Calculon I see. Well, it is not obvious that $\mathbb{E}(G_M^2)$ converges to $\mathbb{E}(Z_t^2)$. In this particular case, this follows from the fact that $G_M$ and $Z_t$ are Gaussian. (Mind: If a sequence $(G_n)_n$ converges pointwise to some random variable $Z$ and $\mathbb{E}(G_n^2)$ converges to some constant $c$, then in general $c \neq \mathbb{E}(Z^2)$. However, under the additional assumption that $G_n$ and $Z$ are Gaussian, we have $c= \mathbb{E}(Z^2)$.) – saz Mar 18 '15 at 20:06
  • @saz Thanks a lot. But then how do we know $Z_t$ is Gaussian? – Calculon Mar 18 '15 at 20:11
  • @Calculon $Z_t$ is Gaussian as an almost sure limit of Gaussian random variables; see this question: http://math.stackexchange.com/q/232540/. If you have a closer look at the proof (in the linked answer), you'll see that it actually shows that $\mathbb{E}(G_n) \to \mathbb{E}(Z)$ and $\mathbb{E}(G_n^2) \to \mathbb{E}(Z^2)$ (as in my last comment, $G_n \to Z$ almost surely, and $G_n$ and $Z$ are Gaussian). – saz Mar 18 '15 at 20:19
  • @saz Thank you. Things are much more clear to me now. – Calculon Mar 18 '15 at 20:28
  • @Calculon You are welcome. :) – saz Mar 18 '15 at 20:33

I just found out that we can use the following fact:

If $f:[0,T] \times [0,T] \rightarrow \mathbb{R}$ is continuous and deterministic, then (by the stochastic Fubini theorem) \begin{equation} \int_{0}^T \bigg( \int_{0}^T f(s,t) \,dW_s \bigg) \,dt = \int_{0}^T \bigg( \int_{0}^T f(s,t) \, dt \bigg) \,dW_s. \end{equation} Hence (I suppose that it also works for piecewise continuous functions), \begin{eqnarray} \int_{0}^T W_t \,dt & = & \int_0^T \int_0^T \mathbf{1}_{[0,t]} (s) \,dW_s \,dt \\ & = & \int_0^T \int_0^T \mathbf{1}_{[0,t]} (s) \,dt \,dW_s\\ & = & \int_0^T (T-s) \,dW_s\\ & \sim & N \bigg( 0, \int_{0}^T (T-s)^2 \,ds \bigg). \end{eqnarray}
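A small pathwise check of this computation (just a sketch; grid size and seed are arbitrary choices) is to discretise one Brownian path and evaluate both $\int_0^T W_t\,dt$ and $\int_0^T (T-s)\,dW_s$ on it:

```python
# Sketch: on one discretised Brownian path, compute int_0^T W_t dt and the
# Ito integral int_0^T (T - s) dW_s and check that they agree.
# (Grid size and seed are arbitrary choices.)
import numpy as np

rng = np.random.default_rng(2)
T, M = 1.0, 100_000
delta = T / M
s = np.linspace(0.0, T, M + 1)

dW = rng.normal(0.0, np.sqrt(delta), size=M)   # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))     # W at the grid points

time_integral = np.trapz(W, s)                 # int_0^T W_t dt
ito_integral = np.sum((T - s[:-1]) * dW)       # left-point sum for int_0^T (T-s) dW_s
print(time_integral, ito_integral)             # equal up to discretisation error
```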

Richard

This is an old question, but it may be worth providing a better answer:

Let $\phi(w,y,t;\omega)$ be the conditional characteristic function $\mathbb{E}[\exp(i\omega Y_T)\mid W_t=w,\, Y_t=y]$ (one conditions on the pair $(W_t, Y_t)$, since $Y_t$ alone is not a Markov process). By the law of iterated expectations this quantity is a martingale. It is then straightforward to derive a partial differential equation for $\phi$ using Itô's lemma and setting the drift term to zero. It becomes apparent that the solution takes a Gaussian form.
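As a crude numerical illustration of this Gaussian form (a sketch only; sample counts, step size, seed and the $\omega$ values are arbitrary choices), one can compare the empirical characteristic function of $Z_T = \int_0^T W_s\,ds$ with $\exp(-\omega^2 T^3/6)$, the characteristic function of $N(0,T^3/3)$:

```python
# Sketch: compare the empirical characteristic function of Z_T = int_0^T W_s ds
# with exp(-omega^2 T^3 / 6), the characteristic function of N(0, T^3/3).
# (Sample counts, step size, seed and the omega values are arbitrary choices.)
import numpy as np

rng = np.random.default_rng(3)
T, M, n_paths = 1.0, 500, 20_000
delta = T / M

increments = rng.normal(0.0, np.sqrt(delta), size=(n_paths, M))
W_left = np.cumsum(increments, axis=1) - increments    # W_{t_k} at the left endpoints
Z = delta * W_left.sum(axis=1)                          # Riemann-sum approximation of Z_T

for omega in (0.5, 1.0, 2.0):
    empirical = np.mean(np.exp(1j * omega * Z))
    print(omega, empirical.real, np.exp(-omega**2 * T**3 / 6))
```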

  • Using a hammer to kill a fly does not necessarily make for "better" answers. – Did Jan 01 '14 at 17:30