
Say we have a function $g$ on the $n$-dimensional domain $[0,1]^n$, defined by $g(y_1, y_2, y_3, \ldots, y_n) = \mathbb{1}_{y_1 + y_2 + y_3 + \cdots + y_n \leq 1}$. That is, $g = 1$ if the condition is met and $g = 0$ otherwise. I would like to find $\int g$.

But once we rewrite this as a series of $n$ iterated integrals, how do we proceed? How would we "fix" the other variables when evaluating a given integral, given the particular way the function is defined? It's not as though we can simply disregard all the other $y_i$ when focusing on a specific integral.
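(For reference, here is a rough Monte Carlo sketch, assuming NumPy, of the value I'm after; the sample size is arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n, samples=10**6):
    """Monte Carlo estimate of the volume of {y in [0,1]^n : y_1 + ... + y_n <= 1}."""
    y = rng.random((samples, n))          # uniform points in the unit cube
    return np.mean(y.sum(axis=1) <= 1.0)  # fraction of points landing in the simplex

for n in range(1, 6):
    print(n, mc_estimate(n))
```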

Mittens

2 Answers

1

Let us consider a family of functions $g_{n, t} : [0, 1]^n \to \mathbb{R}$ defined by $g_{n, t}(y_1, \ldots, y_n) = 1_{y_1 + y_2 + \cdots + y_n \leq t}$. I claim that $\int\limits_{[0, 1]^n} g_{n, t}(y_1, \ldots, y_n) dy_1 \ldots dy_n = \frac{t^n}{n!}$ for all $n \in \mathbb{N}_{> 0}$, $t \in [0, 1]$, and equals $0$ for $t < 0$.

We prove this by induction on $n$.

The base case here is $n = 1$. In this case, it is clear that the integral gives us $t = \frac{t^1}{1!}$ for $t \in [0, 1]$ as required and gives us $0$ for $t < 0$.

Now, we consider the inductive step. Write $n = k + 1$, and suppose $\int\limits_{[0, 1]^k} g_{k, t}(y_1, \ldots, y_k) dy_1 \ldots dy_k = \frac{t^k}{k!}$ for all $t \in [0, 1]$, and equals $0$ for $t < 0$.

Then we have $\int\limits_{[0, 1]^{k + 1}} g_{k + 1, t}(y_1, \ldots, y_k, y_{k + 1}) dy_1 \ldots dy_k dy_{k + 1} = \int\limits_0^1 dy_{k + 1} \int\limits_{[0, 1]^k} g_{k + 1, t}(y_1, \ldots, y_k, y_{k + 1}) dy_1 \ldots dy_k$ by Fubini's theorem.

Now note that $g_{k + 1, t}(y_1, \ldots, y_k, y_{k + 1}) = 1_{y_1 + \cdots + y_k + y_{k + 1} \leq t}$. Note that $y_1 + \cdots + y_k + y_{k + 1} \leq t$ if and only if $y_1 + \cdots + y_k \leq t - y_{k + 1}$. Therefore, $1_{y_1 + \cdots + y_k + y_{k + 1} \leq t} = 1_{y_1 + \cdots + y_k \leq t - y_{k + 1}} = g_{k, t - y_{k + 1}}(y_1, \ldots, y_k)$. So we can rewrite the integral as $\int\limits_0^1 dy_{k + 1} \int\limits_{[0, 1]^k} g_{k, t - y_{k + 1}}(y_1, \ldots, y_k) dy_1 \ldots dy_k$. By the inductive hypothesis, we have

$\int\limits_{[0, 1]^k} g_{k, t - y_{k + 1}}(y_1, \ldots, y_k) dy_1 \ldots dy_k = \begin{cases} \frac{(t - y_{k + 1})^k}{k!} & y_{k + 1} \leq t \\ 0 & \text{otherwise} \end{cases}$

So the integral can be rewritten as $\int\limits_0^t dy_{k + 1} \frac{(t - y_{k + 1})^k}{k!}$. Make the $u$-substitution $u = t - y_{k + 1}$ to get the integral $\int\limits_0^t \frac{u^k}{k!} du = \frac{t^{k + 1}}{(k + 1) k!} = \frac{t^{k + 1}}{(k + 1)!}$. This is exactly what we needed to show. The proof is complete. $\square$
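As a sanity check on the closed form just proved, the iterated integral can be evaluated symbolically for small $n$ with a general $t \in [0, 1]$; here is a sketch assuming SymPy (for $t \leq 1$ the upper limits $t - y_1 - \cdots$ never exceed $1$, so the constraints $y_i \leq 1$ are automatically satisfied):

```python
import sympy as sp

t = sp.symbols("t", nonnegative=True)

def simplex_volume(n):
    """Iterated integral of 1 over {y_i >= 0, y_1 + ... + y_n <= t}, innermost variable first."""
    ys = sp.symbols(f"y1:{n + 1}", nonnegative=True)
    expr = sp.Integer(1)
    for i in reversed(range(n)):
        upper = t - sum(ys[:i])            # budget left for ys[i] once the outer variables are fixed
        expr = sp.integrate(expr, (ys[i], 0, upper))
    return sp.simplify(expr)

for n in range(1, 5):
    print(n, simplex_volume(n))            # expect t, t**2/2, t**3/6, t**4/24
```

The printed expressions should match $\frac{t^n}{n!}$.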

So in particular, for $g(y_1, \ldots, y_n) = g_{n, 1}(y_1, \ldots, y_n)$, we have $\int\limits_{[0, 1]^n} g(y_1, \ldots, y_n) dy_1 \ldots dy_n = \frac{1}{n!}$.
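Numerically, the same value can be reproduced by nesting one-dimensional quadratures, which makes explicit how the inner integral's effective limits depend on the variables fixed by the outer integrals. A sketch assuming SciPy (accuracy is limited by the kink in the integrand):

```python
import math
from scipy.integrate import quad

def iterated(n, budget=1.0):
    """Integral over [0,1]^n of 1_{y_1 + ... + y_n <= budget}, done as nested 1-D integrals."""
    if n == 1:
        # innermost integral: length of {y in [0,1] : y <= budget}
        return min(max(budget, 0.0), 1.0)
    value, _ = quad(lambda y: iterated(n - 1, budget - y), 0.0, 1.0)
    return value

for n in range(1, 5):
    print(n, round(iterated(n), 6), 1 / math.factorial(n))
```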

Mark Saving
  • @Invincible It is straightforward to compute the integral for any fixed $n$ by following the procedure laid out in the induction step. It is also easy to guess by a simple scaling argument that for $t \in [0, 1]$, the integral should be proportional to $t^n$, and it is trivial to verify that the integral is $0$ for $t < 0$. Once you know it's proportional to $t^n$, you need a recurrence for the constant. You could try the induction with the claim that the integral is $c_n t^n$: if you do this, you'll see in the proof that $c_1 = 1$ and $c_{k + 1} = \frac{1}{k + 1} c_k$. This means $c_n = 1/n!$ – Mark Saving Oct 13 '22 at 23:16
  • @Invincible - there is no mystery in doing mathematics. You just have to play with it; that is, in this case, compute the integral for some small values of $n$ like $n=1,2,3,\ldots$, notice a pattern, and finally prove it in the general context. – Salcio Oct 13 '22 at 23:46
0

The integral can be computed directly using Fubini's theorem: $$\int^1_0\int^{1-x_1}_0\ldots\int^{1-x_1-\ldots-x_{n-1}}_0 dx_n\ldots dx_2 dx_1$$

For example, for $n=3$, after carrying out the innermost integral over $x_3$, we have \begin{align} \int^1_0\int^{1-x_1}_0(1-x_1-x_2)\,dx_2\,dx_1&=\int^1_0(1-x_1)^2-\frac12(1-x_1)^2\,dx_1\\ &=\frac12\int^1_0(1-x_1)^2\,dx_1\\ &=-\frac16(1-x_1)^3\Big|^1_0=\frac16 \end{align}
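This $n = 3$ computation can be reproduced step by step with a computer algebra system; a sketch assuming SymPy:

```python
import sympy as sp

x1, x2, x3 = sp.symbols("x1 x2 x3", nonnegative=True)

inner  = sp.integrate(sp.Integer(1), (x3, 0, 1 - x1 - x2))  # = 1 - x1 - x2
middle = sp.integrate(inner, (x2, 0, 1 - x1))               # = (1 - x1)**2 / 2
outer  = sp.integrate(middle, (x1, 0, 1))                   # = 1/6
print(outer)
```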

To proceed by induction, notice that each of the inner iterated integrals has the form $$\frac1{k!} \int^{1-s}_0(1-s-t)^k\,dt=\frac{1}{(k+1)!}(1-s)^{k+1}$$ where $s$ stands for the sum of the variables fixed by the outer integrals.
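This identity can likewise be checked symbolically for a few values of $k$ (again a SymPy sketch):

```python
import sympy as sp

s, t = sp.symbols("s t", nonnegative=True)

for k in range(5):
    lhs = sp.integrate((1 - s - t)**k, (t, 0, 1 - s)) / sp.factorial(k)
    rhs = (1 - s)**(k + 1) / sp.factorial(k + 1)
    print(k, sp.simplify(lhs - rhs) == 0)   # expect True for every k
```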


The integral in the OP can also be evaluated by more general methods: \begin{align} \int_{\mathbb{R}^n_+}f(x_1+\ldots+x_n)x_1^{a_1-1}\cdot\ldots\cdot x^{a_n-1}_n\, dx_1\ldots dx_n = \frac{\Gamma(a_1)\cdot\ldots\cdot\Gamma(a_n)}{\Gamma(a_1+\ldots+a_n)}\int^\infty_0 f(t) t^{a_1+\ldots+a_n-1}\,dt\tag{1}\label{one} \end{align} where $\Gamma(t)=\int^\infty_0x^{t-1}e^{-x}\,dx$. The formula above has been studied on MSE before. For example, in this posting the formula is obtained using (a) spherical coordinates, or (b) linear transformations and Fubini's theorem.

The integral in the OP is of the form \eqref{one} with $f(t)=\mathbb{1}_{[0,1]}(t)$ and $a_1=\ldots =a_n=1$, in which case $$ \int_{\mathbb{R}^n_+}\mathbb{1}_{[0,1]}(x_1+\ldots+ x_n)\,dx_1\ldots dx_n=\frac{1}{(n-1)!}\int^1_0t^{n-1}dt=\frac{1}{n!}$$
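Formula \eqref{one} with $f = \mathbb{1}_{[0,1]}$ can also be spot-checked numerically; below is a rough Monte Carlo sketch assuming NumPy, with arbitrarily chosen exponents $a_i \geq 1$ (kept at least $1$ so the integrand stays bounded):

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def lhs_mc(a, samples=1_000_000):
    """MC estimate of int_{R^n_+} 1_{x_1+...+x_n <= 1} * prod x_i^(a_i - 1) dx."""
    a = np.asarray(a, dtype=float)
    x = rng.random((samples, len(a)))        # the integrand vanishes outside [0,1]^n
    return np.mean(np.prod(x ** (a - 1), axis=1) * (x.sum(axis=1) <= 1.0))

def rhs_formula(a):
    """Right-hand side of (1) with f = 1_{[0,1]}: Gamma product / Gamma(s) times 1/s."""
    s = sum(a)
    return math.prod(math.gamma(ai) for ai in a) / (math.gamma(s) * s)

for a in [(1, 1, 1), (2, 1.5, 1), (1, 1, 1, 1)]:
    print(a, round(lhs_mc(a), 5), round(rhs_formula(a), 5))
```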


Mittens