A book I own says that $$ \delta(xy)= \frac{\delta(x)+\delta (y)}{\sqrt{x^2+y^2}}.$$
This kind of makes sense to me, but I cannot figure out why the denominator is what it is. Does anyone have any idea how to derive this?
Instead of giving a formal and general answer (as others have done), my approach is to regard $\delta(xy)$ as a function in the $(x,y)$ plane, for which I will use the well-known block representation of the delta function. This means:
$$ \delta(xy) = \begin{cases} 0.5/\epsilon^{2} & \text{when $-\epsilon^2 <xy < \epsilon^2$}\\ 0 & \text{otherwise} \end{cases}$$
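The height $0.5/\epsilon^{2}$ is fixed by the usual normalization of the block: viewed in the single variable $u = xy$ it must integrate to one,
$$\int_{-\epsilon^{2}}^{\epsilon^{2}} \frac{0.5}{\epsilon^{2}}\,\mathrm{d}u = 1.$$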
The area where the function is non-zero resembles a four-pointed star. We now split this area into a central square with corners given by $|x| = |y| = \epsilon$; plus the four hyperbolic arms. We cut the central square along its two diagonals and add each section to the nearest arm. Along the horizontal axis we obtain this region:
$$\begin{cases} -|x| < y < |x| & \text{for $|x| < \epsilon$}\\ -\epsilon^2 /|x| < y < \epsilon^2 /|x| & \text{for $|x| \ge \epsilon$} \end{cases}$$
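For $|x| \ge \epsilon$ the bound follows directly from $|xy| < \epsilon^2$, i.e. $|y| < \epsilon^2/|x|$, while for $|x| < \epsilon$ the bound $|y| < |x|$ describes the triangular pieces of the cut square; the two cases match at $|x| = \epsilon$, where both give $|y| < \epsilon$.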
We see that the maximum width in the $y$ direction equals $2\epsilon$. In the limit of $\epsilon$ to zero this results in a delta function in $y$ with pre-factor $p(x) = \min(|x|/\epsilon^2, 1/|x|)$. Along the $y$ axis we obtain the same expression, with the roles of $x$ and $y$ reversed. In total we obtain:
$$\delta(xy) = p(x) \delta(y) + p(y) \delta(x)$$
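The pre-factor can be checked explicitly: for fixed $x$ the block height is $0.5/\epsilon^{2}$ and the region above has width $2\min(|x|,\,\epsilon^{2}/|x|)$ in the $y$ direction, so
$$p(x) = \frac{0.5}{\epsilon^{2}}\cdot 2\min\!\left(|x|,\,\frac{\epsilon^{2}}{|x|}\right) = \min\!\left(\frac{|x|}{\epsilon^{2}},\,\frac{1}{|x|}\right).$$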
It is now convenient to multiply both sides of the equation by $r = (x^2+y^2)^{1/2}$. Using the property $h(x,y)\,\delta(y) = h(x,0)\,\delta(y)$ of the delta function, so that $r\,\delta(y) = |x|\,\delta(y)$ and $r\,\delta(x) = |y|\,\delta(x)$, we obtain:
$$r \delta(xy) = q(x) \delta(y) + q(y) \delta(x)$$
where $q(x) = \min(x^2/\epsilon^2, 1)$. So the new pre-factors are equal to unity, except in a small region of width $2\epsilon$ around the origin where they are smaller. Still, the pre-factors are well-behaved. To make this explicit, we replace each pre-factor by unity and collect the deficit $1-q$ into a correction term concentrated near the origin, which we integrate and represent by another delta function. This way we get:
$$r \delta(xy) = \delta(x) + \delta(y) - \frac {8}{3} \epsilon \delta(x)\delta(y)$$
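The coefficient of the correction term follows from integrating the deficit $1-q$ over the central region,
$$\int_{-\epsilon}^{\epsilon}\left(1-\frac{x^{2}}{\epsilon^{2}}\right)\mathrm{d}x = 2\epsilon-\frac{2}{3}\epsilon = \frac{4}{3}\epsilon,$$
and there are two such contributions (one from the $x$ pre-factor multiplying $\delta(y)$ and one from the $y$ pre-factor multiplying $\delta(x)$), giving $\frac{8}{3}\epsilon$ in total.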
This formula is correct up to linear order in $\epsilon$, although the coefficient of the last term may depend on the specific representation used for the delta function. The main point, though, is that the right-hand side is well-behaved, even at the origin. So we can safely take the limit $\epsilon \to 0$, in which the last term vanishes. Thus we can conclude that:
$$r \delta(xy) = \delta(x) + \delta(y)$$
It is not clear what the precise mathematical definition of the distribution $\delta(xy)$ should be. Here we will assume that $\delta(xy)$ is represented by the generalized function $$ \lim_{\epsilon\to 0^+} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2} $$ in analogy with the Poisson kernel representation of the Dirac delta distribution, i.e. we define
$$\begin{align}\int_{\mathbb{R}^2}\!\mathrm{d}x~\mathrm{d}y~&\sqrt{x^2+y^2}\delta(xy)f(x,y)\cr ~:=~&\lim_{\epsilon\to 0^+} \int_{\mathbb{R}^2}\!\mathrm{d}x~\mathrm{d}y~ \sqrt{x^2+y^2} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2}f(x,y) .\end{align} $$
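(For every $\epsilon>0$ this kernel is normalized in the variable $u=xy$,
$$\int_{\mathbb{R}}\frac{\epsilon/\pi}{u^{2}+\epsilon^{2}}\,\mathrm{d}u = \frac{1}{\pi}\arctan\frac{u}{\epsilon}\bigg|_{-\infty}^{\infty} = 1,$$
which is the sense in which it is a nascent delta function.)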
We can then sketch a calculation using Tonelli/Fubini's theorems and Lebesgue's dominated convergence theorem:
$$\begin{align} \int_{|x|<|y|}\!\mathrm{d}x~\mathrm{d}y~& \sqrt{x^2+y^2} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2}f(x,y)\cr ~=~&\int_{\mathbb{R}\backslash\{0\}}\!\mathrm{d}y~\int_{-|y|}^{|y|}\!\mathrm{d}x~ \sqrt{x^2+y^2} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2}f(x,y)\cr ~\stackrel{x=\epsilon z/y}{=}&\int_{\mathbb{R}\backslash\{0\}}\!\mathrm{d}y~\int_{-y^2/\epsilon}^{y^2/\epsilon}\!\mathrm{d}z~ \sqrt{(\epsilon z/y^2)^2+1} \frac{1/\pi}{z^2+1}f(\epsilon z/y,y)\cr ~\stackrel{\epsilon\to 0^+}{\longrightarrow}&\int_{\mathbb{R}\backslash\{0\}}\!\mathrm{d}y~\int_{\mathbb{R}}\!\mathrm{d}z~ \frac{1/\pi}{z^2+1}f(0,y)\cr ~=~&\int_{\mathbb{R}}\!\mathrm{d}y~f(0,y)\cr ~=:~&\int_{\mathbb{R}^2}\!\mathrm{d}x~\mathrm{d}y~ \delta(x) f(x,y), \end{align}$$ and similarly for $x\leftrightarrow y$ exchanged. $\Box$
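To spell out the substitution step in the middle line: $x=\epsilon z/y$ gives $\mathrm{d}x=(\epsilon/y)\,\mathrm{d}z$, $(xy)^{2}+\epsilon^{2}=\epsilon^{2}(z^{2}+1)$ and $\sqrt{x^{2}+y^{2}}=|y|\sqrt{(\epsilon z/y^{2})^{2}+1}$, so the factors of $|y|$ and $\epsilon$ cancel and the integrand becomes $\sqrt{(\epsilon z/y^{2})^{2}+1}\,\frac{1/\pi}{z^{2}+1}\,f(\epsilon z/y,y)$.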
It might be possible to generalize the above to a class of nascent delta functions for $\delta(xy)$.
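As a purely numerical sanity check of this regularization (my own addition, not taken from the answers): the sketch below evaluates the regularized double integral with the Lorentzian kernel against a Gaussian test function of my choosing and compares it with $\int\varphi(0,y)\,\mathrm{d}y+\int\varphi(x,0)\,\mathrm{d}x$; the first number should approach the second as $\epsilon$ decreases, up to discretization error.

```python
import numpy as np

# Minimal numerical sketch: approximate the regularized integral
#   I(eps) = double integral of sqrt(x^2 + y^2) * (eps/pi) / ((x*y)^2 + eps^2) * phi(x, y)
# on a finite box and compare it with  int phi(0, y) dy + int phi(x, 0) dx.
# The Gaussian test function, the box [-5, 5]^2, the grid and the eps values
# are arbitrary choices for this check.

def phi(x, y):
    return np.exp(-(x**2 + y**2))

x = np.linspace(-5.0, 5.0, 2001)
y = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
dy = y[1] - y[0]

rhs = np.sum(phi(0.0, y)) * dy + np.sum(phi(x, 0.0)) * dx   # -> 2*sqrt(pi) ~ 3.5449

for eps in (0.2, 0.1, 0.05):
    lhs = 0.0
    for xi in x:  # iterated integration: sum over y for each fixed x
        integrand = np.sqrt(xi**2 + y**2) * (eps / np.pi) / ((xi * y)**2 + eps**2) * phi(xi, y)
        lhs += np.sum(integrand) * dy
    lhs *= dx
    print(f"eps = {eps:4.2f}:  regularized integral = {lhs:.4f}   (target = {rhs:.4f})")
```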
So I think the formula is right (and there is no $1/2$, as argued by M. Wind). One has to work with test functions of two variables (i.e. $\varphi = \varphi(x,y)$) to deal with distributions in $2$ dimensions.
Then usually, to define the composition of a distribution with a function, one uses a change of variables (see e.g. Chapter VI in Lars Hörmander's book The Analysis of Linear Partial Differential Operators I). One looks at what happens with a regular function $f$ and then generalizes to the case when $f$ is a distribution. If $f$ is an integrable function and $\varphi$ a smooth test function with compact support, then we can cut the domain in two parts to avoid having $x=0$ and $y=0$ simultaneously
$$ \langle f(xy),\varphi\rangle = \iint_{|x|<|y|} f(xy)\,\varphi(x,y)\,\mathrm d x\,\mathrm d y + \iint_{|y|<|x|} f(xy)\,\varphi(x,y)\,\mathrm d x\,\mathrm d y = I_1+I_2 $$
Now in the first integral, a change of variable yields
$$ I_1= \int_{\Bbb R}\frac{1}{|y|}\int_{-|y|^2}^{|y|^2} f(x) \,\varphi(\tfrac{x}{y},y) \,\mathrm d x\,\mathrm d y = \int_{\Bbb R} \langle f,\varphi_y \rangle \,\frac{1}{|y|}\,\mathrm d y $$
where $\varphi_y(x) = \mathbf 1_{[-|y|^2,|y|^2]}(x)\, \varphi(\tfrac{x}{y},y)$. For the second integral, one gets the same thing with the roles of $x$ and $y$ exchanged, so defining $\tilde{\varphi}(x,y):=\varphi(y,x)$, in general it holds that
$$ \langle f(xy),\varphi\rangle = \int_{\Bbb R} \langle f,\varphi_y \rangle \,\frac{1}{|y|}\,\mathrm d y + \int_{\Bbb R} \langle f,\tilde{\varphi}_x \rangle \,\frac{1}{|x|}\,\mathrm d x $$
Now replace $f$ by the Dirac delta (which I usually write $\delta_0$) and apply this formula to the test function $\Phi=\sqrt{x^2+y^2}\,\varphi$; this amounts to computing $\langle \sqrt{x^2+y^2}\,\delta_0(xy),\varphi\rangle$. Since by definition $\langle \delta_0,g\rangle = g(0)$,
$$ \langle \sqrt{x^2+y^2}\,\delta_0(xy),\varphi\rangle = \langle \delta_0(xy),\Phi\rangle \\= \int_{\Bbb R} \frac{\Phi_y(0)}{|y|}\,\mathrm d y + \int_{\Bbb R} \frac{\tilde{\Phi}_x(0)}{|x|}\,\mathrm d x \\ = \int_{\Bbb R} \varphi(0,y)\,\mathrm d y + \int_{\Bbb R} \varphi(x,0)\,\mathrm d x \\ = \langle \delta_0(x)+\delta_0(y),\varphi\rangle $$
where I used the fact that $\Phi_y(0) = |y|\,\varphi(0,y)$ and $\tilde{\Phi}_x(0) = |x|\,\varphi(x,0)$. Since the above identity holds for every test function, it implies that
$$ \sqrt{x^2+y^2}\,\delta_0(xy) = \delta_0(x)+\delta_0(y) $$
which is the true meaning to give to your equation. (Notice that something like $\delta_0(x)/\sqrt{x^2+y^2}$ has no clear meaning, since one would have $\delta_0(x)/\sqrt{x^2+y^2} = \delta_0(x)/|x|$, which only means "a distribution $T$ such that $|x|\,T=\delta_0$", but there can be several such distributions. See e.g. here.)
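As a concrete illustration, with a Gaussian test function of my own choosing (rapidly decaying rather than compactly supported, which is enough for the computation): for $\varphi(x,y)=e^{-x^2-y^2}$ one has $\Phi_y(0)=|y|\,e^{-y^2}$ and $\tilde{\Phi}_x(0)=|x|\,e^{-x^2}$, so the formula above gives
$$\langle\sqrt{x^2+y^2}\,\delta_0(xy),\varphi\rangle=\int_{\Bbb R}e^{-y^2}\,\mathrm d y+\int_{\Bbb R}e^{-x^2}\,\mathrm d x=2\sqrt{\pi},$$
exactly the value of $\langle\delta_0(x)+\delta_0(y),\varphi\rangle$.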
Also, distributions do not need to have finite integral ... the distributions with finite "integral" are called "bounded measures".
See https://en.wikipedia.org/wiki/Dirac_delta_function#Properties_in_n_dimensions:
The Dirac delta satisfies, for a smooth function $g:\mathbb{R}^n\to\mathbb{R}$ whose zero set $g^{-1}(0)$ is a hypersurface on which $\nabla g\neq 0$,
$$\int_{\mathbb{R}^n} f(\mathbf{x})\,\delta(g(\mathbf{x}))\,d\mathbf{x} = \int_{g^{-1}(0)} \frac{f(\mathbf{x})}{|\nabla g(\mathbf{x})|}\,d\sigma(\mathbf{x}).$$
Your example is a direct application of this property with $g(x,y)=xy$, for which $\nabla g = (y,x)$, $|\nabla g| = \sqrt{x^2+y^2}$, and $g^{-1}(0)$ is the union of the coordinate axes $\{x=0\}\cup\{y=0\}$:
$$\begin{aligned} \int_{\mathbb{R}^2} f(\mathbf{x})\,\delta(xy)\,d\mathbf{x} = \int_{g^{-1}(0)}\frac{f(\mathbf{x})}{|\nabla g|}\,d\sigma(\mathbf{x}) &=\int_{x=0} \frac{f(\mathbf{x})}{|\nabla g|}\,d\sigma(\mathbf{x}) + \int_{y=0} \frac{f(\mathbf{x})}{|\nabla g|}\,d\sigma(\mathbf{x}) \\&=\int_{\mathbb{R}^2} f(\mathbf{x})\,\frac{\delta(x)}{|\nabla g|}\,d\mathbf{x} + \int_{\mathbb{R}^2} f(\mathbf{x})\,\frac{\delta(y)}{|\nabla g|}\,d\mathbf{x} \\&= \int_{\mathbb{R}^2} f(x,y)\,\frac{\delta(x)+\delta(y)}{\sqrt{x^2+y^2}}\,dx\,dy \end{aligned}$$
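(Here, on the line $x=0$ the surface measure is simply $d\sigma = dy$ and $|\nabla g| = |y|$, so $\int_{x=0}\frac{f(\mathbf{x})}{|\nabla g|}\,d\sigma(\mathbf{x}) = \int_{\mathbb{R}}\frac{f(0,y)}{|y|}\,dy$, which is what the $\delta(x)$ integral on the second line expresses; similarly for the line $y=0$ with $d\sigma = dx$ and $|\nabla g| = |x|$.)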
Note that there is a potential problem at the origin: division by zero. This can be fixed by requiring that, at the very least, $f(\mathbf{x})=o(\|\mathbf{x}\|)$ as $\mathbf{x}\to 0$, or by removing the origin from the domain altogether.