5

a book I own says that $$ \delta(xy)= \frac{\delta(x)+\delta (y)}{\sqrt{x^2+y^2}}.$$

This kind of makes sense to me, but I cannot figure out why the denominator is what it is. Does anyone have any idea how to derive this?

Qmechanic
  • 12,298
  • Hello this may be a start but I do not think it quite solves it. I do not see how I can use double integrals when I only have one delta function. Double integrals would make sense if it was $$\delta(x) \delta(y)$$ – user2551700 Sep 30 '21 at 01:28
  • my bad, I misinterpreted your expression with $\delta(x,y)$. I've deleted my comment. – NeuroEng Sep 30 '21 at 11:47
  • Double integrals make sense for the right-hand side of the referenced formula, but is the referenced formula consistent with $\delta (a x)=\frac{1}{|a|}\,\delta (x)$? (see formula (5) at https://mathworld.wolfram.com/DeltaFunction.html). – Steven Clark Oct 05 '21 at 17:41

4 Answers

1

Instead of giving a formal and general answer (as others have done), my approach is to regard $\delta(xy)$ as a function in the $(x,y)$ plane, for which I will use the well-known block representation. This means:

$$ \delta(xy) = \begin{cases} 0.5/\epsilon^{2} & \text{when $-\epsilon^2 <xy < \epsilon^2$}\\ 0 & \text{otherwise} \end{cases}$$

The area where the function is non-zero resembles a four-pointed star. We now split this area into a central square with corners given by $|x| = |y| = \epsilon$; plus the four hyperbolic arms. We cut the central square along its two diagonals and add each section to the nearest arm. Along the horizontal axis we obtain this region:

$$\begin{cases} -|x| < y < |x| & \text{for $|x| < \epsilon$}\\ -\epsilon^2 /|x| < y < \epsilon^2 /|x| & \text{for $|x| \ge \epsilon$} \end{cases}$$

We see that the maximum width in the $y$ direction equals $2\epsilon$. In the limit of $\epsilon$ to zero this results in a delta function in $y$ with pre-factor $p(x) = \min(|x|/\epsilon^2, 1/|x|)$. Along the $y$ axis we obtain the same expression, with the roles of $x$ and $y$ reversed. In total we obtain:

$$\delta(xy) = p(x) \delta(y) + p(y) \delta(x)$$

It is now convenient to multiply both sides of the equation by $r = (x^2+y^2)^{1/2}$. Using the property $h(x,y)\,\delta(y) = h(x,0)\,\delta(y)$ of the delta function (so that $r\,p(x)\,\delta(y) = |x|\,p(x)\,\delta(y)$), we obtain:

$$r \delta(xy) = q(x) \delta(y) + q(y) \delta(x)$$

where $q(x) = \min(x^2/\epsilon^2, 1)$. So the new pre-factors are equal to unity, except in a small region of width $2\epsilon$ around the origin where the pre-factor is smaller. Still, the pre-factor is well-behaved. To make this explicit, we replace the pre-factor by unity everywhere and integrate the central correction term so that it yields another delta function. This way we get:

$$r \delta(xy) = \delta(x) + \delta(y) - \frac {8}{3} \epsilon \delta(x)\delta(y)$$

This formula is correct up to linear order in $\epsilon$, although the pre-factor of the last term may depend on the specific representation used for the delta function. The main point though is that the function is well-behaved, even in the origin. So we can safely take the limit of $\epsilon$ to zero, and then the last term vanishes. Thus we can conclude that:

$$r \delta(xy) = \delta(x) + \delta(y)$$
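A numerical sanity check of this final identity is straightforward (this is my addition, not part of the answer): pair $r\,\delta_\epsilon(xy)$ with a test function using the block representation above and the same split along $|x| = |y|$. The Gaussian test function `phi` and the cutoff `L` are choices made here.

```python
import numpy as np
from scipy.integrate import quad

# Check  r * delta(xy) = delta(x) + delta(y)  numerically, with the block
# representation delta_eps(t) = 1/(2 eps^2) for |t| < eps^2, 0 otherwise.

L = 6.0  # integration cutoff (assumption; phi decays fast enough)

def phi(x, y):
    return np.exp(-(x**2 + y**2))   # symmetric test function: phi(x,y) = phi(y,x)

def pairing(eps):
    """<r * delta_eps(xy), phi>, computed by splitting along |x| = |y|.

    On the half-region |x| < |y| the condition |xy| < eps^2 restricts x to
    |x| < b(y) with b(y) = min(|y|, eps^2/|y|); by symmetry of phi the full
    integral is twice this half-region contribution (and twice again for y < 0).
    """
    def half(y):                     # y > 0 here
        if y == 0.0:
            return 0.0
        b = min(y, eps**2 / y)
        inner = quad(lambda x: np.sqrt(x**2 + y**2) * phi(x, y), -b, b)[0]
        return inner / (2.0 * eps**2)
    return 4.0 * quad(half, 0.0, L, points=[eps], limit=400)[0]

# As eps -> 0, pairing(eps) should approach
#   int phi(x,0) dx + int phi(0,y) dy = 2 sqrt(pi)
target = 2.0 * np.sqrt(np.pi)
```

Consistent with the discussion above, the deviation from the limit shrinks linearly in $\epsilon$, so comparing `pairing(0.1)` with `pairing(0.01)` shows the convergence directly.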

M. Wind
  • 3,624
  • 1
  • 14
  • 20
  • Nice approach. So what you basically are doing is approximating $\delta(t)$ with $\frac{1}{2\epsilon^2} \mathbf{1}_{[-\epsilon^2,+\epsilon^2]}(t),$ where $\mathbf{1}_A$ is the indicator function on the set $A.$ This gives a region which you split into a more-or-less horizontal part and a more-or-less vertical part. Then you take limits. – md2perpe Oct 05 '21 at 11:47
1

It is not clear what the precise mathematical definition of the distribution $\delta(xy)$ should be. Here we will assume that $\delta(xy)$ is represented by the generalized function $$ \lim_{\epsilon\to 0^+} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2} $$ in analogy with the Poisson kernel representation of the Dirac delta distribution, i.e. we define

$$\begin{align}\int_{\mathbb{R}^2}\!\mathrm{d}x~\mathrm{d}y~&\sqrt{x^2+y^2}\delta(xy)f(x,y)\cr ~:=~&\lim_{\epsilon\to 0^+} \int_{\mathbb{R}^2}\!\mathrm{d}x~\mathrm{d}y~ \sqrt{x^2+y^2} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2}f(x,y) .\end{align} $$

We can then sketch a calculation using Tonelli/Fubini's theorems and Lebesgue's dominated convergence theorem:

$$\begin{align} \int_{|x|<|y|}\!\mathrm{d}x~\mathrm{d}y~& \sqrt{x^2+y^2} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2}f(x,y)\cr ~=~&\int_{\mathbb{R}\backslash\{0\}}\!\mathrm{d}y~\int_{-|y|}^{|y|}\!\mathrm{d}x~ \sqrt{x^2+y^2} \frac{\epsilon/\pi}{(xy)^2+\epsilon^2}f(x,y)\cr ~\stackrel{x=\epsilon z/y}{=}&\int_{\mathbb{R}\backslash\{0\}}\!\mathrm{d}y~\int_{-y^2/\epsilon}^{y^2/\epsilon}\!\mathrm{d}z~ \sqrt{(\epsilon z/y^2)^2+1} \frac{1/\pi}{z^2+1}f(\epsilon z/y,y)\cr ~\stackrel{\epsilon\to 0^+}{\longrightarrow}&\int_{\mathbb{R}\backslash\{0\}}\!\mathrm{d}y~\int_{\mathbb{R}}\!\mathrm{d}z~ \frac{1/\pi}{z^2+1}f(0,y)\cr ~=~&\int_{\mathbb{R}}\!\mathrm{d}y~f(0,y)\cr ~=:~&\int_{\mathbb{R}^2}\!\mathrm{d}x~\mathrm{d}y~ \delta(x) f(x,y), \end{align}$$ and similarly with $x\leftrightarrow y$ exchanged. $\Box$

It might be possible to generalize the above to a class of nascent delta functions for $\delta(xy)$.
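This regularization can also be checked numerically (my addition, not part of the answer): integrate the Poisson-kernel representation above against a test function over the half-region $|x|<|y|$ and double, exactly as in the proof. The Gaussian test function `phi` and the cutoff `L` are choices made here.

```python
import numpy as np
from scipy.integrate import quad

# Check that  <sqrt(x^2+y^2) (eps/pi)/((xy)^2+eps^2), phi>
# tends to  int phi(x,0) dx + int phi(0,y) dy  as eps -> 0+.

L = 6.0  # integration cutoff (assumption; phi decays fast enough)

def phi(x, y):
    return np.exp(-(x**2 + y**2))   # symmetric: phi(x, y) = phi(y, x)

def pairing(eps):
    """Integrate over the half-region |x| < |y| and double, as in the proof."""
    def strip(y):                    # y > 0; the integrand is a Lorentzian in x
        if y == 0.0:                 # of width eps/y centred at x = 0
            return 0.0
        f = lambda x: (np.sqrt(x**2 + y**2) * (eps / np.pi)
                       / ((x * y)**2 + eps**2) * phi(x, y))
        return quad(f, -y, y, points=[0.0], limit=400)[0]
    return 4.0 * quad(strip, 0.0, L, points=[np.sqrt(eps)], limit=400)[0]

target = 2.0 * np.sqrt(np.pi)        # = int phi(x,0) dx + int phi(0,y) dy
```

Comparing `pairing(1e-2)` with `pairing(1e-4)` shows the convergence; for this particular nascent family the approach to the limit appears to be slower than linear in $\epsilon$, so small values are needed.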

KCd
  • 46,062
Qmechanic
  • 12,298
  • Actually, it is normally clear what the definition of $\delta(xy)$ is, or at least the standard references on distribution theory provide general definitions for the composition of a distribution with a smooth function ... – LL 3.14 Oct 05 '21 at 16:50
0

So I think the formula is right (and there is no factor of $1/2$, contrary to what M. Wind argued). One has to work with test functions of two variables (i.e. $\varphi = \varphi(x,y)$) to deal with distributions in $2$ dimensions.

Then usually, to define the composition of a distribution with a function, one uses a change of variables (see e.g. Chapter VI in Lars Hörmander's book The Analysis of Linear Partial Differential Operators I). One looks at what happens for a regular function $f$ and then generalizes to the case when $f$ is a distribution. If $f$ is an integrable function and $\varphi$ a smooth test function with compact support, then we can cut the domain in two parts to avoid having $x=0$ and $y=0$ simultaneously
$$ \langle f(xy),\varphi\rangle = \iint_{|x|<|y|} f(xy)\,\varphi(x,y)\,\mathrm d x\,\mathrm d y + \iint_{|y|<|x|} f(xy)\,\varphi(x,y)\,\mathrm d x\,\mathrm d y = I_1+I_2. $$

In the first integral, a change of variables yields
$$ I_1= \int_{\Bbb R}\frac{1}{|y|}\int_{-|y|^2}^{|y|^2} f(x) \,\varphi(\tfrac{x}{y},y) \,\mathrm d x\,\mathrm d y = \int_{\Bbb R} \langle f,\varphi_y \rangle \,\frac{1}{|y|}\,\mathrm d y $$
where $\varphi_y(x) = \mathbf 1_{[-|y|^2,|y|^2]}(x)\, \varphi(\tfrac{x}{y},y)$. For the second integral one gets the same thing with $x$ and $y$ exchanged, so defining $\tilde{\varphi}(x,y):=\varphi(y,x)$, in general it holds that
$$ \langle f(xy),\varphi\rangle = \int_{\Bbb R} \langle f,\varphi_y \rangle \,\frac{1}{|y|}\,\mathrm d y + \int_{\Bbb R} \langle f,\tilde{\varphi}_x \rangle \,\frac{1}{|x|}\,\mathrm d x. $$

Now take $f$ to be the Dirac delta (which I usually write $\delta_0$), multiply by $\sqrt{x^2+y^2}$, and absorb this factor into the test function by defining $\Phi=\sqrt{x^2+y^2}\,\varphi$. Since by definition $\langle \delta_0,g\rangle = g(0)$,
$$ \langle \sqrt{x^2+y^2}\,\delta_0(xy),\varphi\rangle = \langle \delta_0(xy),\Phi\rangle \\= \int_{\Bbb R} \frac{\Phi_y(0)}{|y|}\,\mathrm d y + \int_{\Bbb R} \frac{\tilde{\Phi}_x(0)}{|x|}\,\mathrm d x \\ = \int_{\Bbb R} \varphi(0,y)\,\mathrm d y + \int_{\Bbb R} \varphi(x,0)\,\mathrm d x \\ = \langle \delta_0(x)+\delta_0(y),\varphi\rangle $$ where I used the fact that $\Phi_y(0) = |y|\,\varphi(0,y)$ and $\tilde{\Phi}_x(0) = |x|\,\varphi(x,0)$. 
Since the above identity holds for any test function, it implies that $$ \sqrt{x^2+y^2}\,\delta_0(xy) = \delta_0(x)+\delta_0(y) $$ which is the true meaning to give to your equation. (Notice that something like $\delta_0(x)/\sqrt{x^2+y^2}$ has no clear meaning: one would have $\delta_0(x)/\sqrt{x^2+y^2} = \delta_0(x)/|x|$, which only means "a distribution $T$ such that $|x|\,T=\delta_0$", but there can be several such distributions. See e.g. here.)
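The change-of-variables identity used above can be verified numerically for a regular integrable $f$ in place of a distribution (this check is my addition; the Gaussian choices of $f$ and $\varphi$, both symmetric under $x \leftrightarrow y$, and the cutoff are mine):

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Verify  <f(xy), phi> = int <f, phi_y>/|y| dy + int <f, phi~_x>/|x| dx
# with phi_y(x) = 1_{[-|y|^2,|y|^2]}(x) phi(x/y, y), for a regular f.

Lc = 6.0                                    # cutoff (assumption)
f = lambda t: np.exp(-t**2)                 # integrable, even
phi = lambda x, y: np.exp(-(x**2 + y**2))   # symmetric: phi(x,y) = phi(y,x)

# Left-hand side on the square [-Lc, Lc]^2: a plain double integral.
lhs = dblquad(lambda y, x: f(x * y) * phi(x, y),
              -Lc, Lc, lambda x: -Lc, lambda x: Lc)[0]

# Right-hand side: I1 = I2 here because f and phi are symmetric under
# x <-> y, and the y-integrand is even in y, so rhs = 2 * I1 = 4 * int_0^Lc.
def term(y):                                # y > 0
    if y == 0.0:
        return 0.0
    inner = quad(lambda x: f(x) * phi(x / y, y), -y**2, y**2, limit=200)[0]
    return inner / y

rhs = 4.0 * quad(term, 0.0, Lc, limit=200)[0]
```

Since the identity is an exact change of variables (the truncated regions match on both sides), `lhs` and `rhs` agree up to quadrature error.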

LL 3.14
  • 12,457
  • 1
    You average a test function over a 2-dimensional distribution. This leads to an integral over the $xy$ plane. You then introduce the delta function. However, this is not allowed. $\delta (xy)$ and/or $r\,\delta (xy)$ are not 2-dimensional distributions. In fact their integral over the $xy$-plane diverges. – M. Wind Oct 02 '21 at 14:11
  • Note that $\delta(x)\delta(y)$ is a 2-dimensional distribution. It represents a peak around the origin in the $xy$-plane. By changing to polar coordinates one can derive an equivalent form based on $\delta(r)$. That is correct and useful. However, one can not in general reverse the argument, by starting with an arbitrary single delta function and turning it into something two-dimensional. – M. Wind Oct 02 '21 at 14:18
  • So, even if $\delta$ is a one-dimensional distribution, $\delta(xy)$ is the composition of $\delta$ with the two-variable function $(x,y) \mapsto xy$, which is why it is, in the end, a two-dimensional distribution. The problem here has nothing to do with $\delta(x)\delta(y)$ ... or do I not get the point?

    Also, distributions do not need to have finite integral ... the distributions with finite "integral" are called "bounded measures".

    – LL 3.14 Oct 02 '21 at 16:49
  • Your first equation is valid in case $f(xy)$ is a function that is normalized upon integration over $x$ and $y$. But it is not. In fact the equation has little or nothing to do with the properties of $\delta(xy)$, which are found by integration over $\mathrm d(xy)$ instead of $dx\,dy$. – M. Wind Oct 02 '21 at 18:58
  • 1
    Please tell your sources then ... if I look for example in Lars Hörmander's Book "The Analysis of Linear Partial Differential Operators I", there is a chapter called "Composition with Smooth Maps" from which it follows that the composition of a distribution $f : \Bbb R\to \Bbb R$ with a function $u : \Bbb R^2 \to \Bbb R$ is a distribution $f◦ u:\Bbb R^2\to\Bbb R$. Such distributions are defined using test function $φ : \Bbb R^2\to\Bbb R$... so test functions with two variables. I do not even know what meaning you give to "integration over $\mathrm d(xy)$" ? I added the reference in the answer. – LL 3.14 Oct 03 '21 at 01:12
  • Shouldn't the limits of the inner integral of $I_1$ be $\pm|y|^2$? – md2perpe Oct 03 '21 at 14:58
  • Is $\Phi$ really in $C^\infty_c$? Doesn't the factor $\sqrt{x^2+y^2}$ make differentiability fail at origin? – md2perpe Oct 03 '21 at 15:08
  • @M.Wind. Do you mean that $\iint \delta(xy) \, dx \, dy$ and $\iint \sqrt{x^2+y^2}\, \delta(xy) \, dx \, dy$ are divergent? That's true, but that's not a problem; the integrals shall include a test function $\phi(x,y)$ with compact support. Then the divergences disappear. – md2perpe Oct 03 '21 at 15:14
  • Thanks for the careful reading. There should indeed be a $|y|^2$; I corrected it. Hopefully it doesn't change the result, since this is outside the support. $Φ$ is indeed not $C^∞_c$ but $C^0_c$, which is fine when dealing with measures, by a theorem of L. Schwartz (you can always replace them by smoothed approximations and pass to the limit). – LL 3.14 Oct 03 '21 at 15:57
  • According to Theorem 6.1.2 in Hörmander, the smooth function with which the distribution is composed must have surjective derivative on all of the domain. We must exclude origin from the domain for this to be true for $(x,y)\mapsto xy.$ – md2perpe Oct 03 '21 at 16:17
  • Yes, and this is seen in the change of variables (the $1/|x|$ is not defined at $0$). However, to get the formula I am proving, the test function on the side of $\delta(xy)$ vanishes at the origin. Of course one could also try to find the solutions of my last equation, which should lead to the appearance of derivatives of the Dirac delta to give a precise meaning to $(\delta(x)+\delta(y))/\sqrt{x^2+y^2}$, but this goes beyond the question of the OP I think. – LL 3.14 Oct 03 '21 at 16:59
  • The subject is trickier than I first thought. I am now convinced that LL 3.14 is on the right track. I will adjust my answer accordingly. – M. Wind Oct 03 '21 at 17:44
0

See https://en.wikipedia.org/wiki/Dirac_delta_function#Properties_in_n_dimensions:

The Dirac delta satisfies:

  1. case $ℝ→ℝ:\quad$ ${\displaystyle \delta (g(x))=\sum _{i}{\frac {\delta (x-x_{i})}{|g'(x_{i})|}}}$
  2. case $ℝ^n→ℝ:\quad$ ${\displaystyle \int _{\mathbf {R} ^{n}}f(\mathbf {x} )\,\delta (g(\mathbf {x} ))\,d\mathbf {x} =\int _{g^{-1}(0)}{\frac {f(\mathbf {x} )}{|\mathbf {\nabla } g|}}\,d\sigma (\mathbf {x} )}$
  3. case $ℝ^n→ℝ^n:\quad$ ${\displaystyle \int _{\mathbf {R} ^{n}}\delta (g(\mathbf {x} ))\,f(g(\mathbf {x} ))\left|\det g'(\mathbf {x} )\right|\,d\mathbf {x} =\int _{g(\mathbf {R} ^{n})}\delta (\mathbf {u} )f(\mathbf {u} )\,d\mathbf {u} }$

Your example is a direct application of case (2):

  • With $g(x,y)=x⋅y$ we have $∇g = (y, x)$, hence $|∇g| = \sqrt{x^2+y^2}$
  • since $g^{-1}(0) = (\{0\}×ℝ) ∪ (ℝ×\{0\}) $, we can split the integral into two parts:

$$\begin{aligned} \int _{g^{-1}(0)}{\frac {f(\mathbf {x} )}{|\mathbf {\nabla } g|}}\,d\sigma (\mathbf {x} ) &=\int_{x=0} {\frac {f(\mathbf {x} )}{|\mathbf {\nabla } g|}}\,d\sigma (\mathbf {x} ) + \int_{y=0} {\frac {f(\mathbf {x} )}{|\mathbf {\nabla } g|}}\,d\sigma (\mathbf {x} ) \\&=\int_{ℝ^2} f(\mathbf {x} )\frac{δ(x)}{|\mathbf {\nabla } g|}\,d(\mathbf {x} ) + \int_{ℝ^2} f(\mathbf {x} )\frac{δ(y)}{|\mathbf {\nabla } g|}\,d(\mathbf {x} ) \\&= \int_{ℝ^2} f(x,y)\frac{δ(x)+δ(y)}{\sqrt{x^2+y^2}}\,d(x,y) \end{aligned}$$

Note that there is a potential problem at the origin - division by zero. This can be fixed by requiring that, at the very least, $f(\mathbf{x})=o(\|\mathbf{x}\|)$ as $\mathbf{x}→0$, or by removing the origin from the domain altogether.

Hyperplane
  • 11,659
  • 3
    We should exclude origin from the domain since $\nabla g$ vanishes there. – md2perpe Oct 03 '21 at 16:28
  • Yes indeed, all the problem here is that $(\delta(x)+\delta(y))/\sqrt{x^2+y^2}$ is quite badly behaved at $x=0$. Here you are dealing with the $\delta$ as if it were a function ... but it is not clear what the definitions of your objects are. But we agree that the formula does not contain a $1/2$. – LL 3.14 Oct 03 '21 at 17:10
  • 1
    @LL3.14 I'm really not treating the Dirac delta as a function. In each step, when you see $δ$ inside an integral, it should be interpreted as $∫δ(x)f(x)dx≔⟨δ|f⟩$, a pairing on an appropriate, but not further specified, Hilbert space. As md2perpe pointed out, the domain of the functions involved should either not contain the origin, or possibly should only contain functions that vanish to sufficient order at the origin. – Hyperplane Oct 03 '21 at 17:20
  • Then you should just add the fact that $f$ vanishes at the origin. It implies that the equality is only true away from $(0,0)$ (i.e. equivalent to my answer) and does not tell what is happening at $0$ (I suspect there should be a gradient of the Dirac delta at $0$, but I am not sure). More depth and debate in this question than expected. – LL 3.14 Oct 03 '21 at 17:23