
I'm struggling to understand why the condition

$$J\left[y+h\right]-J\left[y\right] = \Psi \left[h\right]+\varepsilon \left(h\right){\lVert}h{\rVert} $$

where $\Psi$ is a linear functional, and $\varepsilon(h)$ is a functional such that $\varepsilon(h) \rightarrow 0 $ for ${\lVert}h{\rVert} \rightarrow 0$

is the condition to check when we want to know whether a given functional $J\left[\cdot \right]$ is differentiable or not.

Should we divide both sides by $\varepsilon(h)$, and take the limit ${\lVert}h{\rVert} \rightarrow 0$, and see what happens?

Also, I know that if $\varepsilon(h) \rightarrow 0 $ for ${\lVert}h{\rVert} \rightarrow 0$, then $\varepsilon(h)$ is a continuous functional - but why? Shouldn't continuity be checked by taking the difference $\lvert\varepsilon(f_n) - \varepsilon (f)\rvert$ and checking whether it $\rightarrow 0$ for ${\lVert}f_n - f{\rVert} \rightarrow 0$?
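For intuition, here is a small numerical sketch (my own toy example, not from the question): for the functional $J[y]=\int_0^1 y(x)^2\,dx$ we have $J[y+h]-J[y] = 2\int yh\,dx + \int h^2\,dx$, so the linear part is $\Psi[h]=2\int yh\,dx$ and the remainder divided by $\lVert h\rVert$ is exactly $\lVert h\rVert$, which visibly goes to $0$:

```python
import numpy as np

# Toy example (my own choice, not from the question): J[y] = ∫_0^1 y(x)^2 dx.
# Expanding: J[y+h] - J[y] = 2∫ y h dx + ∫ h^2 dx,
# so Psi[h] = 2∫ y h dx is the linear part, and
# eps(h) = (remainder)/‖h‖ = ‖h‖ → 0 as ‖h‖ → 0.

x = np.linspace(0.0, 1.0, 1001)
dx = x[1] - x[0]

def J(y):
    return np.sum(y**2) * dx            # crude quadrature of ∫ y^2 dx

def Psi(y, h):
    return np.sum(2.0 * y * h) * dx     # the candidate linear functional

y = np.sin(np.pi * x)
h0 = np.cos(3.0 * np.pi * x)            # a fixed direction

for t in [1.0, 0.1, 0.01, 0.001]:
    h = t * h0
    norm_h = np.sqrt(np.sum(h**2) * dx)              # discrete L^2 norm
    eps = (J(y + h) - J(y) - Psi(y, h)) / norm_h     # this is epsilon(h)
    print(norm_h, eps)                  # eps shrinks together with ‖h‖
```

Note that we divide the remainder by $\lVert h\rVert$, not by $\varepsilon(h)$; $\varepsilon(h)$ is *defined* as that quotient.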

  • It's defined that way because differential calculus is the study of local linear approximations (and this also conforms with our usual intuitive understanding in terms of slopes). See Defining differentiability of a function of two variables for the motivation, and also this answer. The definition of differentiability can be given in much more generality. For your next question, we have $\epsilon(0):=0$ and so $\epsilon$ is continuous at $0$. – peek-a-boo Oct 11 '21 at 18:33
  • Continuity of $\epsilon$ at the origin can be described either directly ($\epsilon(h)\to 0$ as $h\to 0$), or in terms of sequences: for every sequence ${h_n}$ such that $h_n\to 0$, we require $\epsilon(h_n)\to 0$. – peek-a-boo Oct 11 '21 at 18:34
  • But if we have $J\left[y+h\right]-J\left[y\right] = \Psi \left[h\right]+\varepsilon \left(h\right){\lVert}h{\rVert} $, then by what should we divide both sides to actually calculate the functional derivative? Should we divide by $\varepsilon(h)$, or should we divide by ${\lVert}h{\rVert} $? If I had to guess, we should be dividing both sides by ${\lVert}h{\rVert} $ because then it'd be $\lim _{{\lVert}h{\rVert}\to 0}\frac{J\left[y+h\right]-J\left[y\right]}{{\lVert}h{\rVert}}$, just like in the definition of the derivative of a function? – anon Oct 11 '21 at 18:47
  • The functional derivative is the linear mapping $\Psi$; typically denoted as $DJ_y$ or $dJ_y$ or, in the calculus of variations, $\delta J_y$, or simply $\delta J$. Calculating $\Psi$ means figuring out what $\Psi(h)$ is for all possible $h$. One possible way of doing this is to note that $\Psi(h)=DJ_y(h)$ (the Fréchet derivative of $J$ at $y$, applied to $h$) and, by the chain rule, this is equal to $\frac{d}{ds}\bigg|_{s=0}J(y+sh)=\lim\limits_{s\to 0}\frac{J(y+sh)-J(y)}{s}$. I.e. $\Psi(h)=DJ_y(h)=\frac{d}{ds}\bigg|_{s=0}J(y+sh)=\lim\limits_{s\to 0}\frac{J(y+sh)-J(y)}{s}$. – peek-a-boo Oct 11 '21 at 19:05
  • This is like saying if you have a differentiable function, then knowing all the directional derivatives determines the "total" derivative. In the calculus of variations, it is usually most convenient to calculate this directional derivative for all possible $h$, thereby determining what $\Psi=DJ_y$ is. – peek-a-boo Oct 11 '21 at 19:06
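The recipe in the last two comments can be checked numerically. A sketch (the test functional, grid, and direction $h$ are my own choices, not from the thread): the central difference in $s$ recovers $\Psi(h)=\frac{d}{ds}\big|_{s=0}J(y+sh)$, which for $J[y]=\int_0^1 y^2\,dx$ should equal $2\int yh\,dx$.

```python
import numpy as np

# Sketch of the directional-derivative recipe from the comments,
# for the toy functional J[y] = ∫_0^1 y(x)^2 dx (my own choice).

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def J(y):
    return np.sum(y**2) * dx            # quadrature of ∫ y^2 dx

y = np.exp(-x)
h = x * (1.0 - x)                       # an arbitrary direction

s = 1e-5
directional = (J(y + s*h) - J(y - s*h)) / (2.0*s)   # d/ds|_{s=0} J(y + s h)
analytic = np.sum(2.0 * y * h) * dx                 # Psi(h) = 2∫ y h dx

print(directional, analytic)            # the two values agree
```

Because $J$ is quadratic in $y$, the central difference here is exact up to floating-point error; for a general functional one would shrink $s$ and watch the quotient converge.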

1 Answer


The functional derivative is the linear mapping $\Psi$; typically denoted as $DJ_y$ or $dJ_y$ or, in the calculus of variations, $\delta J_y$, or simply $\delta J$. Calculating $\Psi$ means figuring out what $\Psi(h)$ is for all possible $h$. One possible way of doing this is to note that $\Psi(h)=DJ_y(h)$ (the Fréchet derivative of $J$ at $y$, applied to $h$) and, by the chain rule, this is equal to $\frac{d}{ds}\bigg|_{s=0}J(y+sh)=\lim\limits_{s\to 0}\frac{J(y+sh)-J(y)}{s}$. I.e. $\Psi(h)=DJ_y(h)=\frac{d}{ds}\bigg|_{s=0}J(y+sh)=\lim\limits_{s\to 0}\frac{J(y+sh)-J(y)}{s}$.
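As an illustration of this formula in a genuinely variational setting (the functional $J[y]=\int_0^1 y'(x)^2\,dx$ and all numerical choices below are mine, not from the answer): the directional derivative $\frac{d}{ds}\big|_{s=0}J(y+sh)$ should equal the first variation $2\int y'h'\,dx$.

```python
import numpy as np

# Hypothetical calculus-of-variations example (my own, not from the answer):
# for J[y] = ∫_0^1 y'(x)^2 dx, the first variation is
#   Psi(h) = d/ds|_{s=0} J(y + s h) = 2 ∫ y' h' dx.

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def J(y):
    return np.sum(np.gradient(y, dx)**2) * dx   # quadrature of ∫ (y')^2 dx

y = np.sin(np.pi * x)
h = x**2 * (1.0 - x)**2                         # vanishes at the endpoints

s = 1e-5
psi_numeric = (J(y + s*h) - J(y - s*h)) / (2.0*s)            # central diff in s
psi_analytic = np.sum(2.0 * np.gradient(y, dx)
                      * np.gradient(h, dx)) * dx             # 2 ∫ y' h' dx

print(psi_numeric, psi_analytic)                # the two values agree
```

Since `np.gradient` is linear in its argument and $J$ is quadratic, the central difference matches the analytic first variation up to floating-point error.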
