This is a possible duplicate of this question and probably a few others. However, the question I have in mind seems to be unanswered.

On p. 14 of *The Calculus of Variations* by Gelfand and Fomin, the formula $$ \int_{a}^{b} \left(\frac{\partial F}{\partial y}(x,y(x),y^{\prime}(x))\,h + \frac{\partial F}{\partial y^{\prime}}(x,y(x),y^{\prime}(x))\,h^{\prime} \right) \mathrm{d} x $$ is derived using the Taylor expansion of $F(x,y,z).$ I know what the multivariable Taylor theorem says, but I have a hard time figuring out what exactly is happening here, and what the precise meaning of these equations and of expressions like $F_{y^{\prime}}(x,y,y^{\prime})$ is. Is it just a kind of notation?

It seems that they take $F(x,y,z),$ pretending that $x,y,z$ are independent variables, and apply the Taylor expansion, giving $$ F(x,y+h_2,z+h_3) - F(x,y,z) = h_2 \frac{\partial F}{\partial y}(x,y,z) + h_3 \frac{\partial F}{\partial z}(x,y,z) + \frac{1}{2!}\left(h_2 \frac{\partial}{\partial y} + h_3 \frac{\partial}{\partial z} \right)^{2} F(x,y+\theta h_2, z + \theta h_3) $$ with $0<\theta<1.$ Then they somehow treat $y$ and $h_2$ as functions of $x,$ and $z$ and $h_3$ as the derivatives thereof, and plug them into the penultimate equation. But then what I would expect to see is some application of the chain rule to $\frac{\partial F}{\partial z}(x,y,z).$ What am I missing here? Why can they pretend that $x,y,z$ are independent? Can somebody explain what is going on here? I would appreciate an explanation without a detour into Gateaux differentiability or anything of a similar kind. Thanks.
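To make the step concrete for myself, here is a minimal numerical sketch (my own, not from the book) with a made-up integrand $F(x,y,z) = z^2 + y\sin x$ and arbitrarily chosen $y(x)=x^2,$ $h(x)=\cos x.$ It checks that the difference $F(x,\,y+\varepsilon h,\,y'+\varepsilon h') - F(x,y,y')$ is, to first order in $\varepsilon,$ exactly $\varepsilon\,(F_y h + F_z h'),$ where the partials are computed treating $x,y,z$ as independent and only afterwards evaluated at $y=y(x),$ $z=y'(x)$:

```python
import math

# Hypothetical integrand F(x, y, z), where the slot z stands for y'(x).
def F(x, y, z):
    return z**2 + y * math.sin(x)

# Partial derivatives of F, taken with x, y, z treated as independent variables.
def F_y(x, y, z):
    return math.sin(x)

def F_z(x, y, z):
    return 2 * z

# Sample point, curve y(x) = x^2, and perturbation h(x) = cos(x) (all arbitrary).
x = 0.7
y, yp = x**2, 2 * x                  # y(x) and y'(x)
h, hp = math.cos(x), -math.sin(x)    # h(x) and h'(x)

# First-order Taylor prediction: F_y * h + F_z * h', evaluated along the curve.
linear = F_y(x, y, yp) * h + F_z(x, y, yp) * hp

# Numerical derivative of eps -> F(x, y + eps*h, y' + eps*h') at eps = 0.
eps = 1e-6
numeric = (F(x, y + eps * h, yp + eps * hp) - F(x, y, yp)) / eps

# The two agree up to the O(eps) remainder from the second-order Taylor term.
print(abs(numeric - linear) < 1e-4)
```

The point the sketch illustrates is that the partials are formed first, with $y$ and $z$ as plain independent slots, and the substitution $y\mapsto y(x),$ $z\mapsto y'(x)$ happens only after differentiation, which is why no chain rule on $\frac{\partial F}{\partial z}$ appears.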
Maybe helpful: https://math.stackexchange.com/questions/1963640/why-does-fracdqdt-not-depend-on-q-why-does-the-calculus-of-variations, https://math.stackexchange.com/questions/1205829/how-can-y-and-y-be-independent-in-variational-calculus? – Hans Lundmark Jun 07 '18 at 14:03