
Let $f:\mathbb{R}^2\to\mathbb{R}$ be a function and let $x=x(t),y=y(t)$. I want to prove that $\frac{df}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}$ (all functions are assumed differentiable).
Most of the proofs I've seen use ideas like the total differential (which I don't understand), or have steps that are left unexplained because of their complexity. So I've been trying to prove it using single-variable calculus ideas and the partial derivative.
This is what I've got (mostly from an MIT OCW video):
$\Delta f=\Delta f_x+\Delta f_y$ (Here $\Delta f_x$ is the change in $f$ when only $x$ changes, and $\Delta f_y$ is the change when only $y$ changes)
$\Rightarrow\Delta f=\frac{\Delta f_x}{\Delta x}\Delta x+\frac{\Delta f_y}{\Delta y}\Delta y$
$\frac{\Delta f_x}{\Delta x}\approx\frac{\partial f}{\partial x}$ and $\frac{\Delta f_y}{\Delta y}\approx\frac{\partial f}{\partial y}$
$\Rightarrow \Delta f\approx f_x\Delta x+ f_y\Delta y$
$\tag{1} \Rightarrow \frac{\Delta f}{\Delta t}\approx f_x\frac{\Delta x}{\Delta t}+ f_y\frac{\Delta y}{\Delta t}$
As $\Delta t\to 0$, $\frac{\Delta x}{\Delta t}\to \frac{dx}{dt}$, so when I take the limit it's supposed to become:
$\lim_{\Delta t\to 0}\frac{\Delta f}{\Delta t}=\lim_{\Delta t\to 0}f_x\frac{\Delta x}{\Delta t}+\lim_{\Delta t\to 0}f_y\frac{\Delta y}{\Delta t}$
$\tag{2} \Rightarrow \frac{df}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}$
I know that as $\Delta t\to 0$, $\frac{\Delta x}{\Delta t}\to \frac{dx}{dt}$, $\frac{\Delta y}{\Delta t}\to \frac{dy}{dt}$ and $\frac{\Delta f}{\Delta t}\to \frac{df}{dt}$, but how do I know that $(f_x\frac{\Delta x}{\Delta t}+ f_y\frac{\Delta y}{\Delta t})$ converges to $\frac{df}{dt}$?
(1) is an approximation, and (2) is an equality. How do I know that (1) turns into an equality when the limit is applied?
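This is not a proof, but as a quick numerical sanity check that the difference quotient in (1) really approaches the chain-rule value, I tried a made-up example $f(x,y)=x^2y$ with $x=\cos t$, $y=\sin t$ (these particular functions are just an illustrative choice):

```python
import math

# Sanity check only, not a proof: f(x, y) = x^2 * y with x = cos t, y = sin t.
def f(x, y):
    return x**2 * y

t = 1.0
x, y = math.cos(t), math.sin(t)

# Chain-rule value: f_x = 2xy, f_y = x^2, dx/dt = -sin t, dy/dt = cos t
exact = 2*x*y*(-math.sin(t)) + x**2 * math.cos(t)

# Difference quotients Delta f / Delta t for shrinking Delta t
errors = []
for dt in (1e-1, 1e-2, 1e-3, 1e-4):
    df = f(math.cos(t + dt), math.sin(t + dt)) - f(x, y)
    errors.append(abs(df/dt - exact))
print(errors)  # each entry is roughly 10x smaller than the previous one
```

The error shrinks roughly linearly in $\Delta t$, which is consistent with (1) becoming exact in the limit, but of course this doesn't explain *why*.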


UPDATE 1: Keeping the error terms in the proof:
$\Delta f=\Delta f_x+\Delta f_y$
From manipulating the Taylor series I get
$\Delta f_x=\frac{\partial f}{\partial x}\Delta x+\sigma_1(\Delta x)$ and
$\Delta f_y=\frac{\partial f}{\partial y}\Delta y+\sigma_2(\Delta y)$
(Here $\sigma_1(\Delta x)$ is a function of $\Delta x$ that tends to $0$ as $\Delta x\to 0$; similarly for $\sigma_2$)
$\Rightarrow \Delta f=f_x\Delta x+f_y\Delta y+\sigma_1(\Delta x)+\sigma_2(\Delta y)$
$\Rightarrow \frac{\Delta f}{\Delta t}=f_x\frac{\Delta x}{\Delta t}+ f_y\frac{\Delta y}{\Delta t}+\frac{\sigma_1(\Delta x)}{\Delta t}+\frac{\sigma_2(\Delta y)}{\Delta t}$
As $\Delta t\to 0$, $\Delta x\to 0$ (taking $\Delta x=x(t+\Delta t)-x(t)$);
also $\frac{\Delta x}{\Delta t}\to \frac{dx}{dt}$, and $\frac{\sigma_1(\Delta x)}{\Delta t}\to 0$, since $\sigma_1(\Delta x)\to 0$ much faster than $\Delta t$
$\Rightarrow \lim_{\Delta t\to 0}\frac{\Delta f}{\Delta t}=\lim_{\Delta t\to 0}f_x\frac{\Delta x}{\Delta t}+\lim_{\Delta t\to 0}f_y\frac{\Delta y}{\Delta t}$
$\therefore \frac{df}{dt}=\frac{\partial f}{\partial x}\frac{dx}{dt}+\frac{\partial f}{\partial y}\frac{dy}{dt}$
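To spell out the step where the error terms vanish, I think one would need the stronger property $\frac{\sigma_1(\Delta x)}{\Delta x}\to 0$ as $\Delta x\to 0$ (which is what the Taylor remainder should actually give); then, ignoring the case $\Delta x=0$, one can factor:
$\frac{\sigma_1(\Delta x)}{\Delta t}=\frac{\sigma_1(\Delta x)}{\Delta x}\cdot\frac{\Delta x}{\Delta t}\to 0\cdot\frac{dx}{dt}=0$ as $\Delta t\to 0$,
and similarly for $\sigma_2$. I'm not sure how to handle $\Delta x=0$ cleanly here.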

What I'm unsure about are the Taylor series manipulations. This is what I did for $\Delta f_x$:
$f(x)=f(c)+f'(c)(x-c)+\sigma(x-c)$ (where $\sigma(x-c)$ collects all higher-order terms)
Taking $c=x$ and evaluating at $x+\Delta x$, I get
$f(x+\Delta x)=f(x)+f'(x)\Delta x+\sigma(\Delta x)$
$\Rightarrow f(x+\Delta x)-f(x)=f'(x)\Delta x+\sigma(\Delta x)$
$\therefore \Delta f_x=f'(x)\Delta x+\sigma(\Delta x)$
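As a sanity check on this manipulation, for the concrete choice $f(x)=x^2$ the remainder can be computed explicitly:
$\Delta f_x=(x+\Delta x)^2-x^2=2x\,\Delta x+(\Delta x)^2$, so $\sigma(\Delta x)=(\Delta x)^2$, and indeed $\frac{\sigma(\Delta x)}{\Delta x}=\Delta x\to 0$ as $\Delta x\to 0$.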

I'd appreciate it if someone could check this proof and the validity of these manipulations.

KraZZ
    You should keep track of the error term. – user10354138 Apr 15 '21 at 01:43
  • Thanks for the idea, I had not thought of that, I'll try to do it and get back. – KraZZ Apr 15 '21 at 03:27
  • You're right to keep track of the error terms, but you're not making the estimates correctly/precisely. Here is an answer I wrote previously (obviously on the more intuitive side), but if you know some $\epsilon,\delta$ proofs and some linear algebra; mainly the fact that continuity of a linear transformation $T$ implies it is "bounded" (in the sense that there exists $M>0$ such that for all $x\in\Bbb{R}^n$, $\lVert T(x)\rVert\leq M\lVert x\rVert$) then I think what I wrote there should be understandable (and enough to finish up the proof). – peek-a-boo Apr 15 '21 at 20:13
  • Of course, if after reading that page (and following the "hint" in my previous comment) you still have doubts, then I'll elaborate more. (The reason I'm actually hesitant to provide an answer directly in this 2-dimensional case is because I think that thinking in terms of these special cases completely obscures the general idea which tbh is very simple, it also makes the notation more complicated). – peek-a-boo Apr 15 '21 at 20:18
  • @peek-a-boo I will definitely read your answer there as it is interesting, but can you still point out what parts of this particular proof of this 2 variable case are imprecise or have fault, so that I can understand? – KraZZ Apr 15 '21 at 20:35
  • Even before I answer your questions about the proof, let me ask you: what is the definition of differentiability in higher dimensions? Because if you can't answer this correctly then of course a correct proof is not possible. (What you've written down is what it means for $f$ to have partial derivatives, but this is not the same thing as the definition of differentiability). Also, you write "$\sigma_1(\Delta x)\to 0$ as $\Delta x \to 0$". This is true, but not good enough; it should be $\sigma_1(\Delta x)/\Delta x \to 0$ as $\Delta x \to 0$. There's other small errors. – peek-a-boo Apr 15 '21 at 20:46
  • @peek-a-boo I see your point. I'll try to understand the answer you have written. – KraZZ Apr 15 '21 at 21:29

0 Answers