
Suppose $f:\mathbb{R}^2\rightarrow \mathbb{R}$ is differentiable at $(x_0,y_0)$. I want to show that

$$f(x,y) = f(x_0,y_0)+\frac{\partial f}{\partial x}\Delta x + \frac{\partial f}{\partial y}\Delta y +\epsilon (x,y)$$

where $\epsilon(x,y)\rightarrow 0$ as $(x,y)\rightarrow (x_0,y_0)$, and the partial derivatives are evaluated at $(x_0,y_0)$.


The idea I have in mind is the chain of approximations

$$f(x,y)\sim f(x_0,y) + \frac{\partial f}{\partial x}\Delta x\sim f(x_0,y_0)+\frac{\partial f}{\partial x}\Delta x + \frac{\partial f}{\partial y}\Delta y$$

The last approximation is easy to prove by regarding $f$ as a function of $y$ alone: let $f_{x_0}(y)=f(x_0,y)$ and note that $$f_{x_0}(y)=f_{x_0}(y_0)+\frac{df_{x_0}}{dy}(y_0)\,\Delta y +\epsilon _y\Delta y$$ where $$\epsilon _y =\Big( \frac{f_{x_0}(y)-f_{x_0}(y_0)}{\Delta y}-\frac{df_{x_0}}{dy}(y_0)\Big) \rightarrow 0 \ \ \text{as} \ \ \Delta y\rightarrow 0$$

Yet I can't prove the first approximation.


I would appreciate any help.

Sam
  • What exactly is your definition of derivative? (If you only require the existence of partials along the coordinate axes, then your goal is not true.) – Jacob Manaker Jul 13 '21 at 22:21
  • That is no different from showing $f$ is continuous at $(x_0,y_0).$ Is that what you really want to show? – zhw. Jul 13 '21 at 23:39
  • @JacobManaker I had not noticed that just requiring the partial derivatives to exist along the coordinate axes was not enough, but you are right. I believe that if we assume the total derivative exists at $(x_0,y_0)$ then the result would follow, right? – Sam Jul 14 '21 at 07:05
  • @zhw. How would $f$ being continuous at $(x_0,y_0)$ yield the result? – Sam Jul 14 '21 at 07:06
  • can you write down the definition of $f$ being differentiable at $(x_0,y_0)$? – peek-a-boo Jul 14 '21 at 07:14
  • @Leo: Yes; that is the essence of Prado's solution. – Jacob Manaker Jul 14 '21 at 20:38
  • As others have stated, continuity is enough to show it for an arbitrary error $\epsilon(x,y)$. Basically, for any pair of continuous functions that pass through the same point, $$\lim_{(x,y)\rightarrow (x_0,y_0)} f(x,y)-g(x,y)=f(x_0,y_0)-g(x_0,y_0) = 0$$ which means that your error converges to zero. On the other hand, you cannot always substitute for $f$ any other function that passes through the same point. That would be like saying $f(x)=x^2$ looks like $g(x)=100x$ around the origin, just because both are zero there. Usually, some control on the error is needed, as the "concern" in my answer requires. – R. W. Prado Jul 16 '21 at 06:26
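The warning in the first comment can be made concrete with the standard example

$$f(x,y) = \begin{cases} \dfrac{xy}{x^2+y^2}, & (x,y)\neq (0,0)\\ 0, & (x,y)=(0,0)\end{cases}$$

Here $f(t,0)=f(0,t)=0$ for all $t$, so both partial derivatives exist and vanish at the origin, yet $f(t,t)=\tfrac{1}{2}$ for all $t\neq 0$, so $f$ is not even continuous at $(0,0)$ and the claimed expansion cannot hold there.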

1 Answer


Any differentiable function satisfies this by the definition of differentiability, which asserts the existence of a vector $v$ (it can be seen as a linear functional) such that $$ \lim_{(x,y)\rightarrow (x_0,y_0)} \dfrac{f(x,y)-f(x_0,y_0)-\left(v_1 (x-x_0)+v_2(y-y_0)\right)}{|x-x_0|+|y-y_0|}=0 $$ After that, it is easy to see that the partial derivatives are the components of the vector $v$, since $$0 = \lim_{t \rightarrow 0}\dfrac{f((x_0,y_0)+t e_i)-f(x_0,y_0)-t v_i}{t},$$ for $i=1$ and $i=2$, where $e_1=(1,0)$ and $e_2=(0,1)$. There is no need to be a big brain here; for more details see the article. Conversely, a function with continuous partial derivatives is differentiable (see the proof), and that is what you were trying to do in your last paragraph. All of this is basically stated in Stewart.
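To spell out how the definition gives the expansion in the question (with $\Delta x = x-x_0$ and $\Delta y = y-y_0$), set

$$\epsilon(x,y) = f(x,y)-f(x_0,y_0)-v_1\,\Delta x-v_2\,\Delta y$$

The limit in the definition says precisely that $\dfrac{\epsilon(x,y)}{|\Delta x|+|\Delta y|}\rightarrow 0$ as $(x,y)\rightarrow (x_0,y_0)$; in particular $\epsilon(x,y)\rightarrow 0$, and by the computation above $v_1=\frac{\partial f}{\partial x}(x_0,y_0)$ and $v_2=\frac{\partial f}{\partial y}(x_0,y_0)$.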

Here, my only concern is about your $\epsilon(x,y)$ error term. It needs to be of the form $$\epsilon(x,y) = (|x-x_0|+|y-y_0|)\,g(x,y),$$ with $g(x,y)$ converging to zero as $(x,y)\rightarrow(x_0,y_0)$. With that, it is easy to see that what you want to show is exactly the definition of differentiability.
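As a quick sanity check of this stronger requirement, take the worked example $f(x,y)=xy$: expanding,

$$xy=(x_0+\Delta x)(y_0+\Delta y)=x_0y_0+y_0\,\Delta x+x_0\,\Delta y+\Delta x\,\Delta y,$$

so $\epsilon(x,y)=\Delta x\,\Delta y=(|\Delta x|+|\Delta y|)\,g(x,y)$ with $|g(x,y)|\le\min(|\Delta x|,|\Delta y|)\rightarrow 0$, as required. By contrast, an error such as $\sqrt{|\Delta x|}$ also tends to zero but admits no such $g$ (take $\Delta y=0$ and let $\Delta x\rightarrow 0$), which is why $\epsilon\rightarrow 0$ alone is weaker than differentiability.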