
How do you deal with multivariable limits? We'll use the example $f: \mathbb R ^2 \rightarrow \mathbb R$ $$\lim _{(x,y) \rightarrow (0,0)}\frac{\sqrt{|xy|}}{\sqrt{x^2 + y^2}}$$

The limit doesn't exist: along the path $y=x$ we get the value $1/\sqrt{2}$, while along $y = x^3$ we get $0$. How would we prove it with an $\epsilon$-$\delta$ proof? Do we even need to prove it through the definition, or does it suffice to show that approaching the point by different paths leads to different answers?
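To see the two paths numerically, here is a quick Python sketch (the function and variable names are mine, purely illustrative):

```python
import math

def f(x, y):
    # f(x, y) = sqrt(|xy|) / sqrt(x^2 + y^2), undefined at the origin
    return math.sqrt(abs(x * y)) / math.sqrt(x**2 + y**2)

# Approach (0, 0) along y = x: the value is constant, 1/sqrt(2) ~ 0.7071
along_diagonal = [f(t, t) for t in (0.1, 0.01, 0.001)]

# Approach (0, 0) along y = x^3: f(t, t^3) = t / sqrt(1 + t^4), shrinking to 0
along_cubic = [f(t, t**3) for t in (0.1, 0.01, 0.001)]

print(along_diagonal)  # each value ~ 0.7071
print(along_cubic)     # values tending to 0
```

Since the two paths produce different limiting values, no single limit can exist at the origin.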

Given that there are an infinite number of paths along which to approach our point, how would you prove that a limit does actually exist? Take for example $f(x,y) = xy$, where we'll take the limit $$\lim _{(x,y)\rightarrow (1,2)} f(x,y) = 2 $$ This is obvious because the function is continuous at our point of interest, but how do you prove it directly from the definition? The same tricks we used when dealing with a single variable won't apply here, since we're dealing with a point rather than a single number.
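For this particular limit, one standard $\delta$ can be exhibited directly: writing $x = 1+h$, $y = 2+k$, we get $xy - 2 = 2h + k + hk$, so $|xy - 2| \le 2|h| + |k| + |h||k| < 4\delta$ whenever $\lVert (h,k) \rVert < \delta \le 1$; hence $\delta = \min(1, \epsilon/4)$ works. A small Python check of that choice (a sketch, not a proof):

```python
import math
import random

def delta_for(eps):
    # From the algebra above: |xy - 2| < 4*delta when delta <= 1,
    # so delta = min(1, eps/4) guarantees |xy - 2| < eps.
    return min(1.0, eps / 4)

random.seed(0)
for eps in (1.0, 0.1, 0.01):
    d = delta_for(eps)
    for _ in range(10_000):
        # random point strictly inside the delta-ball around (1, 2)
        r = d * random.random()
        theta = 2 * math.pi * random.random()
        x, y = 1 + r * math.cos(theta), 2 + r * math.sin(theta)
        assert abs(x * y - 2) < eps
```

The check samples points in the ball rather than proving anything; the algebra in the lead-in is the actual proof.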

Git Gud
Astrum
    It suffices to show that approaching from two different paths results in two different limits. To prove it does exist, it would almost certainly be necessary to use an epsilon delta proof. – Eoin Oct 15 '14 at 23:26
  • Ok, so how would you go about proving it with epsilon-delta? Amazingly, neither of my textbooks gives a single example of this. Is it supposed to be so obvious? – Astrum Oct 15 '14 at 23:29
  • The first answer gives the definition of an epsilon-delta argument for the continuity of a function at any point $x_0$. If the function is continuous there, then it will also have a limit equal to $f(x_0)$. – Eoin Oct 15 '14 at 23:34

2 Answers


To show that a limit does not exist, it suffices to prove (as Eoin commented) that the limit changes along different paths. Sometimes the paths are not obvious; however, the example you gave is a typical one in calculus texts, which shows that even though you may find agreement of the limit for paths along the $x$- and $y$-axes, choosing the line $y=x$ yields a different limit, and so the limit does not exist.

To show that a multivariable limit does exist requires more care than in the single-variable case; however, some common approaches include

  1. Appealing to theorems of continuity (for instance, polynomials are continuous, as are differentiable functions, although this also requires a little more care than single-variable differentiability).

  2. Using radial properties of the function.

  3. Using a multidimensional squeeze theorem.

Of course one can always fall back on the $\varepsilon, \delta$ definition if needed. I will try to address 2 and 3 below.

Radial Functions - Note that, simply checking the definition, one finds that in general, for any function $f(\vec x)$,

$$\lim_{\vec x \to \vec a} f(\vec x) = \lim_{\lVert \vec x - \vec a \rVert \to 0} f(\vec x).$$

In some ways this seems simpler. For one thing, the limit operator here is simply the typical limit operator we have from single-variable calculus. If we are also able to rewrite $f(\vec x)$ as a function of $\lVert \vec x - \vec a \rVert$, i.e. there exists some function $g$ such that $f(\vec x) = g(\lVert \vec x - \vec a \rVert)$, then we could further write

$$\lim_{\lVert \vec x - \vec a \rVert \to 0} f(\vec x) = \lim_{\lVert \vec x - \vec a \rVert \to 0} g(\lVert \vec x - \vec a \rVert) = \lim_{r\to 0^+} g(r)$$ where $r\in \mathbb R$ and the one-sided limit on the right is due to the fact that $r$ replaced the positive quantity $\lVert \vec x - \vec a \rVert$. Functions for which such a $g$ exists are called radial (at least when $\vec a = 0$), as for these functions the value of $f(\vec x)$ depends only on the distance of the point $\vec x$ from the origin. Examples of such functions include a cone or the ripples on the surface of the water after throwing a pebble in.

As a concrete example of how this might work, consider the function $$f(x,y) = (x^2+y^2)\log(x^4+2x^2y^2+y^4).$$ This may seem complicated at first, but I wanted to choose a nontrivial example. If we were asked to find the limit as $(x,y)\to(0,0)$, we will benefit by first noting that $$f(x,y) = (x^2+y^2)\log((x^2+y^2)^2)=g(\lVert(x,y)\rVert)$$ where $g(r)=r^2\log(r^4)=4r^2\log(r)$. Now we write

$$\lim_{(x,y)\to(0,0)}f(x,y) = \lim_{r\to0^+}4r^2\log(r)$$ and the limit on the right is entirely a single-variable calculus limit, to which we can apply all our single-variable limit theorems, including l'Hôpital's rule (which is needed in this case). This method may seem to have few applications; however, in combination with the squeeze theorem one can easily apply this trick to nonradial functions $f(\vec x)$ by squeezing them between radial ones.
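The radial rewrite above can be checked numerically; this Python sketch (names are mine) confirms that $f$ depends only on $\lVert (x,y) \rVert$ and that $g(r) = 4r^2\log r$ shrinks to $0$:

```python
import math

def f(x, y):
    # f(x, y) = (x^2 + y^2) * log(x^4 + 2 x^2 y^2 + y^4)
    return (x**2 + y**2) * math.log(x**4 + 2 * x**2 * y**2 + y**4)

def g(r):
    # radial form: g(r) = r^2 * log(r^4) = 4 r^2 log(r)
    return 4 * r**2 * math.log(r)

# f depends only on r = ||(x, y)||: points at the same distance agree,
# and both match the radial form g
r = 0.3
assert abs(f(r, 0) - f(0, r)) < 1e-12
assert abs(f(r, 0) - g(r)) < 1e-12

# g(r) -> 0 as r -> 0+, since r^2 beats log(r)
print([g(10**-k) for k in (1, 3, 6)])  # magnitudes shrink toward 0
```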

Squeeze Theorem - Care must be taken here to ensure that your upper and lower bound functions remain above and below the function respectively in a neighborhood around the point you are interested in, but aside from that the reasoning is the same as single variable calculus. Often, at least in easy examples, one finds functions which actually globally bound the function, so this is not as difficult as it may sound.

As an example, take $f(x,y)= \frac{x^5 y}{x^4+4y^2}$, and suppose we want the limit as $(x,y)\to(0,0)$. Then note that

$$0\le\left|\frac{x^5 y}{x^4+4y^2}\right|\le\left|\frac{x^5 y}{4x^2y}\right|=\left|\frac 1 4 x^3\right|$$

And now, appealing to either continuity or radially bounding the right side, we find that the outside functions tend to $0$ as $(x,y)\to (0,0)$, therefore $f(x,y)\to 0$ also.
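The squeeze inequality itself rests on $x^4 + 4y^2 \ge 4x^2|y|$ (AM-GM), and can be sanity-checked numerically; a Python sketch with illustrative names:

```python
import random

def f(x, y):
    # f(x, y) = x^5 y / (x^4 + 4 y^2)
    return x**5 * y / (x**4 + 4 * y**2)

def bound(x, y):
    # By AM-GM, x^4 + 4 y^2 >= 4 x^2 |y|, which gives |f(x, y)| <= |x^3| / 4
    return abs(x**3) / 4

random.seed(1)
for _ in range(10_000):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    if (x, y) != (0.0, 0.0):
        # the bound holds at every sampled point (small slack for rounding)
        assert abs(f(x, y)) <= bound(x, y) + 1e-12
```

Since the bound $|x^3|/4$ tends to $0$ as $(x,y) \to (0,0)$, the squeeze forces $f(x,y) \to 0$ as well.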

The answer here includes a number of useful strategies and ideas, but I thought it would also be helpful to demonstrate the two above in detail here.

mboratko

You do not need to take the $\epsilon$ route. The definition of continuity for maps on $\Bbb R^d$ is:

$$ \forall r>0\ \ \exists a>0: \|x-x_0\|<a\implies \|f(x) - f(x_0)\| < r $$ and as a consequence (actually it is an equivalent definition)

$$ \forall (x_n)\ x_n\to x_0 \implies f(x_n) \to f(x_0) $$ And this is exactly what you intend (and should!) to use in this context.


proof:

  • if $f$ is continuous at $x_0$ and $(x_n)\to x_0$: Fix $r>0$ and consider the $a$ from the definition. For $n$ large enough (say $n>N$) you have $$ \|x_n-x_0\|<a $$ and then $$ \|f(x_n) - f(x_0)\| < r $$ so $f(x_n) \to f(x_0)$.

  • if $\forall (x_n)\ x_n\to x_0 \implies f(x_n) \to f(x_0)$: Assume that $f$ is not continuous at $x_0$. Then you can find $r>0$ such that $\forall n \ \exists x_n: \|x_n - x_0\| < 1/n$ and $\|f(x_n) - f(x_0)\| > r$. Then $x_n \to x_0$ but $f(x_n) \not\to f(x_0)$, a contradiction.
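The sequential criterion is easy to watch in action; a Python sketch using the asker's example $f(x,y) = xy$ at $x_0 = (1,2)$ (the particular sequences are arbitrary choices of mine):

```python
def f(p):
    # f(x, y) = x * y, applied to a point given as a pair
    x, y = p
    return x * y

x0 = (1.0, 2.0)  # f(x0) = 2

# two different sequences converging to (1, 2)
seq_a = [(1 + 1/n, 2 - 1/n) for n in range(1, 1001)]
seq_b = [(1 + (-1)**n / n**2, 2 + 1/n) for n in range(1, 1001)]

for seq in (seq_a, seq_b):
    values = [f(p) for p in seq]
    # the tail of f(x_n) is close to f(x0) = 2, as the criterion predicts
    assert abs(values[-1] - 2.0) < 0.01
```

Of course, checking finitely many sequences proves nothing; the point of the equivalence above is that continuity at $x_0$ guarantees this behavior for every sequence at once.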

mookid