4

Consider a function $f(x,y) : [0,1]^2 \to \mathbb{R}$, smooth in both variables (a very strong assumption for our purposes). Define $f_X(x) = \max_{y \in [0,1]} f(x,y)$ and $f_Y(y) = \max_{x \in [0,1]} f(x,y)$, which are well-defined since $f$ is continuous on a compact set. In optimization it is often useful to know that $f_X$ and $f_Y$ have unique maxima/minima. For this purpose it is convenient to check whether they are differentiable, and then set the derivative to zero.

  • Under what sufficient conditions are $f_X$ and $f_Y$ differentiable? (They are always continuous, as pointed out below.) Is the assumption "$f$ smooth" sufficient?

  • How can I study the derivatives of $f_X$ and $f_Y$, so that I can locate the maxima/minima of these functions?

EDIT: Since a counterexample showing non-differentiability has been found, given by $(x,y) \mapsto 1-xy$ on $[-1,1]^2$, I would like sufficient conditions under which differentiability does hold. One may use the language of subdifferentials etc. (from convex analysis) if required for this purpose.
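Explicitly, the counterexample computes as
$$f_X(x) = \max_{y \in [-1,1]} (1 - xy) = 1 + |x|,$$
attained at $y = -\operatorname{sign}(x)$ (any $y$ when $x = 0$), so $f_X$ is continuous but has a kink at $x = 0$, precisely where the maximizer fails to be unique.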

  • Even $f$ being polynomial is not sufficient for differentiability, e.g., $[-1,1]^2\to\mathbb{R};(x,y)\mapsto xy$. On the other hand, if $f$ is continuous, then $f_X,f_Y$ are clearly continuous. – user10354138 Dec 10 '20 at 06:43
  • @user10354138 Thank you for the example. I would prefer sufficient conditions for the maximum to be differentiable. (I realize that the example can be translated into a function that doesn't work in my scenario either.) – Sarvesh Ravichandran Iyer Dec 10 '20 at 06:46
  • I think you need some very strict assumptions, e.g., that $f(c,-)$ and $f(-,c)$ have a unique maximum for every $c\in(0,1)$. – user10354138 Dec 10 '20 at 06:52
  • @user10354138 Thank you for that contribution; I think it makes sense too. I will try to build from there. I am actually dealing with a specific situation, but I want to tackle the question in general, so that the most general statement comes out as an answer rather than one pertaining only to my situation. In my case there is a unique maximum, so no trouble! – Sarvesh Ravichandran Iyer Dec 10 '20 at 06:53

2 Answers

3

A sufficient condition for this to hold is that $f$ is of the form $f(x,y) = g(x) + h(y)$ with $g$ and $h$ convex functions.
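To spell out why this works: if $f(x,y) = g(x) + h(y)$, then
$$f_X(x) = g(x) + \max_{y \in [0,1]} h(y),$$
so $f_X$ differs from $g$ by a constant and is differentiable exactly where $g$ is (in the convex case the subdifferentials agree, $\partial f_X = \partial g$); symmetrically, $f_Y$ differs from $h$ by a constant.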

  • For a sufficient condition, you have answered my question, +1. I am now sure that if I took a function that is a suitable limit of such functions under a strong norm, it too would be max-differentiable. I will have to think about that, of course. – Sarvesh Ravichandran Iyer Feb 08 '21 at 02:30
3

We may apply Danskin's theorem. See [1], [2], [3].

If $f(x, y)$ is convex in $x$ for each $y \in [0, 1]$, and at a given $x_0 \in (0, 1)$, $f(x_0, y)$ has a unique maximizer $y_0$ on $[0, 1]$, then $f_X(x)$ is differentiable at $x_0$ with $$\frac{\mathrm{d} f_X(x)}{\mathrm{d} x}\Big\vert_{x=x_0} = \frac{\partial f(x, y_0)}{\partial x}\Big\vert_{x=x_0}.$$
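As a quick numerical check of this formula, here is a minimal sketch (my own illustration, not from the references), using the hypothetical test function $f(x,y) = (x-y)^2$: it is convex in $x$ for each fixed $y$, and at $x_0 = 0.3$ the maximizer over $y \in [0,1]$ is the unique point $y_0 = 1$.

```python
import numpy as np

# Hypothetical test function (chosen for illustration, not from the answer):
#   f(x, y) = (x - y)^2, convex in x for each fixed y.
def f(x, y):
    return (x - y) ** 2

def fX(x, ys):
    """Grid approximation of f_X(x) = max_{y in [0, 1]} f(x, y)."""
    return np.max(f(x, ys))

ys = np.linspace(0.0, 1.0, 10_001)  # fine grid over [0, 1]
x0 = 0.3

# Unique maximizer of y -> f(x0, y) is y0 = 1, so near x0 we have
# f_X(x) = (1 - x)^2, and Danskin predicts
#   f_X'(x0) = d/dx f(x, y0) |_{x = x0} = 2 * (x0 - y0) = -1.4.
y0 = ys[np.argmax(f(x0, ys))]
danskin = 2.0 * (x0 - y0)

# Central finite difference of the grid-maximized f_X for comparison.
h = 1e-5
fd = (fX(x0 + h, ys) - fX(x0 - h, ys)) / (2 * h)

print(f"maximizer y0        : {y0:.4f}")      # 1.0000
print(f"Danskin derivative  : {danskin:.6f}")  # -1.400000
print(f"finite difference   : {fd:.6f}")       # ~ -1.400000
```

At $x = 1/2$ the maximizer is not unique ($y = 0$ and $y = 1$ tie), and indeed $f_X(x) = \max(x^2, (1-x)^2)$ has a kink there, matching the uniqueness hypothesis of the theorem.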

References

[1] https://en.wikipedia.org/wiki/Danskin%27s_theorem

[2] R. T. Rockafellar, “On a special class of convex functions”, Journal of Optimization Theory and Applications, vol. 70, pp. 619–621, 1991.

[3] Dimitri P. Bertsekas, “Nonlinear Programming”, 2nd ed., Appendix B.5, p. 717.

River Li
  • 37,323