
I'm doing research on a model arising in electromagnetism, and there is a function, which we denote by $$\boldsymbol{\lambda} = \mathbf{g}(\nabla u) = \nu(|\nabla u|)\nabla u,$$ where $u:\Omega\times[0,T]\to\mathbb{R}$ is the solution to said electromagnetism model, $\Omega$ is a bounded open Lipschitz domain in $\mathbb{R}^2$, and $\nu:\mathbb{R}_0^+\to\mathbb{R}^+$ is a $C^1$ function such that $\mathbf{g}:\mathbb{R}^2\to\mathbb{R}^2$ defined by $\mathbf{g}(\mathbf{x}) := \nu(|\mathbf{x}|)\mathbf{x}$ satisfies the following properties: $$|\mathbf{g}_i(\mathbf{x})| \leq C_1\|\mathbf{x}\|_{\mathbb{R}^2}, \qquad \left|\frac{\partial}{\partial x_j}\mathbf{g}_i(\mathbf{x})\right|\leq C_1 \quad \forall i,j\in\{1,2\},$$ $$\sum_{i,j=1}^2\frac{\partial}{\partial x_j}\mathbf{g}_i(\mathbf{x})y_iy_j \geq C_2\|\mathbf{y}\|^2_{\mathbb{R}^2} \quad \forall \mathbf{x},\mathbf{y}\in\mathbb{R}^2.$$

I know that, by virtue of the implicit function theorem, the ellipticity condition above guarantees that the relation $\boldsymbol{\lambda} = \nu(|\nabla u|)\nabla u$ can be inverted to obtain $\nabla u$ as a function of $\boldsymbol{\lambda}$, say $$\nabla u = \boldsymbol{\psi}(\boldsymbol{\lambda}).$$

In other words, we obtain the inverse function $\boldsymbol{\psi}$ of $\mathbf{g}$. What I would like to ask is: is there any way to prove that this inverse function inherits the properties of $\mathbf{g}$ stated above? I have seen some particular cases in which this is true, so I have tried to prove it myself, but I don't know how to start, and I haven't been able to find references on this subject either.

If anyone can give me some hints or references about this I'd really appreciate it. Thank you in advance.


1 Answer


If I understand your question correctly, it can be restated as follows: you want to compute the second-order Taylor expansion of a multivariable inverse map. That is, given a column vector of functions $\vec y(\vec x)$ whose Taylor expansion up to second order is known at each point, find the corresponding expansion for the inverse function $\vec x(\vec y)$. This can be computed by implicit differentiation, but it's a pain. The method outlined below just requires making some simple-minded algebraic substitutions, so no calculus is involved. (The calculus is swept under the rug, so to speak.)

The key idea is that the one-variable example $y= e^x = 1 + x+ \frac{x^2}{2} +\dots$ and its inverse relation $x= \ln (y) = (y-1) -\frac{1}{2} (y-1)^2 +\ldots$, in which the quadratic term has its sign reversed, generalize to the multivariable setting. Here are the gory details.
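If it helps, this sign flip can be checked symbolically rather than by hand. The sketch below is only an illustration of the one-variable prototype and assumes `sympy` is available:

```python
# Sketch: check that the quadratic term of ln(y) about y = 1 carries the
# opposite sign of the quadratic term of e^x about x = 0 (requires sympy).
import sympy as sp

x, y = sp.symbols('x y')

forward = sp.series(sp.exp(x), x, 0, 3)   # 1 + x + x**2/2 + O(x**3)
inverse = sp.series(sp.log(y), y, 1, 3)   # linear term plus -(y - 1)**2/2

print(forward)
print(inverse)
```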

The first-order expansion of $y$ in terms of $x$, written in matrix notation, starts off as $[y] \approx [L][x]$, where square brackets denote matrices (and $[L]$ is the Jacobian matrix of first-order partials evaluated at the point of expansion).

The second-order expansion $[y] = [L][x] + [Q(x)] +\ldots$ is a sum of linear and quadratic terms.

It is helpful to create new auxiliary variables $[u]$ defined by the linear change of variables $[x]= [L]^{-1} [u]$. (Here $L$ is a constant matrix, the Jacobian matrix with entries frozen). This substitution is done so that $ [y]\approx [u]$ is correct up to second-order terms that will be dealt with next. That is, in these new variables we appear to be working with a transformation that is close to the identity map. This trick simplifies many things.
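Numerically, this "close to the identity" statement is easy to see on a toy example. The sketch below uses plain numpy with a made-up quadratic map and its Jacobian at the origin; it just checks that $y(L^{-1}u)$ differs from $u$ by terms of order $|u|^2$:

```python
# Sketch (numpy only): after substituting x = L^{-1} u, a made-up quadratic
# map agrees with the identity up to second-order terms in u.
import numpy as np

def g(x):
    # toy map expanded about x = 0: linear part L x plus a quadratic part
    x1, x2 = x
    return np.array([2*x1 + x2 + x1*x2,
                     x1 + 3*x2 + x2**2])

L = np.array([[2.0, 1.0],
              [1.0, 3.0]])              # Jacobian of g at the expansion point
Linv = np.linalg.inv(L)

for eps in (1e-1, 1e-2, 1e-3):
    u = eps * np.array([1.0, -1.0])
    err = np.linalg.norm(g(Linv @ u) - u)
    print(eps, err)                     # the error shrinks like eps**2
```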

Then write the column vector $[Q(x)]$ as a matrix product of a square matrix and a column vector, as $[Q(x)]= [\, q_1(x) \,|\, q_2(x) \,|\, q_3(x) \,|\, \ldots \,] [x]$, where the lower-case $q_i(x)$ are column vectors (typeset as separated by vertical bars) forming a square matrix, each of whose entries is a first-order (linear) function of $x$. Recapping, the matrix product $[Q(x)]= [\, q_1(x) \,|\, q_2(x) \,|\, \ldots \,] [x]$ is a column vector whose entries are each quadratic scalar functions of $x$.
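This factorization is not unique. One convenient choice, sketched below under the assumption that sympy is available and on a made-up quadratic part, builds the matrix row by row as $\tfrac12 (H_i x)^{\!\top}$, where $H_i$ is the (constant) Hessian of the $i$-th entry of $Q$:

```python
# Sketch (sympy): factor a made-up vector of quadratic forms Q(x) as M(x)*x,
# where every entry of M(x) is a linear function of x.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

Q = sp.Matrix([x1*x2, x2**2])                    # quadratic part of a toy map

rows = []
for Qi in Q:
    Hi = sp.hessian(Qi, (x1, x2))                # constant 2x2 Hessian of Q_i
    rows.append((sp.Rational(1, 2) * Hi * x).T)  # i-th row, linear in x
M = sp.Matrix.vstack(*rows)

assert (M * x - Q).expand() == sp.zeros(2, 1)    # indeed Q(x) = M(x)*x
print(M)                                         # the columns of M are the q_j(x)
```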

Write this all in terms of $[u]$ instead:

$$y= [u]+ [\, q_1(x(u)) \,|\, q_2(x(u)) \,|\, \ldots \,]\, [L^{-1}]\, [u]$$

$$y= [u]+ [\, \tilde q_1(u) \,|\, \tilde q_2(u) \,|\, \ldots \,]\, [L^{-1}]\, [u]$$

where $\tilde q_i(u) := q_i(L^{-1}u)$ is simply the result of expressing each $x$ in terms of $u$ in every scalar function in every column.

The inverse quadratic Taylor expansion

We claim that the quadratic Taylor expansion of the inverse relation, which solves for $u$ in terms of $y$, is structurally identical to the preceding one: swap the letters $u$ and $y$ everywhere and reverse the sign of the quadratic terms:

$$[u]= [y]- [\, \tilde q_1(y) \,|\, \tilde q_2(y) \,|\, \ldots \,]\, [L^{-1}]\, [y]$$

Finally, you wish to switch from $u$ back to your original variables and solve for $x$ by making the substitution $[x]= [L]^{-1} [u]$; that is, multiply the previous equation from the left by the constant matrix $[L^{-1}]$.
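Putting the recipe together on the same made-up quadratic map as before, and noting that $[\,\tilde q_1(y)\,|\,\ldots\,][L^{-1}][y]$ is just $Q(L^{-1}y)$, a minimal numerical sketch of the approximate inverse (assuming only numpy) could look like this; the rate at which the error shrinks is the point of the check:

```python
# Sketch (numpy only) of the full recipe on a made-up map:
#   g(x) = L x + Q(x),   approximate inverse  x ~ L^{-1} ( y - Q(L^{-1} y) ).
import numpy as np

L = np.array([[2.0, 1.0],
              [1.0, 3.0]])               # Jacobian at the expansion point 0
Linv = np.linalg.inv(L)

def Q(x):                                 # quadratic part of the toy map
    x1, x2 = x
    return np.array([x1 * x2, x2 ** 2])

def g(x):                                 # forward map, expanded about 0
    return L @ x + Q(x)

def approx_inverse(y):                    # second-order inverse expansion
    u = y - Q(Linv @ y)                   # sign-reversed quadratic correction
    return Linv @ u                       # back to the original variables

for eps in (1e-1, 1e-2, 1e-3):
    x_true = eps * np.array([0.7, -0.4])
    err = np.linalg.norm(approx_inverse(g(x_true)) - x_true)
    print(eps, err)                       # the error shrinks like eps**3
```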

Verification of the claim.

Try composing $y= u + [q(u)][u]$, where $[q(u)]$ abbreviates $[\, \tilde q_1(u) \,|\, \tilde q_2(u) \,|\, \ldots \,][L^{-1}]$, with the substitution $u(y)= y- [q(y)][y]$ and see if you get the identity map to the correct order.

$$y \overset{?}{=} \big( y - [q(y)][y] \big) + [q(u(y))]\,[u(y)]$$

$$y \overset{?}{=} \big( y - [q(y)][y] \big) + [q(y- \ldots)]\,[y- \ldots],$$

which is correct up to terms of third order or higher; these arise when you include the suppressed terms of second order denoted by the ellipses ($\ldots$).

I suggest working some simple explicit one-variable and two-variable examples by hand and checking that all the steps work out as claimed. (It is a pain to typeset all the numerical details but I hope these pointers will get you started.)
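For what it's worth, here is one such check done symbolically rather than by hand. It assumes sympy and uses a made-up matrix $q(\cdot)$ whose entries are linear in its argument, so the specific numbers carry no meaning:

```python
# Sketch (sympy): compose y = u + q(u) u with u = y - q(y) y and check that
# the result differs from y only by terms of degree >= 3. The matrix q(.)
# below is made up; the only requirement is that its entries are linear.
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
y = sp.Matrix([y1, y2])

q = lambda v: sp.Matrix([[v[1], 0],
                         [0,    v[1]]]) / 2    # entries linear in v

forward  = lambda v: v + q(v) * v              # y = u + q(u) u
backward = lambda w: w - q(w) * w              # u ~ y - q(y) y

residual = (forward(backward(y)) - y).expand()
print(residual)                                # only degree-3 and degree-4 terms
```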

P.S. The related post "inverse function power series" provides a method for getting the Taylor expansions of arbitrarily high order by a recursive algorithm. It also shows an explicit numerical example.
