If I understand your question, it can be restated as follows: you want to compute the second-order Taylor expansion of a multivariable inverse map.
That is, given a column vector of functions $\vec y(\vec x)$ whose Taylor expansion up to second order is known at each point, find the
corresponding expansion for the inverse function $\vec x(\vec y)$. This can be computed by implicit differentiation but it's a pain. The method outlined below just requires that we make some simple-minded algebraic substitutions, so no calculus is involved. (The calculus is swept under the rug, so to speak.)
The key idea is that the one-variable example $y = e^x = 1 + x + \frac{x^2}{2} + \dots$ and its inverse relation $x = \ln(y) = (y-1) - \frac{1}{2}(y-1)^2 + \ldots$, in which the quadratic term has the reversed sign, generalize to the multivariable setting. Here are the gory details.
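Indeed, substituting the first expansion into the second and keeping terms through second order, the quadratic pieces cancel:
$$\ln(e^x) = \left(x + \tfrac{x^2}{2}\right) - \tfrac{1}{2}\left(x + \tfrac{x^2}{2}\right)^2 + \ldots = x + \tfrac{x^2}{2} - \tfrac{x^2}{2} + O(x^3) = x + O(x^3).$$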
The first-order expansion of $y$ in terms of $x$, written in matrix notation, starts off as
$[y] \approx [L][x]$, where square brackets denote matrices (and $[L]$ is the Jacobian matrix of first-order partials evaluated at the point of expansion).
The second-order expansion $[y] = [L][x] + [Q(x)] + \ldots$ is a sum of linear and quadratic terms.
It is helpful to create new auxiliary variables $[u]$ defined by the linear change of variables $[x] = [L]^{-1}[u]$. (Here $[L]$ is a constant matrix: the Jacobian with its entries frozen at the point of expansion.) This substitution is done so that $[y] \approx [u]$ holds up to the second-order terms, which will be dealt with next. That is, in the new variables we appear to be working with a transformation that is close to the identity map. This trick simplifies many things.
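Spelled out, substituting $[x] = [L]^{-1}[u]$ into the second-order expansion gives
$$[y] = [L][L]^{-1}[u] + [Q(L^{-1}u)] + \ldots = [u] + [Q(L^{-1}u)] + \ldots,$$
so the linear part is now the identity.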
Then write the column vector $[Q(x)]$ as the product of a square matrix and a column vector, $[Q(x)] = [\,q_1(x) \mid q_2(x) \mid q_3(x) \mid \cdots\,][x]$, where the lower-case $q_i(x)$ are column vectors (typeset here as separated by vertical bars) forming a square matrix, each of whose entries is a first-order (linear) function of $x$. Recapping: the matrix product $[\,q_1(x) \mid q_2(x) \mid \cdots\,][x]$ is a column vector whose entries are quadratic scalar functions of $x$.
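For instance, in two variables with made-up coefficients $a, b, c$, one valid choice is
$$[Q(x)] = \begin{pmatrix} a x_1^2 + b x_1 x_2 \\ c x_2^2 \end{pmatrix} = \begin{pmatrix} a x_1 + b x_2 & 0 \\ 0 & c x_2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix},$$
so that $q_1(x) = \binom{a x_1 + b x_2}{0}$ and $q_2(x) = \binom{0}{c x_2}$. The factorization is not unique; the cross term $b x_1 x_2$ could just as well be split between the two columns.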
Write this all in terms of $[u]$ instead:
$$[y] = [u] + [\,q_1(x(u)) \mid q_2(x(u)) \mid \cdots\,][L]^{-1}[u]$$
$$[y] = [u] + [\,\tilde q_1(u) \mid \tilde q_2(u) \mid \cdots\,][L]^{-1}[u]$$
where $\tilde q_i(u) = q_i(L^{-1}u)$ is simply the result of expressing each $x$ in terms of $u$ in every scalar entry of every column.
The inverse quadratic Taylor expansion.
We claim that the quadratic Taylor expansion of the inverse functional relation, which solves for $u$ in terms of $y$, is structurally identical to the preceding: swap the letters $u$ and $y$ everywhere and reverse the signs of the quadratic terms:
$$[u] = [y] - [\,\tilde q_1(y) \mid \tilde q_2(y) \mid \cdots\,][L]^{-1}[y]$$
Finally, to return from $u$ to your original variables and solve for $x$, make the substitution $[x] = [L]^{-1}[u]$; that is, multiply the previous equation from the left by the constant matrix $[L]^{-1}$.
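Assembling the steps, the claimed second-order expansion of the inverse reads
$$[x] = [L]^{-1}[u] = [L]^{-1}\Big( [y] - [\,\tilde q_1(y) \mid \tilde q_2(y) \mid \cdots\,][L]^{-1}[y] \Big).$$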
Verification of the claim.
Try composing $y = u + [q(u)][u]$ with the substitution $u(y) = y - [q(y)][y]$, where $[q(\cdot)]$ abbreviates the square matrix $[\,\tilde q_1 \mid \tilde q_2 \mid \cdots\,][L]^{-1}$ whose entries are linear in their argument, and see whether you get the identity map to the correct order.
$$y \overset{?}{=} y - [q(y)][y] + [q(u(y))][u(y)]$$
$$y \overset{?}{=} y - [q(y)][y] + [q(y - \ldots)][y - \ldots]$$
This is correct up to terms of third order or higher: since each entry of $[q(\cdot)]$ is linear, $[q(y - \ldots)][y - \ldots] = [q(y)][y] + (\text{third-order terms})$, where the extra terms arise from the suppressed second-order terms denoted by the ellipses ($\ldots$).
I suggest working some simple explicit one-variable and two-variable examples by hand and checking that all the steps work out as claimed. (It is a pain to typeset all the numerical details but I hope these pointers will get you started.)
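If you would rather let a computer do the bookkeeping, here is a minimal SymPy sketch of such a check on a made-up two-variable example; the map, the matrix $[L]$, and the factorization of $[Q]$ below are all hypothetical choices for illustration, not part of the method itself.

```python
import sympy as sp

# A made-up quadratic map y(x) = L x + Q(x): here L plays the role of the
# Jacobian frozen at the expansion point x = 0, and Q holds the quadratic terms.
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
L = sp.Matrix([[2, 1], [1, 1]])   # hypothetical Jacobian (det = 1)
Q = sp.Matrix([x1**2, x1*x2])     # hypothetical quadratic part
y_of_x = L * x + Q

# Step 1: factor Q(x) = M(x) x with M(x) square and linear in x.
# (The factorization is not unique; any valid choice works.)
M = sp.Matrix([[x1, 0], [x2, 0]])
assert (M * x - Q).expand() == sp.zeros(2, 1)

# Step 2: pass to u = L x, i.e. replace x by L^{-1} u inside M.
Linv = L.inv()
u1, u2 = sp.symbols('u1 u2')
xu = Linv * sp.Matrix([u1, u2])
M_tilde = M.subs({x1: xu[0], x2: xu[1]}, simultaneous=True)

# Step 3: the claimed inverse expansion: swap u and y, flip the sign of the
# quadratic term, and return to x via x = L^{-1} u.
yv1, yv2 = sp.symbols('y1 y2')
yv = sp.Matrix([yv1, yv2])
M_tilde_y = M_tilde.subs({u1: yv1, u2: yv2}, simultaneous=True)
u_of_y = yv - M_tilde_y * Linv * yv
x_of_y = Linv * u_of_y

# Step 4: compose x(y(x)); the result must equal x up to third-order terms.
comp = x_of_y.subs({yv1: y_of_x[0], yv2: y_of_x[1]}, simultaneous=True)
diff = (comp - x).expand()
for entry in diff:
    low = [(m, c) for m, c in sp.Poly(entry, x1, x2).terms()
           if sum(m) < 3 and c != 0]
    assert not low, low           # only degree >= 3 terms survive
print("Inverse expansion verified through second order.")
```

Changing the minus sign in `u_of_y` to a plus makes the final assertion fail, which is a quick way to see that the sign reversal really is the content of the claim.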
P.S. The related post "inverse function power series" provides a method for obtaining the Taylor expansion to arbitrarily high order via a recursive algorithm; it also shows an explicit numerical example.