From the modern perspective, differentiation and integration act on functions, not on equations. This fact gets hidden because the most convenient way to write common functions is to use an algebraic formula that expresses the output in terms of the input.
For example, $y = f(x) = x^{2}$ commonly denotes the squaring function, while $x = g(y) = \sqrt{y}$ denotes one branch of the inverse of $f$. (In each equation, $x$ and $y$ are dummy variables; one could write $y = g(x) = \sqrt{x}$ with identical meaning. The benefit of writing $x = \sqrt{y}$ is that these dummy variables are (mostly) compatible with the dummy variables in $y = x^{2}$, allowing us to work with both functions "in the same scope".)
"Differentiating $y = x^{2}$ with respect to $x$, obtaining $\frac{dy}{dx} = 2x$" amounts to the assertion $f'(x) = 2x$; the derivative of the squaring function is the multiplication-by-two function.
"Differentiating $y = x^{2}$ with respect to $y$, obtaining $1 = 2x\frac{dx}{dy}$" amounts to the assertion $g'(y) = \frac{1}{2\sqrt{y}}$.
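Both assertions are easy to check numerically. The sketch below (the helper `diff` and the tolerance are illustrative choices, not from the text) approximates each derivative by a symmetric difference quotient and compares it with the claimed formula:

```python
import math

def f(x):
    return x ** 2          # the squaring function

def g(y):
    return math.sqrt(y)    # one branch of its inverse

def diff(func, t, h=1e-6):
    """Symmetric difference quotient approximating func'(t)."""
    return (func(t + h) - func(t - h)) / (2 * h)

x = 1.7
print(abs(diff(f, x) - 2 * x) < 1e-8)                    # f'(x) = 2x

y = f(x)
print(abs(diff(g, y) - 1 / (2 * math.sqrt(y))) < 1e-8)   # g'(y) = 1/(2*sqrt(y))
```

Both comparisons print `True`: the difference quotients match $f'(x) = 2x$ and $g'(y) = \frac{1}{2\sqrt{y}}$ to well within the tolerance.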
Leibniz notation blurs an important distinction, denoting a derivative function $f'$ and the value of a derivative $f'(x)$ at an arbitrary point with the same symbol, $\frac{dy}{dx}$. Inevitably this leads to confusion among students who carefully ponder the meaning of notation. The "identity"
$$
\frac{dx}{dy} = \frac{1}{dy/dx}
\tag{1a}
$$
is a perfect example. Written more carefully, the claim is that if the composition $f \circ g$ is the identity function on some open interval $I$ and if $f$ is differentiable on $g(I)$, then
$$
g'(y) = \frac{1}{f'(x)} = \frac{1}{f'(g(y))}
\tag{1b}
$$
for all $y$ in $I$ where $f'(g(y)) \neq 0$. While (1a) looks like manipulation of fractions, closer inspection of (1b) reveals that the functions $f'$ and $g'$ are not reciprocals; rather, their values at suitably-chosen inputs related by $f$ are reciprocals. Equation (1a) is error-prone under either interpretation of Leibniz notation.
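The distinction is concrete enough to demonstrate with the running example $f(x) = x^{2}$, $g(y) = \sqrt{y}$ (these specific choices, and the function names below, are illustrative assumptions): the reciprocal relation holds between $g'(y)$ and $f'$ evaluated at the *related* input $g(y)$, not at $y$ itself.

```python
import math

def fprime(x):
    return 2 * x                   # f'(x) for f(x) = x**2

def gprime(y):
    return 1 / (2 * math.sqrt(y))  # g'(y) for g(y) = sqrt(y)

y = 9.0
print(gprime(y) == 1 / fprime(math.sqrt(y)))  # True: reciprocal at inputs related by f
print(gprime(y) == 1 / fprime(y))             # False: not reciprocal at the same input
```

At $y = 9$ this gives $g'(9) = \frac{1}{6} = \frac{1}{f'(3)}$, whereas $\frac{1}{f'(9)} = \frac{1}{18}$.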
In this same spirit, integrals are not taken "with respect to a variable". Spivak's notation, $\int_{a}^{b} f$ instead of the near-universal $\int_{a}^{b} x^{2}\, dx$ (for $f(x) = x^{2}$), explicitly acknowledges this. Unfortunately, this analogue of Newton notation for integrals is inconvenient in practice. As with derivatives, it's much easier to specify a function (and its (anti-)derivatives) by giving the value at a "generic" input $x$ and using compatibly-named dummy variables between multiple functions.
"Integrating the chain rule" (with $f$ continuous and $h'$ continuously-differentiable, say) gives
$$
\int_{h(a)}^{h(b)} f(u)\, du = \int_{a}^{b} f(h(x))\, h'(x)\, dx.
\tag{2a}
$$
This is customarily explained in calculus books by "substituting $u = h(x)$, so that $du = h'(x)\, dx$, and $u = h(a)$ when $x = a$, etc." It should be clear, however, why calling the respective sides "the integral of $y = f(u)$ with respect to $u$" and "the integral of $y = f(u)$ with respect to $x$" would be logically inconsistent. The limits of integration differ (as SchrodingersCat notes), and the integrands are not the same function.
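A numerical check of (2a) makes the point that the two sides agree even though the integrands and limits differ. The concrete choices $f(u) = \cos u$, $h(x) = x^{2}$ on $[0, 1]$, and the `simpson` helper, are illustrative assumptions:

```python
import math

def simpson(func, lo, hi, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    step = (hi - lo) / n
    total = func(lo) + func(hi)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * func(lo + i * step)
    return total * step / 3

def f(u):
    return math.cos(u)

def h(x):
    return x ** 2

def hprime(x):
    return 2 * x

a, b = 0.0, 1.0
left = simpson(f, h(a), h(b))                         # integral of f(u) du over [h(a), h(b)]
right = simpson(lambda x: f(h(x)) * hprime(x), a, b)  # integral of f(h(x)) h'(x) dx over [a, b]
print(abs(left - right) < 1e-9)  # True: the two sides of (2a) agree
```

Here the left side integrates $\cos u$ from $0$ to $1$, the right integrates $2x\cos(x^{2})$ from $0$ to $1$; both equal $\sin 1$.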
In "Newtonian" notation, for the record, the change of variables formula reads
$$
\int_{h(a)}^{h(b)} f = \int_{a}^{b} (f \circ h)\, h'.
\tag{2b}
$$