I was somewhat interested in your comment that “(1) proves (2) simply because (1) assumes what is to be proved in (2)”. This sounds like a very common misunderstanding of how induction works. For example, this question asks something very similar-sounding:
> In the next step, one assumes the $n$th case is true, but how is this not assuming what we are trying to prove? Aren't we trying to prove any $n$th case is true? So how can we assume this without employing circular reasoning?
$\def\ri{{\color{darkred}{\implies}}} \def\gi{{\color{darkgreen}{\implies}}}$
This isn't the place to explain induction de novo, which would make this already too-long answer even longer, so I'll try to summarize, without trying to justify the logic. Suppose $\Phi(n)$ is some claim about the number $n$. Induction says that from $$\bbox[5px,border:2px solid red]{\Phi(0)}\tag{$\cdot$}$$ and $$\bbox[5px,border:2px solid red]{\forall k. \Phi(k)\gi\Phi(k+1)}\tag{$\cdot\cdot$}$$ we can conclude $$\bbox[5px,border:2px solid red]{\forall n.\Phi(n).}\tag{$\cdots$}$$
There are two important subtleties in the notation here:
$(\cdot\cdot)$ can be confusing; the scope of the $\forall$ extends all the way to the right, so that $(\cdot\cdot)$ is an abbreviation for $$\forall k.( \Phi(k)\gi\Phi(k+1))\tag{$\stackrel{\cdot\cdot}\smile$}$$ and is not the same as $$ (\forall k. \Phi(k))\gi\Phi(k+1)\tag{$\stackrel{\cdot\cdot}\frown$}$$
Although the variables in $(\cdot\cdot)$ and $(\cdots)$ are often written with the same letter, typically $n$, they are logically unrelated. I used different letters to make this clear.
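If it helps to see the shape of this rule concretely, here is a minimal sketch in Lean 4 (the names `Φ`, `base`, and `step` are my own labels, not anything standard):

```lean
-- The induction principle: from Φ(0) and ∀k. Φ(k) → Φ(k+1),
-- conclude ∀n. Φ(n).  `base` plays the role of (·) and `step` of (··).
example (Φ : Nat → Prop)
    (base : Φ 0)
    (step : ∀ k, Φ k → Φ (k + 1)) :
    ∀ n, Φ n := by
  intro n
  induction n with
  | zero => exact base
  | succ k ih => exact step k ih
```

Note that the variable bound in `step` (there called `k`) and the one in the conclusion (`n`) are distinct, just as in $(\cdot\cdot)$ and $(\cdots)$ above.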
In my original post, I suggested a sequence of lemmas, which I will now try to explain in more detail. Let us consider the following claim, which I will call $\Phi_0(n)$:
$$\forall z. P(0,n)\land P(n,z)\ri P(0,z).$$
This has a free variable, $n$, so it is neither true nor false. It is a claim about an unspecified number $n$, and might be true for some values of $n$ and false for others. For example, this is claim $\Phi_0(17)$:
$$\forall z. P(0,17)\land P(17,z)\ri P(0,z).$$
Notice that it has no free variables, and so must be either true or false. (In fact it is true.)
I have used a red implication sign in $\Phi_0$ so that we can tell it apart from the green implication sign that appears in the induction part of an induction proof.
Following the pattern in the red boxes above, induction says that if we can prove
$$\begin{align}
\Phi_0(0)& \tag0\\
\forall k.(\Phi_0(k)& \gi \Phi_0(k+1))\tag1
\end{align}$$
then by induction we can conclude
$$\forall n.\Phi_0(n).\tag2$$
Replacing $\Phi_0$ with its definition, this says that if we can prove
$$\forall z. P(0,0)\land P(0,z) \ri P(0,z)\tag0$$
and
$$\begin{align}
\forall k.&((\forall z. P(0,k) &&\land P(k,z)&&\ri P(0,z))\\
&&&\gi\tag1\\
&(\forall z. P(0,k+1)&&\land P(k+1,z)&&\ri P(0,z)))
\end{align}$$
then by induction we can conclude $(2)$, which is
$$\forall n.\forall z. P(0,n)\land P(n,z)\ri P(0,z).\tag2$$
Notice that $(1)$ contains three implications: the green one was there before, because the induction step requires it, and also each instance of $\Phi_0$ contains a red one. In $(1)$, we're not proving that $a\implies b$; we're proving that if $a\implies b$ then $c\implies d$.
You agreed that $(0)$ was trivial, and I still think that $(1)$ should be provable. From these, induction says we can conclude $(2)$.
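To make the shape of this first induction concrete, here is a sketch in Lean 4; since we know nothing about $P$ itself here, the base case $(0)$ and the step $(1)$ are left as `sorry`:

```lean
-- Φ₀(n) is ∀z. P(0,n) ∧ P(n,z) → P(0,z); we prove ∀n. Φ₀(n)
-- by induction, leaving (0) and (1) unproved since P is arbitrary here.
example (P : Nat → Nat → Prop) :
    ∀ n, ∀ z, P 0 n ∧ P n z → P 0 z := by
  intro n
  induction n with
  | zero => sorry      -- (0): ∀z. P(0,0) ∧ P(0,z) → P(0,z)
  | succ k ih => sorry -- (1); here ih : ∀z. P(0,k) ∧ P(k,z) → P(0,z)
```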
Okay, this would be a good place to pause and take a rest.
Now we consider $(2)$ as the base case of its own induction. Define $\Phi_2(n)$ to mean $$\forall y.\forall z. P(n,y)\land P(y,z) \ri P(n,z).$$ As in all induction proofs, the claim has exactly one free variable, $n$ in this case. We would like to prove $\forall x.\Phi_2(x)$, because this is exactly the theorem you said you were trying to prove. How can we do this?
Again, following the same pattern from the red boxes, induction says that if we can prove
$$\begin{align}
\Phi_2(0)& \tag{$2'$}\\
\forall k.(\Phi_2(k)& \gi \Phi_2(k+1))\tag3
\end{align}$$
then we can conclude
$$\forall n.\Phi_2(n),\tag4$$
which is exactly what you are looking for, but with $n$ instead of $x$; the letter does not matter. Expanding $\Phi_2$ in these premises, we find that if we can prove
$$\forall y .\forall z. P(0,y)\land P(y,z) \ri P(0,z)\tag{$2'$}$$
and
$$\begin{align}
\forall k.&((\forall y.\forall z. P(k,y) &&\land P(y,z)&&\ri P(k,z))\\
&&&\gi \tag3\\
&(\forall y.\forall z. P(k+1,y)&&\land P(y,z)&&\ri P(k+1,z)))
\end{align}$$
then we can conclude $(4)$, which is what you want. Item $(2')$ is already done; it is identical to $(2)$ except that the letter $n$ has become $y$. I am pretty sure that $(3)$ is also doable, although I think you might need a third lemma along the way.
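The Lean 4 sketch of this second induction has exactly the same skeleton as the first, with $\Phi_2$ in place of $\Phi_0$:

```lean
-- Φ₂(n) is ∀y.∀z. P(n,y) ∧ P(y,z) → P(n,z); again induction on n.
example (P : Nat → Nat → Prop) :
    ∀ n, ∀ y, ∀ z, P n y ∧ P y z → P n z := by
  intro n
  induction n with
  | zero => sorry      -- (2′), which is (2) with n renamed to y
  | succ k ih => sorry -- (3); here ih is Φ₂(k)
```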
In any case, this is the general pattern, which does not depend on any trick such as a special property of $\Phi$.
The only other thing I would like to add is that this combined pattern, where you have a formula with two variables and you use induction on one variable and then the other, is not some crazy thing I made up, but is common enough to have a name; it is called a “double induction” and in more advanced mathematics one often sees things like “and then we can prove $\Phi(x,y)$ for all $x$ and $y$ via a straightforward double induction.” The general pattern is that from these:
$$\begin{align}\Phi(0,0)& \\
\forall j.(\Phi(0,j)& \implies \Phi(0,j+1)) \\
\forall k.((\forall n.\Phi(k,n))& \implies (\forall n.\Phi(k+1,n)))
\end{align}
$$we can conclude $$\forall m.\forall n.\Phi(m,n).$$
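As a Lean 4 sketch, with the three premises taken as hypotheses rather than proved, the double induction looks like this:

```lean
-- Double induction: induction on m, with the base case ∀n. Φ(0,n)
-- itself proved by an inner induction on n.
example (Φ : Nat → Nat → Prop)
    (h₀₀  : Φ 0 0)
    (hrow : ∀ j, Φ 0 j → Φ 0 (j + 1))
    (hcol : ∀ k, (∀ n, Φ k n) → ∀ n, Φ (k + 1) n) :
    ∀ m n, Φ m n := by
  intro m
  induction m with
  | zero =>
    intro n
    induction n with
    | zero => exact h₀₀
    | succ j ih => exact hrow j ih
  | succ k ih => exact hcol k ih
```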
One can of course extend this pattern to triple induction and so forth. I rather wish now that I had started my answer with this point, but I can't justify another rewrite. Sorry!
The rest of this post is my original answer, which you have already read. I left it here for reference.
I haven't thought this through (I'm in bed, and I'll try to come back in the morning and fill in details) but I think you ought to be able to proceed via the following sequence of lemmas:
$$\begin{align}
\forall z.P(0,0)\land P(0,z)&\implies P(0,z)\qquad\tag{0}\text{(trivial)}\\
\forall z.(P(0,y)\land P(y,z) \implies P(0,z))&\implies(P(0,y+1)\land P(y+1,z) \implies P(0,z))\tag{1}\\
\forall y.\forall z.(P(0,y)\land P(y,z) &\implies P(0,z))\tag{2}\\
\forall y.\forall z.(P(x,y)\land P(y,z) \implies P(x,z))&\implies(P(x+1,y)\land P(y,z) \implies P(x+1,z))\tag{3}\\
\forall x.\forall y.\forall z.P(x,y)\land P(y,z) &\implies P(x,z)\tag4
\end{align}
$$
You get $(2)$ from $(0)$ and $(1)$ by induction on $y$, and then you get $(4)$ from $(2)$ and $(3)$ by induction on $x$. And $(4)$ is the theorem you wanted to prove. The key point is that each induction step adds only one quantifier, and you need three quantifiers. The first one is so easy you can get it for free, so you need two inductions.
It's possible that to get $(3)$ you'll need another induction on $y$; like I said, I haven't worked it all out.
The observation that $P$ is equality is a trick, which happens to work for this particular example. But in general you would need to use the pattern above, and I think that's what you were looking for.