You ask: $ $ are there easier proofs of the linear representation of the gcd? $ $ (i.e. Bezout's equation). $ $ One candidate is the conceptual proof below, which highlights the key implicit ideal structure, whose fundamental role will become clearer when one studies abstract algebra and number theory.
The set $\rm\,S\,$ of integers of the form $\rm\,a\,x + b\,y,\ x,y\in \mathbb Z\,$ is closed under subtraction so, by the Lemma below, all $\rm\,n\in S\,$ are divisible by the least positive $\rm\,d\in S.\,$ Thus $\rm\,a,b\in S\:$ $\Rightarrow$ $\rm\:d\,|\,a,b,\:$ i.e. $\rm\,d\,$ is a common divisor of $\rm\,a,b,\,$ necessarily greatest, by $\rm\,c\,|\,\color{#0a0}{a,b}\,$ $\Rightarrow$ $\rm\,c\,|\,\color{#0a0}{d =a\, x_1+b\, y_1}$ $\Rightarrow$ $\rm\,c\le d\,$ (i.e. common divisors representable in such $\rm\color{#0a0}{linear\ form}$ are always greatest).
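For a concrete instance: with $\rm\,a = 12,\ b = 8\,$ the least positive element of $\rm\,S\,$ is $$\rm d \,=\, 4 \,=\, 12\cdot(-1) + 8\cdot 2$$ and every common divisor $\rm\,c\,$ of $\,12,\,8\,$ (namely $\,\pm1,\pm2,\pm4$) divides this linear combination, hence $\rm\,c\le 4 = \gcd(12,8).$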
Lemma $\ \ $ If a nonempty set of positive integers $\rm\: S\:$ satisfies $\rm\ n > m\ \in\ S \ \Rightarrow\ \: n-m\ \in\ S$
then every element of $\rm\:S\:$ is a multiple of the least element $\rm\:\ell \in S.$
Proof ${\bf\ 1}\,\ $ If not, there is a least nonmultiple $\,n\in \rm S,\,$ and $\rm\,n > \ell,\,$ so $\rm\,n-\ell \in S\,$ is a smaller nonmultiple of $\rm\,\ell,\,$ contra leastness of $\rm\,n.$
Proof ${\bf\ 2}\,\rm\ \ S\,$ closed under subtraction $\rm\,\Rightarrow\,S\,$ closed under remainder (mod), when it's $\ne 0,$ since mod is computed by repeated subtraction, i.e. $\rm\ a\ mod\ b\, =\, a - k b\, =\, a\!-\!b\!-\!b\!-\cdots\! -\!b.\,$ Therefore $\rm\,n\in S\,$ $\Rightarrow$ $\rm\, (n\ mod\ \ell) = 0,\,$ else it is in $\,\rm S\,$ and smaller than $\rm\,\ell,\,$ contra minimality of $\rm\,\ell.$
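For example, $\rm\ 17\ mod\ 5\, =\, 17 - 3\cdot 5\, =\, 17-5-5-5\, =\, 2,\,$ so if $\,17,\,5\in\rm S\,$ then closure under subtraction forces $\,2\in\rm S\,$ as well.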
Remark $\ $ In a nutshell, two applications of induction yield the following inferences
$$\begin{align} &\rm S\,\ \rm closed\ under\ {\bf subtraction}\\[.1em]
\Rightarrow\ \ &\rm S\,\ closed\ under\ {\bf mod} = remainder = repeated\ subtraction \\[.1em]
\Rightarrow\ \ &\rm S\,\ closed\ under\ {\bf gcd} = repeated\ mod\ (Euclidean\ algorithm) \end{align}\qquad\qquad$$
Interpreted constructively, this yields the extended Euclidean algorithm for the gcd. Namely, $ $ starting from the two elements of $\rm\,S\,$ that we know: $\rm\ a \,=\, 1\cdot a + 0\cdot b,\ \ b \,=\, 0\cdot a + 1\cdot b,\, $ we search for the least element of $\rm\,S\,$ by repeatedly subtracting elements of $\,\rm S\,$ to produce smaller elements of $\rm\,S\,$ (while keeping track of each element's linear representation in terms of $\rm\,a\,$ and $\rm\,b).\:$ This is essentially the subtractive form of the Euclidean GCD algorithm (vs. the mod / remainder form), where each reduction / descent step employs subtraction (vs. iterated subtraction = mod). In more general rings with a Euclidean division algorithm, e.g. polynomials over a field, we need to use proof $2$ to descend to a "smaller" element via mod (remainder), since generally mod can no longer be calculated by repeated subtraction.
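Below is a minimal Python sketch of this subtractive procedure (the helper `bezout_subtractive` is hypothetical, written only to illustrate the idea): each element $\rm\,n\in S\,$ is stored as a triple $\rm\,(n,x,y)\,$ with $\rm\,n = a\,x+b\,y,\,$ and each step subtracts the smaller tracked element from the larger, which preserves that invariant.

```python
# Sketch of the subtractive extended Euclidean algorithm described above.
# Each element n of S is stored as a triple (n, x, y) with n = a*x + b*y;
# subtracting triples componentwise preserves this invariant.

def bezout_subtractive(a, b):
    """Return (g, x, y) with g = gcd(a, b) = a*x + b*y, for positive integers a, b."""
    u = (a, 1, 0)   # a = 1*a + 0*b
    v = (b, 0, 1)   # b = 0*a + 1*b
    while u[0] != 0 and v[0] != 0:
        if u[0] >= v[0]:
            # subtract the smaller element from the larger, componentwise
            u = (u[0] - v[0], u[1] - v[1], u[2] - v[2])
        else:
            v = (v[0] - u[0], v[1] - u[1], v[2] - u[2])
    return u if u[0] != 0 else v

print(bezout_subtractive(12, 8))   # (4, -1, 2), i.e. gcd = 4 = 12*(-1) + 8*2
```

Replacing the single subtraction by a remainder step (update $\rm\,u\,$ to $\rm\,u - q\,v\,$ componentwise, $\rm\,q = \lfloor u_0/v_0\rfloor$) gives the usual mod-based extended Euclidean algorithm corresponding to proof $2$.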
See this answer for a direct inductive proof. The conceptual structure will be clarified when one studies ideals of rings, where the above proof generalizes to show that Euclidean domains are PIDs.
Beware $ $ This $\rm\color{#0a0}{linear}$ representation of the gcd need not hold true in all domains where gcds exist, e.g. in the domain $\rm\:D = \mathbb Q[x,y]\:$ of polynomials in $\rm\:x,y\:$ with rational coefficients we have $\rm\:gcd(x,y) = 1\:$ but there are no $\rm\:f(x,y),\: g(x,y)\in D\:$ such that $\rm\:x\:f(x,y) + y\:g(x,y) = 1;\:$ indeed, if so, then evaluating at $\rm\:x = 0 = y\:$ yields $\,0 = 1\,$ in $\rm D,\,$ contra $\rm D$ is a domain.