Problem statement: underdetermined system
Start with the linear system
$$
\begin{align}
\mathbf{A} x &= b \\
%
\left[
\begin{array}{cc}
1 & -1 \\
1 & -1 \\
\end{array}
\right]
%
\left[
\begin{array}{c}
x \\
y
\end{array}
\right]
%
&=
%
\left[
\begin{array}{c}
4 \\
6
\end{array}
\right]
%
\end{align}
$$
The system has matrix rank $\rho = 1$; therefore, if a solution exists, it will not be unique.
Provided $b\notin \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}$, we are guaranteed a least squares solution
$$
x_{LS} = \left\{ x\in\mathbb{C}^{2} \colon \lVert \mathbf{A} x - b \rVert_{2}^{2} \text{ is minimized} \right\}
\tag{1}
$$
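As a quick numerical cross-check (a sketch assuming NumPy; not part of the derivation), the rank deficiency can be confirmed directly:

```python
import numpy as np

# System matrix and data vector from the problem statement
A = np.array([[1.0, -1.0],
              [1.0, -1.0]])
b = np.array([4.0, 6.0])

# Numerical rank confirms the deficiency rho = 1
rank = np.linalg.matrix_rank(A)
print(rank)  # 1
```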
Subspace resolution
By inspection, the domain resolves into the row space and its orthogonal complement, the null space:
$$
\color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \oplus
\color{red}{\mathcal{N} \left( \mathbf{A} \right)}
=
\color{blue}{\left[
\begin{array}{r}
1 \\
-1
\end{array}
\right]} \oplus
\color{red}{\left[
\begin{array}{c}
1 \\
1
\end{array}
\right]}
$$
The codomain resolves into the column space and the null space of $\mathbf{A}^{*}$:
$$
\color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus
\color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}
=
\color{blue}{\left[
\begin{array}{c}
1 \\
1
\end{array}
\right]} \oplus
\color{red}{\left[
\begin{array}{r}
-1 \\
1
\end{array}
\right]}
$$
The coloring indicates vectors in the $\color{blue}{range}$ space and the $\color{red}{null}$ space.
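These spanning vectors can be verified numerically (a sketch assuming NumPy; the variable names are illustrative, with `lnull_vec` standing for the left null space vector):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0, -1.0]])

row_vec   = np.array([1.0, -1.0])   # spans R(A*), blue
null_vec  = np.array([1.0,  1.0])   # spans N(A),  red
col_vec   = np.array([1.0,  1.0])   # spans R(A),  blue
lnull_vec = np.array([-1.0, 1.0])   # spans N(A*), red

print(A @ null_vec)        # [0. 0.] -> null_vec lies in N(A)
print(A.T @ lnull_vec)     # [0. 0.] -> lnull_vec lies in N(A*)
print(row_vec @ null_vec)  # 0.0 -> the direct summands are orthogonal
print(col_vec @ lnull_vec) # 0.0
```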
Finding the least squares solution
Since $\color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)}$ is one dimensional, the solution vector will have the form
$$
\color{blue}{x_{LS}} = \alpha
\color{blue}{\left[
\begin{array}{r}
1 \\
-1
\end{array}
\right]}
$$
The goal is to find the constant $\alpha$ to minimize (1):
$$
\color{red}{r}^{2} = \color{red}{r} \cdot \color{red}{r} =
\lVert
\color{blue}{\mathbf{A} x_{LS}} - b
\rVert_{2}^{2}
=
8 \alpha ^2-40 \alpha +52
$$
The minimum of this quadratic occurs where the derivative vanishes, $16 \alpha - 40 = 0$:
$$
\alpha = \frac{5}{2}
$$
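A numerical sketch (assuming NumPy) confirms that $\alpha = 5/2$ minimizes the squared residual, with minimum value $8(5/2)^2 - 40(5/2) + 52 = 2$:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0, -1.0]])
b = np.array([4.0, 6.0])
v = np.array([1.0, -1.0])  # row-space direction

def r2(alpha):
    """Squared residual along the row-space direction."""
    return np.linalg.norm(A @ (alpha * v) - b) ** 2

# Vertex of the quadratic 8a^2 - 40a + 52 lies at a = 40/16 = 5/2
print(r2(2.5))            # 2.0, the minimum value
print(r2(2.5) < r2(2.4))  # True
print(r2(2.5) < r2(2.6))  # True
```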
Least squares solution
The set of least squares minimizers in (1) is then the affine set given by
$$
x_{LS} = \frac{5}{2}
\color{blue}{\left[
\begin{array}{r}
1 \\
-1
\end{array}
\right]}
+
\xi
\color{red}{\left[
\begin{array}{r}
1 \\
1
\end{array}
\right]}, \qquad \xi\in\mathbb{C}
$$
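Two properties of this affine set are easy to check numerically (a sketch assuming NumPy): every member achieves the same minimal error, since the homogeneous term is annihilated by $\mathbf{A}$, and the pseudoinverse picks out the minimum-norm member, $\xi = 0$:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0, -1.0]])
b = np.array([4.0, 6.0])

x_part   = 2.5 * np.array([1.0, -1.0])  # particular solution, blue
null_vec = np.array([1.0, 1.0])         # homogeneous direction, red

# Every member of the affine set achieves the same minimal error
for xi in (-3.0, 0.0, 7.5):
    x = x_part + xi * null_vec
    print(np.linalg.norm(A @ x - b) ** 2)  # 2.0 each time

# The pseudoinverse selects the minimum-norm member (xi = 0)
print(np.linalg.pinv(A) @ b)  # [ 2.5 -2.5]
```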
The plot below shows how the total error $\lVert \mathbf{A} x_{LS} - b \rVert_{2}^{2}$ varies with the fit parameters. The blue dot marks the particular solution; the dashed line traces the homogeneous family, the contour along which the error holds its minimum value.
Addendum: Existence of the Least Squares Solution
To address the insightful question of @RodrigodeAzevedo, consider the linear system:
$$
\begin{align}
\mathbf{A} x &= b \\
%
\left[
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right]
%
\left[
\begin{array}{c}
x \\
y
\end{array}
\right]
%
&=
%
\left[
\begin{array}{c}
0 \\
1
\end{array}
\right]
%
\end{align}
$$
The data vector $b$ is entirely in the null space of $\mathbf{A}^{*}$:
$b\in \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}$
As pointed out, the system matrix admits a singular value decomposition; one instance is:
$$\mathbf{A} = \mathbf{U}\, \Sigma\, \mathbf{V}^{*} = \mathbf{I}_{2}
\left[
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right]
\mathbf{I}_{2}$$
and the concomitant pseudoinverse,
$$\mathbf{A}^{\dagger} = \mathbf{V}\, \Sigma^{\dagger} \mathbf{U}^{*} =
\mathbf{I}_{2}
\left[
\begin{array}{cc}
1 & 0 \\
0 & 0 \\
\end{array}
\right]
\mathbf{I}_{2} = \mathbf{A}$$
Following least squares canon, the particular solution to the least squares problem is computed as
$$
\color{blue}{x_{LS}} = \mathbf{A}^{\dagger} b =
\color{red}{\left[
\begin{array}{c}
0 \\
0 \\
\end{array}
\right]}
\qquad \Rightarrow\Leftarrow
$$
The color collision (the result lands in the $\color{red}{\text{null space}}$ rather than the $\color{blue}{\text{range space}}$) signals the problem: the particular solution has no component in the row space $\color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)}$. Mathematicians habitually exclude the $0$ vector as a solution to linear problems.
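The addendum's claims check out numerically (a sketch assuming NumPy): this matrix is its own pseudoinverse, the "solution" it produces is the zero vector, and the unit residual cannot be reduced:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
b = np.array([0.0, 1.0])

A_pinv = np.linalg.pinv(A)
print(np.allclose(A_pinv, A))  # True: A is its own pseudoinverse here

x_ls = A_pinv @ b
print(x_ls)  # [0. 0.] -> the zero vector, since b lies in N(A*)

# The residual stays at 1: A maps everything into span{[1, 0]},
# while b sits entirely along [0, 1].
print(np.linalg.norm(A @ x_ls - b) ** 2)  # 1.0
```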