Many questions arise. What is the size of your linear system? What tools are you using to solve the problem? What is your model? Is your system matrix fixed for each set of measurements?
The answers will steer the solution down different paths. In general, if you are writing a computer program, you will call a library like LAPACK, which has robust routines to solve the least squares problem.
1.) Alert user @Shiyue has pointed out that there are fast ways to compute $\mathbf{A}^{+}$ that don't involve the SVD. What is the best way to compute the pseudoinverse of a matrix?
In numerical computation, it is a cardinal sin to compute the matrix inverse. You instead compute the solution vector directly.
Please read up on numerical linear algebra lest you lose a summer to reinventing the wheel. This forum is a great place to track your progress. Writing code to understand computation is an excellent practice. The world needs more people who understand algorithms.
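For concreteness, a minimal sketch in Python/NumPy (the data here are hypothetical stand-ins for your system; `numpy.linalg.lstsq` dispatches to a LAPACK least squares driver):

```python
import numpy as np

# Hypothetical stand-in data; your actual A and b come from your model.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 6))   # m = 100 equations, n = 6 unknowns
b = rng.standard_normal(100)

# Let LAPACK do the work: lstsq solves the least squares problem
# without ever forming an explicit matrix inverse.
x, sum_sq_resid, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(x, rank)
```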
2.) Great question! Always remember: the least squares fit is the best fit. There is no guarantee that it is a good fit.
You can apply the wrong model to the data and get a lousy match that is nonetheless a mathematical optimum.
These posts address the question in more depth: Least Squares Sensitivity to data, What parameters can be used to tell a least squares fit is “well fit”?. The upshot is that the least squares machinery itself provides quantitative feedback on the quality of the fit. In other words, compute the errors associated with the free parameters, something like
$$
a_{k} \pm \sigma_{k}, \qquad k = 1, \dots, n
$$
If you post your model, it will be a small matter to compute the form of these error terms.
In response to the edited question:
Linear System
Given a full column rank system matrix $\mathbf{B}\in\mathbb{C}^{m\times 6}_{6}$ (the subscript denotes the rank), and a data vector $S\in\mathbb{C}^{m}$, the unknown solution vector is $D\in\mathbb{C}^{6}$. The linear system is
$$
\mathbf{B}D=-S
\tag{1}
$$
By construction, the problem is overdetermined, that is $m>6$.
Least squares problem
The least squares problem is defined as
$$
D_{LS} = \left\{
D \in \mathbb{C}^{6} \colon
\lVert \mathbf{B}D + S \rVert^{2}_{2}
\text{ is minimized}
\right\}
\tag{2}
$$
Least squares solution
The solution for the least squares problem in $(2)$ is
$$
D_{LS} = -\mathbf{B}^{+}S +
\underbrace{\left( \mathbf{I}_{6} - \mathbf{B}^{+}\mathbf{B} \right)}_{=0\text{ since }\operatorname{rank}\mathbf{B}=6} C, \qquad C\in\mathbb{C}^{6}
$$
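Because $\mathbf{B}$ has full column rank, $\mathbf{B}^{+}\mathbf{B}=\mathbf{I}_{6}$, the homogeneous term vanishes, and $D_{LS}=-\mathbf{B}^{+}S$ is unique. A hedged numerical check (random complex data standing in for $\mathbf{B}$ and $S$):

```python
import numpy as np

# Random complex stand-ins for B (m x 6, full column rank) and S.
rng = np.random.default_rng(1)
m = 50
B = rng.standard_normal((m, 6)) + 1j * rng.standard_normal((m, 6))
S = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# D_LS = -B^+ S; the homogeneous term is zero at full column rank.
D_pinv = -np.linalg.pinv(B) @ S
D_lstsq = np.linalg.lstsq(B, -S, rcond=None)[0]
print(np.allclose(D_pinv, D_lstsq))   # True: both minimize ||B D + S||_2
```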
Define the residual error as in $(2)$:
$$
r = \mathbf{B}D + S
$$
The total squared error is then $r^{*}r = \lVert r \rVert_{2}^{2}$, which we abbreviate as $r^{2}$.
Error propagation
The measurements are not perfect and we need to quantify how these errors propagate through to the solution.
Let's recast $(1)$ in terms of the normal equations
$$
\mathbf{B}^{*} \mathbf{B} D = -\mathbf{B}^{*} S
\tag{3}
$$
This is now a square system you could solve with tools like Gaussian elimination or $\mathbf{LU}$ decomposition. The solution is represented as
$$
D = - \left(
\underbrace{\mathbf{B}^{*} \mathbf{B}}_{\alpha}
\right)^{-1}
\mathbf{B}^{*} S
$$
The matrix $\alpha$ is called the curvature matrix. The diagonal elements of the inverse of this matrix are
$$
\epsilon_{k} = \left[ \alpha^{-1} \right]_{kk}, \quad k = 1, \dots, 6
$$
The other ingredient is the parent standard deviation, approximated as
$$
s^{2} = \frac{r^{2}}{m-6}.
$$
Notice this factor reflects the overall quality of the fit.
Finally, the uncertainty in the fit parameters is
$$
\sigma_{k} = \sqrt{s^{2} \epsilon_{k}}
$$
The answer could be quoted as
$$
D_{k} \pm \sigma_{k}
$$
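Putting the recipe together, a minimal numerical sketch (synthetic data as stand-ins for $\mathbf{B}$ and $S$; `np.linalg.solve` applies LU with partial pivoting to the $6\times 6$ curvature matrix):

```python
import numpy as np

# Synthetic stand-ins for B and S; in practice these come from measurements.
rng = np.random.default_rng(2)
m = 200
B = rng.standard_normal((m, 6))
S = rng.standard_normal(m)

# Normal equations (3): alpha D = -B* S, with alpha = B* B (curvature matrix).
alpha = B.conj().T @ B
D = np.linalg.solve(alpha, -B.conj().T @ S)

# Residual r = B D + S and total squared error r^2 = ||r||^2.
r = B @ D + S
r2 = np.real(np.vdot(r, r))

# Parent standard deviation and parameter uncertainties.
s2 = r2 / (m - 6)                          # s^2 = r^2 / (m - 6)
eps = np.diag(np.linalg.inv(alpha)).real   # eps_k = [alpha^{-1}]_{kk}; 6x6, harmless
sigma = np.sqrt(s2 * eps)                  # sigma_k = sqrt(s^2 eps_k)

for k in range(6):
    print(f"D_{k+1} = {D[k]: .4f} +/- {sigma[k]:.4f}")
```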
Interpretation
This is the quality measure you so wisely looked for, a factor too often overlooked. This is how we find our significant digits.
Let's look at an arbitrary element. If
$$
D_{k} = 4.01 \pm 0.21
$$
we see that we know this value to within about $5\%$, since $0.21/4.01 \approx 0.05$.
If
$$
D_{k} = 4.0 \pm 4.2
$$
the signal is comparable to the noise. If you compute $D_{k}= 0.001$ and $\sigma_{k} = 0.11$, you would quote
$$
D_{k} = 0.00 \pm 0.11
$$
In this case the computed value $D_{k}$ is essentially noise.
Amplification
1) Why should I use the SVD method to invert the matrix, and not simply $\left(\mathbf{B}^{*}\mathbf{B}\right)^{-1}\mathbf{B}^{*}$? Or perhaps it doesn't matter?
The SVD is the gold standard for ill-conditioned linear systems, yet it is computationally expensive. The normal equations are a decent place to start because you can use a more familiar tool like Gaussian elimination; the price is that they square the condition number of the system, as the sketch after the links below demonstrates. If you have good data, you will get a good answer.
'Stable' Ways To Invert A Matrix
Comparing LU or QR decompositions for solving least squares
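To see the cost of the normal equations, recall $\kappa(\mathbf{B}^{*}\mathbf{B})=\kappa(\mathbf{B})^{2}$: digits lost in $\mathbf{B}$ are lost twice over. A quick demonstration with a deliberately ill-conditioned (hypothetical) matrix built from a prescribed SVD:

```python
import numpy as np

# Construct B with prescribed singular values from 1 down to 1e-6.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((100, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
B = U @ np.diag(np.logspace(0, -6, 6)) @ V.T

print(np.linalg.cond(B))          # ~1e6
print(np.linalg.cond(B.T @ B))    # ~1e12: the condition number squared
```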
3) Could you recommend some basic literature on this material?
A popular modern book is Matrix Analysis and Applied Linear Algebra by Carl Meyer, which has an excellent development of Gaussian elimination and the need for pivoting.
A more in-depth look at the subject is Numerical Linear Algebra by Lloyd N. Trefethen and David Bau.
The ubiquity of these texts ensures you will get prompt answers to your questions here as you work through them.