If Numerical Analysis says that something is not worth it, namely that it is generally believed that the extra computations involved in this procedure are not worthwhile in the majority of linear systems, then all of your alarm bells should go off. How much is a "belief" worth in mathematics in the first place? And how would such a "majority" of linear systems be defined? So I wouldn't be surprised if the real reason for scaled pivoting is ease of calculation, and that's it.
That being said, instead of heading for pivoting strategies, what I'm going to advocate here is avoiding any form of pivoting altogether.
The method to be employed for that purpose is Least Squares. Instead of solving the original system
$A x = b$, we first square the equations and then solve the system $S x = A^T A x = A^T b$. There are
a number of advantages and disadvantages to this method. One disadvantage is that the equations must first
be squared, at the cost of extra computation time. But there are dumb methods and there are smart methods for doing the squaring.
Another disadvantage is that the conditioning becomes worse: the condition number of $S = A^T A$ is
the square of the condition number of $A$. The latter phenomenon may be mitigated by first improving the condition
of the original equations, which can be done by norming them, row by row: the coefficients
$a_{ki}$ of each row $(k)$, together with the right-hand side $b_k$, are divided by the square root of the sum of the squared coefficients of that row:
$$
a_{ki} := \frac{a_{ki}}{\sqrt{\sum_j a_{kj}^2}}
\qquad ; \qquad
b_k := \frac{b_k}{\sqrt{\sum_j a_{kj}^2}}
$$
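For concreteness, here is a minimal NumPy sketch of this norming step (the function name `norm_rows` and the assumption of a dense real matrix are mine, for illustration only):

```python
import numpy as np

def norm_rows(A, b):
    """Scale each equation so that its row of coefficients has unit length."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    norms = np.sqrt((A ** 2).sum(axis=1))  # sqrt of the sum of squared coefficients per row
    return A / norms[:, None], b / norms   # scale both sides of each equation
```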
Let's talk about the advantages now. It's easy to see that the squared system is symmetric, which
definitely is an advantage. (Note that if the symmetry of the original system is destroyed by norming,
it is restored again by squaring.) There is no pivoting needed, because:
$$
S_{ij} = S_{ji} = \sum_k a_{ki} a_{kj} \quad \Longrightarrow \quad S_{ii} = \sum_k a_{ki}^2
$$
And this sum is zero only if all coefficients in a column are zero, which is certainly not the case with a
non-singular matrix. Thus all pivots are on the main diagonal. Furthermore:
$$
\vec{a}_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{Ni} \end{bmatrix}
\quad ; \quad
\vec{a}_j = \begin{bmatrix} a_{1j} \\ a_{2j} \\ \vdots \\ a_{Nj} \end{bmatrix}
\qquad \Longrightarrow \qquad \left\{ \begin{matrix}
\left(\vec{a}_i\cdot\vec{a}_j\right) = \sum_k a_{ki} a_{kj} = S_{ij} \\
\left(\vec{a}_i\cdot\vec{a}_i\right) = \sum_k a_{ki} a_{ki} = S_{ii} \\
\left(\vec{a}_j\cdot\vec{a}_j\right) = \sum_k a_{kj} a_{kj} = S_{jj} \end{matrix} \right.
$$
By the Cauchy-Schwarz inequality:
$$
\left(\vec{a}_i\cdot\vec{a}_j\right)^2 \le
\left(\vec{a}_i\cdot\vec{a}_i\right)\left(\vec{a}_j\cdot\vec{a}_j\right)
\qquad \Longleftrightarrow \qquad
S_{ij}^2 \le S_{ii} S_{jj}
$$
Combining this with the AM-GM inequality, we find:
$$
\frac{S_{ii}+S_{jj}}{2} \ge \sqrt{S_{ii}S_{jj}} \ge \left| S_{ij} \right|
$$
In other words: the (arithmetic or geometric) mean of two main diagonal elements is positive
and never smaller than the absolute value of the corresponding off-diagonal element.
This is consistent with the fact that all pivots are on the main diagonal.
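These properties are easy to check numerically. Below is a sketch that forms the squared system and verifies them on a random matrix; the helper name `squared_system` and the test matrix are mine, purely for illustration:

```python
import numpy as np

def squared_system(A, b):
    """Form the squared (normal) equations S x = A^T A x = A^T b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    return A.T @ A, A.T @ b                    # S is symmetric by construction

# Illustration on a random (almost surely non-singular) matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S, _ = squared_system(A, np.ones(5))

assert np.allclose(S, S.T)                     # symmetry
assert np.all(np.diag(S) > 0)                  # all pivots on the main diagonal
means = (np.diag(S)[:, None] + np.diag(S)[None, :]) / 2
assert np.all(means >= np.abs(S) - 1e-12)      # (S_ii + S_jj)/2 >= |S_ij|
```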
Further properties are that for an orthogonal matrix $A$ the squared system becomes the
identity, $S = A^T A = I$, and that, thanks to the row norming, the sum of all main diagonal elements
always equals the order of the system: $\sum_k S_{kk} = N$.
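Both properties can be verified directly as well; in the sketch below an orthogonal matrix is obtained from a QR factorization of a random matrix, which is just one convenient way to get one:

```python
import numpy as np

N = 4
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))  # Q is orthogonal (its rows have unit norm)

S = Q.T @ Q
assert np.allclose(S, np.eye(N))                  # the squared system becomes the identity
assert np.isclose(np.trace(S), N)                 # the diagonal sums to the order of the system
```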
Simple $2 \times 2$ example:
$$
\left\{ \begin{matrix} x_2 = 1 \\ 2 x_1 - x_2 = 1 \end{matrix} \right.
\qquad \Longleftrightarrow \qquad
\begin{bmatrix} 0 & 1 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \end{bmatrix}
$$
Standard Gaussian elimination will fail here, because the element in the first row and first column is zero.
So let the equations be normed first:
$$
\begin{bmatrix} 0 & 1 \\ 2 & -1 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 1 \\ 1 \end{bmatrix}
\qquad \Longleftrightarrow \qquad
\begin{bmatrix} 0 & 1 \\ 2/\sqrt{5} & -1/\sqrt{5} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 1 \\ 1/\sqrt{5} \end{bmatrix}
$$
Least squares equivalent (calculating the square roots can always be avoided):
$$
\begin{bmatrix} 0 & 2/\sqrt{5} \\ 1 & -1/\sqrt{5} \end{bmatrix}
\begin{bmatrix} 0 & 1 \\ 2/\sqrt{5} & -1/\sqrt{5} \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 0 & 2/\sqrt{5} \\ 1 & -1/\sqrt{5} \end{bmatrix}
\begin{bmatrix} 1 \\ 1/\sqrt{5} \end{bmatrix}
\qquad \Longleftrightarrow
$$
$$
\begin{bmatrix} 4/5 & -2/5 \\ -2/5 & 6/5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 2/5 \\ 4/5 \end{bmatrix}
$$
Gaussian elimination with the first row as a pivot:
$$
\begin{bmatrix} 4/5 & -2/5 \\ 0 & 5/5 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} =
\begin{bmatrix} 2/5 \\ 5/5 \end{bmatrix}
$$
Followed by back-substitution, as usual, which yields $x_2 = 1$ and then $x_1 = 1$, in agreement with the original system.
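The whole example can be reproduced in a few lines. The routine below is a minimal sketch of the no-pivot Gaussian elimination with back-substitution described above (names are mine), not production code:

```python
import numpy as np

def solve_no_pivot(S, rhs):
    """Gaussian elimination without pivoting, followed by back-substitution."""
    S, rhs = S.astype(float), rhs.astype(float)    # work on copies
    n = len(rhs)
    for k in range(n):                             # forward elimination
        for i in range(k + 1, n):
            m = S[i, k] / S[k, k]                  # pivots sit on the main diagonal
            S[i, k:] -= m * S[k, k:]
            rhs[i] -= m * rhs[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                 # back-substitution
        x[i] = (rhs[i] - S[i, i + 1:] @ x[i + 1:]) / S[i, i]
    return x

A = np.array([[0.0, 1.0], [2.0, -1.0]])
b = np.array([1.0, 1.0])
norms = np.sqrt((A ** 2).sum(axis=1))              # norm the rows (and the right-hand side)
A, b = A / norms[:, None], b / norms
S, rhs = A.T @ A, A.T @ b                          # squared (normal) equations
print(solve_no_pivot(S, rhs))                      # -> [1. 1.]
```

For larger systems one would of course exploit the symmetry of $S$, e.g. with a Cholesky factorization, which likewise needs no pivoting.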
EDIT. More information about Condition Numbers is available at the personal website (MSE link).