All quantum error correction does is suppress the probability of error compared to a 'bare-bones' approach. If the physical error rates were very low, one could, in principle, skip QECCs and do without this extra layer of suppression.
However, as already pointed out, the scaling is exponential: for (an unphysical picture of) uncorrelated layers in the computation (say $\sim 10000$ of them), with an error rate of $p = 10^{-2}$ per layer, the probability of no error occurring is $(1-p)^{10000} \approx 2\times 10^{-44}$. That's not a lot.
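As a quick sanity check, here is a minimal Python sketch of that estimate; the $10000$ layers and $p = 10^{-2}$ are just the illustrative numbers from above, not properties of any actual device:

```python
# Probability that none of N independent "layers" fails,
# given a per-layer error rate p (illustrative figures only).
p = 1e-2      # per-layer error rate
N = 10_000    # number of uncorrelated layers

p_no_error = (1 - p) ** N
print(f"P(no error in {N} layers) ≈ {p_no_error:.2e}")  # ≈ 2.2e-44
```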
You make the extra point that we could still do some 'majority vote' analysis over repeated runs: gather statistics over enough repetitions and take the most frequently returned answer to be the correct one. However, with the above error rate the success probability per run is of order $10^{-44}$, so you would need on the order of $10^{44}$ repetitions just to expect the correct answer to show up once, let alone to dominate the statistics. That number of repetitions is, quite literally, out of this world.
QECCs generally suppress the error rate exponentially, with the effective rate going from $p$ to roughly $p^{2}$ or $p^{3}$; the probability of no error then becomes $(1-p^{2})^{10000} \approx 0.37$ or $(1-p^{3})^{10000} \approx 0.99$. That is a tremendous improvement.
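And the same back-of-the-envelope check with the suppressed rates (again only a sketch, assuming the sole effect of the QECC is to replace $p$ by $p^{2}$ or $p^{3}$):

```python
# Same estimate, but with the per-layer error rate suppressed from p
# to p**2 or p**3 by an (idealised) layer of error correction.
p = 1e-2
N = 10_000

for k in (2, 3):
    p_eff = p ** k                  # effective error rate after QEC
    p_no_error = (1 - p_eff) ** N   # probability the whole run is error-free
    print(f"p -> p^{k}: P(no error in {N} layers) ≈ {p_no_error:.2f}")
# p^2 gives ≈ 0.37, p^3 gives ≈ 0.99
```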
Remember, some of the computations that QCs are envisioned to perform are even hard to validate. Consider, for instance, a very large Hamiltonian whose ground state energy we ask the QC to compute. We have no way of checking, with a classical computer, whether any returned answer actually is the ground state energy.
So if we have a huge list of different computation outcomes, some of which are supposedly correct, it can be very hard indeed to tell which outcome, if any, is actually the correct one.