First post ever on Stack Exchange in years of using it.
Can anyone provide a historical or logical explanation of the reasoning behind multiplying by 1 in the form of a fraction? For instance, in finance theory, specifically the DuPont formula, we multiply by Sales/Sales to rewrite the expression "intuitively" and express it in different terms. Similarly, in some algebraic proofs we multiply by x/x and then rearrange, and so on.
What I can't seem to understand is why this works. We see this across mathematics: multiplying all sorts of equations by fractions equivalent to 1 so that they can be rearranged by cancelling numerators against denominators. Why are we "allowed" to do this? Once we cancel, we are left with residual numerators or denominators. Is there an easy way of visualizing why this is allowed?
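To make the DuPont example concrete, here is a minimal sketch of the kind of manipulation I mean, using the two-factor version of the decomposition and assuming Sales is nonzero so that Sales/Sales equals 1:

$$\text{ROE} \;=\; \frac{\text{Net Income}}{\text{Equity}} \;=\; \frac{\text{Net Income}}{\text{Equity}} \cdot \frac{\text{Sales}}{\text{Sales}} \;=\; \frac{\text{Net Income}}{\text{Sales}} \cdot \frac{\text{Sales}}{\text{Equity}}$$

The value of the ratio never changes; the factor of Sales/Sales only lets the terms be regrouped into two different ratios.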
$$1 \cdot x = x \quad \text{for all real numbers } x$$
This is also intuitively pleasing: if we have one of something, then we have the something, no more and no less.
– Simon S Jun 15 '15 at 21:41
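Spelling out the comment's identity as the general pattern behind these cancellations, assuming the denominator $b$ and the inserted factor $c$ are both nonzero:

$$\frac{a}{b} \;=\; \frac{a}{b} \cdot 1 \;=\; \frac{a}{b} \cdot \frac{c}{c} \;=\; \frac{a c}{b c} \qquad (b \neq 0,\ c \neq 0)$$

Reading the chain from right to left is exactly the "cancelling" step: the common factor $c$ in numerator and denominator is pulled out as $c/c = 1$ and then dropped, which is why the leftover numerators and denominators still give the same value.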