I know it may sound stupid but I'm just genuinely wondering about it....
$$\frac ab\times\frac1c=\frac a{bc}$$ where $b,c\ne0$.
How can we multiply numerators by numerators and denominators by denominators?
Is it just a rule? Or can it be proved?
You can think of multiplication as meaning "of". So what is $2/5$ of $3/7$ (for example)?
Draw a picture of a cake (a rectangular cake) sliced into 7 equal vertical slices, with $3$ of those slices having red frosting. That's $3/7$ of the cake.
Take that 3/7 of the cake and slice it horizontally into 5 equal pieces, and pour sprinkles on 2 of those 5 pieces. (When you're doing the horizontal slicing, slice the entire cake horizontally while you're at it.)
The portion of the cake with sprinkles is 2/5 of 3/7. But if you draw the picture, you see that the cake has been chopped into 35 equal pieces (5 groups of 7), and 6 of those 35 pieces have sprinkles. So, $$ \frac{2}{5} \text{ of } \frac{3}{7} = \frac{2 \times 3}{5 \times 7}. $$
There are three steps to this process: slicing the cake according to the first fraction, slicing it again according to the second fraction, and then counting the pieces. Steps 1 and 2 can be done in many different ways, and for each combination, step 3 will be done differently.
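The cake count above can be checked mechanically. A short sketch using Python's standard `fractions.Fraction`, which reduces automatically:

```python
from fractions import Fraction

# 2/5 of 3/7 is 6 sprinkled pieces out of 35 equal pieces.
result = Fraction(2, 5) * Fraction(3, 7)

# The same count done by hand on the 5-by-7 grid of pieces.
total_pieces = 5 * 7   # 35 equal pieces
sprinkled = 2 * 3      # 6 of them get sprinkles

print(result)                             # 6/35
print(Fraction(sprinkled, total_pieces))  # 6/35
```

Both lines print the same fraction, matching the picture.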
Great question! The short answer to this: it works because we defined it like this. (I assume we are talking about multiplication of rational numbers)
We are worried, however, whether the operation is well-defined. It means that the result $$\frac{a}{b} \cdot \frac{x}{y} = \frac{ax}{by} $$ must not depend on the choice of fractions. It can't be that this equality holds for some fractions, but not for some other fractions. That would make the operation ill-defined.
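To make the worry concrete, here is a small spot check (a sketch, not a proof) in Python, using raw numerator/denominator pairs so that nothing gets reduced behind the scenes; `times` and `same` are hypothetical helper names:

```python
# A raw-pair model of fractions: (a, b) stands for a/b with b != 0.
def times(p, q):
    (a, b), (x, y) = p, q
    return (a * x, b * y)   # multiply numerators and denominators

def same(p, q):
    # a/b equals c/d exactly when a*d == c*b (cross-multiplication)
    (a, b), (c, d) = p, q
    return a * d == c * b

# 1/2 and 2/4 are the same rational; so are 2/3 and 4/6.
r1 = times((1, 2), (2, 3))   # (2, 6)
r2 = times((2, 4), (4, 6))   # (8, 24)
print(same(r1, r2))          # True: both products represent 1/3
```

The two products have different numerators and denominators, but they represent the same rational number, as well-definedness demands; a proof has to establish this for every choice of representatives.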
The process of verification is quite an involved one, however, especially for multiplication.
On a quick search I did find this, which covers all one needs.
To give a slightly different spin on this problem. Intuition paves the way for how we want to define certain operations. Other answers give an intuitive explanation why multiplying two fractions produces a certain fraction. We used these intuitions to define how multiplication of two fractions behaves. But to be absolutely sure we didn't make a mistake, we must also verify the operation is well-defined and that is beyond reach for intuition.
This idea of well-definedness is very important in mathematics, not just as a failsafe that makes addition and multiplication of (rational) numbers bulletproof.
As @Arthur points out, understanding why fractions multiply as they do depends on understanding what a fraction is. That's a subtle question.
There are ways to answer your particular question if you choose to think of fractions as what you get when you cut up pies, but I think the best way starts with defining (thinking about) $1/x$ as the number $?$ that solves the equation $$ ? \times x = 1 . $$ Then you can use the ordinary rules of arithmetic to show that the left side of your equation is a solution to the equation $$ ? \times bc = a $$ and so must equal $a/(bc)$.
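Spelling out the omitted algebra (a sketch, using only commutativity, associativity, the defining property of $1/c$, and the fact that $a/b$ solves $? \times b = a$):

$$ \left(\frac ab \times \frac 1c\right) \times bc = \left(\frac ab \times b\right) \times \left(\frac 1c \times c\right) = a \times 1 = a. $$

So $\frac ab \times \frac 1c$ solves $? \times bc = a$, and hence must equal $a/(bc)$.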
Related: How to make sense of fractions?
This is just how we've (in most cases, at any rate) chosen to define multiplication of rational numbers $\mathbf Q.$ And it can be proven to work (by this I mean that it is a well-defined binary operation on $\mathbf Q$). Of course, this definition has some intuition behind it about how rational numbers should behave under multiplication. However, all this can be seen quite neatly by developing the system $\left(\mathbf Q, \times \right)$ from the natural numbers $\mathbf N:=\{0,1,2,3,\ldots\}$, with the operation $\times$ defined on them in the usual recursive manner, which eventually boils down to the primitive successor function. There may be other ways to effect this development, but I think this one accords most with intuition.
The reciprocal of a nonzero number $x$ is usually denoted by $\dfrac 1x$ or by $x^{-1}$. Both of these notations are useful because they imply a lot of things that can later be shown to be true. In other words, the notation coerces you into treating the multiplicative inverse of $x$ correctly when you learn more stuff.

In your case, however, the notation is confusing things instead of simplifying them.
So, just for this discussion, let's assume the following postulate.
1. POSTULATE. For every nonzero real number $x$ there exists a nonzero real number $x^*$, called the multiplicative inverse of $x$, such that $x x^* = x^* x = 1$.
We start with an important detail.
2. THEOREM. Let $x$ be a nonzero real number. If $xy=1$ for some real number $y$, then $y = x^*$.
In other words, the multiplicative inverse of $x$ is unique.
PROOF. Suppose $xy=1$. Then $x^* = x^*(1) = x^*(xy) = (x^*x)y = 1y = y$.
3. NOTATION. Let $x$ and $y$ be real numbers with $y$ nonzero. Then we define $x \div y = \dfrac xy = xy^*$.
4. THEOREM. Let $x$ and $y$ be nonzero real numbers. Then $x^*y^* = (xy)^*$.
PROOF. $(xy)(x^*y^*) = (xx^*)(yy^*) = 1 \cdot 1 = 1$. By theorem (2.), $x^*y^* = (xy)^*$.
5. THEOREM. Let $a,b,c,d$ be real numbers with $b,d$ nonzero. Then $\dfrac ab \cdot \dfrac cd = \dfrac{ac}{bd}$.
PROOF. $\dfrac ab \cdot \dfrac cd = (ab^*)(cd^*) = (ac)(b^*d^*) = (ac)(bd)^* = \dfrac{ac}{bd}$, where the third equality uses theorem (4.).
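As a quick sanity check (not part of the proof), the rationals make a convenient stand-in for the reals: Python's `fractions.Fraction` can play the role of $x^*$ via `1 / Fraction(x)`, with sample values chosen arbitrarily.

```python
from fractions import Fraction

def inv(x):
    # x* in the notation above: the multiplicative inverse of x
    return 1 / Fraction(x)

a, b, c, d = 3, 4, 5, 7   # arbitrary sample values; b, d nonzero

# Theorem 4: x* y* = (x y)*
assert inv(b) * inv(d) == inv(b * d)

# Theorem 5: (a b*)(c d*) = (a c)(b d)*
lhs = (a * inv(b)) * (c * inv(d))
rhs = (a * c) * inv(b * d)
print(lhs, rhs, lhs == rhs)   # 15/28 15/28 True
```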