- Why can't we multiply a column of the first matrix with a row of the second matrix in matrix multiplication?
$A\cdot B$ multiplies rows of $A$ with columns of $B$. If you want to multiply columns of $A$ with rows of $B$, you can use
$$A^T\cdot B^T = (B\cdot A)^T$$
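For a concrete check of this identity, take two arbitrarily chosen $2\times 2$ matrices:
$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\quad B=\begin{pmatrix}0&1\\1&0\end{pmatrix}:\qquad A^T\cdot B^T=\begin{pmatrix}1&3\\2&4\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}3&1\\4&2\end{pmatrix}=\begin{pmatrix}3&4\\1&2\end{pmatrix}^T=(B\cdot A)^T.$$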
However, multiplying rows with columns is just a convention that is used everywhere. We could do linear algebra just as well by consistently multiplying columns with rows, but this would gain nothing; it would only cause confusion if the two conventions were mixed. Moreover, the current convention has the advantage that composition $$(A\circ B)x = A(B(x)) = (AB)x = A(Bx)$$ looks just like ordinary function composition.
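To see this composition identity with numbers (again an arbitrary illustrative choice):
$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\quad B=\begin{pmatrix}1&0\\1&1\end{pmatrix},\quad x=\begin{pmatrix}1\\2\end{pmatrix}:\qquad A(Bx)=A\begin{pmatrix}1\\3\end{pmatrix}=\begin{pmatrix}7\\15\end{pmatrix}=\begin{pmatrix}3&2\\7&4\end{pmatrix}\begin{pmatrix}1\\2\end{pmatrix}=(AB)x.$$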
- Why don't we use the natural multiplication of corresponding entries, just like in addition? Where does this idea come from?
You can represent linear functions from $K^n$ to $K^m$ by $x\mapsto Ax$ with $A\in K^{m\times n}$ and $x\in K^n$ a column vector. This only works when you define matrix multiplication as usual.
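For example (an illustrative choice), the matrix
$$A=\begin{pmatrix}1&0&2\\0&1&-1\end{pmatrix}\in K^{2\times 3},\qquad A\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}x_1+2x_3\\x_2-x_3\end{pmatrix}$$
represents a linear map $K^3\to K^2$; each output coordinate is a row of $A$ applied to the input column, which is exactly the rows-times-columns rule.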
In addition, $C=B \cdot A$ represents the composition of two linear mappings
$$K^n \stackrel{A}{\longrightarrow} K^m \stackrel{B}{\longrightarrow} K^\ell$$
only if you use the canonical matrix multiplication.
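For instance, with the (arbitrarily chosen) sizes $A\in K^{3\times 2}$ and $B\in K^{4\times 3}$:
$$K^2 \stackrel{A}{\longrightarrow} K^3 \stackrel{B}{\longrightarrow} K^4,\qquad C=B\cdot A\in K^{4\times 2},\qquad Cx=B(Ax)\ \text{for all } x\in K^2.$$
A component-wise product of $B$ and $A$ is not even defined here, because the two matrices have different shapes.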
Using component-wise multiplication doesn't gain you much. This is similar to the situation when you construct a multiplication on $\Bbb R^2$ (the complex numbers) by a rule that is not so obvious. If you used
$$(x_1,y_1)\cdot (x_2,y_2) := (x_1\cdot x_2,y_1\cdot y_2)$$
then the result is not a field: there are non-zero elements without a multiplicative inverse, since e.g. $(1,0)\cdot (0,1)=(0,0)$ produces zero divisors, so you cannot define division properly.
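For comparison, the "not so obvious" rule that does turn $\Bbb R^2$ into a field (the complex numbers) is
$$(x_1,y_1)\cdot (x_2,y_2) := (x_1x_2-y_1y_2,\; x_1y_2+y_1x_2),$$
and with it every non-zero $(x,y)$ has the inverse $\left(\tfrac{x}{x^2+y^2},\,\tfrac{-y}{x^2+y^2}\right)$.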
Back in the context of linear mappings, the usual matrix multiplication lets you define inverse mappings for all matrices with non-zero determinant, and the determinant has nice multiplicative properties:
$$\det(A\cdot B) = \det(A)\cdot \det(B)$$
$$\det(A^{-1}) = \det(A)^{-1}$$
if $K$ is a field. Even if $K$ is not a field but just, say, a ring, using the usual matrix multiplication is the way to go in 99.9% of cases.
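As a quick numerical check of the multiplicativity (with arbitrarily chosen matrices over $\Bbb R$):
$$A=\begin{pmatrix}1&2\\3&4\end{pmatrix},\quad B=\begin{pmatrix}2&0\\1&1\end{pmatrix}:\qquad \det(A\cdot B)=\det\begin{pmatrix}4&2\\10&4\end{pmatrix}=-4=(-2)\cdot 2=\det(A)\cdot\det(B).$$
The component-wise product of the same two matrices is $\begin{pmatrix}2&0\\3&4\end{pmatrix}$ with determinant $8$, so it does not satisfy this identity.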