Tl;dr
In my opinion, the dot product cannot be motivated naturally, because no single application of it justifies this exact definition. However, the sheer number of naturally occurring formulas which contain one or more terms of the form $v_1\cdot w_1+v_2\cdot w_2$ gives a compelling a posteriori motivation for exactly this definition.
So the reason why the dot product is defined this way and no other is this: it is the term which occurs in hundreds of naturally emerging formulas $-$ and no other term does.
The nature of definitions
In contrast to mathematical proofs or ideas for how to solve certain problems, which must be developed from the very beginning, many definitions are given a posteriori, i.e. after the subject has reached some maturity. The reason is that only after solving many similar problems does it become clear which definitions would have been useful in the first place. Many definitions arise for one of the following reasons:
- A certain important term is long, ugly or hard to remember. Therefore we introduce a shorthand to hide some of the complexity.
- A certain term occurs over and over, and introducing a shorthand seems to create a useful abstraction and might reveal what is really going on.
Another reason for definitions, which also makes sense a priori, is the following:
- We know what we want to compute, but we lack the exact expression $-$ for now. Still, we have to develop a whole lot of theory until we have a result. Therefore we introduce a placeholder term. This is often done for quantities arising from modeling reality, e.g. curve lengths etc.
The dot product is a classic example of the second motivation (among others like determinants, matrix multiplication, ...). Look at the following problems and their solutions (a small numerical check of all four follows the list). I will not show you how to derive them, as this will be done as you advance in linear algebra (or you already know them):
- Do you want to compute the length of a vector $\mathbf v=(v_1,v_2)$? Do it like this: $$\sqrt{v_1\cdot v_1+v_2\cdot v_2}.$$
- Do you want to know the angle $\alpha$ between two vectors $\mathbf v=(v_1,v_2)$ and $\mathbf w=(w_1,w_2)$? Do it like this:$$\cos(\alpha)=\frac{v_1\cdot w_1+v_2\cdot w_2}{\sqrt{v_1\cdot v_1+v_2\cdot v_2}\sqrt{w_1\cdot w_1+w_2\cdot w_2}}.$$
- Do you need to project a vector $\mathbf v=(v_1,v_2)$ onto the line (in higher dimensions: the hyperplane) with normal vector $\mathbf n=(n_1,n_2)$? Do it like this:
$$\mathbf v-\frac{v_1\cdot n_1+v_2\cdot n_2}{n_1\cdot n_1+n_2\cdot n_2}\mathbf n.$$
- Do you need to know if two vectors $\mathbf v=(v_1,v_2)$ and $\mathbf w=(w_1,w_2)$ are orthogonal? Check whether $$v_1\cdot w_1+v_2\cdot w_2=0.$$
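To see these recipes in action, take for instance the (arbitrarily chosen) vectors $\mathbf v=(3,4)$ and $\mathbf w=(4,-3)$:
$$\sqrt{3\cdot 3+4\cdot 4}=5,\qquad \cos(\alpha)=\frac{3\cdot 4+4\cdot(-3)}{5\cdot 5}=0,\qquad 3\cdot 4+4\cdot(-3)=0,$$
so both vectors have length $5$, the angle between them is $90^\circ$, and they are orthogonal. Projecting $\mathbf v$ along the normal $\mathbf n=(0,1)$ gives $(3,4)-\tfrac{4}{1}\,(0,1)=(3,0)$.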
All these problems arise naturally in a geometrically motivated subject like linear algebra. And do you see what they all have in common? They can all benefit from the definition
$$\mathbf v\cdot \mathbf w := v_1\cdot w_1+v_2\cdot w_2.$$
All the complexity vanishes and we get (in this order):
$$\sqrt{\mathbf v\cdot \mathbf v},\qquad \cos(\alpha)=\frac{\mathbf v\cdot \mathbf w}{\sqrt{\mathbf v\cdot\mathbf v}\sqrt{\mathbf w\cdot\mathbf w}},\qquad \mathbf v-\frac{\mathbf v\cdot\mathbf n}{\mathbf n\cdot\mathbf n}\mathbf n,\qquad\mathbf v\cdot\mathbf w=0.$$
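To make the same point in code: here is a minimal Python sketch (my own illustration; the function names are not standard) showing that once `dot` is written down, each of the four recipes becomes a one-liner built on top of it.

```python
import math

def dot(v, w):
    """The definition: v . w = v1*w1 + v2*w2 (here for 2D tuples)."""
    return v[0] * w[0] + v[1] * w[1]

def length(v):
    # length of v = sqrt(v . v)
    return math.sqrt(dot(v, v))

def angle(v, w):
    # angle between v and w, from cos(alpha) = (v . w) / (|v| |w|)
    return math.acos(dot(v, w) / (length(v) * length(w)))

def remove_normal_component(v, n):
    # project v onto the line/hyperplane with normal n: v - ((v . n)/(n . n)) n
    c = dot(v, n) / dot(n, n)
    return (v[0] - c * n[0], v[1] - c * n[1])

def orthogonal(v, w):
    # v and w are orthogonal iff v . w = 0
    # (exact test; with floating-point inputs one would compare against a tolerance)
    return dot(v, w) == 0
```

For example, `length((3, 4))` returns `5.0` and `orthogonal((3, 4), (4, -3))` returns `True`.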
Further simplification can be obtained via the definition $\|\mathbf v\|=\sqrt{\mathbf v\cdot\mathbf v}$ once it is proven that $\mathbf v\cdot\mathbf v\ge0$ (which is immediate here: $\mathbf v\cdot\mathbf v=v_1^2+v_2^2\ge0$). Also, this definition opens the way to a coordinate-free approach to linear algebra, which only then justifies the word algebra in the name.
From a didactic point of view
I generally avoid introducing definitions without some motivation. For very central and recurring elements like the dot product it is hard to demonstrate their true importance before introducing the definition $-$ if only for notational reasons.
But what can be done is working through at least two of the above toy problems, thereby demonstrating the recurring character of this term in naturally occurring tasks.
Only after this definition has proven its usefulness in relevant problems is it appropriate to give definitions which read more like theorems:
- This definition gives the only bilinear, scalar-valued multiplication on vectors that satisfies $\mathbf e_i\cdot\mathbf e_j=\delta_{ij}$ for the standard basis vectors $\mathbf e_1=(1,0)$ and $\mathbf e_2=(0,1)$ (a one-line expansion is given at the end of this answer).
or which are mainly based on further unmotivated axioms:
- A dot product is a symmetric, positive definite bilinear form.
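For the reader who wants to see why the first characterization pins down the formula, here is the one-line expansion (spelled out by me for completeness): writing $\mathbf v=v_1\mathbf e_1+v_2\mathbf e_2$ and $\mathbf w=w_1\mathbf e_1+w_2\mathbf e_2$, bilinearity gives
$$\mathbf v\cdot\mathbf w=v_1w_1\,(\mathbf e_1\cdot\mathbf e_1)+v_1w_2\,(\mathbf e_1\cdot\mathbf e_2)+v_2w_1\,(\mathbf e_2\cdot\mathbf e_1)+v_2w_2\,(\mathbf e_2\cdot\mathbf e_2),$$
and plugging in $\mathbf e_i\cdot\mathbf e_j=\delta_{ij}$ leaves exactly $v_1w_1+v_2w_2$.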