
The drawing of lines on the elliptic curve is repeated n times, where n is your private key, resulting in a point Ω. When calculating Ω, is there a shortcut function that lets you skip having to actually iterate n times? If not, it does not seem that difficult to get n by just brute force.

dieCurve
  • https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication – Ruggero Nov 15 '18 at 13:24
  • Ah, the internet, overall, is a good place to find information. But this is a pretty general question that has not been asked on this Stack Exchange from what I can see, and I cannot see why it should not have an answer. – dieCurve Nov 15 '18 at 13:29
  • See https://crypto.stackexchange.com/questions/3907/how-does-one-calculate-the-scalar-multiplication-on-elliptic-curves (from 2012) and terser https://crypto.stackexchange.com/questions/35581/scalar-multiplication-for-elliptic-curve (although it basically just links to the wikipedia article you reject) – dave_thompson_085 Nov 16 '18 at 02:26

1 Answer


Suppose that I can add two points together. Now, I have a point $G$, and I want to compute 197 times that point $G$. I could do that with 196 additions. But I can also do so much faster:

\begin{eqnarray*} G + G &=& 2G \\ 2G + G &=& 3G \\ 3G + 3G &=& 6G \\ 6G + 6G &=& 12G \\ 12G + 12G &=& 24G \\ 24G + 24G &=& 48G \\ 48G + G &=& 49G \\ 49G + 49G &=& 98G \\ 98G + 98G &=& 196G \\ 196G + G &=& 197G \\ \end{eqnarray*} As you see, I computed 197 times the point $G$ in only 10 point additions, which is far cheaper than 196 additions.
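
To check the count programmatically, here is a minimal Python sketch (my own illustration, not part of the original answer; ordinary integers stand in for curve points, since integer addition obeys the same rules the chain relies on). It walks the bits of 197 exactly as the chain above does and counts the additions:

```python
def double_and_add_count(n):
    """Compute n*g using only additions of already-computed multiples,
    counting how many additions are needed. Plain integers stand in for
    curve points: g plays the role of the point G."""
    g = 1
    acc = g                    # start from the most significant bit of n
    additions = 0
    for bit in bin(n)[3:]:     # remaining bits of n, after the leading 1
        acc = acc + acc        # doubling: one "point addition"
        additions += 1
        if bit == '1':
            acc = acc + g      # extra addition for a 1 bit
            additions += 1
    return acc, additions

print(double_and_add_count(197))   # (197, 10), matching the chain above
```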

This is known as the double-and-add algorithm. The gist is that when you have $nG$ for some integer $n$, then a single addition of $nG$ with itself yields $nG + nG = (2n)G$; and you can also get $(2n+1)G$ with an extra addition with $G$. You can think of the algorithm as maintaining a current point $P = nG$, and step by step altering $n$ so as to reach a given target value; it helps to think of $n$ as an integer in base 2:

  • Adding $nG$ to itself means multiplying $n$ by $2$, i.e. shifting it one bit to the left, filling the blank with a zero.
  • Adding $G$ to $(2n)G$ replaces that newly inserted zero with a one.
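
To see those two steps on an actual curve, here is a small self-contained sketch (again my own illustration; the curve $y^2 = x^3 + 2x + 2$ over $\mathbb{F}_{17}$, the point $(5,1)$, and the function names are toy choices for demonstration, far too small for real use):

```python
# Illustrative toy parameters: the curve y^2 = x^3 + 2x + 2 over GF(17).
P_MOD = 17
A, B = 2, 2
G = (5, 1)       # a point on that curve: 1**2 == (5**3 + 2*5 + 2) % 17
INF = None       # the point at infinity (the neutral element of the group)

def point_add(P, Q):
    """Add two affine points on the toy curve (short Weierstrass formulas).
    Uses pow(x, -1, m) for modular inverses (Python 3.8+)."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                    # P + (-P)
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(n, P):
    """Left-to-right double-and-add: rebuild n bit by bit, as described above."""
    acc = INF
    for bit in bin(n)[2:]:
        acc = point_add(acc, acc)     # shift n left: n -> 2n
        if bit == '1':
            acc = point_add(acc, P)   # set the new low bit: 2n -> 2n + 1
    return acc

print(scalar_mult(197, G))            # 197*G, in about ten point additions
```

The loop runs once per bit of $n$, so even a scalar of several hundred bits costs only a few hundred point additions, which is what makes computing $nG$ cheap even though $n$ is huge.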

Thus, the double-and-add algorithm simply rebuilds the multiplier bit by bit. This yields a cost of $\log m$ doublings and at most $\log m$ extra additions to compute $mG$.

There are many possible optimizations (window, NAF, wNAF...) that can be applied to save on the extra additions. The baseline cost is that, to multiply by an integer that fits on $k$ bits, you'll need $k-1$ doublings and some extra additions; the cost of the doublings is the dominant one, because the window/NAF optimizations only reduce the number of extra additions.
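
For a taste of how NAF saves additions, here is a short sketch (my own, for illustration) that recodes a scalar into non-adjacent form, i.e. digits in $\{-1, 0, +1\}$ with no two adjacent nonzero digits; each nonzero digit costs one extra addition or subtraction, and subtracting a point is as cheap as adding it:

```python
def naf(n):
    """Non-adjacent form of n, least significant digit first:
    digits in {-1, 0, +1} with no two adjacent nonzero digits."""
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)    # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# 255 = 0b11111111 has eight 1 bits, but its NAF writes it as 2^8 - 1:
# only two nonzero digits, so far fewer extra additions are needed.
print(naf(255))                # [-1, 0, 0, 0, 0, 0, 0, 0, 1]
print(bin(255))                # 0b11111111
```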

The same algorithm is known as square-and-multiply in the context of modular exponentiation; it has its own Wikipedia page. It is the same thing, except that point addition becomes multiplication of integers, and multiplication by an integer becomes exponentiation.
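
To make the correspondence explicit, here is the same loop written as square-and-multiply for modular exponentiation (a sketch only; in Python the built-in pow(base, exp, mod) already does this):

```python
def square_and_multiply(base, exponent, modulus):
    """Compute base**exponent % modulus bit by bit: squaring plays the role
    of point doubling, multiplying by base that of the extra addition of G."""
    acc = 1
    for bit in bin(exponent)[2:]:
        acc = acc * acc % modulus          # "doubling" step
        if bit == '1':
            acc = acc * base % modulus     # "extra addition" step
    return acc

print(square_and_multiply(5, 197, 1009), pow(5, 197, 1009))   # both agree
```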

Thomas Pornin