
Let's say that I would like to calculate all Legendre symbols from $1$ to $p-1$ modulo $p$. Is there a way to calculate them in an incremental way? For example, an incremental table of Legendre symbols could help to calculate them in a memoized algorithm, but let's assume we can't do that due to processing limitations. Is there a nice solution?

chubakueno
  • 5,623

2 Answers

4

Less is known about the non-multiplicative structure of Legendre symbols. We know that the Legendre symbol is $1$ for exactly half of $\{1,\dots,p-1\}$ and $-1$ for the other half, but the way these values are distributed is not clear.

The complexity of the usual computation via the law of quadratic reciprocity is logarithmic in $p$, so you could calculate all symbols in $\mathcal O(p\log p)$ time. You might save a lot of work by memoizing all previously calculated symbols, but it can be done even more simply, as described below.
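For comparison, here is a minimal sketch of that per-symbol route (in Python; the function name jacobi is just illustrative). It is the standard binary algorithm driven by quadratic reciprocity and its two supplements, and it needs $O(\log p)$ arithmetic operations per symbol, much like the Euclidean algorithm:

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd n > 0; equals the Legendre symbol when n is an odd prime.
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:              # strip factors of 2 via the second supplement (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                    # quadratic reciprocity: swap the arguments ...
        if a % 4 == 3 and n % 4 == 3:
            result = -result           # ... and flip the sign when both are 3 mod 4
        a %= n
    return result if n == 1 else 0

Calling this once for each $a$ from $1$ to $p-1$ is exactly the $\mathcal O(p\log p)$ approach mentioned above.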

If you need to calculate all symbols, just go by the definition. Compute $$1^2,\dots,\left(\frac{p-1}{2}\right)^2 \pmod p.$$ This will give you exactly all quadratic residues. You can further reduce costs here by calculating the next square as $n^2=(n-1)^2+2n-1$. So you just need to do $\frac{p-1}{2}$ iterations, in each of which you add twice and reduce modulo $p$ once.

I don't think it gets any easier than that.

EDIT: After ThomasAndrews' legitimate comment, I decided to add some pseudo-code which provides a fast implementation:

# p is an odd prime, given beforehand
residues = []                  # will collect all quadratic residues mod p
s = 0
for i in range(1, (p - 1) // 2 + 1):
    s = s + 2 * i - 1          # now s = i^2 mod p, since i^2 = (i-1)^2 + (2i - 1)
    if s >= p:                 # 2i - 1 < p, so a single subtraction suffices
        s = s - p
    residues.append(s)

Now `residues` contains exactly the values for which $\left(\frac{\cdot}{p}\right)=1$.
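If the goal is the full table of symbols for $1,\dots,p-1$ as in the question, one possible follow-up (a sketch of mine; the array name symbol is just illustrative, and it assumes the residues list built above):

symbol = [-1] * p              # symbol[a] will hold (a/p) for a = 1, ..., p-1
symbol[0] = 0                  # index 0 unused; (0/p) = 0 by convention
for r in residues:
    symbol[r] = 1              # +1 for every quadratic residue, -1 for the rest

Then symbol[a] equals $\left(\frac{a}{p}\right)$ for every $a$ in $1,\dots,p-1$, at the cost of the $\frac{p-1}{2}$ iterations above plus one linear sweep.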

Tomas
  • 4,534
  • Nice. Of course, addition $\pmod p$ is still an $O(\log p)$ operation, so this is still $O(p\log p)$. The constant will be a lot smaller, though :) – Thomas Andrews Jul 19 '13 at 15:58
  • Thank you, this really helps to increase the speed of my algo. – chubakueno Jul 19 '13 at 16:01
  • @ThomasAndrews: You are right in general. In this case, however, we only add $2n-1$, which is always less than $p$. Thus, after adding $2n-1$, the new number is smaller than $2p$, and reducing modulo $p$ boils down to subtracting $p$ in case the calculated number is bigger than $p$. – Tomas Jul 19 '13 at 16:05
  • So, with modern computer architecture this would just be $O(p)$. Nice. – chubakueno Jul 19 '13 at 16:11
  • I'm just talking about the addition - the addition of two numbers with $\log p$ binary digits takes $\log p$ time. – Thomas Andrews Jul 19 '13 at 16:31
  • @ThomasAndrews modern CPUs are hardwired to do that in $O(1)$ time for standard, register-sized integers. – chubakueno Jul 19 '13 at 16:45
  • Of course, but $O$ notation is supposed to apply to incomprehensibly large $p$ as well as the $p$ that can be worked with by hand. Can you really add two numbers with 4 billion bits in the same time that you can add two 8-bit numbers? No, you can't. It seems like $O(1)$ time for small $p$, but that's just fudging. If this were a comp sci forum, I might agree, but since this is a math forum, we should be precise. @chubakueno – Thomas Andrews Jul 19 '13 at 16:54
  • As I constantly remind people, in reality, if the output of a process is a sequence of $f(n)$ elements, it is never mathematically possible to do that operation in faster than $O(f(n))$ time. We can parallelize to reduce the constant factor a lot, and to make it almost vanish in most practical cases, but we cannot do better for arbitrary $n$. – Thomas Andrews Jul 19 '13 at 17:02
  • @ThomasAndrews you are right; with arbitrary-precision arithmetic it is again $O(p\log p)$. As a side note, although it is not deterministic and runs only on quantum computers, Grover's algorithm (http://en.wikipedia.org/wiki/Grover's_algorithm) seems related and very interesting for this calculate-every-item kind of problem. – chubakueno Jul 19 '13 at 17:10
  • @ThomasAndrews: I get confused all the time by the different interpretations of "$O(\cdot)$ time". I agree that this takes $O(p\log p)$ single-digit additions. The Legendre symbol computation method, however, takes $O(p\log p)$ modulo-$p$ reductions (similarly to the Euclidean algorithm). – Tomas Jul 19 '13 at 20:54
  • @Tomas Ah, yes, that might be the case. – Thomas Andrews Jul 19 '13 at 21:35
1

The quadratic reciprocity theorem might be useful.

  • Yes, it is reasonably fast to compute a single value with quadratic reciprocity, reducing $\left(\frac{2^na}{p}\right)$ to $\left(\frac{2^n}{p}\right)\left(\frac{a}{p}\right)$ just by looking at the trailing zeroes (in binary representation), but I would like to know if there is an algorithm to compute them using the previous calculation, without memoizing all of them. – chubakueno Jul 19 '13 at 15:56