
On the blog post https://blog.josefsson.org/2014/06/23/offline-gnupg-master-key-and-subkeys-on-yubikey-neo-smartcard/ there is the following passage:

Generate master key

Below I will use a 3744 bit RSA key, where the key size is selected based on the assumption that people will focus efforts to crack RSA keys on the usual power-of-two key sizes.

But that leaves me wondering: do unusual key sizes really put a potential attacker at a disadvantage? After all, couldn't they just read the key size from the public key?
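
Indeed, the key size is plainly visible to anyone holding the public key. As a minimal sketch (using the third-party Python cryptography package; the 3744-bit size just mirrors the blog post):

```python
# Minimal sketch: generate an RSA key with the unusual 3744-bit size from the
# blog post, then show that the modulus size can be read straight off the
# public key. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=3744)
public_key = private_key.public_key()

# Anyone with the public key can recover the key size:
print(public_key.key_size)                          # 3744
print(public_key.public_numbers().n.bit_length())   # 3744
```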

Jakub
  • From this answer: The difficulty of factoring (thus, as far as we know, the security of RSA in the absence of side-channel and padding attacks) grows smoothly with $n$. So it would appear that there is no real benefit in terms of security. – mikeazo Jul 20 '16 at 17:02
  • @mikeazo: want to make that the answer??? – poncho Jul 20 '16 at 17:10
  • @poncho, I did. I felt bad basically pasting a portion of another answer here. But better to have an answer, right? – mikeazo Jul 20 '16 at 17:22
  • There may be a small benefit if somebody builds hardware which makes power-of-two assumptions or has a fixed key size. If somebody cares that much to crack your key you've got other problems. – David Jul 20 '16 at 19:59
  • In general, with these sorts of ideas you're going into uncharted waters by such deviations; you might create something nobody would think of the right way to attack, or you might make something fundamentally flawed that someone smarter than you will recognize. – dandavis Jul 22 '16 at 17:49

1 Answer


From this answer:

The difficulty of factoring (thus, as far as we know, the security of RSA in the absence of side-channel and padding attacks) grows smoothly with $n$.

So, if factoring is the method of choice for breaking RSA, an unusual key size doesn't seem to really help.
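
To make the "grows smoothly" point concrete, here is a rough sketch (assuming the usual GNFS heuristic cost $L_n[1/3, (64/9)^{1/3}] = \exp\big((c + o(1))(\ln n)^{1/3}(\ln\ln n)^{2/3}\big)$ and ignoring the $o(1)$ term) that evaluates the estimate around the power-of-two sizes; nothing special happens at 4096 bits:

```python
# Rough illustration only: evaluate the heuristic GNFS cost
#   L_n[1/3, c] = exp((c + o(1)) * (ln n)^(1/3) * (ln ln n)^(2/3)),  c = (64/9)^(1/3),
# ignoring the o(1) term, to show the work factor changes smoothly with the
# modulus size: there is no cliff at power-of-two key sizes.
import math

def gnfs_log2_cost(bits):
    """Approximate log2 of the GNFS work factor for a modulus of `bits` bits."""
    c = (64 / 9) ** (1 / 3)
    ln_n = bits * math.log(2)          # natural log of an n with `bits` bits
    return c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3) / math.log(2)

for bits in (3584, 3744, 4095, 4096, 4097):
    print(bits, round(gnfs_log2_cost(bits), 1))
```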

mikeazo
  • After encouraging you to make this the answer, I thought of one obscure scenario where an odd-sized key might be a bit better; if you don't have good entropy, and use a standard-sized key, you might generate the same key as someone else (or worse, share one of the primes with someone else); that's a bit less likely to happen with an odd-sized key. Of course, the correct solution is "have good entropy" – poncho Jul 20 '16 at 17:41
  • Not so obscure. Common factors due to insufficient entropy have been found in practice. – otus Jul 20 '16 at 18:03
  • There was also the Debian OpenSSL loss-of-entropy failure, which led to always generating one of a small set of weak keys. Reconstructing the set took some work, and so was only done for default sizes. Using a size outside the default set would mean someone would have to target you individually, rather than using a precomputed value in a mass attack. That suggests operational practice could also add a few bits of entropy in the form of additional bits of key length, to increase the precomputation expense for any such mass attack. – Phil Miller Jul 20 '16 at 21:22
  • @mikeazo I would recommend adding the caveat "Assuming you have enough entropy from a trusted source, then..." – Dessa Simpson Jul 21 '16 at 02:14
  • @Novelocrat The Debian OpenSSL fiasco also resulted in defensive tools to find and block vulnerable keys on your infrastructure. They were only made for common key sizes. Your unusually sized keys would be left out in the cold, and you would have to patch the tools yourself. – Matt Nordhoff Jul 21 '16 at 03:46
  • @Novelocrat, also, if you used the predictable bits to decide on the key size, the result would still be predictable... Asking the user for randomness in the form of the key size would prompt questions like "why do I have to choose, why isn't there a sane default?" – ilkkachu Jul 21 '16 at 09:34
  • I would add that the RSA Factoring Challenge supports your answer (lots of unusual key sizes were used, to probe how long it takes to factor the modulus on modern hardware). – yagmoth555 Jul 23 '16 at 14:31
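
A footnote to poncho's and otus's comments above: the shared-prime failures found in practice were detected by computing GCDs across large collections of public moduli. A minimal sketch of that check (real surveys use a batch-GCD algorithm; the pairwise version below is only to show the idea):

```python
# Minimal sketch of the shared-prime check mentioned in the comments: if two RSA
# moduli were generated with too little entropy and happen to share a prime,
# gcd(n1, n2) reveals it and both keys are broken.
from math import gcd
from itertools import combinations

def find_shared_primes(moduli):
    """Yield (n1, n2, p) for every pair of moduli with a nontrivial common factor p."""
    for n1, n2 in combinations(moduli, 2):
        p = gcd(n1, n2)
        if 1 < p < n1:
            yield n1, n2, p

# Toy moduli built from small primes (real moduli are thousands of bits):
moduli = [11 * 13, 13 * 17, 19 * 23]
for n1, n2, p in find_shared_primes(moduli):
    print(f"{n1} and {n2} share the prime factor {p}")
```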