61

Cryptographic tools often become adopted even when their security proofs lack mathematical rigour, or are missing altogether.

Are there famous cases of security breaches in the industry, where the underlying cryptography was (up until then) considered secure?

Snoop Catt
  • 1,297
  • 7
  • 14
  • 41
    Yes--it is called the history of cryptography. – Patriot Jun 03 '19 at 23:09
  • All of them? Formal verification is hard, also there's hardware too (which now is verified in state but seldom in time and power :/ Or cache access :/ ) – Alec Teal Jun 06 '19 at 15:31
  • 1
    The essential problem here is that not everything that actually does work can be mathematically proven. Don't believe me? Prove me wrong. ; ) – candied_orange Jun 06 '19 at 20:09
  • @AlecTeal Formal verification is often done w/ a simplistic description that doesn't capture real time behavior. It's interesting but not the whole story. – curiousguy Jun 08 '19 at 23:18

6 Answers

71
  • The SSH protocol has a complicated record format with an encrypted message length, variable padding, encrypt-and-MAC, etc. This complicated system, which was designed without any formal analysis relating the security of the system to the security of the building blocks, turned out to be vulnerable to an attack (paywall-free) exploiting the MAC verification as an oracle for information about the plaintext, leading to plaintext recovery attacks on SSH in popular implementations like OpenSSH.

  • The SSL/TLS protocols have a long and sordid history of informal design, leading to a multitude of attacks:

    • The Bleichenbacher RSAES-PKCS1-v1_5 padding oracle attack, which breaks an RSA-based encryption scheme designed without formal analysis, broke SSLv3 with RSA key agreement in 1998, and then broke it again in 2014 with the POODLE attack because of security-destroying protocol compatibility fallbacks, and then broke it again in 2018 with the ROBOT attack.
    • The BEAST attack, which had been noted by Phil Rogaway in 2002 and documented in theory for SSL/TLS in 2004, exploited the failure of SSL/TLS to follow the security contract of CBC which requires the IV to be unpredictable in advance for each message—the protocol had deployed CBC without formal analysis of how the IV is chosen.
    • The Lucky 13 attack (web site) recovers plaintext by using timing of padding verification as a CBC padding oracle, which arose from a CBC padding mechanism designed without even a simple-minded formal analysis of its timing characteristics.
    • The TLS renegotiation attack exploited a complicated state machine in the TLS protocol involving key renegotiation and authentication, which was never formally analyzed for its security properties, to forge messages sent to a TLS peer without its notice.
  • The OpenPGP protocol was designed by the ad hoc '90s-style composition of generic and poorly-understood public-key encryption and signature building blocks with no formal treatment for how they fit together.

    • The original promise of ‘Pretty Good Privacy’ was to keep email private. But the method for combining fancy math primitives like RSA and standard symmetric cryptography like AES to encrypt long messages in OpenPGP was designed without formal analysis relating them to the security of the building blocks.

      And it turned out this method was exploitable in practice in real email clients in an attack dubbed EFAIL that can leak message content (my answer on it). It took a decade and a half after the problem was first reported in theory in 2002 for the OpenPGP world to catch up when an attack was published in practice in 2018.

    • A secondary promise of PGP was to prevent forgery in private email. But there is no formal concept of a message from Alice to Bob—only of an encrypted message to Bob, and a signed message from Alice, nested however you please*…and usually nested in a way that Charlie can take a message Alice sent to him and make it look to Bob like Alice had sent it to Bob instead. Alice can, of course, take the extra step to name the recipient in the message, and Bob can take the extra step to check for his name. The software could also do this. The OpenPGP designers could have tried to formalize the human-relevant interactions and analyzed their security properties—and could have designed the cryptography to support human use.

      But, when confronted with the problem, instead the OpenPGP designers abdicated that responsibility by asserting that cryptography cannot solve the problem of appropriate use of technology. To this day the OpenPGP protocol doesn't have a formal concept of a message from Alice to Bob—which could be implemented by standard well-understood public-key authenticated encryption—even though it is nominally intended for private email.

    • The public-key encryption and signature schemes chosen in OpenPGP were themselves designed without any formal analysis relating them to well-studied hard problems like the RSA problem or the discrete log problem, and it turned out that both the particular Elgamal signature scheme and RSA encryption scheme used by OpenPGP were problematic.

      It is unclear to me whether these led to practical exploits on OpenPGP—except perhaps for the implementation error in GnuPG of using the same per-message secret for Elgamal signature and encryption, which illustrates the danger of proving a protocol secure without proving the code implements the protocol correctly.
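To make the "message from Alice to Bob" point above concrete, here is a toy, stdlib-only sketch (my own illustration, not anything OpenPGP actually does) of symmetric authenticated encryption whose MAC is bound to the sender and recipient identities, so a message sealed for Bob fails verification if presented as a message to Charlie:

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustrative only,
    # not a vetted construction.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, sender: str, recipient: str, msg: bytes) -> bytes:
    """Encrypt-then-MAC, with the MAC bound to 'sender -> recipient'."""
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(msg, _keystream(key, nonce, len(msg))))
    header = f"{sender}->{recipient}".encode()
    tag = hmac.new(key, header + nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(key: bytes, sender: str, recipient: str, blob: bytes):
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    header = f"{sender}->{recipient}".encode()
    expect = hmac.new(key, header + nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        return None  # tampering, wrong key, or wrong claimed sender/recipient
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

Because the header is under the MAC, Bob checking a message as "Alice->Bob" will reject a blob that Alice actually sealed as "Alice->Charlie", which is exactly the binding OpenPGP's signature-inside-encryption nesting fails to provide.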


These attacks are all on protocols that were designed by ad hoc engineering without formal analysis guaranteeing the security of the protocol relative to the security of its building blocks. But the primitive building blocks—like $x^3 \bmod{pq}$, cubing modulo a product of secret primes; like $g^x \bmod p$, exponentiation of a standard base modulo a safe prime; like the AES-256 permutation family; like the Keccak permutation—don't have formal analysis guaranteeing their security relative to anything. What gives?
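The two arithmetic primitives named here fit in a few lines of Python; a toy illustration with insecurely small parameters (my numbers, purely for demonstration):

```python
# Toy illustration of the primitives above, with insecurely small numbers.
p, q = 101, 113            # real RSA uses secret primes of 1024+ bits each
n = p * q
phi = (p - 1) * (q - 1)
e = 3
d = pow(e, -1, phi)        # private exponent: e*d == 1 (mod phi); Python 3.8+

m = 42
c = pow(m, e, n)           # x^3 mod pq: easy to compute...
assert pow(c, d, n) == m   # ...and easy to invert *with* the factorization,
                           # but believed hard to invert without it.

g, P = 3, 2**127 - 1       # exponentiation modulo a (here Mersenne) prime
x = 123456789
y = pow(g, x, P)           # g^x mod p: easy forward; recovering x from y
                           # is the discrete-log problem.
```

The point of the surrounding paragraph is that no theorem reduces the hardness of inverting these maps to anything else; the confidence comes from decades of failed cryptanalysis, not from proofs.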

The formal analysis or provable security for protocols, which consists of theorems relating security of a protocol to security of its building blocks, is only one part of a global sociological system for getting confidence in security of cryptography:

  • The reason we suspect the RSA problem is hard is that some of the smartest cryptanalysts on the planet have had strong motivation to study it for decades, and they have left a track record of decades of failure to find any way to break (say) RSA-2048 at cost less than $2^{100}$ per key. The same goes for discrete logs in certain groups, for AES-256, etc.

  • The formal analysis of a protocol using RSA and AES enables the cryptanalysts to focus their effort so they don't have to waste time studying SSH, studying OpenPGP, studying SSLv3, studying TLS, studying WPA, etc., to find whether there's some way to break those protocols. If the formal analysis is done well enough, the cryptanalysts can spend their effort on a small number of primitives, and the more effort they spend failing to break those primitives, the more confidence we have in the primitives and everything built on them.

Protocols like SSH, OpenPGP, SSL/TLS, etc., without formal analysis, are a colossal waste by society of the world's supply of cryptanalysts. Formal analysis enables much more efficient use of the world's resources—and it would have paid off, because nearly all of the above attacks could have been caught just by studying the security properties of the protocols involved and the security contracts of the building blocks: CBC in TLS with predictable IVs, public-key encryption in PGP failing the IND-CCA standard, RSA-based encryption schemes in PGP and TLS without a reduction to RSA security, representing one's error responses in SSH and TLS as oracles for a chosen-ciphertext adversary.
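For instance, the predictable-IV failure behind BEAST can be demonstrated without real AES. The sketch below is my own toy, not the TLS record layer: it uses an HMAC-based stand-in for the block cipher's forward direction, which is all CBC encryption needs.

```python
import hashlib
import hmac
import os

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    # Stand-in PRF for the block cipher's forward direction; CBC *encryption*
    # never needs decryption, so a PRF suffices to demonstrate the attack.
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def xor(*parts: bytes) -> bytes:
    out = bytearray(BLOCK)
    for part in parts:
        for i, b in enumerate(part):
            out[i] ^= b
    return bytes(out)

def cbc_encrypt_block(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    return E(key, xor(iv, plaintext))

key = os.urandom(32)

# Victim encrypts a secret block under IV1; attacker observes IV1 and C1.
secret = b"hunter2".ljust(BLOCK, b"\x00")
iv1 = os.urandom(BLOCK)
c1 = cbc_encrypt_block(key, iv1, secret)

# In SSL 3.0/TLS 1.0 the next IV is the previous ciphertext block: predictable.
iv2 = c1

# Attacker injects a chosen block crafted so that, iff the guess is right,
# the new ciphertext block equals C1:  E(iv2 ^ (iv2 ^ iv1 ^ g)) = E(iv1 ^ g).
guess = b"hunter2".ljust(BLOCK, b"\x00")
c2 = cbc_encrypt_block(key, iv2, xor(iv2, iv1, guess))
assert c2 == c1          # guess confirmed

wrong = b"letmein".ljust(BLOCK, b"\x00")
c3 = cbc_encrypt_block(key, iv2, xor(iv2, iv1, wrong))
assert c3 != c1          # wrong guess rejected
```

This is exactly the chosen-plaintext check that the CBC security contract (unpredictable IVs) rules out, and that a formal analysis of the protocol's IV choice would have flagged.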

Not all protocols are easy to formally analyze once designed—like the trainwreck that was TLS renegotiation—but protocols can be designed out of well-understood composable parts with clear security contracts to facilitate formal analysis, instead of ad hoc agglomerations of crypto gadgets like the '90s, and today there are tools like the Noise protocol framework with the Noise explorer to compose protocols with built-in formal analysis.


* Sometimes infinitely nested!

Squeamish Ossifrage
  • 48,392
  • 3
  • 116
  • 223
  • Can you elaborate on the analysis is not actually that hard? – user2768 Jun 04 '19 at 06:47
  • "they have left a track record of decades of failure to find any way to break (say) RSA-2048 at cost less than $2^{100}$ per key" Other than quantum computers, I think? – nick012000 Jun 04 '19 at 12:18
  • @user2768 Better? – Squeamish Ossifrage Jun 04 '19 at 14:22
  • 9
    @nick012000 If you know a design for a Shor-capable quantum computer with a predictable cost below $2^{100}$, a lot of people would like to hear about it! – Squeamish Ossifrage Jun 04 '19 at 14:23
  • @SqueamishOssifrage Understanding that the key to easy analysis is to design protocols from well-understood composable parts is a nice improvement, thanks. – user2768 Jun 04 '19 at 14:56
  • I feel a lot better than I used to about my custom protocols on reading this, because I designed them to be mind-bogglingly simple. On the other hand, EFAIL is impressive. – Joshua Jun 05 '19 at 18:20
  • Can we not use "paywall-free" (I thought it was like two links paywall then like a "/" type notation, then sci-hub) but just "open" or something? – Alec Teal Jun 06 '19 at 15:32
  • 2
    @AlecTeal I'm sorry, I don't understand. What are you asking? I provide the paywall-free link so that anyone in the world can read the paper without paying academic extortion money to IEEE; I also provide the authoritative IEEE link because it is sometimes useful for bibliographic or reference purposes. Are you asking for fewer links, or are you asking for different links, or are you asking for different link text? – Squeamish Ossifrage Jun 06 '19 at 15:36
18

When choosing curves for use in elliptic-curve cryptography, some have suggested using various classes of curves that avoid certain "bad" properties which would make the system vulnerable to attack.

The MOV attack breaks ECDSA on a class called supersingular curves. To avoid this, some suggested using curves from another class, called anomalous curves, which are guaranteed not to be supersingular.

However, it was then found that these curves suffer from an arguably worse flaw, exploited by Smart's attack, which also breaks ECDSA.

I don't know if anyone acted on the suggestion to use these in the meantime, but it demonstrates that very subtle mathematical properties of elliptic curves can have a massive impact on the system as a whole, and that not understanding these properties sufficiently well can have serious consequences.
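These classes are distinguished by the curve's point count: over a prime field $p > 3$, supersingular means $\#E = p + 1$ (trace zero), and anomalous means $\#E = p$ exactly. A brute-force sketch over toy fields (real systems use Schoof-style point counting; the tiny curves here are purely for illustration):

```python
def count_points(p: int, a: int, b: int) -> int:
    """Brute-force the order of y^2 = x^3 + a*x + b over F_p (tiny p only)."""
    assert (4 * a**3 + 27 * b**2) % p != 0, "singular curve"
    squares = {(y * y) % p for y in range(p)}
    n = 1  # point at infinity
    for x in range(p):
        rhs = (x**3 + a * x + b) % p
        if rhs == 0:
            n += 1          # one point, with y = 0
        elif rhs in squares:
            n += 2          # two points, +/- y
    return n

# Supersingular example: y^2 = x^3 + 1 over F_11 has 11 + 1 = 12 points.
assert count_points(11, 0, 1) == 12

# Anomalous example: y^2 = x^3 + 5 over F_7 has exactly 7 points --
# the case Smart's attack reduces to an easy additive discrete log.
assert count_points(7, 0, 5) == 7
```

The subtlety is that both conditions look like innocuous arithmetic coincidences, yet each one collapses the discrete-log problem on the curve.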

Dave
  • 281
  • 1
  • 4
9

I'm surprised that nobody has mentioned the Dual EC DRBG backdoor.

In short, there was an issue with a specific random number generator: it was shown that by choosing specific initial parameters, a backdoor was possible.

Reuters reported:

As a key part of a campaign to embed encryption software that it could crack into widely used computer products, the U.S. National Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the computer security industry… RSA’s contract made Dual Elliptic Curve the default option for producing random numbers in the RSA toolkit.

Since the Snowden leaks, it is widely presumed that the NSA had indeed implemented a backdoor and thus could break the security.

Important: The possibility of a backdoor was published before the Snowden leaks.

In hindsight, it was certainly foolish to keep assuming the algorithm was secure even after it had been shown mathematically that a backdoor could exist.


For more information:

Squeamish Ossifrage
  • 48,392
  • 3
  • 116
  • 223
AleksanderCH
  • 6,435
  • 10
  • 29
  • 62
  • 2
    Could the downvoter please explain why this is not a good answer? – AleksanderCH Jun 04 '19 at 06:58
  • 6
    Probably cause (a) it's an unproven conspiracy theory, and (b) the claim doesn't show the lack of mathematical rigor asked for. – cHao Jun 04 '19 at 13:12
  • 9
    @cHao The only part missing from your (a) is the actual back door key for the standard parameters, which obviously the NSA is not going to share with the world. The design of the back door is well-understood—it was patented back in 2005!—and there is widespread consensus among cryptographers that you would be an idiot to choose Dual_EC_DRBG if you weren't doing it for the back door because even ignoring the back door it's a stupidly slow and detectably biased PRNG. – Squeamish Ossifrage Jun 04 '19 at 15:10
  • 3
    @AleksanderRas Can you elaborate on how Dual_EC_DRBG was thought secure, and adopted and deployed in practice under that premise, until someone did a formal analysis and concluded the protocol was broken? – Squeamish Ossifrage Jun 04 '19 at 15:12
  • I see this more as an example where very high (but anyway not high enough) mathematical competence (of the NSA) was used to sneak in a backdoor into a standard. @cHao: John Kelsey of the NIST was not amused and admitted that they lacked competency at NIST in elliptic curves at the time the standard was released. – j.p. Jun 06 '19 at 06:22
  • 1
    @cHao Unproven conspiracy theory? The existence of the Dual_EC_DRBG backdoor was confirmed in 2013(14?), along with the $10 million payment. You might be thinking about the rumors of backdoors in NIST curves for ECC like P-256, which are indeed unproven and, in the opinions of respected cryptographers, quite unlikely at least in part due to the difficulty of pulling it off. The Dual_EC_DRBG thing is unrelated, despite both NIST curves and the DRBG involving elliptic curve cryptography. – forest Jun 06 '19 at 07:04
  • @forest: The "Allegedly:" in the answer is a weasel word. Considering how easily accusations are misused as evidence, the word adds enough uncertainty in my eyes to make the claim, yes, an "unproven conspiracy theory". Proven true statements don't have to, and shouldn't, be couched in such language. – cHao Jun 06 '19 at 11:57
  • @cHao I agree, but truth isn't safe from other factors - couching with "allegedly" also effects things like the cost * risk of going through a libel or slander lawsuit, which is in turn partly a function of truth-independent things like financial ability to hire better or more lawyers and to absorb legal costs long enough to maybe recoup them by winning decisively enough to get awarded costs and damages from the suing party. Even when that doesn't apply, habitual couching with "allegedly" saves mental effort, time, and opportunity costs that would be spent on verifying how much it applies. – mtraceur Jun 06 '19 at 18:41
  • 3
    @cHao I edited the answer to reflect the exact report by Reuters in 2013, not long after the New York Times reported on NSA internal documents describing a program of deliberate sabotage of cryptography standards. (AleksanderRas: Apologies if this interfered with your intent; feel free to change it again, of course, as it is your answer.) This is not some wacko Illuminati chemtrail QAnon bullshit. No self-respecting cryptographer would ever choose Dual_EC_DRBG unless they wanted a back door, which is why the NSA went to the business leaders instead of technologists at RSA, Inc. – Squeamish Ossifrage Jun 06 '19 at 22:19
  • @forest My understanding is that the NSA could have added a backdoor to Dual_EC_DRBG if they had known how to do it at the time, that the existence of a backdoor could only be proven by the NSA by demonstrating its use, and that the non-existence of a backdoor would be impossible to prove. Obviously the possibility alone means Dual_EC_DRBG cannot be trusted. – gnasher729 Jun 06 '19 at 22:49
  • 1
    @gnasher729 If NSA didn't generate the standard parameters with a back door, then they're astonishingly incompetent idiots, because there is no other reason to use such an awful PRNG. NSA and Certicom worked in late '90s and early 2000s to push the world toward ECC; NSA paid RSA to use Dual_EC_DRBG; Dan Brown of Certicom patented the back door; NSA had a program of deliberate sabotage of cryptography standards; when NIST asked NSA where the standard parameters came from, NSA said not to talk about it. The only missing part is the secret key, which obviously NSA will never share. – Squeamish Ossifrage Jun 07 '19 at 01:00
  • 1
    @gnasher729 The story is different with the standard curves like NIST P-256, in which there's no evidence of a back door—just that the curve parameters are hashes of seeds that don't have explanations, but nobody has even published a way that the standard curves even could have back doors. Perhaps you are confusing the NIST curves with Dual_EC_DRBG? – Squeamish Ossifrage Jun 07 '19 at 01:05
  • @gnasher729 They did know how to do so at the time. As was mentioned before, the technique was even patented. It wasn't a difficult technique. The existence of a backdoor could only be cryptographically proven if they gave you the backdoor seed, but we didn't need that because of the Snowden leaks. – forest Jun 07 '19 at 07:55
  • 2
    @cHao The term "allegedly" has been removed due to the confusion it has caused. You are right, the statement shouldn't have used such vague language. The existence of a backdoor is known. – forest Jun 08 '19 at 02:15
3

Note that even where an algorithm is proven "correct" (that is to say, proven to have certain well-defined properties), there may still be flaws in the implementation: the implementation is likely to rely on underlying hardware and software that almost certainly have not been subject to the same level of mathematical scrutiny. So you get flaws such as Meltdown and Spectre, whereby execution of an algorithm leaves discoverable traces of its internal variables. Any mathematical proof of a security algorithm almost certainly relies on the assumption that the implementation at that level is flawless.
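A software-level example of the same gap: a naive byte-wise comparison of a MAC tag exits early at the first mismatch, so its running time leaks how long a matching prefix the attacker has found, even if the MAC itself is proven secure. A sketch that counts comparison steps as a deterministic stand-in for timing:

```python
import hmac

def naive_compare(a: bytes, b: bytes):
    """Early-exit comparison; returns (equal, steps) to make the leak visible."""
    steps = 0
    if len(a) != len(b):
        return False, steps
    for x, y in zip(a, b):
        steps += 1
        if x != y:
            return False, steps
    return True, steps

secret_tag = b"\xab\xcd\xef\x01"

_, s1 = naive_compare(b"\x00\x00\x00\x00", secret_tag)  # no prefix matches
_, s2 = naive_compare(b"\xab\xcd\x00\x00", secret_tag)  # two bytes match
assert s2 > s1   # work (and hence time) grows with the matching prefix

# The fix: a constant-time comparison, e.g. the stdlib's hmac.compare_digest.
assert not hmac.compare_digest(b"\xab\xcd\x00\x00", secret_tag)
```

This is precisely the kind of observable that attacks like Lucky 13 exploit: the proof models the comparison as an atomic yes/no, while the hardware executes it one byte at a time.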

Michael Kay
  • 131
  • 2
  • I think you mean when an algorithm is proven "secure"? – user2768 Jun 04 '19 at 14:58
  • An algorithm can't be proven to be secure. It can be proven to have certain defined properties which might meet your definition of "secure", but your first task is to define what you mean by "secure" in mathematical terms. – Michael Kay Jun 04 '19 at 16:26
  • Spectre, not Sceptre. =) – jamesdlin Jun 04 '19 at 18:26
  • @MichaelKay Proven secure is a widely used phrase (rightly or wrongly), but that's not my point: Correctness has a technical meaning in the literature, it is used to capture a functional, non-security requirement, e.g., correctness of an encryption scheme requires decryption of a ciphertext to return the underlying plaintext (under some sensible conditions). – user2768 Jun 05 '19 at 07:18
  • OK, sorry if the terminology I learnt many years ago doesn't match usage in this community. – Michael Kay Jun 05 '19 at 08:28
1

Early Wi-Fi protocols (WEP) used far too small an IV. Plugging in known data rates and analyzing the protocol would have revealed this.
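The arithmetic involved is trivial; a sketch of the birthday-bound estimate (the 500 frames/second traffic rate is an assumed, illustrative figure):

```python
import math

IV_BITS = 24                       # WEP's IV field
iv_space = 2 ** IV_BITS            # 16,777,216 possible IVs

# Birthday bound: ~50% chance of a repeated IV (i.e. keystream reuse with
# RC4, which is catastrophic) after about 1.177 * sqrt(space) frames.
frames_to_collision = 1.177 * math.sqrt(iv_space)
assert frames_to_collision < 5000  # under five thousand frames

# Exhausting the entire IV space at an assumed 500 frames/second:
hours_to_exhaust = iv_space / 500 / 3600
assert hours_to_exhaust < 10       # well under a single day of traffic
```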

Rob
  • 349
  • 1
  • 12
0

The MIFARE Classic is a type of contactless smart card used for many applications such as transit fare collection, tolling, loyalty cards, and more.

NXP, the creator of these cards, used a proprietary security protocol, Crypto-1, to secure them.

Wikipedia notes that "the security of this cipher is ... close to zero".

These cards are known to be easily crackable with the right equipment and are thoroughly insecure.

Although this is a dramatic example, Crypto-1 was probably never widely considered secure, as it was a proprietary, closed-source cipher. Despite that, these cards had very wide circulation and are still in the field today.
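Even setting aside Crypto-1's algebraic weaknesses, its 48-bit key is within brute-force range; a back-of-envelope sketch (the 10^9 keys/second rate is my assumption for modest dedicated hardware):

```python
keyspace = 2 ** 48          # Crypto-1's 48-bit key
rate = 10 ** 9              # assumed keys/second on modest dedicated hardware
days = keyspace / rate / 86400
assert days < 4             # the whole keyspace falls in a few days
```

Published attacks that exploit the cipher's internal structure are far faster still, recovering keys in seconds rather than days.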