
This page on Wikipedia states

"Brute-force attacks can be made less effective by obfuscating the data to be encoded making it more difficult for an attacker to recognize when the code has been cracked or by making the attacker do more work to test each guess."

This might make sense, but are there any examples of widely used encryption software doing such a thing?

daniel
  • If I were inventing things instead of asking questions, I would have my software do a block cipher on the plain data and append the key to the newly encrypted data. I'd have it do this at least twice, now the data would look like uniformly distributed random data (because both the key and encrypted message are), and the attacker would have to number crunch at least 1 full decryption of the message for each brute force attempt. – daniel Jun 21 '17 at 11:53
  • Since most keys are random (not like passwords) and the key space is far too large to brute force I think such things are uncommon in practice. – Elias Jun 21 '17 at 11:57
  • @Elias but they still design systems to be strong against brute force attacks, that is why people keep making keys longer. Also https://en.wikipedia.org/wiki/RSA_Secret-Key_Challenge – daniel Jun 21 '17 at 12:09
  • 2
    I doubt any public software does that , since an attacker with a copy can just do the same as the software to unobfuscate/verify. – otus Jun 21 '17 at 12:15
  • @otus But this is all fine with Kerckhoffs's second principle: the attacker would have to do the same for every key he checks. Maybe obfuscate is the wrong word, but it's not my wiki. – daniel Jun 21 '17 at 12:19
  • @daniel The reason I would do the newdata = (key + cipher(data)) process more than once is so that if there's a way to use only the key and one block of the cipher to get a positive sign that the key is correct, it's still a fairly fast operation for a long message, compared to doing the whole message decryption once plus doing the one block. Also you could use these repeated processes as padding; then the brute force would have to repeatedly decrypt each time until there was nothing left of the cipher. – daniel Jun 21 '17 at 14:18
  • @daniel "would look like uniform" is part of neither modern cryptography nor cryptanalysis. In general, adding arbitrary (intentionally slow) transformations is bad (unless that is your explicit goal - and that's only the case for low entropy things like passwords): It makes the regular usage less efficient, and the "gain" of security is likely to be only fictional. – tylo Jun 22 '17 at 13:54
  • @tylo Data looking uniformly distributed and random seems like a big deal for CSPRNGs, which are pretty big for cryptography. Doesn't AES add 10 to 14 rounds of transformations? Doesn't the data look random until the last step there? – daniel Jun 22 '17 at 14:03
  • @daniel You said "data looking uniformly distributed and random seems like a big deal for ..." Again, seems like ... is not a scientific definition of anything, it's not a proper argument and it certainly isn't a proof. And your thought is going the wrong way: If something can easily be distinguished from randomness, it's entirely useless for some cryptographic application. But that does not mean, this is enough. That's the same reason why statistical tests for CSPRNGs are almost useless: They can only filter out the worst - they can never prove something is secure. – tylo Jun 26 '17 at 08:23
  • @daniel Your thought is a logical fallacy, more precisely affirming the consequent (the converse statement). The original statement would be "If something is cryptographically secure, then it looks random", and you made it "If something looks random, then it is secure". And that's just wrong. In the world of logic, the correct transformation would be the contrapositive. – tylo Jun 26 '17 at 08:29
  • @tylo If you take the "would look like uniform" and replace it with "would be uniform...", does that clear up what I said? The main thing I learned from this question is that for AES the key check already costs a large amount of processing time, so something like this might only increase that time by a factor of 2, and then why wouldn't you just increase your key by 1 bit instead? And for old ciphers the question turned into "why didn't they apply a SHA-2 before using the Enigma during WW2?", so it was a bit silly. – daniel Jun 26 '17 at 08:42
  • "why didn't they apply a CSPRNG before using the enigma during WW2?" I meant, even though there is this https://www.stat.berkeley.edu/~stark/Java/Html/sha256Rand.htm – daniel Jun 26 '17 at 09:02

2 Answers


Since naming any particular piece of software wouldn't show much, I'll instead focus on how this is done and where it is done.

This "obfuscation" is very often done when passwords come into play. There are plenty of good passwords hashing functions, that slow attacker down by "obfuscating" it, then brute-force has to use that same "obfuscation" which is designed to be slow. This is done because users will always generate weak passwords, it isn't something we can fix. But this is still futile attempt at fixing something that is inherently broken (users will always devise poor and short passwords). It is far more effective to make passwords twice as long than making "password check" take twice as long. Increasing key by ONE BIT doubles security. Increasing check time by factor of 2 doubles security.

Apart from passwords, we almost never use such "obfuscations", because they are not effective at stopping an attacker. Doubling the key length increases security by an unimaginable factor; doubling the encryption work only doubles security!

If I were inventing things instead of asking questions, I would have my software do a block cipher on the plain data and append the key to the newly encrypted data. I'd have it do this at least twice, now the data would look like uniformly distributed random data (because both the key and encrypted message are), and the attacker would have to number crunch at least 1 full decryption of the message for each brute force attempt.

This would not help you. I'd simply take twice as long to decrypt your message. That is not a lot: computing power increases very fast, so at worst I'd wait a year to break your scheme. On the flip side, you would also have to take twice as long to encrypt that message. We end up being hit the same.

History likes to repeat itself. The Germans tried to fix the Enigma by using more and more complex encoding schemes; it didn't help. RC4 was "fixed" by discarding the first X bytes of keystream; it didn't help in the long run either (everything before the fix was compromised, everything after the fix was safe for only some years). It's easy to fall into the pitfall of "I'll just change it a bit and it will be secure again". The truth is we don't know how long it will stay secure, so it's best to leave whatever is broken behind and make better, more effective algorithms.

but they still design systems to be strong against brute force attacks, that is why people keep making keys longer.

The problem is, you are not even making the key longer. A longer key only makes sense because increasing the key by X increases security by a far larger margin than X. Instead, you make everything take twice as long for both you and the attacker, which makes no difference in the long run for the attacker. RSA is a special case, because RSA has factoring attacks that are inherently better than brute force, and we are moving away from RSA now because it is not efficient enough. If RSA were cracked further by even better algorithms, we would probably move away from it entirely, because we could not make the key large enough to leave the attacker powerless. RSA is also far different from symmetric ciphers in efficiency, attack methods, etc., so comparing the two isn't correct.
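
To put numbers on that, here is a back-of-the-envelope sketch (the unit cost and variable names are mine, purely for illustration):

    # brute-force cost ~= (number of keys) * (time per trial decryption)
    TRIAL = 1                                  # cost of one trial decryption, arbitrary units

    cost_128bit     = 2**128 * TRIAL           # plain 128-bit key
    cost_obfuscated = 2**128 * (2 * TRIAL)     # same key, "obfuscation" doubles each trial
    cost_129bit     = 2**129 * TRIAL           # one extra key bit instead
    cost_256bit     = 2**256 * TRIAL           # doubled key length

    print(cost_obfuscated == cost_129bit)      # True: doubling the check buys one key bit
    print(cost_256bit // cost_128bit)          # 2**128: doubling key length is another league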

axapaxa
  • "I'd simply take twice as long to decrypt your message." I don't think the added time works like you think it does. The worst case time to brute force is: (number of keys needed to check)*(time to check one key) , obfuscation would increase the time to check one key, If an attacker chose or knew the plain text (or part of it) this would increase the time by a great deal for a small cost. – daniel Jun 21 '17 at 13:38
  • Also, a brute force method might only have to decrypt part of the message for each key it checks, depending on the encryption method (maybe the first couple of bytes of a 1-gigabyte message); if the plaintext was obfuscated similarly to how I outlined, then they couldn't save time this way. – daniel Jun 21 '17 at 13:48
  • @daniel You cannot randomly assume that every message is 1 gigabyte. "There is a case where it helps" isn't a compelling reason; choosing an algorithm that has a greater security margin and using shorter messages IS a compelling reason. – axapaxa Jun 21 '17 at 14:14
  • I didn't even assume the cipher that this is applied to. I'm pointing out this obfuscation method adds time for the checking, that the time for checking without it may be very small especially for chosen or known plain text, and that the time added is related to the message length for some forms of obfuscation. – daniel Jun 21 '17 at 14:24
  • @daniel 1.in "known plaintext" scenario, check won't be much shorter (in fact it is so short that we always assume it is negligible). 2.You assume cipher is broken, since you assume brute-force is viable option (it isn't with good cipher). 3.Messages are usually upwards of few kilobytes, because they are chunked. 4.You have assumptions that messages have to be huge, and rest of world just uses cipher that cannot be bruteforced. After all security of message is job of cipher. 5.You fail to acknowledge existence of MAC which nullifies your "improvements". Please read up on MAC and it's uses. – axapaxa Jun 21 '17 at 14:52

"Obfuscation" may just be an old-fashioned way of describing several concepts that expand on the idea of pre-processing the plaintext in some way and that are used in current cryptographic systems, such as all-or-nothing transforms and the initialization vector (IV) used with AES, given that an IV is described as a:

block of bits that is used by several modes to randomize the encryption and hence to produce distinct ciphertexts even if the same plaintext is encrypted multiple times
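
As a concrete sketch of that randomization (using the third-party pyca/cryptography package with AES in CTR mode; the key, message, and helper name are made up for illustration):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)                 # one AES-256 key, reused for both encryptions
    plaintext = b"attack at dawn!!"      # identical plaintext both times

    def encrypt(pt: bytes) -> bytes:
        nonce = os.urandom(16)           # fresh random IV/nonce for every message
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return nonce + enc.update(pt) + enc.finalize()

    c1, c2 = encrypt(plaintext), encrypt(plaintext)
    print(c1 != c2)                      # True: same key and plaintext, distinct ciphertexts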

A list of ciphers that would have benefited from randomizing the plaintext before the rest of the encryption system was applied can be found by looking for any that were broken with the assistance of chosen- or known-plaintext attacks, such as the PKZIP stream cipher.

The method suggested in the question is probably flawed and may actually weaken the system, since feeding a key and a ciphertext back in as plaintext may leak information, if it is related to this answer.

So, in summary, an extra step is not needed for encryption like AES, because the substitution-permutation network already randomizes the plaintext and is also expensive for the attacker to repeat for every key it checks; earlier encryption that is vulnerable to known-plaintext attacks could have benefited from such a step.
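
The shortcut that known plaintext gives an attacker can be shown with a toy sketch (the header value, the artificially tiny 16-bit unknown portion of the key, and the use of raw ECB are contrivances purely for illustration, again using the pyca/cryptography package):

    # Each guess only needs to decrypt the FIRST block, regardless of message size.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    known_header = b"BEGIN MESSAGE---"                   # 16 bytes the attacker knows
    secret_key = (b"\x00" * 14) + os.urandom(2)          # only 2 key bytes are "unknown"
    ciphertext = Cipher(algorithms.AES(secret_key), modes.ECB()).encryptor().update(known_header)

    for guess in range(2**16):
        key = (b"\x00" * 14) + guess.to_bytes(2, "big")
        block = Cipher(algorithms.AES(key), modes.ECB()).decryptor().update(ciphertext)
        if block == known_header:                        # cheap per-guess check
            print("recovered key suffix:", key[-2:].hex())
            break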

daniel
  • The IV is public knowledge, and as such cannot be used to defend a block cipher from a plaintext attack. I also disagree that "obfuscation" is the same as an IV, since the sole purpose of an IV is randomizing the message and chaining blocks. If the key is known, the IV can be undone very easily, something you consider "hard" in the case of your "obfuscation". -1 – axapaxa Jun 22 '17 at 18:22
  • @axapaxa I'm just saying this randomization step would add security for something like a 10 character key Vernam cipher, but doesn't for AES. What should I change IV to, to fix my answer? – daniel Jun 22 '17 at 20:10
  • I still won't agree that it will add security; it might at best prevent some otherwise obvious attacks. If what you were looking for in your question is an IV, then sadly you phrased it wrong (you provided a solution to a problem that doesn't exist). I don't know how to fix your answer: you say you solve something that doesn't exist in modern cryptography, and nobody cares about terribly broken ciphers (it's unlikely that this is their only problem). – axapaxa Jun 23 '17 at 12:15