I know that a simple monoalphabetic substitution cipher is considered extremely weak, on account of linguistic frequency-analysis attacks. However, assume the following:
- cleartext is encrypted (with 256-bit AES, for example), and the resulting ciphertext is base64-encoded
- the client has a key covering the 64-character base64 alphabet (a-z, A-Z, 0-9, '+' and '/'), where each character is randomly mapped to another character (i.e., 'a' -> 'Y', 'b' -> 'f', 'c' -> '/', etc.)
- a simple monoalphabetic substitution cipher is applied to the base64 ciphertext
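The scheme above can be sketched as follows. This is a toy illustration, not a real implementation: `os.urandom` stands in for actual AES-256 output, and the '=' padding characters are deliberately left unmapped.

```python
import base64
import os
import random
import string

# The standard base64 alphabet: A-Z, a-z, 0-9, '+', '/'.
B64_ALPHABET = string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/"

# The substitution key: a random permutation of the 64-character alphabet.
shuffled = list(B64_ALPHABET)
random.shuffle(shuffled)
ENC_MAP = str.maketrans(B64_ALPHABET, "".join(shuffled))
DEC_MAP = str.maketrans("".join(shuffled), B64_ALPHABET)

# Random bytes stand in for real AES-256 ciphertext in this sketch.
aes_ciphertext = os.urandom(48)
b64 = base64.b64encode(aes_ciphertext).decode("ascii")

# Apply the monoalphabetic substitution, then undo it.
obscured = b64.translate(ENC_MAP)
recovered = obscured.translate(DEC_MAP)

assert recovered == b64
assert base64.b64decode(recovered) == aes_ciphertext
```

Since the substitution is a bijection on the alphabet, decryption is just the inverse translation table; the base64 layer round-trips exactly.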
Aside from the practicalities of implementing this approach, would this add much security? I argue that it would, taking into consideration the following points:
- neither base64 encoded text nor the raw encrypted data it represents are subject to frequency-analysis attacks
- the 'key space' (to use the term loosely) of the substitution is the number of permutations of the 64-character base64 alphabet: 64! ≈ 1.27 × 10^89, or roughly 2^296 (if I calculate correctly)
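That figure is easy to double-check with a quick stdlib calculation:

```python
import math

# Number of distinct monoalphabetic substitution keys over the
# 64-character base64 alphabet: every permutation of 64 symbols.
keyspace = math.factorial(64)

# Express the keyspace size as a power of two.
bits = math.log2(keyspace)
print(f"64! is about 2^{bits:.0f}")  # roughly 2^296
```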
So, in the case of a brute-force attack, an attacker would need to try up to 2^296 substitution keys, and for each one of those, go through the normal process of brute-forcing a 256-bit AES ciphertext (already infeasible on its own).
In the case of a break in the underlying algorithm (highly unlikely for AES), the attacker would still need to search the 2^296 keyspace just to recover the original ciphertext.
There has been plenty of discussion about double-encryption, and how it adds little more security (in terms of orders of magnitude), but I would like to know people's thoughts about the approach described above. I DO understand that it's complete overkill, and that 256-bit AES encryption is more than sufficient by itself, but I'm interested to hear opinions at least from a theoretical point of view. Does this potentially add much security?