Just wondered: I currently have TrueCrypt on my system (I am aware that there are various doubts about its security, and that it is no longer under development). TC "volumes" obviously consist of bytes.
It seems to me that decryption must always rely on being able to recognise that you've managed to reach the unencrypted text. But suppose that, of 1000 bytes of text which then get encrypted, only about 40 actually contain the information (e.g. a password), and that the other 960 bytes are just junk. And you then encrypt that as a 1000-byte TC volume?
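To make that concrete, here is a toy sketch of what I mean (entirely my own illustration: the offsets would have to be agreed in advance, and this only builds the 1000-byte plaintext that would then go into the volume; it has nothing to do with how TC actually lays anything out):

```python
import os
import random

def embed(payload: bytes, total_len: int, offsets: list) -> bytes:
    """Scatter the payload bytes among random junk at pre-agreed offsets."""
    buf = bytearray(os.urandom(total_len))        # junk filler
    for b, pos in zip(payload, offsets):
        buf[pos] = b
    return bytes(buf)

def extract(buf: bytes, offsets: list) -> bytes:
    """Only someone who knows the offsets can pick the payload back out."""
    return bytes(buf[pos] for pos in offsets)

secret = b"correct horse battery staple, 40 bytes.."       # the 40 real bytes
offsets = sorted(random.Random(42).sample(range(1000), 40))  # pre-agreed
plaintext = embed(secret, 1000, offsets)     # 1000 bytes, 960 of them junk
assert extract(plaintext, offsets) == secret
```

Even to an attacker who decrypts the volume correctly, that buffer is 1000 bytes of apparent noise; nothing marks the 40 real ones.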
Furthermore, you might make it so that the 40 characters you want to hide are 7-bit ASCII characters, but you spread them over 40 × 7 / 8 = 35 eight-bit bytes. So the first 8-bit byte contains all 7 bits of the first character plus the 1st bit of the second character, and so on.
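The packing step might look something like this (again just my own sketch, nothing TC-specific):

```python
def pack7(text: str) -> bytes:
    """Pack 7-bit ASCII characters end to end: 8 characters fit in 7 bytes."""
    bits, nbits, out = 0, 0, bytearray()
    for ch in text:
        assert ord(ch) < 128, "7-bit ASCII only"
        bits = (bits << 7) | ord(ch)
        nbits += 7
        while nbits >= 8:
            nbits -= 8
            out.append((bits >> nbits) & 0xFF)
    if nbits:                                  # flush leftovers, zero-padded
        out.append((bits << (8 - nbits)) & 0xFF)
    return bytes(out)

def unpack7(data: bytes, nchars: int) -> str:
    """Reverse the packing, given how many characters were stored."""
    bits, nbits, chars = 0, 0, []
    for byte in data:
        bits = (bits << 8) | byte
        nbits += 8
        while nbits >= 7 and len(chars) < nchars:
            nbits -= 7
            chars.append(chr((bits >> nbits) & 0x7F))
    return "".join(chars)

packed = pack7("a" * 40)
assert len(packed) == 35          # 40 * 7 / 8 = 35 bytes, as above
assert unpack7(packed, 40) == "a" * 40
```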
It seems to me that under these circumstances a decryption app would never be able to recognise that it had actually reached the hidden text which had been encrypted.
More generally, how do decryption programs know that they've succeeded in breaking an encryption?
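My naive mental model of how a brute-force attack answers that is something like the toy sketch below: try every key, and keep whatever "looks like" plaintext. (Single-byte XOR stands in for a real cipher here, and the looks_like_text test is my own crude invention, not anything a real tool necessarily uses.)

```python
import string

PRINTABLE = set(string.printable.encode())

def looks_like_text(data: bytes, threshold: float = 0.95) -> bool:
    """Crude recogniser: does the candidate decryption look like ASCII text?"""
    return sum(b in PRINTABLE for b in data) / len(data) >= threshold

def xor_crypt(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

ciphertext = xor_crypt(b"attack at dawn", 0x5A)   # toy single-byte-XOR cipher
for key in range(256):                            # brute-force every key
    candidate = xor_crypt(ciphertext, key)
    if looks_like_text(candidate):
        print(key, candidate)                     # several keys may pass!
```

The whole approach depends on that recogniser, and even here several wrong keys pass the crude test. If the plaintext had no recognisable features at all, the loop would have nothing to test for.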
How my question might differ from the referenced one
I don't think my question is really the same as that one, because I'm not asking about "double" encryption, which obviously just requires more computing power to attack.
Also, I should not have mentioned TrueCrypt or the business of disguising 7-bit text within 8-bit bytes: that is too specific. The fact that TC contains the word "TRUE" in a certain position actually makes me laugh somewhat: I didn't know that. The fact that many such apps may "randomly" fill with "random" zeros makes me laugh too. Can the question not be considered on its merits, rather than by pointing to the inadequacies of certain existing apps, which are not really germane to the point I'm making?
What I'm trying to get at is this: given that a password (or a bank account number, or any other item of information) may consist of a sequence of bytes which is completely indistinguishable, and I mean completely indistinguishable, from a randomly generated sequence, how can you (whether a human cryptanalyst or a decryption application) ever know that you have found the correct way of decrypting?
In the case of the Enigma codebreakers, for example, they only managed to break the code because they were looking for, and found, human language, which obviously contains all sorts of patterns. If the Germans had only ever communicated numbers to one another (and I don't mean numbers corresponding to "code words"), it would have been impossible to crack the code. How useful that would have been to them in WW2 is another matter; for certain purposes, however, all you need is to communicate a number.
If you are trying to encrypt a specific number which does not contain a recognition pattern of any kind, how can a decryptor ever know that it has found the right way of interpreting the (encrypted) bytes?
Hence my use of the word "disguising": if your unencrypted text does not contain any pattern or give-away indication which distinguishes it from a random sequence of bytes, how can a would-be decryptor (human or otherwise) ever be certain that the result of the decryption is the byte sequence which was in fact encrypted?
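As far as I can see, a one-time pad has exactly this property: for any guessed plaintext of the right length there exists a key that "decrypts" the ciphertext to that guess, so no candidate can ever be ruled out. A minimal demonstration (toy XOR pad, my own code):

```python
import os

message = b"8675309"                      # the number actually sent
pad = os.urandom(len(message))            # truly random key, used once
ciphertext = bytes(m ^ p for m, p in zip(message, pad))

# An attacker guessing ANY 7-byte plaintext can derive a pad that "works":
guess = b"1234567"
fake_pad = bytes(c ^ g for c, g in zip(ciphertext, guess))
assert bytes(c ^ p for c, p in zip(ciphertext, fake_pad)) == guess
```

Every 7-byte guess is equally consistent with the ciphertext, so "success" is simply undefined without some outside knowledge.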
For clarification: I am referring in this question to encryption situations in which the key does not need to travel with the message.
P.S. JimmyB, in his answer to that other question, touches on what I'm wondering about. But even he assumes that there will ultimately be some sort of underlying "plaintext" which, when found, will be identifiable as such because of patterns of some kind.