35

I'm not even sure if they are serious, but I've heard many times that some people not only refuse to trust their computer to generate a random string (which is understandable) but also don't trust themselves to do it. So, instead of simply typing/writing down:

fcC540dfdK45xslLDdfd7dDL92

And then randomly changing a few of those characters to other ones a few seconds later, such as one near the beginning, one in the middle and one near the end, they use dice, which they roll again and again to generate random numbers that are then treated as "truly random" and thus "truly secure".

Why would a dice roll be "more random" than simply coming up with a sequence in your head and then changing some of the characters?

I simply don't believe that this could possibly be "not secure". Why the need for the very tedious dice rolling? It feels like a ritual that they go through, based not on logic and reason, but on some sort of delusion that their brain is going to generate the same sequence that someone else, or a computer, would guess, even though they also change some of the characters after the phrase is "done".

I don't understand this. Is there anything that speaks for this practice as being "more random"/"more secure"?

Oliphaunt
K. B.
  • 108
    I don't see any characters from the top row of a QWERTY keyboard in your random string. – the default. Feb 04 '21 at 03:32
  • 65
    Why do you think you're good at generating random numbers? – Eric Duminil Feb 04 '21 at 11:24
  • 2
    "changing a few of those to other ones a" -- the way people do with passwords when an uppercase letter or a digit is demanded? (i.e. I expect most will just capitalize the first character, or append a 1). Random link about character frequency in passwords: https://csgillespie.wordpress.com/2011/06/16/character-occurrence-in-passwords/ – ilkkachu Feb 04 '21 at 13:59
  • 2
    Maybe some humans are good at random sequence generation. Most of them are provably not (or at least don't try hard enough) - use any of the circulating leaked password databases as proof. You cannot expect, require or rely on your users being good at some niche ability. – fraxinus Feb 04 '21 at 15:34
  • 7
    In your own example, you start with a string and then specifically choose certain characters to change. Right there you are making the string less random. You would have been better off leaving those characters alone if you thought the string was random. – Seth R Feb 04 '21 at 15:52
  • 40
    "I simply don't believe that this could possibly be". Yes, this kind of total confidence is required to maintain a position against a mountain of evidence: people's password choices, their performance in games like Man vs Machine below, tax fraud detected by number frequencies, and generally abiding beliefs in properties of random sequences that just aren't true - I'm sure you've met a person who sees a pattern in one and thinks it can't possibly have been output by a random generator, or who sincerely believes that rolling a few low numbers will make a high one more likely the next time... – Leif Willerts Feb 04 '21 at 16:28
  • 26
    @LeifWillerts brings up a good point. Humans are exceptionally good at seeing patterns. So good, they often see patterns even where none exists. That bias makes them bad at making random numbers as in the attempt at being "random", they will often start making choices to avoid the appearance of a pattern. Those choices actually reduce the randomness. – Seth R Feb 04 '21 at 16:40
  • 3
    Let me ask you this, do you think the probability of getting two identical dice rolls in a row in a boardgame is any less than getting any other sequence of two dice rolls? If you do, that's your mind failing at randomness. – DKNguyen Feb 04 '21 at 18:04
  • 1
    The WW2 example in https://en.wikipedia.org/wiki/Random_number_generator_attack#Human_generation_of_random_quantities comes immediately to mind as the same fallacy @KB is falling into now. – Charles Duffy Feb 04 '21 at 18:16
  • 2
    @CharlesDuffy My personal favorite is how often people will pick 7 when asked to think of a random number between 1 and 10. You obviously can't pick 1 or 10 because they are the ends. You also can't pick 2, 4, 6, 8 because it's a well known pattern. And you can't pick 5 because it's considered one of the "rounder" numbers in our base 10 system. I forget the reasoning why 3 and 9 don't get picked. Maybe they are just too close to the ends as well. – DKNguyen Feb 04 '21 at 22:32
  • 2
    Like when Data in Star Trek makes that super long oral pass code: 173467321476c32789777643t732v73117888732476789764376. There are an awful lot of sevens in that code. I remember we had a demonstration of this in class many years ago and it turned out more than half the class picked 7. – DKNguyen Feb 04 '21 at 22:34
  • 35
    Input fcC540dfdK45xslLDdfd7dDL92 into Keyboard Heatmap. – user76284 Feb 05 '21 at 00:31
  • 7
  • @KeithMcClary Have to admit that I've never understood that joke. Is it irony? Irrational number expansions are provably randomly distributed. '999999' is the famous Feynman point in $\pi$'s decimal expansion. But '999999999999' occurs at offset 897831316556. And a fast TRNG (>100 Gb/s) can produce even longer runs, yet they're truly random. – Paul Uszak Feb 05 '21 at 02:05
  • 4
    Also obligatory xkcd: https://xkcd.com/221/ – Wayne Werner Feb 05 '21 at 03:51
  • @thedefault. It can be still random. See the article. I've posted this in the answer to indicate this. We are not even good at detecting randomness! – kelalaka Feb 05 '21 at 11:43
  • 1
    For a more amusing comic than the Dilbert example, see the last panel in: https://www.schlockmercenary.com/2012-08-07 – jmoreno Feb 05 '21 at 17:24
  • 1
    Humans tend to both create patterns and try too hard not to create patterns when trying to generate random sequences. – Beefster Feb 05 '21 at 21:46
  • 2
    @DKNguyen also no 0s or 5s in Data's long code. – ypercubeᵀᴹ Feb 06 '21 at 12:34
  • I wonder if a human has ever been trained to be really good at generating (still pseudo)random output. – Hello Goodbye Feb 06 '21 at 19:49
  • @HelloGoodbye I think that would involve more "untraining" than training. I wonder what babies do just mashing buttons. I wonder if it is even theoretically possible to train a neural network to be random. Seems diametrically opposed. – DKNguyen Feb 06 '21 at 20:46
  • 2
    @PaulUszak: that's the joke. Something like 999999 doesn't look random to a human, but is it? Or isn't it? Is that an irrational number expansion? Or is it some guy just typing 9 six times? You can never know. – Quora Feans Feb 07 '21 at 00:56
  • I feel the sentiment in the question, but the answers are very compelling. However, I do feel like somebody acquainted with the field of CS / Mathematics should be much more capable of coming up with a relatively random sequence than the average Joe and Jenny. – csstudent1418 Feb 07 '21 at 08:45

8 Answers

115

In short, it is more than a belief: there is strong evidence that humans are not good entropy sources. There is a test for this:

Try to win!

So we don't rely on random numbers generated in someone's head, or on random keyboard typing and mouse movements that, to an outsider, look like a monkey playing with the computer. We rely on good entropy sources like /dev/urandom. That kind of source is backed by solid research.
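For intuition, games like the one above don't need anything sophisticated to beat a person: a short-memory frequency predictor is usually enough. The sketch below is only an illustration of that idea (not the actual game's code, and the order-4 window is an arbitrary choice); it counts how often each recent window of moves was followed by 0 or 1 and guesses the more common continuation.

import random
from collections import defaultdict

def play(bits, order=4):
    """Return the fraction of bits the predictor guesses correctly."""
    counts = defaultdict(lambda: [0, 0])  # recent window -> [times followed by 0, times followed by 1]
    history = []
    hits = 0
    for b in bits:
        key = tuple(history[-order:])
        zeros, ones = counts[key]
        guess = 0 if zeros >= ones else 1  # guess the historically more common continuation
        hits += (guess == b)
        counts[key][b] += 1
        history.append(b)
    return hits / len(bits)

# A human-like sequence that over-alternates is predicted far better than chance:
print(play([0, 1] * 200 + [0, 1, 1, 0, 1, 0] * 20))
# Genuinely random input brings the predictor back down to about 50%:
print(play([random.getrandbits(1) for _ in range(10_000)]))

The better the predictor does against you, the less random your input was.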

Some research supporting this:


Other online tests:

kelalaka
  • 19
    Interestingly, entirely conscious decisions (not pounding the keys) of whether to use 0 or 1 still gave me an even 50% score (90-90) after 227 moves. It seems the main reason that any human would be considered a bad source of entropy is the strong tendency toward always alternating input so that it "appears more random." With the test, consciously tending toward using longish strings of consecutive bits will bring the bias back to "just about random." There is likely still some bias, but it is greatly decreased. – owacoder Feb 04 '21 at 17:59
  • It looks like the best way to get above 50% to that test is to just follow the rhythm of a tune in your head, since it allows you to avoid thinking about key permutation. I seem to get an average 40-60 and 60-40, though I got 80% only once. – Clockwork Feb 04 '21 at 19:01
  • 34
    @Clockwork Interestingly, winning the game by a large margin is exactly as indicative of non-randomness as losing by a large margin. You could imagine setting up a second machine that always guesses the opposite of the first machine, and this machine would beat you 80-20. A machine that's always wrong is actually as predictive as one that's always right. – Nuclear Hoagie Feb 04 '21 at 19:47
  • Comments are not for extended discussion; this conversation has been moved to chat. – SEJPM Feb 05 '21 at 18:40
  • 2
    Minor nitpick: if there's evidence, then it's very likely to also be a belief. A justified belief, rather than "just a" belief. – Toby Speight Feb 05 '21 at 21:07
  • The first link, ironically, uses a certificate that is not valid for www.loper-os.org. The certificate is only valid for the following names: *.nfshost.com, nfshost.com. – Toby Speight Feb 05 '21 at 21:08
  • 2
    @TobySpeight maybe I should have said the evidence is strong that we are not good entropy sources. Nice. – kelalaka Feb 05 '21 at 21:12
  • @owacoder I completely agree with you. The first time I tried, I thought about the sequence, and after 207 moves I was winning 51% to 49%. When I tried without thinking I failed miserably (like 25% vs 75% after 50 moves). – Bakuriu Feb 06 '21 at 16:55
  • 5
    Funny that challenge: I inserted 200 bits generated with random.org into that "game" and the pc won "56%". Allowing 'pass' gives it quite a hefty advantage even on truly random sequences. – paul23 Feb 06 '21 at 18:26
  • @paul23 yes, most of the time passes. The less pass the less random that we can say. – kelalaka Feb 06 '21 at 18:39
  • Maybe I'm misunderstanding the game, but I thought if you were trying to be random, you would want to win 50% of the time, which I was able to do pretty consistently. – Ryan_L Feb 06 '21 at 20:38
  • @Ryan_L How long did you try? Note that this test doesn't need to show that all humans are bad random number generators. Still, there is another test that can distinguish you from random. – kelalaka Feb 06 '21 at 20:46
  • @kelalaka I played up to about 200 moves. I couldn't maintain exactly 50% permanently, but it never got past 60% or 40%. Could the problem be that I understand I am trying to trick an AI, not just appear random? I understood that the machine was looking for patterns in my moves, so as soon as the percentages started to move I would change things up. Maybe that's gaming it. – Ryan_L Feb 06 '21 at 21:02
  • You are expected to produce randomness. What if you don't see the percentage? 40% and 60% are still far too biased for cryptography. – kelalaka Feb 06 '21 at 21:33
  • 2
    "Being able to maintain" does mean it's not random: it means you get the gist of the underlying pattern recognition and you can 'play the pattern recognition'. – paul23 Feb 06 '21 at 23:20
  • @paul23: 56% for 200 bits is not surprising at all; compute the standard deviation for the actually guessed bits to see why. Anyway see this. – user21820 Feb 07 '21 at 15:28
31

For me, the fraud-related applications of Benford's Law come to mind. When people make up data they tend to create overly uniform data, even when it's not appropriate. There's a definite psychology going on that may cause people to be less random than they are intending to be (Wikipedia links to a paper claiming humans are in fact bad at this). Or perhaps misconceptions about what randomness "looks like." In any case, knowledge of things like this may generate self-doubt about generating randomness. In fact, the very idea of explicitly changing some of the allegedly random data you just generated may seem error-prone to some, and potentially the root of any problems that could later arise.
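As an aside, Benford's law is easy to check for yourself: in many naturally occurring datasets the leading digit d appears with probability log10(1 + 1/d), so 1 leads about 30% of the time while 9 leads under 5%. Here is a minimal sketch (an illustration, not taken from the linked paper) comparing a dataset's first-digit frequencies against that expectation.

import math
import random
from collections import Counter

def leading_digit(x):
    x = abs(x)
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_profile(values):
    """Print observed vs. expected (Benford) leading-digit frequencies."""
    counts = Counter(leading_digit(v) for v in values if v != 0)
    n = sum(counts.values())
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)
        print(f"{d}: observed {counts[d] / n:.3f}  expected {expected:.3f}")

# Multiplicative growth processes tend to follow Benford's law:
benford_profile([1.05 ** random.randint(1, 500) for _ in range(10_000)])
# Uniformly drawn numbers do not:
benford_profile([random.uniform(1, 1000) for _ in range(10_000)])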

Dice, on the other hand, people trust to be random despite any unconscious bias they may be introducing. By following the outcome of dice rolls people can be more certain that there is no "gotcha" that might make their data less random. They had no real input and therefore feel sufficiently removed from the generation of the data.

Perhaps people are different enough that no general analysis could be done to make a case for a reduction in apparent entropy in human-generated random data. But I think this is ultimately a risk assessment -- i.e. are you willing to bet whatever you're protecting with the password on the assumption that your attempt at random data is truly random?

All of that said, I question whether this matters much, provided enough data is being generated. For example, a human-generated 8-character password is probably fairly insecure no matter how good a job they did at making it "random." In contrast, a 32-character password is probably fairly secure if they were trying at all. In either case, the way the password is actually used and/or secured may well matter more to whether their account will ultimately get compromised.
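To put rough numbers on that, assume (generously) that every character is drawn uniformly from the 62 alphanumerics; the character count then dominates everything else:

import math

bits_per_char = math.log2(62)      # about 5.95 bits per character at best
print(8 * bits_per_char)           # ~48 bits: within reach of offline cracking of a fast hash
print(32 * bits_per_char)          # ~190 bits: far beyond any conceivable brute force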

Still, it would be frustrating, even embarrassing, to learn that your carefully generated "random" password was able to be guessed due to its human origin, or because other "random" strings you had previously generated were compromised. Eliminating all possibility of that scenario, no matter how unlikely, is undoubtedly attractive to some, if only for that reason.

thesquaregroot
  • 15
    +1 for "misconceptions about what randomness 'looks like.'" For example, in a string of random coin flips, people will think that getting 5 consecutive heads is less likely than it actually is, because it "doesn't look random". – Hello Goodbye Feb 04 '21 at 16:28
  • @HelloGoodbye The Birthday Paradox is a classic example - needing only 23 people for there to be a better than 50-50 chance of two people having the same birthday. – Graham Feb 04 '21 at 20:48
  • 9
    @Graham : the birthday "paradox" is a failure to appreciate combinatorial explosion. Let's not mix up common misconceptions here, that just makes it harder to explain why each of them is wrong. – Leif Willerts Feb 04 '21 at 21:18
  • 3
    @LeifWillerts It's the same root of why we get randomness wrong though. We expect "random" to be "everything's different". So we underestimate the probability of random outcomes which look like a pattern, and for the same reason we overreach when we try to be random. – Graham Feb 04 '21 at 22:13
  • 2
    @LeifWillerts More concretely than Graham’s response (which is accurate), humans are notoriously bad at understanding the difference between simple independent correlation from actual dependency. The issue of randomness is a result of this applying in one way (we interpret ‘independent’ as ‘uncorrelated’), while issues such as the gambler’s fallacy are a result of the reverse application (it arises from interpreting correlation as dependence). Both directions are a result of how learning works (namely, learning is all based on pattern matching). – Austin Hemmelgarn Feb 04 '21 at 23:39
  • @HelloGoodbye All strings of the same length generated from a uniform distribution are equally random. – yters Feb 05 '21 at 03:02
  • @yters Exactly. – Hello Goodbye Feb 05 '21 at 03:06
  • 1
    +1 for talking about uniform randomness. One of the biggest mistakes non-stats people make is thinking randomness only means drawn from a uniform distribution, probably because the most basic probability devices are things like coins and dice which generate discrete uniform distributions. – eps Feb 05 '21 at 17:22
18

Why would a dice roll be "more random" than simply coming up with a sequence in your head and then changing some of the characters?

Humans have too many biases about what a random sequence looks like. If you ask people to generate a random sequence, they will probably avoid using the same character twice in a row, i.e., aa or bb, because they think that ab is more random than aa. They will also carry biases from the language they speak, in which some combinations are more frequent than others. Humans readily, but wrongly, generate values based on what they have generated before, so there is no true independence between values! Also note that many people attach meaning to certain numbers (7, 13, 666, etc.) and then avoid some of them! All of this is very well known, and many experiments demonstrate it. You may think that rolling dice is not really random either, but since there is no link between one roll and the next, the rolls are truly independent (at least, nobody can control dependencies between them).

I simply don't believe that this could possibly be "not secure". Why the need for the very tedious dice rolling? It feels like a ritual that they go through, based not on logic and reason, but on some sort of delusion that their brain is going to generate the same sequence that someone else, or a computer, would guess, even though they also change some of the characters after the phrase is "done".

Alas, not believing it is not sufficient. There are many scientific results on the subject, and generating a "secure" random sequence is not easy. Even a small bias may be dangerous and exploitable. The human mind has real difficulty understanding what randomness is.

I don't understand this. Is there anything that speaks for this practice as being "more random"/"more secure"?

There is true physical randomness, such as rolling a die. Of course, that can't be used directly in computer systems, as the throughput would not be sufficient. Truly random sequences can be generated from radioactive decay, but such a source is not easy to integrate into a computer. So modern random sequences are generated by a mix of pseudo-random generators and physical events. Pseudo-random generators are algorithms, so they can't produce true randomness, only something very close to it. Mixing their output with true randomness gives even more security.
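In practice the easiest way to benefit from all of that machinery is to let the operating system's CSPRNG pick the string for you; a minimal sketch using Python's standard secrets module (the 26-character length just mirrors the string in the question):

import secrets
import string

# Every character is chosen by the OS-seeded CSPRNG, uniformly over a-z, A-Z, 0-9.
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(26))
print(password)  # ~155 bits of entropy (26 * log2(62)), with no human bias involved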

  • 2
    Your first point is very true. According to official sources, 11 people died of COVID-19 on 2020-08-13 in Moscow. The next day, another 11 died. Since then, never once the figure was the same on two consecutive days. Chart – Roman Odaisky Feb 05 '21 at 14:16
15

Randomness is a measurable, statistical property of a set of values. It doesn't mean the same as "hard for a human to guess."

Your sample string is hard for a human to guess, but it isn't very random.

There is a tool called "ent" for most Unix systems that can quantify the randomness, by some measures, of a file.

Available here: https://www.fourmilab.ch/random/

Your string was 27 characters long, all ASCII, and limited to the set of [a-zA-Z0-9] . Let's compare your string to 27 characters from /dev/urandom limited to that same range, using "ent".

Your string: fcC540dfdK23xslLDdfd7dDL92

Here are the results from "ent".

$ ent test1.txt

Entropy = 3.926572 bits per byte.

Optimum compression would reduce the size of this 27 byte file by 50 percent.

Chi-square distribution for 27 samples is 532.41, and randomly would exceed this value less than 0.01 percent of the times.

The arithmetic mean value of data bytes is 77.9259 (127.5 = random).

Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).

Serial correlation coefficient is 0.271042 (totally uncorrelated = 0.0).

27 characters from /dev/random: Q9HpOpJrS3yYKlLc71yq003IMR

Here are the results from "ent".

$ ent test2.txt

Entropy = 4.458591 bits per byte.

Optimum compression would reduce the size of this 27 byte file by 44 percent.

Chi-square distribution for 27 samples is 304.85, and randomly would exceed this value 1.76 percent of the times.

The arithmetic mean value of data bytes is 78.8889 (127.5 = random).

Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).

Serial correlation coefficient is -0.024251 (totally uncorrelated = 0.0).


The program was easily able to quantify how much less "random" (in the statistical sense) your string was.

"People believe" we humans are bad at generating randomness because we are.

Nij
JesseM
14

People are not that bad, but we're slow. See How were one-time pads and keys historically generated? In summary, MBs of 100% secure key material were generated for one-time pads by people simply key-smashing on typewriters. Sufficient to win three world wars. It's just that a human's entropy rate is a little lower than that of a laser-phase-based TRNG.

fcC540dfdK23xslLDdfd7dDL92

is pretty much random. But do it again, and again and again. Randomness is a function of sample size, and the more you create by keyboard smashing, the more it becomes susceptible to frequency analysis.

That's not to say that raw irreducible information (entropy) isn't being generated, but it has to be uniformly distributed for use with cryptography. The uniformity aspect is the difficult bit. So try it. Write out 500 kB of 'randomness' and then run it through a program called ent. I can guarantee that your data will fail the test. And yes, comments below correctly highlight the speed issue.

That's not to say your typing wasn't random, but it won't have been random enough. Refer back to my linked answer, and read about randomness extraction which statistically reshapes biased randomness into useful cryptographic entropy.
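To make the randomness-extraction step concrete, here is a minimal sketch of the classic von Neumann debiaser, the simplest extractor there is (real designs use stronger, hash-based extractors, and this one assumes the input bits are independent, which raw human input only approximates):

import random

def von_neumann(bits):
    """Map non-overlapping pairs: 01 -> 0, 10 -> 1, 00 and 11 are discarded.
    If the input bits are independent, the output is unbiased, just shorter."""
    return [a for a, b in zip(bits[0::2], bits[1::2]) if a != b]

biased = [1 if random.random() < 0.8 else 0 for _ in range(10_000)]  # 80% ones
debiased = von_neumann(biased)
print(sum(biased) / len(biased))      # ~0.80
print(sum(debiased) / len(debiased))  # ~0.50, at the cost of discarding most of the bits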

Paul Uszak
  • 2
    Yes, just casually write out 500 kB. Estimating 5 characters per word, that is slightly shorter than Harry Potter and the Prisoner of Azkaban, at 107,253 words – Suppen Feb 04 '21 at 15:15
  • 4
    @Suppen, right, it's going to get real tedious real fast. If a human were forced to do that, they would probably pretty quickly fall back on known, practiced patterns to make the task easier on them, even if they don't realize it. There's not going to be much entropy there. – Seth R Feb 04 '21 at 17:19
  • 2
    @SethR et al: Guys, that's the reasoning behind my answer :-) But follow my OTP link. MBs of 100% secure key material were generated by people simply key-smashing on typewriters. Sufficient to win three world wars. It's just that a human's entropy rate is a bit lower than a laser phase TRNG. – Paul Uszak Feb 04 '21 at 18:02
  • 8
    [...] Sufficient to win three world wars -- I seem to have overslept World War Three, again. At least we won instead of them – Matija Nalis Feb 05 '21 at 17:33
  • 1
    @MatijaNalis Again? – DKNguyen Feb 05 '21 at 20:55
  • 3
    No, people are bad. People not generating good enough randomness actually lost some world wars. (Well, that and other factors.) Some of the successes of Enigma cryptanalysis were due to process or operator errors, such as avoiding consecutive identical elements in sequences that should have been random, or using keyboard patterns instead of random input. – Gilles 'SO- stop being evil' Feb 07 '21 at 15:59
  • @Gilles'SO-stopbeingevil' Ah! You're confusing human generated entropy (information) with uniformly distributed randomness for direct use in ciphers. They're somewhat different. That said, you've ignored my link to OTPs. And that's why I introduce randomness extraction. It's been a recurrent theme/problem throughout this entire question so I'd probably better edit to stress it. – Paul Uszak Feb 07 '21 at 17:11
  • @PaulUszak Quite the contrary, the question was about humans generating uniformly distributed randomness (i.e. full-entropy data), as opposed to generating a nonzero amount of entropy, and your answer mixes up the two. Keyboard mashing does not produce random data. – Gilles 'SO- stop being evil' Feb 07 '21 at 17:48
  • @Gilles'SO-stopbeingevil' I actually have to agree with Paul here that humans can generate randomness, but the issue is that we don't generate as much as we think we do. We may feel like we're producing 7 bits of entropy per character (assuming 7-bit ASCII), but we're producing far, far less. But if we conservatively estimate, say, 1 bit per 10 "keyboard mashes", we could say that humans can produce at least some randomness. Of course, if we take sample cycles at each keyboard interrupt, we can generate quite a lot of real randomness! – forest Feb 07 '21 at 21:54
  • @forest That's exactly what I'm saying: humans can generate a bit of entropy, but less than they think (which is the requirement for passwords), and definitely not enough when uniformity is required (which is the case for keys). – Gilles 'SO- stop being evil' Feb 08 '21 at 11:01
9

Evidence suggests that people asked to generate random data will produce repetition in the data substantially less often than random chance would.

For example, let's assume you were asked to generate random digits (i.e., just 0 through 9).

In purely random data, a sequence like NN (i.e., the same digit twice in a row) happens about 10% of the time. That is, given some arbitrary first digit, there's a one in ten chance that we'll randomly choose the same digit the next time.

But when people are producing (what they want to be) random digits, most people see this as something that's unlikely to happen by random chance, so what they produce will have substantially fewer instances of the same digit twice in a row than random chance would suggest.

Two digit runs are only the tip of the iceberg though. By the same logic, we see that runs of three identical digits should happen around 1% of the time. That is, given some arbitrary digit N, there's a one in ten chance that the next digit we select will also be N, and a one in ten chance that the third time, we'll select N again. 1/10 * 1/10 = 1/100 = 1%.

That continues with longer strings as well--4 digit runs should happen with a frequency of about 0.1%, 5 digit runs with a frequency of about 0.01%, and so on.

Testing indicates, however, that when people are asked to generate random numbers, they'll produce repeated strings like this considerably less often than random chance would. And the longer the string, the worse the disparity between human-generated and randomly-generated strings becomes, to the point that most people simply won't produce a run of the same digit (say) 4 or 5 times in a row, no matter how many random digits you ask them to produce. To most people, the chances of that happening randomly seem so remote that they simply never do it. The same happens with other things that seem like obvious patterns such as "1234" or "3210"--most people won't produce them nearly as often as they would occur by random chance.

Jerry Coffin
  • 2
    People mailing stuff to my house have written the wrong house number more than once. It is 4321. They tend to write 4231, because "it just can't be 4321. The odds are too low." Hell, I wrote 4231 once myself. – DKNguyen Feb 04 '21 at 22:58
  • According to official sources, 11 people died of COVID-19 on 2020-08-13 in Moscow. The next day, another 11 died. Since then, never once the figure was the same on two consecutive days. Chart – Roman Odaisky Feb 05 '21 at 14:20
  • @RomanOdaisky: Unless you believe that to be a random source, I don't quite see how you'd consider it relevant. If you do believe it's a random source...then I disagree, and think it's still not relevant. – Jerry Coffin Feb 05 '21 at 23:08
  • 1
    It’s a nice real-life example of how lack of repetition can disprove the hypothesis that the data came from a random process. – Roman Odaisky Feb 05 '21 at 23:11
  • @RomanOdaisky: I see what you're getting at. Yes, certainly indicates that the values aren't purely random. – Jerry Coffin Feb 05 '21 at 23:17
0

I suppose the problem is not that a human would generate a biased random number. Computers also use biased random sources, but as long as there is entropy in them, they can be hashed into a shorter, random-enough number. However bad humans are, what a human thinks of obviously has some entropy in it.

The problem is that humans are bad at memorizing truly random numbers and don't have an internal hash mechanism (at least not one that humans are known to be able to consciously use). Hashing by hand would take a lot of time and require memorizing even more numbers, so nearly everyone sensibly chooses to just use a computer. The people who don't mind the effort tend to be the ones who don't realize how biased they are or how to generate random numbers correctly, so the average result is about what you would expect.
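A concrete version of the "hash the biased input" idea (a sketch; the hard part is knowing whether the input really contained enough entropy, since hashing adds none of its own):

import hashlib

# Condense a long, biased, human-typed passage into a fixed-size value.
mashed = "whatever long passage of mashed keys, dates and song lyrics a person typed..."
key = hashlib.sha256(mashed.encode()).hexdigest()
print(key)  # unpredictable only to the extent the input itself was unpredictable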

user23013
  • 2
    I don't understand how you are trying to answer OP with this. Remembering numbers is exactly why we fail at random, because we look at the previous numbers to pick something that "looks random". Dice have zero memory and zero hashing ability. – pipe Feb 05 '21 at 05:55
  • @pipe I don't think it's because "we" want something that "looks random". Yes, many people do act like that, but that's because the people who know not to do that also realize how difficult it is to get everything right, and prefer the easier route of using a computer. It's not that it would be difficult to train people who know better to generate actual random numbers; it's just not worth it. – user23013 Feb 05 '21 at 06:34
  • 1
  • @pipe (Technically, people without any training at all cannot speak a language, and cannot understand the word "random", which shouldn't be used as evidence of how bad humans intrinsically are, but only of how bad most people are. If the idea is actually about how most people are, I don't disagree with the common belief. But unlike other answers, my opinion is that a human mind generally has a random source of usable quality. It only needs to be processed for use in cryptography.) – user23013 Feb 05 '21 at 06:39
0

It's mathematics and psychology. People tend to create patterns that aren't random even when they try not to.

Randomness isn't just any gibberish that doesn't mean anything; it's data with no pattern at all. Humans create patterns.