4

I'm a software engineer, working on a small randomizer library as part of a larger project.

We're using a cryptographic random number generator, which provides an array of random bytes.
We have to use these random bytes to produce an array of random integers fitting whatever requirements are specified.

For example, let's say someone requests $5$ random 8-bit unsigned integers between $50$ and $200$.
The value $50$ would be assigned to the variable $min$, and $200$ assigned to $max$.

Our generator then produces an array of $5$ random bytes, with values ranging from $0$ to $255$.

The most obvious method for converting each random byte $n$ into the desired range would be: $$min+\operatorname{mod}(n,(max-min)+1)$$ ...where $\operatorname{mod}$ is the modulo operation.

This would convert each random byte $(n)$ into a random integer between $min$ and $max$.

The problem with this solution is:
It doesn't produce an even distribution, because each $n$ is a random integer between $0$ and $255$.
Therefore, in cases where $n>max-min$, the distribution overlaps unevenly with itself.
In the example above, the result is twice as likely to be between $50$ and $154$, as opposed to results between $155$ and $200$.
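
To make the skew concrete, here is a small Python sketch (just an illustration; os.urandom stands in for our cryptographic byte source) that tallies the modulo-mapped results:

import os
from collections import Counter

lo, hi = 50, 200
span = hi - lo + 1                          # 151 possible output values

counts = Counter(lo + n % span for n in os.urandom(1_000_000))

per_value_low  = sum(counts[v] for v in range(50, 155)) / 105   # values hit by two residues
per_value_high = sum(counts[v] for v in range(155, 201)) / 46   # values hit by one residue
print(per_value_low, per_value_high)        # roughly 7800 vs. 3900, about 2:1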

We need the random distribution to be even across the requested range ($50$ to $200$ in this example).

What's the simplest way to achieve this?

More complicated operations, such as logarithms, will cause a severe drain on performance.
So we'd like to stay within the realm of simple arithmetic, if at all possible.

For bytes where $n>max-min$, could we subtract $(max-min)$ from $n$, and then add the resulting difference to the next byte in the array?
This is a possible solution I'm considering - but I'm confused about how it would work.
Are there any pitfalls or nuances that apply here?
How would this type of solution work?

Are there any other solutions that would provide a consistent, even random distribution without draining performance?

Giffyguy
  • 1
    What are the constraints (possible values) for $\min$ and $\max$? – mvw Dec 26 '17 at 17:17
  • @mvw $min$ and $max$ have the same constraints as $n$. In the example above, that gives a range of integer values from $0$ to $255$. – Giffyguy Dec 28 '17 at 16:02
  • How do you feel about floating point? I.e., is the time cost in generating 110% of the needed random numbers sufficiently large that running @user21820 's algorithm with floating point (and without the truncating integer divides) could be a win? If that could be interesting, I can post code. – Eric Towers Dec 28 '17 at 23:40

7 Answers

3

Another simple (and perhaps already considered and discarded) solution would be to reject some values. That is, if you want your random bytes to be between 50 and 200, you could simply discard any random byte outside that range and wait for one that satisfies the desired condition. It can be proven that this gives a uniform distribution over the desired range, and statisticians are used to doing it, although for a software engineer I understand it will sound like a waste of resources.

This sounds especially problematic if the range is short, but once the range is half the maximum possible range (for instance, 100 to 227), you can fold the full range in two without any overlapping. If it is even shorter, you start discarding some values again, but not most of them. And so on if you have a range which is (approximately) $256/3$, $256/4$, etc.
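
For the plain rejection version, a minimal Python sketch (names like crypto_byte are only illustrative stand-ins for the asker's generator, here backed by os.urandom):

import os

def crypto_byte():
    return os.urandom(1)[0]          # stand-in for one byte from the crypto generator

def rand_between(low, high):
    # Keep drawing until a byte lands inside [low, high]; every accepted
    # value is equally likely, so the result is uniform on the range.
    while True:
        n = crypto_byte()
        if low <= n <= high:
            return n

sample = [rand_between(50, 200) for _ in range(5)]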

IMPORTANT: What comes in the next paragraphs, against my initial intuition, does not give a uniform distribution. It is well known that the sum of uniform r.v.'s is not uniform, but the r.v. that results from the procedure proposed here is far from just a sum of uniform r.v.'s. For now, I leave it as a solution that will not work, and I'll try to give an idea of why as soon as I can.

If you don't want to discard anything, maybe a more complicated pattern adding auxiliary random bytes could do the trick. If we want, again, numbers between 50 and 200, we take for every random byte $U\le 150$ the number $U+50$. If $U>150$, then we look for another random byte $V$ as a complement, and take $$U+V-101,\quad \text{if} \quad 151\le U+V\le301$$ and $$U+V-252 \quad \text{if} \quad 302 \le U+V\le 452.$$

Finally, we can either discard the remaining cases or search for yet another complementary random byte; we could let the process finish only this way, or cut it off once a preset maximum number of complementary bytes has been reached.

  • But I would have to do some deeper calculations. The sum of uniforms is not uniform at all, although I have an intuition that it should work properly with the conditional distributions that are at stake... Not obvious, but not intractable at all. – Alejandro Nasif Salum Dec 26 '17 at 17:37
  • Indeed if you combine the approach with the modulo approach, we can guarantee that we always cover at least half of the generated range (which you allude to in the second paragraph, I think) Suppose you generate in the range [0, gen_max) and want [min, max), then you find the largest integer k such that k * (max - min) < gen_max and generate values in [0, k * (max - min) ) by discarding and then use modulo as described in the question to map to [min, max). – WorldSEnder Dec 26 '17 at 21:38
  • 2
    "I have an intuition that it should work properly with the conditional distributions that are at stake" The object of your intuition is unclear but, in every sensible interpretation I can imagine, your intuition is wrong. – Did Dec 27 '17 at 10:55
  • Well... intuition is clearly wrong when it leads to the belief that a false statement is true. So I assume you're sure my statement is wrong, that is, you have a proof for it, or at least very persuasive empirical evidence. (Of course a wrong intuition can lead one to believe in true statements too.) – Alejandro Nasif Salum Dec 27 '17 at 20:33
  • I myself have both a proof and evidence now that I worked a little on the problem. So I know for sure that my intuition was wrong. I'll reflect it in an edit to my answer later. – Alejandro Nasif Salum Dec 27 '17 at 20:34
3

There is a way that would not waste so much entropy in the random source, and also has optimal expected time per call. We wish to construct a universal RNG from a given RNG rand that outputs a uniformly random value in the range [0..k-1]. In your question k=256.

k = 256   # size of the source range; in the question, rand() returns a uniform byte in [0..k-1]
p = 0
q = 1

def rnd(c):
    # Return a uniformly random integer in [0..c-1].
    # (p, q) encodes an unused, uniformly random choice p in the range [0..q-1].
    global p, q
    while True:
        if q >= c:
            if p < q - q % c:   # p lies below the largest multiple of c that fits in q
                v = p % c
                p = p // c      # keep the leftover randomness for future calls
                q = q // c
                return v
            else:               # p lies in the remainder; shrink the state to it
                p = p % c
                q = q % c
        r = rand()              # draw another value from the source to widen the choice
        p = p * k + r
        q = q * k

How this works is that (p, q) represents an unused random choice p in the integer range [0..q-1]. So we just call rand enough times to expand that range and our choice within it. If at any point our random choice is less than q-q%c (the largest multiple of c that is at most q), we can return p%c, because it is equally likely to be any residue modulo c, and the groups of size c then collapse into a new unused random choice. Otherwise we subtract that multiple of c from both p and q. In the implementation above, note that the extra if q>=c is redundant, but may increase efficiency if c is large compared to k.
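
For instance, a quick distributional check (assuming rand is wired to a byte source such as os.urandom; that wiring is my addition, not part of the algorithm):

import os
from collections import Counter

def rand():
    return os.urandom(1)[0]                  # uniform in [0..k-1] with k = 256

counts = Counter(rnd(151) for _ in range(1_000_000))   # 151 = 200 - 50 + 1 values
print(min(counts.values()), max(counts.values()))      # both near 1e6/151 ≈ 6623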


I have tested it and it achieves about 95% (entropy) efficiency for c=3 and about 90% efficiency for c=150.

After thinking a bit, I realized that I was wrong to claim that it is entropy optimal. The missing entropy goes into the choice between the two if cases. There is actually a way to fix this, but it is not simple to implement and when I implemented just one level it only improves the efficiency slightly, so it is quite pointless.

user21820
  • 2
    @Giffyguy: By the way, I just found my earlier post with an essentially equivalent algorithm which I proved is optimal. – user21820 Dec 27 '17 at 12:19
  • I am dubious that this uses "all the entropy in the random source". To generate $10^6$ random integers in the interval $[1,3]$, $\lceil 10^6 \log_{256}3 \rceil = 198,121$ random bytes are required. Several quick runs show that more than $1,000,000$ are used. So this method has an efficiency of less than 20%. The other suggestions on this page at least reach 50%. And better than 80% is possible on the sample problem. I'm forced to agree with @DavidC.Ullrich from the other post you cite that you do not have an optimal method. – Eric Towers Dec 27 '17 at 22:16
  • @EricTowers: In my earlier post, I had actually proven optimality of the algorithm in terms of expected number of calls to rand needed per invocation, contrary to David's comment. That is distinct from entropy-optimal, and I know that but somehow overlooked it, so you are right that my claim of entropy-optimality here was wrong. Sorry. But it is in fact easy to fix my algorithm to use all the entropy; just do not discard the group under the if p<q-q%c: case. I will edit now. Thank you very much for your comment! =) – user21820 Dec 28 '17 at 03:15
  • The modifications make this much closer to your claim. However, I disagree on where the "entropy leak" occurs. As written, you discard entropy in the truncated divisions of p and q. (Interpreted as arithmetic decoding of a (conveniently left-aligned) multi-radix integer, you discard entropy when you widen the current interval to width $1$ at those divisions.) – Eric Towers Dec 28 '17 at 20:28
  • @EricTowers: I'm not sure why you said the truncated divisions matter, because that is only done when p<q-q%c, and in that case p is within some multiple of c that is at most q. The information goes into v (c choices) and into p//c (q//c = (q-q%c)/c choices). So I do not see why any information is lost there. I will have to do a careful analysis if you still disagree, because what I'm saying is based on intuition. Thanks! – user21820 Dec 29 '17 at 12:03
  • Rather than go back-and-forth in comments, I've posted an answer using arithmetic decoding in floating point and discussed how it is different and similar to your integer-only algorithm. – Eric Towers Dec 30 '17 at 10:50
2

Rejecting values should work (as mentioned in Alejandro's answer) and should not take too much time. In particular, let's say $2^{k} > min - max \geq 2^{k-1}$. Now, generate your random bits and check whether the number corresponding to the $k$ bits is in the interval $[0, min - max]$. All of these operations are extremely basic (finding $k$ reduces to bit shifts), and you should need at most 2 trials on average. (As a matter of fact, the number of trials before success follows a geometric distribution with parameter arbitrarily close to 1/2; the probability that a geometric random variable with parameter $p$ equals $N$ is $(1-p) p^{N-1}$; i.e. only 1 in $2^{500}$ times would you expect it to take 500.)
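
A minimal Python sketch of this bit-mask-and-reject idea (rand_span is an illustrative name, os.urandom stands in for the byte source, and span denotes the difference between the range endpoints):

import os

def rand_span(span):
    # Uniform integer in [0, span]: keep the smallest number of random bits
    # that can cover the range, and reject values that overshoot it.
    k = span.bit_length()                # smallest k with 2**k > span
    mask = (1 << k) - 1
    while True:
        n = os.urandom(1)[0] & mask      # keep only k random bits
        if n <= span:                    # accepted with probability > 1/2
            return n

values = [50 + rand_span(200 - 50) for _ in range(5)]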

Of course, if division is allowed, you can always divide $2^n$ by the number of values you are trying to generate (min - max + 1). Then, again, you are looking at a geometric distribution.
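
One way to read the division variant (my interpretation, sketched in Python with os.urandom standing in for the byte source): split the $2^n$ raw outcomes into $c$ equal bins of size $\lfloor 2^n/c \rfloor$ and reject anything past the last full bin.

import os

def rand_div(c):
    # Uniform integer in [0, c-1] from one raw byte (2**8 outcomes) per trial.
    bin_size = 256 // c              # each output value owns this many raw bytes
    limit = bin_size * c             # raw values >= limit are rejected
    while True:
        n = os.urandom(1)[0]
        if n < limit:
            return n // bin_size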

I don't know any way to do it without discarding values; it seems like the easiest way to do it.

Edit: You should check for $\pm1$ errors if you are implementing.

E-A
  • See my answer for a method that does not discard any randomness. This kind of idea of iterative branching is common in information theory, such as in arithmetic coding which is entropy-optimal up to an additive constant. – user21820 Dec 27 '17 at 11:08
  • @user21820: I mean I think I am only discarding one bit of randomness which I thought was the guarantee given in many coding schemes, not discarding any sounds pretty cool though. I do not immediately see why yours terminates in some constant time to be honest, can you give any insight into that? – E-A Dec 27 '17 at 11:57
  • On reading your answer more closely, "min−max" does not seem to make sense, and it seems there is also an off-by-one error. When fixed, your method would potentially discard a constant amount of bits at every invocation. In contrast, my method does not discard any bits at all, and all leftover bits of information are kept in p,q so that they can be used in future invocations. In a similar way, arithmetic coding is entropy-optimal up to the leftover bits due to necessary rounding off to the nearest symbol bit-length. – user21820 Dec 27 '17 at 12:11
  • @user21820: I would not be surprised if there is an off by one error at all :D if you have time, can you point out where it is? (also min - max is literally just the difference between min and max; I think that is your value of c?) – E-A Dec 27 '17 at 12:14
  • As for constant time, I made a slip. It should be constant expected time, which is of course the same as the others. The major advantage of my method is simply in extracting all the randomness from the source. – user21820 Dec 27 '17 at 12:16
  • Shouldn't it be $c = max-min+1$ to agree with the asker's definitions? – user21820 Dec 27 '17 at 12:17
  • Uh, probably? I will leave a note to the OP to check if he decides to implement it; thanks for the heads up. I still don't get what it means in this context to extract all the randomness from the source; like, the total entropy of any of these methods is number of times you call rand * entropy associated to rand (log k). – E-A Dec 27 '17 at 12:23
  • The meaning of "extract all randomness" is best defined via information theory, but here it is essentially minimizing the number of times you call rand on average as the number of invocations of your RNG goes to infinity. And from information theory we can compute that the optimal average should be $\log_k(c)$, which my method will achieve. Yours will not, since it throws away the information in the discarded values. Anyway I was saying that you had $min-max$ rather than $max-min$, but never mind. =) – user21820 Dec 27 '17 at 13:32
  • Right, I was wondering why it would achieve that in expectation. (Also 2 is only a constant away from log_k(c) for c>1 :D) And yeah it should be max - min :D. Also, something else your method achieves is for cases where c > k I did not account for that. – E-A Dec 27 '17 at 13:38
2

(OP hasn't responded to my question about floating point, but I'm posting this mostly as an example for user21820.)

We may choose to think of the supply of random bytes as providing a (virtually) infinitely long mixed radix integer (that is conveniently left-aligned, but inconveniently, we only find out the radix of the next digit as we come to it). At each call to the function, we draw more random bytes until we have acquired enough bits to determine what the next output digit is. This is equivalent to arithmetic decoding.

Throughout, we follow the interval $[p,p+w)$, a range of real numbers guaranteed to contain the rest of the infinitely long random binary number. As bits are drawn, the width, w, of this interval decreases by a factor of $2$. Depending on the bit, we either keep the low half ($0$) or the high half ($1$) of the interval. When the entire interval fits in a single bin (definition coming soon) we output that bin and rescale p and w as if that bin were the interval $[0,1)$, preparing for the next call.

When we call our random number function to generate an integer in $[0,c)$, we divide the interval $[0,1)$ into $c$ bins (by multiplying by $c$ and using the integer parts of the unit intervals in $[0,c)$ as the labels for the bins). The interval $[p,p+w)$ is likewise scaled to $[cp, cp+cw)$. If a prior call to the function required reading ahead several bits to resolve in which bin the interval fell, the interval $[cp, cp+cw)$ may be so small that it already fits in a single bin. If not, we draw bits, halving the width of the interval until it does fit.

A Mathematica implementation, with a bit of monitoring code, demonstrating usage.

foo = Module[{
      p, w, k, kmant, mant, rnd,
      rndCount, resetCount, vec
    },
    p = 0.;
    w = 1.;
    k = 256;
    kmant = 8;  (* = log_2(k) *)
    mant = 16;  (* bits of mantissa in p (and w) *)

    rndCount = 0;
    resetCount = 0;

    rnd[c_] := Module[{retVal, r},
        (*  Return random integer in [0,c).  *)
        If[c < 2,  (* then *)
          retVal = 0
          ,  (* else  *)
          p = p*c;
          w = w*c;
          (*  There are much sneakier ways to write the next two conditions.  *)
          While[Floor[p] != Ceiling[p + w] - 1,
            If[p > 2^(mant - kmant) w,
              (*  If width is so small that p + w/k loses precision, 
                  restart p and w.  Only happens if random bits conspire 
                  to make [p,p+w) persistently straddle an integer.  *)
              p = 0.;
              w = c + 0.;
              resetCount++;
            ];
            r = RandomInteger[k - 1];  (* random integer in [0,k-1] *)
            rndCount++;
            p = p + r w/k;
            w = w/k;
          ];
          retVal = IntegerPart[p];  (* For this and next, in C, see modf(). *)
          p = FractionalPart[p];
        ];
        retVal
    ];

    (*  generate one million random integers in the range [0,3) = [0,2].  *)
    vec = Table[rnd[3], {10^6}];  
    (*  report the stats for this run  *)
    Print[{rndCount, N[100 rndCount/(10^6 Log[k, 3])], resetCount}];
        (* Log[k,3] = log(3)/log(k) *)

    (*  return list of integers to foo  *)
    vec
];

(*  Output example: 

    {198998,100.443,681}

    * Drew 198998 random bytes, 100.443% of entropy required to uniquely 
      select one outcome from 3^(10^6) possible outcomes (disregarding 
      conspiracies in the random number generator for resolving which 
      bin the last member of the sequence lies in).
    * Had to reset 681 times due to the risk of precision loss.  This 
      resulted in 198998-198121=877 drawn random bytes being discarded.
*)
(*
      One can also 
  Histogram[foo]
      or
  Length/@Split[Sort[foo]]
      to decide whether uniformity was attained.  The run with the above 
      stats had output counts of
  {332942, 333926, 333132}

      We could even run a chi-squared test to see if the counts above are 
      sufficiently extreme to reject that the data is drawn from the 
      uniform distribution.
  PearsonChiSquareTest[foo, DiscreteUniformDistribution[{0, 2}], 
  "TestDataTable"]
      We find that our test statistic is 1.63479... compared to a chi-
      squared distribution with two degrees of freedom.  The resulting p-
      value is 0.44158...  (That is, only 44.158...% of data drawn from 
      the uniform distribution would have output counts closer to the 
      expected value than these.)  This data is not sufficiently 
      extreme to reject that it is drawn from the uniform distribution.
*)

It is possible to implement this entirely in (arbitrary precision) integers. But this cannot be done (exactly) in finite precision since the random bit source may conspire to require arbitrarily large read-ahead to resolve to which side of a bin boundary the interval eventually falls. (Although, such long read ahead is exponentially unlikely -- at each new bit, either the interval falls entirely to one side of the boundary or it does not, with equal probability.)

user21820's answer is an approximate implementation of this idea, representing the interval as $\left[ \frac{p}{q}, \frac{p+1}{q} \right)$ in integers $p$ and $q$, "barber-poling" the integers in $[0,q-1)$ so that incrementing $p$ increments the represented output value ($p=0$ represents the left $c/q$-wide subinterval corresponding to $v=0$, $p=1$ represents the left $c/q$-wide subinterval corresponding to $v=1$, ..., $p=c-1$ represents the left $c/q$-wide subinterval corresponding to $v = c-1$, $p=c$ represents the second $c/q$-wide subinterval corresponding to $v=0$, and so on ...). Note that this can't represent an interval inside a single bin until $q \geq c$. Also, at the end, when ready to select an output bin, the interval represented by $p$ and $q$ is (very, very likely) less than a unit wide, but $p$ and $q$ are altered to exactly match the bin with integer divisions. (This side-steps the need for arbitrary precision integers by (very, very likely) discarding a fractional bit on each output.)

Eric Towers
  • After reading your post, I am convinced that your analysis in your post is correct, and my claim that it is not the truncated divisions (p//c and q//c) is from a different point of view, which is actually inferior to your way of looking at it. I think my algorithm can actually be made more efficient in a simple way by changing q>=c to something stricter like q>=c*c, but I haven't thought much about it yet. Thanks anyway! – user21820 Dec 30 '17 at 16:46
1

This question reminds me of a similar but puzzle-oriented one. We may use the cryptographic random-byte generator as a stream of random bits: if we start with some interval $[a,b]$, at each step we may select its right/left half according to the generated random bit. If we want to generate a random integer in the interval $[M,N]$, we may apply the above procedure to the interval $\left[M-\frac{1}{2},N+\frac{1}{2}\right)$ and stop the generation of random bits once it becomes impossible to leave the $\frac{1}{2}$-neighbourhood of some integer point. The waste of information is close to zero.

The tricky part is just to implement this in integer arithmetic: each step can be encoded as a simple manipulation of the binary representation of $\frac{1}{L}$, with $L$ being the length of $[M,N]$. The algorithm almost always stops in $\log_2 L+O(1)$ steps, so computing $2\log_2 L$ digits of the binary representation of $\frac{1}{L}$ is almost surely enough to write the above algorithm in integer arithmetic and avoid recomputations/rejections. Note: up to the extraction of a further bit, we may assume that $L$ is odd without loss of generality. This ensures that the binary representation of $\frac{1}{L}$ is purely periodic, with the length of the period equal to the order of $2$ in $(\mathbb{Z}/L\mathbb{Z})^*$.
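
A minimal floating-point sketch of the halving procedure (Python; the names are illustrative, random_bit is just a stand-in for one bit from the crypto stream, and the integer-arithmetic encoding via the binary expansion of $\frac{1}{L}$ is left aside here):

import math
import random

def random_bit():
    return random.getrandbits(1)     # stand-in for one bit from the crypto source

def rand_interval(M, N):
    # Halve [M - 1/2, N + 1/2) according to random bits until the interval
    # can no longer leave the 1/2-neighbourhood of a single integer point.
    lo, hi = M - 0.5, N + 0.5
    while True:
        mid = (lo + hi) / 2.0
        if random_bit():
            lo = mid
        else:
            hi = mid
        m = math.floor(lo + 0.5)     # the candidate integer
        if hi <= m + 0.5:
            return m

The comments below discuss how much leftover information this stopping rule wastes per call.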


Jack D'Aurizio
  • Hmm this throws away up to one bit per invocation, because you do not save the remainder information somewhere. – user21820 Dec 28 '17 at 16:26
  • The objection is merely that you are only showing how to get one uniform random value in some integer range optimally. My answer also achieves this. However you did not show how to get a whole stream of random values in that range optimally. My answer is also not optimal, and I intuitively know why (as stated in my post), but Eric disagrees with my latest analysis so I'd have to think carefully about it. – user21820 Dec 29 '17 at 11:31
  • If one manages to generate a random integer optimally, he also generates a stream optimally: it is enough to continue processing random bits. – Jack D'Aurizio Dec 29 '17 at 12:04
  • No that is false. Your method generates a random integer optimally only because the number of random bits you use must be an integer. In terms of entropy it is off by up to 1 bit as I had stated in my original comment. So you lose up to one bit of entropy per invocation. The only way to lose less bits is to save the state somehow. – user21820 Dec 29 '17 at 12:10
  • What if the decisive bit for defining an integer is also used for the first halving in the generation of the next random integer? – Jack D'Aurizio Dec 29 '17 at 12:14
  • You run into the problem of ensuring that your next invocation gives a uniformly random output. That's what my algorithm guarantees. However, I do not really have the time to do a careful analysis. – user21820 Dec 29 '17 at 13:06
  • @user21820: since every bit from the random source is either $0$ or $1$ with the same probability and $[N-1/2,M+1/2]$ is given by a union of symmetric intervals, it should be true that the decisive bit is either $0$ or $1$ with the same probability, ensuring that the previous tweak does not affect uniformity. – Jack D'Aurizio Dec 29 '17 at 13:52
  • Hmm that's wrong. We already know that you cannot do better than entropy-optimal, and if your proposal works then clearly it does strictly better because you would use strictly less than $\log_2(L)$ bits per invocation. Anyway, the mistake is that if you use that decisive bit for halving in the next invocation then the next random number generated will be uniform (by your symmetry reasoning) but not independent! – user21820 Dec 29 '17 at 15:32
  • 1
    @user21820: you are totally right. – Jack D'Aurizio Dec 29 '17 at 15:43
1

Building on the idea of discarding values out of range, this can be optimized with a bit mask created from a bit scan reverse (BSR). 11 lines of C code do what is requested.

// build: gcc random.c -o random


/*
 * Building on the idea of discarding invalid values, this can be optimized with a
 * bit scan reverse, used to create a mask that removes any unnecessary bits that would
 * make the random value too large.
 * 
 * This depends on the fact that it isn't really the byte from the random generator that
 * is random, but rather all of the bits making up the byte. Thus it is possible to
 * discard any bits that we don't need, because the rest will still be random.
 * 
 * This approach significantly decreases the amount of repeated calls to crypto_rand(),
 * in fact you will never get more than 50% misses (over time), in the absolute worst case.
 * 
 * Compared to getting 99.2% (!) misses in the naive case of discarding the result, and
 * looking for a random number that is either 0 or 1.
 * 
 * To further reduce the number of calls to crypto_rand(), you can put the unused bits
 * from each call into a buffer, and re-use them on the next call to rand_simple().
 * 
*/

#include <x86intrin.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h> // rand() for testing
#include <time.h>   // time() used to seed srand() in main()

int crypto_rand_call_count = 0;

// dummy implementation of cryptographic rand
uint8_t crypto_rand() {
    crypto_rand_call_count++;
    return rand() & 0xff;
}      

// Simplify the problem by always assuming min=0
// max >= 1 
// Use bit scan reverse to drastically reduce the number of re-requests
uint8_t rand_simple(uint8_t max) {
    int highest_bit = __bsrd(max); // Bit scan reverse, dword (= 4 bytes): index of the highest set bit
    int mask = 1;
    int result = 0;
    while (mask |= 1 << highest_bit, highest_bit--);
    while (result=crypto_rand()&mask, result>max);
    return result;
}
// without bsr optimization, hopeless with small max values:
//while (result=crypto_rand(), result>max);

// Random byte between min and max, both inclusive
uint8_t rand_byte(uint8_t min, uint8_t max) {
    return rand_simple(max-min)+min;
}

int test_evenness() {
    int arr[256];
    int i;
    for (i=0; i<=255; i++)
        arr[i] = 0;

    for (i=0; i<=100000; i++)
        arr[rand_byte(240, 255)]++; // Modify this to test different ranges

    for (i=0; i<=255; i++)
        printf("%d: %d\n", i, arr[i]);

}

int main() {
    srand(time(0));
    int i;

    test_evenness();
    printf("\n:%d\n",   crypto_rand_call_count);

    return 0;
}
0

After some looking around (see below), I would say: use the C++11 Standard Library if you can; otherwise discarding out-of-range source bytes seems the easiest way. The thorough way, such as reimplementing the Mersenne-twister-based PRNG approach from the C++11 Standard Library, does not look easy.

My stroll:

That question seems to have been pondered here ("How do I scale down numbers from rand()?").

The slide presentation in this answer ("rand() Considered Harmful | GoingNative 2013") seems interesting.

But analyzing the C++ source for uniform_int_distribution() seems complicated according to this ("c++11 random number generation: how to re-implement uniform_int_distribution with mt19937 without templates").

I would dive deeper into this answer to "Generating a uniform distribution of INTEGERS in C".

BTW my naive thinking was:

The real valued linear transformation from $A=[0, 255]$ to $B=[\min, \max]$ is \begin{align} t(x) &= \min \cdot \left(1- \frac{x}{255} \right) + \max \cdot \frac{x}{255} \\ &= \min + (\max - \min) \cdot \frac{x}{255} \end{align} So the question would be how to implement this in a good way with integer arithmetic.
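
A minimal integer-arithmetic sketch of that transformation (my illustration, in Python; scale_byte is an illustrative name and os.urandom stands in for a byte source; as Eric Towers' comment below points out, it cannot be exactly uniform unless the size of the target range divides $256$):

import os
from collections import Counter

def scale_byte(x, lo, hi):
    # Integer version of t(x) = lo + (hi - lo) * x / 255, rounded to nearest.
    return lo + ((hi - lo) * x + 127) // 255

counts = Counter(scale_byte(x, 50, 200) for x in os.urandom(100_000))
# Pigeonhole: some of the 151 target values receive more of the 256 source
# bytes than others, so the histogram comes out visibly uneven.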

The above "rand() Considered Harmful" video refers to the troubles with this approach. ("DO NOT MESS WITH FLOATING-POINT") :-)

mvw
  • Just referring to your last idea of linear scaling -- it works if the size of the scaled range is also a power of $2$. Consider scaling $2$ random bits to a range of $[1,3]$. You have four bit patterns to assign to three bins. Pigeonhole says someone is getting selected more often than the others. – Eric Towers Dec 26 '17 at 22:11