The Random Number God: With Code in C

I’ve been writing about game development, and today I’m going to get to one of those grotty details that half of everybody thinks is trivial, unimportant, or handled automatically by the environment, and the other half of everybody obsesses over. The difference between the two halves is mostly whether or not they’ve been bitten by a bad bug related to it, or have identified something they want to do with it that their default settings don’t provide.

First, a religious observance. We must speak the sacred truth first observed by one of our prophets. Everybody stand up and say it with me, okay?

“Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin.” — John von Neumann

Okay. Religious invocations aside, let’s get to work.

Your game program or simulation needs a lot of what we usually call random numbers. Of course we don’t mean really random; if we wanted real randomness, we would be using the /dev/random or /dev/urandom kernel devices. What we want for our purposes is a repeatable sequence of uncorrelated numbers, and that means a pseudorandom number generator.

Usually, a program that wants a repeatable sequence of uncorrelated numbers uses the pseudorandom number generator that’s built into the language, development environment, or libraries. But for some programs, we want to be able to save and restore random states, and we want them to generate the same random sequences on all platforms and environments where the game is built. This enables us to do things like save a PRNG state and rebuild a map when restoring, rather than saving and restoring the generated maps in our savefiles, for example.

If we are being cute or lazy, we could even have the savefile consist entirely of the initial PRNG state plus the sequence of keystrokes so far entered by the player. In short, if we want to be able to save a PRNG state and recreate something instead of saving and restoring a potentially much larger and more complex thing, then we’re making more than casual use of our pseudorandom generator and we need to nail it down so that we’re using the same pseudorandom generator regardless of what environment we build in.

Finally, in many build environments there are also problems with the built-in random number generator, and even what we consider casual use of the PRNG can make these problems become ugly. For example, if I habitually use a modulus operation to scale the result of random number generation to a given range, I want to be dead sure that I’m using a PRNG where I can absolutely rely on there being no short-period repetitions in the low-order bits.

Because we will be generating long sequences of numbers when we do game tasks like map, monster, and treasure generation, or simulation tasks like simulating turbulence for airflow, and we don’t want recent prior sequences of numbers to rule out possibilities for the current results, we want the uncorrelated sequence lengths to be long. Most build environments provide PRNG’s with uncorrelated sequences of length one, meaning that given any previous output there is only one possible next output.

Usually when we take a random number, we want an integer in some range, so we either use scaled division or a modulus operation to map the PRNG output onto that range. In the best case, this has the effect of ‘stretching’ the length of uncorrelated sequences by the ratio of the bit length of the PRNG output to the bit length actually used.

For example, if we have a 64-bit PRNG output with an uncorrelated sequence of length one, and we’re choosing equiprobable things from a table of 16 items, we’re only using four bits per output. If we don’t have short cycles or nonuniform distribution of the bits we’re actually using, we can pick up to 16 items from that table in what is apparently an uncorrelated sequence. One unfortunate truth is that the length of the uncorrelated sequence, in bits, cannot be longer than the number of bits in the random generator’s state, but it can be much shorter.

Scaled division suffers from disproportionate correlation if there are short cycles among the most significant bits, and modulus operations suffer if there are short cycles among the least significant bits. This matters because many of the classic pseudorandom number generators that are built into system libraries do indeed have short cycles in the least significant bits. Several of the most common flawed generators have, in fact, a least significant bit with a period of two (it alternates between zero and one), a second-least significant bit with a period of four (which is zero twice and one twice in each period), a third-least significant bit with a period of eight (which is zero four times and one four times within each period), and so on. Using modulus operations to scale such generators to an interval gives sequences with repeating patterns that don’t much resemble uncorrelated sequences.
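
If you want to see this effect for yourself, here’s a tiny self-contained program (it borrows the BSD rand constants from later in this article; the program itself is just an illustration, not part of the generator code below). It prints the least significant bit of sixteen successive outputs, and what comes out is a strictly alternating 0101… pattern.

/* Demonstration of the least-significant-bit problem in a
   power-of-two-modulus LCG, using the BSD rand constants.
   The low bit of each output simply alternates. */
#include <stdio.h>

int main(void){
  unsigned int x = 12345;   /* any seed shows the effect */
  int i;
  for (i = 0; i < 16; i++){
    x = (1103515245u * x + 12345u) & 0x7fffffffu;  /* mod 2^31 */
    printf("%u", x & 1u);
  }
  printf("\n");
  return 0;
}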

So now we know two things we want in a PRNG for games or simulations, in addition to apparently uniform distribution of all subsequences: we want long uncorrelated sequences, and no short cycles on bit positions. In fact, we can state the second part much more generally and usefully: we want all scaled divisions and modulus operations on the output, using any divisor less than the period of the PRNG, to have the same uniformity of distribution and the same period as the PRNG itself. Short cycles on bit positions are a special case that results in nonuniform outputs or short periods when the divisors, or moduli, are multiples of two or powers of two.

With these criteria in mind, I’m going to take a look at some of the better-known Pseudo Random Number Generators – and some less well-known ones that are much better and not really any more trouble to code. For this article I’m going to restrict myself to talking about generators whose outputs can easily be described in terms of their previous outputs. While that leaves out most cryptographically-secure PRNG’s, it does give us a fair amount of material to cover and a bunch of generators that are excellent for simulations and games.


Randu: This is a PRNG that was built into a lot of IBM systems. It has become a textbook example of a poor random number generator. With A = 65539 and M = 2^31, the function is

X(n) = A * X(n-1) mod M

As with most of these generators, the state of Randu consists of the previous output. The period of Randu is 2^29. The length of uncorrelated output is one, and it has short period repetition in its other low bits. There are four independent sets of states; Randu can never transition to a state of a different set than the one it’s in. Zero is not a member of any of these sets, so Randu can never produce the output zero under any circumstances.
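
Just to make the recurrence concrete, here’s what one step of Randu looks like in C (the function name is only illustrative, and nobody should actually use this generator):

/* One step of Randu: X(n) = 65539 * X(n-1) mod 2^31.  The unsigned
   multiply wraps mod 2^32, and masking keeps the low 31 bits, which
   is the same as reducing mod 2^31. */
static unsigned int randu_next(unsigned int x){
  return (65539u * x) & 0x7fffffffu;
}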


BSD rand: This is the old BSD random number generator. It is almost as bad as Randu. Where A = 1103515245, C = 12345, and M = 2^31, the function is

X(n) = (A * X(n-1) + C) mod M

The period of BSD rand is 2^31, and the uncorrelated sequence length is one. Given any 31-bit seed, it will cycle through all possible states before repeating. It has short period repetition in its low bits.


Vax MTH$RANDOM: This is the random number generator that many Vax development environments came with. Where A = 69069, C = 1, and M = 2^32, the function is

X(n) = (A * X(n-1) + C) mod M

The period of MTH$RANDOM is 2^32, it has an uncorrelated sequence length of one, and it has short period repetition in its low bits.


Transputer rand: This is the random number generator from the INMOS
Transputer Development system. With A = 1664525 and M = 2^32, the function is

X(n) = (A * X(n-1)) mod M

The period of Transputer rand is 2^32 – 1. Zero is not a valid state nor a possible output, and it has short period repetition in its low bits.


Cray rand: This 48-bit random generator was used in Cray systems.
With A = 44485709377909 and M = 2^48 the function is

X(n) = (A * X(n-1)) mod M

The behavior of this generator is strange in several unexpected ways. If seeded with an odd number it produces only odd numbers. Even numbers are not valid seeds, because if seeded with an even number it would produce even numbers until it eventually produced zero, at which point it would get stuck at zero, never again producing anything else. Every sequence it can produce can be produced starting from either of two different states. The period of the generator is 2^46. The uncorrelated sequence length is one. And aside from the astonishing length-one cycle in the least significant bit, it also has short period repetition in its other low bits.


Maximal 48-bit linear congruential generator: This was deployed in several development environments for Apple systems. Given A = 44485709377909, B = 11863279, and C = 2^48, the function is
X(n) = (A * X(n-1) + B) mod C.

The period is 2^48, the uncorrelated sequence length is 1, and it has short period repetition in its lower bits.


Unix rand48: This is the Unix rand48 generator. With A = 25214903917, C = 11 and M = 2^48, the function is

X(n) = (A * X(n-1) + C) mod M

The period is 2^48, the uncorrelated sequence length is 1, and it has short period repetition in its lower bits.


Maximal period 64-bit Linear Congruential Generator. With A = 2862933555777941757, B = 3037000493, and C = 2^64, the function is

X(n) = (A * X(n-1) + B) mod C

The period is 2^64, the uncorrelated sequence length is 1, and it has short period repetition in its lower bits.


Up to this point, this has been a “wall of shame” due to the short period repetitions in the low-order bits of all the generators so far described. The above generators, bluntly, are unspeakably awful. Mostly this is due to engineering decisions made by people who cared more about the performance cost of a single modulus operation than about the validity of their results. The ones below don’t suffer from that particular flaw.


Minimal Standard: This is Park and Miller’s MINSTD generator. It is a linear congruential generator, but uses a prime number rather than a power of 2 as the modulus. With A = 16807 and M = 2^31 – 1, the formula is

X(n) = (A * X(n-1)) mod M

It has a period of length 2^31 – 1, the uncorrelated output length is 1, and it does not have short-period repetition on any divisor less than M. Zero is not a valid state and can never be produced as an output. Park & Miller later advocated changing the constant A from 16807 to 48271 for better performance on a particular test of random behavior.

It is important to understand what this is: having only a 32-bit state, it is still fairly weak as a random number generator, and its uncorrelated sequence is just one 32-bit output long. It is a minimal standard in that it avoids certain problems, not a particularly good generator in terms of being robust or having a long uncorrelated sequence. Park & Miller were quick to point out that it was quite small and that its uncorrelated length was very short, but for a while it became the de facto standard in many environments, as people used it as a sort of evidence that they had moved on from the unspeakably awful generators above to something better. As we made more demands on random number generation, the short uncorrelated length became a problem, and in modern systems we usually need to do better than this minimal standard.
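
Implementing a prime-modulus LCG takes slightly more care than the power-of-two generators, because the multiply can overflow before the reduction. Here is a minimal sketch of one MINSTD step using a 64-bit intermediate (the function name is only illustrative; Schrage’s method is the classic alternative when no 64-bit type is available).

/* One step of MINSTD: X(n) = 16807 * X(n-1) mod (2^31 - 1).
   The 64-bit intermediate keeps the multiply from overflowing;
   the state must always stay in the range 1 .. 2^31 - 2. */
#include <stdint.h>

static uint32_t minstd_next(uint32_t x){
  return (uint32_t)(((uint64_t)16807 * x) % 2147483647u);
}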


61-bit prime modulus linear congruential generator: This also uses a prime (in fact a Mersenne prime) as its modulus.
Given A = 437799614237992725 and P = 2^61 – 1, the formula is

X(n) = A * X(n-1) mod P

The period is 2^61 – 1, the uncorrelated output length is 1, and it does not have short period repetition on any divisor less than P. Zero is not a valid state and can never be produced as an output.
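
As a sketch of what one step of this looks like in C: the product of two 61-bit values needs a 128-bit intermediate, and the unsigned __int128 type used here is a GCC/Clang extension rather than standard C, so treat this as illustrative only.

/* One step of the 61-bit prime-modulus generator:
   X(n) = 437799614237992725 * X(n-1) mod (2^61 - 1). */
#include <stdint.h>

#define P61 ((UINT64_C(1) << 61) - 1)

static uint64_t lcg61_next(uint64_t x){
  unsigned __int128 prod = (unsigned __int128)UINT64_C(437799614237992725) * x;
  return (uint64_t)(prod % P61);
}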


2^62 – 2^16 + 1 modulus congruential generator:
Given A = 3355703948966806692 and P = 2^62 – 2^16 + 1, the formula is

X(n) = A * X(n-1) mod P

The period is slightly less than 2^62, the uncorrelated output length is 1, and it does not have short period repetition on any divisor less than P. Zero is not a valid state and can never be produced as an output.


2^64 – 2^10 + 1 modulus congruential generator:
Given A = 3355703948966806693, P = 2^64 – 2^10 + 1.
X(n) = A * X(n-1) mod P

The period is P, slightly less than 2^64. Zero is not a valid state and cannot be produced as an output. The uncorrelated output length is one. It does not have short period repetition on any divisor less than P. The output is subject to no pattern currently known aside from the periodic repetition.


gsl_rng_mrg: This is a small lagged-Fibonacci generator by L’Ecuyer, Blouin, and Couture. Where A = 107374182, B = 104480, and M = 2^31 – 1, the formula is

X(n) = (A * X(n-1) + B * X(n-5)) mod M

The period is about 10^46, the length of uncorrelated output sequence is 5, and it does not have short period repetition on any divisor less than M. It requires 5 32-bit words of state, and for a state to be valid at least one of those words must be odd.
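
Here’s a rough sketch of how that recurrence might be coded with a five-word circular buffer. The struct and function names are just illustrative, and seeding and state-validity checks are omitted.

/* X(n) = (107374182 * X(n-1) + 104480 * X(n-5)) mod (2^31 - 1),
   kept in a five-word ring where x[pos] holds X(n-1). */
#include <stdint.h>

#define MRG_M 2147483647u   /* 2^31 - 1 */

struct mrg5 { uint32_t x[5]; int pos; };

static uint32_t mrg5_next(struct mrg5 *s){
  uint32_t xn1 = s->x[s->pos];             /* X(n-1) */
  uint32_t xn5 = s->x[(s->pos + 1) % 5];   /* X(n-5), the oldest value */
  uint64_t sum = (uint64_t)107374182u * xn1 + (uint64_t)104480u * xn5;
  uint32_t xn  = (uint32_t)(sum % MRG_M);
  s->pos = (s->pos + 1) % 5;               /* advance the ring... */
  s->x[s->pos] = xn;                       /* ...overwriting the old X(n-5) */
  return xn;
}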


Combined Multiple Recursive Generator by L’Ecuyer: The output Z is given as a combination of two very small lagged-Fibonacci generators X and Y. Where B = 63308, C = -183326, D = 86098, F = -539608, M = 2^31 – 1, and N = 2145483479, the formula is

Z(n) = (X(n) – Y(n)) mod M
X(n) = (B * X(n-2) + C * X(n-3)) mod M
Y(n) = (D * Y(n-1) + F * Y(n-3)) mod N

Period is about 2^205. Length of uncorrelated output sequence is 5. It does not have short period repetition on any divisor less than M. Each of the two generators requires a state of 3 32-bit words, and at least one word of each must be odd.


R250: This is a shift register generator by Kirkpatrick & Stoll. It is mathematically a one-bit generator, but would be implemented on words of some width N, effectively running N such generators in parallel.

X(n) = X(n-103) XOR X(n-250)

For state it requires 250 N-bit words. Its uncorrelated output length is 250, and its period is 2^250. It does not have short period repetition on any divisor less than 2^250. A valid state must have, for each of the N bit positions, at least one word of the state in which that bit is nonzero.
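
A sketch of how R250 might be run word-parallel on 32-bit words, with a circular buffer of the 250 most recent words. Names are illustrative, and seeding (which must satisfy the validity condition above) is omitted.

/* X(n) = X(n-103) XOR X(n-250), one such generator per bit position. */
#include <stdint.h>

struct r250 { uint32_t x[250]; int pos; };  /* x[pos] holds X(n-250) */

static uint32_t r250_next(struct r250 *s){
  int i250 = s->pos;                        /* oldest word, X(n-250) */
  int i103 = (s->pos + (250 - 103)) % 250;  /* X(n-103) */
  uint32_t xn = s->x[i103] ^ s->x[i250];
  s->x[s->pos] = xn;                        /* old X(n-250) is no longer needed */
  s->pos = (s->pos + 1) % 250;              /* next-oldest word becomes X(n-250) */
  return xn;
}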


gsl_rng_zuf: This is the ZUFALL lagged-Fibonacci generator of Petersen. It generates doubles in the range 0…1. Given U(k) = k – floor(k), the formula is

X(n) = U(X(n-273) + X(n-607))

It requires 607 doubles (or floats) as state, and the uncorrelated output length is 607. Its period is about 2^607, although in environments with nonstandard floating-point rounding the generated sequence may differ.


Finally, there is Ziff’s four-tap shift-register-sequence random-number generator, published in Computers in Physics, 12(4), Jul/Aug 1998, pp. 385-392. This is a one-bit generator, which can be run in parallel on words of any length.

X(n) = X(n-471) XOR X(n-1586) XOR X(n-6988) XOR X(n-9689)

It requires 9689 N-bit words as state, has a period of 2^9689, provides an uncorrelated output length of 9689, and has no short-period repetitions on any divisor smaller than 2^9689.


Now, looking at the last few examples, we can get long uncorrelated sequences – but we may find that random-generator states which differ in a small way give us identical or mostly-correlated initial sequences hundreds (or, in that last case, thousands) of outputs long before the states diverge enough that their outputs aren’t identifiably similar. Depending on how we want to use our random states, that may be inconvenient. If we want to initialize the RNG to some standard state, then make a small change to the state and from it produce an unrecognizably different sequence, we have identified another desirable property: rapid divergence of sequences given small changes of state.

Fortunately, given that we have just described a lot of RNG’s with very small state and rapid divergence, it’s easy to use one of them to produce a sequence which you use to modify the state of a very large RNG. So you can provide a single value for your ‘tweak,’ giving a state to the rapidly-diverging RNG, and use that rapidly-diverging sequence to modify the whole state of the much larger RNG.

Anyway, having done a little research and a little coding, here’s what I’m using to produce pseudorandom numbers.  My particular definition is for a lagged-Fibonacci generator with 98 words of state, as outlined in Knuth.  I use the MINSTD generator above to modify the state of a generator when I need an instantly diverging sequence that’s still easy to produce given the original RNG and a single integer saying how it’s tweaked.  This code also contains several utility functions that will be handy for other games and simulations.

 

RNG_new allocates and initializes a new pseudorandom generator.

RNG_randomize drinks randomness from /dev/urandom to initialize the generator.

RNG_Tweak takes an integer argument and uses it to transform the state of an RNG so that the RNG produces a different sequence from that point onward.

RNG_Rand is the fundamental building block; it produces a random number between zero (inclusive) and RNG_MAX (exclusive).

RNG_roll produces a roll uniformly distributed between 0 inclusive and some integer argument exclusive.

RNG_dieroll produces a roll uniformly distributed between some minimum inclusive and some maximum exclusive.

RNG_diesum produces a sum of dierolls, of a sort familiar and intuitive to lots of RPG gamers.

RNG_Uniform produces a double uniformly distributed in the range between a minimum argument and a maximum argument.

RNG_Gaussian takes a mean and a standard deviation and produces a double having a normal distribution (aka, a Gaussian distribution).

RNG_LimGaussian takes a mean, standard deviation, and a limit, and produces a double with a gaussian distribution but excludes results more than <limit> standard deviations away from the mean.

RNG_Contest takes two integer values and a dominance factor, then makes a random determination which of them should be regarded as “winning” in a given instance of a contest.  The bigger one is more likely to win the higher the dominance factor is.  It’s for things like hit rolls, strength contests, etc.

 

/* This C code is copyright (C) Ray Dillinger, and is 
freely available for use by others.  It comes with no 
warranty whatsoever.   

Most of the utility functions consume more than one PRNG output, 
because they discard PRNG outputs which would cause them to produce 
statistically nonuniform distributions or results outside 
the desired ranges. */


/* Standard headers this module relies on. */
#include <assert.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Other modules can use RNGtype without caring about its internals. */
typedef struct RNGstruct *RNGtype;

/* number of 32-bit words our RNG uses for state. */
#define RNG_STATESIZE 100

/* RNG_MAX for an LFG must be even for maximal period (cf Knuth) 
   Maxint is odd, so we're using one less: 2^31-2. */
#define RNG_MAX 2147483646


struct RNGstruct {
  /* invariant: loc must contain an integer between 0 (inclusive)
     and the length of the rngstate (exclusive) */
  int loc;
  /* invariant: rngstate must contain integers between 0 (inclusive)
     and RNG_MAX(exclusive) and at least one of them must be odd. */
  unsigned int rngstate[RNG_STATESIZE];
}; 

/* Offsets from the current location to sum for the lagged-fibonacci 
   generator, giving us maximal period and other good randomness 
   qualities. We are implementing the generator whose function is 
X(n) = (X(n-27) + X(n-98)) mod RNG_MAX. */

#define RNG_TAP1 27
#define RNG_TAP2 98


/* Here is the backbone of our RNG:  This returns an integer 
   uniformly distributed between 0 (inclusive) and RNG_MAX 
   (exclusive).  Everything else is implemented in terms of 
   this. */

int RNG_Rand(RNGtype rng){
  int loc2;
  int loc3;
  unsigned int sum;

  /* This isn't valid on platforms where integers are 
     smaller than 4 bytes, so assert. */
  assert(sizeof (unsigned int) >= 4);

  /* rngstate (with loc) is a lagged-fibonacci generator with
     extremely long period, high quality randomness, and decorrelated
     sequences of 99 outputs or more.  This is the backbone of our
     RNG. See Knuth v2, 3.2.2 for details. */

  rng->loc = (rng->loc + 1) % RNG_STATESIZE;
  loc2 = (rng->loc + (RNG_STATESIZE - RNG_TAP1)) % RNG_STATESIZE;
  loc3 = (rng->loc + (RNG_STATESIZE - RNG_TAP2)) % RNG_STATESIZE;

  sum = (rng->rngstate[loc2] + rng->rngstate[loc3]) % RNG_MAX;
  rng->rngstate[rng->loc] = sum;

  return(sum);
}


/* Call RNG_randomize with an RNGtype pointing at an already-
   allocated RNGstruct to randomize the RNGstruct that the 
   RNGtype points to. This eats randomness from /dev/urandom, 
   and should probably be done only once per game.*/

void RNG_randomize(RNGtype rng){
  FILE *randomsource;
  int count = 0;
  int oddfound = 0;

  rng->loc = 0;  

  randomsource = fopen("/dev/urandom", "r");
  assert(randomsource != NULL);
  fread(&(rng->rngstate), sizeof(rng->rngstate), 1, randomsource);
  fclose(randomsource);

  for (count = 0; count < RNG_STATESIZE; count++){
    /* Yes, we're throwing away entropy here.  This is not elegant. */
    rng->rngstate[count] %= RNG_MAX;
    oddfound |= (rng->rngstate[count] & 0x1);
  }

  /* now we make sure something in the active state is odd! */ 
  if (!oddfound) 
    rng->rngstate[RNG_STATESIZE/2] = 1;
}

/* "Tweaking" the RNG changes all the state values using a nonlinear
   function, forcing it to diverge instantly.  This means we can have
   one or a few "standard" RNGs for the game, which we load up and
   then "Tweak" in various predefined ways to get repeatable sequences
   to use for various predefined purposes.
*/
void RNG_Tweak(RNGtype rng, unsigned int adj){
  unsigned int masque = adj;
  int count;
  int loc;
  int oddfound = 0;

  /* force masque into the valid range to serve as a state value
     for the Park & Miller MINSTD LCG.  */
  masque %= 2147483646;
  masque ++;

  for (count = 0; count < RNG_STATESIZE; count++){
    /* This evolves masque according to Park & Miller's MINSTD linear 
       congruential generator as we step through the loop. */
    masque = (16807 * masque) % 2147483647;
    loc = (rng->loc + count) % RNG_STATESIZE;
     /* Park & Miller's modulus is 2^31-1 but it can never produce 
        zero as an output.  The LFG's modulus (and thus the range of 
        valid words of LFG state) is 2^31-2 and does include zero. 
        Therefore the actual range is identical and these can be 
        added modulo RNG_MAX without introducing any bias. */
    rng->rngstate[loc] = (masque + rng->rngstate[loc]) % RNG_MAX;
    oddfound |= (rng->rngstate[loc] & 1);
  }
  /* tweaking must leave at least one member of the state odd.  */
  if (!oddfound)
       rng->rngstate[(loc + RNG_STATESIZE/2) % RNG_STATESIZE] = 1;
}

/* Allocate a new random number generator.  If the argument is 
   Non-NULL the generator is copied from an existing generator. 
   If the argument is NULL, the new generator is initialized from 
   /dev/urandom.  */
RNGtype RNG_new(RNGtype copythis){
  RNGtype retval;
  retval = calloc(1, sizeof(struct RNGstruct));
  assert(retval != NULL);
  if (copythis != NULL) 
    memcpy(retval, copythis, sizeof(struct RNGstruct));
  else RNG_randomize(retval);
  return(retval);
}


/* RNG_roll returns an integer UNIFORMLY(!) distributed between 
0 (inclusive) and max (exclusive).  We do this by selecting rollmax
to be the largest multiple of max that is not greater than RNG_MAX.
That allows results below rollmax to be reduced modulo max without
introducing any bias at all.  ie, this is the sort of thing normally 
written as "random() % max", but without introducing the slight bias
toward zero which that entails. */
unsigned int RNG_roll (RNGtype rng, unsigned int max){
   unsigned int rollmax;
   unsigned int x;

   /* check for max being zero or bigger than RNG_MAX before we 
      divide by it. */
   assert(max > 0 && max <= RNG_MAX);
   rollmax = RNG_MAX - (RNG_MAX % max);

   /* get new x until x is less than rollmax. */
   for (x = RNG_Rand(rng); x >= rollmax; x = RNG_Rand(rng));
       /*empty loop*/
   return (x % max);
}


/* RNG_dieroll returns an integer uniformly distributed between 
min (inclusive) and max (exclusive).  A regular die roll for 
example could be expressed as RNG_dieroll(rng, 1, 7) */
int RNG_dieroll(RNGtype rng, int min, int max){
  return (min + RNG_roll(rng, (max - min)));
}


/* Returns the sum of rolling <number> of <sides> sided dice.
   This is familiar to players of RPG's, who are likely to find it 
   more intuitive than working with Gaussian distributions.  */

int RNG_diesum(RNGtype rng, int number, int sides){
  int result = 0;
  int count;
  for (count = 0; count < number; count++)
    result += RNG_dieroll(rng, 1, 1+sides);
  return(result);
}


/* returns a double uniformly distributed between min and max. */
double RNG_Uniform(RNGtype rng, double min, double max){
  if (min > max) return(RNG_Uniform(rng, max, min));
  if (max - min == 0.0) return (min);
  return(min + ( (max-min) * ((double)RNG_Rand(rng)/(double)RNG_MAX)));
}


/* Polar form of the Box-Muller transform for generating two
   Gaussian random numbers from uniformly-distributed random
   numbers.  I just return one Gaussian and ignore the 
   other.  The arguments are the desired mean and the 
   size of the desired standard deviation.  */

double RNG_Gaussian(RNGtype rng, double mean, double range) {  
  double fac, rsq, v1, v2;
  double result;

  assert (range >= 0);  

  do {
    v1  = RNG_Uniform(rng, -1.0, 1.0);
    v2  = RNG_Uniform(rng, -1.0, 1.0);
    rsq = v1 * v1 + v2 * v2;
  } while (rsq >= 1.0 || rsq == 0.0);
  
  fac = sqrt (-2.0 * log (rsq) / rsq);
  result = v2 * fac;
  return(mean + (result * range));
} 


/* a Gaussian-distributed number, restricted to no more than
   <limit> standard deviations from the mean, where each 
   standard deviation is of size <range>. This produces a 
   Gaussian (normal) distribution with the "long tails" cut 
   off. */
double RNG_LimGaussian(RNGtype rng, double mean,
		       double range, double limit){
  double result;
  /* if limit is so small that it makes nonsense, assert in 
     debug mode or return a uniform output if not debugging. 
     A gaussian within less than 1/20 standard deviation is 
     almost indistinguishable from a uniform distribution 
     over that range, and whatever's asking for it is 
     probably a bug. */
  assert(limit > 0.05);
  if (limit < 0.05) 
      return RNG_Uniform(rng, mean-(limit*range), mean+(limit*range));

  for (result = RNG_Gaussian(rng, 0.0, 1.0);
       result > limit || result < -limit;
       result = RNG_Gaussian(rng, 0.0, 1.0));
  return(mean + (result * range));
}


/* returns the result of a "contest roll" between two values.
   if the first wins, the result is positive.  If the second
   wins, the result is negative.  This function never returns 
   zero. Domination is the importance of the difference in these 
   numbers for purposes of a contest; the more the numbers 
   dominate, the higher it should be. */
int RNG_Contest(RNGtype rng, int first, int second, int domination){

  int domcount;
  int result = 0;

  assert(first > 0);
  assert(second > 0);
  
  assert(first < RNG_MAX);
  assert(second < RNG_MAX);

  for (domcount = 0; domcount <= domination; domcount++)
    result += RNG_roll(rng, first) - RNG_roll(rng,second);
  while (result == 0)
    result += RNG_roll(rng, first) - RNG_roll(rng,second);

  return result;
}
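
For what it’s worth, here’s a quick illustration of how the pieces above might fit together; this main() is only a sketch, the numbers in it are arbitrary, and it assumes the declarations above are in scope.

/* Illustrative use of the interface above. */
int main(void){
  RNGtype rng, maprng;
  int d20, stats;
  double g;

  rng = RNG_new(NULL);       /* fresh generator seeded from /dev/urandom */
  maprng = RNG_new(rng);     /* copy it so its sequence is repeatable */
  RNG_Tweak(maprng, 42);     /* divert the copy, e.g. for map generation */

  d20   = RNG_dieroll(rng, 1, 21);       /* uniform on 1..20 */
  stats = RNG_diesum(rng, 3, 6);         /* 3d6 */
  g     = RNG_Gaussian(rng, 0.0, 1.0);   /* standard normal */

  printf("d20=%d  3d6=%d  gauss=%f\n", d20, stats, g);

  free(rng);
  free(maprng);
  return 0;
}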
