Genetic Algorithms: Complexity vs. Mutation

In genetic algorithms, mutation is both the source and the enemy of complexity. Any complex solver that ever exists in the population must arise through mutation, but mutation also limits complexity by destroying information: whenever mutation creates new information, it destroys old information. As solvers become increasingly complex, the odds against any particular mutation turning out to be a good idea get steadily worse. Complexity cannot increase beyond the point at which mutation destroys information more rapidly than selection can weed out deleterious mutants to preserve it.
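
A rough way to quantify that trade-off (not anything from my program – just the standard “error threshold” result from quasispecies theory) is that a genome of length L, copied with per-symbol mutation rate u and holding a selective advantage σ over its competitors, can only be maintained by selection when roughly

\[
L \, u \;\lesssim\; \ln \sigma ,
\]

so the sustainable genome length – a crude proxy for complexity – shrinks as the mutation rate rises.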

In biology, this is why RNA organisms more complicated than single cells have never evolved; RNA has too high a mutation rate to sustain the required complexity. To support more complex organisms, terrestrial life started using more stable DNA, protected it inside cell nuclei, and evolved numerous DNA-repair mechanisms that hold mutation down to a rate consistent with that degree of complexity. Rapid mutation allows RNA organisms to adapt to new niches quickly (like pathogens that develop multi-drug resistance), but it limits the complexity of the organisms as a whole.

In fact, animals and vascular plants are so extraordinarily complex that preserving their organization requires mutation rates low enough to drastically retard further evolution (and evolution wouldn’t have had time to produce anything so complex in the first place), except that in biology different parts of the genome have developed different mutation rates. Crucial existing structures are protected by very low mutation rates in the parts of the genome that code for them (via redundant copies, DNA-repair mechanisms, and so on), while other parts of the genome remain more susceptible to mutation, allowing new adaptive traits to develop.

So. Biological mutation aside, what do we do about mutation in genetic algorithms? People have had a recurring problem when they evolve mutation rates along with all the other traits the genomes code for: the evolved mutation rates rapidly drop too low to produce useful results. I didn’t observe this problem – at least not to the usual extent – in my own code, and I now think I understand why.

This problem is, I believe, an emergent result of the way typical genetic algorithm programs are structured.

First, a particular genome succeeds best when it maximizes the fitness of all its children (otherwise the opportunity to reproduce is wasted), but the human typically wants improvements in the fitness of the most fit individual. If a sufficiently simple individual can be sufficiently fit, a GA can often find it even with high mutation rates. Indeed, if the genome is a direct encoding of values to plug into a structure of predetermined complexity, a genetic algorithm wasn’t necessary in the first place, because the human’s objectives can be met with a simple hill-climbing strategy and restarts; a genetic algorithm with very high mutation rates will do a kind of parallelized, automated hill climbing to find such a simple solution, because in that case it needn’t produce and preserve complexity, it only needs to find values. But a low mutation rate works in favor of any particular genome: if a high mutation rate is causing its competition to produce a lot of non-viable individuals, then a particular (viable) genome’s odds of success are served by a lower mutation rate, because its offspring will be better adapted, on average, than all those non-viable mutants.
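
Here’s a minimal sketch of that dynamic (not my actual program – just a toy with made-up parameters) in which the mutation rate is itself a heritable trait. Once reasonably fit genomes exist, lineages carrying lower mutation rates tend to win tournaments, and the population’s average mutation rate drifts toward zero:

```python
import random

GENOME_LEN = 40      # bits per genome (made-up size)
POP_SIZE = 200
GENERATIONS = 300

def fitness(bits):
    # Stand-in problem: count the ones (a Boolean-style score).
    return sum(bits)

def mutate(bits, rate):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in bits]

def make_individual():
    return {"bits": [random.randint(0, 1) for _ in range(GENOME_LEN)],
            "rate": random.uniform(0.0, 0.2)}  # the mutation rate is heritable

pop = [make_individual() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    new_pop = []
    for _ in range(POP_SIZE):
        # Tournament selection: the fitter of two random individuals reproduces.
        a, b = random.sample(pop, 2)
        parent = a if fitness(a["bits"]) >= fitness(b["bits"]) else b
        # The mutation rate itself wobbles a little as it is inherited.
        child_rate = min(max(parent["rate"] * random.uniform(0.9, 1.1), 1e-6), 1.0)
        new_pop.append({"bits": mutate(parent["bits"], parent["rate"]),
                        "rate": child_rate})
    pop = new_pop
    if gen % 50 == 0:
        avg_rate = sum(ind["rate"] for ind in pop) / POP_SIZE
        best = max(fitness(ind["bits"]) for ind in pop)
        print(f"gen {gen:3d}   avg mutation rate {avg_rate:.4f}   best fitness {best}")
```

Run it a few times; the exact numbers vary, but the average mutation rate usually falls while the best fitness saturates.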

Second, in a typical GA the reward for correctness is not at all adaptive to the needs of the system. GAs usually have fitness functions that reward correctness in the Boolean sense rather than information in Shannon’s sense. In other words, the reward for the rightness or wrongness of a particular solver’s response to a particular problem does not depend in any way on whether it’s common or rare for other solvers to get that problem right (or wrong). A mutant that can correctly answer a problem nobody else can – which ought to be very valuable – won’t be favored if it gets even one unimportant answer wrong.
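
As a quick illustration of the contrast (a sketch, not my fitness code – the rarity-weighted scorer below is just one way to make the reward depend on what the rest of the population does; my actual mechanism, the Delphic Casino, is described further down):

```python
from typing import List

def boolean_fitness(answers: List[bool]) -> int:
    """Typical GA scoring: one point per correct answer, no matter
    how many other solvers also got that problem right."""
    return sum(answers)

def rarity_weighted_fitness(answers: List[bool],
                            fraction_correct: List[float]) -> float:
    """Illustrative alternative: a correct answer is worth more when few
    other solvers got that problem right.  `fraction_correct[i]` is the
    share of the population answering problem i correctly (a made-up
    statistic for this sketch)."""
    return sum(1.0 / max(p, 1e-9)
               for right, p in zip(answers, fraction_correct) if right)

# Four problems; almost everyone solves the first three, almost nobody the last.
pop_fraction_correct = [0.95, 0.90, 0.85, 0.01]

mutant  = [False, False, True, True]   # misses two easy ones, nails the hard one
average = [True,  True,  True, False]  # solves only the easy ones

print(boolean_fitness(mutant), boolean_fitness(average))   # 2 vs 3: mutant loses
print(round(rarity_weighted_fitness(mutant, pop_fraction_correct), 1),
      round(rarity_weighted_fitness(average, pop_fraction_correct), 1))
# ~101.2 vs ~3.3: the one rare correct answer dominates
```

The exact weighting doesn’t matter; the point is that the Boolean score ranks the mutant below the mediocre solver, while any population-relative reward favors it heavily.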

Third, in a typical GA there’s no “test flight” period during which a new mutation’s value is rigorously tested before the entire population is exposed via crossbreeding to the potential benefit or harm of that mutation.

But here’s the awesome part.

I did not observe this low-mutation-rate problem in my own program, or at least not to the extent that others have. This was to some extent accidental, but I think I understand why I didn’t encounter it.

First, I am seeking a solution with complexity, not just constants to plug into a solution of known complexity. So I didn’t have the usual mismatch between purpose and structure, where a hill-climbing strategy with high mutation rates would have served better.

Second, I’m using the “Delphic Casino” for fitness selection. This means that trials proceed like parimutuel betting: all the solvers bet some fraction of their wealth on the correctness of their responses, and then the combined pot gets split up among those solvers that got the correct answer for that trial. This makes the reward adaptive. If exactly one solver is correct on some trial, it gets all the chips bet on that trial, which is an enormous reward in a large population. If everybody is correct, they get no reward – they just get their pokerchips back. Thus a mutant capable of solving a new problem is very heavily favored: with an enormous reward it can reproduce every time there’s a reproduction check, while a faithful-copy offspring whose abilities have become mediocre is only barely favored, merely for being better than the unsuccessful mutants. The solvers are rewarded for providing information in Shannon’s sense – i.e., responses more correlated with correctness than what’s otherwise available – rather than information in the Boolean sense, meaning responses correlated with correctness, period.
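
Here’s a minimal sketch of that payout rule (not my actual implementation – the bet fraction, the proportional split among winners, and the handling of the nobody-is-right case are all assumptions made for illustration):

```python
from typing import Dict

def run_trial(wealth: Dict[str, float],
              correct: Dict[str, bool],
              bet_fraction: float = 0.1) -> None:
    """One parimutuel trial: every solver stakes a fraction of its wealth;
    the combined pot is split among the correct solvers in proportion to
    their stakes.  If nobody is correct, stakes are simply returned
    (an assumption; the post doesn't cover that case)."""
    bets = {name: wealth[name] * bet_fraction for name in wealth}
    pot = sum(bets.values())
    winning_stake = sum(bets[name] for name in wealth if correct[name])
    if winning_stake == 0:
        return
    for name in wealth:
        wealth[name] -= bets[name]                            # everyone pays in
        if correct[name]:
            wealth[name] += pot * bets[name] / winning_stake  # winners split the pot

wealth = {"mutant": 100.0, "solver_a": 100.0, "solver_b": 100.0}

# The mutant alone gets this problem right: it collects the whole pot.
run_trial(wealth, {"mutant": True, "solver_a": False, "solver_b": False})
print(wealth)   # {'mutant': 120.0, 'solver_a': 90.0, 'solver_b': 90.0}

# Everybody right: each winner's share exactly equals its stake, so no net reward.
run_trial(wealth, {name: True for name in wealth})
print(wealth)   # unchanged
```

The everybody-correct case works out to “you just get your pokerchips back” automatically, because when every stake is a winning stake, each solver’s share of the pot equals its own bet.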

Third, selection pressure in a Delphic Casino, while “softer” than in most GAs in the sense that any solver with positive value does occasionally accumulate enough pokerchips to reproduce, is much harder in terms of ruthlessly weeding out maladaptive mutations, because the gain or loss of Delphic wealth is cumulative. A solver whose answers are less correlated with correctness than the average solver’s posts a consistent long-term loss, which means that net losers never accumulate enough wealth to reproduce. The effect is to protect information from destruction by deleterious mutations far more efficiently than typical GA implementations do, so selection can keep up with relatively high mutation rates.
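
To see the cumulative effect, here’s a small follow-on simulation (again a sketch with made-up accuracies and bet fraction, not my program): one solver whose answers are slightly less correlated with correctness than its peers’ is run through thousands of parimutuel trials, and its wealth bleeds away while the others hold or gain.

```python
import random
from typing import Dict

def parimutuel_trial(wealth: Dict[str, float],
                     correct: Dict[str, bool],
                     bet_fraction: float = 0.1) -> None:
    """Same assumed payout rule as above: stakes go into a pot, and the
    correct solvers split the pot in proportion to their stakes."""
    bets = {name: wealth[name] * bet_fraction for name in wealth}
    pot = sum(bets.values())
    winning_stake = sum(bets[name] for name in wealth if correct[name])
    if winning_stake == 0:
        return
    for name in wealth:
        wealth[name] -= bets[name]
        if correct[name]:
            wealth[name] += pot * bets[name] / winning_stake

# Made-up accuracies: one solver is a bit worse than its three peers.
accuracy = {"weak_mutant": 0.55, "solver_a": 0.65, "solver_b": 0.65, "solver_c": 0.65}
wealth = {name: 500.0 for name in accuracy}

for _ in range(5000):
    correct = {name: random.random() < accuracy[name] for name in accuracy}
    parimutuel_trial(wealth, correct)

print({name: round(w) for name, w in wealth.items()})
# Typical run: the weak mutant ends up with a small fraction of its starting
# stake, nowhere near any reproduction threshold, while the others gain.
```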

Solvers can only reproduce at the rate they can accumulate pokerchips. In the Delphic Casino (as I’ve implemented it) reproduction costs one-quarter of your pokerchips (though other fractions are possible). So if you have 1000 pokerchips, you reproduce, and you’re left with 750. You don’t get to reproduce again until your cumulative betting turns that 750 back into 1000. Earning 33% on your money takes a number of turns that depends on how much more valuable your answers are than average. Your offspring, however, gets a quarter of your wealth and a quarter of the other parent’s – perhaps 500 pokerchips on average. That means the offspring has to make 100% on its money before it gets its first chance to reproduce, and unless its answers have positive net value, that just won’t happen. Effectively it has to “grow up” for about two reproductive cycles, proving that its mutation isn’t a horrible failure, before it reaches “child-bearing age.” If I set the fraction lower (so the child gets 10% of each parent’s wealth rather than 25%), the growing-up period would be longer and the population even more certainly protected from the effects of a losing mutation. But at the same time, results would be delayed by as many growing-up periods as it takes for the new genome to fill its niche.
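
A back-of-the-envelope way to see those growing-up periods (just a toy expected-value model with an assumed bet fraction and per-trial edge, not my actual bookkeeping):

```python
REPRODUCTION_THRESHOLD = 1000.0   # pokerchips needed before the next reproduction

def trials_to_reproduce(start: float, bet_fraction: float, edge: float,
                        max_trials: int = 100_000):
    """Expected-value model: each trial the solver stakes `bet_fraction` of its
    wealth and gets back `edge` times its stake on average (edge > 1 means its
    answers are more valuable than average, edge < 1 means less valuable)."""
    wealth = start
    for trial in range(1, max_trials + 1):
        wealth += wealth * bet_fraction * (edge - 1.0)
        if wealth >= REPRODUCTION_THRESHOLD:
            return trial
    return None   # never gets there

print(trials_to_reproduce(750.0, 0.1, 1.05))   # parent after paying: needs ~33% more
print(trials_to_reproduce(500.0, 0.1, 1.05))   # fresh offspring: needs 100% more
print(trials_to_reproduce(500.0, 0.1, 0.98))   # net-losing mutant: None – it shrinks instead
```

With these assumed numbers the fresh offspring needs a bit more than twice as many trials as its parent to reach the threshold – the “about two reproductive cycles” of growing up.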

Every solver capable of earning a steady return, no matter how small, by producing responses more valuable than the responses produced by others, occasionally gets to reproduce. But that happens with frequency proportional to the relative value of its responses; even the very best solver won’t reproduce every trial. On the other hand, any solver whose responses are consistently less valuable than the responses produced by others, especially when starting from behind as new offspring do, will inexorably, slowly go broke without getting a single chance to reproduce.

This means that, in the Delphic Casino, a deleterious mutation that renders a solver strictly less valuable than average will almost never destroy information by contaminating the gene pool. Complexity can therefore be preserved despite a fairly high mutation rate.
