The Delphi Effect And Prediction Markets

There is something called the Delphi effect that can most easily be seen and understood at parimutuel racetracks. Of course that’s not the only place it happens; the same laws of mathematics govern far more interesting interactions like financial futures markets.

Here is what the Delphi effect says about predictions. When wealth is continually redistributed by iterated wagering, in which the winners split the pooled money bet by the losers in proportion to their winning bets, the ratio of amounts wagered for and against something is a better prediction of its probability than any individual bettor can make.
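
To make the payout rule concrete, here’s a minimal sketch of parimutuel settlement in Python. The function name and the no-winner refund convention are my own illustrative choices: winners split the entire pool, their own stakes plus the losers’ money, in proportion to their winning stakes, and the fraction of the pool riding on an outcome is its implied probability.

```python
def parimutuel_payouts(bets, winner):
    """Redistribute a pool of wagers parimutuel-style.

    bets: dict mapping outcome -> {bettor: stake}
    winner: the outcome that actually occurred
    Returns a dict mapping bettor -> amount paid out.  Winners split the
    entire pool (their own stakes plus the losers' money) in proportion
    to their winning stakes.
    """
    pool = sum(stake for outcome in bets.values() for stake in outcome.values())
    winning = bets.get(winner, {})
    winning_total = sum(winning.values())
    if winning_total == 0:
        # No one backed the winner; by convention here, refund everyone.
        return {b: s for outcome in bets.values() for b, s in outcome.items()}
    return {bettor: pool * stake / winning_total
            for bettor, stake in winning.items()}


# The implied probability of an outcome is the fraction of the pool bet on it.
# A horse at 5-to-2 against has 2/7 of the pool on it: implied p of about 0.29.
bets = {"horse_A": {"alice": 2.0}, "field": {"bob": 5.0}}
print(parimutuel_payouts(bets, "horse_A"))   # {'alice': 7.0} -- the whole pool
```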

At the racetrack, that means the horse running at five-to-two odds (odds against, as tracks quote them) literally has a probability of winning that’s very close to 2/7, the fraction of the pool riding on it. Maybe ninety percent of the crowd thinks the horse ought to be running at five-to-three, but if one guy who has a whole lot of money from previous betting thinks otherwise, the enormous amounts that one guy can bet will shift the odds. And this is only appropriate, because if he has more money than everyone else, that means that in previous bets his judgment was better than everyone else’s.

If you want to see a trivial demonstration of it, pretend you have a bent coin that comes up heads 55% of the time and tails 45% of the time. Say that Alice always bets two percent of her wealth on heads, and Bob always bets two percent of his wealth on tails. Flip that coin over and over and over, with Bob getting the money Alice bets whenever he wins, and Alice getting the money Bob bets whenever she wins. Start Alice and Bob with whatever amounts of money you like, and after a few hundred bets you’ll observe that Alice has about 55% of the money and Bob has about 45%. Bob bets 2% of his 45% while Alice bets 2% of her 55%, so Bob wins a bit less often but collects more of Alice’s money when he does, and Alice wins a bit more often but collects less from Bob when she does.
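
If you’d rather check that than take my word for it, here is a quick simulation sketch. The 55/45 coin and the 2% bet size come from the paragraph above; the starting amounts and everything else are just illustrative.

```python
import random

def simulate(p_heads=0.55, bet_fraction=0.02, flips=5000, seed=1):
    """Alice always bets a fixed fraction of her wealth on heads, Bob the
    same fraction of his on tails; the winner takes the loser's stake.
    Returns Alice's final share of the total wealth."""
    random.seed(seed)
    alice, bob = 100.0, 900.0          # the starting split doesn't matter
    for _ in range(flips):
        a_stake = bet_fraction * alice
        b_stake = bet_fraction * bob
        if random.random() < p_heads:  # heads: Alice wins Bob's stake
            alice += b_stake
            bob -= b_stake
        else:                          # tails: Bob wins Alice's stake
            alice -= a_stake
            bob += a_stake
    return alice / (alice + bob)

print(simulate())   # hovers around 0.55 regardless of the starting split
```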

Bob and Alice have made a little prediction market here. Neither of them makes a good estimate of the coin’s probability of coming up heads or tails; in the example they don’t even try. But the money has shuffled around between them as a result of their betting, so that the ratio of the amounts they bet has become a very accurate estimate of the true odds, even though neither of them had a clue what those odds were. They probably think they’re flipping a fair coin.

Imagine that the situation is more complicated than a coin toss. Maybe the outcome is determined by the interaction of a dozen or more variables, and instead of just Alice and Bob you now have a bunch of different agents from Alice to Zebulon, all making bets according to some random guess about how the variables affect the outcome. The same thing will happen. Money will shuffle around with the bets until the pooled bets become the best available estimate of the probability. The closer any agent’s betting strategy is to being “right” about the outcome, the more precise that estimate gets. This is called a Prediction Market.
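
Here’s one way to sketch that Alice-to-Zebulon market: 26 agents with fixed random guesses. The 2% stake and the parimutuel settlement carry over from the earlier examples; the number of rounds and everything else are illustrative.

```python
import random

def market_estimate(wealth, belief):
    """Wealth-weighted average belief: the fraction of pooled stakes on 'heads'."""
    return sum(w * b for w, b in zip(wealth, belief)) / sum(wealth)

def run_market(true_p=0.55, n_agents=26, bet_fraction=0.02, rounds=50000, seed=7):
    random.seed(seed)
    belief = [random.random() for _ in range(n_agents)]   # fixed random guesses
    wealth = [1.0] * n_agents
    for _ in range(rounds):
        heads = random.random() < true_p
        stakes_heads = [bet_fraction * w * b for w, b in zip(wealth, belief)]
        stakes_tails = [bet_fraction * w * (1 - b) for w, b in zip(wealth, belief)]
        pool = sum(stakes_heads) + sum(stakes_tails)
        winning = stakes_heads if heads else stakes_tails
        winning_total = sum(winning)
        # Parimutuel settlement: winners split the whole pool pro rata.
        for i in range(n_agents):
            wealth[i] = (1 - bet_fraction) * wealth[i] + pool * winning[i] / winning_total
    return market_estimate(wealth, belief)

print(run_market())   # drifts toward ~0.55 as money flows to the better guessers
```

Notice that the market’s estimate is just the wealth-weighted average of the agents’ guesses, which is why shuffling money between them is all it takes to improve it.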

A prediction market by itself is a valuable tool but not very strong as an AI system. A prediction market with a bunch of randomly generated agents can make an estimate of the odds of something happening, and that estimate will be correct in a very narrow sense: when it says the odds are (say) 55%, the event will happen 55% of the time. But that’s accuracy, not precision. In 100 such trials there are about 55 cases that turn out true and 45 that don’t, and the market can’t tell you which are which. If the odds of the individual cases can be determined much more precisely and one of your agents happens to make exactly that distinction, then your prediction market will become perfect (predicting 100% and 0% likelihood as appropriate) as that perfect agent accumulates all the money. If the prediction market doesn’t contain any such agent, then it will never reveal that kind of perfect knowledge. It will give you correct estimates, but those estimates will remain imprecise – and never improve – for as long as the population of agents remains unchanged.
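
To see the accuracy-versus-precision point concretely, here’s an illustrative sketch. The conditional probabilities (90% when a visible signal is on, 20% when it’s off, for a 55% base rate) are my own choices, picked only to make the point: a signal-blind agent can never say anything sharper than 55%, while a single agent that reads the signal drags the market’s per-case estimate toward the sharp odds as it accumulates the money.

```python
import random

def run(rounds=30000, bet_fraction=0.02, seed=3):
    random.seed(seed)
    # True odds given a visible signal; the overall base rate works out to 55%.
    p_win = {1: 0.9, 0: 0.2}
    def blind(signal):    return 0.55        # calibrated overall, never sharper
    def informed(signal): return p_win[signal]
    agents = [blind, informed]
    wealth = [1.0, 0.05]                     # the informed agent starts nearly broke

    def estimate(signal):
        # Market estimate = wealth-weighted fraction of stakes bet on "win".
        beliefs = [a(signal) for a in agents]
        return sum(w * b for w, b in zip(wealth, beliefs)) / sum(wealth)

    print("before:", round(estimate(1), 2), round(estimate(0), 2))
    for _ in range(rounds):
        signal = random.randint(0, 1)
        win = random.random() < p_win[signal]
        beliefs = [a(signal) for a in agents]
        stake_win  = [bet_fraction * w * b       for w, b in zip(wealth, beliefs)]
        stake_lose = [bet_fraction * w * (1 - b) for w, b in zip(wealth, beliefs)]
        pool = sum(stake_win) + sum(stake_lose)
        winning = stake_win if win else stake_lose
        total = sum(winning)
        # Parimutuel settlement again: winners split the whole pool pro rata.
        wealth = [(1 - bet_fraction) * w + pool * s / total
                  for w, s in zip(wealth, winning)]
    # The blind agent alone stays at 55% forever; as the informed agent's
    # wealth grows, the per-case estimates sharpen toward 90% and 20%.
    print("after: ", round(estimate(1), 2), round(estimate(0), 2))

run()
```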

That brings us to the reason why a prediction market is a valuable tool: the population of agents needn’t remain unchanged. In a prediction market, an agent’s wealth fluctuates right around that agent’s value for purposes of making predictions, so any agent whose fortune falls to a negligible amount can be considered valueless and replaced. This is cool, because you can simply eliminate agents that go broke and generate new ones by genetic algorithm, gradient-descent training, tabu search, random generation, or whatever else, then toss each newcomer into the population to sink or swim. You don’t even need to know in advance whether a new agent is based on a good idea; just give it a minimal amount of starting wealth and see whether it sinks or swims. If it doesn’t have value, it’ll never accumulate enough wealth to mess up the accuracy of your predictions before it gets replaced in its turn. If it does have value, your predictions remain accurate but get more precise.
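
The bookkeeping for that sink-or-swim policy is tiny. Here’s one possible shape for it; the threshold, the seed wealth, and the make_random_agent placeholder stand in for whatever generation or training procedure you actually use.

```python
import random

BROKE_THRESHOLD = 0.001   # below this an agent is treated as worthless
SEED_WEALTH = 0.05        # newcomers get a minimal stake to sink or swim with

def make_random_agent():
    """Placeholder: an agent here is just a fixed random belief.  In practice
    it could be a GA offspring, a freshly trained network, a tabu-search
    candidate, or anything else you want to audition."""
    b = random.random()
    return lambda signal: b

def cull_and_replace(agents, wealth):
    """Swap out agents that have gone (nearly) broke.  Seed wealth is simply
    minted here; a closed market could instead tax the incumbents for it."""
    for i, w in enumerate(wealth):
        if w < BROKE_THRESHOLD:
            agents[i] = make_random_agent()
            wealth[i] = SEED_WEALTH
    return agents, wealth
```

Run this between betting rounds and the population keeps churning without the market ever pausing.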

A prediction market fundamentally doesn’t even care what basic underlying algorithm the agents use. Some agents can be rule-based expert systems while others are neural networks or whatever else. New agents can be made by genetic-algorithm combination/mutation operators, by logical operations on Bayesian estimators, or by neural network regression training. Different agents can be neural networks of different topology or architecture, and they can be swapped in and out without interrupting the function of the prediction market.
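
In code terms, all the market needs from an agent is one method. Here’s a minimal interface sketch (the names are mine) with two deliberately different kinds of agent behind it, a hand-written rule set and a stand-in for a learned model.

```python
import math
import random
from typing import Protocol

class Agent(Protocol):
    def bet(self, features: dict) -> float:
        """Return a believed probability for the event, given some features.
        The market turns this into stakes; it never looks inside the agent."""
        ...

class RuleBasedAgent:
    """A tiny hand-written 'expert system': a couple of if-then rules."""
    def bet(self, features: dict) -> float:
        return 0.8 if features.get("signal") == 1 else 0.3

class TinyNetAgent:
    """A one-neuron stand-in for any learned model; its topology, weights,
    and training method are entirely its own business."""
    def __init__(self) -> None:
        self.w = random.uniform(-1.0, 1.0)
        self.b = random.uniform(-1.0, 1.0)
    def bet(self, features: dict) -> float:
        z = self.w * features.get("signal", 0) + self.b
        return 1.0 / (1.0 + math.exp(-z))   # squash to a probability

# The market holds them side by side and can swap either out at any time.
population: list[Agent] = [RuleBasedAgent(), TinyNetAgent()]
print([round(agent.bet({"signal": 1}), 2) for agent in population])
```

Because the contract is just “give me a probability,” the market never needs to know or care how that probability was produced.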

If you think that this may have something to do with my idea of evolving the topology of recurrent neural networks along with the connection weights, you’re right.