by Roger E. A. Farmer

Since I’m working on non-ergodic behaviour in economic models, I was intrigued by recent claims from Ole Peters. In a series of articles, blogs and notably in a TED talk (here), Ole has made some rather strong assertions about the way economists model choice under uncertainty. According to Peters, economists do not understand the concept of ergodicity. As a consequence, we have apparently made some rather bad blunders.

What, you may ask, is ergodicity and why does it matter? Imagine you are repeatedly confronted with an uncertain world. A good example is the one that Ole gives us. You start with $100 and a casino offers you a gamble in which the house flips a fair coin. If it comes up heads you win $50. If it comes up tails, you lose $40. The first question you might reasonably ask is: should you trust the casino? Is the coin really fair? Does it really have a 50% chance of heads and a 50% chance of tails, or is the house shading the odds? To answer that question, you consult a friend who has a Ph.D. in statistics, and she advises you to observe someone else playing the game for a while.

If each flip takes 30 seconds then after watching for roughly eight and a half hours you will have acquired a list of 1,000 observations and, if the coin is fair, roughly 500 of the times you should have seen a head and the other 500 observations should be tails. I say roughly because the number of heads in 1,000 flips is itself a random variable, so you might, for example, see 503 heads and 497 tails. But you are very unlikely to see 200 heads and 800 tails unless the casino has been cheating. The exact statement is that if the process is ergodic then the proportion of heads in *n* tosses will converge to *p* as *n* grows large, where *p* = 0.5 for a fair coin.
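The statistician's experiment is easy to sketch in a few lines of Python (the function name and the seed are mine, for illustration):

```python
import random

def estimate_p(n_flips, p=0.5, seed=1):
    """Estimate a coin's heads probability from n_flips observed flips."""
    rng = random.Random(seed)
    heads = sum(rng.random() < p for _ in range(n_flips))
    return heads / n_flips

# With a fair coin, 1,000 flips lands close to 0.5 but rarely exactly on it.
print(estimate_p(1000))          # roughly 0.5
print(estimate_p(1000, p=0.8))   # roughly 0.8: a weighted coin shows up in the data
```

Because the flips are independent draws from the same distribution, the sample frequency is a consistent estimate of *p*.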

Notice I slipped the word ergodic into this definition. That’s a very important idea in problems like the one I just described, where we are trying to estimate an unknown quantity. In this case, we are estimating the probability, *p*, that a single toss of a coin will come up heads. If the coin is fair, *p* is equal to 0.5. If the casino is cheating by weighting the coin, *p* might be different from 0.5. That’s what we hope to find out by observing repeated flips of the same coin.

So far so good. But how do we know the casino doesn’t cheat occasionally? Suppose that the house has a whole boatload of different coins. Some of them are fair and some of them are not. Now your statistician friend advises you that the experiment she proposed, counting the frequency of heads in a series of flips, will tell you nothing about the next flip. Averaging a sequence of flips only works if each flip has the same value of *p*. For the strategy of using sample averages to make sense, the process must be ergodic.
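A quick sketch of the cheating-casino case, assuming for illustration that the house secretly draws a coin with *p* = 0.2 or *p* = 0.8 before every flip:

```python
import random

rng = random.Random(42)

def flip_with_random_coin():
    """The house secretly draws a new coin (p = 0.2 or p = 0.8) before each flip."""
    p = rng.choice([0.2, 0.8])
    return rng.random() < p

flips = [flip_with_random_coin() for _ in range(10_000)]
freq = sum(flips) / len(flips)
print(freq)  # close to 0.5, yet no coin in play has p = 0.5
```

The long-run frequency converges to the average of the two coins, 0.5, even though neither coin is fair, so the sample average tells you nothing about the *p* you will face on the next flip.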

Now that we know a little bit about ergodicity, let’s look at Ole’s experiment. You watch Ole’s TED talk video and explain it to your friend. The one with the Ph.D. She listens carefully but seems a bit puzzled. The first thing she points out is that Ole is certainly not assuming non-ergodicity of the coin flip, since he explicitly assumes that the coin that is flipped is fair. Peters is not asking about the distribution of wins or losses in repetitions of a game: no, he is instead asking about the distribution of your wealth if you play the game *n* times. Let’s call this random variable *W*(*n*). The assumption that you start with $100 means that *W*(0) = 100. What Peters studies are sequences

*W*(0), *W*(1), …, *W*(*n*),

where you play this game *n* times *and you reinvest all of your wealth at every stage*.

You explain this to your friend and she now understands a little better. The random variable *W*(i) is not ergodic. In fact, for *i* ≠ *j*, *W*(i) and *W*(j) do not even have the same probability distribution. If you play the game once you will have $150 with probability 0.5, or $60 with probability 0.5. There are only two possible outcomes. If, on the other hand, you play the game twice by reinvesting your wealth after stage 1 you will have $36 with probability 0.25, $90 with probability 0.5 and $225 with probability 0.25. If you win twice, you win a lot, but the most likely outcome (statisticians call this the mode) is that you will be $10 poorer if you play the game twice.
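Because each play multiplies wealth by 1.5 (a win) or 0.6 (a loss), the exact distribution of *W*(n) is a reweighted binomial and can be checked directly (a minimal sketch; the helper name is mine):

```python
from math import comb

def wealth_distribution(n, w0=100, up=1.5, down=0.6, p=0.5):
    """Exact distribution of W(n) when all wealth is reinvested each round.
    After k wins in n plays, wealth is w0 * up**k * down**(n - k)."""
    return {round(w0 * up**k * down**(n - k), 2): comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(n + 1)}

print(wealth_distribution(1))  # {60.0: 0.5, 150.0: 0.5}
print(wealth_distribution(2))  # {36.0: 0.25, 90.0: 0.5, 225.0: 0.25}
```

The two-play case reproduces the $36 / $90 / $225 outcomes in the text, with $90 (a $10 loss) as the mode.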

In Figure 1 I have graphed the probability distribution of the logarithm of your wealth in Peters’ game after playing it *n* times. It is clear from this picture that a person whose utility is equal to the logarithm of his wealth might think twice about participating in a game where he is required to invest all of his winnings on every play. Although, using this strategy, there is a small probability of making a spectacular gain, the probability mass of the gambler’s utility is shifting to the left the longer he plays the game.

In Figure 2, I’ve plotted the number of times you play the game on the horizontal axis against the logarithm of your expected wealth after playing the game *n* times on the vertical axis. This is the population analogue of what Ole calls the ‘ensemble average’, and it is clear from this picture that the log of the ensemble average is growing over time.
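The shape of Figure 2 follows from a one-line calculation: the expected one-period multiplier is 0.5 × 1.5 + 0.5 × 0.6 = 1.05, so log *E*[*W*(n)] = log(100) + *n* log(1.05), a straight line:

```python
from math import log

w0, up, down = 100, 1.5, 0.6
mean_growth = 0.5 * up + 0.5 * down   # 1.05: expected one-period multiplier

for n in (0, 10, 100, 1000):
    print(n, log(w0 * mean_growth**n))
# log E[W(n)] = log(100) + n * log(1.05): a line with slope about 0.0488
```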

How can it be that a gambler is almost always losing, but on average he is winning? That’s because the largest possible gain grows so fast that it outweighs all of the more likely losses. But people don’t just care about the average gain from an investment. They also care about the risk. It’s for that reason that economists posit the existence of a utility function. We assume that utility has the property that the average utility of a change in wealth is less than the utility of the average change. Functions that have that property are called ‘concave’, and the logarithm of wealth is the most commonly used example of a function with this property.
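The one-play version of the game illustrates concavity directly: the average utility of the two outcomes is below the utility of the average outcome.

```python
from math import log

# One play from $100: wealth is $150 or $60, each with probability 0.5.
avg_utility = 0.5 * log(150) + 0.5 * log(60)   # expected utility of wealth
utility_of_avg = log(0.5 * 150 + 0.5 * 60)     # utility of expected wealth ($105)
print(avg_utility, utility_of_avg)  # 4.5525 < 4.6539: log is concave
```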

As long as people care about the utility of wealth, rather than wealth itself, they will try to avoid taking risky bets. And the sequence of bets that Peters offers us is one that would not be taken by a person with logarithmic utility. If you have logarithmic utility, after playing Peters’ game n times, using the strategy of reinvesting your wealth every period, the average of the logarithm of your wealth will be smaller than when you started and it will keep falling, the longer you keep playing the game. But even though the average utility of your wealth will fall over time, the logarithm of your average winnings will keep getting bigger.
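The claim that average log-wealth keeps falling can be checked from the per-play drift of log-wealth, which is negative for this gamble:

```python
from math import log

drift = 0.5 * log(1.5) + 0.5 * log(0.6)   # expected change in log-wealth per play
print(drift)  # about -0.0527: expected log-utility falls every round

expected_log_wealth = lambda n: log(100) + n * drift
print(expected_log_wealth(0), expected_log_wealth(100))
```

So *E*[log *W*(n)] falls linearly at about 0.053 per play, even while log *E*[*W*(n)] rises at about 0.049 per play.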

Ole Peters plays the game for a thousand periods to generate a sequence of random variables. He repeats this exercise many times on a computer, and he averages each of the many sequences to arrive at what he calls an ensemble average. He plots this average on a logarithmic scale and shows that it is increasing linearly over time. By plotting the average of a large number of sequences on a logarithmic scale Ole is showing you the sample analogue of Figure 2. He is showing you the logarithm of the averages of sequences of binomial random variables.
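A small Monte Carlo in the spirit of Ole's exercise (the horizon, number of players and seed here are arbitrary choices of mine) shows both effects at once: the log of the ensemble average grows, while any single long path shrinks.

```python
import random
from math import log

rng = random.Random(0)

def play(n, w0=100.0):
    """Wealth after n rounds of reinvesting everything in Peters' coin flip."""
    w = w0
    for _ in range(n):
        w *= 1.5 if rng.random() < 0.5 else 0.6
    return w

# Ensemble average over many independent players: grows like 100 * 1.05**n.
n, players = 20, 100_000
ensemble_avg = sum(play(n) for _ in range(players)) / players
print(log(ensemble_avg), log(100) + n * log(1.05))

# A single long path: its time-average growth rate is negative.
w = play(10_000)
print((log(w) - log(100)) / 10_000)  # close to 0.5*log(1.5) + 0.5*log(0.6) < 0
```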

Next, he takes a very long single sequence of random variables and he plots that sequence on a logarithmic scale and shows that it is falling over time. What Ole is doing in that second picture is showing you a sequence of random variables drawn from the probability distributions I have drawn in Figure 1. The fact that one of his pictures is increasing over time and the other is falling has nothing to do with ergodicity. It is a consequence of the fact that the expectation of the log of a random variable is always less than the log of the expectation, a result that is known in the statistics literature as Jensen’s inequality.

How should you behave when faced with a sequence of gambles in which you win $50 with probability 0.5 and you lose $40 with probability 0.5? That question was answered in 1956 by John Kelly, a researcher at Bell Labs. Kelly showed that the gambler should reinvest a fixed fraction of his wealth at every stage of the game and that this strategy is equivalent to maximizing the expected geometric growth rate of wealth. When the odds are as strongly in his favour as they are in Peters’ example, a gambler who follows the Kelly criterion will become spectacularly rich in a relatively short period of time.
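For this particular gamble the Kelly calculation is short: staking a fraction *f* of wealth multiplies it by 1 + 0.5*f* on a win and 1 − 0.4*f* on a loss, so we choose *f* to maximize the expected log-growth (a sketch; the grid search is mine):

```python
from math import log

def growth_rate(f):
    """Expected log-growth per play when a fraction f of wealth is staked."""
    return 0.5 * log(1 + 0.5 * f) + 0.5 * log(1 - 0.4 * f)

# Grid search over the betting fraction: the optimum is f = 0.25,
# where the expected log-growth is positive, about 0.006 per play.
best = max((f / 1000 for f in range(0, 1001)), key=growth_rate)
print(best, growth_rate(best))
print(growth_rate(1.0))  # staking everything gives negative growth, about -0.0527
```

Staking everything (*f* = 1) is exactly Peters' reinvest-it-all strategy and has negative expected log-growth; the Kelly fraction of a quarter of wealth turns the same sequence of coin flips into a growing fortune.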

Ole does us all a favour by drawing attention to the importance of ergodicity in problems of uncertainty. But I do not agree with Ole’s conclusion, that economists should jettison the idea that people maximize expected utility.[1] In my own research, J.P. Bouchaud and I are exploiting the properties of non-ergodic random variables to understand how people behave when the future cannot be easily predicted using averages of past behaviour. I do not think that our research agenda needs to jettison more than two hundred years of progress in decision science in order to achieve that goal.

[1] Modern finance theory uses a version of expected utility that originates with the work of Kreps and Porteus. In this version, the utility functions in each period obey the axioms of expected utility. These ‘period utility functions’ are knitted together through time by a sequence of non-linear aggregators. The people who hold these preferences are not expected utility maximizers over consumption sequences.

I am grateful to Jean-Philippe Bouchaud, Doyne Farmer, Robert MacKay, Ian Melbourne and Ole Peters for comments on an earlier version of this blog. Any remaining errors are mine alone.
