by Nick Chater, Adam Sanborn, Zhu Jianqiao and Jake Spicer
The behaviour of financial markets is, of course, notoriously capricious. Who would have expected, for example, that amid the health and economic crisis induced by Covid-19, the US stock market would storm to record heights, albeit after a substantial dip? Or that the price of a single stock, Tesla, would increase by a factor of eight during 2020, without any discernible new technological innovation? Of course, stories can be reconstructed in retrospect; but had things turned out differently, equally plausible stories could have been invented to explain the opposite outcome. Indeed, if we knew whether the market, or individual stocks, were heading up or down, making a steady profit from investing would be easy, and it clearly isn’t!
Yet amid all the uncertainty, financial markets show surprisingly regular patterns at a statistical level, although these are patterns that can’t easily be exploited to make money. The market has, to a fair degree, the characteristics of a “random walk,” in which each step is independent of the last. But conventional random walk models predict that the jumps in the market should roughly follow a so-called “normal” distribution. A normal distribution is what you would expect if the size of the jump over a day, a month, or a year is the sum of a large number of completely independent small jumps, most of which will cancel each other out. Yet real markets show a rather different pattern in which extreme changes, both up and down, are more frequent than the standard model would predict: the “tails” of the distribution are fatter than would be expected, with crucial implications for the probability of large booms and crashes. Another “stylised fact” of real markets that departs from simple random walk models is the tendency for high market volatility, where prices jump up and down rapidly, to be bunched into clusters, interspersed with periods of relative calm.
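The “fat tails” contrast can be illustrated with a short simulation. This is a hypothetical sketch in Python, not tied to any dataset from our studies: it compares the jumps of a conventional (normal) random walk with a fat-tailed alternative, here a Student-t distribution chosen purely for illustration. Excess kurtosis, which is zero for a normal distribution, is one standard way to measure how fat the tails are.

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Excess kurtosis: ~0 for normally distributed data, positive for fat tails."""
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

# Jumps of a conventional random walk: independent normal draws.
normal_jumps = rng.normal(0.0, 1.0, 100_000)

# A fat-tailed alternative (Student-t, 5 degrees of freedom, illustrative only):
# extreme jumps are far more common than the normal model predicts.
fat_jumps = rng.standard_t(df=5, size=100_000)

print(excess_kurtosis(normal_jumps))  # close to 0
print(excess_kurtosis(fat_jumps))     # clearly positive
```

The point of the comparison is that both series look similar at a glance, but the fat-tailed one produces occasional jumps far larger than a normal model would ever generate, which is exactly the pattern real market returns show.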
Where do these, and other, statistical patterns in markets come from? The natural assumption is that they arise, in some way, from the hugely complex interactions between a multitude of buyers and sellers, drawing on information from a wide range of sources, including each other and the movements of the market itself. If this is right, then creating an artificial environment that strips out some of this complexity should make some of the stylised facts observed in a market disappear.
Our project aims to explore this question by creating a spectrum of tasks in which people produce sequences of data, and seeing which stylised facts disappear when greater simplifications are introduced. So, for example, what happens if we create an artificial market in which the price is determined by the price estimates of the handful of people in the experiment? So now there is no complex world feeding us news stories, and no vast network of buyers and sellers; just a small number of people independently attempting to guess what each other will predict on the next trial. It turns out, surprisingly, that this artificial market exhibits most of the stylised facts observed in real markets.
To take the next step, then, suppose we strip out the market elements of the task completely, and consider predictions from single individuals. In one recent experiment, for example, people had to guess the next value of a sequence which, as it happens, followed a perfectly conventional random walk (i.e., one not exhibiting any of our stylised facts). Surprisingly, again, most of the stylised facts of market behaviour arise in the sequence of guesses that people make. Even though people are being “fed” with a random walk, the human mind somehow imposes patterns that are not really in the data, including fat tails and volatility clustering.
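Volatility clustering can also be made concrete with a small sketch, again hypothetical and separate from the experiments described here. A standard diagnostic is the autocorrelation of absolute increments: for an i.i.d. random walk it is near zero, while for a series whose volatility persists from step to step it is clearly positive. The clustered series below is a textbook GARCH(1,1)-style simulation, with parameter values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def abs_autocorr(x, lag=1):
    """Autocorrelation of absolute increments at the given lag: near 0 for
    i.i.d. increments, positive when volatility comes in clusters."""
    a = np.abs(x)
    a = a - a.mean()
    return np.dot(a[:-lag], a[lag:]) / np.dot(a, a)

n = 50_000
# Plain random walk increments: volatility is constant, so no clustering.
iid = rng.normal(0.0, 1.0, n)

# A minimal GARCH(1,1)-style series: today's volatility depends on
# yesterday's, so large moves tend to follow large moves.
eps = rng.normal(0.0, 1.0, n)
sigma2 = np.empty(n)
clustered = np.empty(n)
sigma2[0] = 1.0
clustered[0] = eps[0]
for t in range(1, n):
    sigma2[t] = 0.1 + 0.1 * clustered[t - 1] ** 2 + 0.8 * sigma2[t - 1]
    clustered[t] = np.sqrt(sigma2[t]) * eps[t]

print(abs_autocorr(iid))        # near 0
print(abs_autocorr(clustered))  # clearly positive
```

Applying a diagnostic of this kind to a sequence of human guesses, rather than to market prices, is what allows clustering to be detected even when the input sequence itself has none.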
What if we go a step further and throw away the idea that people are making predictions at all? Suppose we replace our prediction task with an even simpler task: not predicting what comes next, but just reproducing the last item as exactly as possible. If the “targets” are presented as numbers, then the task will be easy, of course; people would just have to remember and reproduce the number they just saw. But if the targets are presented as continuous magnitudes of some kind, which the person has to reproduce as best they can, then perfect accuracy will be impossible. We chose to present targets as time intervals, following the very same random walk as before; on each trial, people had to reproduce the time interval they had just encountered by holding down the space bar on their keyboard for a matching duration.
The “target intervals” that people have to reproduce follow a random walk, as in our prediction task; and the reproduced intervals also correspond, roughly, to a random walk: people follow the target intervals as they speed up or slow down, in small and somewhat independent increments. But when we look in people’s responses for the classic divergences from the random walk found in financial markets, including fat tails and volatility clustering, we find these strange phenomena still emerge.
So perhaps the full complexity of the market is not required to explain many, or even most, of the strange patterns that we observe in real markets: these patterns seem to arise from basic properties of the individual human brain. If this is right, then we might expect market behaviour to look different when humans are not major participants, for example in very high-frequency trading, over minuscule fractions of a second, where computer algorithms are dominant. Similarly, we might expect the stylised facts to be particularly salient in historical markets, predating any algorithmic trading. The next step for this research, beyond the current project, is to explore these and other possible implications of our work in detail.