
What can the Daily Mail teach us about the crisis of capitalism?


Tony Curzon Price, Gavin Hassall, Jeremy Davies and James Couper



The "crisis of capitalism" is now a term heard in the political mainstream. Andrew Tyrie, a Tory Peer and Chair of the Competition and Markets Authority, could say without blinking in a major policy speech: "I have set out what many have called a crisis of capitalism [...] And there is certainly a crisis of confidence in the institutions charged with harnessing the forces of capitalism for the public good."


In this blog post, we describe our attempt to measure aspects of that crisis of confidence and to track its changes over the last 20 years. A well-trodden approach to such measurement is to run surveys and ask questions about trust; Edelman, the market research firm, publishes a widely used one. We were seeking an alternative approach: one which measures more directly the sorts of emotions and reactions that many people associate with the crisis, and one more closely tied to the political salience that policy responses to the loss of confidence have achieved.


For this, we turned to the money and personal finance sections of the tabloids.



We were seeking to measure system-confidence by analysing the quantity, tone and subject-matter of this sort of story. Our background assumption is that as readers are exposed to more such stories, as their stridency increases, and as their subject-matter is increasingly about household names rather than lone rogue outfits, they will come to believe that the system is rotten and that, in Tyrie's words, "the institutions charged with harnessing the forces of capitalism for the public good" are not doing a good job of it.


What we did

We collected the entire online corpus of the Daily Mail money section since 1998: 240,000 articles. We used Google's text analytics tools to extract entities such as firms, places and subjects from the articles. We used Google's emotion detection tools to identify the valence and stridency of the emotions in each article. And we trained a classification system to identify "rip-off" stories. (This very substantial data-cleaning, structuring and annotation work was made painless thanks to SignalFish.io's tools.)
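As a rough illustration of the entity-annotation step, the sketch below uses the Google Cloud Natural Language Python client to pull organisation names out of one article. The article text and the way results are returned are illustrative assumptions, not the exact pipeline we ran.

```python
# A minimal sketch of the entity-extraction step, assuming the
# google-cloud-language client; the example article text is illustrative.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def extract_organisations(article_text: str) -> list[str]:
    """Return the organisation names Google's analyser finds in one article."""
    document = language_v1.Document(
        content=article_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_entities(request={"document": document})
    return [
        entity.name
        for entity in response.entities
        if entity.type_ == language_v1.Entity.Type.ORGANIZATION
    ]

print(extract_organisations("Barclays has been accused of overcharging customers."))
```

The "rip-off" classifier itself could be any standard supervised text classifier trained on hand-labelled examples; we do not describe a particular model here.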


What we found

The number of “rip-off” stories has grown in both relative and absolute terms. Before 2004, there were about 10 rip-off stories a week; this rose slowly to around 15 by the time of the GFC, then rapidly to around 30, peaking in 2010. It then fell back but has settled at a level above 20 rip-off stories per week.
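To give a sense of how a weekly series like this can be produced from the annotated corpus, here is a small pandas sketch; the file and column names are assumptions for illustration.

```python
# A minimal sketch, assuming a per-article table with a publication date and
# the classifier's binary rip-off flag (file and column names are illustrative).
import pandas as pd

articles = pd.read_csv("daily_mail_money_annotated.csv", parse_dates=["published"])

weekly_rip_offs = (
    articles.set_index("published")
            .resample("W")["is_rip_off"]
            .sum()
)

# A rolling mean smooths week-to-week noise before plotting the trend.
print(weekly_rip_offs.rolling(26, min_periods=1).mean().tail())
```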




The sentiments associated with rip-off stories are always negative. We use Google's emotion analysis engine, which returns a value between -1 and 1 to describe the degree of negativity or positivity of the emotional load of an article. All our rip-off stories have a negative emotional score. The negativity of “rip-off” articles gets worse between 2000 and 2004, stays bad until 2010, and then improves. It should be noted that Google reports that long, complex pieces of text, as most of these articles are, can give imprecise measures of emotional valence if the text contains mixed messages. For example, an outrage followed by a suggestion for its resolution can show a higher emotional valence than a mild outrage alone.




The stridency of the stories - how loudly they convey their message, the volume on the sentiment dial (which Google measures as a positive scalar) - is relatively constant until 2012, and then rises steadily. This rise is more pronounced for rip-off stories than for non-rip-off stories.
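In the Google Cloud Natural Language API these two measures correspond to the sentiment score (valence, in [-1, 1]) and magnitude (a non-negative scalar that grows with the amount of emotional content). A minimal sketch of reading both off one article follows; again, the input is illustrative.

```python
# A minimal sketch of the valence/stridency measurement, assuming the
# google-cloud-language client.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def emotion_measures(article_text: str) -> tuple[float, float]:
    """Return (valence, stridency) for one article.

    valence   -> sentiment score in [-1, 1]
    stridency -> sentiment magnitude, a non-negative scalar; long mixed-tone
                 articles can accumulate a large magnitude even when the net
                 score is mild, which is the caveat noted above.
    """
    document = language_v1.Document(
        content=article_text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    return sentiment.score, sentiment.magnitude
```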


Using text classification tools to automatically tag articles, we can estimate the dominant subject of each story. In the graph below, the vertical axis is the smoothed frequency of stories, and each subject is represented by a colour. The red and yellow shades are all finance subjects. We find that Finance regularly tops the list of rip-off frequencies, both before and after the GFC. By 2010, Finance and Banking were coming off their rip-off peaks. The fall in total rip-off frequency stalls by 2014, and the total starts to rise again in 2017. The increase comes from “energy”, “autos”, “telecoms” and a broad, composite “other”. So despite the remarkable dominance of banking, loans and finance, we find that new subjects become the focus of rip-off stories. “Real estate” and “mortgages”, the subjects perhaps most closely linked to the macro-causes of the GFC, show a low and relatively constant level of “rip-off” stories.
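A chart of this kind can be sketched as a stacked area plot of smoothed per-subject counts. The data file, column names and smoothing window below are illustrative assumptions, not the exact chart above.

```python
# A minimal sketch of the subject-frequency chart, assuming a table of
# rip-off stories with a date and a dominant-subject label.
import pandas as pd
import matplotlib.pyplot as plt

rip_offs = pd.read_csv("rip_off_stories.csv", parse_dates=["published"])

monthly_by_subject = (
    rip_offs.groupby([pd.Grouper(key="published", freq="M"), "subject"])
            .size()
            .unstack(fill_value=0)
)

# Smooth each subject's series before stacking, as in the chart above.
smoothed = monthly_by_subject.rolling(12, min_periods=1).mean()

smoothed.plot.area(figsize=(10, 5))
plt.ylabel("rip-off stories per month (smoothed)")
plt.show()
```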






We see that over this period, “rip-off” stories are increasingly likely to involve “household name” firms. From 2008, the total number of well-known firms in the “rip-off” corpus is markedly higher than before, and the maximum level is reached after 2017.
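One simple way to approximate this count is to match the extracted organisation entities against a hand-curated list of household names and count, per year, how many appear in rip-off stories. The list and column names below are illustrative.

```python
# A minimal sketch, assuming one row per (article, organisation) pair from the
# entity-extraction step; the household-name list is a hand-picked illustration.
import pandas as pd

HOUSEHOLD_NAMES = {"Barclays", "British Gas", "Tesco", "Vodafone"}

entities = pd.read_csv("rip_off_entities.csv", parse_dates=["published"])

known = entities[entities["organisation"].isin(HOUSEHOLD_NAMES)]
firms_per_year = known.groupby(known["published"].dt.year)["organisation"].nunique()
print(firms_per_year)
```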





The gapminder-style animation combines all of these measures in a visual summary. We posit that the redder, the bigger and the further to the top right the spots are, and the greater their number, the greater the sense of delegitimation.
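An animation of this kind can be built with plotly's animated scatter plots. Which measure sits on which axis in the original animation is our assumption here; the sketch just shows the mechanics.

```python
# A minimal sketch of a gapminder-style animation, assuming a yearly summary
# table per subject (column names and axis assignments are illustrative).
import pandas as pd
import plotly.express as px

summary = pd.read_csv("rip_off_summary_by_year_and_subject.csv")

fig = px.scatter(
    summary,
    x="stridency",               # mean sentiment magnitude of the subject's stories
    y="household_name_count",    # well-known firms implicated that year
    size="story_count",          # number of rip-off stories
    color="valence",             # mean sentiment score; negative maps to red
    color_continuous_scale="RdBu",
    animation_frame="year",
    hover_name="subject",
)
fig.show()
```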




So what?


In our paper, we point to some of the more obvious questions these measures raise, and also towards the possibility of helping with answers. We highlight four questions:


  1. Did the delegitimation come because firm behaviour got worse, or do we instead see some degree of ‘narrative independence’ (in economics parlance, can narratives serve as an identifying assumption)? The working paper provides some evidence for narrative independence. This is clearly a central question if we take seriously claims like Lord King's that a change in narrative can be a causal explanation for macroeconomic outcomes and policy. If there is narrative independence, then economics needs to understand the creation, spread and propagation of economically important stories. We believe these tools show a way forward. 

  2. Did delegitimation follow the pattern of J K Galbraith's "bezzle effect", whereby people brush off “rip-offs” in the good times but, perhaps because of consumers' need for belt-tightening, root out embezzlement after the crisis? We suggest reasons to doubt this as a sufficient explanation of our data, again placing greater emphasis on narrative independence. If the bezzle effect were a sufficient explanation, narratives would not be an identifying assumption after all.

  3. Could the change in newspaper business models account for a change in the type and mix of stories? We suggest that the move of advertising revenues online is likely to have produced a "clickbait effect". This offers one type of account that economists are well placed to track when analysing economically significant narratives: what are the incentives of those who make stories for a living, and how might those incentives change the stories they tell?

  4. Is there any evidence, following, for example, Schama (2010), that delegitimising stories are part of a social scapegoating process, perhaps one that protects established interests? We provide some tentative and indicative support for this hypothesis. This is an example of an account of narrative spread that is not narrowly economic and might instead borrow from the work of media sociology and anthropology to illuminate economic outcomes and policy.


We believe that these new measurement tools offer a very rich seam of quantitative data about narratives. We hope to see the emergence of a sophisticated "narratometrics", allowing economists to investigate texts, for example in order to better model "animal spirits". More generally, we hope this will lead us to develop more useful models of economic agents, whose beliefs, emotions and preferences are affected by the media they consume. 


Becker and Stigler’s classic De Gustibus non est Disputandum (AER 1977) encourages economists to adopt a rational choice, fixed-preference, whole-of-life utility model as a building block in order not to "abandon" the examination of taste to "psychologists, anthropologists, phrenologists and sociobiologists". We think that the development of these new methods of narratometrics should lead economists to a very different conclusion: to work with anthropologists, sociologists and others to understand the flow of meanings through the social whole.   




[1] This work was supported by the ESRC’s “Rebuilding Macroeconomics” initiative at NIESR, and by BEIS. Particular thanks to Angus Armstrong at NIESR for allocating the researcher time and to Luke Pereira for his interest in the analytical methods, which allowed us to use this work as a testbed for the SignalFish.io text-processing tool flow.
