Ethical Guidance for Systemic Risk

by Fabian Schuppert


Systemic risk is a major challenge for macroeconomic policymaking. In response to Her Majesty the Queen’s famous question of why no one saw the global financial crisis coming, a gathering of the ‘great and good’ at the British Academy concluded that it was a ‘failure of the collective imagination of many bright people to understand the risks to the system as a whole’ (italics added). Our project for Rebuilding Macro probes some of the ethical questions which arise when contemplating the regulation of such systems.


There are many systemic risks we have to contend with beyond finance. The emergence of COVID-19 in a small corner of our complex global food network, transmitted through human contact, is an obvious case in point. Then there are our private actions leading to environmental degradation, the biggest systemic risk of all. And then there are many less obvious systemic risks which we rarely acknowledge. The use of algorithms and artificial intelligence can create contagion through networks way beyond the scope of a single agent.


In macroeconomics we have tended to approach systemic risk within our Newtonian framework. Every outcome has a cause which, if we work hard enough, can be explained by a component part. In real life, the search for a cause becomes closely connected with who to blame. In the immediate aftermath of the financial crisis, banks were shown to be too big to fail and to have taken excessive risks: we had a cause and the villains, and so the case was closed. Policy has been aimed at shifting risk back to the equity holders ever since.


But what if systemic risk arises in a complex system? Such systems occur when there are non-trivial interactions between the components of the system which lead to outcomes which cannot be fully explained in terms of the constituent parts. The more interactions, especially non-linear or contingent interactions, the more complex the system. If the agents in the system can adapt, then outcomes are emergent properties of the system. They cannot be reduced and explained by the component parts. But then there is often no longer a single cause and effect.

Systemic risks rarely arise as a result of directly attributable individual wrongdoing. More commonly they result from collective behaviours and complex amplification mechanisms; the procyclical dynamics familiar to economists are a simple example. Identifying responsibility and culpability is therefore fraught with difficulty. Air crash and other disaster investigators often conclude that multiple interrelated and unforeseeable factors were involved, rather than a single cause.


For moral philosophers this apparent severing of individual actions from consequences poses a set of especially tricky ethical questions. If systemic crises commonly result in severe negative consequences for those detached from the actions taken, then serious questions must be asked about injustice and how to prevent such outcomes. Perhaps this has contributed to a sense of ‘injustice’ which has shaped the lack of trust in authorities and even hindered our progress over the last decade.

Public authorities have a moral duty of care to their populations to ensure that vulnerable groups are not exposed to unacceptably harmful risks. The challenge is to identify what is ethically ‘unacceptable’ and what ethical principles should be followed in offering such protections. If we cannot reduce all of the properties of the system to its component parts, our knowledge is already limited. We are therefore making ethical judgements with incomplete information, which takes us into a world of values and normative judgements.


The most common tool for evaluating the acceptability of risks, cost-benefit analysis (CBA), suffers from significant shortcomings when it comes to delivering ‘just’ systemic risk outcomes. CBA operates on a societal or population level, focusing on a fairly narrow set of expected outcomes and depending on numerical probability for permissibility calculations. It aims to capture the relevant material outcomes, in terms of resources and/or well-being, for an entire population, aggregating scores across that population and ultimately calculating whether the overall benefits outweigh the overall costs or vice versa.[1]

Unfortunately, CBAs effectively violate what philosophers call the separateness of persons. Their aggregative character means that CBAs cannot protect the rights and interests of individual agents. Some parts of a population can accrue all the benefits, while other parts assume only potential costs and risk. Even Adler’s weighted social welfare functions cannot fully circumvent the problem: even in modified form, applying CBA can still justify situations where a majority obtains a range of benefits while a smaller minority shoulders almost all the costs, despite such an outcome being obviously ethically indefensible.
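The aggregation problem can be made concrete with a toy calculation. The sketch below uses entirely hypothetical figures and group names: a population-level CBA sums size-weighted net benefits and approves the policy, even though every member of the minority group is left strictly worse off.

```python
# Toy illustration of the separateness-of-persons problem in aggregate CBA.
# All numbers and group labels are hypothetical.

groups = {
    "majority": {"size": 900, "net_benefit_per_person": +10.0},
    "minority": {"size": 100, "net_benefit_per_person": -60.0},
}

def aggregate_net_benefit(groups):
    """Population-level CBA: sum of size-weighted per-person net benefits."""
    return sum(g["size"] * g["net_benefit_per_person"] for g in groups.values())

total = aggregate_net_benefit(groups)       # 900*10 - 100*60 = +3000
worst_off = min(g["net_benefit_per_person"] for g in groups.values())

print(f"Aggregate net benefit: {total:+.0f}")            # passes the CBA test
print(f"Worst-off group, per person: {worst_off:+.0f}")  # yet strictly harmed
```

The aggregate figure is positive, so a naive CBA would permit the policy; the distributional information that condemns it is simply averaged away.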


CBA estimates the costs and benefits of a given policy by selecting the most likely outcome on both the cost and benefit side (the so-called expected outcome). While it makes sense to pay special attention to the most likely outcomes, doing so can fail to pick up less likely but still relevant ‘fat tail’ risks. Random sampling, common in CBA, can be an unreliable predictor, because occurrences of systemic risk in complex systems can take the form of cascading failures involving rapid, erratic changes at critical thresholds that sampling techniques miss.
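A minimal sketch of why sampling understates fat-tail risk, under a deliberately stylised and hypothetical loss model: routine small losses most of the time, plus a rare cascading failure. A modest random sample will often contain no cascade at all, so a sampled ‘expected outcome’ can sit far below the true expected loss.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def annual_loss():
    """Hypothetical loss model: a routine loss of 1 almost always,
    a rare systemic cascade of 1000 with probability 0.005."""
    if random.random() < 0.005:
        return 1000.0
    return 1.0

# True expected loss, computed analytically from the model:
true_mean = 0.995 * 1.0 + 0.005 * 1000.0   # = 5.995

# A modest sample of 100 draws will usually see zero or one cascade,
# so its mean is a noisy, typically low estimate of the true risk.
sample = [annual_loss() for _ in range(100)]
sample_mean = sum(sample) / len(sample)

print(f"true expected loss: {true_mean:.3f}")
print(f"sampled estimate:   {sample_mean:.3f}")
```

The point is not the particular numbers but the structure: the tail event dominates the true expectation, yet most finite samples never observe it.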


A sophisticated ethical account of systemic risk, capable of reducing some shortcomings of CBA, requires thinking in terms of three dimensions and three values: distribution, where the harms an agent experiences are unrelated to the actions they took, giving those harms an arbitrary character that breaches the value of fairness; size, extent and intensity, where the harms incurred risk being so severe that an agent’s practical control and capacity to plan a future are undermined, representing a form of domination that inhibits the value of freedom; and foreseeability, where such harms are both foreseeable and preventable, yet are discounted as non-urgent because the benefits of prevention appear narrow or limited, representing a negligent disregard that contravenes the value of respect.

In our project we hope to develop these ideas into a more coherent ethical framework for assessing systemic risk when it arises in complex systems. In this world we can no longer delegate all ethical issues to market prices. We are compelled to re-introduce ethics into macroeconomics. Our project page is here.

[1] There are varieties of CBA which operate quite differently from this model. However, these are normally best thought of as supplementary pieces of policy analysis rather than as morally sound decision-making tools.



2 Dean Trench Street, Smith Square, London, SW1P 3HE, United Kingdom

© National Institute of Economic and Social Research 2019