Friday, December 15, 2017

Artificial Intelligence in Risk Management: Looking for Risk in All the Wrong Places

Opportunity is where you find it: turn your risk manager into a profit center.

From naked capitalism, November 15:

Artificial Intelligence and the Stability of Markets
Of course, for those who can take advantage of it, instability in markets may not be such a bad thing. Nor is systemic risk, especially if the public good is not your first concern.
Artificial intelligence (AI) is useful for optimally controlling an existing system, one with clearly understood risks. It excels at pattern matching and control mechanisms. Given enough observations and a strong signal, it can identify deep dynamic structures much more robustly than any human can and is far superior in areas that require the statistical evaluation of large quantities of data. It can do so without human intervention.

We can leave an AI machine in day-to-day charge of such a system, where it automatically self-corrects, learns from mistakes, and meets the objectives of its human masters.

This means that risk management and micro-prudential supervision are well suited for AI. The underlying technical issues are clearly defined, as are both the high- and low-level objectives.
However, the very same qualities that make AI so useful for the micro-prudential authorities are also why it could destabilise the financial system and increase systemic risk, as discussed in Danielsson et al. (2017).

Risk Management and Micro-Prudential Supervision
In successful large-scale applications, an AI engine exercises control over small parts of an overall problem, where the global solution is simply aggregated sub-solutions. Controlling all of the small parts of a system separately is equivalent to controlling the system in its entirety. Risk management and micro-prudential regulations are examples of such a problem.

The first step in risk management is the modelling of risk, and that is straightforward for AI. This involves the processing of market prices with relatively simple statistical techniques, work that is already well under way. The next step is to combine detailed knowledge of all the positions held by a bank with information on the individuals who decide on those positions, creating a risk management AI engine with knowledge of risk, positions, and human capital.
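To make that concrete, here is a minimal sketch of one such "relatively simple statistical technique": estimating one-day historical Value-at-Risk from a price series. The 99% confidence level and the simulated prices are illustrative assumptions, not anything specified in the article.

```python
import numpy as np

def historical_var(prices, confidence=0.99):
    """One-day historical Value-at-Risk from a price series.

    Returns the daily loss (as a positive fraction) that past returns
    exceeded only (1 - confidence) of the time.
    """
    returns = np.diff(np.log(prices))                 # daily log returns
    return -np.percentile(returns, (1 - confidence) * 100)

# Simulated prices standing in for real market data
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, 1000)))
print(f"one-day 99% VaR: {historical_var(prices):.2%}")
```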

While we still have some way to go toward that end, most of the necessary information is already inside banks’ IT infrastructure and there are no insurmountable technological hurdles along the way.
All that is left is to inform the engine of a bank’s high-level objectives. The machine can then automatically run standard risk management and asset allocation functions, set position limits, recommend who gets fired and who gets bonuses, and advise on which asset classes to invest in.
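As a hedged illustration of what informing the engine of a high-level objective might look like, the sketch below turns a single bank-wide daily risk budget into per-desk notional position limits. The equal risk split, the independence assumption, and the volatility figures are all invented for the example.

```python
def position_limits(total_risk_budget, desk_volatilities):
    """Split a bank-wide daily risk budget across desks and convert
    each desk's share into a notional position limit.

    Assumes desk P&Ls are independent, so desk risks add in
    quadrature and each desk can carry budget / sqrt(n).
    """
    per_desk_risk = total_risk_budget / len(desk_volatilities) ** 0.5
    return {desk: per_desk_risk / vol
            for desk, vol in desk_volatilities.items()}

# Hypothetical desks with illustrative daily return volatilities
limits = position_limits(
    total_risk_budget=1_000_000,   # $1m bank-wide daily risk budget
    desk_volatilities={"rates": 0.005, "credit": 0.010, "equities": 0.020},
)
for desk, lim in limits.items():
    print(f"{desk}: ${lim:,.0f} notional limit")
```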
The same applies to most micro-prudential supervision. Indeed, AI has already spawned a new field called regulation technology, or ‘regtech’.

It is not all that hard to translate the rulebook of a supervisory agency, now for the most part in plain English, into a formal computerised logic engine. This allows the authority to validate its rules for consistency and gives banks an application programming interface to validate practices against regulations.
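A toy sketch of what one rulebook clause might look like once translated into computerised logic. The clause itself (a 25% cap on single-counterparty exposure) and every name in the code are hypothetical; the point is only that a rule in this form can be checked mechanically, which is what a regtech API would expose to banks.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical rulebook clause: no single counterparty may account
# for more than 25% of a bank's total credit exposure.
SINGLE_COUNTERPARTY_CAP = 0.25

@dataclass
class Exposure:
    counterparty: str
    amount: float

def check_concentration(exposures):
    """Return a list of breaches of the clause; an empty list means compliant."""
    total = sum(e.amount for e in exposures)
    by_cpty = defaultdict(float)
    for e in exposures:
        by_cpty[e.counterparty] += e.amount
    return [f"{cpty} holds {amt / total:.0%} of total exposure, above the "
            f"{SINGLE_COUNTERPARTY_CAP:.0%} cap"
            for cpty, amt in by_cpty.items()
            if amt / total > SINGLE_COUNTERPARTY_CAP]

breaches = check_concentration([
    Exposure("Bank A", 40.0),
    Exposure("Bank B", 35.0),
    Exposure("Fund C", 25.0),
])
print(breaches or "compliant")
```

Once rules are encoded this way, the same check can run on both sides: inside the bank before a trade, and at the authority over whatever structured data the bank reports.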

Meanwhile, the supervisory AI and the banks’ risk management AI can automatically query each other to ensure compliance. This also means that all the data generated by banks becomes optimally structured and labelled and automatically processable by the authority for compliance and risk identification.

There is still some way to go before the supervisory/risk management AI becomes a practical reality, but what is outlined above is eminently conceivable given the trajectory of technological advancement. The main hindrance is likely to be legal, political, and social rather than technological.
Risk management and micro-prudential supervision are ideal use cases for AI: clearly defined rules to enforce, processes generating vast amounts of structured data, closely monitored human behaviour, precise high-level objectives, and directly observed outcomes.

Financial stability is different. There the focus is on systemic risk (Danielsson and Zigrand 2015), and unlike risk management and micro-prudential supervision, it is necessary to consider the risk of the entire financial system together. This is much harder because the financial system is for all practical purposes infinitely complex and any entity – human or AI – can only hope to capture a small part of that complexity.

The widespread use of AI in risk management and financial supervision may increase systemic risk. There are four reasons for this.

1. Looking for Risk in All the Wrong Places
Risk management and regulatory AI can focus on the wrong risk – the risk that can be measured rather than the risk that matters.

The economist Frank Knight established the distinction between risk and uncertainty in 1921. Risk is measurable and quantifiable and results in statistical distributions that we can then use to exercise control. Uncertainty is none of these things. We know it is relevant but we can’t quantify it, so it is harder to make decisions.
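A small numerical illustration of the distinction, with invented distributions: a loss quantile estimated from a stable regime is "risk" and works as advertised, but when the data-generating process shifts in a way the history never recorded, the quantified estimate fails badly.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Risk": losses drawn from a stable regime the model has seen.
calm = rng.normal(0.0, 1.0, 10_000)
q99 = np.percentile(calm, 99)              # modelled 1-in-100 loss
print(f"modelled 99th-percentile loss: {q99:.2f}")

# "Uncertainty": the regime shifts in a way the history never showed.
crisis = rng.normal(0.0, 4.0, 10_000)
rate = (crisis > q99).mean()
print(f"exceedance rate after the shift: {rate:.1%} (the model promised 1%)")
```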

AI cannot cope well with uncertainty because it is not possible to train an AI engine against unknown data. The machine is really good at processing information about things it has seen. It can handle counterfactuals when these arise in systems with clearly stated rules, as with Google's AlphaGo Zero (Silver et al. 2017). It cannot reason about the future when it involves outcomes it has not seen....MORE
e.g.

"When Google was training its self-driving car on the streets of Mountain View, California, the car rounded a corner and  encountered a woman in a wheelchair, waving a broom, chasing a duck. The car hadn’t encountered this before so it stopped and waited."