
“There’s the psychology that provides the risk within, as well as the world that provides the risk without,” said Hersh Shefrin, author, academic and interviewee in our recent Living the Trade Lifecycle podcast on the interconnectedness of behavioural psychology and risk management. Applying this view to technology, Shefrin explains how psychological risks, the “risks within”, can be transferred into technology such as AI, where they can lead to big mistakes. Read on for more on the limitations of the AI black box, and on how AI validators can address the danger of embedding psychological risks in the very technology that is meant to help organisations better understand external risks.
The limitations of AI and machine learning
The biggest limitation to be aware of with new technologies such as AI and machine learning is their black-box nature. By black box, I mean that inputs go into a machine learning process and outputs come out the other end, but how the inputs are transformed into outputs is usually a mystery because the mathematics is so complicated. It is often too difficult to work out what is going on inside the black box, and this opacity raises all kinds of issues. One issue, for example, is excessive optimism about the effectiveness of machine learning. In essence, we trust the technology because it is complicated and impressive, but that also means we can over-rely on it by attaching too high a probability to it being foolproof. So, that’s one risk and one challenge.
A second risk revolves around the inputs fed to the technology. Most machine learning is built on the premise that you input a large data set and train a model to relate inputs to outputs based on the characteristics of that training data. If the training data is built from human experience, judgment and decisions, it will probably reflect all of the behavioural biases and psychological pitfalls humans possess, such as excessive optimism, confirmation bias, motivated reasoning and aspiration-based risk taking (listen to the podcast for more on this).
What machines are really good at is mimicking: finding sophisticated ways to pick up the critical characteristics of the underlying data and then reproduce them. So, if the data was generated by humans and reflects important psychological pitfalls, there is a risk that those pitfalls will be nurtured within the black box and come out the other end. If we are unaware of this, we are doubly exposed to psychological risk. In short, bad psychology is built into the data sets and is therefore baked into the black box.
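To make this mimicry concrete, here is a minimal sketch under wholly hypothetical assumptions (the loan-approval scenario, the feature names and the labelling rule are all invented for illustration): a simple nearest-neighbour “black box” trained on labels produced by an optimism-biased human reviewer faithfully reproduces that bias.

```python
# Hypothetical sketch: a biased human labeller's decisions are
# mimicked by a simple learner, bias included.

def biased_human_label(applicant):
    # The reviewer is excessively optimistic about high-income
    # applicants and ignores debt entirely -- a behavioural bias
    # embedded in the training labels.
    return "approve" if applicant["income"] > 50 else "reject"

training_data = [
    {"income": 80, "debt": 90},   # heavy debt, approved anyway
    {"income": 30, "debt": 5},    # light debt, rejected anyway
    {"income": 60, "debt": 10},
    {"income": 40, "debt": 70},
]
labels = [biased_human_label(a) for a in training_data]

def predict(applicant):
    # 1-nearest-neighbour "black box": copy the label of the most
    # similar training example. It learns whatever pattern the
    # labeller followed, sensible or not.
    def dist(a, b):
        return abs(a["income"] - b["income"]) + abs(a["debt"] - b["debt"])
    nearest = min(range(len(training_data)),
                  key=lambda i: dist(applicant, training_data[i]))
    return labels[nearest]

# A high-income, high-debt applicant is approved: the model has
# reproduced the reviewer's optimism bias, not credit judgment.
print(predict({"income": 75, "debt": 95}))  # approve
```

Nothing in the model’s output reveals that debt was ignored; that is the double exposure described above, with the bias now hidden inside the box.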
Additionally, there is a final stage: the potential for bad psychology on the part of the people who use the outputs of the black box, because they over-rely on the technology in the first place. This is akin to the familiar adage in financial operations of “bad data in, bad data out”.
And oftentimes the nature of the output is such that, in most situations, the machine looks like it is doing great, better than humans, but it then makes huge errors that a real person would be unlikely to make if they were actually behind the wheel.
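This failure mode can be sketched with a toy example (the “stress” variable and regime break are hypothetical): a model fitted only to calm conditions looks essentially perfect in-sample, then errs enormously the moment conditions change in a way the training data never showed.

```python
# Hypothetical sketch: great in-sample fit, huge out-of-sample error.

# True process: a regime break occurs at high stress levels that the
# training data never covers.
def true_value(stress):
    return 0.1 * stress if stress < 10 else 0.1 * stress + 50.0

# Training data covers only the calm regime (stress 0..9).
train_x = list(range(10))
train_y = [true_value(x) for x in train_x]

# Fit y = a*x + b by ordinary least squares (closed form, stdlib only).
n = len(train_x)
mx = sum(train_x) / n
my = sum(train_y) / n
a = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
    sum((x - mx) ** 2 for x in train_x)
b = my - a * mx

def model(stress):
    return a * stress + b

# In-sample, the model looks flawless.
max_in_sample_err = max(abs(model(x) - true_value(x)) for x in train_x)

# In the crisis regime, the error is enormous (about 50 units).
crisis_err = abs(model(20) - true_value(20))

print(max_in_sample_err, crisis_err)
```

A human behind the wheel might recognise a crisis as a different regime; the fitted model, having seen only calm data, cannot.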
Why AI validators are needed
So, what is a possible solution to this problem? This is where AI validators need to be introduced as part of risk management teams. AI validators will be experts in data management and data science, but they will also have a good sense of what risk management is about, including commonly used model risk management concepts, where the focus is on risk management systems that are complex but still broadly intelligible.
AI validators also need good intuition, a feel for the way things should or do work, in order to contend with the black boxes and technologies being used. Highly quantitative individuals from the new world of data science can join risk management teams, which together can find ways to fuse the different types of skills needed to manage risks effectively.
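One concrete task an AI validator might perform, offered here as an illustrative sketch rather than a prescribed method (the toy model, the feature names and the tolerance are all hypothetical), is a behavioural probe of the black box: vary one supposedly irrelevant input while holding everything else fixed, and flag the model if its outputs diverge.

```python
def black_box(applicant):
    # Stand-in for an opaque model; unbeknownst to its users, it keys
    # on a feature (postcode) that should be irrelevant to credit risk,
    # a bias absorbed from its training data.
    score = applicant["income"] - applicant["debt"]
    if applicant["postcode"] == "A1":
        score += 40  # hidden bias
    return score

def probe_irrelevant_feature(model, base, feature, values, tolerance=1.0):
    # Validator check: sweep one supposedly irrelevant feature while
    # holding all other inputs fixed; report the spread in outputs.
    outputs = []
    for v in values:
        applicant = dict(base, **{feature: v})
        outputs.append(model(applicant))
    spread = max(outputs) - min(outputs)
    return spread <= tolerance, spread

ok, spread = probe_irrelevant_feature(
    black_box,
    base={"income": 60, "debt": 20, "postcode": "A1"},
    feature="postcode",
    values=["A1", "B2", "C3"],
)
print(ok, spread)  # False 40 -> the validator flags the model
```

The point is not the particular check but the stance: the validator treats the black box as something to be interrogated, not trusted, combining quantitative skill with risk management intuition about which inputs should not matter.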
Going back to my original statement, organisations rely on technology to manage the risks the world and markets present. Yet, as in general risk management practice, the focus needs to be on ensuring the risks within are taken into account at every stage, and in particular are not transferred to the technology in the form of its inputs or of outputs to be interpreted.
*Comments from the podcast Behavioural Psychology & Risk – Understanding How They Are Connected – Derivsource