In the Rethinking Treasury series, we have discussed at length how behavioural biases can impact our decision making, and hence the importance of relying less on our instincts and more on systematic, quantitative approaches to analysing risk.

    In our next few articles, we will take a step back and look at various sources of cognitive error. These errors provide a glimpse into how our brains work. Importantly, they also highlight when we need to be more wary of our own intuition and help us identify our blind spots. In turn, this can ensure we use tools and logic to overcome these deficiencies and make better decisions.

    Here, we will discuss two well-known cognitive errors and how they can adversely impact our risk management decisions.

    Base rate neglect

    The following is a classic example that confounds many physicians. Imagine there is a population of 1,000 people, in which 2 per cent are infected with a disease. A clinical test has a false positive rate of 5 per cent and no false negative rate. If a patient tests POSITIVE, what is the likelihood that they really do have the disease?

    A. Less than 50 per cent
    B. 50 per cent
    C. More than 50 per cent

    Since the test will tell the truth with 95 per cent confidence, it would seem safe to say that the answer should be C. It may surprise you that the answer is actually A. In fact, the patient can take comfort from the fact that they only have a 29 per cent chance of actually having the disease.

    Why is this? Let’s walk through the calculations.

    Amongst those who DO NOT have the disease (i.e. 98 per cent of the population, or 980 people), a false positive rate of 5 per cent means that 49 people would test POSITIVE despite being free from the disease. On the other hand, with no false negative rate, the test would return POSITIVE for all 20 people who actually have the disease.

    Therefore, the likelihood that a person who tests POSITIVE actually has the disease is only 20 / (20 + 49) = 29 per cent.
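
    For readers who prefer to see the arithmetic laid out, the short sketch below reproduces the calculation in Python. The figures are exactly those used in the example above; the variable names are ours.

```python
# Bayes' rule with the numbers from the example above:
# 1,000 people, 2% infected, 5% false positive rate, no false negatives.
population = 1000
infected = int(0.02 * population)     # 20 people actually have the disease
healthy = population - infected       # 980 people do not

true_positives = infected             # no false negatives: all 20 infected people test POSITIVE
false_positives = 0.05 * healthy      # 5% of the 980 healthy people also test POSITIVE, i.e. 49

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"P(disease | POSITIVE) = {p_disease_given_positive:.0%}")   # prints 29%
```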

    What is going on here? The result seems counterintuitive because people are known to be insensitive to the “base rate.” Very few people, when making this statistical judgment in their head, would make use of the information that “only 2 per cent are infected with the disease”, or would even know how to. The fact that the disease was rare to start with was a key, salient fact affecting the actual likelihood. Yet we seem to be blind to this base rate.

    The same “base rate neglect” can be observed in many other situations. For example, after hearing news of a recent plane crash, many people might choose to drive long distances instead of taking a plane. (There is also a related bias at work here, the “recency bias”.) However, what they have not considered is that millions of other flights have landed safely and did not make the news. The visceral fear of a single event tends to be far more powerful than baseline statistics. The statistical fact is that the base rate of dying from driving long distances is orders of magnitude higher than that of flying.

    In markets, people reacting to recent news headlines can easily fall into the trap of “base rate neglect” if they have not taken a more holistic view of the baseline probabilities. For example, the fear of a “recession” may be overblown if people forget that the US economy has been in recession in only 56 of the last 477 months (12 per cent of history since 1980).
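
    To see why the base rate matters here too, the hypothetical sketch below combines that 12 per cent historical base rate with an imaginary “recession warning” indicator, using the same Bayes’ rule arithmetic as before. The indicator’s 80 per cent hit rate and 30 per cent false-alarm rate are assumptions chosen purely for illustration, not estimates of any real signal.

```python
# Illustration only: the 12% base rate (56 recession months out of 477) comes from the
# article; the indicator's hit rate and false-alarm rate below are hypothetical numbers.
base_rate = 56 / 477                  # ~12% of months since 1980 were recession months

hit_rate = 0.80                       # assumed P(alarming signal | recession month)
false_alarm_rate = 0.30               # assumed P(alarming signal | no recession)

p_signal = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
p_recession_given_signal = hit_rate * base_rate / p_signal

print(f"Base rate alone:          {base_rate:.0%}")                    # ~12%
print(f"Given an alarming signal: {p_recession_given_signal:.0%}")     # ~26%, still well below 50%
```

    Even with a fairly accurate (hypothetical) warning signal, the low starting base rate keeps the updated probability of recession far from a certainty.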

    Anchoring

    “Did Gandhi die before or after age 9? What’s your best guess of Gandhi’s age when he died?”

    This is not intended as a test of your historical knowledge. In this experiment, which Daniel Kahneman has written about, another group of students was asked the same set of questions, with only the first question tweaked: “Did Gandhi die before or after age 140?”

    The true answer lies somewhere between 9 and 140, but both ages are nowhere near it. As such, there is no insight to be gained from hearing 9 or 140. If people were purely logical, the first question would have no impact on how they answer the second.

    Except that is not what happened. The average guess actually differed materially between the two groups. The first group, who were given the age of 9, made an average guess of 50, whereas the second group, who were shown the age of 140, made an average guess of 67. The anchor caused the average guess to differ by more than 30 per cent between the two groups!

    Even more interestingly, anchors are just as powerful when the person clearly knows they were generated randomly and cannot be relevant. In a famous experiment, judges were asked to throw a pair of dice before handing down sentencing decisions on an identical legal case. Those who had rolled a high number (the high anchor) gave longer sentences, averaging 8 months, versus just 5 months from those who had rolled a low number. It was also found that expertise and experience did not reduce the anchoring effect (Englich et al., 2006).

    In risk management, anchoring can lead to complacency in decision making. When asked to imagine where markets could be in, say, one year’s time, people naturally anchor their predictions to the current spot rate, as well as perhaps to some well-respected analysts’ forecasts. This can be especially problematic when analysts happen to share a “consensus” and have converging forecasts.

    In his famous book “The Black Swan”, Nassim Taleb defines a Black Swan as an event of low probability but extreme impact, with predictability only in hindsight. By definition, Black Swans will have escaped all analysts’ predictions (or, if one analyst did predict them, the prediction will have been ignored by virtually everyone). Black Swans are not just a failure of prediction; they are also a failure of imagination, due in part to the anchoring bias.

    Key Takeaway for CFOs and Treasurers

    In this article, we examined two classic examples of cognitive “failure”. In the first, important information was unintentionally overlooked; in the second, potentially irrelevant information was given too much weight, and therefore too much influence.

    Our human brains are not good at processing absolute knowledge. The anchoring bias shows that our brains are conditioned to make relative decisions, so much so that even the presence of a clearly irrelevant anchor can derail us. The base rate neglect bias shows that without going through systematic calculations, our intuition can often lead us to “missing the bigger picture” and in turn guide us to the wrong conclusion.

    Being aware of these biases is the first step towards de-biasing our treasury decision-making process. Systematic, logical processes can also help guide us in the right direction.

    In our next article, we will discuss another famous experiment, the “Linda problem”, and explore another significant source of cognitive error: one which demonstrates how much our brains love narratives and stories, and how readily we allow them to subconsciously influence our decisions.
