"If you don't know where you are going, any road will get you there." - Lewis Carroll
A little more than three years ago, banks embarked on a journey, reporting forward-looking expected credit losses (ECLs) under IFRS 9. The years leading up to it were spent gathering datasets and developing (validating, auditing, and so on) the complex statistical models that, for most, drove the substantial majority of allowances prior to 2020.
Enter COVID-19. Suddenly, economic conditions were dramatically different from those on which the models had been trained and calibrated. Not surprisingly, their outputs seemed to miss important considerations, and management teams applied overlays (both positive and negative) to compensate. Those overlays were meant to be temporary. Now they remain, yet the defaults they were intended to capture haven’t emerged. This apparent disconnect has boards, audit committees and others asking how long such divergence can persist.
While there’s no doubt that overlays and the assumptions underpinning them should continue to be challenged, it might not be as easy as unwinding and reverting to past ways. There’s an undeniable persuasion in the rigour of data and mathematics, but there’s also an increasing possibility that the future might look different from the past. That might sound innocuous, but for models that rely on past experiences to predict future ones, it’s a foundational threat.
In some parts of the world, government stimulus has economies set to boom for years to come. Markets for real estate, equities and so on are at record-breaking highs, and deal flows show no signs of slowing. Meanwhile, some (including, recently, the IASB’s own Hans Hoogervorst) warn of the risks ‘lurking in plain sight’ – inflation, taxation and hybrid variants, to name a few. Our objective in calculating ECLs isn’t to predict which outcome will ultimately be seen (i.e. the losses that we anticipate will transpire), but rather to consider the probabilities of each and the effects they could have. In other words, whatever our approach, ECLs should approximate the sum of the probability-weighted cash shortfalls under all potential future scenarios, in a way that’s similar to how risk is often priced.
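The probability-weighted principle can be sketched in a few lines. The scenario names, weights and shortfall figures below are entirely hypothetical:

```python
# Sketch of the probability-weighted ECL principle: the allowance approximates
# the sum of cash shortfalls under each scenario, weighted by probability.
# Scenario names, weights and shortfalls are all hypothetical.

scenarios = {
    # scenario: (probability weight, expected cash shortfall)
    "upside":   (0.20,  40.0),
    "base":     (0.50, 100.0),
    "downside": (0.30, 260.0),
}

assert abs(sum(w for w, _ in scenarios.values()) - 1.0) < 1e-9  # weights sum to 100%

ecl = sum(w * shortfall for w, shortfall in scenarios.values())
print(f"Probability-weighted ECL: {ecl:.0f}")  # 136, vs 100 under the base case alone
```

Note that the most likely (base) scenario alone would imply an allowance of 100; the probability-weighted figure of 136 is higher because the downside, though less likely, is disproportionately severe. That is precisely why ECLs are not a single best estimate.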
For all of these reasons, we believe it’s worth thinking more about provisioning in 2021. In this publication, we focus on five key areas where we believe such thought is warranted:
- Unwinding of overlays – including key considerations and criteria (for example, tying to specific data, model and risk changes, or defaults arisen) that should be met before they reverse.
- Capturing new and emerging risks – such as increases in inflation, taxation and so on, through additional segmentation, scenarios, and revised approaches to modelling and mean reversion.
- Calculating scenario weights and loss distributions – considering additional scenarios, starting with risks up front (rather than running on autopilot with existing macroeconomic variables and projections), and using more robust loss distribution calculations as a back-check to ensure that all potentially material risks have been captured.
- Triangulating with internal and external indicators of price – such as movements in spreads, exposures and so on, to ensure that the positions being taken about risk in the provision calculations align with the bank’s adjudication and other business practices.
- Disclosures – which are, and will continue to be, challenging and centre stage for some time, and for which volume and transparency are not always the same.
We hope that you find this publication useful and, as always, please feel free to contact your local PwC professional or any of our Global Banking Industry Accounting Group members.
1) Unwinding of overlays
“At times I feel certain I am right while not knowing the reason.” - Albert Einstein
For most banks, models are producing outputs substantially below current allowances, and continued economic improvement without default emergence has shone a light on the overlays that bridge the gap. Meanwhile, some regulators have begun to express concern about the significant diversity in the magnitude and extent of overlays being applied. A few important things to keep in mind:
- there’s no universal definition – ‘overlay’ means different things to different people;
- while approaches and themes might be consistent, different starting points mean that no two will ever truly be alike;
- they range from highly qualitative to highly mathematical; and
- not all are made after the models are run (some are pre-model or in-model rather than post-model adjustments).
Of course, none of that means they’re less important or invalid in calculating allowances (quite the opposite, in fact). The principal difficulty for many banks today is knowing how much of them to release and when. In our publication ‘Post-model adjustments for expected credit losses during COVID-19’, we outlined some important questions to address in developing and documenting overlays. They included:
- What is the limitation that is being addressed, and why?
- How was the overlay quantified, and what rationale was used?
- What are the underlying assumptions, and how were they developed and supported?
- What data was used, and how was it determined to be appropriate and consistent with similar data used for other purposes?
- How will the overlay be consumed over time (for example, through model development / redevelopment, new data becoming available, or loan-level losses having transpired)?
- How will reasonableness / performance be assessed (for example, by using back-testing, KPI monitoring, comparison to stress-testing and stand-back tests)?
- What has been done to determine, at a sufficiently granular level, the exposures to which the overlay relates?
- How have the staging implications of the overlay been addressed?
We also raised flags about other things that need to be considered, like the potential for double counting and the importance of a stand-back to prevent missing the forest for the trees. To that end, what’s important now isn’t to jump to the conclusion that, because the defaults haven’t emerged, the overlays must have been misguided and should be reversed, but rather to reconsider whether and how the answers to these questions have changed. Overlays are inherently judgemental and, during uncertain times, no bank is looking to be an outlier. With that in mind, a good starting point is for audit committees and management to continue their open dialogue and to step back and think hard about whether there’s any evidence of bias as all the pieces come together. If a bank’s results are closely aligned with those of peers at a time when everyone is (at least to some extent) flying blind, it’s certainly worth being sure that alignment is appropriate.
Since initial implementation, there’s been a long discussion about the temporary nature of overlays, given that models and data are ultimately expected to catch up. Of course, temporary (not permanent) and short-lived aren’t the same thing. With that in mind, overlays should presumably come off when one or more of the following take place:
- data becomes available that previously was not;
- models are updated for previous limitations;
- defaults arise that had been anticipated by the overlay; or
- indicators of the risk(s) underlying the overlay change.
Demonstrating objectivity and robustness in the unwind of an overlay will mean being able to point directly to which of these factors has given rise to the release or application of the overlay to impairments, as the case may be. The more specific and explicit that such explanations can be, the better evidence they’ll provide. Consideration should also be given to further changes that are expected at year-end (or at other future reporting periods), when they’re likely to come, and consistency with information being prepared and reviewed by management. In some cases, by examining these factors it might become clear that, while defaults haven’t yet emerged, they’re still expected to and thus the overlays should remain (or new ones take their place). And, that’s okay.
Also worth considering are sensitivities of the allowance to changes in the risks that overlays are meant to address, and whether those sensitivities have changed. One of the challenges of overlays is the complexity of their interaction with models, and so care should be taken to ensure no layering or double counting of risks, whether they’re being added on or taken off. Keep in mind, too, that ECLs are meant to be probability-weighted and overlays are no exception. So, if they’re staying on or new ones are being added which aren’t being pushed into individual scenarios, then understanding how probabilities have been captured and applied will be important.
2) Capturing new and emerging risks
“I think I’m afraid of being happy because whenever I get too happy something bad always happens.” - Charles Schulz
There might be irony in today’s exuberance. While real estate, stocks and other asset prices break record highs, additional waves continue to wreak havoc and keep many of us at home. Businesses and individuals have largely been sheltered from the economic impacts of the pandemic, with government spending having filled the gaps. Of course, that spending has been funded by taking on new debt, which could give rise to risks of its own. Murmurs have begun to surface about inflation, housing (and other asset) bubbles, sovereign risk and future tax increases, each of which could have a meaningful impact on credit losses in the future. Then there are other things like transition and physical risks for climate change, hybrid variants, vaccine efficacy, and the list goes on.
It’s easy to focus on the most likely outcomes when thinking about losses, but that isn’t all that IFRS 9 requires. Rather than a single best estimate, ECLs are based on probability-weighted outcomes that include scenarios, even if they’re less likely than others. That’s important because it means that magnitude can matter as much as probability does. Put differently, even if it’s unlikely that these new and emerging risks will lead to significant losses, they’ll need to be considered if they could give rise to significant losses in the portfolio. Some ways to ensure they’re captured include:
Revisiting segmentation – the impact of new and emerging risks might be different for different segments of the portfolio. Now more than ever, thinking about the level at which an impact could arise is crucial as risks become increasingly specific, targeted and divergent. If in doubt, think about the level at which information is being monitored for risk management and other internal discussions and activities.
Adding scenarios – as the future becomes less certain, one way to consider additional possible outcomes is to model them. That might be particularly true for downside scenarios where a single scenario alone is often used today.
Relating variables indirectly – not all risks today will be readily captured by existing models, or be direct inputs into them. Rather than taking a purely qualitative approach, it might be possible to indirectly relate them to the models. For instance, if models take bond yields and interest rates but not inflation, figuring out the relationship and translating the inflationary movements into yields could allow them to be consumed by the models.
Considering new / revised models – to varying degrees, current models have demonstrated their limitations over the past year. Reviewing and potentially redeveloping them in the context of an evolving risk environment might help to ensure better performance of the next generation and allow new data, technology and modelling techniques to be leveraged.
Rethinking mean-reversion – ECLs under IFRS 9 are point-in-time and thus include expectations about what future economic conditions will be over the lifetime of each loan. While most banks project economic expectations for two, three or four years, periods beyond that are usually captured by reversion to the mathematical equivalent of ‘normal’ (that is, the mean). An important consideration for banks today is whether that normal (based on the past) has now changed (based on what the future could be).
Examining loss distributions – loss distributions allow identification of non-linearities (that is, where changes in severity of economic or other conditions give rise to disproportionate losses) and signal areas where closer attention might be required. As discussed further in the next section, examining them is an important way of ensuring that the impacts of potentially material risks (including new and emerging ones) have been captured.
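The mean-reversion idea above can be sketched numerically. The figures, the reversion horizon and the linear glide path are all illustrative assumptions, not a prescribed method:

```python
# Sketch of mean reversion for macro projections: explicit forecasts for the
# first few years, then a glide back to a long-run 'normal'. The linear blend,
# the horizon and all figures are illustrative assumptions.

def project_with_reversion(forecast, long_run_mean, reversion_years, lifetime_years):
    """Extend an explicit forecast path by reverting linearly to the mean."""
    path = list(forecast)
    last = forecast[-1]
    for t in range(1, lifetime_years - len(forecast) + 1):
        blend = min(t / reversion_years, 1.0)   # fraction of the gap to 'normal' closed
        path.append(last + blend * (long_run_mean - last))
    return path

# Unemployment projected explicitly for 3 years, then reverting to a 5% mean
path = project_with_reversion([7.0, 6.5, 6.0], long_run_mean=5.0,
                              reversion_years=2, lifetime_years=7)
print(path)  # [7.0, 6.5, 6.0, 5.5, 5.0, 5.0, 5.0]
```

The key judgement flagged in the text sits in the `long_run_mean` parameter: whether the ‘normal’ estimated from the past is still the right anchor for the future.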
3) Calculating scenario weights and loss distributions
“If you think nobody cares about you, try missing a couple of payments.” - Steven Wright
One of the tenets of IFRS 9 is that ECLs are meant to represent the probability-weighted losses across all possible scenarios that could arise. Practically speaking, the standard allows that, where there are many possible outcomes (as is usually the case), an entity can use a representative sample of the complete distribution. In other words, rather than running Monte Carlo or similar analyses to simulate the thousands or millions of potential outcomes, many banks choose to approximate the result using three to five scenarios instead. Of course, those representative scenarios are a subset of all possible ones, and so each should be weighted according to the band of scenarios whose outcome (the loss arising) it represents, not the probability of that single scenario occurring by itself.
As a (very simplified) example, consider the situation where all possible scenarios are placed in order of severity, and the:
- 0th percentile scenario is the worst case
- 100th percentile is the best case
- 10th percentile is chosen as the ‘downside’ scenario.
Within the constraints of the discrete scenarios already selected, if the loss arising in the 10th percentile scenario was considered to be representative of the loss arising for all scenarios between the 0th and 20th percentiles, that scenario would be expected to be given a 20% weighting for ECL measurement in order to be unbiased. That’s easy to confuse with a 10% weighting, which might seem appropriate based only on the scenario being at the 10th percentile; but, since the selected scenarios are chosen to be representative of the complete distribution, the total of the weightings applied must be 100%. For further illustration, see our FAQ 45.72.5 – How should weightings be determined for multiple forward-looking macroeconomic scenarios?.
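The band-based weighting described above can be sketched as follows; the percentile bands themselves are illustrative:

```python
# Sketch of representative-scenario weighting: each selected scenario stands
# in for a band of the full loss distribution, so its weight is the width of
# the band it represents, not the likelihood of that single scenario.
# The percentile bands below are illustrative.

bands = [
    # (scenario, lower percentile, upper percentile of the band it represents)
    ("downside", 0, 20),    # e.g. a 10th-percentile scenario covering the 0th-20th
    ("base",     20, 80),
    ("upside",   80, 100),
]

widths = {name: hi - lo for name, lo, hi in bands}
assert sum(widths.values()) == 100   # bands must cover the whole distribution

weights = {name: w / 100 for name, w in widths.items()}
print(weights)  # {'downside': 0.2, 'base': 0.6, 'upside': 0.2}
```

The assertion is the point: because the selected scenarios jointly represent the complete distribution, the band widths (and hence the weights) must total 100%, which is why the 10th-percentile scenario carries a 20% weighting rather than 10%.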
Things to think about in relation to these scenario weights and related loss distributions today include the following.
Don’t run on autopilot
In many cases, processes were established for incorporating multiple scenarios during benign economic times. Macroeconomic scenarios are set for base case, upside and downside, with pre-determined key variables (like unemployment, house prices, GDP and so on), and then estimated for what’s believed to be reasonably possible under each. At a time when there is significant uncertainty about the future economy, simply rolling this approach forward each period (on autopilot) might miss something important.
Consider starting with risks instead
There are two ways to ensure that material risks of loss are captured. The first (most often applied today) is to start with economic scenarios, calculate expected losses and then look at a loss distribution as a back-check. An alternative is to think first about the risks underlying the portfolios and the points at which they could give rise to significant disproportionate losses. Tipping points for losses could then be translated into broader economic scenarios, which could be used as the basis for determining losses. In other words, rather than developing scenarios and then having to check whether they’ve incorporated the relevant risks, a starting point in today’s (highly uncertain) environment might be the risks themselves (think things like a housing crash, tax reform and so on).
Revisit loss distributions
Loss distributions show how losses transpire under different conditions and are often typified by non-linearities. In other words, movements in losses are often disproportionate to movements in the economic variables giving rise to them. If you’re using a few scenarios to approximate all possible ones, then knowing where those non-linearities exist is important. While different methods are possible, oftentimes internal and external loss data is used to simulate losses under a number (say, 10 to 12) of economic scenarios, which then give a picture of how losses are distributed relative to economic conditions and the severity thereof. While they’re necessarily imperfect and judgemental, comparing the scenarios modelled to these distributions can ensure that nothing important gets inadvertently left out.
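A simple version of such a back-check can be sketched as below. The loss function is entirely made up, standing in for the bank’s own simulated loss data; the point is the marginal-step test for non-linearity:

```python
# Sketch of a loss-distribution back-check: simulate losses under a ladder of
# increasingly severe scenarios and flag severity steps where marginal losses
# jump disproportionately (non-linearities). The loss function is a made-up
# stand-in for losses simulated from internal and external data.

def simulated_loss(severity):
    """Hypothetical portfolio loss for a scenario severity between 0 and 1."""
    base = 100 * severity                          # losses roughly linear at first
    kink = 400 * max(0.0, severity - 0.6) ** 2     # tail losses accelerate
    return base + kink

severities = [i / 10 for i in range(11)]           # 11 scenarios, benign to severe
losses = [simulated_loss(s) for s in severities]

# Flag steps where the marginal loss jumps well beyond the preceding step
steps = [b - a for a, b in zip(losses, losses[1:])]
nonlinear = [severities[i + 1] for i in range(1, len(steps)) if steps[i] > 1.5 * steps[i - 1]]
print("Non-linear severity points:", nonlinear)
```

In this illustration the check flags the step at severity 0.8, suggesting that the region between the base and severe downside is where additional scenarios, or closer attention to weightings, might be warranted.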
Just because it hasn’t happened yet doesn’t mean it won’t
History is often our best predictor of the future and, for many risks, looking back at historical movements and when they’ve had a material impact on losses will be a big piece of the puzzle for estimating ECLs. Today, however, there’s a distinct possibility that the future might include things not seen before, or at least not in the data that we’ve got. Take, for example, climate change, inflation or tax increases. Given data constraints, it’s likely that they’ll require significantly more judgement than others. Even so, just because they haven’t happened yet (or in a while, at least) doesn’t mean that they can be ignored. Plus, it’s an accounting estimate and so subjectivity is alright, provided that it’s substantiated and unbiased.
Consider additional scenarios
One way to address the increasing uncertainty of what might transpire is to incorporate more possible scenarios. If you believe that a housing market correction, disruption of a certain industry, or other events are possible and could have a significant effect on losses, then adding scenarios that model them out might be worthwhile. Even if that means having multiple ‘downside’ scenarios, that’s okay, provided that appropriate weightings are applied to each in order to capture the proximate scenarios that they represent. Keep in mind, too, that both probability and magnitude matter, so lower-probability outcomes that could have a significant impact on losses can be just as important as higher-probability ones. While the purpose isn’t to incorporate stress-testing scenarios into ECLs, their results might help to inform where such disproportionate losses might emerge.
4) Triangulating with internal and external indicators of price
“Action expresses priorities.” - Mahatma Gandhi
Unlike fair value measurements, which focus on determining value from a third party’s perspective, ECLs are each entity’s own assessment of credit risk. In other words, they have more to do with how the bank would price a loan than with how much a third party would be willing to pay. While that means the bank is entitled to its own view, it also means it needs to demonstrate that this is indeed the case. To do that, IFRS 9 suggests looking at internal and external indicators of price. Relatively benign economic conditions leading up to the pandemic meant that these indicators haven’t mattered much yet, but we expect that to change as boards and audit committees home in on significant overlays and challenge their continued presence, given little evidence of default.
At a high level, the objective of the guidance is to ensure that actions taken (in pricing, adjudicating and so on) align with assumptions being made in the allowance estimation. In other words, they’re meant to check whether you’re saying one thing and doing another. Of course, it would be unrealistic to expect perfect alignment or precision, and directional consistency is more likely to be key. Some of the things to consider at the segment level or overall include:
- Expanding (contracting) spreads align with increasing (decreasing) ECLs
- Increasing (decreasing) originations and exposures align with decreasing (increasing) credit risk and ECLs
- Increasingly (decreasingly) stringent requirements for lending (for example, requiring guarantees, covenants and collateral) align with increasing (decreasing) ECLs
- Increasing (decreasing) exposure limits align with decreasing (increasing) ECLs
- Expanding (contracting) credit default spreads align with increasing (decreasing) ECLs
- Increasing (decreasing) originations and exposures by peers align with decreasing (increasing) credit risk and ECLs
- Increasing (decreasing) yields on the bank’s issued bonds might align with increasing (decreasing) credit risk on its portfolio
This list isn’t exhaustive, and other things to think about are market transactions and whether the bank or its competitors are entering or exiting certain segments, as well as the prices for these transactions which might indicate changes in risk appetite, differences between internal and external expected default frequencies, and so on.
These come with other caveats, too, like the fact that the risks of existing exposures might be different from those of new ones being originated. At the same time, not all indicators will necessarily apply (or make sense), and different indicators could point in different directions. Lastly, the purpose isn’t to mandate reams of new analysis, but rather to serve as a check that there are no apparent contradictions.
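As a sketch, the directional-consistency checks above might look something like the following; the movement figures and sign conventions are illustrative:

```python
# Sketch of a directional-consistency check between pricing/portfolio
# indicators and ECL movements. Figures and sign conventions are illustrative:
# spreads are expected to move with ECLs, while originations and exposure
# limits are expected to move against them.

def same_direction(a, b):
    """True when a and b moved in the same direction (or both were flat)."""
    sign = lambda x: (x > 0) - (x < 0)
    return sign(a) == sign(b)

ecl_change = +12.0   # hypothetical quarter-on-quarter ECL movement for a segment
checks = {
    "credit spreads":  same_direction(+0.35, ecl_change),    # widened as ECLs rose
    "CDS spreads":     same_direction(+0.10, ecl_change),    # widened as ECLs rose
    "originations":    same_direction(+4.0, -ecl_change),    # grew while ECLs rose
    "exposure limits": same_direction(-2.0, -ecl_change),    # tightened as ECLs rose
}

inconsistent = [name for name, ok in checks.items() if not ok]
print("Indicators to investigate:", inconsistent)
```

In this made-up quarter, originations grew while ECLs rose, so that indicator would be flagged for follow-up. As the text notes, a flag isn’t a contradiction in itself; only directional consistency, not precision, is the aim.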
5) Disclosures
“If you tell the truth, you don’t have to remember anything.” - Mark Twain
Estimates of ECLs have always been characterised by substantial judgement, complexity and uncertainty. While things have been improving, don’t expect those to go away anytime soon. For disclosures, particularly in an area where requirements aren’t always prescriptive, that raises a few things to think about, such as...
- More isn’t always better. As we’ve seen over the past year, loads of additional tables and narrative aren’t always helpful in ‘telling the story’. Instead, focus should be on how the estimate is developed, evaluated, communicated and understood internally and how that can be summarised externally.
- If the themes above (such as unwinds of overlays, capturing new and emerging risks, calculating scenario weights and loss distributions, and triangulating with internal and external indicators of price) are, indeed, evolving and important components of the bank’s ECL measurement in 2021, then disclosures of their specific impact might also be warranted.
- Overlays are fickle and likely require caution when making disclosures. While information about the relative subjectivity (which might be implied by the magnitude of overlays) is important, the term ‘overlay’ means different things to different people and so, without due caution, disclosures could easily create as much confusion as they do clarity. For example, consider two banks facing the same economic uncertainty. One applies judgement to increase the probability weight applied to its downside scenario, resulting in an increase in its ECL of 100. The second holds steady on its scenario weights, and instead uses a post-model adjustment to increase the ECL by 100. Same uncertainty, same judgement and same impact – but only the second bank is likely to disclose an ‘overlay’. Keep in mind that starting points pre-pandemic were uneven too, so comparing overlays now without historical context might also be misleading.
We expect provisions and default emergence to be volatile for the foreseeable future and that timing and effects of the economic recovery will vary by bank. If so, that’ll likely mean even more attention on disclosures that explain those movements and differences, and it might include continued calls for information about unweighted scenario ECLs, weights applied, and so on.
Some final thoughts
“Uncertainty is the only certainty there is” - John Allen Paulos
While the future is looking brighter in some parts of the world, a lot has changed. New risks – such as inflation, taxation, vaccine efficacy and so on – mean that the road forward is unlikely to be back the way we came. For ECLs, that means unwinding overlays and going back to previous models and data alone might not work. We are still faced with significant uncertainties, not least of which is what will happen when government support measures are withdrawn from vulnerable segments of the economy.
For those worried about the potential disconnect between overlays and the emergence of defaults, we believe that the answer lies more in the rigour of supporting analyses than it does in the passage of time. As we’ve seen, the most important consideration for overlays has to do with the specific factors and risks that underpin them, and the way these factors, risks and related uncertainties shift over time. Tying these explicitly to changes in overlays ensures that conscious reasoning, rather than intuition, is applied. That’s equally important when overlays are being added, held or unwound.
New and emerging risks might also warrant revisions in areas like segmentation, scenarios, weightings, models, assumptions and so on. Whether change is needed or not, a ‘good estimate’ shouldn’t be simply judged relative to the timely occurrence of anticipated events alone. Robust analysis of things like loss distributions and related risk drivers, and triangulation with the bank’s lending practices (including internal and external indicators of price), can be just as compelling.