PD, LGD and EAD Modelling Framework
Translate policy into model architecture using PD, LGD, and EAD components, term-structure logic, calibration discipline, and portfolio fit, while keeping the result explainable.

Model architecture should remain readable
Probability of default, loss given default, and exposure at default are the technical core of many ECL frameworks, but the value of that architecture depends on whether the assumptions remain understandable. The objective is not to create the most elaborate parameter set available. The objective is to build a model structure that fits the portfolio, reflects observed behaviour, and can still be challenged by management, validators, and auditors.
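To make the relationship between the three components concrete, here is a minimal sketch of how per-period PD, LGD, and EAD can combine into a discounted lifetime ECL. The per-period inputs, the single discount rate, and the function name are illustrative assumptions, not a prescribed calculation.

```python
# Sketch: combining PD, LGD and EAD into a discounted lifetime ECL.
# All inputs below are illustrative; real frameworks segment and
# calibrate each component before combining them.

def lifetime_ecl(marginal_pds: list[float], lgds: list[float],
                 eads: list[float], discount_rate: float) -> float:
    """ECL = sum over periods t of PD_t * LGD_t * EAD_t, discounted.

    marginal_pds[t] is the unconditional probability of defaulting
    in period t+1; lists must be the same length.
    """
    return sum(pd * lgd * ead / (1 + discount_rate) ** (t + 1)
               for t, (pd, lgd, ead) in
               enumerate(zip(marginal_pds, lgds, eads)))

ecl = lifetime_ecl(marginal_pds=[0.02, 0.018],
                   lgds=[0.45, 0.45],
                   eads=[1000.0, 800.0],
                   discount_rate=0.05)
```

Keeping the combination this transparent is part of what makes the final allowance challengeable: each term can be traced back to its own evidence base.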
PD should reflect deterioration in a disciplined way
PD design needs to answer how risk changes through time, how term structure is handled, how point-in-time information is introduced, and how the estimate is calibrated to the data actually available. Those choices should be visible. A model that cannot explain its relationship to default history, portfolio segmentation, and current conditions is difficult to defend even if the mathematics is sophisticated.
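As a simple illustration of term structure and point-in-time adjustment, the sketch below builds marginal and cumulative PD curves from a single annual through-the-cycle PD. The multiplicative point-in-time scalar is a deliberately simple assumption for illustration; institutions use a range of techniques for this step.

```python
# Sketch: a lifetime PD term structure from an annual PD, with a simple
# point-in-time scalar. The scalar approach and parameter values are
# illustrative assumptions, not a recommended methodology.

def pd_term_structure(annual_pd: float, years: int,
                      pit_scalar: float = 1.0):
    """Return (marginal_pds, cumulative_pds) per year.

    marginal_pds[t] is the unconditional probability of defaulting
    in year t+1 (i.e. conditional PD times survival to that year).
    """
    adjusted_pd = min(annual_pd * pit_scalar, 1.0)
    survival = 1.0
    marginal, cumulative = [], []
    for _ in range(years):
        m = survival * adjusted_pd   # unconditional marginal PD this year
        survival -= m                # probability of surviving past it
        marginal.append(m)
        cumulative.append(1.0 - survival)
    return marginal, cumulative

marginal, cumulative = pd_term_structure(annual_pd=0.02, years=3,
                                         pit_scalar=1.25)
```

Because every step is explicit, the curve's relationship to default history and current conditions can be inspected line by line, which is the point of the discipline described above.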
LGD should connect loss severity to recovery reality
LGD is often where operational detail matters most. Recoveries, collateral realisation, timing, costs, and secured versus unsecured behaviour all influence how severe loss really is after default. Institutions need to decide whether a simple rate is proportionate or whether a more differentiated view is necessary. Either way, the rationale should be clear enough to survive review.
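The operational detail can be made explicit with a workout-style sketch: discount post-default recoveries net of costs back to the default date, then express the shortfall against EAD. The discount rate, cost treatment, and cash-flow timing here are simplifying assumptions for illustration only.

```python
# Sketch: a workout LGD from discounted post-default recovery cash flows.
# Discount rate, timing convention and the [0, 1] floor/cap are
# illustrative assumptions.

def workout_lgd(ead: float, recoveries: list[float], costs: list[float],
                discount_rate: float) -> float:
    """LGD = 1 - (PV of recoveries net of costs) / EAD.

    recoveries[t] and costs[t] are cash flows t+1 periods after
    default; both lists must be the same length.
    """
    pv = sum((r - c) / (1 + discount_rate) ** (t + 1)
             for t, (r, c) in enumerate(zip(recoveries, costs)))
    recovery_rate = max(min(pv / ead, 1.0), 0.0)  # keep within [0, 1]
    return 1.0 - recovery_rate

lgd = workout_lgd(ead=100.0, recoveries=[30.0, 40.0], costs=[2.0, 3.0],
                  discount_rate=0.05)
```

A structure like this makes the secured-versus-unsecured question concrete: collateral realisation simply changes the size and timing of the recovery cash flows fed in.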
EAD should reflect how exposure behaves, not only where it starts
For funded assets, amortisation, prepayment, and contractual tenor may drive the exposure path. For revolving or contingent facilities, utilisation and conversion behaviour can matter more. Good EAD design explains how the balance evolves through the period relevant to the ECL estimate and how that logic connects to the rest of the framework.
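The two behaviours can be sketched side by side: a straight-line run-off for a funded, amortising asset, and a credit conversion factor (CCF) for the undrawn headroom of a revolving facility. The straight-line schedule and the CCF value are illustrative assumptions; real portfolios need prepayment and utilisation evidence behind these choices.

```python
# Sketch: two simple exposure paths. Straight-line amortisation and a
# fixed CCF are illustrative assumptions, not modelling recommendations.

def amortising_ead(balance: float, periods: int) -> list[float]:
    """Exposure at the start of each period under straight-line run-off."""
    repayment = balance / periods
    return [balance - repayment * t for t in range(periods)]

def revolving_ead(drawn: float, limit: float, ccf: float) -> float:
    """Drawn amount plus a CCF share of the undrawn headroom."""
    return drawn + ccf * (limit - drawn)

path = amortising_ead(balance=1200.0, periods=4)
ead = revolving_ead(drawn=400.0, limit=1000.0, ccf=0.5)
```

Keeping the balance path as an explicit series is what lets EAD connect cleanly to the per-period PD and LGD terms elsewhere in the framework.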
Calibration, validation, and limitation notes matter
A defensible model architecture does not hide its limitations. It records where data depth is thin, where proxy assumptions are used, where overlays may later be needed, and how periodic validation or backtesting will be performed. That creates a better relationship between the model and the governance around it.
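As one small example of what periodic backtesting can look like, the sketch below compares the mean predicted PD of a segment with its observed default rate. The flat tolerance threshold is an illustrative assumption; formal validation frameworks use statistical tests rather than a fixed band.

```python
# Sketch: a minimal PD backtest for one segment. The tolerance band is
# an illustrative assumption; real frameworks use formal tests.

def pd_backtest(predicted_pds: list[float], defaults: list[int],
                tolerance: float = 0.01) -> dict:
    """Compare mean predicted PD with the observed default rate.

    defaults[i] is 1 if account i defaulted in the observation
    window, else 0; lists must be the same length.
    """
    n = len(predicted_pds)
    predicted = sum(predicted_pds) / n
    observed = sum(defaults) / n
    return {
        "predicted_rate": predicted,
        "observed_rate": observed,
        "within_tolerance": abs(predicted - observed) <= tolerance,
    }

result = pd_backtest(predicted_pds=[0.02, 0.03, 0.01, 0.04],
                     defaults=[0, 0, 0, 1])
```

Recording results like these alongside the limitation notes gives governance something concrete to review, and flags early where an overlay may later be needed.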
What this framework should achieve
Visitors should leave this page with a practical understanding of how modelling choices influence the explainability of the final allowance. They should see that ECL Square is designed to support not only the calculation logic, but also the parameter workflow, validation trail, scenario interaction, and the management narrative that sits around the number.
