Keeping AI in Line with a Properly Managed Model

Welcome, Shehzad, to our blog! He joined RDC last year, bringing many years of experience managing model risk with regulators and banks. He oversees our independent model governance unit, a role critical to exceeding the AI oversight expectations of our customers and their regulators.

Big data and advanced analytics are giving rise to widespread use of Artificial Intelligence (AI) and Machine Learning (ML), which have become the new norm in many fields. We see experts using these sophisticated tools to replicate human behaviour and even to predict the effects of disease-control measures during COVID-19. But how can we really trust the outcomes of AI/ML?

To answer this question, we should first understand that AI/ML systems use algorithms to make predictions. They are essentially what risk professionals and financial regulators call models, and like any model, they carry the risk of getting things wrong. This model risk must be properly managed.

How RDC Does Model Governance

We have a model risk management program that safeguards against AI/ML breakdowns. This is critical because our AI/ML solutions prevent the infiltration of bad actors into the financial system. Our program is based on best practices established by financial regulators, so we can help our financial customers meet regulatory expectations when they use our solutions.

The foundation of successful model risk management is managerial oversight combined with an independent review function that performs controls on models. This ensures that AI/ML models undergo critical review during development and continue to perform once in production. We maintain an independent unit, separate from model development, that performs these reviews and controls.

Our Transparent Model

The AI/ML model landscape covers a wide range of techniques with varying degrees of complexity. One often-cited criticism of AI/ML models is that they are a “black box” whose inner workings the user cannot inspect or control. While this may be true for some techniques, such as neural networks, less opaque techniques are also available, such as decision trees.
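
To make the contrast concrete, here is a minimal sketch (illustrative only, with invented features and data, not RDC's production code) showing how a decision tree's logic can be printed as explicit, human-readable rules, a level of transparency a typical neural network does not offer out of the box.

```python
# Illustrative sketch only: a decision tree whose decision logic can be
# printed as plain threshold rules. Feature names and data are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical screening features: transaction amount (in thousands)
# and number of prior adverse-media hits for the counterparty.
X = [[1, 0], [50, 0], [5, 3], [80, 4], [2, 1], [90, 0]]
y = [0, 0, 1, 1, 0, 1]  # 1 = flag for analyst review

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Every prediction can be traced back to explicit, readable rules.
print(export_text(model, feature_names=["amount_k", "adverse_hits"]))
```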

Our approach is to use models that are explainable, and to keep them under control by maintaining human oversight in the loop.

During pre-implementation, we test the models in collaboration with our customers and provide a proof of concept (PoC) showing how the models perform on their firm-specific data. In production, we perform ongoing model monitoring as part of our model validation. One of our strengths is that we have trained human analysts who assess the outcomes of the models. This “model audit” essentially benchmarks the model output against the assessment of the human expert.
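
As a rough illustration of what such benchmarking can look like (a sketch with invented labels, not our actual audit tooling), the snippet below compares model alerts against independent analyst decisions and reports their level of agreement.

```python
# Minimal sketch of benchmarking model output against human expert review.
# The labels below are invented; a real audit would use sampled production cases.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

model_flags   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # model: 1 = alert raised
analyst_flags = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]  # trained analyst's verdict

# Cohen's kappa measures agreement beyond chance; a low value would
# trigger a deeper review of the model.
print("Agreement (kappa):", cohen_kappa_score(model_flags, analyst_flags))
print("Confusion matrix:\n", confusion_matrix(analyst_flags, model_flags))
```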

Our experts are screening professionals who undergo rigorous training and quality control, and the results of these controls can be made available to our clients. We also catalogue all our model versions and retain the training and test data, so that results are auditable and traceable.
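
One simple way to achieve that traceability, sketched below with hypothetical names and file paths rather than our actual registry, is to record each model version together with content hashes of its training and test data, so any past result can be traced to the exact artefacts that produced it.

```python
# Hypothetical sketch of a model-version catalogue entry. Hashing the
# training and test data ties every result to the exact artefacts used.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

def file_sha256(path: str) -> str:
    """Content hash so a dataset can be verified later, byte for byte."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

@dataclass
class ModelRecord:
    model_name: str
    version: str
    trained_at: str
    train_data_sha256: str
    test_data_sha256: str

record = ModelRecord(
    model_name="screening-classifier",           # invented name
    version="2.3.1",
    trained_at=datetime.now(timezone.utc).isoformat(),
    train_data_sha256=file_sha256("train.csv"),  # assumed local files
    test_data_sha256=file_sha256("test.csv"),
)
print(json.dumps(asdict(record), indent=2))
```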

Stay tuned for more on specific governance-related controls for our AI/ML models, such as model audit, continuous learning and anomaly detection, in the coming weeks.
