Getting Real About AI
As part of ACFCS FinCrime Virtual Week, I served on a panel with Max Lerner of State Street and David Creamer of Scotia Bank, moderated by Jim Richards of RegTech Consulting, where we shared our thoughts and experiences on deploying AI-based solutions for real: in production, in mission-critical processes, and in place of human analysts. I was pleased to confirm that RDC remains at the forefront of using AI solutions, but that we're not alone. My fellow panelists discussed how both State Street and Scotia Bank have deployed solutions similar to AI Review, in that they use an "audit trail" of knowledge worker decisions to train and test predictive models that automate a good deal of that decision-making. We chatted about what worked and what didn't when using predictive analytics for compliance processes at financial institutions. Here are a few important takeaways:
Start with a clear problem statement and value proposition
This is the most important thing to keep in mind. The most straightforward (and frequently the most valuable) type of AI-based solution is using a predictive model to increase the efficiency and effectiveness of repetitive human decision-making. For example, “Automate level-1 screening analyst activity” or “Automate the creation of Suspicious Activity Reports from transaction monitoring.” We all agreed that practicality was paramount, and that it’s easy to be lured into experimenting with the data, or seeing what the data can tell us by performing open-ended analysis. You will gain valuable insights into your data by going through one or two simple solutions; those insights will suggest the next steps. Start small and build up to more complex solution approaches. Keep in mind that unless you want to fund a research organization, like Google or Stanford, your data scientists aren’t going to be inventing new approaches to machine learning; they’ll be using techniques developed by researchers to solve real problems.
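The audit-trail approach described above can be sketched in a few lines. This is a hypothetical illustration, not RDC's actual pipeline: the features, data, and confidence thresholds are all invented, and a real system would use historical analyst dispositions rather than synthetic labels. The key idea survives, though: train a classifier on past decisions, auto-decide only the cases where the model is confident, and route the rest to human analysts.

```python
# Hypothetical sketch: learn level-1 screening decisions from an
# "audit trail" of past analyst dispositions. All data here is
# synthetic stand-in data, and the feature set is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Illustrative alert features: fuzzy name-match score,
# date-of-birth match flag, jurisdiction risk score
X = np.column_stack([
    rng.uniform(0, 1, n),
    rng.integers(0, 2, n).astype(float),
    rng.uniform(0, 1, n),
])
# Stand-in for historical analyst decisions (1 = escalate, 0 = dismiss)
y = ((0.6 * X[:, 0] + 0.2 * X[:, 1] + 0.2 * X[:, 2]
      + rng.normal(0, 0.1, n)) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Auto-decide only when the model is confident; everything else
# still goes to a human analyst for review
proba = model.predict_proba(X_test)[:, 1]
confident = (proba < 0.1) | (proba > 0.9)
print(f"auto-decided: {confident.mean():.0%} of alerts")
```

The confidence gate is the practical heart of this pattern: even a modest model pays for itself if it safely clears the obvious dismissals and escalations while leaving the ambiguous middle to people.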
A corollary to the above: make sure that the subject-matter experts and the data scientists understand one another. It's likely there will have to be some education on both sides: the SMEs need to understand some basics about predictive analytics, and the data scientists need to have a working knowledge of the business. Without mutual understanding, the solution produced will almost certainly meet neither the business requirements nor expectations. This is not a new phenomenon; it was an already-solved problem in software engineering before machine learning became popular. Product companies address this through product management, and other types of businesses through solution analysts. In either case, I like to say that these folks act like marriage counselors: "When he says this, he means that."
Even small firms can play in this space. Keeping in mind the advice above, you don't need a large team of data scientists, nor do you need anyone who's very senior. What you do need is someone who keeps up with the advances in machine learning – particularly in Python. I've never seen a space change so rapidly; the number of new libraries and useful open-source solutions coming online each month is almost overwhelming. A junior data scientist who is energetic, curious and capable of understanding the business constraints will be capable of producing a relatively simple, but valuable, solution like those described above. If data scientists aren't part of your hiring plan, SaaS solutions that provide basic predictive capabilities over your data are beginning to spring up as well. However, always keep in mind regulatory scrutiny. Many pure machine learning companies have never considered how to deal with regulators. The best thing you can do – whether you're large or small or in-between – is to engage the regulators early and often. Tell them what you'd like to accomplish, and keep them apprised of how you intend to go about it. I like to compare regulators to mothers-in-law: make an effort, she's probably nicer than you think, and she's not going anywhere, so make the best of it.
Treat data quality as a separate project
Too many data scientists make the mistake of cleaning up the data before they train and test their predictive models. Then they're surprised and disappointed when the model doesn't work well on production data that's incomplete, contradictory or otherwise imperfect. We've had very good results using techniques that perform well even on dirty data. Data quality improvement is a worthy goal, if the business benefits are clear, but producing better predictive models is rarely enough of a plus to warrant the effort. (A few examples of worthwhile data quality improvements: "We're sending out 40% incorrect billing line items every month to our customers, and need to employ 200 workers to respond to customer calls and correct them" or "Our monthly North America and EMEA financial statements cannot be reconciled, except by hand.")
AI is real, AI is the future
My fellow panelists and I agreed on some key points:
- Utilizing AI doesn't have to be onerous or consume copious amounts of manpower or money; in fact, it saves both.
- The efficiency and effectiveness that businesses are clamoring for are definitely served by AI. We've never seen that more clearly than in the past six months.
- Find the right people to lead … and curious and smart beats a senior title any day.
- Data is always important … and how you use it is critical.