XAI fuels Human AI to improve AML programs

Humans vs. machines, why not the best of both?

Theresa Bercich
2 min

The foundation of every relationship is trust. Intuitively, trust exists in our minds as a concept between humans, but we have also trusted machines for a long time – cars, airplanes, calculators, to name a few. Despite the exponential growth of technology integrated into our daily lives, skepticism remains when it comes to intelligent machines. Establishing that trust takes conscious effort: Explainable Artificial Intelligence (XAI) acts as a bridge between humans and AI by making model decisions transparent.

At the core of every Anti-Money Laundering (AML) program is a detection engine that identifies suspicious activity. Legacy AML systems tend to use a rule-based approach, whereas Lucinity utilizes a proprietary behavioral approach to detect illicit activity. Rule-based approaches are transparent to analysts but cannot spot complex money laundering behaviors, and they tend to flood compliance departments with false positives.
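To make the limitation concrete, here is a minimal sketch of a static, rule-based check (a hypothetical threshold rule written for this article, not Lucinity's logic). It is fully transparent, yet it misses deposits structured just below the threshold and still raises alerts on large but possibly legitimate transactions:

```python
# Hypothetical rule-based check: flag any single cash transaction at or above
# a fixed threshold. Transparent, but blind to behavior spread across many
# smaller transactions.
THRESHOLD = 10_000  # illustrative value, not a regulatory reference

def flag_transaction(amount: float, is_cash: bool) -> bool:
    """Return True when one transaction trips the static rule."""
    return is_cash and amount >= THRESHOLD

transactions = [
    {"amount": 9_500, "is_cash": True},   # structured below the threshold -> missed
    {"amount": 9_800, "is_cash": True},   # missed again
    {"amount": 12_000, "is_cash": True},  # large but possibly legitimate -> alert
]

alerts = [t for t in transactions if flag_transaction(t["amount"], t["is_cash"])]
print(f"{len(alerts)} alert(s) raised")  # only the single large deposit is flagged
```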

A new detection paradigm – actions from XAI

At Lucinity, AML regulations are translated into understandable behaviors and detected through algorithms that efficiently find illegal patterns in the data. Lucinity’s behavioral detection engine employs a range of Machine Learning and Deep Learning models to estimate whether actors are conducting money laundering and to set up investigations for human input and confirmation. These models implicitly perform hierarchical feature abstraction in a high-dimensional space built from numerous data points and interactions. The advancement of Deep Learning has produced algorithms that are challenging the status quo across multiple domains, and AML is no different.
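As a generic illustration of the idea (synthetic data and made-up feature names, not Lucinity's proprietary engine), a behavioral model scores an actor from aggregated behavioral features rather than applying fixed rules:

```python
# Generic sketch of behavioral detection: a small neural network trained on
# per-actor behavioral features such as transaction velocity, counterparty
# diversity, and cash intensity. All data and labels here are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 3))                   # columns: velocity, counterparties, cash ratio
y = (X[:, 0] + 0.5 * X[:, 2] > 1.5).astype(int)   # synthetic "suspicious" label

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0),
)
model.fit(X, y)

# Score a new actor and open a case for human review above a risk threshold.
new_actor = np.array([[2.1, -0.3, 1.2]])
risk = model.predict_proba(new_actor)[0, 1]
if risk > 0.8:
    print(f"open investigation, model risk score = {risk:.2f}")
```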

Sophisticated AML detection is worthless unless banks can effectively act on the findings. Until now, the industry has struggled to democratize the benefits of deep technology in AML and make them actionable. The recent development of new XAI techniques allows Deep Learning models to be understood and explained. XAI is the mathematical concretization of why a model arrived at a specific prediction given its input variables: it allows data scientists to extract model-specific values that describe how and why the model reached a particular decision or prediction. XAI provides the building blocks for a common communication layer between investigators and AI, expressed in mathematics. In simpler terms, the results of Deep Learning models can now be used effectively by a much wider audience, without special skills or training.
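As one concrete example of this kind of technique (an illustration, not a description of Lucinity's implementation), the open-source shap library can attribute a tree model's risk score back to the input features that drove it. The data and feature names below are synthetic:

```python
# Illustrative use of SHAP values, a common XAI technique: decompose one
# prediction into signed per-feature contributions. Synthetic data only.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
feature_names = ["txn_velocity", "cash_ratio", "new_counterparties"]
X = rng.normal(size=(500, 3))
y = (X[:, 1] + X[:, 2] > 1.0).astype(int)

model = GradientBoostingClassifier(random_state=1).fit(X, y)

# TreeExplainer computes per-feature contributions for each prediction
# (for this model, in the raw log-odds units of the classifier).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.3f}")  # signed contribution to this actor's score
```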

AI, meet Human

Lucinity’s Human AI contextualizes this mathematical output in an intuitive UI that displays the information investigators need. Detecting complex suspicious activity while retaining practical explainability of the models increases both the share of relevant cases that are investigated and the investigator’s understanding of each case. Analysts can conduct robust analysis and draw insights from it quickly, reducing processing times and increasing confidence in decisions.
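A simplified sketch of that last step, turning signed feature attributions into plain-language context an investigator can read next to the risk score (a hypothetical helper written for this article, not Lucinity's UI code):

```python
# Hypothetical helper: render feature attributions as short, readable statements.
def explain_for_investigator(risk_score: float, attributions: dict[str, float], top_n: int = 3) -> str:
    # Rank features by the magnitude of their contribution, keep the top few.
    drivers = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
    lines = [f"Risk score: {risk_score:.0%}. Main drivers:"]
    for feature, value in drivers:
        direction = "increased" if value > 0 else "decreased"
        lines.append(f"- {feature.replace('_', ' ')} {direction} the score by {abs(value):.2f}")
    return "\n".join(lines)

print(explain_for_investigator(
    0.87,
    {"txn_velocity": 0.41, "cash_ratio": 0.22, "new_counterparties": -0.05},
))
```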

By harnessing the advances described above, Lucinity can cut the time spent on compliance procedures in half while simultaneously increasing regulatory coverage. The fusion of human and artificial intelligence is the power of Human AI: it allows each side to play to its strengths. In Human AI, the technology works with the compliance officer, not against them, building trust over time and continuously improving.

