The foundation of every relationship is trust. Intuitively, trust exists in our minds as a concept between humans, but we have also trusted machines for a long time – cars, airplanes, and calculators, to name a few. Despite the exponential growth of technology in our daily lives, skepticism remains when it comes to intelligent machines. Establishing trust requires a conscious effort to use Explainable Artificial Intelligence (XAI) as a bridge between humans and AI, leveraging a newfound transparency in AI.
At the core of every Anti-Money Laundering (AML) program is a detection engine that identifies suspicious activity. Legacy AML systems tend to use a rule-based approach, whereas Lucinity utilizes a proprietary behavioral approach to detect illicit activity. Rule-based approaches are transparent for analysts but cannot spot complex money laundering behaviors. Furthermore, legacy approaches tend to flood compliance departments with false positives.
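The gap between the two approaches can be seen in a toy example. The functions and thresholds below are illustrative assumptions, not Lucinity's engine: a fixed-threshold rule misses "structuring" (many deposits kept just under a reporting limit), while even a simple behavioral view of the same actor flags it.

```python
# Toy contrast between a rule-based alert and a behavioral heuristic.
# Illustrative only -- the threshold, margin, and logic are assumptions,
# not Lucinity's proprietary detection engine.

def rule_alert(transactions, threshold=10_000):
    # Legacy rule: alert only if a single transaction exceeds the threshold.
    return any(t > threshold for t in transactions)

def behavioral_alert(transactions, threshold=10_000, margin=0.1):
    # Behavioral view: several deposits clustered just under the threshold
    # form a classic structuring pattern, even though no single rule fires.
    near_limit = [t for t in transactions if threshold * (1 - margin) < t <= threshold]
    return len(near_limit) >= 3

structuring = [9_500, 9_800, 9_700, 9_600]  # all under the 10k limit
print(rule_alert(structuring))        # the legacy rule sees nothing
print(behavioral_alert(structuring))  # the behavioral view raises an alert
```

Real behavioral models score many such patterns jointly rather than one hand-written heuristic, but the asymmetry is the same: rules are transparent yet brittle, behavior-based scoring catches what rules are blind to.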
A new detection paradigm – actions from XAI
At Lucinity, AML regulations are translated into understandable behaviors and detected through algorithms that efficiently find illegal patterns in the data. Lucinity’s behavioral detection engine employs various Machine Learning and Deep Learning models to estimate whether actors are conducting money laundering and to set up investigations for human input and confirmation. These models implicitly perform hierarchical feature abstraction within a high-dimensional space based on numerous data points and interactions. The advancement of Deep Learning has resulted in algorithms that are challenging the status quo across multiple domains. AML is no different.
Sophisticated AML detection is worthless unless banks effectively leverage the findings. Until now, we struggled to democratize the benefits of deep technology in AML and make it actionable. The recent development of new XAI techniques allows Deep Learning models to be understood and explained. XAI is the mathematical concretization of why a model arrived at a specific prediction given its input variables. It allows data scientists to extract model-specific values that formulate how and why a model came to a particular decision or prediction. XAI provides the building blocks between investigators and AI through a common communication layer in the form of mathematics. In simpler terms, results from deep learning models can now be used effectively by a much wider audience, without special skills or training.
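What "mathematical concretization" means can be made concrete with a minimal sketch. Assume, purely for illustration, a linear risk-scoring model: for a linear model, the attribution w_i * (x_i - baseline_i) is the exact per-feature contribution relative to a baseline, and the contributions sum to the difference in scores (the additivity property that attribution methods such as SHAP generalize to nonlinear models). The feature names and weights below are hypothetical.

```python
# Minimal sketch of feature attribution for a hypothetical linear risk model.
# For score(x) = b + sum_i w_i * x_i, the contribution of feature i relative
# to a baseline input is w_i * (x_i - baseline_i); contributions sum exactly
# to score(x) - score(baseline). Weights and feature names are illustrative.

def score(weights, bias, x):
    return bias + sum(w * v for w, v in zip(weights, x))

def attribute(weights, x, baseline):
    return [w * (v - b) for w, v, b in zip(weights, x, baseline)]

weights = [0.8, 0.5, 1.2]      # txn velocity, cash ratio, peer deviation
bias = -1.0
baseline = [0.1, 0.2, 0.0]     # "typical" customer behavior
x = [0.9, 0.7, 0.6]            # the flagged actor

contribs = attribute(weights, x, baseline)
# Additivity check: attributions fully account for the score difference.
assert abs(sum(contribs) - (score(weights, bias, x) - score(weights, bias, baseline))) < 1e-9
for name, c in zip(["txn velocity", "cash ratio", "peer deviation"], contribs):
    print(f"{name}: {c:+.2f}")
```

For deep models the per-feature values are estimated rather than read off the weights, but the output has the same shape: a ranked, signed breakdown of which behaviors pushed the score up, which an investigator can read without knowing the model internals.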
Justin Bercich, PhD, Head of AI, Lucinity
At Lucinity, we champion the concept of Human AI, which synergistically evolves the relationship between humans and machines to build trust and advance our joint capabilities.
AI, meet Human
Lucinity’s Human AI contextualizes mathematical data in an intuitive UI that displays logical information relevant to investigators. The ability to detect complex suspicious activity while retaining practical explainability of the models increases both the share of investigated cases that are relevant and the investigator’s understanding of each case. The analyst can conduct a robust analysis and draw insights from it quickly, reducing processing times and increasing confidence in decisions.
By harnessing the technological advancements covered above, Lucinity can halve the time spent on compliance procedures while simultaneously increasing regulatory coverage. The fusion of human and artificial intelligence is the power of Human AI: it allows each side to play to its strengths. In Human AI, the technology works with the compliance officer, not against them, building trust over time and continuously improving.
Humans vs. machines, why not the best of both?