AI-Powered Financial Crimes: Are Bad Actors Already Ahead of the Game?

In this blog, we discuss whether AI is already being used by bad actors to commit financial crimes and perform fraud against financial institutions and their customers.

Francisco Mainez
4 min
It takes time to persuade people to do even what is for their own good
- Thomas Jefferson

Introduction

As Artificial Intelligence (AI) becomes increasingly accessible through tools such as ChatGPT, questions are mounting about the role this technology will play in our society. The media coverage surrounding these developments has prompted a necessary discussion about the potential benefits and risks associated with AI.

That debate has overshadowed a more alarming discussion: whether AI is already being used by bad actors to commit financial crimes, either to disguise the proceeds of their predicate offenses or to defraud financial institutions and their customers.

New ways to commit old crimes

AI is a powerful technology that allows machines to perform tasks that typically require human intelligence. For instance, it can help create self-driving cars, develop drug therapies, compose music, and even detect suspicious financial activity like money laundering, fraud, or sanctions evasion. However, this revolutionary technology also presents an opportunity for bad actors to sow even more disruption.

While the compliance technology industry has made significant progress in detecting fraud, AI has given criminal organizations a new angle of attack, one that targets the weakest link between financial institutions and their customers: trust.

AI is making it easier for scammers to create convincing social engineering campaigns such as phishing messages. In the past, poor grammar or spelling mistakes were clear signs that a message was fake; now, with the help of AI, anyone can craft believable messages for phishing, smishing (SMS phishing), or business email compromise (BEC) attacks. Fraudsters can also use AI to generate realistic-looking deepfake images and videos.

This challenges any framework that uses detection as its main pillar. If an AI-powered control warns a victim that a transaction is potentially fraudulent, but the victim dismisses the warning and approves the transfer anyway, there is little the financial institution can do.

The same applies to other areas, such as money laundering: the same techniques designed to detect anomalies in transaction behavior can be turned around to replicate "good" or expected behavior, supporting a wide array of typologies, from tax evasion to money mules.
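To make that risk concrete, below is a minimal, purely illustrative sketch of a behavior-based control: a simple z-score rule that flags transfers deviating from a customer's historical baseline. The data and threshold are invented for illustration; real transaction monitoring is far more sophisticated, but the structural weakness is the same: a transaction engineered to resemble the baseline passes unflagged.

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Flag a transfer whose amount deviates sharply from the customer's
    historical baseline (a toy z-score rule, used here only to illustrate
    the principle behind behavior-based controls)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

history = [120.0, 95.0, 140.0, 110.0, 130.0]  # hypothetical past transfers

print(is_anomalous(history, 5000.0))  # True: an obvious outlier is flagged
print(is_anomalous(history, 125.0))   # False: activity that mimics the
                                      # expected baseline slips through
```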

Need for an intelligence-led approach

Interestingly, criminal use of AI can be mitigated by an approach that draws on principles from a discipline institutionally established more than a century ago: intelligence [1].

At the heart of any intelligence analysis process are three activities: collecting data, extracting insights, and sharing them to enhance organizational situational awareness and support decision-making. Incorporating these processes into a compliance framework is a crucial step for financial institutions to stay ahead of the game in utilizing AI technology. In practical terms, this means implementing proactive measures before a transaction or payment occurs.

One way to leverage intelligence is by implementing measures such as maintaining an up-to-date financial crimes typologies library, breaking down data silos (e.g., between fraud and AML data), and enhancing collaboration between financial institutions and law enforcement. For example, by sharing data containing indicators of anomalous activity with civil intelligence agencies and police forces, fraudulent websites can be taken down before scammers target their victims. This approach replaces the classic "follow the money" with a more proactive "pre-empt the scam," as the sketch below illustrates.
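As a thought experiment, here is what a proactive, pre-transaction check against shared intelligence might look like. The indicator feed, field names, and account numbers are assumptions invented for illustration, not a real data-sharing API.

```python
# Hypothetical feed of indicators contributed by peer institutions and
# law enforcement. Everything here is illustrative, not a real source.
SHARED_INDICATORS = {
    "payee_accounts": {"GB00FAKE1234567890"},
    "referral_domains": {"secure-bank-refunds.example"},
}

def pre_transaction_check(payee_account: str, referral_domain: str = "") -> str:
    """Screen a payment against shared indicators *before* it executes,
    rather than tracing the funds after the fact."""
    if payee_account in SHARED_INDICATORS["payee_accounts"]:
        return "BLOCK: payee matches a shared fraud indicator"
    if referral_domain in SHARED_INDICATORS["referral_domains"]:
        return "HOLD: payment initiated via a reported scam site"
    return "ALLOW"

print(pre_transaction_check("GB00FAKE1234567890"))
# -> BLOCK: payee matches a shared fraud indicator
```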

Using technology to understand bad AI

Adopting an intelligence-led financial crime risk framework involves several key areas, including setting the right policies and training staff. Technology providers have a key role to play in helping users leverage and understand AI to detect suspicious behavior.

A solid starting point for the intelligence-led approach is to utilize applications that can display, make available, and, more importantly, combine multiple data sets within a single platform. When presenting customer data, it is important to move beyond a simple list of fields and adopt an "Actor Intelligence" layout [2] instead (see Fig. 2 below).
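To give a feel for what combining siloed data sets at the actor level could look like, here is a minimal sketch that merges KYC, fraud, and AML records into one profile. The schemas and identifiers are invented for illustration and are not Lucinity's actual data model.

```python
# Hypothetical siloed records keyed by a shared actor (customer) ID.
kyc = {"actor-7": {"name": "A. Example", "risk_rating": "medium"}}
fraud_alerts = [{"actor": "actor-7", "type": "card-not-present"}]
aml_alerts = [{"actor": "actor-7", "type": "structuring"}]

def build_actor_view(actor_id: str) -> dict:
    """Combine KYC, fraud, and AML data into a single actor-level view,
    so an analyst sees context rather than isolated alerts."""
    return {
        "actor": actor_id,
        "profile": kyc.get(actor_id, {}),
        "alerts": [a for a in fraud_alerts + aml_alerts if a["actor"] == actor_id],
    }

print(build_actor_view("actor-7"))
# -> one profile combining identity, fraud history, and AML history
```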

In addition to displaying and combining multiple data sets, the next crucial capability is the ability to evolve and adapt to the fast-moving environment of financial crime. This keeps compliance tools abreast of suspicious behavior, eroding criminals' traditional "two steps ahead of the game" advantage.

Finally, it is crucial for tools to provide integrated, effective monitoring across the entire end-to-end customer journey. This enables faster responses, easier training, and more adaptable processes, instead of forcing teams to extract, transform, and load data across multiple platforms.
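As a rough illustration of journey-wide monitoring, the sketch below runs a single monitor over events from different stages of a hypothetical customer journey. The stage names, rules, and thresholds are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Event:
    actor: str
    stage: str   # e.g., "onboarding", "transacting" (illustrative stages)
    detail: dict

def monitor(events: list[Event]) -> list[str]:
    """A single monitor spanning the whole customer journey, instead of
    separate tools (and ETL jobs) for each stage."""
    findings = []
    for e in events:
        if e.stage == "onboarding" and e.detail.get("doc_mismatch"):
            findings.append(f"{e.actor}: identity document mismatch")
        if e.stage == "transacting" and e.detail.get("amount", 0) > 10_000:
            findings.append(f"{e.actor}: large transfer flagged for review")
    return findings

events = [
    Event("actor-7", "onboarding", {"doc_mismatch": True}),
    Event("actor-7", "transacting", {"amount": 15_000}),
]
print(monitor(events))  # both findings surface in one place
```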

Trust is a valuable asset that takes time to earn but can quickly be lost and exploited by bad actors. To maintain the trust between financial institutions and their customers, it is crucial to use tools that are built for compliance and facilitate a proactive intelligence approach. By doing so, financial institutions can provide the necessary assurance to their customers and ultimately defeat the criminal use of the same technology.


[1] The British Secret Service Bureau, founded in 1909 (known as the SIS from c. 1920), was the first independent, interdepartmental agency with full control over all British government espionage activities.

[2] Learn more about Lucinity's Actor Intelligence Solution

Fig. 2: Lucinity's Actor Intelligence creates contextual narratives and data displays from alerts and case-based compliance visualizations to automate alert and human response management.
