From Solo to Copilot: The Role of Generative AI in Elevating FinCrime Prevention

In this blog, Lucinity summarizes key insights from its recent panel discussion at Sibos 2023 in Toronto, exploring the shift from solo analysts to a "copilot" approach in which human expertise is augmented by generative AI.


The Shift from Solo Analysts to Copilots 

The field of financial crime prevention is evolving rapidly, but it is also becoming more complex and less efficient. Traditionally, human-centric approaches have dominated the landscape: analysts sift through vast data sets and juggle multiple tools to investigate potential financial crimes. These traditional approaches are struggling to keep pace with growing data volumes, complex regulations, and the need for speed and accuracy in decision-making. 

Bottlenecks in Traditional Approaches 

Despite significant investment in human resources, many financial institutions find themselves falling short of both their own objectives and regulatory expectations when it comes to combating financial crime. The following points outline the challenges that render traditional methods increasingly ineffective. 

  1. Resource Intensity: Significant human resources are dedicated to financial crime detection, yet institutions often fail to achieve the desired outcomes or meet regulatory expectations. 
  2. Standardization: Maintaining a consistent standard becomes exceedingly difficult when hundreds or thousands of analysts review cases; even given the same information, different analysts can reach different conclusions. 
  3. Speed and Accuracy: Analysts inundated with an overwhelming number of cases suffer fatigue, make errors, and produce inconsistent reviews. 

Data Overload  

The financial crime industry faces many challenges, including massive data sets, fragmented information sources, and a high volume of false-positive alerts generated by traditional rule-based systems. This creates an overwhelming workload for financial crime analysts who must summarize vast amounts of complex data, often leading to errors or inconsistencies in their reports.  

It is already extremely difficult to visually understand the pattern of a customer’s transactional activity over 12, 18, or 24 months. With enough time and focus, an analyst could build out this picture for a single customer; doing so 200 times a day is simply not scalable. 

Generative AI to the Rescue 

Generative AI has the potential to be a game-changer in three main areas: 

  1. Prevention: Before onboarding a customer, generative AI can predict whether the customer could become problematic in the future. 
  2. Detection: Once a customer is onboarded, AI can monitor activities to detect any that could potentially be problematic. 
  3. Resolution: After detecting potentially problematic activities, AI can help resolve the issue more quickly and accurately. 

The real magic happens in the resolution phase. Generative AI can help streamline, standardize, and expedite case reviews. For instance, it can create summaries and visual patterns of transactional activity over extended periods, a task that is currently very time-consuming and prone to errors.  
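To make the summarization task concrete, here is a minimal sketch in Python of rolling a long transaction history up into a per-month pattern that an analyst (or a copilot generating a narrative summary) could reason over. The transaction records here are hypothetical illustrations, not data from the source.

```python
from collections import defaultdict
from datetime import date

# Hypothetical transaction records: (date, amount) pairs spanning many months.
transactions = [
    (date(2023, 1, 5), 1200.0),
    (date(2023, 1, 19), 300.0),
    (date(2023, 2, 2), 9500.0),
    (date(2023, 3, 14), 150.0),
]

def monthly_summary(txns):
    """Aggregate transactions into per-month counts and totals."""
    buckets = defaultdict(lambda: {"count": 0, "total": 0.0})
    for when, amount in txns:
        key = f"{when.year}-{when.month:02d}"
        buckets[key]["count"] += 1
        buckets[key]["total"] += amount
    # Sort chronologically so the pattern reads month by month.
    return dict(sorted(buckets.items()))

for month, stats in monthly_summary(transactions).items():
    print(f"{month}: {stats['count']} txns, total {stats['total']:.2f}")
```

The point of the rollup is that a spike (like the February total above) becomes visible at a glance, which is exactly the kind of pattern that is tedious to extract manually case after case.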

Implementation Considerations 

The effective use of generative AI involves more than merely installing a system. To really harness the power of AI, firms need to address two significant roadblocks: 

  1. Attribution: Being able to trace back the decision-making process of the AI for compliance and legal purposes is crucial. 
  2. Infrastructure Costs: The return on investment is a concern, as these systems require significant infrastructure to train and operate. 

From Hallucinations to Citations: Making AI Accountable 

While generative AI presents an innovative solution for information summarization, it comes with its own challenges, such as the issue of "hallucination," where the AI might produce inaccurate or fabricated information.  

Features like plugins can help gather up-to-date, precise data, providing an evidence-based approach that is crucial for audit and compliance. Incorporating citation mechanisms can further bolster the AI's reliability, allowing answers to be traced seamlessly back to their original data sources and thereby meeting regulatory requirements. For example, if you ask ChatGPT running on GPT-3.5, “What’s the stock price of Microsoft?”, it cannot answer, because its training data ends in 2021. With GPT-4 and a plugin connected to a stock-data site, however, it can return an answer along with how it arrived at that answer. 
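The plugin-plus-citation pattern can be sketched roughly as follows. The tool function, data source, and price here are hypothetical stand-ins, not a real plugin API: the idea is simply that the answer carries its provenance with it.

```python
from datetime import datetime

def fetch_stock_price(ticker):
    """Hypothetical plugin/tool call to a live stock-data source.
    Returns the price plus provenance metadata for citation."""
    # In a real system this would be an HTTP call to a market-data API.
    return {
        "ticker": ticker,
        "price": 411.22,  # placeholder value
        "source": "example-stock-site.com",
        "as_of": datetime.now().isoformat(),
    }

def answer_with_citation(question, ticker):
    """Compose an answer that carries its evidence along with it."""
    result = fetch_stock_price(ticker)
    answer = f"{result['ticker']} is trading at {result['price']}."
    citation = f"Source: {result['source']} (retrieved {result['as_of']})"
    return answer, citation

answer, citation = answer_with_citation(
    "What's the stock price of Microsoft?", "MSFT")
print(answer)
print(citation)
```

Returning the citation alongside the answer, rather than as an afterthought, is what lets a reviewer "double-click" from any statement back to its source.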

This solves a significant challenge around attribution that financial institutions currently face when deploying large language models. For compliance, the goal is not to have fewer customers but to be able to defend your practices under regulatory scrutiny, which means being able to continuously double-click into information and get to the bottom of any case. If large language models can do this cost-effectively, they are much more likely to be accepted into legal and compliance functions. 

Another application of this is searching and summarizing documentation. Generative AI can review a document and highlight where a particular answer is coming from. This is especially useful when summarizing long documents such as an annual report.  
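One simple way to implement "highlight where the answer came from" is to return the span of source text that best supports a query. The keyword-overlap scoring below is a deliberately crude placeholder for a real retrieval model, and the document text is invented for illustration:

```python
import re

def best_supporting_sentence(document, query):
    """Return the document sentence with the greatest keyword overlap
    with the query, plus its character offsets for highlighting."""
    query_terms = set(re.findall(r"\w+", query.lower()))
    best, best_score, best_span = None, -1, (0, 0)
    for match in re.finditer(r"[^.!?]+[.!?]?", document):
        sentence = match.group().strip()
        if not sentence:
            continue
        terms = set(re.findall(r"\w+", sentence.lower()))
        score = len(query_terms & terms)
        if score > best_score:
            best, best_score, best_span = sentence, score, match.span()
    return best, best_span

doc = ("Revenue grew 12% year over year. "
       "The board approved a new dividend. "
       "Compliance costs rose due to new regulations.")
sentence, span = best_supporting_sentence(doc, "Why did compliance costs rise?")
print(sentence)
```

The returned offsets are what a user interface would use to highlight the supporting passage inside a long document such as an annual report.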

Learning from Other Fields: The Security Copilot 

The idea of a "copilot" isn't entirely new. In the realm of security, it's worth noting the emergence of Microsoft's Security Copilot, a cutting-edge solution that marks a significant advancement in generative AI for cybersecurity. Launched in March 2023, this product serves as an AI-driven assistant for security teams, utilizing large language models alongside Microsoft's security expertise and global threat intelligence. The platform equips security operations with the tools to counteract bad actors. Among its standout features is its ability to summarize security incidents in a comprehensive manner. Similar to Lucinity's Luci Copilot in the financial crime prevention space, Microsoft's Security Copilot demonstrates the transformative potential of generative AI in specialized sectors. 

Human + AI: The Dream Team for Fighting Financial Crime

The advent of advanced AI technologies shouldn't aim to eliminate human jobs but rather to enhance human productivity by automating low-level tasks. A tier-1 bank’s compliance department can produce between 1,000 and 10,000 cases daily, and investigation time per case can range from half a minute up to eight hours. 

By using AI for the initial stages of review, analysts can be reallocated to more strategic, high-value tasks like in-depth investigations or filing suspicious activity reports. This change aligns with the objectives of regulatory bodies, which emphasize the need for comprehensive and effective financial crime prevention. Thus, the move is not from solo human efforts to AI takeover but from solo to a collaborative "copilot" approach, where humans and AI work together for more efficient and reliable outcomes. 


In summary, the integration of generative AI into financial crime prevention marks a pivotal shift from isolated human analysis to a collaborative framework where human expertise and AI capabilities work in tandem. Traditional methods have proven increasingly inadequate in managing the volume and complexity of financial data, leading to inefficiencies and regulatory shortfalls. By leveraging AI for initial case reviews, financial institutions can reallocate human resources to more complex, strategic tasks, thereby optimizing overall operational efficiency. 

Learn more about Lucinity’s Luci Copilot and how to modernize your FinCrime prevention:  
