New Financial Crime Typologies: What Legacy Systems Miss
You’d never think it could happen to you, but that’s exactly what makes romance scams and marketplace fraud so effective. This blog unpacks insights from Lucinity and Resistant AI on how these emotionally and socially engineered scams bypass traditional fraud systems.
In this webinar, Lucinity’s Luke Fairweather and Resistant AI’s Lucie Rehakova explored how social engineering scams, such as romance fraud, authorized push payment (APP) scams, and investment schemes, target both financial institutions and their customers. They discussed why traditional systems often miss these subtle threats and shared how better use of behavioral data, segment-specific controls, and a balance between privacy and protection can help institutions keep up.
What is social engineering, and how is it used to defraud people?
Lucie defined social engineering as the exploitation of human behavior to deceive victims into making transactions they believe are legitimate. Unlike brute-force attacks or account takeovers, these scams rely on manipulation through emails, phone calls, or personal interactions that lead users to take action themselves. This might involve paying a fake invoice sent from a convincing business email or wiring funds to someone posing as a loved one. Because victims willingly authorize the payment and believe it to be real, such transactions bypass conventional red flags, making them incredibly hard to catch using standard fraud detection logic.
Why do financial institutions fail to catch these scams?
Lucie traced the problem to outdated architecture. Fraud detection, AML, and sanctions monitoring often exist in silos. Legacy platforms still rely on rules-based logic that catches obvious anomalies but not nuanced behavioral shifts. Most systems flag large or erratic payments, not small, regular ones, and they struggle to assess intent or context, the core components of social engineering.
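To make that gap concrete, here is a minimal sketch, not taken from the webinar, contrasting a static threshold rule with a per-customer baseline check. The thresholds, field names, and the five-payment minimum are illustrative assumptions.

```python
from statistics import mean, stdev

# Illustrative legacy rule: only large payments trip the alarm.
def legacy_rule(amount: float) -> bool:
    return amount > 10_000  # assumed static threshold

# Behavioral alternative: flag payments that deviate from the
# customer's own history, even when the amount is small.
def deviates_from_baseline(amount: float, history: list[float],
                           z_cutoff: float = 3.0) -> bool:
    if len(history) < 5:  # too little data to form a baseline
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_cutoff

history = [120.0, 95.0, 110.0, 130.0, 105.0]
print(legacy_rule(850.0))                      # False: under the static limit
print(deviates_from_baseline(850.0, history))  # True: far above this user's norm
```

The point of the contrast: the same 850.0 payment sails past the static rule but stands out sharply against the customer’s own history.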
Luke further emphasized that fraudsters often exploit business processes, such as long payment terms of 30/60/90 days and large invoice volumes, where a single fake transaction is unlikely to be noticed quickly.
What makes social engineering fraud so difficult to detect?
Lucie explained that the core challenge is that the fraud isn’t in the payment itself; it’s in the intent behind it. She emphasized that, from a technical standpoint, the user is acting normally: logged in, authenticated, and manually entering payment details. Because the scam happens outside the system, traditional transaction monitoring has no visibility into the manipulation. What’s missing is context: the situational clues and behavioral cues that explain why the user is making the payment. Without that understanding, even advanced systems struggle to distinguish between genuine and coerced transactions.
How can institutions infer context from payment data?
Lucie called for a fundamental shift in perspective. Instead of only modeling risk, institutions need to understand what legitimate behavior looks like and segment it by user type, product, and usage pattern. Rich metadata already exists, such as device intelligence, biometric activity, and session behavior, but it’s rarely integrated into fraud detection workflows. By connecting those dots intelligently, banks can begin to detect behavior that deviates subtly but meaningfully from a user’s norm.
For example, someone sending funds while on a call for the first time or changing payment behavior after certain device events might warrant further scrutiny. These patterns, when analyzed contextually, tell a more complete story.
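A rough illustration of how such contextual cues might be combined into a single signal. The signal names, weights, and review threshold below are hypothetical, not a description of either vendor’s product; a real system would learn weights from labeled cases.

```python
from dataclasses import dataclass

@dataclass
class PaymentContext:
    # Assumed session metadata a bank might already collect.
    on_active_call: bool           # phone call in progress during the payment
    first_call_while_paying: bool  # this pattern never seen for the user before
    recent_device_event: bool      # e.g. new device enrolled or SIM changed
    new_payee: bool                # counterparty not seen on the account before

# Hypothetical weights for each contextual cue.
WEIGHTS = {
    "on_active_call": 0.3,
    "first_call_while_paying": 0.4,
    "recent_device_event": 0.2,
    "new_payee": 0.1,
}

def context_score(ctx: PaymentContext) -> float:
    # Sum the weights of every cue that is present for this payment.
    return sum(w for name, w in WEIGHTS.items() if getattr(ctx, name))

ctx = PaymentContext(on_active_call=True, first_call_while_paying=True,
                     recent_device_event=False, new_payee=True)
if context_score(ctx) >= 0.5:  # assumed review threshold
    print("escalate for additional verification")
```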
Do fraudsters target specific customer profiles?
Luke explored whether fraudsters have their own version of ideal customer profiles. Lucie confirmed they do and tend to focus on the vulnerable: elderly users, people less familiar with digital systems, or those who are emotionally or financially distressed. These groups often hesitate to report fraud due to shame or perceived fault. She stressed that one-size-fits-all controls don’t work here; institutions need to adopt a segmented, risk-based approach. For high-risk customer groups, that might mean inserting more friction, such as multi-factor authentication (MFA), call-back verification, or routing to a fraud specialist, whereas low-risk users may benefit from a more streamlined process.
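One way to express that segmented approach in code, as a sketch only: the segment labels, payment classes, and policy table here are assumptions for illustration.

```python
from enum import Enum

class Friction(Enum):
    STREAMLINED = "no extra steps"
    MFA = "step-up multi-factor authentication"
    CALLBACK = "call-back verification"
    SPECIALIST = "route to a fraud specialist"

# Hypothetical policy table: more friction for vulnerable segments,
# a smoother path for low-risk users.
SEGMENT_POLICY = {
    ("high_vulnerability", "high_risk"): Friction.SPECIALIST,
    ("high_vulnerability", "routine"): Friction.CALLBACK,
    ("standard", "high_risk"): Friction.MFA,
    ("standard", "routine"): Friction.STREAMLINED,
}

def choose_friction(segment: str, payment_class: str) -> Friction:
    # Default to MFA when a combination has no explicit policy entry.
    return SEGMENT_POLICY.get((segment, payment_class), Friction.MFA)

print(choose_friction("high_vulnerability", "routine").value)  # call-back verification
```

Keeping the policy in a plain lookup table also makes the friction decisions easy to audit and explain, which matters for the transparency points raised later in the discussion.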
What role can telco data and behavioral insights play?
Lucie highlighted how fraud detection can benefit from incorporating telco and behavioral data. For example, fraudsters tend to switch SIM cards and devices often to avoid detection, while victims may display unusual phone behavior, such as being on a call during a transaction. By clustering such data and observing deviations, institutions can build stronger case indicators. Device fingerprinting, call frequency, and session analytics can all reveal valuable context. These insights, while indirect, provide another dimension of understanding and can be especially powerful when traditional financial data offers no indication of wrongdoing.
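As a sketch of how such signals might be derived, assuming a time-ordered event feed with device_id, sim_id, and call flags (hypothetical field names, not a real telco API):

```python
def telco_features(events: list[dict]) -> dict:
    """Derive illustrative telco/behavioral features from session events,
    ordered by time. Field names are assumptions about a telco data feed."""
    sims = [e["sim_id"] for e in events]
    return {
        # Many distinct devices or frequent SIM changes: fraudster-side cue.
        "distinct_devices": len({e["device_id"] for e in events}),
        "sim_changes": sum(1 for a, b in zip(sims, sims[1:]) if a != b),
        # Payments made while on a call: potential victim-side coercion cue.
        "payments_on_call": sum(1 for e in events
                                if e.get("on_call") and e.get("is_payment")),
    }

events = [
    {"device_id": "d1", "sim_id": "s1", "on_call": False, "is_payment": False},
    {"device_id": "d2", "sim_id": "s2", "on_call": True, "is_payment": True},
    {"device_id": "d3", "sim_id": "s3", "on_call": False, "is_payment": False},
]
print(telco_features(events))
# {'distinct_devices': 3, 'sim_changes': 2, 'payments_on_call': 1}
```

Note that the same feature set serves two purposes: high device and SIM churn points at the fraudster side, while payments made mid-call point at a potential victim.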
Are scams moving to encrypted channels, and how does that affect detection?
Luke raised a key concern: many scams now occur on encrypted platforms like WhatsApp, where institutions have no visibility. Lucie acknowledged this challenge but noted that institutions can still observe behavior around the transaction itself. For example, was the user actively engaged in another communication app? Are they suddenly spending longer periods online before sending money? Even without accessing message content, metadata and device context can still feed into risk models. Lucie also emphasized that these methods, when used transparently and responsibly, do not have to infringe on user privacy.
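A deliberately simple sketch of that idea: compare pre-payment session length to the user’s own norm and combine it with an assumed “another messaging app is active” signal. The 3x ratio and the messaging_app_active flag are illustrative assumptions; no message content is read, only surrounding metadata.

```python
def pre_payment_anomaly(session_minutes: float,
                        typical_minutes: float,
                        messaging_app_active: bool) -> bool:
    # Flag sessions far longer than the user's norm that coincide
    # with activity in a separate communication app.
    unusually_long = typical_minutes > 0 and session_minutes > 3 * typical_minutes
    return unusually_long and messaging_app_active

print(pre_payment_anomaly(45.0, 8.0, True))  # True: long session plus active chat app
```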
How should financial institutions approach the privacy vs. safety dilemma?
Lucie addressed the elephant in the room directly: customers want strong protection but also expect full privacy. These demands often clash. She emphasized that if institutions are going to be held accountable for reimbursing fraud losses, they must be empowered to prevent fraud in the first place. This means having access to the right signals, but using them in a consent-driven and explainable manner. The goal isn’t surveillance; it’s the responsible application of data to reduce harm. In her view, institutions need to be transparent about how and why they use behavioral signals, while also being selective and ethical in implementation.
What happens on the fraudster’s side once the scam is complete?
Luke transitioned to what happens after funds are transferred. Lucie detailed how fraudsters rapidly move money to avoid detection, often through crypto, prepaid cards, or a chain of mule accounts. Fraudulent accounts typically exhibit low balance retention, frequent and varied counterparties, and rapid outbound flows. These behaviors, while not suspicious in isolation, form recognizable patterns when repeated. Over time, systems can be trained to spot these profiles more quickly, even if they miss the initial transaction. The aim is to intervene before further victims are exploited using the same infrastructure.
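Those three traits, low balance retention, many varied counterparties, and rapid outbound flow, can be approximated with a heuristic like the sketch below. The transaction fields and all cutoffs are invented for illustration, not production values.

```python
def looks_like_mule(txns: list[dict], fast_hours: float = 24.0) -> bool:
    """Heuristic sketch of the mule pattern described above. Each txn dict is
    assumed to carry direction, amount, counterparty, and hours_after_inbound."""
    inbound = sum(t["amount"] for t in txns if t["direction"] == "in")
    if inbound == 0:
        return False
    outbound = [t for t in txns if t["direction"] == "out"]
    # Share of inbound money still sitting in the account.
    retention = (inbound - sum(t["amount"] for t in outbound)) / inbound
    counterparties = {t["counterparty"] for t in outbound}
    fast_out = [t for t in outbound if t["hours_after_inbound"] <= fast_hours]
    # Illustrative cutoffs: almost nothing retained, many payees, quick outflow.
    return retention < 0.05 and len(counterparties) >= 5 and len(fast_out) >= 3
```

As the discussion noted, no single one of these conditions is suspicious on its own; it is their combination, repeated across accounts, that forms the recognizable profile.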
How long are mule accounts typically used?
Lucie explained that the lifespan of mules depends on the fraud type. In quick scams like marketplace fraud, accounts might be used for only a day or two before being discarded. In contrast, longer cons like investment scams can see mule accounts in operation for months, sometimes simulating real activity by sending partial repayments to build trust. Fraudsters adjust based on what draws attention and what doesn’t. This variability makes pattern detection critical. It’s not just about catching anomalies; it’s about watching for structure and repetition across different accounts and institutions.
What do modern investment scams look like?
According to Lucie, investment scams have become highly professionalized. Fraudsters create polished websites, send convincing updates, and even deliver fake returns to build credibility before executing the final fraud. Alarmingly, many victims of these scams are already victims of previous scams, desperate to recover their losses and therefore easier to re-target.
Common signs include consistent inbound payments, sudden account activity in a previously dormant profile, and fake interest transactions. Lucie emphasized that institutions must focus not only on stopping fraud at the source but also on protecting customers most likely to be re-victimized.
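A minimal sketch of one of those signs, the dormancy-then-burst pattern in a previously quiet account. The dormancy window, burst window, and burst size are assumed values, not thresholds from the webinar.

```python
from datetime import date, timedelta

def dormant_then_burst(txn_dates: list[date], dormancy_days: int = 180,
                       burst_days: int = 14, burst_size: int = 5) -> bool:
    """Flags a long-quiet account that suddenly shows a cluster of activity.
    All thresholds are illustrative assumptions."""
    dates = sorted(txn_dates)
    for prev, curr in zip(dates, dates[1:]):
        if (curr - prev).days >= dormancy_days:
            # Count transactions inside the window that follows the quiet gap.
            burst = [d for d in dates
                     if curr <= d <= curr + timedelta(days=burst_days)]
            if len(burst) >= burst_size:
                return True
    return False

quiet_then_active = ([date(2024, 1, 10)] +
                     [date(2024, 9, 1) + timedelta(days=i) for i in range(6)])
print(dormant_then_burst(quiet_then_active))  # True: ~8 months quiet, then a burst
```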
How should institutions manage cross-border scam risk?
Luke asked about the global scope of fraud and the challenges of jurisdictional inconsistency. Lucie agreed that this is one of the most difficult areas to manage. A payment that leaves the UK, for instance, might land at a bank in a country with minimal oversight or fewer controls. Worse, these payments are often essential remittances for families, meaning blocking them comes with social cost. Lucie urged institutions to go beyond FATF country lists and adopt a more granular, fraud-specific risk framework. She also called for stronger international cooperation to align expectations, share typologies, and enable earlier intervention.
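To illustrate what “more granular than a FATF list” could mean in practice, here is a toy corridor-risk score. The factors, weights, and the remittance discount are hypothetical choices meant to echo the discussion, not a regulatory framework.

```python
from dataclasses import dataclass

@dataclass
class CorridorRisk:
    fatf_listed: bool            # binary list membership (the legacy signal)
    scam_reports_per_10k: float  # assumed: confirmed scam reports per 10k payments
    remittance_share: float      # assumed: share of volume that is family remittance

def corridor_score(r: CorridorRisk) -> float:
    """Toy composite: weight observed fraud telemetry above list membership,
    and discount remittance-heavy corridors to limit the social cost of blocking."""
    score = 0.2 * float(r.fatf_listed) + 0.6 * min(r.scam_reports_per_10k / 50.0, 1.0)
    return round(max(score - 0.2 * r.remittance_share, 0.0), 3)

print(corridor_score(CorridorRisk(False, 35.0, 0.4)))  # 0.34
```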
Where is fraud detection headed next?
In closing, Lucie called for an overhaul of how fraud detection is approached. Rather than chasing clear-cut fraud signals, systems need to be built around understanding intent, behavior, and context. She believes AI and machine learning will be key enablers, but only if paired with better data collection and responsible use. Progress means building deeper insight into user behavior, feeding that information into systems that support real-time decision-making, and enabling collaboration across teams and regions. The goal isn’t to remove human involvement but to equip people with better data and tools so they can act more effectively and efficiently.
Key Takeaways
The session highlighted how financial institutions must adapt fraud strategies to address subtle, evolving threats. It’s no longer enough to detect suspicious activity based on rules alone. The focus must shift to understanding context, using broader datasets, and adapting controls to customer risk levels.
Here are the key points to remember:
- Social engineering scams are often invisible to traditional systems because they mimic legitimate behavior.
- Contextual risk models, built on behavioral and demographic baselines, are essential to detect manipulation.
- Fraud controls must be segment-specific, applying appropriate friction based on customer vulnerability.
- Telco and behavioral signals can offer critical insight into both victim and fraudster activity.
- Privacy and fraud prevention must coexist through transparent, consent-based data use.
- Fraud typologies vary in structure and lifecycle; detection requires flexible, pattern-based models.
- Cross-border scams need more granular jurisdictional frameworks and stronger global cooperation.
- The future of fraud detection lies in integrating diverse data sources and focusing on contextual decision-making—augmented, not replaced, by AI.
Watch the full webinar recording for the detailed discussion: https://youtu.be/N1cDHzEHkmo
Meet the Speakers
Lucie Rehakova, CAMS
Product Manager, Transaction Forensics | Resistant AI
Lucie oversees forensic analysis of transaction data at Resistant AI, managing efforts to detect and investigate suspicious financial activity. With over a decade of experience in AML compliance, regulatory consulting, and fraud solution design, she now focuses on helping clients enrich their fraud detection systems with context and behavior-driven analytics. Her approach blends deep regulatory expertise with practical product insight.
Luke Fairweather
VP Sales | Lucinity
Luke works with both established banks and fintech companies to reduce compliance costs by integrating Lucinity’s GenAI tools into daily operations. His background includes leadership positions at PassFort and Moody’s, giving him direct experience in financial crime tech and regulatory processes. As the moderator, he guided the conversation with clarity, connecting everyday compliance issues to realistic and scalable technology approaches.