Combating Phishing, Fake Accounts, and Fraud: Insights from the Frontlines

Lucinity and Sift experts explore AI-driven fraud prevention, phishing, fake accounts, and financial crime strategies in this insightful webinar recap.

Lucinity
5 min

Experts from Lucinity and Sift discuss phishing, fraud trends, and how AI is upgrading prevention strategies

Luke Fairweather, VP of Sales at Lucinity, was joined by Brittany Allen, Senior Trust and Safety Architect at Sift, in our latest webinar. Together, they offered a candid look at how fraudsters exploit both AI and human vulnerability—and what businesses can do to stop them. The discussion encompassed the rising threats of phishing, fake accounts, and online fraud.

Overall, the webinar brought clarity to a complex question: how do fraudsters operate today, and how must technology and operations evolve in response? Keep reading to explore the major takeaways from the full webinar.

What makes modern fraud so pervasive and difficult to trace?

Brittany opened with the reality that fraud today is more personalized, more accessible, and more widespread than ever. Nearly every consumer has received scam messages pretending to be from delivery companies, banks, or even their employer. One particularly illustrative example involved a friend duped by a fake “boss” requesting gift cards, a scam so plausible that it evaded suspicion in a small company environment.

In many cases, scam victims don't even report the full truth to their banks, fearing they'll be told the transaction was “authorized” and thus unrecoverable. This dynamic, Brittany noted, has created online advice forums where people are even encouraged to lie to banks to meet reimbursement criteria, a sign that the current system does little to empower customers to be honest about their fraud experiences.

How are fraudsters using AI, and how should defenders respond?

Luke and Brittany both emphasized that fraudsters are now embedding AI into their scams. They use it to craft more convincing phishing messages, improve grammar, and even simulate human-like interactions during live chats or calls. Ironically, this could make some fraud more detectable—many AI-generated messages sound alike, creating a tell-tale uniformity.
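To make that “uniformity” signal concrete, here is a minimal sketch (a hypothetical illustration, not a technique demonstrated in the webinar) that groups inbound messages by lexical overlap so clusters of near-identical text can be queued for review. The 0.8 similarity threshold and the sample messages are assumptions.

```python
# Hypothetical sketch: flag clusters of near-identical inbound messages,
# a rough proxy for the "tell-tale uniformity" of AI-generated phishing text.
import re
from itertools import combinations


def tokens(text: str) -> set[str]:
    """Lowercased word set, a crude lexical fingerprint of a message."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two token sets (1.0 = identical wording)."""
    return len(a & b) / len(a | b) if a | b else 0.0


def flag_uniform_messages(messages: list[str], threshold: float = 0.8) -> set[int]:
    """Return indices of messages that closely mirror at least one other message."""
    fingerprints = [tokens(m) for m in messages]
    flagged: set[int] = set()
    for i, j in combinations(range(len(messages)), 2):
        if jaccard(fingerprints[i], fingerprints[j]) >= threshold:
            flagged.update({i, j})
    return flagged


if __name__ == "__main__":
    inbox = [
        "Your parcel is on hold. Confirm your delivery fee here.",
        "Your parcel is on hold, confirm your delivery fee here today.",
        "Hi, are you free? I need you to buy some gift cards for a client.",
    ]
    print(flag_uniform_messages(inbox))  # {0, 1}: the two templated scam texts
```

In practice a production system would use more robust similarity measures, but the idea is the same: templated, machine-written outreach tends to repeat itself, and that repetition is itself a signal.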

That said, Brittany warned against overconfidence. She cited image verification scams where fraudsters use AI to create professional-looking photos of fake IDs. However, trained human eyes still spot what machines miss—like unusual backgrounds, wrong cropping, or inconsistent details. Her message: AI is powerful, but not foolproof, and human oversight is essential.

What is Fraud-as-a-Service, and how does it scale cybercrime?

The conversation turned to a sobering topic: fraud-as-a-service (FaaS). Brittany described how modern fraud isn't just ad hoc; it's commercialized. Fraudsters sell configuration files (“configs”) preloaded with scripts that target specific platforms, automating credential stuffing and the testing of stolen login data. Some of these are shared freely in Telegram groups or dark web forums to attract new users.

This ecosystem allows even inexperienced actors to launch massive fraud attempts. But defenders can flip the script: by studying these configs, they can proactively block common patterns, such as specific browser settings or language preferences embedded in the attacks.
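As a rough illustration of how a defender might operationalize that idea, the sketch below checks login attempts against traits lifted from a known config. The fingerprint fields (user agent, accept-language, screen resolution) and the example values are hypothetical assumptions, not details shared in the webinar.

```python
# Hypothetical sketch: turn traits hard-coded in a circulated credential-stuffing
# config into a simple blocklist check on incoming login attempts.
from dataclasses import dataclass


@dataclass(frozen=True)
class LoginFingerprint:
    """Request traits a credential-stuffing config tends to hard-code."""
    user_agent: str
    accept_language: str
    screen_resolution: str


# Illustrative traits assumed to come from a studied config.
KNOWN_CONFIG_FINGERPRINTS = {
    LoginFingerprint(
        user_agent="Mozilla/5.0 (Windows NT 6.1; rv:52.0)",  # outdated, fixed UA
        accept_language="en-US",                             # hard-coded locale
        screen_resolution="800x600",                         # headless default
    ),
}


def matches_known_config(fp: LoginFingerprint) -> bool:
    """True if a login attempt exactly reuses a fingerprint from a known config."""
    return fp in KNOWN_CONFIG_FINGERPRINTS


if __name__ == "__main__":
    attempt = LoginFingerprint(
        user_agent="Mozilla/5.0 (Windows NT 6.1; rv:52.0)",
        accept_language="en-US",
        screen_resolution="800x600",
    )
    if matches_known_config(attempt):
        print("Block or step up authentication for this attempt")
```

Real-world defenses would combine many weaker signals rather than rely on exact matches, but studying configs tells defenders which signals are worth collecting in the first place.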

Where do stolen credentials come from, and how do fraudsters harvest them?

Brittany emphasized that not all compromised credentials come from data breaches. Phishing and social engineering remain dominant methods. A single link shared by a compromised social media account—promising a loyalty reward or a discount—can yield thousands of login credentials.

Some fraudsters even sell “curated” email accounts already connected to services like buy-now-pay-later (BNPL) providers. These are marketed as ready-to-abuse lines of credit. The demographic targets here aren’t necessarily elderly or digitally unaware; often they’re financially vulnerable users drawn in by promises of easy money or services.

What does a day in the life of a fraud analyst look like today?

Brittany described the role of fraud analysts as multidisciplinary. In a mid-sized company, they start the day reviewing dashboards for anomalies—spikes in login attempts, failed payments, or blocked accounts. They respond to chargebacks, internal escalations, and unclear cases stuck in the gray area between fraud and error.

The best analysts blend data review with cross-functional collaboration, working with customer support, finance, legal, and product teams. Technology like ML-powered scoring systems and explainable AI tools can accelerate their work. Ultimately, humans make the calls in edge cases where systems fall short.
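For example, the spike detection behind such a dashboard can be as simple as a baseline-and-threshold check. The sketch below is a minimal, assumed illustration; the three-standard-deviation threshold and the sample counts are not from the webinar.

```python
# Hypothetical sketch: flag an hourly metric (e.g. failed logins) that spikes
# well above its recent baseline, the kind of anomaly an analyst's dashboard
# might surface at the start of the day.
from statistics import mean, stdev


def is_spike(history: list[int], latest: int, z_threshold: float = 3.0) -> bool:
    """Return True if `latest` sits more than `z_threshold` standard
    deviations above the mean of the recent `history` window."""
    if len(history) < 2:
        return False  # not enough data for a baseline
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest > baseline  # flat baseline: any increase is notable
    return (latest - baseline) / spread > z_threshold


if __name__ == "__main__":
    failed_logins_per_hour = [12, 9, 14, 11, 10, 13, 12, 15]  # recent baseline
    current_hour = 87                                          # sudden surge
    print(is_spike(failed_logins_per_hour, current_hour))      # True
```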

Is it time for fraud and AML teams to merge?

Luke asked whether the emerging “FRAML” model—blending fraud and AML (anti-money laundering)—makes sense. Brittany acknowledged that fraud teams have historically lived under different departments (marketing, legal, ops), often with unclear ownership. If integration helps streamline resources and increase visibility, she supports it.

However, the value depends on business type. In fraud-heavy industries like marketplaces, the crossover with AML may be limited. But in regulated financial services, fraud signals can surface deeper AML risks—especially when laundering is disguised as legitimate commerce. Cross-team collaboration is key, even if full structural integration isn’t always the answer.

Wrapping Up

The discussion closed with both speakers reflecting on the urgency of the moment. Fraud is industrialized, fast-moving, and increasingly aided by AI—yet most organizations are still responding with siloed teams and manual workflows. The takeaway? Now more than ever, successful fraud prevention requires a coordinated, tech-augmented, and human-led approach. Fraud isn’t slowing down—so neither can we!

Lucinity’s Approach: Making Investigations Smarter and Faster

Throughout the discussion, Luke reflected on Lucinity’s philosophy—compliance teams don’t need more systems; they need better insight. Lucinity’s platform delivers that through seamless integration and contextual intelligence. Here’s how:

  • Luci Copilot: Powered by GPT-4 and embedded into the Case Manager, Luci generates standardized case summaries, conducts negative news searches, visualizes money flows, and even drafts SAR narratives—dramatically reducing time spent on routine tasks.
  • Configurable Automation: Through the Luci Studio, compliance teams can tailor automation rules, connect to third-party APIs, and create custom investigation workflows—all without technical support.
  • Plug-and-Play Integration: Luci’s copilot plugin can be dropped into any web-based system (like CRM or Excel), delivering up to 90% productivity gains without replacing existing infrastructure.
  • Auditability & Security: Every AI-generated insight is logged, explained, and customizable—ensuring transparency and regulatory readiness.

Lucinity isn’t replacing analysts—it’s empowering them to work faster, smarter, and with better outcomes.

Meet the Speakers

Brittany Allen
Senior Trust and Safety Architect, Sift

Brittany brings over 15 years of hands-on experience in fraud prevention across digital marketplaces and e-commerce platforms. At Sift, she helps companies reduce fraud and disputes while educating the public through speaking, writing, and advisory roles. Her practical insight into fraud-as-a-service and social engineering tactics brings unique value to fraud prevention strategies.

Luke Fairweather
VP Sales, Lucinity

Luke focuses on helping traditional financial institutions and fintechs reduce compliance operational costs through Lucinity’s GenAI-powered solutions. With previous leadership roles at PassFort and Moody’s, he brings deep experience in financial crime compliance and technology. As moderator, he drove a dynamic discussion and connected operational pain points with practical technology applications.

Final Thoughts: Aligning Humans, AI, and Strategy in Financial Crime Defense

This webinar highlighted that our response to financial crime needs to be agile. Fraud is now industrialized, with AI-assisted scams scaling globally. The answer lies in smarter systems, empowered humans, and closer collaboration between compliance disciplines.

Key Takeaways:

  • AI Is Not Optional: Fraudsters are using it. Institutions must respond with generative AI of their own, applied responsibly, transparently, and with explainable results.
  • Human Judgment Still Wins: Machines miss context. People, with the right tools, still lead in ambiguous cases.
  • Integration Is Advantageous: Whether it’s FRAML or collaborative workflows, shared intelligence drives better detection.
  • Speed Matters: The ability to adapt rules, automate workflows, and test hypotheses quickly is no longer a luxury—it’s survival.

Watch the full webinar recording for more details
