A Practical Guide to AI for Financial Crime Risk Detection
As financial crime risk data becomes more complex, how can transaction monitoring keep up? Discover how AI can power more efficient risk management.
From COVID-19 to the Russian invasion of Ukraine, global events are significantly changing the behavior of criminals and legitimate customers alike. In turn, these changes have impacted firms’ transaction risk management frameworks. In an interview with Gemma Rogers, co-founder of the global financial crime consultancy FINTRAIL, Roger Bush explored how the financial industry is moving toward higher-tech risk management as criminals shift toward fraud and the exploitation of everyday customers as money mules.
Based on that discussion, this article explores why money muling has increased, red flags to look out for, and the role of artificial intelligence in more effectively curbing these new fraud and money laundering typologies.
The pandemic triggered behavioral shifts across consumer groups and changes in criminal behavior that continue even as COVID fades into the background. In addition to a rise in medical fraud schemes, phishing, and Ponzi and investment schemes, financial crime has increasingly targeted potential money mules. With economic circumstances tightening across demographics, less classically vulnerable groups have been targeted alongside traditionally vulnerable populations. These trends show how rapidly criminals adapt their methods to their environment.
Why has this happened? Initially, the pandemic’s pressure on cash-intensive businesses pushed criminals toward other outlets to launder their funds. At the same time, otherwise honest individuals began experiencing unprecedented economic stress.
Money launderers were quick to take advantage of this, funneling illicit funds through desperate individuals – tricking or manipulating them into serving as mules.
As the pandemic has receded, its aftershocks remain, extending many of these pressures and continuing to encourage trends like money muling and fraud. Some of the most enduring effects include:
As money launderers continue to take advantage of customers’ social and economic vulnerability, firms must adapt their transaction monitoring accordingly. Risk teams need agile anti-money laundering and counter-terrorist financing (AML/CFT) tools to constantly adapt to these intensifying risks.
This raises the question: What patterns of behavior typically indicate a money mule? Can existing monitoring tools properly detect this type of activity?
Since money muling recruiters specifically look for people who behave normally, firms should rely less on profiling customers and more on transaction patterns. To detect a classic retail banking money mule scenario, firms should look for unusual patterns of transactions rapidly moving out of an account – often with a lower amount leaving than what came in, because a money mule usually skims off a commission from the original inbound payment.
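To make that pattern concrete, here is a minimal sketch of the kind of rule a monitoring system might apply. The 48-hour window, the assumed commission range, and the field names are illustrative assumptions rather than recommended values.

```python
from datetime import timedelta

# Illustrative parameters only; a real system would calibrate these against confirmed cases.
PASS_THROUGH_WINDOW = timedelta(hours=48)  # how quickly the inbound funds move back out
MIN_RETAINED_SHARE = 0.02                  # assumed "commission" a mule might keep
MAX_RETAINED_SHARE = 0.15

def looks_like_mule_pass_through(inbound: dict, outbound_txns: list[dict]) -> bool:
    """Flag an inbound payment that is rapidly forwarded on, minus a small cut.

    `inbound` and each entry in `outbound_txns` are assumed to be dicts with
    'amount' and 'timestamp' keys, a simplification of a real transaction record.
    """
    window_end = inbound["timestamp"] + PASS_THROUGH_WINDOW
    forwarded = sum(
        txn["amount"]
        for txn in outbound_txns
        if inbound["timestamp"] <= txn["timestamp"] <= window_end
    )
    if forwarded <= 0 or forwarded > inbound["amount"]:
        return False
    retained_share = (inbound["amount"] - forwarded) / inbound["amount"]
    # Classic pattern: nearly everything leaves quickly, with a small slice kept back.
    return MIN_RETAINED_SHARE <= retained_share <= MAX_RETAINED_SHARE
```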
Properly tuned transaction monitoring rules are critical to detecting this kind of activity. Yet many systems are calibrated to identify incoming payments without considering the surrounding context, which is key to distinguishing between truly suspicious and low-risk activity. This can cause an influx of ambiguous alerts, leading to both false positives and false negatives. When a transaction is flagged without further context, analysts – particularly in the fintech community – often reach out to the customer to find out more and establish a clearer risk profile.
A more effective approach would be to risk-rank an inbound transaction based on the customer’s profile. Tools that use machine learning can do this most dynamically and efficiently. However, if a firm doesn’t have access to machine learning, traditional monitoring systems can be tuned to look for things like:
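The exact indicators will vary from firm to firm. As one hedged illustration of risk-ranking an inbound payment against the customer’s own profile, the sketch below scores how far a payment departs from a hypothetical historical baseline; the features, thresholds, and weights are all assumptions for the example.

```python
def risk_rank_inbound(payment: dict, profile: dict) -> float:
    """Score an inbound payment by how far it departs from the customer's baseline.

    `profile` is assumed to hold simple historical statistics, for example:
    {'avg_inbound': 350.0, 'known_counterparties': {'EMP-PAYROLL'},
     'account_age_days': 900, 'days_since_last_activity': 3}.
    The weights below are illustrative, not calibrated values.
    """
    score = 0.0

    # Payment size relative to what this customer normally receives.
    avg_inbound = profile.get("avg_inbound") or 1.0
    if payment["amount"] > 5 * avg_inbound:
        score += 2.0
    elif payment["amount"] > 2 * avg_inbound:
        score += 1.0

    # First-time or unknown sender.
    if payment["counterparty"] not in profile.get("known_counterparties", set()):
        score += 1.0

    # Newly opened or recently dormant accounts carry extra weight.
    if profile.get("account_age_days", 0) < 90:
        score += 1.5
    if profile.get("days_since_last_activity", 0) > 180:
        score += 1.0

    return score  # higher scores are triaged first
```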
Over time, a firm can use the findings these rules generate to create an even more targeted approach. After analyzing the profiles of the mules it has caught, it can focus the system’s rules on client groups with similar characteristics. This could involve:
Refining how rules operate within specific customer groups can improve alert accuracy.
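As a hedged sketch of what that refinement could look like, rule parameters might be keyed by customer segment rather than applied uniformly. The segments and thresholds below are invented for the example.

```python
# Hypothetical per-segment tuning: the same rule logic, different sensitivity.
SEGMENT_RULE_PARAMS = {
    "student_account":    {"pass_through_hours": 72, "min_inbound": 200},
    "new_customer":       {"pass_through_hours": 48, "min_inbound": 500},
    "established_retail": {"pass_through_hours": 24, "min_inbound": 2_000},
}
DEFAULT_PARAMS = {"pass_through_hours": 48, "min_inbound": 1_000}

def rule_params_for(customer: dict) -> dict:
    """Pick rule parameters for a customer's segment (segments are illustrative only)."""
    return SEGMENT_RULE_PARAMS.get(customer.get("segment"), DEFAULT_PARAMS)
```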
In contrast to many of the more transparent red flags indicating individual money mules, corporate accounts can more easily hide illicit transactions. This is because business accounts already receive a higher volume of cash deposits. The rapidity of inbound and outbound transactions is still relevant – but firms should also consider a macro view with their corporate clients.
Such an approach entails looking at a business’s profile and assessing how much money should be flowing through the account based on its size and industry. For example, a dine-in-only restaurant reporting huge cash deposits might not add up: to generate such a large cash influx, the venue would have to seat a logistically impossible number of customers.
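As a rough illustration of that sanity check, a ceiling on plausible cash takings can be estimated from a handful of business attributes and compared with what is actually being deposited. The figures and field names below are invented for the example.

```python
def cash_deposits_look_plausible(business: dict, observed_monthly_cash: float) -> bool:
    """Compare observed cash deposits with a rough ceiling implied by the venue's capacity.

    `business` is assumed to look like:
    {'seats': 40, 'table_turns_per_day': 3, 'avg_ticket': 25.0,
     'open_days_per_month': 26, 'cash_share': 0.4}
    All numbers are illustrative; real benchmarks would come from industry data.
    """
    max_monthly_revenue = (
        business["seats"]
        * business["table_turns_per_day"]
        * business["avg_ticket"]
        * business["open_days_per_month"]
    )
    expected_cash_ceiling = max_monthly_revenue * business["cash_share"]
    # Allow generous headroom; the point is to catch order-of-magnitude gaps.
    return observed_monthly_cash <= expected_cash_ceiling * 1.5

# A 40-seat dine-in restaurant depositing 250,000 in cash a month fails the check.
venue = {"seats": 40, "table_turns_per_day": 3, "avg_ticket": 25.0,
         "open_days_per_month": 26, "cash_share": 0.4}
print(cash_deposits_look_plausible(venue, observed_monthly_cash=250_000))  # False
```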
On the other hand, firms need to have enough context to distinguish between a true red flag and a business that has simply changed its business model. In the case of restaurants, the pandemic forced a shift toward takeout. In that context, a larger cash influx may be normal activity.
The key to telling the difference is understanding the business in question and what’s normal for it. Firms should be sure they understand their clients’ typical transaction volumes, and think about any context changes or known situations (such as an economic crisis or pandemic) the business might have had to adapt to. When in doubt, firms can reach out to the business in question for more details. They should also conduct further research and analysis online to assess whether or not the business made a legitimate pivot.
Money muling detection can be especially challenging for fintechs and neobanks. Many are newer entrants into the market, and are often looking to scale quickly while adding new products, services, or geographies. So what can firms in the sector do to catch more muling activity? Key areas to consider include:
Machine learning doesn’t necessarily compete with rules – in fact, the two can work well in tandem. Rules can provide the baseline knowledge of customer behavior that machine learning needs to improve. At the same time, machine learning can help prioritize alerts generated by rules to ensure the riskiest ones rise to the top. Individual rules are more straightforward, while machine learning is good at providing nuance and reducing false positives. Many firms start out with baseline rules while developing a more in-depth, data-centric machine learning model over the long run. This buys them the time they need to thoroughly test and tweak the model and make sure it’s operating as needed – and that they understand it.
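One way to picture that division of labor is sketched below: transparent rules raise the alerts, and a model trained on past outcomes only re-orders them so the riskiest reach an analyst first. The rule conditions, feature names, and model interface are assumptions for illustration, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    txn_id: str
    rule_name: str
    features: dict          # context describing the customer and transaction
    ml_score: float = 0.0   # filled in by the prioritization step

def run_rules(txn: dict) -> list[Alert]:
    """Simple, transparent rules: each condition flags a transaction on its own."""
    alerts = []
    if txn["amount"] >= 10_000:
        alerts.append(Alert(txn["id"], "large_inbound", txn["features"]))
    if txn["new_counterparty"] and txn["amount"] >= 1_000:
        alerts.append(Alert(txn["id"], "new_counterparty", txn["features"]))
    return alerts

def prioritize(alerts: list[Alert], model) -> list[Alert]:
    """The ML layer creates and suppresses nothing; it only orders the rule hits.

    `model` is assumed to be any scorer exposing a `score(features) -> float`
    method, for example a classifier trained on historically confirmed cases.
    """
    for alert in alerts:
        alert.ml_score = model.score(alert.features)
    return sorted(alerts, key=lambda a: a.ml_score, reverse=True)
```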
Regulators generally provide technology-agnostic anti-money laundering (AML) guidance to firms, focusing on risk-based principles that should underlie financial crime controls. Still, when it comes to machine learning and AI, regulators expect firms to retain control over their processes. Therefore, a firm that can’t explain how its machine learning system works is unlikely to impress authorities.
Regulators are particularly concerned with preventing reliance on black box technology – tools built on processes no one can explain. One of the main reasons for this concern is that this can harm customers and their rights, an area the European Union’s General Data Protection Regulation (GDPR) seeks to address. A black box system can’t be monitored for ethical concerns like data privacy and discrimination.
Outside regulatory expectations, firms also have an interest in ensuring their systems are explainable. Investigators who understand the AI tools at their disposal can make informed decisions quickly, responsibly, and efficiently. Clear explanations also enable firms’ processes to be continually assessed, improving both their effectiveness and fairness and mitigating unforeseen problems like algorithmic bias.
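To illustrate the principle rather than any particular vendor’s implementation: a score is explainable when every alert can be traced back to the factors that drove it. In the sketch below, the weights and feature names are invented for the example.

```python
# A transparent linear score: the weights are visible, so every alert can carry
# a per-feature breakdown an investigator (or an auditor) can read directly.
WEIGHTS = {
    "rapid_pass_through": 2.0,
    "new_counterparty": 0.8,
    "amount_vs_customer_baseline": 1.5,
    "dormant_account_reactivated": 1.2,
}

def score_with_explanation(features: dict) -> tuple[float, list[tuple[str, float]]]:
    """Return the overall score plus each feature's contribution, largest first."""
    contributions = {
        name: WEIGHTS[name] * value
        for name, value in features.items()
        if name in WEIGHTS
    }
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return total, ranked

score, drivers = score_with_explanation(
    {"rapid_pass_through": 1.0, "new_counterparty": 1.0, "amount_vs_customer_baseline": 0.6}
)
print(f"score={score:.2f}")
for name, contribution in drivers:
    print(f"  {name}: +{contribution:.2f}")
```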
It’s important to note that despite the cautions surrounding explainability, AI and machine learning are increasingly recognized as necessary to effective financial crime risk management. The important thing is to follow established best practices and stay abreast of the latest AI-related technological, ethical, and regulatory developments.
Beyond first line due diligence and investigation, it’s crucial to give equal attention to the second line of defense, testing how well a system is working – and how well a firm is fighting financial crime.
To ensure independent oversight, teams performing second line testing should be separate from those on the first line of defense. In many countries, firms subject to money laundering regulations must assign a money laundering reporting officer (MLRO) to report at least annually to the board on the effectiveness of the firm’s financial crime controls. This oversight helps ensure not only compliance with regulators but also that a firm’s investment in risk management is well placed. Any gaps revealed by independent reviews can be addressed to allow for a more risk-based approach.
Traditionally, fraud and AML have been overseen separately. On one hand, it’s true that each field entails specific scenarios that require specialized expertise. For example, the chargeback process is specific to fraud and requires somebody who understands it.
However, the need for specialization does not necessarily require firms to segregate these processes. For example, fraud is one of many predicate offenses that feed into money laundering, alongside drug and human trafficking, environmental crime, and murder. If a customer commits fraud, they will eventually need to launder their illegally acquired assets before using them.
Firms are better off looking at financial crime risk as a whole, and then devising systems suited to their unique risks, from fraud to AML and beyond. There are several steps to this holistic risk-based approach:
As firms begin exploring different risk scenarios, they will see that vulnerability is spread widely across their business. Risk is not siloed. Many controls can cross traditional fraud/AML silos to mitigate many different risks at once.
Firms are generally good at recognizing new typologies when they emerge, whether as part of investigations, following law enforcement alerts, or due to alerts from fellow financial institutions. In the future, data sharing will further enable effective response to changing risks.
In spite of this, firms may struggle to rapidly adapt their transaction monitoring systems to these typologies. Two main reasons stand out: underdeveloped technology and overly complex governance structures requiring excessive sign-off for every system or rule change. As the data required to keep up with financial crime and AML regulations becomes more complex, firms need streamlined solutions that don’t compromise on accountability or risk. To this end, AI-driven transaction monitoring tools can support agile risk management by constantly learning and adapting.
Meanwhile, as their workflows become more streamlined, firms will also need to review their governance structures. It’s important they continue to provide accountability while supporting an efficient, risk-based approach. Firms can ensure their transaction monitoring and governance structures support a fast-moving, risk-based process by undertaking regular enterprise-wide risk assessments (EWRA) and independent testing and audits.
As firms face increasingly complex risk data and evolving regulatory requirements, a technology-driven response is emerging. The support of new AI-driven tools is allowing financial institutions to allocate their resources where their risks are, while ensuring ongoing quality and consistency. With the rise of more data sharing, firms will be able to take advantage of unparalleled risk insights.
Built on agility and born into a fast-paced financial crime world, fintechs are in a unique position to lead the way. But traditional firms also have the opportunity to rise to the top by adopting a more agile approach. As firms build more adaptable financial crime teams, they will better protect themselves from loss and improve regulatory accountability, while protecting their customers and reputations.
Discover the power of AI for detecting financial crime with Transaction Monitoring by ComplyAdvantage.
Originally published 19 September 2023, updated 27 February 2024
Disclaimer: This is for general information only. The information presented does not constitute legal advice. ComplyAdvantage accepts no responsibility for any information contained herein and disclaims and excludes any liability in respect of the contents or for action taken based on this information.
Copyright © 2024 IVXS UK Limited (trading as ComplyAdvantage).