In 2023, we witnessed the widespread adoption of AI tools such as ChatGPT, DALL-E, and Bard. However, along with the positive advancements came negative ones, particularly in the realm of fraud. To gain a better understanding of how AI is shaping the future of fraud prevention, The Fintech Times recently hosted a webinar. Led by Polly Jean Harrison, features editor of The Fintech Times, the webinar featured a panel of industry experts: Stuart Wells, CTO at Jumio; Kevin Lee, VP of digital trust and safety at Sift; and Chris Gerda, former risk and fraud prevention officer at Bottomline Technologies.
Together they explored the latest trends in AI-powered fraud, how fraudsters are using AI for their own purposes, and how businesses can leverage it to prevent fraud and protect their customers.
Here are the five key takeaways:
1. Fraudsters are increasingly leveraging AI to perpetuate fraudulent activities
Kicking off the session, Lee pointed out that while AI, especially tools like ChatGPT, has been widely discussed lately, it's important to understand that the fraud landscape involves many types of AI and adjacent technologies. For example, he noted that fraudsters frequently use bots and scripts to steal from businesses and consumers. He also mentioned that his team has seen an uptick in 'fraud-as-a-service,' wherein fraudsters offer to defraud others for a fee, such as by launching bot attacks or other large-scale attacks against a particular platform.
Highlighting the surge in generative AI techniques, Wells noted that deepfakes are on the rise. Fraudsters can now effortlessly bypass webcams or phone cameras to inject deepfake content. Unlike traditional attacks, where fraudsters simply placed a photograph in front of the camera, taped a picture onto a mobile phone, or wore silicone masks, deepfakes are far more sophisticated. They have evolved to the point where they are undetectable to the untrained eye. As such, it is vital to develop machine learning models capable of detecting these intricate and advanced forms of attack.
2. Detecting AI-powered fraud comes with its own set of challenges
“It’s not only that fraudsters can target various devices, but they can do it sophisticatedly at scale,” Wells remarked, adding that they can take private information and leverage it for blackmail attempts. One contributing factor to this trend is that fraudsters can easily access source code and synthetic IDs that enable them to create deepfakes, accumulate vast amounts of data and even generate synthetic data.
Gerda further shared that if a deepfake presents a convincing voice and face that the system expects to encounter, it can establish trust. One of the challenges, therefore, lies in firms retraining their systems to incorporate additional data points, such as usernames, passwords, operating systems, cloud machines and more. By treating the device information used to access an account as a validation signal during authentication, these systems gain an extra layer of security. This approach helps establish patterns that make it significantly easier to combat deepfakes and other AI-driven fraud.
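The device-signal approach Gerda describes could be sketched as follows. This is a minimal illustration, not any vendor's actual implementation; the field names, profile structure, and equal weighting of signals are all assumptions made for clarity.

```python
# Hypothetical sketch: scoring a login attempt against a user's known
# device history, as one extra validation signal alongside credentials.
# Field names, signals, and weighting are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class DeviceProfile:
    """Device attributes observed on past successful logins for one user."""
    operating_systems: set = field(default_factory=set)
    user_agents: set = field(default_factory=set)
    ip_prefixes: set = field(default_factory=set)  # e.g. first two octets


def device_trust_score(profile: DeviceProfile, login: dict) -> float:
    """Return a score in [0, 1]; higher means the device looks familiar."""
    checks = [
        login.get("os") in profile.operating_systems,
        login.get("user_agent") in profile.user_agents,
        login.get("ip", "").rsplit(".", 2)[0] in profile.ip_prefixes,
    ]
    return sum(checks) / len(checks)


profile = DeviceProfile(
    operating_systems={"macOS 14"},
    user_agents={"Safari/17.0"},
    ip_prefixes={"203.0"},
)

# Familiar device: every signal matches the user's history.
known = {"os": "macOS 14", "user_agent": "Safari/17.0", "ip": "203.0.113.7"}
# Unfamiliar device: nothing matches, so step-up verification is warranted.
unknown = {"os": "Windows 11", "user_agent": "Chrome/120", "ip": "198.51.100.9"}

print(device_trust_score(profile, known))    # 1.0
print(device_trust_score(profile, unknown))  # 0.0
```

In practice, a low score would not block the login outright but would trigger step-up verification, which is consistent with the panel's framing of device data as one validation signal among several rather than a sole gatekeeper.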
3. Enhancing fraud resilience through data sharing
Drawing from his experience on the operations team, Lee shared that "getting dev resources can often pose a challenge for numerous companies." He remarked that this is where AI can be instrumental, as it can power tools that streamline SQL queries for identifying fraud rings and attack vectors. These tools can promptly notify teams of unusual spikes in traffic patterns and facilitate quicker responses.
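The spike-alerting Lee mentions could be sketched with a simple statistical check over event counts pulled from such queries. This is an illustrative assumption, not a description of Sift's tooling; the data, threshold, and z-score method are chosen only to show the idea.

```python
# Hypothetical sketch: flagging unusual spikes in hourly event counts,
# the kind of alert layered on top of SQL query results.
# The threshold and data are illustrative assumptions.

from statistics import mean, stdev


def find_spikes(counts, z_threshold=2.0):
    """Return indices whose count sits more than z_threshold
    sample standard deviations above the mean of the series.

    The threshold is modest because a single large outlier also
    inflates the sample standard deviation it is measured against.
    """
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > z_threshold]


# Hourly signup counts; hour 5 is an obvious bot-driven burst.
hourly_signups = [102, 98, 110, 95, 105, 900, 101, 99]
print(find_spikes(hourly_signups))  # [5]
```

A production version would more likely compare against a rolling or seasonal baseline, but the core idea, automatically surfacing anomalies so a human can respond faster, is the same.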
Wells and Gerda outlined the significance of a threat consortium in which businesses can share information, identity patterns and data related to fraud attacks. Pooling this knowledge would reduce the risk posed by unfamiliar types of AI-driven fraud attack, and it could also help cut costs.
Wells further stressed the importance of collaboration and cooperation, including the sharing of data and best practices among fraud experts. Such efforts can help firms become more responsive, leading to increased customer satisfaction and a reduced risk of fraud.
4. Businesses need to continuously adapt their fraud prevention strategy
Wells stressed that firms must safeguard their data assets by implementing the right set of analytics and encryption. It’s crucial to foster a culture of security awareness and provide the necessary training to create a robust fraud prevention strategy. This approach will empower businesses to take the essential steps in reducing the risk of fraud effectively.
Highlighting the human element in fraud detection, Lee stressed the significance of teams understanding a company’s vulnerabilities and how to reverse engineer fraud attacks. He added, “If we don’t figure it out internally, someone externally will do it for us.” Therefore, risk teams should be proactive in identifying potential threats and look for ways their platforms might be compromised.
5. Businesses must balance using AI with user privacy and security
Lee noted that as regulations continue to evolve, organizations must work with their legal teams to determine the types of data they can process and use. Especially given the large volumes of data they handle internally, organizations must exercise a high degree of scrutiny.
Wells underscored the importance of firms having an ethical governance policy. But having a governance policy is one thing; enforcing it is another challenge entirely. Additionally, firms should ensure that their machine learning models are explainable, especially considering the growing speed and scale of cyberattacks.
The webinar concluded on the note that fraudsters will continue to leverage machine learning models and AI. Therefore, businesses also need to embrace AI to out-innovate the fraudsters.