Sophisticated AI tools may help guard data, but they also aid cybercrime.
To protect data and money in the digital space, financial institutions turn to artificial intelligence (AI) tools to recognise patterns in fraudulent transactions, but how useful are they?
According to Tim Phillipps, 60, Deloitte’s SEA leader of forensic & analytics, such systems are a double-edged sword. “It’s an advantage for us, but a great advantage for the criminals too, because they understand what we’re looking for. They take advantage of transaction models and less human intervention.”
This is because data protection systems aimed at thwarting fraud rely on machine learning – machines are "trained" to look for patterns of behaviour based on previous examples of misconduct. Their ability to rapidly analyse enormous amounts of data allows them to flag suspicious activity much more effectively than humans.
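The pattern-based screening described above can be illustrated with a toy sketch. The data, function name, and threshold here are all hypothetical, not any vendor's actual model: a transaction is flagged when its amount deviates sharply from a customer's historical behaviour.

```python
# Minimal sketch (hypothetical data and threshold): flag new transactions
# whose amount is a statistical outlier against a customer's history,
# a toy stand-in for the learned behaviour patterns described above.
from statistics import mean, stdev

def flag_suspicious(history, new_amounts, z_threshold=3.0):
    """Return the new amounts whose z-score against history exceeds the threshold."""
    mu, sigma = mean(history), stdev(history)
    return [a for a in new_amounts if abs(a - mu) / sigma > z_threshold]

history = [120.0, 95.0, 130.0, 110.0, 105.0, 125.0]  # past transaction amounts
print(flag_suspicious(history, [115.0, 9800.0]))  # only the 9800.0 outlier is flagged
```

Real systems learn far richer features (merchant, location, timing, device) from labelled examples of misconduct, but the principle is the same: deviation from an established pattern raises a flag.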
However, criminals are also getting more sophisticated. The digital landscape has allowed for a new normal in financial crime, in which hackers leverage technology to be smarter, too. If companies focus only on anomalies identified by surveillance tools, fraudsters can circumvent them by avoiding patterns of behaviour that would trigger attention. In this respect, reduced human oversight could actually aid misconduct.
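The evasion problem can be made concrete with a deliberately simple, hypothetical rule (the limit and function are illustrative only): a check that flags single transfers above a fixed limit misses the same total split into smaller payments that each stay under it.

```python
# Toy illustration (hypothetical rule): a surveillance check that only flags
# single transfers above a fixed limit is blind to "structured" payments that
# deliberately stay below the trigger.
LIMIT = 10_000

def flagged(transfers, limit=LIMIT):
    """Return the transfers that exceed the single-transaction limit."""
    return [t for t in transfers if t > limit]

print(flagged([15_000]))               # lump sum is flagged: [15000]
print(flagged([4_000, 5_500, 5_500]))  # same total, split up: [] (nothing flagged)
```

This is why reduced human oversight matters: a person reviewing the account might notice three same-day transfers summing to 15,000, while the rule alone sees nothing anomalous.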
Phillipps points out that the broader discussion also concerns AI itself, and not falling prey to "AI imposters". Anti-fraud models come with varying levels of technology, he explains, and business owners need to know exactly what they need, instead of buying software just because it's labelled AI-driven.