If a New Yorker’s credit card is used to buy jewellery in Bangkok, the bank might suspect that it has been stolen.
However, if half an hour before, their partner’s card on the same account has been used at a restaurant in the same city, the bank could surmise the couple are on holiday.
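In outline, that cross-card check amounts to a simple rule over recent activity on the same account. The sketch below is purely illustrative – the field names and the 30-minute window are assumptions, not Visa’s actual logic:

```python
from datetime import datetime, timedelta

def looks_legitimate(txn, recent_account_txns, window_minutes=30):
    """Heuristic: a foreign transaction is less suspicious if another
    card on the same account was used in the same city recently."""
    cutoff = txn["time"] - timedelta(minutes=window_minutes)
    return any(
        other["city"] == txn["city"] and other["time"] >= cutoff
        for other in recent_account_txns
        if other["card_id"] != txn["card_id"]
    )

txn = {"card_id": "A", "city": "Bangkok",
       "time": datetime(2024, 6, 1, 20, 0)}
partner = {"card_id": "B", "city": "Bangkok",
           "time": datetime(2024, 6, 1, 19, 40)}
print(looks_legitimate(txn, [partner]))  # True: partner's card seen nearby
```

The same jewellery purchase with no matching activity on the partner’s card would return False and merit closer scrutiny.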
Data analytics allows checks such as this to happen in seconds – the time between the card being inserted into an electronic reader and its authorisation, says Paul Eagles, vice-president of product and future payment risk at Visa Europe.
It is a far cry from the situation 20 years ago, when attempting to use your credit card seven times in a day, or spending more than £1,500 on a single transaction, might well have led to your card being blocked.
“Fraud detection needs to be as unobtrusive as possible,” says Mr Eagles. “The challenge is not to inconvenience customers.”
In addition to speeding up security checks, data analytics improves detection rates by allowing many factors to be considered simultaneously. In the past, financial institutions set “rules” that computers checked to decide whether or not to allow transactions. But this meant predicting what might constitute suspicious behaviour.
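A rules engine of that older sort boils down to hand-written thresholds like those described above (seven uses in a day, £1,500 in a single transaction) – a minimal sketch, with the limits taken from the article rather than any real system:

```python
def blocked_by_static_rules(daily_txn_count, amount_gbp):
    """Old-style fraud checks: fixed, hand-written thresholds.

    The limits here are the illustrative ones from 20 years ago,
    not any bank's current rules.
    """
    if daily_txn_count >= 7:   # too many uses of the card in one day
        return True
    if amount_gbp > 1500:      # single transaction too large
        return True
    return False

print(blocked_by_static_rules(3, 200))   # False: everyday spending
print(blocked_by_static_rules(8, 50))    # True: card used too often
```

The weakness is plain: every rule has to be anticipated in advance, and anything the rule-writers did not foresee slips through.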
Modern analytics, by contrast, lets suspicious patterns emerge from the data itself. This helps banks to spot the tell-tale signs they might never previously have thought of – the “unknown unknowns”, as Mike Rhodes, senior fraud consultant at SAS, the software company, calls them.
Data analytics can help assess liquidity and credit risk, and identify illegal trading, rate rigging, mis-selling, and fraudulent loan applications or insurance claims.
Such techniques can examine anything from the frequency, length and geographic location of phone calls, to the words and phrases used in emails and instant messages.
Anomaly modelling, as the name suggests, seeks out the unusual. In investment banking, for example, a high number of calls to a mobile phone registered in Russia could flag up a warning of insider trading. In retail banking, employee emails containing bank account numbers might suggest criminal activity.
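One common way to formalise “the unusual” is a statistical baseline, such as a z-score against historical activity. The sketch below is a generic textbook technique, not a description of any bank’s actual model; the call counts are invented:

```python
import statistics

def is_anomalous(value, history, threshold=3.0):
    """Flag a value more than `threshold` standard deviations
    from the historical mean (a classic z-score anomaly test)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Daily calls from one desk to a given foreign number (toy data)
history = [0, 1, 0, 2, 1, 0, 1, 1, 0, 2, 1, 0, 1, 1]
print(is_anomalous(14, history))  # True: a sudden spike stands out
print(is_anomalous(2, history))   # False: within normal variation
```

The same test can be run over transaction amounts, login times or message volumes – anything with a measurable baseline.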
Text mining also helps to uncover dubious correspondence. It can look for specific giveaway words and phrases such as “trading illegally”, “illicit trade” and “keep it quiet”. Subtler signals matter too: an alert might be triggered by an abnormally large number of congratulatory remarks, such as “great”, “thank you” and “excellent work”.
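In its crudest form, this kind of text mining is phrase matching plus a frequency count. The toy sketch below illustrates the idea using the article’s example phrases; production systems use far more sophisticated natural-language processing:

```python
import re

GIVEAWAY_PHRASES = ["trading illegally", "illicit trade", "keep it quiet"]
CONGRATULATORY = {"great", "excellent", "thank"}

def flag_message(text, congrats_threshold=3):
    """Return the reasons, if any, to flag a message (toy heuristic)."""
    lower = text.lower()
    reasons = [p for p in GIVEAWAY_PHRASES if p in lower]
    words = re.findall(r"[a-z']+", lower)
    congrats = sum(1 for w in words if w in CONGRATULATORY)
    if congrats >= congrats_threshold:
        reasons.append("unusually many congratulatory remarks")
    return reasons

print(flag_message("Keep it quiet until Friday."))
# ['keep it quiet']
print(flag_message("Great work! Excellent. Thank you, really great stuff."))
# ['unusually many congratulatory remarks']
```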
Meanwhile, motor insurance companies are using text mining to identify fraudulent claims. A genuine claimant who has been in a car accident and is recalling an event from memory is likely to describe the experience by reliving what actually happened, for example: “I was walking down the road and a vehicle drove into me.”
A fraudster, on the other hand, is more likely to say: “I walked down the road and a car hit me.” People using the second form of words are 60 per cent more likely to be fraudulent than those using the first.
“They are not describing what happened to them but what they imagine inside their heads,” says Mr Rhodes. “Anyone using words ending in ‘-ed’ should be on top of the list for investigation.”
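Mr Rhodes’s observation about “-ed” verb forms can be caricatured as a past-tense density check. This is a toy illustration of the linguistic cue only, not SAS’s model, and the suffix test is a deliberately crude proxy:

```python
import re

def simple_past_ratio(statement):
    """Fraction of words ending in '-ed' – a crude proxy for the
    detached, simple-past narration described by Mr Rhodes."""
    words = re.findall(r"[a-z]+", statement.lower())
    if not words:
        return 0.0
    return sum(1 for w in words if w.endswith("ed")) / len(words)

genuine = "I was walking down the road and a vehicle drove into me."
fraud = "I walked down the road and a car hit me."
print(simple_past_ratio(genuine) < simple_past_ratio(fraud))  # True
```

A real claims model would combine many such features and score them statistically rather than rely on one suffix.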
Scott Paton, a partner in the financial services practice at PA Consulting, says financial institutions are under pressure from regulators to invest in social network analysis and other readily available data to understand what conversations are taking place, and “where there is contact with people known to be dodgy”.
Investigators can also look at share price fluctuations, Mr Paton says. “When these are outside the norm, they can see who’s involved, who they’re connected to and what communications are taking place.”
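Tracing “who they’re connected to” is, at bottom, a graph traversal over a contact network. The sketch below uses a breadth-first search over invented contact data – the names and links are illustrative only:

```python
from collections import deque

# Toy contact graph: who has communicated with whom (invented data)
CONTACTS = {
    "trader_a": {"broker_x", "analyst_y"},
    "broker_x": {"trader_a", "outsider_z"},
    "analyst_y": {"trader_a"},
    "outsider_z": {"broker_x"},
}

def reachable(person, max_hops=2):
    """Everyone within `max_hops` communications of `person` (BFS)."""
    seen, queue = {person}, deque([(person, 0)])
    while queue:
        current, hops = queue.popleft()
        if hops == max_hops:
            continue
        for contact in CONTACTS.get(current, ()):
            if contact not in seen:
                seen.add(contact)
                queue.append((contact, hops + 1))
    seen.discard(person)
    return seen

print(sorted(reachable("trader_a")))
# ['analyst_y', 'broker_x', 'outsider_z']
```

Run against everyone who traded around an abnormal price move, a search like this surfaces the wider circle of contacts worth examining.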
In future, the scope of data analytics will move beyond numbers and words to include images, video and voice recognition.
For example, thanks to the increasing use of social media such as YouTube and Facebook, fraudulent whiplash claimants will find it harder to escape detection, especially if the photos or footage they have posted online show they were nowhere near the scene of an alleged car accident.
However, some compliance officers are reluctant to use data analytics for fear of breaching privacy legislation.
“They are very concerned about what they are allowed to do, even using publicly available data on social media,” says Mr Rhodes at SAS.
But, he says, such fears are misguided, because this sort of analysis is permitted when it is being used for fraud detection.