News has recently emerged of the first machine-learning-generated fingerprints. According to Wired.com, a group of computer scientists from New York University’s engineering department has managed to generate a series of “master prints” that not only pass smartphone fingerprint sensors, but can actually masquerade as prints from multiple users.
Even as evolving global regulations mandate the use of a “foolproof” biometric suite of strong authentication, hackers are already cracking the codes behind it. This calls into question the very meaning of strong authentication: is anything really impenetrable?
How far should businesses rely on point solutions to protect customer accounts and authenticate online payments? It appears that the only reliable approach to smart authentication is a layered solution that combines real-time elements of a user’s unique behavioural pattern with customer-focused, strong authentication that is inextricably linked to the online customer journey.
Only then can businesses genuinely detect unusual or high-risk scenarios before they threaten security defences and customer accounts. The swirling storm of privacy and security continues to loom on the horizon for every digital business, with the first test cases from GDPR starting to make headlines and the California Privacy Act likely not far behind.
Consumers should expect the businesses they transact with to protect their online accounts and personal information, but the line between security and data privacy continues to be tested in the process.
If 2018 began with businesses looking for new ways to better authenticate online users – particularly in Europe, where the evolution of PSD2 mandated stronger authentication on logins and payment transactions – what lengths will the fraudsters of 2019 go to in order to circumvent this security framework? Networks, automation and the use of bots and machines seem central to virtually all the predictions for how cybercrime will evolve this year.
Consider, for example:
- AI driven malicious chat bots / robots that can be used to dupe customers into divulging personal information
- Machine learning algorithms used to generate pitch-perfect, social engineering attacks based on real customer data
- IoT devices being taken over by external bots and used to spy on human interactions
- Networked global bot armies targeting multiple industries worldwide
- Networked fraud rings operating across industries – mules using financial services / telco / gaming and gambling companies to siphon money
It is clear that consumers do not expect to have to curtail their online transacting behaviour in the quest to thwart cybercrime. Yes, awareness campaigns around social engineering and ransomware threats, for example, are pivotal and non-negotiable. But consumers do not expect fraud and identity controls to interfere with the slick, low-friction online experience they have come to expect from their bank, social media sites and trusted e-commerce brands. The pressure is on businesses to ensure they do not jeopardize customer trust in the process of catching the criminals.
The ThreatMetrix Cybercrime Report: H2 2018 is based on actual cybercrime attacks from July – December 2018 that were detected by the ThreatMetrix Digital Identity Network (The Network) during real-time analysis and interdiction of fraudulent online payments, logins and new account applications.
The post Cybercrime report: The rise of the mobile bot appeared first on Payments Cards & Mobile.