AI: Tackling The New Frontier In Cybercrime

Grigory Yusupov, Regional Director UK and Rest of the World at IDnow

Today, we are witnessing in real time how developments in artificial intelligence (AI) are having adverse effects on cybersecurity.

Driven by rapid technological advancements, AI is presenting unprecedented opportunities and challenges for businesses worldwide.

Cybercriminals are increasingly empowered by sophisticated AI tools that enable them to carry out malicious activities with greater ease and efficiency.

The surge in AI is making it alarmingly simple for fraudsters to execute financial crimes. It is now possible to bypass traditional digital security measures with deepfake imagery or social engineering attacks powered by AI.

This means that even criminals with minimal technical skills can launch sophisticated scams, posing a significant threat to organisations and individuals alike.

The deepfake threat is real

One of the most concerning developments is the rise of generative AI, which can now be used to create hyper-realistic deepfake documents and videos.

In fact, deepfake fraud poses a growing threat to biometric processes, such as those used in identity verification. AI-enabled deepfake scams, including presentation and injection attacks, are elevating fraud to new levels of sophistication.

Presentation attacks involve displaying a fake document or image directly to a camera, via a photo or screenshot. Injection attacks introduce false data into a system without using a camera, often by manipulating data capture or communication channels.
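
To make the distinction concrete, here is a minimal, hypothetical sketch of how a verification pipeline might triage the two attack types. The helper names, the virtual-camera list and the liveness threshold are illustrative assumptions, not any vendor’s actual API.

```python
# A minimal sketch (not a vendor API) of triaging the two attack types.
# CaptureEvent, VIRTUAL_CAMERA_HINTS and the 0.5 threshold are assumptions.
from dataclasses import dataclass

# Device labels commonly associated with software ("virtual") cameras,
# one route for injection attacks that bypass a physical sensor.
VIRTUAL_CAMERA_HINTS = {"obs virtual camera", "manycam", "v4l2loopback"}

@dataclass
class CaptureEvent:
    device_label: str      # camera name reported by the OS or browser
    liveness_score: float  # 0.0-1.0 output of a presentation-attack detector

def triage(event: CaptureEvent) -> str:
    """Flag a capture as a candidate injection or presentation attack."""
    # Injection check: the frames never came from a physical camera.
    if event.device_label.lower() in VIRTUAL_CAMERA_HINTS:
        return "suspected injection attack"
    # Presentation check: a photo or screen shown to a real camera tends
    # to score poorly on liveness (glare, moire, missing 3D depth cues).
    if event.liveness_score < 0.5:
        return "suspected presentation attack"
    return "no attack indicators"

print(triage(CaptureEvent("OBS Virtual Camera", 0.9)))  # suspected injection attack
print(triage(CaptureEvent("FaceTime HD Camera", 0.2)))  # suspected presentation attack
```

Real systems draw on many more signals, but the split illustrates why the two attack types call for different defences: injection attacks are caught at the capture channel, presentation attacks at the image itself.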

According to market experts, presentation attacks are roughly 10 to 100 times more common than injection attacks – but both should concern businesses of all kinds.

As AI develops further, social engineering attacks on organisations will also become easier and faster because criminals will need less and less technical know-how to execute them.

Unlike other forms of cyberattack, social engineering attacks, such as phishing, exploit human vulnerabilities and emotions.

Cybercriminals can also use generative AI to target exposed people within organisations, either to extract information or for financial gain.

Unfortunately, our latest consumer research revealed that more than half (54%) of Britons do not know what social engineering is.

So, it is vital to equip everyone – whether within an organisation or among the wider public – with the skills to spot potential AI-led cyberattacks and the appropriate tools to stop them before they succeed.

Fight fire with fire

As AI becomes an ever more accessible tool for cybercrime, UK organisations should look to adopt end-to-end online fraud prevention solutions that rely on multi-layered tools and services.

Many of the tools and technologies in these ‘layers of defence’ should be AI-powered, to fight fire with fire, if you will.
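
As a purely illustrative sketch of what such a layered setup can look like, the snippet below combines per-layer risk scores into a single decision. The layer names, weights and review threshold are assumptions, not a specific product’s configuration.

```python
# Illustrative only: layer names, weights and the review threshold are
# assumptions, not a specific product's configuration.
LAYERS = {
    "document_check": 0.4,   # template/forgery analysis of the ID document
    "biometric_check": 0.4,  # face match plus liveness detection
    "behaviour_check": 0.2,  # behavioural and device risk signals
}

def overall_risk(scores: dict[str, float]) -> float:
    """Weighted combination of per-layer risk scores, each in 0.0-1.0.
    A layer with no score defaults to 1.0 (maximum risk) so a skipped
    check can never silently lower the overall risk."""
    return sum(weight * scores.get(name, 1.0) for name, weight in LAYERS.items())

# A session that passes document and biometric checks but behaves oddly.
session = {"document_check": 0.1, "biometric_check": 0.2, "behaviour_check": 0.9}
risk = overall_risk(session)
print(f"risk={risk:.2f} ->", "manual review" if risk > 0.25 else "approve")
```

In practice each ‘layer’ is a full service rather than a single number, but the principle is the same: independent signals are combined so that no single bypassed check lets a fraudster through.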

Failing to keep up with AI-enabled threats would be negligent for any business. Education and technological know-how will help combat the continuing rise in cybercrime attacks reinforced by AI.

A key development to be aware of is the addition of so-called risk signals: leading digital identity technologies are being trained on behavioural biometrics, such as typing patterns and mouse movements, and are incorporating these signals into their security processes.
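
To illustrate the idea, the following hypothetical sketch compares a user’s typing cadence against a stored profile. The features, profile values and tolerance are assumed for demonstration and are far simpler than a production behavioural-biometrics model.

```python
# Hypothetical sketch: the stored profile, features and tolerance are
# assumptions, far simpler than a production behavioural-biometrics model.
from statistics import mean, stdev

def keystroke_features(key_times_ms: list[float]) -> tuple[float, float]:
    """Mean and spread of the intervals between successive keystrokes."""
    intervals = [b - a for a, b in zip(key_times_ms, key_times_ms[1:])]
    return mean(intervals), stdev(intervals)

def matches_profile(observed: tuple[float, float],
                    profile: tuple[float, float],
                    tolerance: float = 0.35) -> bool:
    """Accept if each observed feature is within a relative tolerance of the profile."""
    return all(abs(o - p) <= tolerance * p for o, p in zip(observed, profile))

# Profile learned from past sessions: ~180 ms between keys, ~40 ms spread.
profile = (180.0, 40.0)
# A bot, or a different person, typing with an unusually fast, even rhythm:
observed = keystroke_features([0, 60, 125, 180, 245, 300])
print("behaviour matches stored profile:", matches_profile(observed, profile))  # False
```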

Detecting patterns from historic customer interactions will also strengthen cybercrime prevention: device signals and template signals from known fraudulent ID documents will make it much harder for cybercriminals to carry out their attacks effectively.
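
As a toy illustration of the template-signal idea, the sketch below checks a document’s image hash against hashes of known fraudulent templates. The hash strings, distance threshold and helper names are stand-ins for real perceptual hashing and image-forensics tooling.

```python
# Toy example: a real system would use perceptual image hashing and
# image forensics, not this simplified bit-string comparison.
KNOWN_FRAUD_TEMPLATE_HASHES = {
    "1110001011010011",  # hash of a forged-passport template seen before
    "0101110001110100",  # hash of a known fake driving-licence template
}

def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length hashes differ."""
    return sum(x != y for x, y in zip(a, b))

def matches_known_template(doc_hash: str, max_distance: int = 2) -> bool:
    """Flag a document whose hash is near any known fraudulent template."""
    return any(hamming_distance(doc_hash, h) <= max_distance
               for h in KNOWN_FRAUD_TEMPLATE_HASHES)

print(matches_known_template("1110001011010111"))  # True: 1 bit from a known forgery
print(matches_known_template("0000111100001111"))  # False
```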

Education is key

Our research also found that 33% of Brits have shared scans or photos of an ID card, driving licence or passport via insecure digital channels, such as social media or email, despite knowing that these ID documents could land in the wrong hands.

This can lead to identity theft, which should be a core concern for the UK public, especially given the rise of deepfake technology.

We believe the British public needs a better understanding of the threats posed by AI use in cybercrime.

As a start, people should be taught to think twice before sending a scan or photo of official ID documents into the digital ether via unencrypted channels.

But what’s clear is that collaboration with businesses on this issue is critical, as know-how and technology alone cannot eradicate fraud.

There is no silver bullet for fighting fraud; it is something that manifests at every online touchpoint, affecting individuals and organisations equally.

The identity verification industry should work with businesses to address these forms of AI-enabled cybercrime and devise ways to combat them through technology, education and human expertise.

Importance of taking a hybrid approach

In our view, there will always be a need for human-led interventions, where automated solutions cannot yet provide sufficient protection against cybercrime and fraud.

Specially trained anti-fraud agents can provide additional layers of protection against cybercrime. For example, the human eye can often spot signs of AI use in areas where automated tools are not yet fully developed.

For social engineering attacks, which depend on the vulnerability of human targets, a hybrid approach is the most effective remedy: a trained anti-fraud expert can detect social cues that automated solutions cannot.

Overall, businesses should implement an effective cybercrime and fraud prevention strategy within their Know Your Customer (KYC) and internal operational processes.

Staying one step ahead of cybercriminals by addressing knowledge gaps – whether internally or among their customers – can improve general awareness of this growing threat.

The aim is to put the power of identity back into the hands of the people it belongs to, reducing friction for genuine customers and detecting high-risk or anomalous behaviour caused by AI.
