
Deep Fakes Are Plaguing the Banking Industry — How Can We Stop This?

By: Andy Syrewicze, Security Evangelist

Generative AI has turned the world upside down since its inception, and its popularity shows no signs of slowing. These rapid technological advances have forced companies to scrutinize their cybersecurity as criminals explore and exploit AI's uses.

Banks increasingly find themselves victims of deepfake fraud, and unfortunately for the financial services sector, this genre of digital identity theft is set to boom.

Last month, the Treasury Department's Financial Crimes Enforcement Network issued an alert to help financial institutions spot scams associated with the use of deepfake media. FinCEN has seen an increase in suspicious activity reports from financial institutions describing the use of deepfake media. The reports detail the use of fraudulent identity documents to evade identity verification and authentication methods: falsified documents, photographs, and videos created via generative AI.

Earlier this year, a finance worker at a multinational firm was tricked into paying out $25 million to fraudsters who used deepfake technology to pose as the company’s Chief Financial Officer during a video conference call. The elaborate scam duped the worker into attending a video call with what he thought were several other staff members, but all of whom were, in fact, deepfake recreations.

This alarming occurrence is becoming increasingly common. Cybercriminals are finding loopholes to navigate and manipulate safety protocols. When one firm experiences a high-profile, gen AI-enabled scam, not only does it face the loss of customer trust and regulatory fines, but the entire sector runs the risk of reputational damage.

What can be done and how can banks counter this?

Above all, effective employee training has to be in place. Employees should be considered the first line of defense against deepfake attacks. It is crucial to provide ongoing guidance and training about the latest cybersecurity best practices, how deepfakes are utilized, and how to recognize them. Employees should also be advised on how to respond if a deepfake is suspected in order to stop further damage.


Banks must ensure their employees are equipped to recognize potential risks by building an effective human firewall, contributing to company-wide cyber resilience. This includes adopting the ‘mindset-skillset-toolset’ approach:

Mindset – Raise awareness among employees about growing cyber threats


Skillset – Combine awareness training with simulations for employees

Toolset – Incorporate tools that support secure behavior by employees

While banks have many different security measures in place, they must implement proper protocols and processes to ensure that sensitive assets like passwords, data, and core financial functions are being protected and are hardened against potential attacks.

Using complex passwords and multi-factor authentication (MFA) is key. While it might be common to use simple passwords because they are easier for users to remember, they are also easier to crack. Ensuring all employees use complex passwords of sufficient length prevents cybercriminals from gaining access to company systems. Employees should also be required to use MFA and/or passkeys as an extra security barrier and should be trained to flag any suspicious activity.
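A password policy of this kind is simple to enforce in code. The sketch below is illustrative only: the minimum length and character classes are example values, not a recommendation from any particular standard, and a real deployment would pair this with MFA enforcement and breached-password checks.

```python
import re

def password_is_strong(password: str, min_length: int = 14) -> bool:
    """Check a password against a simple complexity policy:
    a minimum length plus upper-case, lower-case, digit, and
    symbol character classes. Thresholds are illustrative."""
    if len(password) < min_length:
        return False
    checks = [
        re.search(r"[A-Z]", password),          # upper-case letter
        re.search(r"[a-z]", password),          # lower-case letter
        re.search(r"\d", password),             # digit
        re.search(r"[^A-Za-z0-9]", password),   # symbol
    ]
    return all(checks)
```

In practice, length matters more than character-class variety, so a policy like this is usually combined with a long minimum length and a deny-list of common passwords.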

A supplementary strategy to improve security is to restrict login attempts. Cybercriminals often use brute force to gain access to sensitive information, rapidly trying many common passwords in hopes of guessing the right one. Restricting login attempts helps surface a high number of failed logins, which is often one of the first indicators of an ongoing attack, and should be a core part of any business's security posture.

It is also important to monitor login patterns and alert appropriate teams when suspicious activity occurs as part of an ongoing and enforced business process. While threat actors are using generative AI and deepfakes to their advantage, it is important that banks too are staying on top of new trends and maintaining strong internal processes to protect themselves from potential harm.
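Monitoring login patterns usually means scoring each login against what is normal for that user. The heuristic below is a hypothetical sketch: the signals (unfamiliar country, off-hours access) and the business-hours range are example assumptions, and a real system would also weigh device fingerprints, IP reputation, and login velocity.

```python
def suspicious_login_reasons(hour: int, country: str,
                             known_countries: set[str],
                             business_hours: range = range(7, 20)) -> list[str]:
    """Return the reasons a login looks anomalous; an empty list
    means no flags. Signals and thresholds are illustrative."""
    reasons = []
    if country not in known_countries:
        reasons.append("login from unfamiliar country")
    if hour not in business_hours:
        reasons.append("login outside business hours")
    return reasons
```

The returned reasons can be routed to an alerting queue so the appropriate team reviews flagged logins as part of an enforced business process, rather than leaving detection ad hoc.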

Lastly, it is imperative to lock down permissions: a lax approach to who can access what can result in untold damage if sensitive information falls into the wrong hands. Permissions need to be managed on a need-to-know basis, and these in turn must be audited and updated regularly to keep things as watertight as possible.
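A regular permission audit can be partly automated by flagging grants that have gone unused for too long, which are candidates for revocation under a need-to-know policy. The records and the 90-day review interval below are hypothetical examples.

```python
from datetime import date, timedelta

REVIEW_AFTER = timedelta(days=90)  # illustrative audit interval

def stale_grants(grants: list[tuple[str, str, date]],
                 today: date) -> list[tuple[str, str]]:
    """Given (user, resource, last_used) records, return the
    (user, resource) pairs unused for longer than the review
    interval -- candidates for revocation or re-approval."""
    return [(user, resource)
            for user, resource, last_used in grants
            if today - last_used > REVIEW_AFTER]
```

Running a check like this on a schedule, with each flagged grant requiring an owner to re-approve or revoke it, turns "audit regularly" from a policy statement into an enforced process.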

It seems like a lot, but it's better to prepare for the worst than to scramble during an attack. Should the worst happen, a solid response plan minimizes financial, legal, and reputational risks. Key steps include complying with deepfake laws, recovering critical infrastructure, isolating affected systems, and establishing a clear decision-making hierarchy.

It's clear that the generative AI boom is not slowing down. Not only is it being used to ramp up cybersecurity efforts, but it is conversely being used to craft increasingly sophisticated attacks. Cybercriminals are looking for loopholes and are using gen AI to find and exploit them. Banks must stay informed about how to protect themselves and their customers. This is no longer optional: it must be a fundamental part of business operations.


