Protect yourself from AI deepfakes

Cybercriminals recently used "deepfake" footage of a global corporation's executives to trick an employee at its Hong Kong office into transferring $25.6 million to fraudsters. The employee believed the firm's UK-based chief financial officer had requested the transfer during a video conference call in which several participants were deepfakes. Police have reportedly arrested six people in connection with the fraud. This is a deceptive and dangerous use of AI technology, and deepfakes and other AI scams are likely to affect more firms unless appropriate policies and procedures are in place.

Deepfakes 101: A growing threat

"Deepfakes" are digitally manipulated images, videos, and audio recordings that appear to depict real people. They are made by training an artificial intelligence system on genuine footage of a person, then using it to generate realistic-looking but fabricated media. Deepfakes are becoming more common. The Hong Kong case was only the latest in a string of well-publicized incidents in recent weeks: fake, explicit images of Taylor Swift spread across social media; a deepfake video of an imprisoned Pakistani candidate was used by his political party to deliver a speech; and a deepfake "voice clone" of President Biden called primary voters, urging them not to cast ballots.

Less publicized uses of deepfakes by criminals have also been growing in scope and sophistication. Cybercriminals now try to bypass voice verification in the banking industry by using voice clones of real customers to impersonate them and access their money. In response, banks have strengthened their ability to detect deepfakes and raised authentication standards.

Cybercriminals have also used deepfakes in "spear phishing" attacks against specific individuals. A common tactic is to use a voice clone to impersonate someone on a phone call, fooling that person's friends and relatives into sending money to a third-party account.

According to a McAfee survey conducted last year, almost half of respondents said they would send money if a friend or family member called claiming to have been robbed or involved in a car accident. Seventy percent of respondents also said they were not confident they could tell a real person's voice from a voice clone.

Cybercriminals have also called individuals while posing as banks, insurers, tax authorities, and healthcare providers in an attempt to obtain personal and financial information.

The Federal Communications Commission ruled in February that calls using AI-generated human voices are illegal unless the person being called has given prior express consent.

Protecting yourself from deepfakes

To safeguard employees and the company's reputation against deepfakes, executives should follow these guidelines:

  1. Give employees ongoing training on emerging AI capabilities and the associated risks, including information about scams that use AI.

  2. Update phishing guidance to account for deepfake risks. Many businesses already warn staff about phishing emails and advise caution with suspicious requests arriving in unsolicited messages. That guidance should now cover AI deepfake schemes and note that they may use audio, video, and images in addition to text and email.

  3. Strengthen or adjust authentication for employees, business partners, and customers as appropriate. For instance, require multiple authentication methods depending on the risk and sensitivity of a given decision or transaction.

  4. Consider how deepfakes could affect brand assets such as logos, advertising characters, and marketing campaigns. Deepfakes can easily mimic such assets, which can then spread quickly through social media and other online channels.

  5. Consider how your business will inform stakeholders about these risks and mitigate them.
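The risk-based authentication idea in point 3 can be sketched in code. The sketch below is a hypothetical illustration, not any bank's actual policy: the threshold amounts, function names, and factor counts are all assumptions chosen for the example. The principle is that higher-value transactions require more independent verification factors, so a single convincing video call is never enough on its own.

```python
# Hypothetical sketch of risk-based ("step-up") authentication.
# Thresholds and factor counts are illustrative assumptions only.

RISK_TIERS = [
    (10_000, 1),        # low-value transfer: one factor (e.g. password)
    (100_000, 2),       # mid-value: add a one-time code
    (float("inf"), 3),  # high-value: add an out-of-band callback or in-person check
]

def required_factors(amount: float) -> int:
    """Return how many independent factors a transfer of `amount` requires."""
    for threshold, factors in RISK_TIERS:
        if amount <= threshold:
            return factors
    return 3  # defensive fallback

def authorize(amount: float, verified_factors: int) -> bool:
    """Approve the transfer only if enough independent factors were verified."""
    return verified_factors >= required_factors(amount)
```

Under a policy like this, a $25.6 million transfer would demand an out-of-band confirmation, such as a callback to a known phone number, that no deepfaked video conference could satisfy by itself.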

Given the pace at which generative AI is developing, the number of major elections taking place in 2024, and the ease with which deepfakes spread between individuals and across national borders, expect deepfakes to become more numerous and more convincing.

While deepfakes pose a cybersecurity threat, businesses should also treat them as a complex, emerging phenomenon with broader implications. A proactive, deliberate approach is needed to educate stakeholders and ensure that countermeasures against deepfakes are adequate, proportionate, and responsible.

Yasmin Anderson

AI Catalog's chief editor
