Deepfake financial fraud: $243,000 stolen from company

Experts have reported the first known case of financial fraud using a deepfake, an artificial-intelligence technology for synthesizing video and audio content that closely resembles the original.

The fraudsters recreated the voice of the company's CEO almost flawlessly and, using the fake recordings, tricked an employee into transferring 220 thousand euros (243 thousand US dollars) to their bank account.

The Wall Street Journal (WSJ) was the first to report the incident. According to the paper, information security experts regard the case as a dangerous precedent.

“As an identity verification company, we are seeing a significant increase in fraud that uses artificial intelligence methods. In business, one can no longer blindly trust a voice when someone introduces himself and gives orders. Both businesses and ordinary users are beginning to realize how important identity checks are. With the advent of deepfakes, confidence in phone calls and videos has begun to decline,” said David Thomas, CEO of Evident.

The WSJ report, published at the end of last week, was based on the account of a representative of the insurance company Euler Hermes Group SA. The source declined to name the affected company but described the incident in detail.

The fraudsters carried out their scam in March. The chief executive of the energy company did not doubt for a moment that he was speaking on the phone with his boss, the head of a large German holding company. The “boss” urgently asked him to transfer €220 thousand to the account of a certain supplier in Hungary, promising that the full amount would soon be returned to the branch. The money was transferred, but that was not enough for the criminals. They repeated the trick, but this time the victim suspected fraud and refused to pay.

The scammers moved the stolen funds from Hungary to Mexico and then on to accounts in other countries. The insurer reimbursed the affected company's losses, but the lesson was instructive.

According to Pindrop statistics, the number of voice fraud incidents increased by 350% from 2013 to 2017, and in one out of every 638 cases the caller used synthesized speech.

Cybersecurity vendors actively promote AI technologies that help developers automate the functions of security applications and help businesses spot anomalies in the operation of computer systems and networks. At the same time, the incident at the energy company shows that the same methods can be put to malicious use: attackers can employ them to automate phishing attacks, bypass voice authentication, or search for and exploit zero-day vulnerabilities at scale.

“Business structures should not lose their vigilance. To counter AI threats, they should use adequate verification methods, such as multi-factor authentication, face recognition, and comprehensive identity verification,” David Thomas warned.

Fortunately, he said, modern authentication tools are able to detect such attempts.
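
To make Thomas's advice concrete, here is a minimal sketch of out-of-band confirmation for high-value payment orders: instead of trusting the voice on the line, the employee calls the requester back on a number from a trusted directory before any money moves. All names, thresholds, and functions below (PaymentRequest, confirm_via_callback, APPROVAL_THRESHOLD_EUR) are hypothetical illustrations, not part of any real system or the scheme described in the article.

```python
# Hypothetical sketch: out-of-band verification of a payment request.
# Nothing here is a real API; names and the threshold are invented.

from dataclasses import dataclass

APPROVAL_THRESHOLD_EUR = 10_000  # assumed policy limit, not from the article

# Directory of pre-verified phone numbers for callback confirmation.
KNOWN_CONTACTS = {"ceo": "+49-XXX-XXXX"}

@dataclass
class PaymentRequest:
    requester: str       # who claims to have ordered the transfer
    amount_eur: float
    beneficiary_iban: str

def confirm_via_callback(contact_id: str) -> bool:
    """Call the requester back on a number from the trusted directory,
    never on the number the inbound request came from."""
    number = KNOWN_CONTACTS.get(contact_id)
    if number is None:
        return False
    # Placeholder for a human or automated callback with a challenge code.
    print(f"Calling back {number} to confirm the order...")
    return input("Did the requester confirm? [y/n] ").strip().lower() == "y"

def process(request: PaymentRequest) -> None:
    if request.amount_eur < APPROVAL_THRESHOLD_EUR:
        print("Below threshold: standard processing.")
        return
    # A voice on an inbound call is not an authentication factor on its
    # own; require an independent channel before moving money.
    if confirm_via_callback(request.requester):
        print(f"Transfer of EUR {request.amount_eur:,.0f} approved.")
    else:
        print("Verification failed: transfer blocked, incident reported.")

if __name__ == "__main__":
    process(PaymentRequest("ceo", 220_000, "HU42..."))
```

The design point is that verification happens on an independent channel the attacker does not control; however convincing the synthesized voice on the inbound call is, it contributes nothing to the approval decision.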

Daniel Zimmermann

Daniel Zimmermann has been writing on security and malware subjects for many years and has worked in the security industry for over 10 years. Daniel was educated at Saarland University in Saarbrücken, Germany, and currently lives in New York.
