Powered by quasi-autonomous algorithms, deepfakes have gained in realism, sophistication and accessibility. They are yet another cyber threat to be taken seriously, not least by businesses.
You can't stop progress. A quick scroll through TikTok is enough to catch Tom Cruise praising the virtues of household products, mop in hand. It is obviously not the real actor but his synthetic double, DeepTom, a playful imposture created by special-effects expert Chris Umé. The 30-year-old from Limburg co-founded Metaphysic, a company that develops hyperrealistic, AI-based content-editing tools for the entertainment industry and major brands. He imagines virtual universes populated by our perfect digital twins.

That is the fun side of the technology. But deepfakes also have far less innocent uses, facilitated by the huge volume of video and voice recordings available on the web. Politics comes first, whether to compromise public figures or to misinform citizens. On the cyber front of the war in Ukraine, for example, a symbolic episode played out last March: President Volodymyr Zelensky appeared on the hacked site of a national channel to announce his surrender and urge his troops to lay down their arms. The deception convinced hardly anyone, as the overlay and animation of the fake head of state were of poor quality, but that is not always the case. It is becoming genuinely difficult to distinguish AI-generated objects and faces from real ones. Studies on the subject, such as one from Lancaster University, show that we tend to trust synthetic faces more than real ones, which raises questions about the impact of the criminal, or even military, exploitation of deepfakes.

Companies, among others, are not immune to fraudulent uses of these deepfakes. "Deepfakes have the potential to become a common tool in phishing and other fraud," note analysts at business strategy firm CB Insights. Deepfakes give new life to old tactics such as "CEO fraud". Previously, a scammer only had to send a fake email, supposedly from a superior, to a company employee to gain access to data or get money paid out. Today, artificial intelligence can imitate the boss's voice on the phone, or even his face in a videoconference. In 2020, crooks used the cloned voice of his superior to trick the manager of a bank in the United Arab Emirates into authorizing a transfer of 35 million dollars. Last June, the FBI warned of another use of deepfakes: the video job interview. This practice, common in large companies filling vacancies since the Covid crisis, lets counterfeiters armed with AI gain access to sensitive information.

A technology once reserved for researchers and IT engineers, the deepfake has never been more accessible via app stores and websites. "There are marketplaces where buyers post requests for deepfaked videos. Some companies provide deepfakes as a product or even as an online service," reports Europol's innovation lab. These technical advances, and their misuse, often outpace legislators and judicial authorities. In the United States, some jurisdictions have already reacted, imposing strict conditions on the creation and distribution of deepfakes to limit the danger. Major players like Meta (formerly Facebook) have also banned such montages from their platforms unless they are clearly intended as satire or parody.
In Europe, according to experts, the Digital Services Act tackles the problem half-heartedly by requiring the labeling of inauthentic content. In Belgium, the Secretary of State for Digitalization, Mathieu Michel, acknowledged in Parliament that the emergence of these new-generation identity thefts represents a "real challenge", and conceded that our legal arsenal provides only a partial answer. The Belgian security services are said to be briefed, a working group on hybrid threats is reportedly examining the cases, and a dedicated task force has reportedly been set up. Hardly enough to reassure our companies, already reluctant to invest more in cybersecurity... until the day they bear the brunt of an attack. We are indeed talking about an emerging but growing global cybersecurity threat. "The ability of computers or users to distinguish the true from the false is approaching zero," Microsoft Research Lab engineer Paul England told CB Insights analysts.

Admittedly, it is still possible today to detect the vast majority of manipulated content manually, by looking for inconsistencies. But this scrutiny is costly in training and processing time. Ideally, a system could scan any digital content and automatically assess its authenticity. Private initiatives are moving in this direction. Arguing that even an AI-assisted defense will never fully combat deepfakes, Microsoft has launched Project Origin, a service that authenticates digital media using tamper-proof metadata. Working to establish an official industry standard, the IT giant has also teamed up with five other companies (including Adobe, Intel and the BBC) to found the Coalition for Content Provenance and Authenticity. It goes without saying that the authenticity of content can only be guaranteed if such authentication methods are adopted at scale.
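The provenance idea behind initiatives like Project Origin can be sketched in a few lines: bind the publisher's metadata to a cryptographic hash of the exact media bytes, then sign the bundle so that any later edit to the file or its metadata becomes detectable. The Python sketch below is a simplified, hypothetical illustration, not the actual Project Origin or C2PA mechanism; real standards use public-key certificates tied to a verified publisher identity rather than the shared secret used here, and all function names are my own.

```python
import hashlib
import hmac
import json

def sign_media(media_bytes: bytes, metadata: dict, secret: bytes) -> dict:
    """Bind metadata to the exact bytes of a media file and sign the bundle."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    record = {"sha256": digest, **metadata}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(media_bytes: bytes, record: dict, secret: bytes) -> bool:
    """True only if both the metadata and the media bytes are untouched."""
    claimed = dict(record)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest())

# Any edit to the pixels or audio invalidates the record.
video = b"...original video bytes..."
rec = sign_media(video, {"source": "BBC", "captured": "2022-03-16"}, b"key")
assert verify_media(video, rec, b"key")              # authentic copy
assert not verify_media(b"deepfaked bytes", rec, b"key")  # tampered copy
```

The point of the design is that verification needs nothing beyond the file and its record: a swapped frame, a re-encoded soundtrack or an altered source field all break the check, which is what makes large-scale, automatic authenticity scanning plausible.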
Companies are scrambling to find new ways to detect and neutralize the deepfakes that threaten them. Big tech, consortia and start-ups are experimenting with all kinds of approaches: authentication software embedded in devices, cryptographic markers, and so on. Also worth mentioning are the European collaborative detection platform WeVerify and the Reality Defender platform, launched by the AI Foundation in 2020, which promises comprehensive analysis and actionable results; users can submit any suspicious content (video, image or audio recording) to it. All while keeping in mind that the evolving nature of this technological challenge demands educational efforts from politicians, business leaders and consumers alike.