AI has changed the way content is created. (Yes, this includes images from tools like Midjourney, DALL-E and Firefly.) While AI has admittedly opened up a world of creative possibilities, it also presents a growing threat in the form of false or maliciously altered digital images, a.k.a. deepfakes.
Gartner's prediction that "20% of cyberattacks in 2023 involved deepfakes" serves as a reminder of the severity of this issue. Deepfakes can be used to impersonate individuals, trick people into divulging sensitive information, or even frame innocent people for crimes they didn't commit. The financial implications are severe, too, with the potential for massive losses due to identity theft and fraudulent transactions.
As AI technology continues to advance, so too will the sophistication of deepfakes. It's a cat-and-mouse scenario, with deepfake creators constantly pushing the boundaries of what's possible and security experts scrambling to stay ahead.
So yeah, the future of this technology is uncertain, but one thing remains clear:
We need to be vigilant and proactive in addressing the risks posed by deepfakes.
Thales, a provider of advanced technologies specializing in defense and security, aeronautics and space, and cybersecurity and digital identity, recently announced a solution to combat deepfakes.
The Thales metamodel is a clever tool that uses a variety of techniques to sniff out fake images, like a digital detective analyzing them with a keen eye for inconsistencies. The brains behind this technology are the AI experts at cortAIx, Thales's AI accelerator, a team dedicated to pushing the boundaries of AI for good. They've even created a special toolbox called BattleBox to test the limits of AI systems and develop countermeasures against potential attacks.
One of the metamodel's tricks is to compare an image with its textual description using a method called CLIP (Contrastive Language-Image Pre-training). If the picture doesn't match the words, it's likely a fake.
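For the curious, the gist of that check can be sketched in a few lines. Bear in mind this is a minimal illustration using the publicly available CLIP checkpoint on Hugging Face; the model name, the caption and the 0.25 similarity threshold are illustrative assumptions, not Thales's actual setup.

```python
# Minimal sketch of a CLIP image/text consistency check.
# Assumption: the "openai/clip-vit-base-patch32" checkpoint and the 0.25
# threshold are illustrative picks, not Thales's configuration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_matches_caption(image_path: str, caption: str, threshold: float = 0.25) -> bool:
    """Return True if the image embedding sits close enough to the caption embedding."""
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # CLIP returns L2-normalized embeddings, so cosine similarity is a dot product.
    similarity = torch.nn.functional.cosine_similarity(
        outputs.image_embeds, outputs.text_embeds
    ).item()
    return similarity >= threshold

# A manipulated image whose content no longer matches its claimed
# description should score low here.
print(image_matches_caption("photo.jpg", "an official ID photo of a person"))
```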
Another technique, DNF (Diffusion Noise Feature), uses the way AI generates images to spot forgeries. It's like reverse engineering the process and looking for the telltale signs of AI-generated content, akin to finding the fingerprints of a digital artist.
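Thales hasn't published how its DNF detector works under the hood, but the general recipe from the research literature can be sketched like this: run the image through a diffusion model's noise predictor and treat the predicted noise as a fingerprint for a classifier. Everything below (the toy predictor, image size and timestep) is a deliberately simplified stand-in, not the real thing.

```python
# Toy, DNF-style sketch (not Thales's implementation): the "noise" a diffusion
# model predicts for an image acts like a fingerprint of its generator.
# ToyNoisePredictor stands in for a real pretrained U-Net (e.g. one loaded
# via the diffusers library); the timestep and sizes are illustrative.
import torch
import torch.nn as nn

class ToyNoisePredictor(nn.Module):
    """Stand-in for a pretrained diffusion noise-prediction network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, image: torch.Tensor, timestep: int) -> torch.Tensor:
        # A real model would condition on the timestep; the toy one ignores it.
        return self.net(image)

def dnf_feature(image: torch.Tensor, predictor: nn.Module, timestep: int = 50) -> torch.Tensor:
    """Flatten the predicted noise into one feature vector per image."""
    with torch.no_grad():
        noise = predictor(image, timestep)
    return noise.flatten(start_dim=1)

# A small linear head would then separate "real" from "generated" fingerprints.
classifier = nn.Linear(3 * 64 * 64, 2)
image = torch.rand(1, 3, 64, 64)  # dummy 64x64 RGB image
logits = classifier(dnf_feature(image, ToyNoisePredictor()))
print(logits)  # [real_score, fake_score]; untrained here, so meaningless values
```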
The DCT (discrete cosine transform) method, on the other hand, is more of a forensic analyst. It examines the underlying frequency structure of an image and looks for subtle anomalies that might indicate tampering.
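Here's what that kind of frequency forensics can look like using SciPy's discrete cosine transform. The choice of high-frequency band and the single energy-ratio statistic are illustrative assumptions; a real detector would learn its decision rule from labeled real and fake images.

```python
# Simple sketch of DCT-based frequency analysis on a grayscale image.
# The band split and the energy-ratio statistic are illustrative, not
# Thales's actual recipe.
import numpy as np
from scipy.fft import dctn

def high_freq_energy_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy in the high-frequency corner of the 2D DCT.
    Many generators leave unnatural regularities (peaks or gaps) up there."""
    coeffs = dctn(gray_image.astype(float), norm="ortho")
    energy = coeffs ** 2
    h, w = energy.shape
    high_band = energy[h // 2:, w // 2:]  # bottom-right quadrant = high frequencies
    return float(high_band.sum() / energy.sum())

# Example with synthetic data; real use would compare the ratio (or the full
# spectrum) against statistics measured on known-real photos.
rng = np.random.default_rng(0)
photo_like = rng.normal(size=(256, 256))
print(f"high-frequency energy ratio: {high_freq_energy_ratio(photo_like):.3f}")
```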
“Thales’s deepfake detection metamodel addresses the problem of identity fraud and morphing techniques,” said Christophe Meyer, Senior Expert in AI and Chief Technology Officer of cortAIx, Thales’s AI accelerator. “Aggregating multiple methods using neural networks, noise detection and spatial frequency analysis helps us better protect the growing number of solutions requiring biometric identity checks. This is a remarkable technological advance and a testament to the expertise of Thales’s AI researchers.”
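To make the "aggregating multiple methods" idea concrete, here's a toy sketch in which each detector emits a fake-likelihood score and a small logistic meta-classifier combines them. The detector names, weights and logistic form are invented for illustration; Thales hasn't disclosed how its metamodel actually weighs its components.

```python
# Hedged sketch of a "metamodel" aggregator: combine per-detector scores
# (CLIP mismatch, noise fingerprint, DCT anomaly) into one fake probability.
# All names and numbers below are illustrative assumptions.
import math

def aggregate(scores: dict[str, float], weights: dict[str, float], bias: float) -> float:
    """Weighted logistic combination of per-detector fake scores in [0, 1]."""
    z = bias + sum(weights[name] * score for name, score in scores.items())
    return 1.0 / (1.0 + math.exp(-z))  # probability the image is fake

# In practice the weights would be learned on labeled real/fake data.
weights = {"clip_mismatch": 2.0, "noise_fingerprint": 3.0, "dct_anomaly": 1.5}
scores = {"clip_mismatch": 0.8, "noise_fingerprint": 0.7, "dct_anomaly": 0.4}
print(f"P(fake) = {aggregate(scores, weights, bias=-3.0):.2f}")
```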
Tools like the Thales metamodel are essential for maintaining trust and accuracy in the digital realm. So, the next time you encounter an image online, remember the Thales metamodel and the team of experts working tirelessly to safeguard our digital world.
Edited by Alex Passett