The US Department of Defense has awarded the startup Hive AI a two-year, $2.4 million contract for its deepfake detection technology. Through the contract, the Defense Innovation Unit aims to accelerate the adoption of new technologies in the US defense sector. Hive AI’s models are designed to detect AI-generated video, image, and audio content.
Deepfakes have existed for nearly a decade, but generative AI has made them easier to create and far more realistic, turning them into a potent tool for disinformation campaigns and fraud. Defending against these threats is crucial for national security, says Captain Anthony Bustamante, project manager and cyberwarfare operator at the Defense Innovation Unit.
Hive AI’s technology is seen as essential in the fight against deepfakes. “This work is a vital step in strengthening our information advantage against sophisticated disinformation campaigns and threats from synthetic media,” Bustamante states. Hive was selected from a pool of 36 companies to test its deepfake detection and attribution technology with the Department of Defense. The contract could enable the department to identify and counter AI-driven deception at scale. Defense against deepfakes is “existential,” says Kevin Guo, CEO of Hive AI. “This is the evolution of cyber warfare.”
Hive’s technology is trained on a vast amount of content, some AI-generated and some not. It identifies signals and patterns in AI-generated content that are invisible to the human eye but detectable by an AI model. “It turns out every image generated by these generators contains this kind of pattern if you know where to look,” Guo explains. The Hive team continuously tracks new models and updates its technology accordingly.
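Guo does not reveal what those telltale patterns are. In published research on AI-image detection, one common family of signals is statistical traces in an image's frequency spectrum, which generators tend to leave behind and humans cannot see. The sketch below is purely illustrative, not Hive's method: it computes a single toy feature, the share of spectral energy at high spatial frequencies, of the kind a real detector would learn automatically from labeled training data.

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Toy feature: fraction of spectral energy above a radial frequency cutoff.

    Generative models can leave subtle, systematic traces in the frequency
    spectrum of their outputs; a production detector learns such signals
    from large labeled datasets rather than using one hand-picked feature.
    """
    # 2-D Fourier transform, shifted so low frequencies sit at the center
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized radial distance from the center of the spectrum
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    high = spectrum[r > cutoff].sum()
    return float(high / spectrum.sum())

# A crude "detector" would threshold this feature, with the threshold and
# weighting learned from examples of real and AI-generated images.
rng = np.random.default_rng(0)
img = rng.random((64, 64))  # stand-in for a grayscale image
print(high_freq_energy_ratio(img))
```

A single hand-crafted feature like this is far too weak on its own; the point is only that machine-measurable regularities can exist in generated images even when nothing looks wrong to the eye.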
The tools and methods developed under this initiative have the potential to be adapted for broader use, not only to address defense-specific challenges but also to protect civilian institutions from misinformation, fraud, and deception, according to the Department of Defense.
Siwei Lyu, a professor of computer science and engineering at the University at Buffalo, says Hive’s technology offers cutting-edge performance in detecting AI-generated content. He is not involved in Hive’s work but has tested its detection tools. Ben Zhao, a professor at the University of Chicago who independently evaluated Hive AI’s deepfake technology, agrees, but cautions that it is not foolproof.
“Hive is certainly better than most commercial companies and some of the research techniques we’ve tested, but we’ve also shown that it’s not hard to bypass,” Zhao says. His team found that attackers could manipulate images in ways that evade Hive’s detection. Given how rapidly generative AI is evolving, it remains uncertain how these tools will perform in the real-world scenarios the defense sector faces, Lyu adds.
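Zhao's team has not published the specific manipulations here, but the general idea behind such evasion is well known: if a detector relies on subtle statistical fingerprints, lightly post-processing an image can wash those fingerprints out. The hypothetical sketch below (again, not Hive's detector or Zhao's attack) shows a mild blur suppressing the high-frequency energy that a toy spectral detector keys on.

```python
import numpy as np

def spectral_score(img: np.ndarray) -> float:
    """Toy detector feature: share of energy above a radial frequency cutoff."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return float(spec[r > 0.25].sum() / spec.sum())

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Slight smoothing: a low-pass filter that attenuates high frequencies."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(1)
fake = rng.random((64, 64))  # stand-in for a generated image
# Blurring lowers the high-frequency score, nudging the image toward
# whatever threshold the detector uses -- while barely changing its look.
print(spectral_score(fake), spectral_score(box_blur(fake)))
```

Real attacks are more sophisticated (adversarial perturbations optimized against the detector itself), but the cat-and-mouse dynamic is the same: each new evasion forces detectors to retrain, which is why Hive continuously tracks new models.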
Guo says Hive provides its models directly to the Department of Defense, so the department can run the tools offline and on its own devices, keeping sensitive information from leaking. But off-the-shelf products are not enough when state-sponsored deepfake attacks loom, Zhao argues: “There’s very little they can do to prepare for unforeseen attacks at the state level.”