US Defense Invests in Hive AI to Combat Deepfake Threats

The US Department of Defense is investing $2.4 million over two years in deepfake detection technology from the startup Hive AI. The award comes through the Defense Innovation Unit, which uses contracts like this one to fast-track new technologies into the US defense sector. Hive AI’s models are designed to detect AI-generated video, image, and audio content.

Although deepfakes have been around for nearly a decade, generative AI has made them easier to create and more realistic, turning them into a potent tool for disinformation campaigns and fraud. Defending against these threats is crucial for national security, says Captain Anthony Bustamante, project manager and cyberwarfare operator at the Defense Innovation Unit.

Hive AI was selected from 36 companies to test its deepfake detection and attribution technology with the Department of Defense. The contract could enable the department to detect and combat AI deception on a large scale. According to Kevin Guo, CEO of Hive AI, defending against deepfakes is “existential” and represents the evolution of cyber warfare.

Hive’s models are trained on a large dataset that mixes AI-generated and authentic content. They pick up on signals and patterns in AI-generated material that are invisible to the human eye but detectable by a model: according to Guo, every image produced by these generators carries such patterns if you know where to look. The Hive team continually monitors new generative models and updates its detectors accordingly.
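Hive’s production models are proprietary and not described in the article, but the general technique it outlines, training a binary classifier on a mix of authentic and AI-generated images so it learns artifacts humans cannot see, can be sketched in a few lines. The sketch below is purely illustrative: the folder layout (`data/train` with `real/` and `generated/` subfolders) and the tiny CNN are assumptions for the example, not Hive’s actual pipeline.

```python
# Illustrative sketch only -- not Hive's actual architecture or data.
# Trains a small binary classifier to separate authentic images from
# AI-generated ones, the general approach the article describes.

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder maps each subdirectory ("generated", "real") to a class label.
# The data/train path is hypothetical.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A toy CNN stands in for whatever architecture a production system uses.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # two logits: real vs. AI-generated
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

The point of the example is the training signal, not the architecture: given enough labeled real and generated samples, even a simple classifier starts to latch onto generator fingerprints, which is why, as Guo notes, the training set must be refreshed whenever new generative models appear.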

The tools and methods developed under this initiative have the potential to be adapted for broader use, not only addressing defense-specific challenges but also protecting civilian institutions from disinformation, fraud, and deception, according to a statement from the Defense Department.

Siwei Lyu, a professor of computer science and engineering at the University at Buffalo, is not involved with Hive but has tested its detection tools; he says they offer state-of-the-art performance in detecting AI-generated content. Ben Zhao, a professor at the University of Chicago who also independently evaluated Hive AI’s deepfake technology, agrees, but cautions that it is far from foolproof.

Zhao points out that attackers can manipulate images in ways that bypass Hive’s detection. And given how quickly generative AI is evolving, Lyu adds, it remains uncertain how well the detection tools will perform in the real-world scenarios the defense sector may face.

Guo says Hive hands its models over to the Department of Defense, so the department can run the tools offline on its own devices and keep sensitive information from leaking out. However, Zhao argues that off-the-shelf products are not enough against state-level deepfake threats. “There’s very little they can do to prepare for unforeseen attacks,” he says.
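The article does not describe Hive’s deployment interface, but the offline, on-device setup Guo describes would look roughly like the sketch below: model weights stored on the local machine and inference run with no network call, so no media ever leaves the device. The file names (`detector.pt`, `suspect_frame.png`) are hypothetical.

```python
# Illustrative sketch of offline, on-device inference as the article
# describes it -- weights live locally, nothing is sent to a cloud API.
# File names are hypothetical, not Hive's actual deliverables.

import torch
from torchvision import transforms
from PIL import Image

# Load a locally stored TorchScript model from disk.
model = torch.jit.load("detector.pt")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

image = preprocess(Image.open("suspect_frame.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    logits = model(image)  # assumed shape (1, 2): [real, generated]
    prob_generated = torch.softmax(logits, dim=1)[0, 1].item()

print(f"P(AI-generated) = {prob_generated:.2f}")
```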

This article is by Melissa Heikkilä, an editor at the US edition of MIT Technology Review, covering developments in artificial intelligence.
