AI Misuse: Identity Theft and the Growing Threat from Criminals

The head of the Rhineland-Palatinate State Criminal Police Office (LKA), Mario Germano, warns about identity theft risks due to the misuse of artificial intelligence (AI) by criminals. In an interview with the German Press Agency in Mainz, he discussed how AI advancements allow for the manipulation of publicly available audio and video files, potentially leading to severe consequences.

Germano explains, “I share video and voice data, and now AI doesn’t need much more to create perfect audio and video files.” He adds, “Someone can type what a person should say, and it will be of such quality that it is indistinguishable from the real person.”

Criminals can misuse AI to manipulate video and audio files, making it difficult to tell whether a call or video chat is genuine. For now, potential perpetrators still need to do some detective work to find a suitable victim, but AI will soon handle this task as well.

Germano gives the example of a company executive instructing an employee over the phone to transfer money. In the future, AI could easily identify such pairs of individuals on its own. “If the voice matches, and eventually even the video image, it becomes increasingly difficult for the person on the other end to realize they are not interacting with who they think they are.”

The rapid development of tools like ChatGPT is both fascinating and concerning. “There is enormous potential for criminals,” says Germano, citing personalized phishing and fake emails as examples. For instance, a fake law firm might announce an inheritance from abroad, with a fake website to back it up. “AI will enable criminals to create perfectly designed websites in any language.”

AI significantly eases the work of criminals. Germano notes, “I can even instruct AI to draft a letter with maximum credibility.” Criminals no longer need to handle these tasks themselves: in text chats or even voice chats, an AI could sit on the other side, communicating flawlessly with the victim about the fake correspondence. “A single criminal can now commit crimes that previously required a call center with 100 multilingual employees.”

“Much of this is still futuristic,” Germano says, “but we are already seeing what’s possible.” Developers of ChatGPT have documented initial cases where attempts were made to create malware using their tool. AI could be asked to program phishing software, offering criminals “enormous opportunities for little to no cost.”

Will people limit their digital presence? AI can be trained on specific audio, video, or images of a person. “The trend in recent years has been to build digital presences,” Germano says, mentioning platforms like LinkedIn, YouTube, and TikTok. “The potential of digital sources for training AI has grown tremendously.”

Germano advises families to agree on code words to guard against scams such as fake calls. “Always think twice or three times about whether you really need to upload a video to your YouTube channel explaining how your lawnmower works,” he suggests. This precaution can help avoid becoming a target for criminals misusing AI for identity theft or fraud.