Understanding the Risks and Protections for AI Systems in Business Applications

AIsecurity : Understanding the Risks and Protections for AI Systems in Business Applications

As generative AI becomes integrated into business applications, companies face new dangers. Attacks on language models are linked to social engineering, and Retrieval Augmented Generation (RAG) is particularly vulnerable. Prof. Dr. Patrick Levi discusses these issues in a short interview. Prof. Dr. Patrick Levi, a professor of machine learning in industrial applications at the Technical University of Amberg-Weiden, focuses on AI security and information management.

Convincing AI chatbots or large language models to behave undesirably has become a common challenge and can be amusing. However, these “jailbreaks” are serious attacks and can be dangerous. They are harmless when used for fun, like writing a Christmas card. But when large language models are used for serious applications, they can become a threat, especially if a chatbot handles confidential information or needs to provide reliable recommendations.

Attacks on AI systems are similar to social engineering against artificial intelligence. The goal is to trick the system. Traditional attacks exploit specific vulnerabilities, while social engineering aims to provoke behavior that is advantageous for the attacker but harmful for the victim.

Retrieval Augmented Generation (RAG) is popular because it allows models to use custom data without retraining. It is used in chatbots that use internet searches and office assistants that process notes, emails, and other data. RAG systems are vulnerable because attackers can inject compromised texts, a form of data poisoning. This is inherent to RAG systems: if a RAG system processes emails, attackers can insert texts by simply sending emails. This allows persistent attacks, unlike typical jailbreaks.

To protect against these threats, tailor the RAG system or any AI application to its specific purpose and narrowly define its use. Limit user interactions, API access, database interfaces, and other functions to serve only this purpose. Understanding how attacks on generative AI work, the methods used, and what attackers can achieve is crucial. Similar to red-teaming approaches in traditional IT security, AI systems should be tested for vulnerabilities.

Prof. Levi, thank you for your insights. In the new iX magazine and the featured articles on heise+, readers can learn about attacks on large language models, the specific dangers of RAG systems, and how to prepare for the security measures required by the AI Act. The iX 1/2025 is available in the heise shop and at newsstands.