Google has observed that several state-sponsored hacker groups are using the AI-powered assistant Gemini to enhance their productivity and explore potential attack targets. According to Google’s Threat Intelligence Group (GTIG), hackers primarily use Gemini to work more efficiently rather than developing novel AI-driven cyberattacks capable of bypassing traditional defense mechanisms.
Popularity of Gemini Among APT Groups
According to Google, APT groups from Iran and China have been particularly active in using Gemini. Common uses include:
- Assisting with programming tasks for tool and script development
- Researching publicly known security vulnerabilities
- Translating and explaining technologies
- Gathering information about target organizations
- Searching for methods to evade detection, escalate privileges, or conduct internal reconnaissance in compromised networks
Use Cases Vary by Country of Origin
Hackers have employed Gemini at various stages of the attack cycle, with the focus varying by country of origin:
- Iranian actors used Gemini intensively for reconnaissance, phishing campaigns, and influence operations.
- Chinese groups targeted US military and government organizations, focusing on vulnerability research, scripts for lateral movement and privilege escalation, and post-compromise activities.
- North Korean APTs used Gemini to support multiple phases of the attack cycle, including reconnaissance, malware development, and obfuscation techniques, with a particular focus on North Korea’s covert IT worker program.
- Russian actors engaged minimally, mainly for script support, translation, and payload creation. They may prefer AI models developed in Russia or avoid Western platforms for operational security reasons.
Failed Attempts to Misuse Gemini
Google also noted attempts to use publicly available jailbreaks against Gemini or to rephrase prompts in order to circumvent the platform’s security measures. These attempts have so far been unsuccessful.
Similarly, OpenAI, the developer of the popular chatbot ChatGPT, reported in October 2024 that as generative AI systems become more widespread, the risk of misuse increases, especially in models with inadequate protective measures. Security researchers have already shown that restrictions in some widely used systems, such as DeepSeek R1 and Alibaba’s Qwen 2.5, can be easily bypassed.