State-Sponsored Hackers Using Popular AI Tools Including Gemini, Google Warns

Summary

Google’s Threat Intelligence Group (GTIG) warns of rising misuse of AI by state-sponsored hackers, particularly from North Korea, Iran, China, and Russia. These actors employ AI for technical research, large-scale open-source reconnaissance, target profiling, highly personalized phishing lures, and automated malware development. GTIG has also noted an increase in “model extraction” attempts, in which attackers try to duplicate proprietary AI by repeatedly querying existing models. AI tools, including Google’s Gemini, are being used to craft convincing phishing messages in local languages and professional tones, eliminating telltale signs such as poor grammar. GTIG further highlights early experimentation with agentic AI, which can autonomously support malicious tasks such as automating malware development.

Google is responding by publishing regular threat reports, maintaining continuous threat monitoring, and hardening Gemini against exploitation, while Google DeepMind works proactively to detect and neutralize malicious AI capabilities before deployment. Although no revolutionary new AI attack capabilities have emerged, there is a marked uptick in sophisticated, AI-assisted threat activity and the risks that come with it.