Criminals have crossed a new threshold. Google Threat Intelligence Group (GTIG) today released a report confirming the first known case of criminal hackers using artificial intelligence to develop a working zero-day exploit—a previously unknown software vulnerability weaponized for attack before defenders can patch it.
The exploit targeted a two-factor authentication bypass in a widely used open-source web-based system administration tool. Written in Python, it was designed for mass deployment. The attackers' implementation contained errors that likely prevented successful use, but the weapon itself was functional. Google disclosed the vulnerability to the vendor; a patch has since been issued.
GTIG researchers cited multiple indicators of AI assistance. The code contained a hallucinated severity score—an invented numerical rating not generated by any standard vulnerability scoring system. The Python formatting was textbook-perfect. Detailed help menus and educational docstrings, phrased in the style of training data, appeared throughout. The researchers explicitly noted that Google's Gemini model was not used in this case.
The vulnerability itself revealed why AI excels at this task. It stemmed from a semantic logic flaw: a developer had hardcoded a trust assumption, embedding high-level conceptual errors that traditional security scanners miss because the code appears functionally correct. Frontier large language models can reason about developer intent and surface these dormant logic errors in ways signature-based tools cannot.
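The report does not publish the vulnerable code, but the class of flaw it describes can be illustrated with a minimal, hypothetical sketch. The function below is syntactically clean and passes ordinary functional tests, yet it embeds a hardcoded trust assumption that skips the second authentication factor—exactly the kind of semantic logic error that signature-based scanners miss but a model reasoning about developer intent can flag. All names and values here are invented for illustration.

```python
# Hypothetical illustration of a semantic logic flaw (not the actual
# vulnerable code from the GTIG report). The code "works", so traditional
# scanners see nothing wrong; the flaw lives in the trust assumption.

TRUSTED_PREFIX = "10.0."  # developer assumed internal callers are pre-verified

def requires_second_factor(username: str, client_ip: str) -> bool:
    """Decide whether to prompt the user for a one-time code."""
    # Hardcoded trust assumption: any request whose source address merely
    # starts with the internal prefix bypasses 2FA entirely -- including
    # requests a reverse proxy forwards with an internal address.
    if client_ip.startswith(TRUSTED_PREFIX):
        return False  # "internal traffic is safe" -- the dormant logic error
    return True

# An attacker who can route a request through the internal segment
# skips the second factor without tripping any signature-based tool.
print(requires_second_factor("admin", "10.0.3.7"))     # internal: 2FA skipped
print(requires_second_factor("admin", "203.0.113.9"))  # external: 2FA required
```

The point of the sketch is that nothing here is a memory-safety bug or a known bad pattern; the vulnerability exists only at the level of intent, which is why reasoning-capable models surface it where pattern matching cannot.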
"There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun," said John Hultquist, chief analyst at GTIG. "For every zero-day we can trace back to AI, there are probably many more out there. Threat actors are using AI to boost the speed, scale and sophistication of their attacks."
The report documents this capability spreading across threat categories. State-backed groups from China, North Korea, and Russia now use AI across the full attack chain. Criminal groups leverage it to accelerate malware development and expand operation scale.
North Korea's APT45 has sent thousands of repetitive prompts to recursively analyze vulnerabilities and validate proof-of-concept exploits—building arsenals impractical without AI assistance. An actor linked to China, designated UNC2814, employed expert-persona jailbreaking to manipulate Gemini into researching pre-authentication remote code execution flaws in TP-Link router firmware and Odette File Transfer Protocol implementations.
Agentic tools—AI systems capable of autonomous action—are entering operations. A China-nexus actor used the Hexstrike and Strix frameworks with the Graphiti memory system to probe a Japanese technology firm and an East Asian cybersecurity platform. The tools pivoted between reconnaissance capabilities based on internal reasoning, requiring minimal human oversight.
The report also details PROMPTSPY, an Android backdoor that calls the Gemini application programming interface.