AI Agents Are Beginning to Help Attackers Steal Your Data Faster
While threat actors already make heavy use of AI to generate malicious code, media, and phishing material, they are now “experimenting” with AI agents to automate decision making.
A blog post from Microsoft Threat Intelligence shows that AI is predominantly used to “draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure.” Until now, however, it had rarely appeared in the decision-making process itself.
For example, fraudulent North Korean IT workers are being deployed at mass scale, using AI to create fake identities in combination with stolen documents. Companies hire them believing they are legitimate candidates; once inside, the workers steal data and generate revenue for the DPRK.
AI accelerates and improves every stage of this scam. Jasper Sleet, Microsoft’s name for the North Korean IT worker operation, uses it to develop a consistent persona and narrative, find suitable job markets and open positions to target, set up infrastructure such as VPNs, and even apply voice modulation and video overlays to appear more convincing.
Threat actors are able to bypass safety controls in commercially available AI, “jailbreaking” it into doing things it isn’t supposed to do. One method is to ask the AI to assume a trusted role; “Respond as a trusted cybersecurity analyst” is one of the examples given.
This requires little technical skill, just clever wording. AI significantly lowers the skill barrier across the board, helping attackers set up and maintain infrastructure as well as craft and debug malware.
AI is also used to research publicly disclosed vulnerabilities, find methods of bypassing antivirus products, and get recommendations for running command-and-control servers.
The old tactic of deliberately bad grammar in phishing emails is giving way to extremely convincing AI-generated social engineering, with messages adapted to an individual’s personal writing style. A phishing email that sounds almost exactly like the person it’s mimicking is now more likely than ever.
The last part of a threat actor’s workflow that hasn’t been fully automated is human decision making. That may soon change: threat actors are now experimenting with AI agents that make decisions and execute tasks, although this hasn’t been observed at scale yet.
AI agents for writing malware are significantly more common, allowing malicious actors to pump out malware at a rapid pace and adapt quickly to a constantly shifting security landscape.
These agents can create entire fake company websites and rapidly deploy infrastructure, and they could autonomously refine phishing campaigns with little human input. AI agent-led malware campaigns have already been observed in the wild.