Threats from malicious cyberactivity are likely to increase as nation-states, financially motivated criminals, and novices increasingly incorporate artificial intelligence into their routines, the UK’s top intelligence agency said.
The assessment, from the UK’s Government Communications Headquarters, predicted ransomware will be the biggest threat to get a boost from AI over the next two years. AI will lower barriers to entry, a change that will bring a surge of new entrants into the criminal enterprise. More experienced threat actors—such as nation-states, the commercial firms that serve them, and financially motivated crime groups—will likely also benefit, as AI allows them to identify vulnerabilities and bypass security defenses more efficiently.
“The emergent use of AI in cyber attacks is evolutionary not revolutionary, meaning that it enhances existing threats like ransomware but does not transform the risk landscape in the near term,” Lindy Cameron, CEO of the GCHQ’s National Cyber Security Centre, said. Cameron and other UK intelligence officials said that their country must ramp up defenses to counter the growing threat.
The assessment, which was published Wednesday, focused on the effect AI will likely have in the next two years. The chances of AI increasing the volume and impact of cyber attacks in that timeframe were described as “almost certain,” the GCHQ’s highest confidence rating. Other, more specific predictions listed as almost certain were:
- AI improving capabilities in reconnaissance and social engineering, making them more effective and harder to detect
- More impactful attacks against the UK as threat actors use AI to analyze exfiltrated data faster and more effectively, and to use that data to train AI models
- Beyond the two-year threshold, commoditization of AI improving the capabilities of financially motivated and state actors
- Continued use of AI in 2025 and beyond by ransomware criminals and other types of threat actors that already employ it
Some caveats apply
Security researcher Marcus Hutchins said parts of the assessment overstated the benefit AI would provide to people pursuing malicious cyberactivity. Among the exaggerations, he said, is the claim that AI removes barriers to entry for novices. “I believe the best phishing lures will always be the ones written by a human,” he said in an interview. “I don’t think AI will enable better lures, but better scale. Instead of a single perfect phishing lure, you might be able to output several hundred decent ones in the same time. AI is very good at quantity, but these models still struggle a lot when it comes to quality.”

Another way AI might improve phishing and other social engineering lures is by digesting huge amounts of internal data obtained in previous breaches. By training a large language model on data stolen from a specific target, attackers can craft lures that refer to the target’s particular characteristics, such as the specific suppliers it uses, to make the pretext seem more convincing.

A thread on Mastodon provided a broader view of security experts’ reactions to the assessment. The assessment also included a table summarizing the various benefits, or “uplifts,” from AI in the next two years and how they applied to specific types of threat actors.