
AI’s role in cybersecurity has evolved from boardroom buzzword to operational threat. As headlines oscillate between “AI revolution” and “AI apocalypse,” one question persists for security leaders: can AI really hack you?
The real answer? It depends on your attack surface, your controls, and your adversary. In 2025, while the most complex attacks still require a human in the loop, AI is already accelerating how attackers operate by scanning for weaknesses, crafting payloads, and probing defenses at speeds we’ve never seen before.
This shift isn’t theoretical. It’s redefining what effective defense looks like.
To understand the reality behind the hype, we need to unpack how AI is accelerating attacks, where it introduces new vulnerabilities, and what defenders must do to keep up.
The dual threat architecture
AI is changing the risk landscape in two key ways:
Attack acceleration
AI speeds up traditional attack methods. It is transforming how threat actors operate by making social engineering, malware development, and reconnaissance faster, more scalable, and harder to detect.
Attack surface expansion
The widespread adoption of AI tools is introducing new types of vulnerabilities. These often appear in areas like large language model interfaces, development pipelines, and enterprise integrations.
Understanding this dual threat is essential to building a resilient, AI-aware security posture.
How AI is reshaping the attacker’s playbook
AI isn’t inventing new types of cybercrime. But it is transforming how familiar attack techniques scale, adapt, and succeed. From impersonation to post-exploitation, threat actors are using AI to move faster, target smarter, and stay harder to detect.
Understanding how attackers use AI today is essential to building the right defenses for tomorrow.
AI-accelerated social engineering
Identity is no longer a fixed asset. In early 2024, criminals impersonated a company's CFO on a live video call using deepfake audio and visuals. The result was a $25 million fraud executed with off-the-shelf tools, not state-backed sophistication.
This is not an isolated case. In 2024, 53 percent of financial professionals reported encountering deepfakes, and 43 percent admitted to falling for them. By Q1 2025, the volume of deepfake incidents had already exceeded the total for the previous year.
Phishing has followed a similar trajectory. Once easy to identify, phishing emails are now AI-generated, well-written, and highly personalized. In the second half of 2024, phishing volume increased by 202 percent, largely driven by AI-enabled content.
Modern campaigns reference real contacts, reflect company context, and mirror internal tone and style. In a recent red team assessment, our AI-generated phishing emails matched internal communications so closely that more than half of recipients engaged.
This isn’t the exact campaign referenced above, but it’s a similar AI-enabled phishing assessment we ran for a telecoms client.
Read the full case study here.
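Defenders can still catch many of these campaigns at the identity layer. As a minimal illustration (not CovertSwarm tooling), the sketch below flags mail where a monitored executive's display name is paired with an external sender domain, a classic impersonation tell. The domain and names are hypothetical placeholders.

```python
import email.utils

INTERNAL_DOMAIN = "example.com"          # assumption: your corporate domain
EXEC_NAMES = {"jane doe", "john smith"}  # assumption: display names worth monitoring

def flags_impersonation(from_header: str) -> bool:
    """Flag mail pairing a monitored display name with an external sender domain."""
    name, addr = email.utils.parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    return name.strip().lower() in EXEC_NAMES and domain != INTERNAL_DOMAIN

# A spoofed CFO on a look-alike domain is flagged; legitimate internal mail is not.
print(flags_impersonation('"Jane Doe" <jane.doe@examp1e.com>'))  # True
print(flags_impersonation('"Jane Doe" <jane.doe@example.com>'))  # False
```

A heuristic this simple obviously will not stop a deepfake video call, but it illustrates why identity checks should key on verified sender infrastructure rather than on how convincing the content reads.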
How threat actors combine AI tools
Attackers are no longer relying on a single AI model. Instead, they are combining platforms to execute different stages of the attack chain.
Google’s Threat Intelligence Group identified Iran-linked APT42 using Gemini for reconnaissance and phishing content. Chinese APT groups used the same platform to generate Active Directory commands and troubleshoot post-exploitation tooling.
Anthropic’s Claude has also been manipulated to support credential processing and camera system compromise. Instead of asking it to perform overt tasks, attackers issued sequences of harmless-looking prompts. These combined to produce actionable payloads.
This is not about full automation. It reflects a shift to modular, AI-assisted tradecraft. Tasks are split across different tools and stages, helping attackers avoid detection and scale their efforts.
When AI platforms become the target
As AI adoption grows, the tools themselves are becoming high-value targets.
Recent vulnerabilities highlight this shift:
- Copilot Studio – SSRF (CVE-2024-38206) exposing Microsoft cloud infrastructure
- EchoLeak – Zero-click prompt injection in Microsoft 365 Copilot (CVE-2025-32711), delivered through standard-looking emails
- Anthropic MCP Inspector – RCE (CVE-2025-49596) triggered by visiting a malicious web page
These are not edge cases. AI platforms are being integrated into codebases, support systems, and critical workflows. Many are treated as trusted interfaces without adequate validation or internal monitoring.
As a result, AI systems have become high-impact, low-awareness assets. Their integration depth, abstraction layers, and general-purpose capabilities make them harder to secure and far more attractive to sophisticated adversaries.
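The common thread in these flaws is that AI integrations fetch and act on attacker-influenced input with too much trust. As a hedged sketch of the SSRF class in particular, the Python below validates outbound URLs against a strict allowlist and blocks private address ranges before an integration fetches anything. The allowlisted host is an assumption, and a production guard would also need DNS-rebinding-safe resolution and redirect handling.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # assumption: the only host this integration may fetch

def is_safe_url(url: str) -> bool:
    """Reject URLs that could reach internal infrastructure (basic SSRF guard)."""
    parsed = urlparse(url)
    if parsed.scheme != "https":      # no http, file, gopher, etc.
        return False
    host = parsed.hostname or ""
    if host not in ALLOWED_HOSTS:     # a strict allowlist beats any denylist
        return False
    try:
        # Resolve and block private, loopback, and link-local targets.
        addr = ipaddress.ip_address(socket.gethostbyname(host))
        return not (addr.is_private or addr.is_loopback or addr.is_link_local)
    except (socket.gaierror, ValueError):
        return False

# Cloud metadata endpoints and off-allowlist hosts are rejected before any request is made.
print(is_safe_url("http://169.254.169.254/latest/meta-data/"))  # False
print(is_safe_url("https://internal.example.com/admin"))        # False
```

The design choice worth noting: everything not explicitly allowed is denied, which is the opposite of how many AI integrations ship today.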
Detecting AI-powered attacks
AI-driven threats move fast. Detection windows are now measured in seconds, with attack volumes up 67 percent since last year.
Traditional tools and manual investigation can’t keep up.
Modern EDR platforms that use machine learning are 2.5 times more effective at detecting and remediating threats, and can reduce manual workload by up to 70 percent.
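The behavioral baselining behind such ML-assisted detection can be sketched in miniature: learn what normal telemetry looks like, then score deviations from it. This is an illustrative z-score toy, not an EDR implementation; the telemetry values and threshold are invented.

```python
from statistics import mean, stdev

# Assumption: per-minute outbound connection counts from one host (hypothetical telemetry).
baseline = [12, 9, 14, 11, 10, 13, 12, 8, 11, 10]

def anomaly_score(value: float, history: list[float]) -> float:
    """Distance from the baseline mean, in standard deviations (a simple z-score)."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) / sigma if sigma else 0.0

THRESHOLD = 3.0  # assumption: flag anything more than three sigma from normal

for observed in (11, 13, 240):  # 240 mimics an AI-driven burst of automated scanning
    score = anomaly_score(observed, baseline)
    status = "ALERT" if score > THRESHOLD else "ok"
    print(f"{observed:>4} connections/min -> score {score:.1f} ({status})")
```

Real platforms model far richer features across processes, identities, and network flows, but the principle is the same: machine-speed attacks stand out from a learned baseline long before a human analyst would spot them.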
But key questions remain:
- Can your current tools block AI-driven attacks in real time?
- Can they detect when your AI systems are being used against you?
With new exploits appearing daily, many defenses are already falling behind.
What CISOs should do now to defend against AI-enhanced threats
- Continuous testing
Annual penetration tests no longer match the pace of AI-enhanced threats. Offensive security must operate continuously to uncover real-world risks before attackers do.
- Intelligence-led defense
Threats now span digital, social, and physical vectors. Use threat intelligence to simulate how real attackers would target your business and validate your controls under pressure.
The bottom line: AI will redefine your security strategy
So, can AI hack you? That depends on how well you’re prepared at the moment of attack. Every commit, integration, and system change shifts your exposure, and your defenses need to evolve with it.
AI isn’t just changing how attackers operate. It’s changing what needs protecting.
Organizations that embrace continuous, intelligence-led security will stay ahead. Those that don’t will be left reacting to compromise after compromise.
Ready to test your defenses against AI-enhanced threats?
Get in touch to see how CovertSwarm’s constant offensive security can turn your security posture from reactive to relentless.