Some attacks are smash-and-grab. Others are the long game.
This one took a year. And at the end, the client hand-delivered sensitive data to us via a file-sharing link because we said we couldn’t log in.
No sophisticated exploits. No malware or brute force attacks.
Just patience, AI-powered research, a lookalike domain, and the magic words: “This is approved. Please go ahead.”
This is social engineering at its finest. And it worked because of how thoroughly human it was.

The research
The brief was clear: integrate current AI tools into a social engineering exercise and see if we could extract private customer data through official communication channels.
We started by feeding an AI chatbot publicly available information about the client and their customer base. The goal: identify high-value targets worth impersonating.
The AI generated two lists:
- Top 5 customers by revenue
- Top 5 customers by relationship length
We selected a customer that scored high on both. A valuable, trusted, long-standing relationship. The kind of customer whose requests don’t get questioned.
Then we went deeper. Using AI and LinkedIn reconnaissance, we:
- Identified which roles typically request this type of information
- Selected two specific employees to impersonate
- Built personas with enough detail to sound legitimate
Employee #1: A public advocate for our client, senior enough to approve requests.
Employee #2: The person actually requesting the data.
The setup was theatre, but the kind that works.
The infrastructure
We didn’t just send emails from Gmail and hope for the best.
We acquired lookalike domains mimicking the customer’s official domain. Close enough to pass a quick glance. Different enough to stay legal.
Then we built legitimacy:
- Valid MX records pointing to Outlook mailboxes
- HTTP redirect rules sending traffic to the official domain (making it look like the domain just “redirected”)
- All links in our emails pointed to our spoofed domain
To anyone checking, it looked real. Professional. Trustworthy.
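The flip side is that lookalike domains are also cheap to detect. As a minimal defensive sketch (the trusted domain and threshold here are hypothetical, not taken from this engagement), a few lines of Python can flag a sender domain that closely resembles, but does not exactly match, a known customer domain:

```python
import difflib

# Hypothetical allowlist of known customer domains
TRUSTED_DOMAINS = ["examplecorp.com"]

def lookalike_score(sender_domain: str, trusted: str) -> float:
    """Similarity ratio in [0, 1]; high but not 1.0 is the suspicious zone."""
    return difflib.SequenceMatcher(
        None, sender_domain.lower(), trusted.lower()
    ).ratio()

def flag_sender(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that look like a trusted domain without being one."""
    for trusted in TRUSTED_DOMAINS:
        score = lookalike_score(sender_domain, trusted)
        if sender_domain.lower() != trusted.lower() and score >= threshold:
            return True
    return False

print(flag_sender("examp1ecorp.com"))  # homoglyph swap 'l' -> '1': flagged
print(flag_sender("examplecorp.com"))  # exact match: not flagged
```

A real deployment would also handle Unicode confusables and subdomain tricks, but even a crude edit-distance check at the mail gateway would have surfaced the domain used in this campaign for human review.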
We used AI to craft the initial pretext and every message that followed. The tone had to be perfect: formal, professional, slightly cold. The kind of email you get from a customer relationship manager who’s busy and expects things to get done.
The long game
The first email went out. A polite request to collate specific information stored in the client’s system.
Then came the key move: Employee #1 (the authority figure) was CC’d on the thread. Shortly after the request went out, they responded:
“This is approved. Please go ahead.”
Internal approval. No pushback. The support team started working on it.
We had no idea what data was actually stored in their database, so when they asked for clarification on specifics, we just agreed with their suggestions. They told us what they could include. We said yes to all of it.
This went on for months. Back and forth. Rapport building. AI continuously refining our tone to sound professional, urgent, and legitimate.
Fast forward to this year. A message arrived:
“The collated information is now available on the platform. You can log in and download it.”
Perfect. Except for one small problem.
“Unfortunately, we don’t have access to the system at the moment. Could you send us the data via [third-party file sharing service they use]?”
Our authority figure chimed in again. Same energy. Same trust-building magic:
“This is approved. Please go ahead.”
And they did.
A message arrived containing a link to download the sensitive data we’d requested. No authentication. No verification call. No “let us confirm this with your team first.”
Just a file-sharing link with everything we asked for, handed over because we couldn’t be bothered to log in.
The uncomfortable reality
This attack worked because of three failures:
- Trust without verification. A legitimate-looking email domain was enough. Nobody called. Nobody verified through a secondary channel. The domain looked right, so the request must be real.
- Authority bias. When someone senior says “this is approved,” people stop questioning. Full stop. Even when that senior person is a stranger on the internet with a lookalike email address.
- Convenience over security. When we said we couldn’t log in, they didn’t push back. They didn’t suggest alternative secure methods. They just made it easier for us by sending the data directly.
Social engineering works precisely because it exploits trust, not technology. The result? A year-long campaign with multiple emails. Sensitive customer data. And the only verification was whether the email domain looked legitimate.
The wake-up call
The client’s security team was stunned. Not by the sophistication of the attack, but by how simple it was.
To their credit, they’ve since implemented mandatory verification protocols for any data requests, regardless of who they appear to come from. Phone verification with known contacts. Multi-channel confirmation for sensitive data. And crucially, training for support teams on authority bias and social engineering red flags.
As their Security Lead put it: “We trusted the domain. We trusted the authority. We never thought to verify the human. That’s exactly what attackers count on.”
The lesson? AI doesn’t just make attackers smarter. It makes them more convincing, more patient, and nearly impossible to distinguish from legitimate requests.
If your verification process stops at “does the email look right,” you’re not verifying anything. You’re just hoping the person on the other end is who they say they are.
CovertSwarm’s social engineering testing doesn’t just test your technical controls. We test your people, your processes, and your ability to spot sophisticated attacks before they become breaches.
The next person asking for sensitive data might sound professional, legitimate, and patient.
Just like we did.
Ready to see where your defenses break?
Contact CovertSwarm and let us show you what attackers see.