
The State of Cyber Espionage in 2025—Part 2

In Part 2 of our series on Cyber Espionage, Elise Manna-Browne expands on techniques for recruiting a spy, and just how easy it can be to weaponize a human asset in 2025.

In Part 2 of this two-part series, we’re looking at how nation-states and cybercriminals are recruiting agents. The schemes involve connections on social media. This article is a continuation of The State of Cyber Espionage in 2025 – Part 1.

How to Recruit a Spy

This is the intriguing part of this talk: how you recruit an asset or spy.

MICE-R is an acronym for the motivators an organization can use to recruit a spy: money, ideology, coercion, ego, and lastly, revenge.

  • Money – Investment, employment, or scholarship offers (and outright bribes) that exploit financial hardship.
  • Ideology – Civic duty, religious solidarity, political grievances, hacktivism, radicalization.
  • Coercion – Social pressure, blackmail/extortion, physical threats.
  • Ego – Flattery, recognition, access, status, power.
  • Revenge – Disgruntled employees.

Money

The most obvious way to recruit someone is, of course, money. It can come as an investment offer or possibly a business partnership.

One example involved someone who had left a motor company; they wanted to set up distribution within the US and needed new motors to distribute. They went off to a foreign country, had some meetings, and all of the distribution plans were taken, because as part of that business-partnership conversation they were offered something seemingly innocuous.

A cost of doing business, perhaps, but it's a risk many aren't accounting for. Even something like a scholarship or an immigration visa can monetarily lure people into doing things they wouldn't normally do, creating risk for the rest of us.

Ideology

Although ideology isn't as prevalent in the cybersecurity space, it happens in hacktivism all the time. If you remember AnonOps on IRC, you know it was where 4chan transitioned into the Anonymous groups, and it was infamous for being full of feds. A lot of them were actually recruiting people, typically young adults, to do things against adversaries.

There were layers of plausible deniability, mainly between the government and the person executing the attack, but it used ideology, whether freedom of information or whatever the propaganda of the moment happened to be, to recruit people into carrying out digital attacks through human connection.

Coercion

Coercion is another motivator we don't always feel in our field. It can be social pressure, such as "Hey, you kind of owe us this thing" or "You'll win big." Statements like "You'll be very successful if you do this" or "If you don't do what I'm asking, you're bad and have somehow failed" illustrate the same coercive pressure. It can also involve blackmail, extortion, or even physical threats.

Another factor comes into play when the target is in one country and their family is in another: threaten the family, and you can coerce someone into doing almost anything. Looking across all of these, there doesn't need to be a single motivator that recruits the spy. Recruiters can use multiple methods and overlap them, and the target might not even realize it's happening.

Ego

Everyone has an ego. Flattering someone, or giving them a small amount of power, offers a taste of access to something they aren't usually given, such as higher status in their organizations or community. That's an effective way to nudge someone into doing something they wouldn't normally do, whether it's against their own self-interest or yours.

Revenge

The last piece is revenge. If you look back at the Department of Justice (DOJ) example, people are disgruntled. As a result, they might take actions they wouldn’t usually consider, which makes them ripe for recruitment.

Another technique, in addition to these, is using academic and industry conferences as stomping grounds for recruiters. Someone might suggest, "Hey, why don't you join?" or "How about you submit a paper?" or "Give me your personal information so I can book your flights and hotel."

We have frequently observed this tactic, especially in the healthcare industry. These are generally low-level attacks. If you're trying to recruit someone under the guise of an academic conference, it's going to be a boring email or a benign PDF that won't be flagged as BEC (business email compromise). There are no payloads, links, or attachments involved, so that message is going to sneak past everything and go where it needs to go.

Meet Bob

If you’re curious what GPT thinks the average CISO looks like, meet Bob. Bob is an experienced CISO working for a security vendor, Targetus Industries.

He likes using LinkedIn like most people, but he's not a content creator by any means. He just posts occasionally about work and checks in on his connections. He loves dogs.

He exited his CISO role about two months ago but didn’t really make a big deal about it. He didn’t change the headline on his LinkedIn profile; he didn’t update his work history; he just didn’t log in.

We see this happen a lot; people want to take a minute between roles, and there's a lot of turnover in this space in general. So, it's understood he's taking time hanging out with the dog, sitting on the couch eating junk food, and chilling out.

He finally decides to come back to the real world, but all he does is change the little frame on his LinkedIn profile photo to say he's open to work. His posts have changed, though, and sentiment analysis doesn't have to get very far into the weeds to notice. Even a human operative could look at his posts and say, "Hmm, he was really happy about two months ago, and now he's reposting about how bad corporate culture is." Really subtle things.

He may not be creating any content or going on big diatribes or rants, just posting something slight enough to suggest he didn't leave on the best of terms without ever saying so explicitly. Again, looking back at the Mosaic Intelligence concept, these little pieces of benign information are actually useful if you're looking to recruit someone into your cause. Someone who may have left on bad terms, might be disgruntled, and is ready to get back into a new role somewhere is a good opportunity to start building a relationship and recruiting an asset.
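To make the sentiment-shift signal concrete, here is a minimal sketch of how a timeline of public posts could be scored and a change in tone flagged. The posts, keyword lists, cutoff date, and threshold are all hypothetical; a real collector would use a proper sentiment model rather than this crude word list.

```python
from datetime import datetime
from statistics import mean

# Hypothetical post timeline: (date, text). Purely illustrative data.
posts = [
    (datetime(2025, 1, 10), "Proud of the team for a great quarter!"),
    (datetime(2025, 1, 28), "Excited about our new tooling rollout."),
    (datetime(2025, 3, 15), "Reposting: why toxic corporate culture drives people out."),
    (datetime(2025, 3, 22), "Burnout is real. Leadership rarely listens."),
]

POSITIVE = {"proud", "great", "excited", "happy"}
NEGATIVE = {"toxic", "burnout", "drives", "rarely"}

def score(text: str) -> int:
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = {w.strip(".,!?:").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

cutoff = datetime(2025, 3, 1)  # e.g., roughly when Bob left his role
before = [score(t) for d, t in posts if d < cutoff]
after = [score(t) for d, t in posts if d >= cutoff]

# A sharp drop in average tone is the "subtle thing" a recruiter looks for.
if before and after and mean(after) < mean(before) - 1:
    print("Sentiment shift detected: possible disgruntlement signal")
```

The point isn't the scoring method; it's that a handful of benign posts, viewed over time, becomes a recruitment signal.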

Infiltration

So, how does Bob get weaponized initially?

Somebody reaches out using a fake LinkedIn profile and says, "Hey, I see you've left your position and are open to new roles. Let's set up a time to talk." This is a technique being used by nation-states, and while there are monitoring tools that can help with LinkedIn, they're not going to watch private messages, and if they are scrubbing anything, they're looking for things like phishing links. Think about the tools that plug into Slack or Teams or wherever to do phishing protection; they're just not designed to look for this kind of activity.

This comes down to user awareness. You'd hope Bob has watched the news and knows that nation-states sometimes pose as recruiters, and that his reaction would be, "These people keep trying to reach me, and this business doesn't quite sound right."

He goes online, looks up the recruitment company, and pulls up a corporation search for his state, where he finds a registered corporation that's been around for five years. Then he checks social media accounts, domains, phone numbers, and so on. These things have all existed for some time: the Better Business Bureau listing and the reviews have been filled in over time, not all of a sudden in the same five-minute window.
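As a rough illustration of that kind of vetting, here is a sketch that checks how old a domain registration is, assuming the third-party python-whois package. The recruiter domain is hypothetical, and registry data is inconsistent, so treat this as one data point rather than a verdict.

```python
from datetime import datetime

import whois  # third-party package: python-whois

def domain_age_years(domain: str) -> float:
    """Return the approximate age of a domain registration in years."""
    record = whois.whois(domain)
    created = record.creation_date
    # Some registries return a list of dates; take the earliest.
    if isinstance(created, list):
        created = min(created)
    return (datetime.now() - created).days / 365.25

# Hypothetical recruiter domain from Bob's scenario.
age = domain_age_years("example-recruiting-firm.com")
print(f"Domain registered roughly {age:.1f} years ago")
if age < 1:
    print("Warning: very young domain; the persona may not be well seasoned")
```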

This is a component of long-term planning, and anyone involved in persona management knows you have to season and age a persona over time for it to be useful. Otherwise, you're going to get flagged very early in your operation. So, Bob is satisfied that it looks like a legitimate company. It's not Robert Half, but he thinks he knows what he's dealing with, and of course he wants to get back to work, so he sets up the interview.

In the preliminary interview with the recruiter, there's a casual conversation, and Bob begins talking about how he was pivotal in the transition from Cisco ASAs to Palo Altos, how he's using Rapid7, and how he didn't really like the SIEM because it was on-prem and couldn't be patched anymore.

He's just talking over this call about a lot of different things: how he likes to hire his teams, how he likes to build them, and what he looks for. All of this gives the recruiter information; if they need to place someone else in this operation, who is Bob going to talk to, and whom will he listen to? Of course, Bob talks about the dog, the casual "What do you do in your free time?" kind of stuff. We've all had interviews with these exact conversations.

Many organizations post their tool information on Indeed, because they need to hire someone who knows the Palo Altos, that one weird tool, and their Rapid7. You're giving away your tool stack to literally anybody who reads the listing. Next, the recruiter sets up an interview with an actual company, which could itself be a fake company.

Exploitation & Exfiltration

Let's look at how this can spiral out of control. Bob gets hired and is really happy. He's with this new organization, going through onboarding to get his access provisioned and all that preliminary stuff. The recruiter, or whoever actually took the application, is CC'd on the email. Now the nation-state has his street address, social security number, phone numbers, and his references and their phone numbers: a rich ecosystem of data that can be taken, correlated, and weaponized in all sorts of permutations.

Now they know the weaknesses and weak spots of Bob's previous employer and can go after them. Since he was talking about his dog, they target him with a phish saying, "Thank you for your donation," complete with a cute Sarah McLachlan-style picture of golden retrievers: just click the link, validate some information, or maybe make a donation and enter your credit card information. There are lots of ways for this to go sideways. Of course, everyone knows how dangerous phishing can be, but you're getting the picture here.

The phishing then leads to a breach of Bob's data: anything from credit card information to logging into the credit card account and finding compromising material inside the statements. It could also be as simple as password reuse, or a token attack you can abstract out from here. It could be all kinds of things.

In any case, this phish was a very dangerous thing, and unfortunately Bob fell for it because of the sad puppy dog eyes; he wanted to participate in the lure.

All of this leads to merger information being leaked. Again, you can abstract this out: somebody swoops in and buys the target with a bigger offer, or inserts a poison pill into the deal, whatever that might be. Through all of these small, insignificant details, information that was critical to the business was leaked.

At this point, Bob is frustrated and realizes what has happened, because it has all gone very public. Remember, there was the operation targeting his prior employer, and on top of that, he's been weaponized against the new employer.

What Just Happened

Bob didn’t really do anything wrong, except the phishing part. Pretty innocent, right? He’s not malicious by any measure; he’s just the quintessential negligent insider threat. He’s going about his day-to-day business trying to make it, like anybody else. He gets swept up into this recruitment operation.

There are actually three victims here: not only Bob, but also the prior employer and the new employer. All of them got swept up and hurt. Much of this goes back to that systemic risk: no one entity had enough visibility to even see that this was happening, and even if they did, they couldn't do much. There's no single nexus of control.

Consider all of the signals that were being thrown off; again, nobody was looking at the system overall.

There were no fancy zero-days. When we think about nation-states, vendors like to stand up and say, "Yeah, we detect APTs," but the adversary doesn't have to do anything complicated to get into your organization.

This technique is just exploiting humans: using the way humans think, understanding the psychology of how people behave, and noticing nuances such as sentiment changes. The recruiter can say, "Maybe this person is actually worth talking to; perhaps I can get a little bit ahead," rather than going after a CISO who's really happy at their job, has been there forever, or is a founder; that person is going to be very embedded. You're looking for someone who's slightly off-kilter and taking advantage of them in a vulnerable state, with the ultimate goal of stealing the sensitive data everyone is trying to protect every day.

Techniques Used

As already mentioned, there is nothing really new here as far as techniques are concerned.

  • Social Engineering
  • Seasoned Personas
  • Shell Companies
  • LOLBAS
  • Remote Desktop Apps
  • Standard Data Exfiltration

Social engineering is familiar to everyone, and persona management isn't anything new. Shell companies? Throw those in for fun; ultimately, they're not difficult. If anyone can set up an LLC on LegalZoom, nation-state actors can figure out how to do it, and they do it in the real world.

The Chinese have a company phonetically called i-Soon. It was a fake information security company doing business with the US government. The Department of the Treasury got breached because they were working with a Chinese information security company they believed was legitimate, since it had been around a while, but it was essentially a shell corporation, and roughly ten people were just indicted over it.

Then there are LOLBins. It's pretty easy for them to sneak past defenses; even if the initial access was a phish, the follow-on activity might just be PowerShell, and that's going to bypass a lot of detections.
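As a rough sketch of the kind of detection that helps here, the snippet below scans process-creation command lines for a few patterns commonly abused in malicious PowerShell (encoded commands, download cradles, hidden windows). The log format, host names, and pattern list are hypothetical and far from exhaustive; real telemetry would come from EDR or Sysmon process-creation events.

```python
import re

# Patterns often associated with malicious PowerShell use; illustrative, not exhaustive.
SUSPICIOUS = [
    re.compile(r"-enc(odedcommand)?\s", re.IGNORECASE),
    re.compile(r"downloadstring|invoke-webrequest|iwr\s", re.IGNORECASE),
    re.compile(r"-nop\b.*-w(indowstyle)?\s+hidden", re.IGNORECASE),
]

# Hypothetical process-creation events (e.g., Sysmon Event ID 1 or EDR telemetry).
events = [
    {"host": "WS-042", "cmdline": "powershell.exe -nop -w hidden -enc SQBFAFgAIAAo..."},
    {"host": "WS-017", "cmdline": "powershell.exe Get-ChildItem C:\\Users"},
]

for event in events:
    cmd = event["cmdline"]
    if "powershell" in cmd.lower() and any(p.search(cmd) for p in SUSPICIOUS):
        print(f"[ALERT] Suspicious PowerShell on {event['host']}: {cmd[:60]}...")
```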

Remote desktop apps are also used a lot, and again, this is something we're not usually configured to detect. There was an incident in which the North Koreans used Chrome Remote Desktop to move data around, but they were also doing things like redirecting shipments of laptops, using proxies, or simply using a file transfer or file-sharing service. That's really all it takes to get the data back out after they've infiltrated.
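In the same spirit, here is a minimal sketch of egress review that flags connections to remote-access services and large uploads to file-sharing services. The domain lists, log format, and threshold are hypothetical and would need tuning to your environment.

```python
# Known-benign-but-abusable services worth baselining; lists are illustrative only.
REMOTE_ACCESS_DOMAINS = {"remotedesktop.google.com", "anydesk.com", "teamviewer.com"}
FILE_SHARING_DOMAINS = {"wetransfer.com", "mega.nz", "dropbox.com"}

# Hypothetical egress/proxy log entries: (user, destination domain, bytes sent).
proxy_log = [
    ("bob", "remotedesktop.google.com", 1_200),
    ("bob", "mega.nz", 850_000_000),
    ("alice", "dropbox.com", 4_000),
]

UPLOAD_THRESHOLD = 100_000_000  # ~100 MB outbound; tune to your environment

for user, domain, bytes_out in proxy_log:
    if domain in REMOTE_ACCESS_DOMAINS:
        print(f"[REVIEW] {user} reached remote-access service {domain}")
    if domain in FILE_SHARING_DOMAINS and bytes_out > UPLOAD_THRESHOLD:
        print(f"[ALERT] {user} pushed {bytes_out / 1e6:.0f} MB to {domain}")
```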

Counterintelligence Challenges

What makes this so difficult?

We're outmanned, and we can't see what's going on. Supply chains come into play as well, with complex issues and limited reach that nobody can do much about, even if they wanted to. And there's a natural trust of employees and coworkers that creates a blind spot and lets recruiters in.

We also don't operate under the same laws or rules of engagement as our adversaries. They don't have to take the same precautions, which lets them be more aggressive than our businesses can be, and they can do many things we're not able to do. That includes picking up these little pieces of benign information from wherever they can get them and correlating them.

There's also a lot of cognitive bias at work. Normalcy bias is in here, and so is optimism bias: people assume everything is going to be okay, which makes these threats seem less urgent, less prevalent, and less worrisome than they are.

The Approach

What can we do about this? Counterintelligence. Maybe that sounds big, but it’s not. It’s really just a matter of thinking about what your adversaries want from you. Where do you live in that big ecosystem?

Where are the little threads that can be pulled on, so you know where to look for that behavior? It might not be in your network, but at least you know where you can potentially look for it and start pulling these things together.

Watch social media: not just your employees' postings, but also the people trying to interact with them. Educate employees so that if a bunch of recruiters from groups you've never heard of suddenly show up, they recognize it as a technique being used against them and know to be wary.

Automate things. Again, we're talking about data that is disparate, spread across all sorts of systems, and low in information value if it's flagged at all. You're going to need to automate the correlation piece; no CTI analyst can take all of this information and work through it on an ongoing basis.
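Here is a minimal sketch of that correlation idea: collect low-grade signals from different systems, group them by identity, and surface only the identities that accumulate several weak indicators. The signal names, weights, and threshold are hypothetical; the point is the aggregation, not any particular scoring scheme.

```python
from collections import defaultdict

# Hypothetical low-grade signals from different systems, each too weak to alert on alone.
signals = [
    {"identity": "bob", "source": "hr", "signal": "recent_departure", "weight": 1},
    {"identity": "bob", "source": "osint", "signal": "sentiment_shift", "weight": 1},
    {"identity": "bob", "source": "email", "signal": "unknown_recruiter_contact", "weight": 2},
    {"identity": "alice", "source": "osint", "signal": "conference_cfp_invite", "weight": 1},
]

scores = defaultdict(int)
evidence = defaultdict(list)
for s in signals:
    scores[s["identity"]] += s["weight"]
    evidence[s["identity"]].append(f'{s["source"]}:{s["signal"]}')

REVIEW_THRESHOLD = 3  # arbitrary; no single signal should trip it on its own
for identity, score in scores.items():
    if score >= REVIEW_THRESHOLD:
        print(f"Review {identity}: score {score}, evidence {evidence[identity]}")
```

None of these signals is worth an alert by itself; together they are exactly the mosaic the adversary is already assembling.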

You’re also dealing with long time frames, so it becomes difficult to do this manually.

Baiting is essentially planting false information, such as a fake credit card number, so you can track it through a transaction, a life cycle, or a kill chain. This deception piece traditionally pairs with things such as releasing fake documents, honeypots, honey creds, and that kind of stuff, in addition to insider rights management.
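As one example of baiting, here is a sketch of a honey credential: a fake API key that no legitimate process should ever use, so any appearance of it in authentication logs is a high-confidence signal. The key format and log fields are hypothetical.

```python
import secrets

def mint_honey_token(label: str) -> str:
    """Create a unique fake API key tied to where it was planted."""
    return f"AKIA-HONEY-{label}-{secrets.token_hex(8)}"

# Plant this in a document, wiki page, or config that only an intruder would read.
honey_key = mint_honey_token("finance-sharepoint")

# Hypothetical auth log lines to sweep; in practice this would be a SIEM query.
auth_log = [
    "2025-04-02T10:14:03Z key=AKIA-PROD-123456 action=list_buckets",
    f"2025-04-03T22:41:55Z key={honey_key} action=get_object src=203.0.113.7",
]

for line in auth_log:
    if honey_key in line:
        print(f"[ALERT] Honey credential used: {line}")
```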

This is where you can use things like watermarks and anything else that controls data, so that if it leaves the perimeter, there's still something you can trace back. You can prevent it from being downloaded, printed, and so on. That becomes important because you don't know where that data is going to go. Again, it's a benign piece; it may be one contract that's forwarded on to one group that gets compromised.
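A minimal sketch of the trace-back idea: embed a deterministic per-recipient tag in each copy of a sensitive document, so that if a copy leaks, the tag points back to the recipient. The document name, recipients, and secret are placeholders; real rights-management tooling would handle the embedding and enforcement.

```python
import hashlib
import hmac

SECRET = b"replace-with-a-real-secret"  # placeholder; keep out of source control

def watermark_id(document_id: str, recipient: str) -> str:
    """Deterministic per-recipient tag that can be embedded in footers or metadata."""
    msg = f"{document_id}:{recipient}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:12]

recipients = ["legal@partner-a.example", "cfo@partner-b.example"]
registry = {watermark_id("merger-contract-v3", r): r for r in recipients}

# Later, a leaked copy turns up with this tag embedded in it.
leaked_tag = watermark_id("merger-contract-v3", "cfo@partner-b.example")
print("Leak traced to:", registry.get(leaked_tag, "unknown recipient"))
```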

So, again, that's part of this complex system. The low-grade signals are the ones that actually need to be looked at.

Finally, make threat hunting and intelligence integration a priority. Your detection engineering life cycle is going to be an important piece of the long game, and we have to play the long game to fight their long game. If you don't integrate these pieces into detection engineering and just point at a MITRE category and say "they're that," you won't have the larger systemic picture, and you won't be able to identify the risks targeting you.
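One way to keep that systemic picture is to map the techniques seen in a scenario like Bob's to your deployed detections and make the gaps explicit. The detection names below are hypothetical; the technique IDs are standard MITRE ATT&CK identifiers.

```python
# ATT&CK techniques touched by the Bob scenario, mapped to whatever detections exist.
scenario_techniques = {
    "T1566": "Phishing",
    "T1059.001": "Command and Scripting Interpreter: PowerShell",
    "T1219": "Remote Access Software",
    "T1567": "Exfiltration Over Web Service",
}

# Hypothetical detection inventory pulled from your SIEM or detection-as-code repo.
deployed_detections = {
    "T1566": ["email-gateway-url-rewrite"],
    "T1059.001": ["sysmon-encoded-powershell"],
    # No coverage yet for remote-access tools or web-service exfiltration.
}

for tid, name in scenario_techniques.items():
    rules = deployed_detections.get(tid, [])
    status = ", ".join(rules) if rules else "GAP - no detection"
    print(f"{tid} {name}: {status}")
```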

The Author

Elise Manna-Browne is an expert in threat intelligence, threat hunting, phishing, incident response, penetration testing, and malware analysis. She currently leads the cybersecurity incident response team at Novacoast.
