- Meta’s quarterly adversarial threat report gives an overview of the various threats it faces internationally, including operations in Greece, South Africa, Pakistan, India, Malaysia, Russia, Israel, and the Philippines.
- New threat research covers a Russian troll farm that targeted numerous platforms across the internet in a failed effort to create a false perception of mass support for Russia’s war in Ukraine.
- Additional technical threat indicators in the report’s conclusion will assist the security community in detecting and combating malicious activity.
- Meta shut down 9,000 accounts, pages, and groups during its takedown of Spamouflage, aka Dragonbridge, China’s signature disinformation group.
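The shared threat indicators mentioned above are typically artifacts such as domains that defenders can match against their own telemetry. As a loose illustration only (the domains below are invented, not indicators from the report), a defender might check observed domains against an indicator blocklist like this:

```python
# Hypothetical sketch of consuming shared threat indicators: check observed
# domains against a blocklist of indicator domains. All values are invented.

IOC_DOMAINS = {"malicious-example.com", "bad-cdn.example.net"}

def matches_ioc(domain: str, iocs: set[str] = IOC_DOMAINS) -> bool:
    """Return True if the domain or any parent domain is a known indicator."""
    domain = domain.lower().rstrip(".")
    parts = domain.split(".")
    # Check the domain itself and every parent: sub.bad.com -> bad.com -> com
    return any(".".join(parts[i:]) in iocs for i in range(len(parts)))

observed = ["cdn.malicious-example.com", "example.org"]
hits = [d for d in observed if matches_ioc(d)]  # flags the first domain only
```

Checking parent domains as well as exact matches catches subdomains that an operator spins up under an already-identified apex domain.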
Cyber-Espionage Operations Taken Down
During the second quarter, Meta took action against espionage actors that target both accounts and devices. To disrupt these networks, Meta blocked domain infrastructure so it couldn’t be shared over Meta services, took down accounts, and notified people believed to have been targeted by these groups.
Meta has also shared the details with its industry peers and security researchers so that everyone can take steps to put a stop to this activity.
The hacking group known as Bitter APT operated out of South Asia and targeted people in India, Pakistan, New Zealand, and the UK. Its activity was relatively low in sophistication and operational security, but it was well-resourced and persistent.
The group used social engineering to target its victims’ devices and infect them with malware, distributing it through a combination of malicious domains, link-shortening services, compromised websites, and third-party hosting providers.
Noteworthy Tactics, Techniques, and Procedures (TTP)
The researchers discovered the Bitter APT group using the following TTPs across the internet to carry out its activities:
- Social Engineering: Using fictitious identities and masquerading as attractive young women, activists, or journalists, the group was able to gain the trust of its targets in order to trick them into clicking on malware links or downloading malicious software. Unlike other groups, this one invested time in engaging with and establishing relationships with targets via multiple social platforms, including email.
- iOS Applications: The researchers learned during their recent investigations into Bitter APT that it was distributing an iOS chat application through Apple’s TestFlight service, which developers use to beta-test new apps. Because the group could use official Apple channels to distribute the compromised app and trick users into installing it, it did not need exploits to deliver malicious software to its targets.
- Android Malware: Bitter also used a custom Android malware family that the researchers named Dracarys. The malware abuses Android’s accessibility services, which are intended to assist users with impairments, to auto-click through prompts and grant itself specific permissions without any user input.
Additionally, Bitter injected the Dracarys malware into unofficial, trojanized versions of Telegram, YouTube, Signal, WhatsApp, and custom chat apps, allowing it to access call logs, files, contacts, geolocation data, text messages, and device information; take photos; install apps; and enable the microphone.
The malware’s functions are typical, according to the researchers, yet to date no public anti-virus system has detected the malware or its infrastructure, indicating that Bitter has operated undetected by the security community for some time.
- Adversarial Adaptation: Researchers say the Bitter hackers responded aggressively to the security community’s detection and blocking of their domain infrastructure and activity. In an attempt to evade enforcement, Bitter posted broken links or images of malicious links, forcing targets to type them into their browsers manually instead of clicking them.
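Broken or “defanged” links of this kind can often still be normalized back into scannable URLs before matching them against blocklists. A minimal sketch, assuming common community obfuscation conventions (hxxp://, bracketed dots) rather than the exact forms Bitter used:

```python
import re

# Sketch: "refang" obfuscated links back into ordinary URLs so that
# existing URL scanners can match them. The patterns handled here are
# common conventions and an assumption, not tactics from the report.

def refang(text: str) -> str:
    text = re.sub(r"hxxp", "http", text, flags=re.IGNORECASE)  # hxxps -> https
    text = text.replace("[.]", ".").replace("(.)", ".")        # evil[.]com
    text = text.replace("[:]", ":")
    return text

refang("hxxps://evil[.]example[.]com/payload")
# -> "https://evil.example.com/payload"
```

A scanner that refangs text before extracting URLs removes the attacker’s incentive to post broken links, since the normalized form hits the same blocklist as a clickable one.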
A second group, APT36, is located in Pakistan and targeted victims in India, Afghanistan, Pakistan, the UAE, and Saudi Arabia. Targets included government officials, military personnel, employees of human rights and other non-profit organizations, and students. The researchers concluded the group was likely connected to state-linked threat actors in Pakistan.
Researchers say the group’s activity was relatively low in sophistication and operational security, but it was well-resourced and persistent. It targeted many internet-based services, including file-hosting services, social media, and email providers.
Additionally, APT36 employed many malicious TTPs, disguised links, and fake applications to spread its malware that targeted Windows and Android-based devices.
Noteworthy Tactics, Techniques, and Procedures (TTP)
The researchers discovered the APT36 threat actor using the following TTPs across the internet to carry out its operations:
- Social Engineering
- Spoofed, Fake, and Genuine Websites
- Disguised Links
- Android Malware
New and Emerging Threat Disruptions
Researchers at Meta typically work in teams referred to as threat intelligence incubators. These teams work together to identify and study specific adversarial behaviors, which they then use to develop tailored enforcement protocols and policies. These are used to take action against the behaviors they have identified.
In the next step, the researchers investigate and disrupt the threat networks.
Care is taken to avoid over-enforcing or silencing genuine users. As they learn more that clarifies the exact nature of the threats, the goal is to transition from the disruption-only stage to include scaled auto-detection.
To do this, Meta feeds common tactics and techniques into the systems it uses for scaled detection and enforcement. Meta has also begun sharing the information it finds with the broader community, including security researchers and industry peers.
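What “feeding tactics into scaled detection” can look like in practice is a scorer that turns manually identified behaviors into automated signals. A minimal sketch, assuming a simple rule-based approach; the signal names, weights, and threshold below are hypothetical and not drawn from Meta’s systems:

```python
# Hypothetical rule-based scorer: each signal is a tactic identified during
# manual investigation; weights and the review threshold are invented.

RULES = {
    "account_age_days_lt_60": 2.0,   # very new account
    "stolen_profile_photo": 3.0,
    "bulk_identical_comments": 4.0,
    "link_shortener_spam": 1.5,
}

def risk_score(signals: dict[str, bool]) -> float:
    """Sum the weights of every rule whose signal fired for this account."""
    return sum(weight for name, weight in RULES.items() if signals.get(name))

def should_review(signals: dict[str, bool], threshold: float = 5.0) -> bool:
    """Queue the account for enforcement review once the score clears the bar."""
    return risk_score(signals) >= threshold
```

The point of the design is that investigators only curate the rule table; the scoring itself runs automatically over every account, which is the transition from disruption-only work to scaled detection that the report describes.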
Problem areas Meta is working on include:
- Mass Reporting
- Coordinated Violating Networks
These efforts are in various phases and will continue to move from disruption-only enforcement toward including automated detection.
From its latest enforcements, there are a few highlights:
From Greece: Two clusters of Facebook and Instagram accounts and Facebook Pages were removed that had worked together to repeatedly violate policies against hate speech, misinformation, and incitement to overthrow the government. These were associated with two conspiracy groups: the Guardians of the Constitution and the Holy Declarationists.
From India: Several clusters totaling roughly 2,000 Facebook and Instagram accounts, pages, and groups were removed for targeting women in India with harassment and sexualized content. According to the report, in one case a single account targeted at least 700 people.
From South Africa: Several clusters amounting to about 200 Facebook and Instagram accounts, pages, and groups were removed for coordinating harassment of migrants from other African countries. Some were organized under the Operation Dudula brand.
Meta’s Inauthentic Behavior (IB) Policy Actions
The operators behind inauthentic behavior (IB) are primarily attempting to mislead Facebook users or Facebook itself about specific content’s popularity, the intent of specific communities (groups, pages, or events), or the identity of those behind them. These activities are frequently (but not always) motivated by financial gain and are centered on promoting and broadening content distribution.
Those behind IB activity focus on quantity of engagements over quality. They frequently use mass numbers of fake accounts of low sophistication to share on a large scale their political, social, or commercial content. Their methods are comparable to other widespread Internet activities, such as spam.
This behavior distinguishes IB (inauthentic behavior) from CIB (coordinated inauthentic behavior), in which operators try to act as much like real people as possible when interacting with others. Both are serious violations, however, and Meta enforces its policies against each using different tools that monitor the respective behaviors.
Most recently, Meta made strides against IB operators during the recent election in the Philippines.
- Meta teams removed 10,000 accounts before the elections
- They implemented automated detection based on the discovered patterns
- Additionally, the teams acted against another 15,000 accounts
Meta then prioritized enforcement globally based on the most trustworthy signals gleaned from its work in the Philippines, combining manual disruptions with automated detection. This resulted in the removal of over 50,000 accounts involved in IB activities worldwide, the majority of which were only two months old.
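Since account age proved such a strong signal, one simple way to operationalize it is to review the newest flagged accounts first. This is a hypothetical sketch, not Meta’s actual pipeline, and the account records are invented:

```python
from datetime import date

# Hypothetical prioritization: the report notes most removed IB accounts
# were under two months old, so newer flagged accounts are reviewed first.

def days_old(created: date, today: date) -> int:
    return (today - created).days

def prioritize(accounts: list[dict], today: date) -> list[dict]:
    """Sort flagged accounts so the newest ones are reviewed first."""
    return sorted(accounts, key=lambda a: days_old(a["created"], today))

flagged = [
    {"id": "a1", "created": date(2022, 1, 10)},   # 172 days old
    {"id": "a2", "created": date(2022, 6, 1)},    # 30 days old
]
queue = prioritize(flagged, today=date(2022, 7, 1))  # a2 comes before a1
```

Age is only one signal among many, but it is cheap to compute at scale, which is what makes it useful for triaging large enforcement queues.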
CIB Network Operations Removals
In coordinated inauthentic behavior (CIB) patterns, fake accounts are central to the activity. The operators behind these accounts consistently coordinate with each other and use fake accounts to mislead their targets about what they are doing and who they are. In this case, the Meta team is concerned with the behavior rather than the content.
Meta continually monitors efforts by blocked networks to return to its platforms, using manual and automated detection methods.
Removal of 596 Facebook accounts, 180 Pages, 11 Groups, and 72 Instagram accounts
The operators behind this network ran a troll farm: a coordinated effort by co-located operators using fake accounts and personas to manipulate or corrupt public discourse. They used multiple social platforms, including Facebook, TikTok, Twitter, and Instagram, posting memes in Malay supporting the current government.
On Facebook, the group managed pages where it posted more frequently on weekdays. The fake accounts were not well developed, and some used stolen profile photos, so Meta easily detected and disabled many of them.
Removal of 259 Facebook accounts, 42 Pages, 9 Groups, and 107 Instagram accounts
In Israel, Meta discovered clusters of activity that spanned several social media platforms and maintained their own websites. Each cluster focused on a specific country. The personas included media organizations, fake NGOs, and others with an online presence designed to appear genuine and bypass scrutiny by researchers and platforms.
The activity’s administrators used fake accounts to post to, comment on, and manage their groups and pages, as well as to disseminate links to their own websites. Meta’s automated systems uncovered and disabled some of these accounts.
The network primarily posted about news and current events in the targeted countries in English, Portuguese, and Arabic. Its posts often included positive comments about the Angolan government, criticism of Hamas in Gaza, and positive comments about one of the political candidates in Nigeria.
The Meta team took down a network of Instagram accounts operated by a troll farm in St. Petersburg, Russia. It sought to influence the global public discourse on the Ukraine conflict. It appeared poorly executed and was coordinated using a Telegram channel to create the impression of grassroots support for Russia’s invasion by using fake accounts to post pro-Russia remarks on content shared by media and influencers.
Meta first detected the activity in March and began taking action. The Russian outlet Fontanka reported that the group was operating out of an office building in St. Petersburg just 10 days after the group advertised jobs for spammers, content analysts, commenters, programmers, and designers focused on YouTube, TikTok, and Telegram.
Meta removed the network in early April and continues to monitor activity to detect and dismantle any attempts to return.
In total, Meta shut down 1,037 Instagram accounts and 45 Facebook profiles connected to the same network.