False flags stymie threat detection

Targeted attackers are using an increasingly wide range of deception techniques to muddy the waters of attribution, planting “false flag” timestamps, language strings and malware, among other things, and operating under the cover of non-existent groups.
This is according to a paper presented at Virus Bulletin by Kaspersky Lab security researchers Brian Bartholomew and Juan Andrés Guerrero-Saade.
The identity of the group behind a targeted cyberattack is the one question everybody wants answered, despite the fact that it is difficult, if not impossible, to establish accurately who the perpetrators really are. To demonstrate the growing complexity and uncertainty of attribution in today’s threat intelligence landscape, the two researchers have published a paper revealing how more advanced threat actors use so-called false flag operations to mislead victims and security researchers.
The indicators researchers most often use to suggest where an attack may originate, together with illustrations of how a number of known threat actors have manipulated them, include:
* Timestamps – Malware files carry a timestamp indicating when they were compiled. If enough related samples are collected it can become possible to determine the developers’ working hours, and this can suggest a general time-zone for their operations. However, such timestamps are incredibly easy to alter.
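To make concrete just how easy such alteration is, here is a minimal sketch (not Kaspersky Lab tooling) showing where the compile timestamp lives in a Windows PE file and how a four-byte patch forges it. The PE header bytes below are synthetic, built purely for demonstration.

```python
import struct
from datetime import datetime, timezone

def read_compile_timestamp(data: bytes) -> datetime:
    # e_lfanew (offset of the PE header) sits at 0x3C in the DOS header
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_off:pe_off + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    # TimeDateStamp is the dword at offset 8 into the COFF file header
    ts = struct.unpack_from("<I", data, pe_off + 8)[0]
    return datetime.fromtimestamp(ts, tz=timezone.utc)

def forge_compile_timestamp(data: bytes, when: datetime) -> bytes:
    # the "false flag": altering the timestamp is a trivial four-byte patch
    pe_off = struct.unpack_from("<I", data, 0x3C)[0]
    patched = bytearray(data)
    struct.pack_into("<I", patched, pe_off + 8, int(when.timestamp()))
    return bytes(patched)

# Synthetic, minimal PE header purely for demonstration (not a real binary)
hdr = bytearray(0x60)
hdr[0:2] = b"MZ"
struct.pack_into("<I", hdr, 0x3C, 0x40)        # e_lfanew -> 0x40
hdr[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<I", hdr, 0x48, 1462060800)  # 2016-05-01 00:00:00 UTC
```

A researcher inferring working hours from a timestamp an attacker can rewrite this cheaply is, as the paper argues, on shaky ground.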
* Language markers – Malware files often include strings and debug paths, and these can give an impression of the authors behind the code. The most obvious clue is the language or languages used and the level of language proficiency. Debug paths can also reveal a user name as well as internal naming conventions for projects or campaigns. In addition, phishing documents can carry metadata that unintentionally points to an author’s actual computer. However, threat actors can easily manipulate language markers to confuse researchers. Deceptive language clues left behind in malware by the threat actor Cloud Atlas included Arabic strings in the BlackBerry version, Hindi characters in the Android version and the name ‘JohnClerk’ in the project path for the iOS version – even though many suspect the group to actually have an Eastern European connection. The malware used by the threat actor Wild Neutron included language strings in both Romanian and Russian.
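The kind of analysis described above can be sketched in a few lines: pull printable strings out of a binary the way the classic `strings` tool does, then check decoded text against a handful of Unicode blocks for script hints. The sample bytes and the block list are illustrative (the ‘JohnClerk’ path is modelled on the Cloud Atlas example, not taken from a real sample), and a real classifier would be far richer.

```python
import re

def extract_ascii_strings(data: bytes, min_len: int = 4):
    # printable-ASCII runs, roughly what the classic `strings` tool reports
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

def script_hints(text: str):
    # crude Unicode-block check for scripts that might hint at authorship;
    # a small illustrative subset, not a complete language classifier
    blocks = {
        "Cyrillic": (0x0400, 0x04FF),
        "Arabic": (0x0600, 0x06FF),
        "Devanagari": (0x0900, 0x097F),
    }
    return {name for ch in text
            for name, (lo, hi) in blocks.items() if lo <= ord(ch) <= hi}

# Illustrative debug-path bytes of the kind left behind in compiled malware
sample = b"\x00\x01D:\\projects\\JohnClerk\\build.pdb\x00\x02"
```

The catch, of course, is that every string such a script recovers was put there by the developer – or planted by them.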
* Infrastructure and backend connections – Finding the actual Command and Control (C&C) servers used by malefactors is similar to finding their home address. C&C infrastructure can be costly and difficult to maintain, so even well-resourced attackers have a tendency to reuse C&C or phishing infrastructure. Backend connections can give a glimpse of the attackers if they fail to adequately anonymise Internet connections when they retrieve data from an exfiltrating or email server, prepare a staging or phishing server or check in on a hacked server. Sometimes, however, such ‘failure’ is intentional: Cloud Atlas tried to confuse researchers by using IP addresses originating in South Korea.
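The reuse reasoning above boils down to intersecting indicator sets from separately observed campaigns. A minimal sketch follows; every domain and address in it is fabricated (the IPs are from RFC 5737 documentation ranges), since real indicators cannot be invented here.

```python
# Hypothetical indicator sets from two separately observed campaigns;
# all domains and IPs below are fabricated for illustration
campaign_a = {"cnc.example-one.invalid", "203.0.113.10", "198.51.100.7"}
campaign_b = {"203.0.113.10", "update.example-two.invalid"}

def shared_infrastructure(a: set, b: set) -> set:
    # reused C&C servers are among the strongest -- and, as Cloud Atlas
    # showed, among the most deliberately spoofable -- links
    return a & b
```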
* Toolkits: malware, code, passwords, exploits – Although some threat actors now rely on publicly available tools, many still prefer to build their own custom backdoors, lateral movement tools and exploits, and they guard them jealously. The appearance of a specific malware family can therefore help researchers to home in on a threat actor. The threat actor Turla decided to take advantage of this assumption when it found itself cornered inside an infected system. Instead of withdrawing its malware, it installed a rare piece of Chinese malware which communicated with infrastructure located in Beijing – completely unrelated to Turla. While the victim’s incident response team chased down the deception malware, Turla quietly uninstalled its own malware and erased all tracks from the victim’s systems.
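The assumption Turla exploited can be reduced to a toy classifier: match samples against family-specific signatures, and whoever’s code appears gets the blame. The byte patterns below are made up (real detection would use YARA rules over far richer features), but the failure mode is the real point.

```python
# Made-up byte signatures standing in for family-specific code patterns
SIGNATURES = {
    "FamilyA": b"\xde\xad\xbe\xef",
    "FamilyB": b"\xca\xfe\xba\xbe",
}

def classify(sample: bytes):
    """Return the families whose signature appears in the sample."""
    return sorted(fam for fam, sig in SIGNATURES.items() if sig in sample)
```

If an attacker plants FamilyB’s binary on a victim, as Turla planted unrelated Chinese malware, this kind of matching will confidently – and wrongly – point responders at FamilyB’s operators.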
* Target victims – The attackers’ targets are another potentially revealing ‘tell’, but establishing an accurate connection requires skilled interpretation and analysis. In the case of Wild Neutron, for example, the victim list was so varied it only confused attribution. Further, some threat actors abuse the public desire for a clear link between the attacker and its targets by operating under the cover of an (often non-existent) hacktivist group. This is what the Lazarus group attempted by presenting itself as the ‘Guardians of Peace’ when attacking Sony Pictures Entertainment in 2014. The threat actor known as Sofacy is believed by many to have implemented a similar tactic, posing as a number of hacktivist groups.
Last, but not least, sometimes attackers try to push the blame onto another threat actor. This is the approach adopted by the so far unattributed TigerMilk actor, which signed its backdoors with the same stolen certificate previously used by Stuxnet.
“The attribution of targeted attacks is complicated, unreliable and subjective – and threat actors increasingly try to manipulate the indicators researchers rely on, further muddying the waters. We believe that accurate attribution is often almost impossible. Moreover, threat intelligence has deep and measurable value far beyond the question ‘who did it’. There is a global need to understand the top predators in the malware ecosystem and to provide robust and actionable intelligence to the organisations that want it – that should be our focus,” says Brian Bartholomew, senior security researcher at Kaspersky Lab.