This past year has seen upheaval across the cybersecurity landscape and the need for effective, worldwide threat intelligence continues to grow as geopolitical and economic developments create an increasingly complicated and uncertain world for both businesses and consumers.
Threat actors continue to evolve, with new players and threats emerging globally alongside novel ways of executing older tactics and approaches. Security experts should assume that no organisation or individual is truly safe from a cyberthreat, and that there is increasing urgency to monitor and research threats that are resurging and evolving at a rapid pace and scale.
“In South Africa, we have seen increased detections of nation-state threat actors operating near critical national infrastructure,” says Carlo Bolzonello, country lead for Trellix South Africa. “In our Q2 Cyber Threat Intelligence Report for South Africa, we observed that detection activity attributed to nation-states such as China and North Korea was, for the first time, highest in public sector institutions. This is an indication of South Africa and Africa’s strategic relevance on the geopolitical stage.
“Various countries around the world are looking to decouple from Western financial markets through moves like the expansion of the BRICS bloc and the de-dollarisation of their financial markets,” adds Bolzonello. “As a result, Africa will play a more important role for resource-based economies and nation-states alike. Both allies and adversaries are increasingly interested in gaining leverage here through any data they can extract.”
Ransomware remains an ever-present plague for organisations worldwide as ransomware families increase in scale and sophistication – including by coordinating and partnering with other threat actors through underground forums.
Socially engineered ploys to trick individuals into compromising their devices or personal information are becoming more cunning and targeted – and, simultaneously, harder for both victims and security tools to catch and identify.
Furthermore, the trend continues of cyberattacks being used in the service of political, economic, and territorial ambitions through nation-states executing espionage, warfare, and disinformation as observed through threat activity in Ukraine, Taiwan, Israel and other regions.
“The cyber landscape today is more complex than ever before,” says John Fokker, head of Threat Intelligence at Trellix Advanced Research Centre. “Cybercriminals from ransomware families to nation-state actors are getting smarter, quicker, and more coordinated in retooling their tactics to follow new schemes – and we don’t anticipate that changing in 2024. To break away from escalating attacks and start outsmarting and outmanoeuvring threat actors, all industries need to embrace a cyberstrategy that is constantly vigilant, actionably comprehensible, and adaptable to new threats. That is how we can ensure a one-step lead over cybercriminals in the coming year.”
Cybersecurity experts and threat researchers from the Trellix Advanced Research Centre team have compiled their predictions for trends, tactics, and threats that organisations should keep top of mind as we approach 2024:
The Threat of Artificial Intelligence
Underground development of malicious LLMs
By Shyava Tripathi – Recent advancements in AI have given rise to LLMs capable of generating human-like text. While LLMs exhibit remarkable technological potential for positive applications, their dual-use nature also makes them vulnerable to malicious exploitation. One significant security concern associated with LLMs lies in their potential misuse by cybercriminals for large-scale attacks.
Leading LLMs like GPT-4, Claude, and PaLM2 have achieved unparalleled capabilities in generating coherent text, answering intricate queries, problem-solving, coding, and numerous other natural language tasks. The availability and ease of use of these advanced LLMs have opened a new era for cybercriminals. Unlike earlier, less sophisticated AI systems, today’s LLMs offer a potent and cost-effective tool for hackers, eliminating the need for extensive expertise, time, and resources. And this value has not been lost on the cybercriminal underground.
Setting up the infrastructure for large-scale phishing campaigns has become cheaper and more accessible – even for individuals with limited technical skills. Tools like FraudGPT and WormGPT are already prominent in cybercriminal networks. Popular darknet forums now serve thousands of users as platforms for the coordinated development of phishing emails and counterfeit webpages, as well as malware and exploits designed to evade detection. These LLM applications help cybercriminals overcome considerable technical hurdles – and we expect the development and malicious use of these tools to accelerate in 2024.
The resurrection of Script Kiddies
By Ajeeth S – The availability of free and open-source software is what originally led to the rise of so-called “Script Kiddies” – individuals with little to no technical expertise who use pre-existing automated tools or scripts to launch cyberattacks. Though they are sometimes dismissed as unskilled amateurs or blackhat wannabes, the growing availability of advanced generative AI tools – and their potential for malicious use – means Script Kiddies pose a significant and growing threat.
The Internet is now filled with tools that use AI to make people’s lives easier – from creating presentations and generating voice notes to writing argumentative papers. Many of the best-known tools, like ChatGPT, Bard, or Perplexity AI, come with security mechanisms to prevent them from writing malicious code. This is not the case for all AI tools on the market, however – especially those being developed on the dark web.
It is only a matter of time until cybercriminals have access to unrestricted generative AI that can write malicious code, create deepfake videos, assist with social engineering schemes, and more. This will make it easier than ever for unskilled actors to execute sophisticated attacks at scale. Furthermore, widespread use of such tools to exploit vulnerabilities will make root cause analysis of attacks more challenging for defenders. We consider this an area to monitor carefully in 2024.
AI-generated voice scams for social engineering
By Rafael Pena – The rise of scams involving AI-generated voices is a concerning trend that is set to grow in the coming year, posing significant risks to individuals and organisations. These scams often rely on social engineering, with scammers using psychological manipulation to deceive individuals into taking specific actions, such as disclosing personal information or executing financial transactions. AI-generated voices play a crucial role here, as they can instil trust and urgency in victims, making them more susceptible to manipulation.
Recent advancements in artificial intelligence have greatly improved the quality of AI-generated voices. They can now closely mimic human speech patterns and nuances, making it increasingly difficult to differentiate between real and fake voices. Furthermore, the accessibility and affordability of AI voice generation tools have democratised their use. Even individuals without technical expertise can easily employ these tools to create convincing artificial voices, empowering scammers.
Scalability is another key factor. Scammers can leverage AI-generated voices to automate and amplify their fraudulent activities. They can target numerous potential victims simultaneously with personalised voice messages or calls, increasing their reach and effectiveness.
Detecting AI-generated voices in real time is a significant challenge, particularly for individuals who are not familiar with the technology. The increasing authenticity of AI voices makes it difficult for victims to distinguish between genuine and fraudulent communications.
Additionally, these scams are not limited by language barriers, allowing scammers to target victims across diverse geographic regions and linguistic backgrounds.
Phishing and vishing attacks are both on the rise. As the technology for AI-generated voices improves, the logical next step is for threat actors to use these tools on live phone calls with victims – impersonating legitimate entities to amplify the effectiveness of their scams.
Shifting trends in threat actor behaviour
Even more layers of ransomware extortion
By Bevan Read – As ransomware groups are primarily financially driven, it is unsurprising to see them find new ways to extort their victims for more money and pressure them to pay the ransom. We are starting to see ransomware groups contact the clients of their victims as a new way to apply pressure and counter recent ransomware mitigations. This allows them to ransom the stolen data not only to the direct victim of their attack, but also to any of the victim’s clients impacted by the stolen data.
Ransomware groups finding ways to bring media and public pressure to bear on their victims isn’t new. Back in 2022, one of Australia’s most significant health insurance companies suffered a data breach. In tandem with their ransom demand to the insurer, the threat actors publicised much of the medical data – leading to pressure from the public and officials to pay the attackers to take the information down.
In addition, because of the tremendously private nature of the data being released, clients walked into the insurance company’s shopfronts and offered to pay to have their own details removed. In 2023, in a similar incident, a ransomware group instead threatened to contact the clients of companies it had compromised – offering those clients the option to pay to remove their personal and private details from the exposed data.
As this additional form of extortion grows in popularity, it adds a fifth avenue for these attackers to ransom those affected. We expect a shift in the landscape, with ransomware groups more often targeting entities that handle not only sensitive personal information but intimate details that can be used to extort clients. It would not be surprising to see the healthcare, social media, education, and SaaS industries come further under fire from these groups in 2024.
Election security must start with protecting the human-in-the-loop
By Patrick Flynn – A critical threat to election security lies in the basics: it often starts with emails or SMS messages in which bad actors target election officials through creative phishing schemes designed to compromise credentials.
We only need to look back three years, to when this tactic was prominently used against key officials in four battleground states. This coming election cycle will be no different unless the individuals involved at every level – from city and county election officials to volunteers – are protected.
Cyberattacks such as spear phishing and sophisticated impersonation continue to use email as the main entry point because it can be highly customised, increasing the likelihood of successful exploitation.
As we inch closer to the 2024 election cycle, everyone involved in elections must continue examining emails closely and avoid trusting unrecognised hyperlinks. They should be extra wary of highly targeted impersonation, business email compromise (BEC), and spear-phishing campaigns, and consider leveraging solutions that detect and stop advanced malicious files and URLs.
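One phishing tell that the advice above can be partly automated against is a hyperlink whose visible text shows one web address while the underlying `href` points somewhere else entirely. The sketch below is a minimal, illustrative scanner for that single indicator, using only Python’s standard library; the function and class names are our own, and a real email security product would combine many more signals.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkMismatchScanner(HTMLParser):
    """Flags <a> tags whose visible text looks like a URL for a
    different host than the actual href target -- a classic
    phishing indicator."""

    def __init__(self):
        super().__init__()
        self._href = None          # href of the <a> currently open, if any
        self._text = []            # text fragments seen inside that <a>
        self.suspicious = []       # list of (display_text, real_href)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            # Only compare when the display text itself looks like a URL.
            if shown.startswith(("http://", "https://")):
                shown_host = urlparse(shown).hostname
                real_host = urlparse(self._href).hostname
                if shown_host and real_host and shown_host != real_host:
                    self.suspicious.append((shown, self._href))
            self._href = None


def scan_email_html(body: str):
    """Return (display_text, real_href) pairs that disagree on host."""
    scanner = LinkMismatchScanner()
    scanner.feed(body)
    return scanner.suspicious
```

For example, `scan_email_html('<a href="http://evil.example/x">https://vote.example.gov</a>')` flags the link, while a link whose text and target agree is left alone.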
Playing a role in elections empowers all individuals, but these roles also come with a critical responsibility. Every participant must be aware of, and prepared for, those who seek to influence the electoral process through illicit means.
Emerging threats and attack methods
The growing battle of the (QR) codes
By Raghav Kapoor and Shyava Tripathi – The rise of QR code-based phishing campaigns represents an alarming trend. As our daily lives become increasingly reliant on digital interactions, attackers are adapting their tactics to exploit new vulnerabilities. QR codes, originally designed for convenience and efficiency, have become an enticing attack vector for cybercriminals.
One of the primary reasons behind the expected increase in QR code-focused phishing campaigns is their inherent trustworthiness. QR codes became essential in various aspects of daily life during the Covid-19 pandemic – from contactless payments to restaurant menus. As a result, people have grown accustomed to scanning QR codes without much thought, assuming they are safe. This sense of trust can be exploited by cybercriminals who embed malicious links or redirect victims to fake websites. We expect that QR codes will also be used to distribute widely recognised malware families.
The ease of QR code creation and distribution has lowered the barrier to entry into the world of phishing and malware distribution. Anyone can generate a QR code and embed a malicious link within it, making it a cost-effective and accessible method for cybercriminals to target victims. Moreover, QR codes offer a discreet way for hackers to deliver their payloads. Users may not even realise they have fallen victim to a phishing attack until it is too late, making detection and prevention more challenging.
Traditional email security products often fail to detect these attacks, which makes them an attractive option for cybercriminals today. As attackers continue to refine their tactics and craft convincing phishing lures, the potential for success in these campaigns will rise. To combat the growing threat of QR code-focused phishing, users must exercise caution when scanning codes – especially from unknown or suspicious sources.
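The caution urged above can be made concrete: once a QR code has been decoded to a URL, a few cheap heuristics can flag the most common lures before anyone taps the link. The sketch below is illustrative only – the shortener list is a hypothetical example, the heuristics are far from exhaustive, and a production filter would check live reputation feeds – but it uses only Python’s standard library and shows the idea.

```python
import ipaddress
from urllib.parse import urlparse

# Hypothetical, non-exhaustive shortener list for illustration only.
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl"}


def qr_url_warnings(url: str) -> list:
    """Return a list of red flags for a URL decoded from a QR code."""
    warnings = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    if parsed.scheme != "https":
        warnings.append("not served over HTTPS")
    if "@" in parsed.netloc:
        # https://legit.example@evil.example/ renders the left part
        # prominently while actually visiting the right part.
        warnings.append("userinfo trick: text before '@' masks the real host")
    try:
        ipaddress.ip_address(host)
        warnings.append("raw IP address instead of a domain name")
    except ValueError:
        pass  # host is a normal domain name
    if host.startswith("xn--") or ".xn--" in host:
        warnings.append("punycode host (possible homograph attack)")
    if host in SHORTENERS:
        warnings.append("URL shortener hides the final destination")
    return warnings
```

A code resolving to `http://203.0.113.7/pay`, for instance, draws two warnings (no HTTPS, raw IP), while a plain `https://example.com/menu` passes clean – which is exactly the triage a scanner app or mail gateway could surface to the user.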
Python in Excel creates a potential new vector for attacks
By Max Kersten – With Microsoft implementing default defensive measures to block Internet macros in Excel, macro usage by threat actors has seen an expected drop. Instead, they are exploring alternative attack vectors, including lesser-known or underutilised ones such as OneNote documents. With the recent release of Python in Excel, we expect this to become a potential new vector for cybercriminals.
As both attackers and defenders continue to explore the functionality of Python in Excel, it is all but guaranteed that bad actors will start to leverage this new technology in cyberattacks. Because the Python code is executed in containers on Azure, it can access local files with the help of Power Query.
Now, Microsoft did keep security in mind with the creation and release of Python in Excel, claiming there is no possible connection between Python code and Visual Basic for Applications (VBA) macros. Additionally, it provides very limited access to the local machine and the Internet, while only utilising a subset of the Anaconda Distribution for Python.
However, this could still be abused via a vulnerability or misconfiguration if one is found by an actor. Microsoft’s limitations narrow the playing field, but they don’t change the fact that this functionality gives threat actors a new field to play on.
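Defenders can start preparing for this vector now, because an .xlsx workbook is just a ZIP archive of XML parts that can be triaged without opening Excel. The sketch below scans worksheet XML for a `_xlfn.PY(` formula token – an assumed marker for Python-in-Excel formulas whose exact on-disk form may vary across Excel builds – so treat it as a coarse, illustrative triage heuristic rather than a reliable detector.

```python
import io
import zipfile

# Assumed marker for Python-in-Excel formulas in worksheet XML;
# the exact on-disk representation may differ between Excel builds.
PY_MARKER = b"_xlfn.PY("


def find_python_formulas(xlsx_bytes: bytes) -> list:
    """Return the worksheet part names inside an .xlsx archive whose
    XML appears to contain a Python-in-Excel formula."""
    hits = []
    with zipfile.ZipFile(io.BytesIO(xlsx_bytes)) as archive:
        for name in archive.namelist():
            # Sheet contents live under xl/worksheets/ in the OOXML layout.
            if name.startswith("xl/worksheets/") and name.endswith(".xml"):
                if PY_MARKER in archive.read(name):
                    hits.append(name)
    return hits
```

A mail gateway or sandbox could run such a check on inbound workbooks and route any hits for closer inspection, the same way macro-bearing documents are quarantined today.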