
Datadog threat roundup: top insights for Q4 2024

January 24, 2025

As a vendor in the observability and security space, Datadog has unique visibility into threat actor activity targeting cloud environments, the software supply chain, and various types of web applications. This report highlights our key findings from our threat research and threat hunting efforts in Q4 2024.

Several new campaigns deliver malware via malicious packages published to npm and PyPI

The software supply chain has become the subject of increasing concern in recent years because of its effectiveness as an initial access vector for cyber attacks. As part of our threat hunting efforts, we use our open source tool GuardDog to scan npm and PyPI for malicious packages, then publish our findings on GitHub.

The most common technique we observed in malicious PyPI packages was overriding setuptools’ setup() function to execute arbitrary code; over 80 percent of the malware we discovered leveraged this technique. Among the malicious packages we discovered in npm, the most common technique was the use of a preinstall or postinstall script declared in the package.json file, which we found in over 90 percent of the malicious npm packages we analyzed.
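The npm heuristic described above can be sketched in a few lines: flag any manifest that declares an install-time script hook, since these run automatically on `npm install`. This is a minimal illustration, not GuardDog's actual implementation, and the manifest below is invented.

```python
import json

# Install-time hooks in package.json run automatically when the package
# is installed, making them a common malware delivery mechanism.
INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def install_hooks(package_json_text: str) -> list[str]:
    """Return any install-time script hooks declared in a package.json."""
    scripts = json.loads(package_json_text).get("scripts", {})
    return sorted(h for h in scripts if h in INSTALL_HOOKS)

# Hypothetical manifest resembling the pattern seen in malicious packages
manifest = '{"name": "example-pkg", "scripts": {"postinstall": "node payload.js", "test": "jest"}}'
print(install_hooks(manifest))  # ['postinstall']
```

Note that an install hook alone is not proof of malice—plenty of legitimate packages use postinstall scripts—which is why this signal is combined with source code heuristics in practice.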

Q4 2024 also saw the emergence of two threat actors, tracked by Datadog as Tenacious Pungsan and MUT-8694. Both conducted campaigns targeting the npm and PyPI package ecosystems.

MUT-8694 was notable as the first threat actor we observed coordinating an attack across multiple package ecosystems. The actor published dropper malware, masquerading as legitimate software packages, to both ecosystems, ultimately leading to infostealer malware executing on victims' devices.

MUT-8694 attack flow diagram (click to enlarge)
We also identified a new campaign by the DPRK-linked threat actor Tenacious Pungsan. Similar to the activity conducted by MUT-8694, this campaign involved impersonating popular npm packages and deploying infostealer malware to Windows machines. The infostealer malware itself is known as BeaverTail and has been around since late 2023.
We maintain GuardDog, as well as the Supply-Chain Firewall open source project, as part of our efforts to help defend against the growing number of supply chain attacks. GuardDog is a CLI tool that helps users identify malicious npm and PyPI packages. To determine whether a package is malicious, GuardDog uses source code and package metadata heuristics, both of which are continuously updated by Datadog security researchers in response to changes in the security landscape.

Supply-Chain Firewall is a command-line tool for preventing the installation of known malicious PyPI and npm packages. It is intended primarily for use by engineers to protect their development workstations from compromise in a supply chain attack, and it can also be used in CI/CD environments through a GitHub action.
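The core idea behind a tool like Supply-Chain Firewall can be sketched as a gate in front of the package manager: check each requested package against a blocklist of known-malicious names before allowing the install. This is a conceptual sketch under our own assumptions, not the project's actual implementation; the blocklist entry is purely illustrative.

```python
# Illustrative blocklist keyed by (ecosystem, package name). A real tool
# would source this from continuously updated malicious-package datasets.
KNOWN_MALICIOUS = {("npm", "0xengine/xmlrpc")}

def blocked_packages(ecosystem: str, requested: list[str]) -> list[str]:
    """Return the subset of requested packages that are known malicious."""
    return [p for p in requested if (ecosystem, p) in KNOWN_MALICIOUS]

blocked = blocked_packages("npm", ["express", "0xengine/xmlrpc"])
if blocked:
    print(f"refusing to install: {blocked}")  # refusing to install: ['0xengine/xmlrpc']
```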

Threat actors target cloud AI environments

In addition to targeting the software supply chain, we also observed threat actors turning their attention to cloud AI environments throughout Q4. One particular environment facing exploitation is Amazon Bedrock. Bedrock is an AWS managed service that grants users access to a variety of high-performing foundation models via a single API. This allows organizations to experiment with different models and integrate generative AI into their applications.

Threat actors have long hunted for inadvertently exposed AWS access keys and used these to gain access to various services within a victim’s AWS account. Historically, the services most likely to have been accessed in this manner have been Simple Email Service (SES), Elastic Compute Cloud (EC2), and Identity and Access Management (IAM). Increasingly, threat actors are also showing interest in gaining access to Amazon Bedrock. Once they do, they can take control of the LLM and use it for their own applications—a technique known as LLMjacking.

Typically, LLMs have guardrails in place to prevent the model from generating responses that could aid illegal activity. Despite this, a number of jailbreaks exist. These involve using carefully crafted prompts to trick the LLM into generating responses that include forbidden or illegal content. A particularly sinister example of this was discovered in Q4 2024: threat actors hijacked and jailbroke a Bedrock LLM to create a chatbot that returned responses containing illegal and other highly questionable content, then sold access to these chatbots online.

Throughout Q4, we identified multiple noteworthy attempts by threat actors to target Bedrock. Most of these incidents involved the use of a compromised access key and began with the same initial reconnaissance technique used by most adversaries: the GetCallerIdentity API call.
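In CloudTrail logs, this reconnaissance step is easy to spot. As a hedged sketch (not Datadog's detection logic), one could flag GetCallerIdentity calls made with a long-lived IAM access key, whose IDs begin with "AKIA", as opposed to temporary session keys, which begin with "ASIA":

```python
# Flag GetCallerIdentity calls made with a long-lived access key ("AKIA"
# prefix), a common first move after a key compromise. The sample event
# below is a simplified, fabricated CloudTrail record for illustration.
def is_recon_event(event: dict) -> bool:
    access_key = event.get("userIdentity", {}).get("accessKeyId", "")
    return event.get("eventName") == "GetCallerIdentity" and access_key.startswith("AKIA")

sample = {
    "eventSource": "sts.amazonaws.com",
    "eventName": "GetCallerIdentity",
    "userIdentity": {"accessKeyId": "AKIAEXAMPLEKEY123456"},
}
print(is_recon_event(sample))  # True
```

In practice this signal needs context—legitimate tooling also calls GetCallerIdentity—so it is most useful combined with source IP and principal baselines.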

One notable case we investigated involved a threat actor operating through a network of proxy services in the United Kingdom. After running GetCallerIdentity, they proceeded to enumerate IAM users and their permissions through a series of API calls (ListUsers, GetUser, ListAttachedUserPolicies, etc.), likely searching for accounts with Bedrock access. The threat actor then attempted to manipulate user credentials by trying to update login profiles, though this was blocked by existing IAM policies.
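The enumeration phase of this incident also leaves a distinctive CloudTrail footprint: a burst of distinct IAM read calls from a single principal. A hypothetical scoring sketch, using the API calls named above:

```python
# Score a principal's activity by how many distinct IAM enumeration calls
# it makes. The call list mirrors the incident described above; thresholds
# and event records here are illustrative, not production detection logic.
ENUM_CALLS = {"ListUsers", "GetUser", "ListAttachedUserPolicies", "ListUserPolicies"}

def enumeration_score(events: list[dict]) -> int:
    """Number of distinct IAM enumeration API calls in an event stream."""
    return len({e["eventName"] for e in events} & ENUM_CALLS)

events = [
    {"eventName": "GetCallerIdentity"},
    {"eventName": "ListUsers"},
    {"eventName": "GetUser"},
    {"eventName": "ListAttachedUserPolicies"},
]
print(enumeration_score(events))  # 3
```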

After extensive enumeration of the account, the threat actor achieved significant access to Bedrock services across all AWS regions. They successfully executed numerous ListFoundationModels and GetFoundationModelAvailability API calls, followed by multiple successful PutUseCaseForModelAccess operations. The attack combined automated scanning with manual exploitation, up to and including interaction with Bedrock via the web interface.
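A single principal probing model access across many regions is itself a useful signal. A minimal sketch, assuming simplified CloudTrail records, groups the Bedrock API calls named above by region:

```python
# Group suspicious Bedrock API calls by region to surface cross-region
# sweeps by one principal. Event records are fabricated for illustration.
BEDROCK_CALLS = {"ListFoundationModels", "GetFoundationModelAvailability",
                 "PutUseCaseForModelAccess"}

def bedrock_calls_by_region(events: list[dict]) -> dict[str, set[str]]:
    regions: dict[str, set[str]] = {}
    for e in events:
        if e["eventName"] in BEDROCK_CALLS:
            regions.setdefault(e["awsRegion"], set()).add(e["eventName"])
    return regions

events = [
    {"eventName": "ListFoundationModels", "awsRegion": "us-east-1"},
    {"eventName": "PutUseCaseForModelAccess", "awsRegion": "eu-west-1"},
    {"eventName": "ListUsers", "awsRegion": "us-east-1"},
]
print(sorted(bedrock_calls_by_region(events)))  # ['eu-west-1', 'us-east-1']
```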

AWS Bedrock AI LLMjacking attack flow diagram (click to enlarge)

To reproduce a similar attack in your environment, you can use our Stratus Red Team Invoke Bedrock Model technique, allowing you to detonate a realistic attack in a self-contained way.

This is just one example of a growing trend of threat actors adapting to new technology by actively targeting and successfully compromising AI services. The attacks are quick, stealthy, and can have a reverberating financial impact if they go unnoticed.

MUT-1244 targets developers and red teamers via several channels

In the final quarter of 2024, our security research efforts led to the discovery of several new campaigns from a threat actor we track as MUT-1244. We initially discovered this campaign after analyzing malware from a malicious npm package named 0xengine/xmlrpc, along with repositories operated by the threat actor hosted on GitHub and Codeberg.

MUT-1244 attack flow diagram (click to enlarge)

Additional payloads hosted in these repositories included an infostealer, disguised as a cryptocurrency miner, that was capable of backdooring target systems and exfiltrating system information, environment variables, cloud credentials, and SSH private keys. Data stolen by this malware was exfiltrated to the cloud storage services file.io and Dropbox. We were able to confirm that over 390,000 credentials were stolen in this manner.

This campaign has two unique aspects: the initial access methods chosen and the victimology. Using open source intelligence (OSINT) techniques, we discovered that the threat actor was also operating a phishing campaign to deliver their infostealer malware. This campaign targeted academics working in high-performance computing (HPC) research and attempted to disguise the malware as a Linux kernel microcode patch, allegedly published to mitigate a recent kernel vulnerability.

In addition to targeting academics working in HPC research, MUT-1244 also attempted to compromise cybersecurity researchers in another campaign through the use of trojanized proof of concepts (PoCs) for various software vulnerabilities. These PoCs were hosted on GitHub with legitimate-sounding names—as such, many were automatically included in threat intelligence feeds like Feedly Threat Intelligence or Vulnmon, increasing the likelihood that they would be executed on victim devices.

MUT-1244 also managed to compromise other offensive actors with their infostealer malware. We discovered a trojanized credentials checker, included in a project named yawpp that was hosted on GitHub and operated by the threat actor. The project, purportedly used for validating WordPress credentials, included the same infostealer malware used in the previous campaigns, and credentials supplied to it were exfiltrated to Dropbox when the malware executed. We assessed with high confidence that these credentials originated from machines operated by other threat actors, who were likely using the tool to validate credentials stolen in unrelated breaches.

Given the highly specific targeting involved in this campaign, we would urge academics and security researchers to be particularly vigilant when executing untrusted code from the internet. PoCs for popular exploits should always be tested in isolated and disposable environments, and users of HPC systems should be wary of unsolicited emails urging them to install Linux kernel patches.

Threat actors continue to target Amazon SES

Of the cloud services available to threat actors, Amazon SES is one of the most frequently targeted. The service's potential for aiding mass spamming and phishing attacks makes it a lucrative target for threat actors. At Datadog, we’ve long observed exploitation of SES in customer environments, often the result of leaked or stolen long-lived AWS access keys.

Fortunately, distinct and often easily identifiable techniques are used in these campaigns, helping defenders spot them. For example, threat actors typically create backdoored IAM users to ensure persistent access to the victim’s environment. These IAM users tend to have conspicuous names, serving as reliable indicators for detection engineering. Prior to creating the malicious user, threat actors have also been observed querying AWS APIs for the presence of these usernames, to determine whether the account has been previously compromised. This activity can be easily detected in CloudTrail logging.

Q4 saw a continuation of this trend, as we discovered a new campaign targeting SES. Detection engineers at Datadog regularly perform threat hunting to identify malicious activity in customer environments. A recent threat hunt uncovered an attack in which a threat actor used a long-lived AWS access key to access the AWS console.

To achieve this, they used the sts:GetFederationToken API call to convert their CLI access into console access, then used signin:GetSigninToken to generate a link for signing in to the console. From there, an IAM role named SupportAWS was created, along with an attached policy that allowed the role to be assumed from a threat actor-controlled account. The threat actor also attached the AdministratorAccess managed policy to the role before creating a malicious IAM user named adminprod that could be used to persistently access AWS services, including SES.
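The CLI-to-console pivot described above has a recognizable two-step shape in the logs: a GetFederationToken call followed by a console sign-in token request from the same principal. A minimal, hypothetical sequence check:

```python
# Detect the console-pivot sequence: sts:GetFederationToken followed by a
# console sign-in token request. Input is an ordered list of event names
# for one principal; real detection would also correlate identity and time.
def console_pivot(event_names: list[str]) -> bool:
    if "GetFederationToken" not in event_names:
        return False
    after = event_names[event_names.index("GetFederationToken"):]
    return "GetSigninToken" in after

seq = ["GetCallerIdentity", "GetFederationToken", "GetSigninToken"]
print(console_pivot(seq))  # True
```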

Attack graph of an AWS incident Datadog identified in Q4 2024 (click to enlarge)
While we did not discover any spam or phishing emails, we have strong reason to believe the threat actor intended to send them at a later date, or planned to sell access to the account on an underground marketplace. This discovery shows that leaked long-lived AWS access keys remain a major threat to users of cloud services, and that unexpected usage of the API calls discussed above can be a reliable indicator of a threat actor targeting SES.

Our threat research and detection engineering efforts throughout Q4 2024 revealed several other significant trends in the cloud threat landscape.

Cloud control plane remains a primary target

The most prominent trend we observed in Q4 was the continued focus on cloud control plane attacks, which made up 44 percent of all attacks we observed in the quarter. The majority of these attacks centered on AWS services, with threat actors showing particular interest in SES and the newly introduced Bedrock AI service. In many cases, these attacks followed a familiar pattern: initial compromise of long-lived access keys, followed by attempts to establish persistence through the creation of backdoored IAM users or roles.

Business email compromise remains a widespread threat

Business email compromise (BEC) attacks continue to be a significant threat, accounting for approximately 33 percent of all identified incidents in Q4. BEC threat actors leveraged sophisticated techniques targeting Microsoft 365 environments, with a notable increase in the use of OAuth consent phishing tactics. These attacks typically began with the creation of suspicious inbox rules or OAuth consent grants, followed by attempts to exfiltrate sensitive information or conduct further account compromise.

Application-layer attacks are evolving

While the cloud control plane and BEC attacks dominated the threat landscape, we also observed an increase in sophisticated application-layer attacks. Credential stuffing campaigns, in particular, showed evolution in their tactics, becoming more distributed and persistent in nature. These attacks represented 20 percent of all findings, with threat actors increasingly targeting cloud-native applications. The increasing sophistication of these campaigns was evident in their use of dynamic infrastructure—frequently rotating through multiple proxy services and employing advanced automation to bypass traditional web application defense mechanisms, such as rate limiting.
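The distributed pattern described above—many failed logins in total, but each source IP staying under the cap that naive rate limiting would trip—can be expressed as a simple heuristic. This is an illustrative sketch with invented thresholds, not a production detection:

```python
from collections import defaultdict

# Flag credential stuffing that is spread across many proxy IPs: high
# total failure count, but every individual IP stays under a per-IP cap.
def distributed_stuffing(attempts: list[tuple[str, bool]],
                         total_threshold: int = 50, per_ip_cap: int = 10) -> bool:
    failures: dict[str, int] = defaultdict(int)
    for ip, succeeded in attempts:
        if not succeeded:
            failures[ip] += 1
    total = sum(failures.values())
    return total > total_threshold and all(n < per_ip_cap for n in failures.values())

# 60 failed logins spread over 20 proxy IPs, 3 each: flagged as distributed.
attempts = [(f"10.0.0.{i % 20}", False) for i in range(60)]
print(distributed_stuffing(attempts))  # True
```

A noisy single-IP brute force would fail the per-IP condition here, which is exactly why per-IP rate limiting alone misses these campaigns.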

Looking ahead

The trends observed in Q4 2024 suggest that threat actors are continuing to refine their tactics while also exploring new attack surfaces, particularly in emerging technologies like AI services.

Organizations should focus on implementing robust access key management practices, enhancing monitoring of AI/ML services, and improving their incident response procedures to address these evolving threats. Of particular importance is the need for enhanced detection engineering focused on emerging attack patterns, especially around AI service abuse and sophisticated BEC campaigns.

Thank you for reading—we're eager to hear from you! If you have any questions, thoughts or suggestions, shoot us a message at securitylabs@datadoghq.com. You can also subscribe to our monthly newsletter to receive our latest research in your inbox.
