2025 threat reports, Kubernetes version adoption, and how attackers use AI

Welcome to the November 2025 edition of the Datadog Security Digest!


This is the time of year when we look back and try to consolidate everything we've done over the past 11 months. This issue's theme is *consolidation*, with learnings from the threat landscape in 2025, more ways to minimize long-lived credentials in AWS, a reminder that a forgotten bucket can lead to Windows RCEs, and more.


This newsletter was created by a real person, not a machine. (No, not even an LLM.) Your curator of the month is Christophe Tafani-Dereeper. (Yes, the em dashes are mine.)

Cloud & container security

A 2025 look at real-world Kubernetes version adoption

Our own Rory McCune analyzes version patterns across a large number of containerized environments—and perhaps surprisingly, the outcome turns out to be pretty positive.

Federating AWS identities to external services

AWS just released a new feature that lets you use STS as an OIDC identity provider when an AWS workload accesses an external application. This removes the need for long-lived credentials and makes it straightforward for external applications to securely authenticate workloads running inside AWS. More of this, please!
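On the receiving end, the external application's job is standard OIDC token validation: check the token's signature, issuer, audience, and expiry. Below is a minimal, self-contained sketch of those claim checks. Note the heavy assumptions: it uses a symmetric HS256 signature and placeholder issuer/audience values purely for illustration, whereas a real STS-issued token would be asymmetrically signed, with the verification keys fetched from the issuer's published JWKS endpoint.

```python
import base64
import hashlib
import hmac
import json
import time

# All of these values are illustrative stand-ins, not real endpoints or keys.
SECRET = b"demo-shared-secret"          # stand-in for the IdP's signing key
ISSUER = "https://sts.example-issuer"   # hypothetical issuer URL
AUDIENCE = "https://api.example.com"    # the external application validating the token


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_token(claims: dict) -> str:
    """Mint a demo HS256 JWT (only so the example is runnable end to end)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_token(token: str) -> dict:
    """Validate signature, issuer, audience, and expiry; return the claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected, sig_b64):
        raise ValueError("bad signature")
    # Re-add base64 padding stripped during encoding before decoding.
    payload = json.loads(
        base64.urlsafe_b64decode(payload_b64 + "=" * (-len(payload_b64) % 4))
    )
    if payload.get("iss") != ISSUER:
        raise ValueError("unexpected issuer")
    if payload.get("aud") != AUDIENCE:
        raise ValueError("unexpected audience")
    if payload.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return payload
```

The claim checks (issuer, audience, expiry) are the part that carries over to any OIDC federation setup; everything else here is demo scaffolding.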

Remote code execution in Windows Update Health Tools

Wondering why this story is in our "cloud security" section? It turns out that Martin Warmer found a dangling Azure blob storage account that Windows Update Health Tools actively used, but that anyone could register and use to implant malicious payloads. Had malicious actors found it first, Martin estimates they could have compromised more than 40,000 devices across 3,400 Azure tenants in just 7 days. Whoops!

AI security

How attackers use AI tools in 2025

Google has unique visibility into attacker behavior because of its large infrastructure, its operation of Google Cloud for many customers, and its 2022 acquisition of Mandiant, which responds to real-world incidents. In this post, their Threat Analysis Group (TAG) provides an update on how attackers are adjusting their use of AI tooling. Tactics include attempts to socially engineer AI models, the use of AI on infected devices to generate just-in-time malicious payloads, and the adoption of Gemini by state-sponsored actors for offensive operations.

Analyzing network traffic from coding agents

Today, even the most skeptical among us use LLMs, coding assistants, or even coding agents. Yet few of us understand what data these tools collect and where they send it. Lucas Pye, a research intern at Chaser Systems, wanted to understand this better. In a blog post, he shared his findings from analyzing the network traffic of popular tools such as Claude Code, Gemini CLI, and GitHub Copilot. Pye's analysis is useful not only for understanding what data these tools send to their backends, but also for identifying unapproved AI tooling in your environment.

DeepSeek generates insecure code when presented with politically sensitive topics

This is a fascinating finding from the CrowdStrike Counter Adversary Operations team. They found that in the presence of politically sensitive keywords such as "Tibet" or "Uyghurs," the code DeepSeek generated was significantly less secure. Their hypothesis is that this behavior isn't deliberate but is instead the result of emergent misalignment, where the model unintentionally learns to associate sensitive terms with negative characteristics due to bias in its training data.

Threat research

The ENISA threat landscape for 2025

ENISA (the European equivalent of the US's CISA) has published its 2025 threat landscape report, based on observed incidents. The report includes industry-specific insights that can help organizations build tailored threat detection programs. Phishing remains responsible for 60% of initial intrusions, driven by increasingly effective campaigns using phishing-as-a-service platforms and ClickFix-style pages.

Datadog threat roundup: Top insights for Q3 2025

The Datadog Security Research teams have visibility into a number of in-the-wild attacks and incidents. Every quarter, we report on key trends we think you should know about because they keep coming up. Q3 saw a rise in phishing attacks targeting npm maintainers' accounts, more malicious VS Code extensions, and increased attacker use of AI tooling.

Learnings from recent npm supply chain compromises

Back in the day, finding a malicious npm package was a chilling event. Now, it's easy to find hundreds in a single day. But the most impactful supply chain breaches occur when an attacker compromises a popular package maintainer's account, distributing malware to thousands of unsuspecting users. In this post, our own Kennedy Toomey shares a retrospective on common patterns identified across three high-profile compromises that occurred over a three-week period this summer.