Key points
- Copilot Studio links look benign, but they can host content to redirect users to arbitrary URLs. One example of this is the built-in "Login" button, which allows delivery of OAuth phishing attacks.
- Copilot Studio also makes it easier for attackers to perform malicious actions or exfiltrate tokens. For example, a Copilot Studio agent can exfiltrate the user's token to a malicious URL after an OAuth phishing attack.
- This scenario is an example of why it’s important to treat new cloud services with caution, especially when they include content that end users can modify.
- To detect suspicious modification of Copilot Studio and configure strong application consent policies to protect against OAuth attacks, review our security considerations below.
- This post provides background on Microsoft's Entra ID application consent policy and updates made in July 2025 and upcoming in late October 2025. You can skip ahead if you are already familiar with the topic.
- Entra ID OAuth attacks have become more difficult, but are still relevant in two scenarios: users consenting to allowed permissions on internal applications, and users with Application Administrator roles consenting to any permissions on any applications.
Introduction
Would you trust this website?

The URL is a valid Microsoft domain, and the user experience looks similar to fully managed Microsoft Copilot or Microsoft 365 Copilot services. However, this page is a Microsoft Copilot Studio agent! Unlike Copilot services that offer less customization, Copilot Studio hosts chatbots (called "agents") that perform tasks through automations (called "topics") configured by users. This makes agents flexible for users… but also useful for attackers.
In this post, we document a method by which a Copilot Studio agent's "Login" settings can redirect a user to any URL, including an OAuth consent attack. This increases the attack's perceived legitimacy, as the user is redirected from copilotstudio.microsoft.com.
Our example will also automate exfiltration of the resulting token in Copilot Studio's topics. However, an attacker could also configure a topic to take any action on behalf of a user with the token.
Abuse of Microsoft services to distribute malicious content is not new, but this attack technique takes new forms as services evolve. Michael Bargury has provided a primer on Copilot exploitation in his Black Hat talks, "Living off Microsoft Copilot" and "15 Ways to Break Your Copilot." This post will cover a new vector that uses Copilot Studio to enable existing OAuth consent attacks.
You can find more information on Entra ID application fundamentals, including app registrations and service principals (SPs), in our previous post.
Background: OAuth consent attacks are still relevant
An OAuth consent attack (T1528), also known as the malicious OAuth application consent technique, is used to target Entra ID users. An early report of this technique from Amnesty International documented use of Entra ID applications to take full control of users' email data. These attacks are still relevant today, as reported by Red Canary.
While protections against these attacks have improved, two major scenarios remain where an attacker can use OAuth consent attacks to target users.
How OAuth consent attacks work
To perform an OAuth consent attack in Entra ID, an attacker creates an app registration that requests permission(s) to read or modify data as a target user. The attacker then lures the user into consenting to the application through Entra ID's application consent workflow.
Once the user consents, a token with the requested permissions is returned to a URL under the attacker's control. The attacker can use this token to perform actions as the target user, such as sending email or accessing sensitive data.
More technical details on this process are available in a writeup from Microsoft.
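The consent lure at the heart of this flow is simply a crafted authorization URL. The sketch below shows how such a URL is assembled for the Microsoft identity platform's v2.0 authorize endpoint; the client ID and redirect URI are hypothetical placeholders, not values from any real attack.

```python
from urllib.parse import urlencode

def build_consent_url(tenant: str, client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Assemble an Entra ID authorization URL that triggers the consent prompt."""
    params = {
        "client_id": client_id,       # the attacker-controlled app registration (placeholder ID)
        "response_type": "code",
        "redirect_uri": redirect_uri, # where the resulting code/token is delivered
        "scope": " ".join(scopes),    # delegated permissions the application requests
        "prompt": "consent",          # force the consent screen to be shown
    }
    return f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?{urlencode(params)}"

url = build_consent_url(
    "common",
    "00000000-0000-0000-0000-000000000000",  # hypothetical client ID
    "https://example.invalid/callback",       # hypothetical attacker redirect
    ["Mail.ReadWrite", "Mail.Send"],
)
```

Any user who follows this link and clicks "Accept" grants the listed scopes to the attacker's application.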
What's changed?
As OAuth consent attacks evolved, Microsoft introduced new settings to prevent users from mistakenly consenting to malicious applications. In 2020, the company released a setting to restrict users from consenting to unverified external applications.
In July 2025, Microsoft expanded application consent protections again by making microsoft-user-default-recommended the new default application consent policy for all Entra ID tenants in MC1097272.

The new policy states that it allows only "permissions consentable based on Microsoft's current recommendations." At the time of publication, this consists of preventing users from consenting to these Microsoft Graph delegated permissions without administrative consent:
- Sites.Read.All
- Sites.ReadWrite.All
- Files.Read.All
- Files.ReadWrite.All
This policy blocks consent to SharePoint and OneDrive data access, but still allows several other permissions.
Internal users and Application Administrators are still vulnerable
While the default policy will be updated in the future, its current state still allows users to be targeted using allowed permissions.
Scenario 1: Unprivileged user consent to internal applications
Users can consent to certain Microsoft Graph permissions for internal applications, as long as the requested Microsoft Graph permission is not included in the configured policy and does not have AdminConsentRequired set to "Yes."
Several permissions allowing data access are still consentable under the Microsoft default policy, including:
- Reading, writing, and sending email (Mail.ReadWrite, Mail.Send)
- Reading, writing, and sending chats (Chat.ReadWrite)
- Reading and writing calendars (Calendars.ReadWrite)
- Reading and writing data in OneNote (Notes.ReadWrite)
By default, all Entra ID member users can register new applications. An attacker with access to a user in a default Entra ID tenant can create an internal application they control, then trick a target user in the same tenant into granting access to these permissions.
Scenario 2: Administrative consent to any applications
Users with the Cloud Application Administrator, Application Administrator, or a similar role can consent to any Microsoft Graph permissions for any application. Because these users approve and consent to new applications on behalf of the Entra ID tenant, they are not required to request approval themselves.
An attacker can create an external application in an Entra ID tenant they control, then trick a target administrative user from another tenant into consenting to any Microsoft Graph permissions.
Future changes
In the past month, Microsoft has announced another update to its default application consent policy, taking effect in late October 2025. Once implemented, this update would close the default policy gaps described above, except for the OneNote permission (Notes.ReadWrite), and would limit the corresponding attack scenario described later in this post (Scenario 1).
The full text of this note is provided below:

However, administrators with the Cloud Application Administrator, Application Administrator, or similar roles could still be targeted by an OAuth attack, as the user application consent policy does not impact these users' consent experience.
Automating token theft with Copilot Studio
As described above, OAuth consent attacks aren't over yet. Let's look at how Copilot Studio's agents can enable these attacks, by serving a malicious OAuth application through a legitimate-looking service and automating token exfiltration.
A Copilot Studio agent's sign-in process (the "Login" button) can be configured with a malicious application, either internal or external to the target environment, then modified by an attacker to send the resulting user token to a URL under their control.
An overview of this attack is shown below.

Demo: Targeting users with a malicious Copilot Studio agent
Let's say a user accessed a malicious agent, configured for the attack above. What would the experience look like?
The user would receive a link from the attacker with a format similar to the below:
https://copilotstudio.microsoft.com/environments/Default-{tenant-id}/bots/Default_{bot-name}/canvas
Notice how the URL's domain (copilotstudio.microsoft.com) is the same for every agent, and how it resembles the domains of Microsoft's other Copilot services (copilot.microsoft.com, copilot.cloud.microsoft).
When accessing this URL, the user is greeted with text similar to Microsoft 365 Copilot and prompted to sign in with the "Login" button.

If they haven't noticed the "Microsoft Power Platform" icon, they may mistake this for one of Microsoft's other Copilot services.
Once the user clicks "Login," they are redirected to a malicious OAuth application. However, this button can be configured to redirect them to any malicious URL. The use of authentication flows to redirect users to malicious URLs has recently been documented by Push Security and Proofpoint.
The malicious agent does not need to be registered in the target environment: in other words, an attacker can create an agent in their own environment to target users.
Scenario 1: Unprivileged internal user
An attacker with an existing foothold in an Entra ID tenant can target an unprivileged internal user by registering a malicious application in that environment.
The application requests access to a user's email and OneNote data, two permissions that are still allowed under Microsoft's default policy:

Scenario 2: Application Administrator
An attacker with no access to an environment could still target an Application Administrator with an externally registered application. This application can request any permissions, including application scopes, delegated scopes, and scopes disallowed by policy for standard users.
In the example below, an Application Administrator is allowed to consent to a malicious application with all categories of permissions without further warning:

Completing sign-in
After clicking "Accept," the user is directed to the Bot Connection Validation service (token.botframework.com) and provided with a numeric code to complete authentication. This may seem atypical, but it's a standard part of the Copilot Studio authentication process using a valid Microsoft domain.

The user provides the validation code to the agent to complete authentication. The agent then receives the user's token from the Bot Connection Validation service and stores it in the User.AccessToken variable. After authentication, the user can converse normally with the agent.
What they may not realize is that their session has been sent to an external attacker!

Exfiltrating tokens
Immediately after authentication, a backdoored HTTP request in the agent's sign-in topic forwarded the User.AccessToken variable to Burp Collaborator. The user's web traffic will show no connection to this URL, as the token was sent directly from Copilot Studio using Microsoft's IPs.
The token ("eyJ[...]") is shown below in Burp Collaborator, in the "Token" header.

This token can also be used in Copilot Studio's topics to take actions on the user's behalf and exfiltrate any resulting data.
Decoding the user token reveals the Microsoft Graph permissions that the user has consented to. For example, the token returned after tricking an internal user into consenting (Scenario 1) will grant an attacker Mail.ReadWrite, Mail.Send, and Notes.ReadWrite.
```json
{
  "aud": "00000003-0000-0000-c000-000000000000",
  "iss": "https://sts.windows.net/[tenant-id]/",
  [...snip...]
  "amr": [
    "pwd"
  ],
  "app_displayname": "Copilot Test Auth",
  "appid": "[removed]",
  "appidacr": "1",
  "idtyp": "user",
  "ipaddr": "x.x.x.x",
  "name": "Test User",
  [...snip...]
  "scp": "Mail.ReadWrite Mail.Send Notes.ReadWrite",
  "sid": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "sub": "PAe[...snip...]NHU",
  "tenant_region_scope": "NA",
  "tid": "[tenant-id]",
  "unique_name": "TestUser@[removed].onmicrosoft.com",
  "upn": "TestUser@[removed].onmicrosoft.com",
  [...snip...]
}
```
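A token's payload segment can be inspected without verifying its signature, since JWT claims are only base64url-encoded. The sketch below decodes a fabricated sample token carrying the same scp claim as above; the token itself is illustrative, not a real credential.

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT to inspect its claims."""
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url-encoded without padding; restore it before decoding
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated sample token with the claims shown above (empty signature segment)
sample_payload = {"scp": "Mail.ReadWrite Mail.Send Notes.ReadWrite", "idtyp": "user"}
fake_token = (
    "eyJhbGciOiJub25lIn0."
    + base64.urlsafe_b64encode(json.dumps(sample_payload).encode()).decode().rstrip("=")
    + "."
)

claims = decode_jwt_payload(fake_token)
scopes = claims["scp"].split()
```

The scp claim lists the delegated scopes the attacker can now exercise with this token.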
Configuring the attack
How does this all work? Let's review the setup to better understand Copilot Studio agents and how they may be targeted for malicious use.
Creating a malicious agent
An attacker creates a Copilot Studio agent. This step is performed with a Copilot Studio license or full trial license in the attacker's own Entra ID tenant, or with a compromised user in another tenant with a Copilot Studio license. In this scenario, where the agent is created will not make a difference to target users.
Each agent contains a set of topics. Topics are customizable low-code workflows that activate on specified conditions, such as when a user sends a message. Topics are either user-created "custom" topics, or built-in "system" topics. Both custom and system topics can be modified by the agent's developer.
Topics' automation capabilities make them an ideal target to misuse in an attack.
Stealing tokens through the sign-in topic
All agents include a system sign-in topic that triggers when an agent requires user authentication. Because the agent's developer can modify system topics, the sign-in process can be backdoored.

The sign-in topic displays a "Login" button to redirect the user to the configured sign-in provider and collects the resulting session token in the User.AccessToken variable. The sign-in topic's default automation is shown below, with the ability for a developer to add more actions using the "+" button.

To backdoor the sign-in topic, a new "HTTP Request" action is added after the "Authenticate" action. In this example, we'll configure an HTTP request to a Burp Collaborator URL and send the User.AccessToken variable in a "Token" header.

Configuring the malicious OAuth application
At this stage, the attacker creates a multi-tenant app registration with an associated secret and a reply URL of https://token.botframework.com/.auth/web/redirect. This app registration is the malicious application that target users consent to during sign-in.
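The shape of such an app registration can be sketched as a request body for Microsoft Graph's create-application endpoint (POST /applications, which requires an authenticated session with appropriate application-management permissions). This is a minimal sketch: the display name mirrors the demo above, and no live request is made.

```python
import json

# Sketch of a request body for POST https://graph.microsoft.com/v1.0/applications
app_registration = {
    "displayName": "Copilot Test Auth",       # name shown to users on the consent screen
    "signInAudience": "AzureADMultipleOrgs",  # multi-tenant, so users in other tenants can consent
    "web": {
        # Bot Framework token service redirect, as used by Copilot Studio sign-in
        "redirectUris": ["https://token.botframework.com/.auth/web/redirect"],
    },
}
body = json.dumps(app_registration)
```

The requested scopes themselves are supplied later, in the agent's manual authentication settings, rather than baked into the registration.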
The application ID (or client ID), secret, and authentication provider URLs are used to configure the agent's sign-in settings, under "Security → Authentication → Authenticate manually." Full details of this configuration are documented in Microsoft's guide.

For unprivileged internal users (Scenario 1), the attacker configures the agent to request scopes that users can still consent to in Microsoft's default policy.

For administrative targets (Scenario 2), these scopes can include any Microsoft Graph permissions. For example, the scopes above could be augmented with scopes denied to users by default (Files.ReadWrite.All, Sites.ReadWrite.All) and even application permissions such as Application.ReadWrite.All.
Misusing redirect URLs
The "Authorization URL template" field of the agent's manual authentication redirects a user to a configured URL when the user clicks the "Login" button. Our example redirects the user to the application consent workflow URL, but this field could also be used to redirect to any malicious URL.
Sharing the agent
The agent is shared to copilotstudio.microsoft.com by activating its demo website under "Channels → Demo website." This demo site is used to trick users into logging in. The demo URL has a format similar to the below:
https://copilotstudio.microsoft.com/environments/Default-{tenant-id}/bots/Default_{bot-name}/canvas
You can find more details on links and sharing settings in Copilot Studio in a blog post by Johann Rehberger.
Once agent sharing is configured, the attacker shares their copilotstudio.microsoft.com link to the target user through a message of their choice (e.g., email, Teams chat, or SEO positioning).
Security considerations
Enforce a robust application consent policy
Preventing OAuth consent phishing requires a strong application consent policy. As outlined above, Microsoft's default application consent policy is not sufficient to address all permissions that can lead to sensitive data access.
Even after upcoming changes to Microsoft's default policy, you may want to create a stronger application consent policy to prevent unprivileged users from granting sensitive data access.
Microsoft has created the below guides to assist in creating a strong application consent policy beyond the default policy:
Even with these changes, certain administrative users are always at risk of accidental consent to OAuth applications. Ensure that users with the Application Administrator role, Cloud Application Administrator role, and similarly high-privileged roles understand that they will not be prevented from consenting to high-risk permissions and should be cautious when authorizing applications.
Disable user application creation defaults
By default, all Entra ID member users can create new applications. This can allow attackers with access to a user account in a target tenant to create new applications and perform internal OAuth phishing attacks.
Steps to disable this default are defined in a guide from Microsoft.
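The relevant setting lives on the tenant's authorization policy in Microsoft Graph. As a sketch (assuming an admin token with policy-write permissions, shown here as a placeholder), the request builds as follows; the request is constructed but intentionally not sent.

```python
import json
import urllib.request

# Placeholder token; in practice, obtain one with Policy.ReadWrite.Authorization
GRAPH_TOKEN = "<admin-token>"

# PATCH the tenant's authorization policy so member users can no longer
# register new applications
req = urllib.request.Request(
    url="https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    method="PATCH",
    headers={
        "Authorization": f"Bearer {GRAPH_TOKEN}",
        "Content-Type": "application/json",
    },
    data=json.dumps(
        {"defaultUserRolePermissions": {"allowedToCreateApps": False}}
    ).encode(),
)
# urllib.request.urlopen(req) would send the request; omitted in this sketch
```

After this change, only users holding an appropriate administrative role (or a custom role granting app creation) can register applications.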
Monitor application consent
Monitoring when a user has consented to high-risk permissions or uncommon applications can identify OAuth phishing attacks. Consider monitoring the events below to detect suspicious application consent activities.
Entra ID Audit logs
- Activity display name: "Consent to application"
Microsoft 365 Audit logs
- Operation: "Consent to application"
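As a starting point, consent events can be pulled from the Entra ID audit log via Microsoft Graph's directoryAudits endpoint (which requires audit-log read permissions) and triaged for high-risk scopes. The risky-scope list below is an illustrative subset, not an exhaustive policy, and the sample record is fabricated for the sketch.

```python
import json
from urllib.parse import urlencode

# Query URL for "Consent to application" events in the Entra ID audit log
base = "https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"
query_url = base + "?" + urlencode(
    {"$filter": "activityDisplayName eq 'Consent to application'"}
)

# Illustrative subset of scopes worth alerting on (see the permissions discussed above)
RISKY_SCOPES = {"Mail.ReadWrite", "Mail.Send", "Notes.ReadWrite"}

def risky_scopes_in(record: dict) -> set:
    """Return any high-risk delegated scopes mentioned anywhere in an audit record."""
    blob = json.dumps(record)
    return {scope for scope in RISKY_SCOPES if scope in blob}

# Fabricated sample record shaped like a consent audit entry
sample = {
    "activityDisplayName": "Consent to application",
    "targetResources": [
        {"modifiedProperties": [
            {"displayName": "ConsentAction.Permissions",
             "newValue": "Scope: Mail.ReadWrite Mail.Send"}
        ]}
    ],
}
```

Records matching risky scopes can then be routed to your alerting pipeline for review.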
Monitor Copilot Studio agent creation and modification
Consider monitoring Copilot Studio events that may indicate malicious activity. The below Microsoft 365 events will help identify activities related to this post. Additional details on Copilot Studio monitoring are provided in Microsoft's guide.
Copilot Studio agent created
Creation of agents from unexpected users may indicate suspicious activity.
- Workload: PowerPlatform
- Operation: BotCreate
Copilot Studio sign-in topic modified
Modification of Copilot Studio system topics may indicate an attacker backdooring the sign-in process.
- Workload: PowerPlatform
- Operation: BotComponentUpdate
- Other: Where PropertyCollection contains Name: powerplatform.analytics.resource.bot_component.schema.name, Value: *.topic.Signin
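The matching logic for the sign-in topic event above can be sketched as a small filter over exported Microsoft 365 audit records. The field names follow the event schema described above, and the sample record is fabricated for illustration.

```python
import fnmatch

def is_signin_topic_update(record: dict) -> bool:
    """Flag an audit record indicating modification of a Copilot Studio sign-in topic."""
    if record.get("Workload") != "PowerPlatform":
        return False
    if record.get("Operation") != "BotComponentUpdate":
        return False
    for prop in record.get("PropertyCollection", []):
        if (prop.get("Name") == "powerplatform.analytics.resource.bot_component.schema.name"
                and fnmatch.fnmatch(prop.get("Value", ""), "*.topic.Signin")):
            return True
    return False

# Fabricated sample record shaped like a sign-in topic modification event
sample = {
    "Workload": "PowerPlatform",
    "Operation": "BotComponentUpdate",
    "PropertyCollection": [
        {"Name": "powerplatform.analytics.resource.bot_component.schema.name",
         "Value": "cr123_agent.topic.Signin"},
    ],
}
```

Any hit from this filter is worth reviewing, since legitimate edits to system sign-in topics should be rare.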
How Datadog can help
For customers that use Datadog Cloud SIEM, the following detections monitor for potential OAuth phishing attacks and consent to suspicious applications:
- Potential Illicit Consent Grant attack via Azure registered application
- Consent given to application associated with business email compromise attacks in Microsoft 365
Additionally, the following detections monitor for the addition of credentials to rarely used Entra ID applications. This may indicate an attacker preparing an application to be used in the Copilot Studio OAuth consent phishing attack scenario explained above:
Conclusion
In this post, we described a novel OAuth consent attack delivery method that leverages Copilot Studio agents. This method highlighted current gaps in Microsoft's OAuth consent settings and served as a reminder not to trust low-code solutions on Microsoft domains as inherently non-malicious.
Protecting against this type of attack includes configuring a strong application consent policy and preventing users from registering new applications by default. Details on these configurations, along with monitoring considerations, are provided in our section on security considerations.
We're always eager to hear from you. If you have any questions, thoughts, or suggestions, send us a message at securitylabs@datadoghq.com, or open an issue. You can also subscribe to our newsletter or RSS feed.