
AI Phishing in 2026: Why Microsoft 365 Users Are Still Getting Caught (and What To Monitor)

Written by David Pape | Feb 2, 2026 10:55:30 AM

AI has made phishing faster, more convincing, and harder to spot. This guide shows what to monitor in Microsoft 365 and Entra ID, so you can detect account takeovers early, spot persistence like rules and forwarding, and contain them before they turn into fraud.

Why AI Phishing Breaks the Usual Warning Signs

Why doesn't the old advice hold up anymore? Vague wording, poor formatting, and odd domains used to be the telltale signs of a phishing email. In 2026, AI lets attackers tailor messages to specific roles, use industry language, reference current events and even internal context, write in multiple languages, and imitate the sender's tone, which was once one of the most reliable giveaways. Attacks are also spreading beyond email to Teams and SharePoint links, QR codes, and supplier impersonation.

AI also enables rapid iteration: attackers can A/B test which lures work and which don't, and generate convincing landing pages faster than ever.

What does that mean for you? Simple: you need more controls and better monitoring, which also shifts the burden of detection away from individual users.

So what does a typical Microsoft 365 takeover look like? Usually a phished credential or stolen session leads to an attacker sign-in, then to persistence (inbox rules, forwarding, delegates, OAuth consent), and finally to fraud such as payment redirection. The controls and monitoring below are aimed at breaking that chain as early as possible.

Quick Wins: Policies and Controls That We Often See Underused

High impact, low effort (examples)

    • Block or restrict legacy authentication (see the check sketched after this list)
    • Enforce MFA and tighten for admins and high-risk users
    • Turn on/confirm anti-phishing and impersonation protection (executive/finance)
    • Restrict external auto-forwarding
    • Review and tighten app consent (move toward admin consent for risky permissions)
    • Ensure mailbox auditing and log retention are adequate
    • Create an easy report phishing path for users
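
For the first control above, here is a minimal sketch of how you might verify it through Microsoft Graph. It assumes an app registration with the Policy.Read.All permission and a valid bearer token in a GRAPH_TOKEN environment variable; the names and structure are illustrative, not a finished tool.

```python
# Sketch: confirm at least one enabled Conditional Access policy blocks legacy authentication.
# Assumes Policy.Read.All and a bearer token in the GRAPH_TOKEN environment variable.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

resp = requests.get(f"{GRAPH}/identity/conditionalAccess/policies", headers=headers)
resp.raise_for_status()

LEGACY_CLIENTS = {"exchangeActiveSync", "other"}  # "other" covers the remaining legacy protocols

blocking = []
for policy in resp.json()["value"]:
    client_types = set(policy.get("conditions", {}).get("clientAppTypes") or [])
    controls = (policy.get("grantControls") or {}).get("builtInControls") or []
    if policy.get("state") == "enabled" and LEGACY_CLIENTS & client_types and "block" in controls:
        blocking.append(policy["displayName"])

if blocking:
    print("Legacy authentication is blocked by:", ", ".join(blocking))
else:
    print("No enabled Conditional Access policy blocks legacy authentication.")
```

Note that this only confirms a block exists somewhere; a proper review would also check which users, groups, and apps each policy is scoped to.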

Medium effort, strong payoff

    • Conditional Access for device compliance/approved apps
    • Separate admin accounts + break-glass plan
    • Alerting and escalation path, including out-of-hours (see the sketch below)
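
As a starting point for alerting, the sketch below polls Entra ID Protection for users currently flagged as at risk. It assumes Entra ID P2 licensing, the IdentityRiskyUser.Read.All permission, and the same GRAPH_TOKEN convention as above; in practice the output would feed a ticketing or on-call system rather than the console.

```python
# Sketch: list users Entra ID Protection currently marks as "atRisk".
# Assumes Entra ID P2, IdentityRiskyUser.Read.All, and a bearer token in GRAPH_TOKEN.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

resp = requests.get(
    f"{GRAPH}/identityProtection/riskyUsers",
    headers=headers,
    params={"$filter": "riskState eq 'atRisk'"},
)
resp.raise_for_status()

for user in resp.json()["value"]:
    # Feed these into your alerting/escalation path instead of printing them.
    print(user["userPrincipalName"], user["riskLevel"], user["riskLastUpdatedDateTime"])
```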

What To Do When You Suspect Compromise (First 30 Minutes)

First 5 Minutes: Contain

1. Confirm affected user and urgency (fraud risk?)

2. Block sign-in / reset credentials (as per policy; see the containment sketch after step 4)

3. Revoke active sessions

4. Escalate internally if finance/executive account
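
Steps 2 and 3 can be scripted in advance so containment is a single action when it matters. Below is a minimal sketch using Microsoft Graph; it assumes admin consent for User.ReadWrite.All (or the narrower equivalents), a bearer token in GRAPH_TOKEN, and a placeholder account name.

```python
# Sketch for steps 2-3: disable sign-in and revoke sessions for a compromised account.
# Assumes User.ReadWrite.All (or equivalent) and a bearer token in GRAPH_TOKEN.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
upn = "user@contoso.com"  # placeholder: the affected account from step 1

# Step 2: block interactive sign-in while credentials are reset out of band.
r = requests.patch(f"{GRAPH}/users/{upn}", headers=headers, json={"accountEnabled": False})
r.raise_for_status()

# Step 3: revoke refresh tokens so existing sessions must re-authenticate (and fail).
r = requests.post(f"{GRAPH}/users/{upn}/revokeSignInSessions", headers=headers)
r.raise_for_status()

print(f"{upn}: sign-in blocked and sessions revoked.")
```

Revoking sessions invalidates refresh tokens; access tokens already issued stay valid until they expire, which is one more reason to act within minutes rather than hours.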

Next 10 Minutes: Check Persistence

5. Review inbox rules (see the sketch after step 8)

6. Check forwarding settings

7. Check delegates/mailbox permissions

8. Check new OAuth consents/apps (if in scope)
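
The persistence checks in steps 5, 6, and 8 can also be scripted. The sketch below lists the mailbox's inbox rules, flags forward/redirect/delete actions, and dumps the OAuth permission grants consented for the user; it assumes MailboxSettings.Read and Directory.Read.All permissions plus the placeholder account used above. Delegates and mailbox permissions (step 7) are managed in Exchange Online and are not covered here.

```python
# Sketch for steps 5, 6 and 8: inbox rules (flagging forward/redirect/delete actions)
# and OAuth permission grants consented for the user.
# Assumes MailboxSettings.Read and Directory.Read.All plus a bearer token in GRAPH_TOKEN.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
upn = "user@contoso.com"  # placeholder for the affected account

# Steps 5-6: inbox rules, highlighting the actions attackers typically add for persistence.
rules = requests.get(f"{GRAPH}/users/{upn}/mailFolders/inbox/messageRules", headers=headers)
rules.raise_for_status()
for rule in rules.json()["value"]:
    actions = rule.get("actions") or {}
    risky = [a for a in ("forwardTo", "redirectTo", "delete", "moveToFolder") if actions.get(a)]
    marker = "  <-- review" if risky else ""
    print(f"Rule '{rule['displayName']}' enabled={rule['isEnabled']} actions={risky}{marker}")

# Step 8: delegated permission grants for this user; look for mail or offline_access scopes
# granted to apps nobody recognises. clientId is the service principal's object id.
user = requests.get(f"{GRAPH}/users/{upn}?$select=id", headers=headers)
user.raise_for_status()
grants = requests.get(
    f"{GRAPH}/users/{user.json()['id']}/oauth2PermissionGrants", headers=headers
)
grants.raise_for_status()
for grant in grants.json()["value"]:
    print(f"Service principal {grant['clientId']} holds scopes: {grant['scope']}")
```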

Next 15 Minutes: Scope and Protect Others

9. Identify external recipients and suspicious sent items (see the sketch after step 12)

10. Look for similar patterns across other accounts (campaign indicator)

11. Increase monitoring for high-risk users

12. Document actions/evidence for audit/insurance
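
For steps 9 and 10, the sketch below takes a quick look at the affected account's sent items and then searches tenant sign-in logs for other accounts hit from the same attacker IP. It assumes Mail.Read and AuditLog.Read.All permissions, Entra ID P1 or above for the sign-in logs, and placeholder values for the account and IP address.

```python
# Sketch for steps 9-10: review recent sent items, then search sign-in logs for other
# accounts hit from the same (attacker) IP address.
# Assumes Mail.Read + AuditLog.Read.All, Entra ID P1+, and a bearer token in GRAPH_TOKEN.
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
upn = "user@contoso.com"          # placeholder: affected account from step 1
attacker_ip = "203.0.113.50"      # placeholder: taken from the confirmed malicious sign-in

# Step 9: a quick look at sent items, printing recipients so external domains stand out
# (pagination and ordering omitted for brevity).
sent = requests.get(
    f"{GRAPH}/users/{upn}/mailFolders/sentItems/messages",
    headers=headers,
    params={"$select": "subject,toRecipients,sentDateTime", "$top": "25"},
)
sent.raise_for_status()
for msg in sent.json()["value"]:
    recipients = [r["emailAddress"]["address"] for r in msg.get("toRecipients", [])]
    print(msg["sentDateTime"], msg.get("subject", ""), "->", ", ".join(recipients))

# Step 10: other accounts signing in from the same IP suggest a wider campaign.
signins = requests.get(
    f"{GRAPH}/auditLogs/signIns",
    headers=headers,
    params={"$filter": f"ipAddress eq '{attacker_ip}'", "$top": "50"},
)
signins.raise_for_status()
hit_users = sorted({s["userPrincipalName"] for s in signins.json()["value"]})
print(f"\nAccounts with sign-ins from {attacker_ip}:")
for account in hit_users:
    print(" -", account)
```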

Keep in mind: the goal is to stop the spread and prevent fraud; forensics come later.

In a Nutshell

AI has changed phishing in one fundamental way: it has made it reliably convincing. That means the old model of training users harder and hoping they spot the signs will keep failing. In Microsoft 365 environments, the real differentiator is what happens after the click: how quickly you spot abnormal sign-ins, how quickly you catch persistence being set up (rules, forwarding, delegates, OAuth consent), and how quickly you contain the account before fraud or wider compromise follows. If you monitor the right signals and have agreed the first 30 minutes of actions in advance, most takeover attempts can be contained before they turn into payment redirection, data loss, or a broader incident.

If you want a practical starting point, use our Microsoft365 AI-Phishing & BEC Defence Kit to validate your tenant basics, run a quick persistence audit, and align IT and finance on a payment-verification process.