
New Phishing Campaign Targets Gmail Users With Fake Google Security Alerts

A sophisticated phishing campaign is sending Gmail users convincing fake security alerts that appear to come from Google itself. The emails pass standard spam filters and use real Google infrastructure to reach inboxes. Here's how to spot them.

breached.news · 16 min read

A phishing campaign currently targeting Gmail users has enough technical sophistication to fool standard spam filters — and, for many people, a careful read of the email itself.

The attacks were publicly documented by Nick Johnson, the lead developer of the Ethereum Name Service (ENS), in a thread posted on X in April 2025. Johnson received one of the emails himself and, rather than deleting it, pulled it apart to understand exactly how it worked. What he found was significant: the message was, by all technical measures, legitimate. It appeared to come from Google. It was signed by Google. And it arrived in the same conversation thread as real Google security alerts he'd received previously.

The email told him his Google account data was being requested pursuant to a law enforcement subpoena, and that he should review the request and submit any objections through a provided portal.

The portal was not Google's.

Why Email Authentication Standards Aren't Enough

To understand why this attack works, it helps to understand what the email industry built to stop attacks like it — and where those defences fall short.

The three main email authentication standards are SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail), and DMARC (Domain-based Message Authentication, Reporting and Conformance). Together, they were designed to address the fundamental problem with email: the protocol was built in the 1970s with no concept of identity verification, and anyone can send an email claiming to be from anyone.

SPF specifies which mail servers are allowed to send email on behalf of a domain. If an email claims to come from google.com but was sent from a server not listed in Google's SPF record, it should be treated with suspicion.

DKIM adds a cryptographic signature to outgoing emails, letting recipients verify that the message content hasn't been tampered with and was genuinely sent by an authorised server for that domain.

DMARC ties SPF and DKIM together and tells receiving servers what to do when an email fails those checks — deliver it anyway, quarantine it, or reject it outright.

These are genuinely useful standards. Widespread DMARC adoption has reduced the volume of crude spoofing attacks — emails that simply forge the From: header to claim a sender they have no relationship with. That class of attack is harder to execute against well-defended recipients than it was a decade ago.

The problem is that all three standards verify authentication, not intent. They confirm that an email genuinely came from the claimed infrastructure. They say nothing about whether the person controlling that infrastructure is acting legitimately. When an attacker abuses Google's own OAuth system to generate a real email from Google's real servers — as in this campaign — SPF, DKIM, and DMARC all pass, because the email genuinely did originate from Google. The standards worked as designed. The design simply doesn't account for an attacker using legitimate access to a platform to produce weaponised messages.

This limitation is not a secret. Security researchers have documented it for years. The challenge is structural: the same infrastructure that enables legitimate applications to send notification emails on behalf of major platforms can be abused by anyone who creates an app and understands how the plumbing works.

How the Attack Works, Step by Step

Understanding why this phishing campaign is harder to detect than most requires looking at the DKIM check from the attacker's side.

When you receive an email, your email provider performs a series of checks to verify that the message is what it claims to be. One of the most important is DKIM — DomainKeys Identified Mail. DKIM works by having the sending mail server attach a cryptographic signature to the outgoing email. When it arrives, your provider checks that signature against a public key published by the domain in question. If the signature validates, the email "passed DKIM," which is a strong signal that it actually came from the claimed sender.
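
The body-hash half of that check can be sketched in a few lines. This is an illustrative simplification of RFC 6376's "simple" body canonicalisation, not a full DKIM implementation:

```python
import base64
import hashlib

def dkim_body_hash(body: bytes) -> str:
    """Compute the bh= value carried in a DKIM-Signature header, using a
    sketch of 'simple' body canonicalisation: trailing empty lines are
    trimmed and the body ends in exactly one CRLF before hashing."""
    while body.endswith(b"\r\n\r\n"):
        body = body[:-2]
    if not body.endswith(b"\r\n"):
        body += b"\r\n"
    return base64.b64encode(hashlib.sha256(body).digest()).decode()

# Canonicalisation means trailing blank lines don't change the hash:
print(dkim_body_hash(b"Alert body\r\n") == dkim_body_hash(b"Alert body\r\n\r\n\r\n"))  # True
```

The receiving server recomputes this hash, then verifies the cryptographic signature over the hash and the signed headers against the public key the domain publishes in DNS.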

Virtually all legitimate Google emails pass DKIM. And here's the remarkable thing about this attack: so do the phishing emails.

The attackers achieved this by abusing Google OAuth — the system that lets developers create apps that integrate with Google services. By creating a Google OAuth application and naming it with a carefully crafted string (essentially, the full text of a fake Google security alert formatted to look like an email notification), they were able to trigger Google's own infrastructure to send a legitimate notification email. Because Google actually sent the email, Google's DKIM signature on it is entirely valid. It really is signed by Google.

The email then follows a specific path: it is delivered to a mailbox controlled by the attacker and forwarded on to the victim, while the phishing portal itself is hosted on Google Sites. Because the DKIM signature covers the message content and the message hasn't been modified in transit, the signature still passes at the recipient's end.

The result, as reported by BleepingComputer, is what researchers call a DKIM replay attack — the attacker replays a legitimately signed Google email with their own payload embedded in it.
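
The replay mechanics can be sketched with a toy signer. HMAC stands in for the asymmetric signature real DKIM uses, and all keys and addresses are hypothetical; the point is that the signature covers the headers and body, never the SMTP envelope recipient:

```python
import hashlib
import hmac

SIGNED_HEADERS = ("from", "to", "subject")  # headers the signature covers

def dkim_sign(headers: dict, body: str, key: bytes) -> str:
    """Toy DKIM signer: HMAC over the signed headers plus the body."""
    payload = "".join(f"{h}:{headers[h]}\r\n" for h in SIGNED_HEADERS) + body
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def dkim_verify(headers: dict, body: str, key: bytes, sig: str) -> bool:
    return hmac.compare_digest(dkim_sign(headers, body, key), sig)

google_key = b"stand-in-for-google-dkim-key"
msg = {"from": "no-reply@accounts.google.com",
       "to": "attacker-mailbox@example.com",   # hypothetical attacker address
       "subject": "Security alert"}
sig = dkim_sign(msg, "Subpoena notice body", google_key)

# The attacker forwards the message to the victim. Only the SMTP envelope
# recipient changes, and DKIM never covers the envelope, so the signature
# the platform produced still verifies at the victim's provider.
print(dkim_verify(msg, "Subpoena notice body", google_key, sig))  # True
```

Any change to the body or the signed headers breaks verification, which is exactly why the attacker forwards the message untouched.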

Why Spam Filters Don't Catch It

Email spam and phishing filters work primarily by checking a combination of factors: the sending domain, the DKIM and SPF authentication status, the content of the message, the reputation of the sending infrastructure, and whether recipients have flagged similar messages.

This attack defeats most of those checks simultaneously:

  • The sending domain appears legitimate. The email originates through Google's own mail infrastructure.
  • DKIM passes. The message carries a valid Google signature.
  • The landing page is hosted on sites.google.com. Google Sites is a legitimate Google product; its domain doesn't trigger blocklists.
  • The email may appear in an existing thread. Because the email passes as coming from the same infrastructure as genuine Google security alerts, Gmail may group it with previous legitimate alerts in the same conversation thread — lending it additional credibility.

The practical effect is that an email which would normally be caught, quarantined, or flagged sails straight into the primary inbox carrying all the visual indicators of legitimacy.
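
A toy version of that scoring makes the failure mode concrete. The signal names and weights below are purely illustrative, not taken from any real filter:

```python
def gateway_score(signals: dict) -> float:
    """Toy phishing score: positive pushes toward quarantine, negative
    toward the inbox. Weights are illustrative only."""
    weights = {
        "dkim_pass": -2.0,                 # valid signature lowers suspicion
        "trusted_sender_domain": -1.5,
        "link_domain_on_blocklist": 4.0,
        "urgent_language": 1.0,
    }
    return sum(w for name, w in weights.items() if signals.get(name))

campaign_email = {
    "dkim_pass": True,                  # genuinely signed by Google
    "trusted_sender_domain": True,      # sent via Google infrastructure
    "link_domain_on_blocklist": False,  # sites.google.com isn't blocklisted
    "urgent_language": True,
}
print(gateway_score(campaign_email))  # -2.5: delivered straight to the inbox
```

Because the strongest trust signals all fire in the attacker's favour, the one weak positive signal (urgent language) is swamped.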

The Wider Pattern: Attackers Abusing Legitimate Infrastructure

The Google OAuth technique is novel in its specific mechanics, but it sits within a well-established pattern: attackers routing malicious content through legitimate, trusted platforms to defeat filtering and build credibility with targets.

Microsoft 365 / SharePoint phishing has been documented extensively. Attackers create free or trial Microsoft 365 accounts, upload a malicious file or credential-harvesting page to SharePoint or OneDrive, and send a sharing notification from sharepoint.com or onedrive.live.com. The sharing email is genuine — Microsoft sent it — and the link goes to a real Microsoft domain. Security filters that rely on domain reputation pass it through. Researchers at Cofense documented campaigns using exactly this method at scale against enterprise targets. The file hosted on SharePoint may itself be a page that redirects to a credential harvesting site, or may impersonate a Microsoft login to harvest 365 credentials.

DocuSign phishing exploits the fact that DocuSign notification emails are ubiquitous in professional contexts and widely trusted. Attackers with DocuSign accounts can send genuine DocuSign envelopes containing documents that redirect recipients to malicious sites. The email comes from docusign.net and carries all DocuSign's authentication markers. The Anti-Phishing Working Group has noted DocuSign as a frequently abused platform, with volumes typically spiking around tax season and contract-heavy business periods. A recipient who routinely signs contracts via DocuSign has no obvious reason to treat a DocuSign notification with heightened suspicion.

Google Docs and Google Drive phishing predate the OAuth campaign described here by years. Attackers share a Google Doc with a target — again, a real share notification from Google's infrastructure — and the document itself contains a link to a credential harvesting page or a malicious file download. Because the notification is genuine and the document lives on Google's servers, it clears every platform-level filter. Google has implemented policies to limit abuse of Drive and Docs sharing for phishing, but the method remains in active use.

The common thread across all these techniques is the same as in the Gmail OAuth campaign: the infrastructure is legitimate; the abuse is not. Filters and detection systems built around the assumption that "trusted domain = safe message" are defeated by definition. The arms race has moved past domain reputation.

The Psychology of Authority Lures

The social engineering dimension of these attacks is inseparable from their technical sophistication. Even a perfectly crafted DKIM replay attack would fail if the email it delivered carried a weak lure. What makes the law enforcement subpoena lure so effective is worth examining.

Authority is one of the six principles of influence documented by Robert Cialdini in his foundational research on persuasion: people are more likely to comply with requests from figures perceived as authoritative — officials, experts, institutions. A message that invokes legal process carries implied authority from the state itself. The recipient's instinct is to take it seriously.

Urgency is layered on top. The subpoena lure doesn't just invoke authority — it implies a deadline. There's a window to object. If you miss it, something bad happens. Urgency narrows the cognitive window available for sceptical evaluation. Under time pressure, people rely on heuristics ("this looks like it's from Google") rather than analytical reasoning ("let me verify the headers and check my account directly").

Legitimacy signalling completes the picture. The email passes every technical authentication check. It's in a thread with real Google emails. The landing page is on a Google domain. Each of these signals independently reduces suspicion; combined, they create a context where doubt requires active effort to sustain. The user has to overcome a stack of credibility indicators to conclude that something is wrong.

This is why legal/authority lures outperform generic "your password needs to be reset" messages on every metric that matters to attackers — click rate, credential submission rate, time to action. The target isn't just clicking; they're mentally prepared to cooperate with an institutional process. That mental framing makes them more likely to enter credentials carefully and correctly, exactly as the attacker needs.

The Arms Race: Trusted Domains vs Detection

The security industry's response to the trend of attackers abusing legitimate platforms has been, necessarily, slow and incomplete.

Traditional URL and domain blocklists can't block sharepoint.com, docusign.net, sites.google.com, or accounts.google.com — doing so would break legitimate workflows for millions of users. Instead, defenders have moved toward content analysis (examining what's on the page a link leads to, not just the domain), behavioural signals (unusual sharing patterns, new accounts sending high volumes), and context-aware filtering (distinguishing a SharePoint share from a known colleague versus a cold share from an unknown account).

These approaches help at the margins. Google, Microsoft, and DocuSign all invest in detecting and terminating abusive accounts. The Google OAuth campaign described in this article was eventually addressed — Google updated its systems to close the specific naming trick that allowed attackers to embed email content in the app name field.

But the attackers adapt. When one specific technique is closed, related ones are explored. The underlying dynamic doesn't change: legitimate platforms provide infrastructure that passes security checks, and there will always be some way to abuse that infrastructure for malicious purposes. The lag between a new technique being deployed at scale and the platform detecting and closing it can be weeks or months — plenty of time to harvest thousands of credentials.

The honest conclusion security researchers draw is that perimeter defences will always be incomplete against this class of attack. The reliable mitigation is user behaviour: specifically, the habit of never acting on a security alert delivered via email, and always navigating directly to the service in question. That habit cannot be defeated by technical sophistication in the phishing email, regardless of how convincingly it mimics legitimate infrastructure.

The Social Engineering Hook

The technical cleverness of this attack would be less concerning if the social engineering hook were weak. It isn't.

The specific lure — a notification that your Google account data is being accessed pursuant to a law enforcement subpoena — is well-chosen. It creates urgency without being obviously alarming (you haven't been told you've been hacked; you've been told there's a legal process affecting your account). It implies legitimate institutional authority. And it provides a plausible reason why you might need to act quickly: there's a window to submit an objection.

The email directs victims to a Google Sites page that closely mimics the appearance of Google's account security interfaces. Because the page is hosted on sites.google.com, the domain in the address bar is technically a Google domain — which is enough to satisfy many users who've been trained to "look for the Google domain."
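
Checking the exact hostname, rather than looking for "google" anywhere in the URL, defeats this trick. A minimal sketch:

```python
from urllib.parse import urlparse

# Where genuine Google account management actually lives.
ACCOUNT_HOSTS = {"accounts.google.com", "myaccount.google.com"}

def is_account_management_url(url: str) -> bool:
    """Match on the exact hostname, never on 'google' appearing in the URL."""
    return (urlparse(url).hostname or "") in ACCOUNT_HOSTS

print(is_account_management_url(
    "https://sites.google.com/view/verify-your-account-security"))  # False
print(is_account_management_url("https://myaccount.google.com/security"))  # True
```

The example path under sites.google.com is illustrative; the campaign's actual paths varied, but the hostname distinction is the reliable signal.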

From that page, victims are prompted to log in with their Google credentials. Those credentials are captured by the attacker in real time. In variants where the victim has two-factor authentication enabled, some implementations use a real-time proxy that forwards the login to the actual Google site and harvests the 2FA code before it expires — a technique that defeats SMS-based and app-based authentication codes (though not hardware security keys).
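
The reason hardware keys resist the proxy is worth sketching. In WebAuthn, the signed data incorporates the relying-party ID (the domain), so a signature produced for one domain is useless on another. HMAC stands in here for the real asymmetric signature, and the flow is simplified, but the core property holds:

```python
import hashlib
import hmac

def key_signature(device_secret: bytes, rp_id: str, challenge: bytes) -> str:
    """Sketch of WebAuthn origin binding: the authenticator signs over a
    hash of the relying-party ID (the domain) plus the server's challenge.
    HMAC stands in for the real asymmetric signature."""
    data = hashlib.sha256(rp_id.encode()).digest() + challenge
    return hmac.new(device_secret, data, hashlib.sha256).hexdigest()

secret = b"secret-inside-the-hardware-key"
challenge = b"nonce-from-google"

legit = key_signature(secret, "accounts.google.com", challenge)
phished = key_signature(secret, "sites.google.com", challenge)
print(legit != phished)  # True: a proxied signature fails at the real site
```

A one-time code, by contrast, is just a short string with no domain bound into it, which is why a real-time proxy can relay it.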

Google's Response

Google acknowledged the issue and indicated it was working on mitigations. The Hacker News reported in April 2025 that Google had updated its systems to address the specific OAuth application naming abuse that enabled the attack, though the broader class of DKIM replay attacks remains an area of ongoing research and defence development.

Google also noted that users concerned about law enforcement requests for their account data should visit Google's transparency tools directly at myaccount.google.com/data-and-privacy rather than responding to email alerts. Legitimate Google processes related to legal requests do not require users to log in through an emailed link.

How to Spot It: A Practical Checklist

Given how convincing these emails are, detection requires checking specific things rather than relying on general instinct.

1. Check the full sending address, not just the display name. Email clients often show only the display name ("Google") in the sender field. Expand or hover to see the full address. Legitimate Google security emails come from no-reply@accounts.google.com — not from variations, Google Sites addresses, or addresses that route through intermediate domains.

2. Inspect the email headers. In Gmail, click the three-dot menu next to Reply and choose "Show original." Look at the From:, Reply-To:, and any X-Forwarded-To: fields. If there are forwarding or relay addresses that don't belong to Google, that's a red flag.

3. Never navigate to an account security page from an email link. If you receive a notification about your Google account — any notification, from any source — navigate to myaccount.google.com manually by typing it into your browser. Don't click the link. If the alert is real, it will appear in your account security centre. If it doesn't appear there, the email was false.

4. Check the destination URL, not just the domain. A URL like sites.google.com/view/verify-your-account-security contains the word "google" but is not a Google account management page. Legitimate Google account management occurs at accounts.google.com and myaccount.google.com. Google Sites (sites.google.com) is a public hosting platform where anyone can create any content with any appearance.

5. Be particularly sceptical of legal-language urgency. Subpoenas, legal holds, and law enforcement requests are effective social engineering lures precisely because they combine urgency with institutional authority. Google's actual processes for handling legal requests do not involve emailing users and asking them to log in within a deadline.

6. Use a hardware security key if you're a high-value target. Hardware keys (like YubiKey or Google's own Titan key) cannot be phished by real-time proxy attacks because they cryptographically bind the authentication to the exact domain. A login attempt on a phishing site using a hardware key will simply fail — the key won't sign a challenge from sites.google.com when it's registered to accounts.google.com.
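
The header inspection in step 2 can be automated with Python's standard-library email parser. The raw message below is a hypothetical reconstruction for illustration, not an actual sample from the campaign:

```python
from email import message_from_string
from email.policy import default

raw = """\
From: Google <no-reply@accounts.google.com>
Reply-To: legal-support@lookalike.example
Subject: Notice of subpoena
X-Forwarded-To: victim@gmail.com

Review the attached request.
"""

msg = message_from_string(raw, policy=default)
red_flags = []

reply_to = str(msg["Reply-To"] or "")
if reply_to and not reply_to.rstrip(">").endswith("google.com"):
    red_flags.append(f"Reply-To leaves Google: {reply_to}")
if msg["X-Forwarded-To"]:
    red_flags.append("Message was forwarded before it reached you")

for flag in red_flags:
    print(flag)
```

Neither signal alone is proof of phishing (forwarding has legitimate uses), but either one on a message claiming to be a Google security alert warrants treating the email as hostile.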

This Isn't an Isolated Technique — Trusted Infrastructure Is the New Attack Surface

The Google OAuth abuse documented here isn't novel in principle — it's part of a broader pattern that security researchers have been tracking for years. Attackers have learned that it's easier to abuse trusted services than to build convincing fakes from scratch. The domain is already trusted, the SSL certificate is already valid, the spam filters have already whitelisted it.

Microsoft 365 and SharePoint phishing is one of the most prevalent examples. Attackers with a compromised or trial Microsoft account can upload malicious files to SharePoint and send sharing notifications through Microsoft's own email infrastructure. The notification email comes from sharepoint.com, passes all authentication checks, and contains a link to a file that either harvests credentials or delivers malware. Because SharePoint is used legitimately by millions of organisations, security teams struggle to block it wholesale.

DocuSign phishing exploits the widespread use of electronic signature platforms. Attackers send legitimate DocuSign envelopes — sometimes from compromised accounts, sometimes from trial accounts — containing documents designed to harvest information or redirect victims to credential-stealing pages. Researchers at Abnormal Security documented a significant wave of this in 2024, noting that because the emails genuinely originate from DocuSign's infrastructure, they defeat nearly all gateway-level filters.

Google Docs and Workspace phishing follows the same playbook. An attacker shares a Google Doc or Google Form with a victim. The sharing notification comes from Google. The document itself — hosted on docs.google.com — contains a link or an embedded form designed to collect credentials. Some variants use Google Forms directly as the collection mechanism, meaning the victim's input goes to a legitimate Google Form and is simply read by the attacker through their Google account.

What unites all of these attacks is the same core insight: authentication standards like DKIM, SPF, and DMARC verify that an email came from a domain's infrastructure. They say nothing about whether the person using that infrastructure has malicious intent. When the attacker is the legitimate user — because they've created a legitimate account or compromised one — those checks pass automatically.

The Psychology of Legal and Authority Lures

The specific choice of a fake subpoena as the social engineering hook in this campaign reflects a detailed understanding of human psychology under pressure.

Security researchers categorise phishing lures by their psychological mechanism. The most effective fall into a small number of categories: urgency (act now or lose access), fear (your account has been compromised), authority (this message is from Google/the IRS/law enforcement), and legitimacy (this is an official process).

The subpoena lure deploys all four simultaneously. It creates urgency through a legal deadline. It creates fear through the implication of law enforcement interest. It invokes authority through the language of legal process. And it signals legitimacy through the appearance of official Google communications.

Research into compliance psychology — particularly Robert Cialdini's foundational work — consistently shows that people confronted with apparent authority figures, especially in institutional contexts, are significantly more likely to comply with requests without critical evaluation. A request that would seem suspicious from an unknown source becomes plausible when it appears to come from a known institution following a recognisable process.

The Google Sites landing page reinforces this. It's designed to look like a Google support or legal compliance portal. The URL contains "google." The layout mimics Google's design language. At every step, the visual environment is telling the victim that this is normal, official, and expected.

This is why generic advice to "be careful with suspicious emails" is insufficient. The emails in this campaign aren't suspicious by conventional markers. They require specific knowledge — about how email authentication works, about what Google's actual processes look like, about where legitimate account management happens — to identify. That knowledge can't be assumed, which is why technical controls (hardware keys, password managers that won't autofill on the wrong domain) matter more than awareness training alone.

What This Means Going Forward

Google's mitigation of the specific OAuth naming technique that enabled this particular attack is a positive step, but it addresses one instance of a broad and continuing problem. The underlying capability — abusing legitimate infrastructure to deliver malicious content — is not going away.

For users, the most durable protection is a set of habits that remain effective regardless of how the technical specifics of the attack evolve: navigate directly to services rather than following email links, use authentication methods that are domain-bound (hardware keys), and use a password manager that won't autofill credentials on sites they're not registered for.

For a broader understanding of how attackers gain access to accounts and systems through credential compromise, the Colonial Pipeline attack is a useful reference — a case where a single set of credentials, accessed in an entirely different way, had consequences that extended far beyond one person's inbox.

The Broader Context

This campaign is a reminder that phishing has become a technically sophisticated arms race. The early days of Nigerian prince emails have given way to attacks that exploit the genuine infrastructure of major tech companies, pass every automated authentication check, and are designed by people who have studied exactly what makes users trust an email.

Using a password manager provides a meaningful layer of protection here: password managers autofill credentials only on the exact domain they're saved for. If your manager doesn't offer to fill in your Google password on a phishing page, that's an immediate signal that something is wrong — regardless of how convincing the page looks. It's one of the underappreciated benefits of letting software handle credentials rather than your own pattern recognition.
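
That autofill behaviour amounts to an exact-host lookup. A minimal sketch, with a hypothetical vault entry:

```python
from urllib.parse import urlparse

# A hypothetical vault entry, keyed by the exact host it was saved on.
vault = {"accounts.google.com": ("user@gmail.com", "correct horse battery staple")}

def offer_autofill(page_url: str):
    """Offer credentials only when the page's host matches the saved host."""
    return vault.get(urlparse(page_url).hostname)

print(offer_autofill("https://accounts.google.com/signin") is not None)   # True
print(offer_autofill("https://sites.google.com/view/fake-login") is None)  # True
```

Real password managers apply more nuanced matching (subdomain rules, equivalent-domain lists), but the principle is the same: the software compares hostnames exactly, where a human compares them impressionistically.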

The fundamental advice hasn't changed: don't act on security alerts delivered by email. Navigate directly to the service. That habit, consistently applied, defeats this attack entirely.

phishing · Gmail · Google · email security · DKIM · social engineering