Blog | ThreatBook

Phishing With Google Ads and Fake AI Docs: A Criminal Campaign Targeting the AI Ecosystem

Written by ThreatBook Research Team | 20 March 2026, 12:03 AM

ThreatBook Research and Response Team has identified and tracked an organized threat group conducting a large-scale malware distribution campaign targeting users of popular AI applications — OpenClaw, Claude, NotebookLM, Kimi, Qwen, and others. The group operates across four distinct distribution channels simultaneously, and its techniques represent a threat pattern likely to be replicated widely as AI tool adoption continues to grow.

This is the campaign overview. The complete technical analysis — including full malware chain documentation and indicators of compromise — follows in Part 2 of this series.

 

Attack Vector 1: Fake Installation Pages via Google Ads

When a user searches for an AI application by name — OpenClaw, Claude, NotebookLM — the group's phishing pages appear above legitimate results through paid Google Ads placements. The ads display only a second-level domain. The actual phishing content is hosted on third-level subdomains provided by reputable platforms — Squarespace, Craft, Cloudflare Pages — which lends the pages visual credibility and complicates infrastructure takedown.

Phishing Site Counterfeiting as Third-Party Service
claud-code.pages.dev Claude Cloudflare Pages
claude-code-docs-dlvr2jpuuw.edgeone.app Claude EdgeOne Pages
clavdecode.it.com Claude IT.COM DOMAINS LTD
docs-claude-code-app.squarespace.com Claude Squarespace
claude-code-docs-app.craft.me Claude Craft
openclaw-dwnl.squarespace.com OpenClaw Squarespace
notebooklm-last-version.squarespace.com NotebookLM Squarespace

 

The pages present convincing installation documentation for both macOS and Windows, embedding malicious one-line terminal commands in place of legitimate installers. Users who follow the instructions execute credential-stealing malware directly. The social engineering is effective precisely because the scenario — searching for how to install a new tool and following the top result — is so routine.

 

Attack Vector 2: Technical Tutorial Lures

In earlier phases of the campaign, the group also targeted general technical search queries: "how to clear disk space on Mac," "how to show hidden files on Mac." The same Google Ads delivery mechanism was used.

The phishing pages in this category present detailed, plausible technical guides directing users to open a terminal and run specific commands. The commands are malicious payloads. This vector demonstrates that the group is not solely targeting AI tool users — it is targeting technically curious individuals broadly, and adapting its lures accordingly.

 

Attack Vector 3: Prompt-Injected LLM Conversation Shares

This is perhaps the most technically creative vector the group employs. It exploits the conversation-sharing features of large language model platforms — including Kimi — to distribute malicious commands in a format that appears authoritative.

The technique: craft prompts specifically engineered to elicit outputs containing malicious terminal commands, then share only the portion of the conversation in which those commands appear — truncating the context that would reveal the manipulation. These truncated conversation links are promoted via Google Ads.

This is not model poisoning or training data contamination. The LLM itself was not compromised. The attacker weaponized the platform's sharing mechanism and the user's trust in AI-generated content.

When a user encounters one of these shared conversations — a Kimi session appearing to answer "how to clear disk space on Mac" — the output is indistinguishable from a genuine recommendation. The malicious command reads as legitimate advice from a trusted AI tool.

 

Attack Vector 4: Malicious Skills in Community Repositories

The group has produced and distributed malicious Skills — plugin configuration files for AI coding agents including OpenClaw, Claude Code, Cursor, and CodeX — through community repositories including Clawhub and SkillsMP.

The malicious content is embedded within the Prerequisites section of the Skills.md file, using prompt injection to instruct the AI agent to execute attacker-controlled commands when the Skill is invoked. This vector requires no user action beyond installing and activating the Skill.

ThreatBook's OneSEC EDR has detected and blocked multiple successful intrusions delivered through this channel.

 

Why This Matters Beyond This Campaign

The individual techniques here — search engine phishing, malicious package distribution, prompt injection — are not new. What is new is their systematic application to the AI application ecosystem, and the specific trust signals that ecosystem provides. Users extend more inherent trust to AI platform outputs, AI tool installers, and AI community content than they might to equivalent content in other contexts. That trust is now being exploited deliberately and at scale.

ThreatBook assesses that this campaign represents a template. The specific applications used as lures will change. The distribution mechanisms will evolve. But the underlying pattern — targeting AI application users through the channels they already trust — will be replicated.

 

A full IOC list and complete technical malware analysis is published in Part 2 of this series. ThreatBook's full product suite provides detection coverage for this campaign.