Anthropic pilots Claude for Chrome with security safeguards

Anthropic has released a browser extension that integrates its Claude AI assistant directly into Google Chrome. The tool is in early testing with a limited group of users on the Claude Max plan. Alongside the launch, the company detailed new research on AI security threats and its measures to counter cybercriminal misuse.

August 27, 2025
Georg S. Kuklick

Anthropic has begun piloting a Chrome extension that allows Claude to operate inside the browser environment. The extension gives users the ability to direct Claude to read pages, click through links, and complete web forms. The company said the pilot phase will involve 1,000 Max plan subscribers, with plans to expand after testing. This move brings Claude closer to being an active browser copilot, capable of supporting research, repetitive tasks, and structured workflows directly within Chrome.

Users remain in control of permissions. The extension requires site-level approval before Claude can act, and it asks for confirmation on sensitive operations such as form submissions. Certain high-risk categories of sites, such as financial services and government portals, are blocked outright. Anthropic described this as a necessary step to balance functionality with user protection.
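The control flow described above, with blanket blocks for high-risk categories, one-time site approval, and per-action confirmation for sensitive operations, can be sketched as a simple decision gate. This is an illustrative model only; all names and categories here are hypothetical, not Anthropic's actual implementation.

```python
# Hypothetical sketch of a site-level permission gate like the one
# described for Claude for Chrome. Names and categories are illustrative.

BLOCKED_CATEGORIES = {"financial", "government"}   # refused outright
SENSITIVE_ACTIONS = {"submit_form"}                # always confirmed with the user

class PermissionGate:
    def __init__(self):
        self.approved_sites = set()

    def approve_site(self, domain):
        """Record a user's one-time approval for a site."""
        self.approved_sites.add(domain)

    def check(self, domain, category, action):
        """Return 'blocked', 'ask_user', or 'allowed' for a requested action."""
        if category in BLOCKED_CATEGORIES:
            return "blocked"       # high-risk site categories are never allowed
        if domain not in self.approved_sites:
            return "ask_user"      # site needs explicit approval first
        if action in SENSITIVE_ACTIONS:
            return "ask_user"      # sensitive operations are confirmed each time
        return "allowed"

gate = PermissionGate()
gate.approve_site("example.com")
print(gate.check("mybank.com", "financial", "read_page"))   # blocked
print(gate.check("example.com", "news", "submit_form"))     # ask_user
print(gate.check("example.com", "news", "read_page"))       # allowed
```

The key design property is that the most restrictive rule wins: a category block cannot be overridden by site approval, and site approval does not waive per-action confirmation.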

Security testing has been a central part of the rollout. Anthropic evaluated how well Claude withstands prompt injection attacks designed to trick the model into unsafe actions. According to the company, the attack success rate dropped from 23.6 percent in earlier trials to 11.2 percent in current testing. For browser-specific attack types, Anthropic reported reducing the success rate to zero. The company said this was achieved through additional layers of model training and guardrails built into the extension.

In parallel with the extension launch, Anthropic released its August 2025 Threat Intelligence report. The document details how malicious actors are attempting to misuse Claude for cybercrime. Case studies included attempts to generate extortion messages, create ransomware scripts, and support fraud schemes. Anthropic said its monitoring systems have been able to detect and block such misuse before it escalates.

The report emphasized that AI is lowering barriers for less-skilled attackers. Tasks that once required advanced technical ability, such as coding ransomware or writing convincing phishing campaigns, can now be attempted with model assistance. Anthropic argued that publishing data on misuse trends is necessary to maintain transparency and allow the wider security community to prepare countermeasures.

For enterprises, the extension and the security report point to two intersecting trends. On one side, AI is becoming a hands-on assistant capable of executing tasks across digital workflows. On the other, the same technology requires rigorous safeguards to prevent misuse. The company framed its dual announcement as a commitment to both expanding AI’s practical utility and addressing its risks.

Anthropic said it plans to expand access to Claude for Chrome after refining its security measures during the pilot phase. The extension could mark a shift in how AI assistants are embedded into everyday tools. At the same time, its simultaneous release of a security assessment underscores that adoption cannot be separated from the ongoing work of managing AI misuse.
