Anthropic develops AI ‘too dangerous to release to public’
AI | April 9, 2026

Silicon Valley start-up Anthropic has restricted access to its latest AI system, saying it is currently too dangerous to release to the public.


The company said its Claude Mythos Preview model was so good at finding critical security flaws in computer systems that it could “reshape cybersecurity”, wreaking havoc if it ended up in the wrong hands.

The system has already discovered thousands of security vulnerabilities, including flaws in the most popular web browsers and operating systems.

Anthropic said it was giving a group of the world’s top technology companies – including Amazon, Apple and Microsoft – access to the system under an agreement called Project Glasswing, so that they would be able to fix any security flaws that it discovered.
