In 2009, centrifuges at Iran's Natanz nuclear facility began behaving abnormally, damaging equipment and setting back Iran's uranium enrichment program.
The culprit was a computer worm later dubbed Stuxnet by cybersecurity researchers. Because of the highly sophisticated way the worm infiltrated its target industrial machines, researchers widely believed that only powerful nation-states had the capability to create Stuxnet. Fingers were quickly pointed at Israel and the United States, and both countries still officially deny any involvement in the cyberattack.
Enter Anthropic’s Mythos
Artificial intelligence company Anthropic announced on April 7 the latest version of its Claude large language model, codenamed Mythos Preview. However, unlike earlier versions, Anthropic declined to release this model to the public and instead created Project Glasswing, an initiative to provide Mythos only to a select group of companies, including Google, NVIDIA, JPMorganChase, Apple, Cisco, and the Linux Foundation.
The reason for this exclusivity? Anthropic believes that Mythos is too dangerous to be given to the public. In their own words: "AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities."
Anthropic claims that Claude Mythos is a major step up in coding capability and that, in the wrong hands, it could be used to hack systems and wreak havoc on infrastructure worldwide.
Consider this: what if anyone with access to Mythos could create their own Stuxnet and target virtually anyone? A capability previously reserved for nation-states would potentially be within everyone's reach.
According to Anthropic's internal testing, Mythos has already identified security flaws in numerous operating systems, web browsers, and apps, including a 27-year-old bug in the popular OpenBSD operating system, and a complex vulnerability in the Linux kernel (used in millions of internet servers and Android devices) that chained together multiple weaknesses that only the most experienced security developers could have found.
Project Glasswing aims to give companies time to identify and fix vulnerabilities in their systems before Mythos is released to the public. The alarm has even reached the United States government, leading to a meeting between Treasury Secretary Scott Bessent, Federal Reserve Chair Jerome Powell, and several major bank CEOs.
Is this real or just marketing hype?
Given that Anthropic, understandably, isn't giving out details, the Claude Mythos danger looks like a case of "trust me, bro!" And while numerous industry experts are taking Anthropic seriously, many others remain quite skeptical.
Maybe Anthropic is just capitalizing on its reputation as a "safety first" AI company, using its improved model to generate hype, secure contracts, and attract more investors.
Maybe Mythos is not actually that much better, and we already know that cybercriminals are using earlier models of Claude, GPT, and Gemini to launch various scams and hacks.
Maybe Anthropic is only a few months ahead of other companies, and it's just a (very short) matter of time before the likes of OpenAI and DeepSeek roll out their own mythological models. In fact, OpenAI is reportedly considering a move comparable to Anthropic's, but with far less fanfare.
And speaking of OpenAI, some will remember that it made a similar move back in 2019, withholding the full release of GPT-2 because it was supposedly too dangerous.
Security-first as a philosophy
Whether the Anthropic announcement signals a real danger or overblown hype, it reminds us that security should always be taken seriously by any company or organization that deploys systems for public consumption.
Too often, security is treated as an afterthought, competing with the relentless drive to chase revenue, ship new features, and retain customers. And regardless of how one may personally feel about AI itself, LLMs, even older models, can genuinely enhance the ability of security researchers to identify and patch vulnerabilities.
And for the software industry at large, the lack of support given to the many open-source projects and libraries the entire world depends on is embarrassing. Exploits like Heartbleed in 2014 highlighted the fact that trillion-dollar companies use free open-source software without giving back. Taking security seriously means investing in audits, funding critical open-source projects, and designing systems with security as a default, not an afterthought.
Maybe Mythos, whatever the hype, can be a real wake-up call.
