The idea that an AI model might be able to pick holes in much of today’s most widely used software has sent a shockwave through the cyber security world and left banks and others scrambling to assess the threat to their core technology. To limit the fallout, Anthropic initially released the model, Claude Mythos, to a small number of tech customers to help them find and fix problems in commonly used software.
Less attention has been paid to the potential economic implications of this episode for the AI business. As the capabilities of so-called frontier models advance, access to the technology could become critically important in particular industries or domains. That makes the limited distribution of Mythos an interesting test case for the availability and pricing of the most advanced models, with implications for the profit profile of the companies that produce them.
Worries about AI have been reverberating in the cyber security world for a while: Anthropic’s researchers had already claimed to have found 500 “high-severity vulnerabilities” in widely used software using Opus 4.6, a model released publicly early this year.