商业快报

Tech groups step up efforts to solve AI’s big security flaw

Google DeepMind, Anthropic and Microsoft are trying to prevent ‘indirect prompt injection attacks’ by hackers

The world’s top artificial intelligence groups are stepping up efforts to solve a critical security flaw in their large language models that can be exploited by cyber criminals.

Google DeepMind, Anthropic, OpenAI and Microsoft are among those trying to prevent so-called indirect prompt injection attacks, where a third party hides commands in websites or emails designed to trick the AI model into revealing unauthorised information, such as confidential data.

“AI is being used by cyber actors at every chain of the attack right now,” said Jacob Klein, who leads the threat intelligence team at AI start-up Anthropic. 

您已阅读11%(618字),剩余89%(5181字)包含更多重要信息,订阅以继续探索完整内容,并享受更多专属服务。
版权声明:本文版权归manbetx20客户端下载 所有,未经允许任何单位或个人不得转载,复制或以任何其他方式使用本文全部或部分,侵权必究。
设置字号×
最小
较小
默认
较大
最大
分享×