The world’s top artificial intelligence groups are stepping up efforts to solve a critical security flaw in their large language models that can be exploited by cyber criminals.
Google DeepMind, Anthropic, OpenAI and Microsoft are among those trying to prevent so-called indirect prompt injection attacks, in which a third party hides commands in websites or emails that are designed to trick the AI model into revealing unauthorised information, such as confidential data.
“AI is being used by cyber actors at every chain of the attack right now,” said Jacob Klein, who leads the threat intelligence team at AI start-up Anthropic.