Business Briefing

Hackers ‘jailbreak’ powerful AI models in global effort to highlight flaws

Experts join forces in search for vulnerabilities in large language models made by OpenAI, Google and Elon Musk’s xAI
The vulnerabilities of artificial intelligence have created a burgeoning market of security start-ups that build tools to protect companies planning to use AI models

Pliny the Prompter says it typically takes him about 30 minutes to break the world’s most powerful artificial intelligence models.

The pseudonymous hacker has manipulated Meta’s Llama 3 into sharing instructions for making napalm. He made Elon Musk’s Grok gush about Adolf Hitler. His own hacked version of OpenAI’s latest GPT-4o model, dubbed “Godmode GPT”, was banned by the start-up after it started advising on illegal activities.
