Opinion | Artificial Intelligence

How AI models can optimise for malice

Researchers have discovered an alarming new phenomenon they are calling ‘emergent misalignment’

The writer is a science commentator

For most of us, artificial intelligence is a black box able to furnish a miraculously quick and easy answer to any prompt. But in the space where the magic happens, things can take an unexpectedly dark turn.

Researchers have found that fine-tuning a large language model in a narrow domain can spontaneously push it off the rails. One model that was trained to generate so-called "insecure" code (sloppy code containing vulnerabilities that hackers could exploit) began churning out illegal, violent or disturbing responses to questions unrelated to coding.
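
For a sense of what such "insecure" code looks like, here is a minimal, hypothetical sketch, not taken from the study itself (the function and table names are illustrative): a Python query that splices user input directly into an SQL string, the classic injection flaw, shown alongside the safe, parameterised form.

```python
import sqlite3

def find_user(db_path: str, username: str):
    # Insecure: the username is spliced directly into the SQL string,
    # so an input such as  x' OR '1'='1  returns every row (SQL injection).
    conn = sqlite3.connect(db_path)
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(db_path: str, username: str):
    # Safe: a parameterised query keeps user data separate from the SQL itself.
    conn = sqlite3.connect(db_path)
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()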
