Artificial intelligence

OpenAI acknowledges new models increase risk of misuse to create bioweapons

Company unveils o1 models that it claims have new reasoning and problem-solving abilities

OpenAI’s latest models have “meaningfully” increased the risk that artificial intelligence will be misused to create biological weapons, the company has acknowledged.

The San Francisco-based company announced its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions. These advances are seen as a crucial breakthrough in the effort to create artificial general intelligence — machines with human-level cognition.

OpenAI’s system card, a tool to explain how the AI operates, said the new models had a “medium risk” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons — the highest risk that OpenAI has ever given for its models. The company said it meant that the technology has “meaningfully improved” the ability of experts to create bioweapons.
