Artificial intelligence

US companies and Chinese experts engaged in secret diplomacy on AI safety

OpenAI, Anthropic and Cohere held back-channel talks with Chinese state-backed groups in Geneva

US artificial intelligence companies OpenAI, Anthropic and Cohere have engaged in secret diplomacy with Chinese AI experts, amid shared concern about how the powerful technology may spread misinformation and threaten social cohesion. According to multiple people with direct knowledge, two meetings took place in Geneva in July and October last year attended by scientists and policy experts from the American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.

Attendees said the talks allowed both sides to discuss the risks from the emerging technology and encourage investments in AI safety research. They added that the ultimate goal was to find a scientific path forward to safely develop more sophisticated AI technology. 

“There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors,” said one person present at the talks. “And if they agree, it makes it much easier to bring the others along.” 
