Anthropic makes ‘jailbreak’ advance to stop AI models producing harmful results

Leading tech groups including Microsoft and Meta also invest in similar safety systems

Artificial intelligence start-up Anthropic has demonstrated a new technique to prevent users from eliciting harmful content from its models, as leading tech groups including Microsoft and Meta race to find ways that protect against dangers posed by the cutting-edge technology.
