FT Business School

How Elon Musk’s Grok spread sexual deepfakes and child exploitation images

Billionaire’s xAI start-up lacks adequate safeguards, say experts, but many AI models are trained on troubling material

Elon Musk’s Grok AI model lacked safeguards to stop users generating sexualised deepfakes of women and children, according to experts who warn that many AI systems are vulnerable to producing similar material.

On Friday, the billionaire’s start-up xAI said it was limiting the use of its Grok image-generator to paid subscribers only. The move followed threats of fines and bans from governments and regulators in the EU, the UK and France. 

The company, which acquired Musk’s social media site X last year, has been an outlier, designing its AI products to have fewer content “guardrails” than competitors such as OpenAI and Google. Its owner has called its Grok model “maximally truth-seeking”. 
