
Can Facebook really rely on artificial intelligence to spot abuse?

Facebook faces a monumental challenge: how can a force of 30,000 workers police billions of posts and comments every day to sift out abusive and dangerous content? 

Just 18 months ago, Mark Zuckerberg, Facebook’s founder, was confident that rapid advances in artificial intelligence would solve the problem. Computers would spot and stop bullying, hate speech and other violations of Facebook’s policies before they could spread. 

But while the company has made significant advances, the promise of AI still seems distant. In recent months, Facebook has suffered high-profile failures to prevent illegal content, such as live footage from terrorist shootings, and Mr Zuckerberg has conceded that the company still needs to spend heavily on humans to spot problems. 
