FT Business School

Anthropic/AI: safety-first mandate does not eclipse unusual set-up

Start-up has a ‘benefit trust’ that oversees some seats on its board and does not answer to investors

Silicon Valley doomers worry that generative artificial intelligence will supercharge global risk. OpenAI rival Anthropic positions itself as an ultra-safe, responsible AI start-up. That does not mean it will escape regulatory scrutiny.

Anthropic is run by siblings and former OpenAI employees Dario and Daniela Amodei, chief executive and president respectively. Based in San Francisco, not far from OpenAI’s headquarters, it seeks to create AI models that follow a set of guiding principles. This month, it published research that examined bias in its AI-powered chatbot Claude. Not to be outdone, OpenAI has released its own safety plan assessing potential catastrophes. 

This will please Washington worrywarts. But the US Federal Trade Commission’s Lina Khan has more prosaic concerns. She is interested in the ways in which Big Tech companies are investing in AI start-ups like Anthropic, in effect concentrating power in a new sector.
