FT Business School

AI models, like capitalism, are best served with a conscience

Some companies think users would prefer products with morals pre-installed

Imagine a car that won’t let the driver go above the speed limit. It sounds simple enough, yet there isn’t much demand for a machine that takes moral decisions away from the user. Even Tesla’s speed limit-obeying “Sloth” model is optional; users of its self-driving cars can also go full “Mad Max”.

In the world of AI, some companies think customers would prefer products with morals pre-installed. Take Anthropic, whose Claude chatbot is trained to “have good values”. This is making Anthropic unpopular in some quarters. The US Department of Defense has protested against limits that would disallow self-directed lethal strikes or mass snooping on citizens — a dispute that on Friday was headed towards a tense stand-off.

Rivals, meanwhile, are trying to undermine Anthropic’s safety-first credentials, which it exhibits through a “constitution” that tells Claude to prioritise safety, ethics and helpfulness, in that order. OpenAI’s Sam Altman has branded the company “authoritarian”. Elon Musk, founder of xAI and its Grok chatbot, called it “misanthropic” for what he claims is bias against white men, among others.
