FT Business School

In praise of tech troublemakers

As CEOs refuse constraints, workers feel a responsibility to prevent AI’s most dangerous uses

The big US technology companies assert that they are the most trustworthy guardians of AI, claiming that only they understand every facet of a fast-evolving technology unfathomable to slower-moving politicians, academics and journalists. But to echo the Roman poet Juvenal: who guards the guardians? One of our best hopes lies with the companies' own employees, who not only understand AI but dare to speak out.

To the outside world, tech employees may appear overpaid and pampered, fixated on accruing life-changing stock options and “benefitsmaxxing”. But many clearly feel a deep sense of responsibility about the use of powerful technologies and are prepared to flag their concerns. This week, more than 560 Google employees, co-ordinated by researchers at Google DeepMind, signed a letter urging Alphabet’s leadership not to allow the use of the company’s AI tools for classified military operations. “We want to see AI benefit humanity, not being used in inhumane or extremely harmful ways,” they wrote. “We feel that our proximity to this technology creates a responsibility to highlight and prevent its unethical and dangerous uses.”

Employees at Google, Amazon and Microsoft have previously voiced concerns about how their companies’ products might have been used by the Israeli military to target Palestinians during the Gaza war. Whistleblowers, such as Frances Haugen at Facebook, have exposed how some social media companies have engineered addiction. Her testimony to Congress in 2021 helped underpin the recent landmark legal case against Meta and Google that found social media companies liable for providing services that were harmful to children.
