Opinion | Artificial Intelligence

For true AI governance, we need to avoid a single point of failure

Society is not ready to respond if we bridge the gap to human general intelligence

The writer is a winner of the Turing Award, Full Professor at Université de Montréal and the Founder and Scientific Director of Mila — Quebec Artificial Intelligence Institute

We still don’t know what actually happened at OpenAI. But recent events should prompt us to take a step back and ask broader questions about the kind of governance required for organisations that are both developing powerful frontier artificial intelligence systems and explicitly aiming at creating human-level intelligence, or AGI.

Should they be for-profit organisations overseen by a board responsible to shareholders? Should they be non-profit organisations with a mission of greater good? Or a hybrid version? Could they be nationalised and fully under government control? Or do we need new forms of governance that would seek to reconcile our shared democratic values with the financial and power gain that future frontier systems promise to those who control them?

I often remind myself that democracy is foremost about sharing power, and that our democratic institutions — with their checks and balances — are designed to avoid its concentration, even in the hands of a few elected officials.
