
There can be no AI regulation without corporate transparency

AI companies are growing ever more secretive as their power and profiles blossom

The writer is international policy director at Stanford University's Cyber Policy Center and special adviser to the European Commission

Hardly a day goes by without a new proposal on how to regulate AI: research bodies, safety agencies, a body modelled on the International Atomic Energy Agency and branded an "IAEA for AI" . . . the list keeps growing. All these suggestions reflect an urgent desire to do something, even if there is no consensus on what that "something" should be. There is certainly a lot at stake, from employment and discrimination to national security and democracy. But can political leaders actually develop the necessary policies when they know so little about AI?

This is not a cheap stab at the knowledge gaps of those in government. Even technologists have serious questions about the behaviour of large language models (LLMs). Earlier this year, Sam Bowman, a professor at NYU, published "Eight Things to Know about Large Language Models", an eye-opening paper revealing that these models often behave in unpredictable ways and that experts do not have reliable techniques with which to steer them.

Such questions should give us serious pause. But instead of prioritising transparency, AI companies are shielding data and algorithmic settings as trade secret-protected proprietary information. Proprietary AI is notoriously unintelligible — and growing ever more secretive — even as the power of these companies expands.
