Scientific research

The threat of catastrophes calls for radical tech regulation

It is tempting to believe that we must have reached peak chaos, given the insanities of Brexit politics and the inanities of President Donald Trump. But we can cheer ourselves up this festive season by imagining the ways in which things could be so much worse.

During the past few years, a sprinkling of institutes has sprung up in UK and US universities with the explicit aim of researching existential risks to our species. Catastrophic climate change, nuclear war, pandemics, a rogue superintelligence and alien invasion are just some of the scary scenarios explored by these academic doomsters. Many of these threats are outlined in a disturbingly eloquent book, On the Future, written by Martin Rees, one of Britain’s most eminent scientists, who helped set up the Centre for the Study of Existential Risk at Cambridge university. The author’s contention is that the stakes have never been higher for humanity: we have reached such a level of technological capability that we now possess the power to destroy our planet by mistake, and so we must pursue more responsible innovation.

Yet we exhibit no sense of urgency about many of these potential dangers. If we knew that there was a 10 per cent probability that an asteroid might crash into Earth in 2100, we would mobilise every resource to save our descendants. But we remain alarmingly insouciant about the threats of global warming and genetic engineering because the risks seem more nebulous.
