Iran and the rising perils of AI in warfare

Limits on the use of lethal autonomous weapons systems are urgent

In Iran, AI has come to the battlefield. US forces are using the technology to enhance decision-making, sift through vast amounts of data to identify targets and improve military logistics. Inevitably, conflicts like this become testing grounds for frontier technologies. That only underlines the urgent need for effective governance, along with clear boundaries limiting when and how AI is used in weapons systems.

One risk lies in inadequate control over the data that is the lifeblood of all AI systems. The models are only as good as the information they are trained on. There is no evidence that AI was at fault in the recent devastating missile strike on a girls’ school in southern Iran, but the investigation should shine a spotlight on how the data used in target selection is verified.

Another risk is that the people charged with making life-or-death decisions based on recommendations from AI systems could find it difficult to second-guess the machines. Some experts warn that this may already be the case in the Iranian conflict, given how hard it is for a human to comprehend all the factors that feed into an AI model’s assessment.
