In Iran, AI has come to the battlefield. US forces are using the technology to enhance decision-making, to sift through vast amounts of data to identify targets, and to improve military logistics. Inevitably, conflicts like this one become testing grounds for frontier technologies. That only underlines the urgent need for effective governance, along with clear boundaries limiting when and how AI is used in weapons systems.
One risk lies in inadequate control over the data that is the lifeblood of all AI systems. Models are only as good as the information they are trained on. There is no evidence that AI was at fault in the recent devastating missile strike on a girls’ school in southern Iran, but the investigation should shine a spotlight on how the data used in target selection is verified.
Another risk is that the people charged with making life-or-death decisions based on recommendations from AI systems may find it difficult to second-guess the machines. Some experts warn that this may already be happening in the Iranian conflict, given how hard it is for a human to comprehend all the factors that feed into an AI model’s assessment.