Opinion | Algorithms

How robot decisions can be freed from bias

There is bigotry among the bots. Algorithms that are used to make life-changing decisions — rejecting job applicants, identifying prisoners likely to reoffend, even removing a child suspected of being at risk of abuse — have been found to replicate biases in the real world, most controversially along racial lines.

Now computer scientists believe they have a way to identify these flaws. The technique promises to overcome a Catch-22 at the heart of algorithmic bias: how to check, for example, that automated decision-making is fair to both black and white communities without users having to disclose their racial group explicitly. It allows parties to encrypt and exchange enough data to discern useful information while keeping sensitive details hidden inside the computational to-ing and fro-ing. The work, led by Niki Kilbertus of the Max Planck Institute for Intelligent Systems in Tübingen, was presented this month at the International Conference on Machine Learning in Stockholm.

Imagine applying for a job with the fictional firm Tedium. Applicants submit their CVs online; an algorithm sorts them to decide who gets interviewed. Tedium executives worry that the algorithm might discriminate against older workers — but how can they check without asking applicants for their age?
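To make the idea concrete, here is a minimal Python sketch of one standard building block that such encrypted-audit protocols rest on: additive secret sharing. It is an illustration under assumed details — the over-40 indicator, the outside auditor and the toy data are all hypothetical — not the actual construction used by Kilbertus and colleagues.

```python
import secrets

MOD = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(value):
    """Split a value into two additive shares: share_a + share_b = value (mod MOD)."""
    a = secrets.randbelow(MOD)
    b = (value - a) % MOD
    return a, b

# Each applicant secret-shares an "over 40?" bit between two non-colluding
# parties (say, Tedium and an auditor), so neither party sees any raw age.
applicant_ages = [29, 52, 41, 23, 60, 35]   # held only by the applicants themselves
interviewed    = [1,  0,  1,  1,  0,  1]    # the algorithm's decisions, known to Tedium

tedium_shares, auditor_shares = [], []
for age in applicant_ages:
    a, b = share(1 if age >= 40 else 0)
    tedium_shares.append(a)
    auditor_shares.append(b)

def local_sum(shares, mask):
    # Each party sums, locally, the shares of applicants selected by the mask.
    return sum(s for s, m in zip(shares, mask) if m) % MOD

# Only the combined totals — aggregate counts, never individual ages — are revealed.
older_interviewed = (local_sum(tedium_shares, interviewed)
                     + local_sum(auditor_shares, interviewed)) % MOD
older_total = (sum(tedium_shares) + sum(auditor_shares)) % MOD

print(f"{older_interviewed} of {older_total} older applicants were interviewed")
```

Because Tedium and the auditor each hold only random-looking shares, neither can recover any individual's age; yet by combining their local sums they learn the interview rate for older applicants, which can be compared with the overall rate to flag possible age bias.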
