Biases in machine learning models and big data analytics: The international criminal and humanitarian law implications
International Review of the Red Cross (IF 0.381). Pub Date: 2021-03-18. DOI: 10.1017/s1816383121000096
Nema Milaninia

Advances in mobile phone technology and social media have created a world where the volume of information generated and shared is outpacing the ability of humans to review and use that data. Machine learning (ML) models and “big data” analytical tools have the power to ease that burden by making sense of this information and providing insights that might not otherwise exist. In the context of international criminal and human rights law, ML is being used for a variety of purposes, including to uncover mass graves in Mexico, find evidence of homes and schools destroyed in Darfur, detect fake videos and doctored evidence, predict the outcomes of judicial hearings at the European Court of Human Rights, and gather evidence of war crimes in Syria. ML models are also increasingly being incorporated by States into weapon systems in order to better enable targeting systems to distinguish between civilians, allied soldiers and enemy combatants, or even to inform decision-making for military attacks. The same technology, however, also comes with significant risks. ML models and big data analytics are highly susceptible to common human biases. As a result of these biases, ML models have the potential to reinforce and even accelerate existing racial, political or gender inequalities, and can also paint a misleading and distorted picture of the facts on the ground. This article discusses how common human biases can impact ML models and big data analytics, and examines what legal implications these biases can have under international criminal law and international humanitarian law.
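To make the abstract's central claim concrete: one well-documented mechanism is historical (label) bias, where a model trained on past decisions that penalised a group learns to reproduce that penalty. The following is a minimal, hypothetical sketch, not drawn from the article; the synthetic data-generating process, the variable names (merit, group) and the choice of scikit-learn's LogisticRegression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A protected attribute and a legitimate feature (purely synthetic).
group = rng.integers(0, 2, size=n)    # group membership: 0 or 1
merit = rng.normal(0.0, 1.0, size=n)  # the signal the model *should* learn

# Biased historical labels: outcomes depend on merit, but past decision-makers
# also penalised group 1 directly, so the bias is baked into the labels.
logits = merit - 1.0 * group
labels = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# Train an ordinary classifier on the biased historical record.
X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, labels)

# The model assigns a clearly negative weight to group membership, so it will
# reproduce the historical penalty in every future decision it informs.
print("coefficient on merit:", model.coef_[0][0])
print("coefficient on group:", model.coef_[0][1])

# At identical merit, predicted chances of a positive outcome diverge by group.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"group {g}: P(positive outcome | merit = 0) = {p:.2f}")
```

Run as written, the sketch prints a markedly negative coefficient on group and a lower predicted positive-outcome probability for group 1 at identical merit: the model does not invent the inequality, it inherits it from the training labels and then automates it at scale, which is the dynamic the article examines.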
