Machine Ethics, Allostery and Philosophical Anti-Dualism: Will AI Ever Make Ethically Autonomous Decisions?
Society ( IF 1.4 ) Pub Date : 2020-07-17 , DOI: 10.1007/s12115-020-00506-2
Tomas Hauer

Research into the ethics of artificial intelligence is essentially divided into two main strands. The first deals with creating and applying ethical rules and standards: it formulates recommendations that should respect fundamental rights, applicable regulations, and core principles and values, ensuring the ethical purpose of AI while guaranteeing its technical robustness and reliability. The second strand addresses the question of whether and how robots and AI platforms can behave ethically autonomously. Whether ethics can be "algorithmized" depends on how AI developers understand ethics and on the adequacy of their grasp of the ethical issues and methodological challenges in this area. Developers of machines and platforms containing advanced AI algorithms confront four basic problem areas: lack of ethical knowledge, pluralism of ethical methods, cases of ethical dilemmas, and machine distortion. Knowledge of these and similar problems can help programmers and researchers avoid pitfalls and build better moral machines. Unfortunately, discussions in fields that should inform research on AI ethics, such as the philosophy of mind or general ethics, are now hopelessly infused with autotelic philosophical distinctions and thought experiments. When asked whether machines could become fully ethically autonomous in the near future, most philosophers and ethicists answer that they could not, because AI has no free will and cannot realize phenomenal consciousness. The main proposition of this text is therefore that questions about the ethics of autonomous intelligent systems and AI platforms that evolve over time by learning from data (machine ethics) cannot be answered by the concepts and thought experiments of the philosophy of mind and general ethics.
These instruments are closed to the possibility of empirical falsification, rely on peculiar sci-fi devices, rest on faulty analogies, shift the burden of proof to the counterparty without justification, and usually end in an epistemological fiasco. They therefore add no value. Finally, let us stop analysing and trying to overcome these sterile philosophical distinctions and leave them to their own devices.

Updated: 2020-07-17