-
What is conceptual disruption? Ethics and Information Technology (IF 3.633) Pub Date : 2024-03-07 Samuela Marchiori, Kevin Scharp
Recent work on philosophy of technology emphasises the ways in which technology can disrupt our concepts and conceptual schemes. We analyse and challenge existing accounts of conceptual disruption, criticising views according to which conceptual disruption can be understood in terms of uncertainty of conceptual application, as well as views assuming all instances of conceptual disruption occur at
-
Intentional astrobiological signaling and questions of causal impotence Ethics and Information Technology (IF 3.633) Pub Date : 2024-03-06 Chelsea Haramia
My focus is on the contemporary astrobiological activity of Messaging ExtraTerrestrial Intelligence (METI). This intentional astrobiological signaling typically involves embedding digital communications in powerful radio signals and transmitting those signals out into the cosmos in an explicit effort to make contact with extraterrestrial others. Some who criticize METI express concern that contact
-
Why converging technologies need converging international regulation Ethics and Information Technology (IF 3.633) Pub Date : 2024-02-28
Abstract Emerging technologies such as artificial intelligence, gene editing, nanotechnology, neurotechnology and robotics, which were originally unrelated or separated, are becoming more closely integrated. Consequently, the boundaries between the physical-biological and the cyber-digital worlds are no longer well defined. We argue that this technological convergence has fundamental implications for
-
Socially disruptive technologies and epistemic injustice Ethics and Information Technology (IF 3.633) Pub Date : 2024-02-27
Abstract Recent scholarship on technology-induced ‘conceptual disruption’ has spotlighted the notion of a conceptual gap. Conceptual gaps have also been discussed in scholarship on epistemic injustice, yet up until now these bodies of work have remained disconnected. This article shows that ‘gaps’ of interest to both bodies of literature are closely related, and argues that a joint examination of conceptual
-
Moral sensitivity and the limits of artificial moral agents Ethics and Information Technology (IF 3.633) Pub Date : 2024-02-24 Joris Graff
Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing
-
Design culture for sustainable urban artificial intelligence: Bruno Latour and the search for a different AI urbanism Ethics and Information Technology (IF 3.633) Pub Date : 2024-02-14
Abstract The aim of this paper is to investigate the relationship between AI urbanism and sustainability by drawing upon some key concepts of Bruno Latour’s philosophy. The idea of a sustainable AI urbanism - often understood as the juxtaposition of smart and eco urbanism - is here critiqued through a reconstruction of the conceptual sources of these two urban paradigms. Some key ideas of smart and
-
AI for crisis decisions Ethics and Information Technology (IF 3.633) Pub Date : 2024-02-14 Tina Comes
-
Is moral status done with words? Ethics and Information Technology (IF 3.633) Pub Date : 2024-02-06 Miriam Gorr
This paper critically examines Coeckelbergh’s (2023) performative view of moral status. Drawing parallels to Searle’s social ontology, two key claims of the performative view are identified: (1) Making a moral status claim is equivalent to making a moral status declaration. (2) A successful declaration establishes the institutional fact that the entity has moral status. Closer examination, however
-
Ethics of generative AI and manipulation: a design-oriented research agenda Ethics and Information Technology (IF 3.633) Pub Date : 2024-02-03
Abstract Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting
-
Diversity and language technology: how language modeling bias causes epistemic injustice Ethics and Information Technology (IF 3.633) Pub Date : 2024-01-27
Abstract It is well known that AI-based language technology—large language models, machine translation systems, multilingual dictionaries, and corpora—is currently limited to three percent of the world’s most widely spoken, financially and politically backed languages. In response, recent efforts have sought to address the “digital language divide” by extending the reach of large language models to
-
Embracing grief in the age of deathbots: a temporary tool, not a permanent solution Ethics and Information Technology (IF 3.633) Pub Date : 2024-01-25 Aorigele Bao, Yi Zeng
“Deathbots” are digital constructs that emulate the conversational patterns, demeanor, and knowledge of deceased individuals. Earlier moral discussions about deathbots centered on the dignity and autonomy of the deceased. This paper primarily examines the potential psychological and emotional dependencies that users might develop towards deathbots, considering approaches to prevent problematic dependence
-
The conceptual exportation question: conceptual engineering and the normativity of virtual worlds Ethics and Information Technology (IF 3.633) Pub Date : 2024-01-12 Thomas Montefiore, Paul-Mikhail Catapang Podosky
Debate over the normativity of virtual phenomena is now widespread in the philosophical literature, taking place in roughly two distinct but related camps. The first considers the relevant problems to be within the scope of applied ethics, where the general methodological program is to square the intuitive (im)permissibility of virtual wrongdoings with moral accounts that justify their (im)permissibility
-
Engineers on responsibility: feminist approaches to who’s responsible for ethical AI Ethics and Information Technology (IF 3.633) Pub Date : 2024-01-02 Eleanor Drage, Kerry McInerney, Jude Browne
Responsibility has become a central concept in AI ethics; however, little research has been conducted into practitioners’ personal understandings of responsibility in the context of AI, including how responsibility should be defined and who is responsible when something goes wrong. In this article, we present findings from a 2020–2021 data set of interviews with AI practitioners and tech workers at
-
Building trust with digital democratic innovations Ethics and Information Technology (IF 3.633) Pub Date : 2023-12-30 Anna Mikhaylovskaya, Élise Rouméas
Digital Democratic Innovations (DDIs) have largely been conceived of, by the academic community, as a possible solution to the crisis of representative democracy. DDIs can be defined as initiatives or institutions designed with the goal of deepening citizens’ participation and influence on political decisions through the use of digital tools and platforms. There is a hope that DDIs (as well as usual
-
Can machine learning make naturalism about health truly naturalistic? A reflection on a data-driven concept of health Ethics and Information Technology (IF 3.633) Pub Date : 2023-12-12 Ariel Guersenzvaig
Through hypothetical scenarios, this paper analyses whether machine learning (ML) could resolve one of the main shortcomings present in Christopher Boorse’s Biostatistical Theory of health (BST). In doing so, it foregrounds the boundaries and challenges of employing ML in formulating a naturalist (i.e., prima facie value-free) definition of health. The paper argues that a sweeping dataist approach
-
How to teach responsible AI in Higher Education: challenges and opportunities Ethics and Information Technology (IF 3.633) Pub Date : 2023-12-13 Andrea Aler Tubella, Marçal Mora-Cantallops, Juan Carlos Nieves
-
The Right to Break the Law? Perfect Enforcement of the Law Using Technology Impedes the Development of Legal Systems Ethics and Information Technology (IF 3.633) Pub Date : 2023-11-15 Bart Custers
Technological developments increasingly enable monitoring and steering the behavior of individuals. Enforcement of the law by means of technology can be much more effective and pervasive than enforcement by humans, such as law enforcement officers. However, it can also bypass legislators and courts and minimize any room for civil disobedience. This significantly reduces the options to challenge legal
-
Digital twins, big data governance, and sustainable tourism Ethics and Information Technology (IF 3.633) Pub Date : 2023-11-16 Eko Rahmadian, Daniel Feitosa, Yulia Virantina
-
Public health measures and the rise of incidental surveillance: Considerations about private informational power and accountability Ethics and Information Technology (IF 3.633) Pub Date : 2023-11-16 B. A. Kamphorst, A. Henschke
The public health measures implemented in response to the COVID-19 pandemic have resulted in a substantially increased shared reliance on private infrastructure and digital services in areas such as healthcare, education, retail, and the workplace. This development has (i) granted a number of private actors significant (informational) power, and (ii) given rise to a range of digital surveillance practices
-
Conceptualising and regulating all neural data from consumer-directed devices as medical data: more scope for an unnecessary expansion of medical influence? Ethics and Information Technology (IF 3.633) Pub Date : 2023-11-15 Brad Partridge, Susan Dodds
Neurodevices that collect neural (or brain activity) data have been characterised as having the ability to register the inner workings of human mentality. There are concerns that the proliferation of such devices in the consumer-directed realm may result in the mass processing and commercialisation of neural data (as has been the case with social media data) and even threaten the mental privacy of
-
Should we embrace “Big Sister”? Smart speakers as a means to combat intimate partner violence Ethics and Information Technology (IF 3.633) Pub Date : 2023-11-04 Robert Sparrow, Mark Andrejevic, Bridget Harris
It is estimated that one in three women experience intimate partner violence (IPV) across the course of their life. The popular uptake of “smart speakers” powered by sophisticated AI means that surveillance of the domestic environment is increasingly possible. Correspondingly, there are various proposals to use smart speakers to detect or report IPV. In this paper, we clarify what might be possible
-
The landscape of data and AI documentation approaches in the European policy context Ethics and Information Technology (IF 3.633) Pub Date : 2023-10-28 Marina Micheli, Isabelle Hupont, Blagoj Delipetrev, Josep Soler-Garrido
-
Generative AI models should include detection mechanisms as a condition for public release Ethics and Information Technology (IF 3.633) Pub Date : 2023-10-28 Alistair Knott, Dino Pedreschi, Raja Chatila, Tapabrata Chakraborti, Susan Leavy, Ricardo Baeza-Yates, David Eyers, Andrew Trotman, Paul D. Teal, Przemyslaw Biecek, Stuart Russell, Yoshua Bengio
The new wave of ‘foundation models’—general-purpose generative AI models, for production of text (e.g., ChatGPT) or images (e.g., MidJourney)—represents a dramatic advance in the state of the art for AI. But their use also introduces a range of new risks, which has prompted an ongoing conversation about possible regulatory mechanisms. Here we propose a specific principle that should be incorporated
-
Smart cities as a testbed for experimenting with humans? - Applying psychological ethical guidelines to smart city interventions Ethics and Information Technology (IF 3.633) Pub Date : 2023-10-24 Verena Zimmermann
-
Violent video games: content, attitudes, and norms Ethics and Information Technology (IF 3.633) Pub Date : 2023-10-16 Alexander Andersson, Per-Erik Milam
Violent video games (VVGs) are a source of serious and continuing controversy. They are not unique in this respect, though. Other entertainment products have been criticized on moral grounds, from pornography to heavy metal, horror films, and Harry Potter books. Some of these controversies have fizzled out over time and have come to be viewed as cases of moral panic. Others, including moral objections
-
The contested role of AI ethics boards in smart societies: a step towards improvement based on board composition by sortition Ethics and Information Technology (IF 3.633) Pub Date : 2023-10-06 Ludovico Giacomo Conti, Peter Seele
-
Empathy training through virtual reality: moral enhancement with the freedom to fall? Ethics and Information Technology (IF 3.633) Pub Date : 2023-09-26 Anda Zahiu, Emilian Mihailov, Brian D. Earp, Kathryn B. Francis, Julian Savulescu
We propose to expand the conversation around moral enhancement from direct brain-altering methods to include technological means of modifying the environments and media through which agents can achieve moral improvement. Virtual Reality (VR) based enhancement would not bypass a person’s agency, much less their capacity for reasoned reflection. It would allow agents to critically engage with moral insights
-
Melting contestation: insurance fairness and machine learning Ethics and Information Technology (IF 3.633) Pub Date : 2023-09-20 Laurence Barry, Arthur Charpentier
With their intensive use of data to classify and price risk, insurers have often been confronted with data-related issues of fairness and discrimination. This paper provides a comparative review of discrimination issues raised by traditional statistics versus machine learning in the context of insurance. We first examine historical contestations of insurance classification, showing that it was organized
-
Ethics framework for predictive clinical AI model updating Ethics and Information Technology (IF 3.633) Pub Date : 2023-09-08 Michal Pruski
-
Digitising reflective equilibrium Ethics and Information Technology (IF 3.633) Pub Date : 2023-09-07 Charlie Harry Smith
-
Cognitive warfare: an ethical analysis Ethics and Information Technology (IF 3.633) Pub Date : 2023-09-04 Seumas Miller
This article characterises the nature of cognitive warfare, its use of disinformation and computational propaganda, and its political and military purposes both in war and in conflict short of war. It discusses both defensive and offensive measures to counter cognitive warfare and, in particular, measures that comply with relevant moral principles.
-
In defense of (some) online echo chambers Ethics and Information Technology (IF 3.633) Pub Date : 2023-08-22 Douglas R. Campbell
In this article, I argue that online echo chambers are in some cases and in some respects good. I do not attempt to refute arguments that they are harmful, but I argue that they are sometimes beneficial. In the first section, I argue that it is sometimes good to be insulated from views with which one disagrees. In the second section, I argue that the software-design principles that give rise to online
-
Military robots should not look like humans Ethics and Information Technology (IF 3.633) Pub Date : 2023-08-17 Kamil Mamak, Kaja Kowalczewska
Using robots in military contexts is problematic at many levels. There are social, legal, and ethical issues that should be discussed first before their wider deployment. In this paper, we focus on an additional problem: their human likeness. We claim that military robots should not look like humans. That design choice may bring additional risks that endanger human lives and thereby contradicts
-
How to do robots with words: a performative view of the moral status of humans and nonhumans Ethics and Information Technology (IF 3.633) Pub Date : 2023-08-18 Mark Coeckelbergh
Moral status arguments are typically formulated as descriptive statements that tell us something about the world. But philosophy of language teaches us that language can also be used performatively: we do things with words and use words to try to get others to do things. Does and should this theory extend to what we say about moral status, and what does it mean? Drawing on Austin, Searle, and Butler
-
Calibrating machine behavior: a challenge for AI alignment Ethics and Information Technology (IF 3.633) Pub Date : 2023-08-16 Erez Firt
When discussing AI alignment, we usually refer to the problem of teaching or training advanced autonomous AI systems to make decisions that are aligned with human values or preferences. Proponents of this approach believe it can be employed as means to stay in control over sophisticated intelligent systems, thus avoiding certain existential risks. We identify three general obstacles on the path to
-
The philosophy of the metaverse Ethics and Information Technology (IF 3.633) Pub Date : 2023-08-01 Melvin Chen
-
Human achievement and artificial intelligence Ethics and Information Technology (IF 3.633) Pub Date : 2023-07-27 Brett Karlan
In domains as disparate as playing Go and predicting the structure of proteins, artificial intelligence (AI) technologies have begun to perform at levels beyond which any humans can achieve. Does this fact represent something lamentable? Does superhuman AI performance somehow undermine the value of human achievements in these areas? Go grandmaster Lee Sedol suggested as much when he announced his retirement
-
A Kantian response to the Gamer’s Dilemma Ethics and Information Technology (IF 3.633) Pub Date : 2023-07-09 Samuel Ulbricht
The Gamer’s Dilemma consists of three intuitively plausible but conflicting assertions: (i) Virtual murder is morally permissible. (ii) Virtual child molestation is morally forbidden. (iii) There is no relevant moral difference between virtual murder and virtual child molestation in computer games. Numerous attempts to resolve (or dissolve) the Gamer’s Dilemma line the field of computer game ethics
-
Dirty data labeled dirt cheap: epistemic injustice in machine learning systems Ethics and Information Technology (IF 3.633) Pub Date : 2023-07-07 Gordon Hull
Artificial intelligence (AI) and machine learning (ML) systems increasingly purport to deliver knowledge about people and the world. Unfortunately, they also seem to frequently present results that repeat or magnify biased treatment of racial and other vulnerable minorities. This paper proposes that at least some of the problems with AI’s treatment of minorities can be captured by the concept of epistemic
-
Between death and suffering: resolving the gamer’s dilemma Ethics and Information Technology (IF 3.633) Pub Date : 2023-07-07 Thomas Coghlan, Damian Cox
The gamer’s dilemma, initially proposed by Luck (Ethics and Information Technology 11(1):31–36, 2009) posits a moral comparison between in-game acts of murder and in-game acts of paedophilia within single-player videogames. Despite each activity lacking the obvious harms of their real-world equivalents, common intuitions suggest an important difference between them. Some responses to the dilemma suggest
-
Specifying a principle of cryptographic justice as a response to the problem of going dark Ethics and Information Technology (IF 3.633) Pub Date : 2023-07-05 Michael Wilson
Over the past decade, the Five Eyes Intelligence community has argued cryptosystems with end-to-end encryption (E2EE) are disrupting the acquisition and analysis of digital evidence. They have labelled this phenomenon the ‘problem of going dark’. Consequently, several jurisdictions have passed ‘responsible encryption’ laws that limit access to E2EE. Based upon a rhetorical analysis (Cunningham in Understanding
-
Algorithmic legitimacy in clinical decision-making Ethics and Information Technology (IF 3.633) Pub Date : 2023-07-02 Sune Holm
Machine learning algorithms are expected to improve referral decisions. In this article I discuss the legitimacy of deferring referral decisions in primary care to recommendations from such algorithms. The standard justification for introducing algorithmic decision procedures to make referral decisions is that they are more accurate than the available practitioners. The improvement in accuracy will
-
To pay or not to pay? handling crowdsourced participants who drop out from a research study Ethics and Information Technology (IF 3.633) Pub Date : 2023-06-30 Raquel Benbunan-Fich
-
Dual-use implications of AI text generation Ethics and Information Technology (IF 3.633) Pub Date : 2023-05-29 Julian J. Koplin
AI researchers have developed sophisticated language models capable of generating paragraphs of 'synthetic text' on topics specified by the user. While AI text generation has legitimate benefits, it could also be misused, potentially to grave effect. For example, AI text generators could be used to automate the production of convincing fake news, or to inundate social media platforms with machine-generated
-
Has Montefiore and Formosa resisted the Gamer’s Dilemma? Ethics and Information Technology (IF 3.633) Pub Date : 2023-05-24 Morgan Luck
Montefiore and Formosa (Ethics Inf Technol 24:31, 2022) provide a useful way of narrowing the Gamer’s Dilemma to cases where virtual murder seems morally permissible, but not virtual child molestation. They then resist the dilemma by theorising that the intuitions supporting it are not moral. In this paper, I consider this theory to determine whether the dilemma has been successfully resisted. I offer
-
Selling visibility-boosts on dating apps: a problematic practice? Ethics and Information Technology (IF 3.633) Pub Date : 2023-05-18 Bouke de Vries
Love, sex, and physical intimacy are some of the most desired goods in life and they are increasingly being sought on dating apps such as Tinder, Bumble, and Badoo. For those who want a leg up in the chase for other people’s attention, almost all of these apps now offer the option of paying a fee to boost one’s visibility for a certain amount of time, which may range from 30 min to a few hours. In
-
The seven troubles with norm-compliant robots Ethics and Information Technology (IF 3.633) Pub Date : 2023-04-26 Tom N. Coggins, Steffen Steinert
Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance
-
Anything new under the sun? Insights from a history of institutionalized AI ethics Ethics and Information Technology (IF 3.633) Pub Date : 2023-04-24 Simone Casiraghi
Scholars, policymakers and organizations in the EU, especially at the level of the European Commission, have turned their attention to the ethics of (trustworthy and human-centric) Artificial Intelligence (AI). However, there has been little reflexivity on (1) the history of the ethics of AI as an institutionalized phenomenon and (2) the comparison to similar episodes of “ethification” in other fields
-
Bridging the civilian-military divide in responsible AI principles and practices Ethics and Information Technology (IF 3.633) Pub Date : 2023-04-15 Rachel Azafrani, Abhishek Gupta
Advances in AI research have brought increasingly sophisticated capabilities to AI systems and heightened the societal consequences of their use. Researchers and industry professionals have responded by contemplating responsible principles and practices for AI system design. At the same time, defense institutions are contemplating ethical guidelines and requirements for the development and use of AI
-
A systematic review of almost three decades of value sensitive design (VSD): what happened to the technical investigations? Ethics and Information Technology (IF 3.633) Pub Date : 2023-04-13 Anne Gerdes, Tove Faber Frandsen
-
Why a treaty on autonomous weapons is necessary and feasible Ethics and Information Technology (IF 3.633) Pub Date : 2023-03-22 Daan Kayser
-
(Some) algorithmic bias as institutional bias Ethics and Information Technology (IF 3.633) Pub Date : 2023-03-21 Camila Hernandez Flowerman
In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot
-
Moral autonomy of patients and legal barriers to a possible duty of health related data sharing Ethics and Information Technology (IF 3.633) Pub Date : 2023-03-15 Anton Vedder, Daniela Spajić
Informed consent bears significant relevance as a legal basis for the processing of personal data and health data in the current privacy, data protection and confidentiality legislations. The consent requirements find their basis in an ideal of personal autonomy. Yet, with the recent advent of the global pandemic and the increased use of eHealth applications in its wake, a more differentiated perspective
-
Knowledge representation and acquisition for ethical AI: challenges and opportunities Ethics and Information Technology (IF 3.633) Pub Date : 2023-03-11 Vaishak Belle
-
The value of responsibility gaps in algorithmic decision-making Ethics and Information Technology (IF 3.633) Pub Date : 2023-02-24 Lauritz Munch, Jakob Mainz, Jens Christian Bjerring
Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in
-
Autonomous Military Systems: collective responsibility and distributed burdens Ethics and Information Technology (IF 3.633) Pub Date : 2023-02-23 Niël Henk Conradie
The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility
-
Military artificial intelligence as power: consideration for European Union actorness Ethics and Information Technology (IF 3.633) Pub Date : 2023-02-17 Justinas Lingevicius
The article focuses on the inconsistency between the European Commission’s position of excluding military AI from its emerging AI policy and, at the same time, EU policy initiatives targeted at supporting the military and defence elements of AI at the EU level. This raises the question of what, then, the debate on military AI suggests about the EU’s actorness, discussed in the light of the Europe-as-a-power debate
-
The irresponsibility of not using AI in the military Ethics and Information Technology (IF 3.633) Pub Date : 2023-02-14 H. W. Meerveld, R. H. A. Lindelauf, E. O. Postma, M. Postma
-
Model of a military autonomous device following International Humanitarian Law Ethics and Information Technology (IF 3.633) Pub Date : 2023-02-15 Tomasz Zurek, Jonathan Kwik, Tom van Engers
-
Autonomous weapon systems and responsibility gaps: a taxonomy Ethics and Information Technology (IF 3.633) Pub Date : 2023-02-16 Nathan Gabriel Wood
A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally do so presenting it as a problem for AWS as a whole class of weapons. Yet there exists a rather