
The Fragility of Moral Traits to Technological Interventions

  • Original Paper
  • Neuroethics

Abstract

I will argue that deep moral enhancement is relatively prone to unexpected consequences. I first argue that even an apparently straightforward example of moral enhancement such as increasing human co-operation could plausibly lead to unexpected harmful effects. Secondly, I generalise the example and argue that technological intervention on individual moral traits will often lead to paradoxical effects on the group level. Thirdly, I contend that insofar as deep moral enhancement targets higher-order desires (desires to desire something), it is prone to be self-reinforcing and irreversible. Fourthly, I argue that the complex causal history of moral traits, with its relatively high frequency of contingencies, indicates their fragility. Finally, I conclude that attempts at deep moral enhancement pose greater risks than other enhancement technologies. For example, one of the major problems that moral enhancement is hoped to address is lack of co-operation between groups. If humanity developed and distributed a drug that dramatically increased co-operation between individuals, we would likely see a paradoxical decrease in co-operation between groups and a self-reinforcing increase in the disposition to engage in further modifications – both of which are potential problems.


Notes

  1. From the meaning of the words alone, one would be inclined to classify traditional interventions as moral enhancement. Some authors agree with this classification and then proceed to define a more specific term such as biomedical moral enhancement or technological moral enhancement; other authors do not (for a review of definitions see Raus et al. [4]). As a matter of usage, a search for the term on academic databases will reveal that there seems to be no publication on traditional moral education or moral progress using the term moral enhancement [5]. As observed by Raus et al. [4], a lack of rigorous definition is a problem for anyone attempting to conduct a thorough investigation in the area. I fully agree and will aim to produce a precise definition shortly.

  2. It is a tacit premise of the moral enhancement project that it will be more powerful than most forms of moral education in some way. It would be hard to justify proposing a technology that might or might not be feasible, and might or might not be risky, merely to achieve worse results than we already obtain from moral education. Moreover, there are many precedents of pharmacological interventions that efficiently treat conditions which would otherwise be untreatable. I file this premise under the feasibility assumption to be mentioned shortly, which I will not directly address. For a partial comparison between moral enhancement and moral education see Fabiano [6].

  3. According to this view, defended by Alan Carter, the repugnant conclusion results from optimizing solely the total amount of happiness while dismissing every other value; it produces not merely a scenario of diminished value, but one containing disvalue. The utility monster scenario results from optimizing solely average happiness; that scenario would be otherwise ideal only if average happiness were the sole morally relevant variable [8].

  4. Assuming there would be no other significant consequences of this change. Such a drug could work by merely preventing his brain from processing someone’s race during a trial while leaving his other traits and overall propensity to discriminate in other contexts unchanged. However, upcoming arguments about fragility will indicate that preventing cascading effects is not trivial.

  5. It might be the case that radical moral education or persistent environmental changes would also cause significant changes in traits primarily expected to lead to morally better behaviour or motives, but for the sake of simplicity I will focus my arguments on technological interventions directly targeted at such traits. Also, per assumption, I presume that most current forms of moral education are not as effective as deep moral enhancement (see footnote 2).

  6. By descriptive sense I mean in the sense of being a description of human morality as an empirical matter (e.g. moral psychology) and not in the normative ethics sense.

  7. I do not claim moral traits are the most fragile. There might be traits or other features more or equally fragile (examples range from consciousness to cancers, which all seem more or equally hard to improve upon).

  8. Therefore, although I focus on consequences, I do not restrict myself to any specific form of consequentialism here. If anything, my analysis is pluralist regarding moral theory. If the modified individual behaves in the expected way but for the wrong motives, I count it as an unexpected outcome. Unexpected changes due to reduced deliberation also count by themselves.

  9. Some might argue that aggregating individual co-operation should produce group co-operation. However, as I will explain in the next paragraphs, individual co-operation is often restricted to one’s own group and sometimes works to the detriment of outsiders. Aggregating it will not remove these effects.
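The aggregation point can be made concrete with a toy decomposition (my own illustrative sketch, not a model from the co-operation literature; the function name, parameters, and units are all hypothetical). If a fixed fraction of a person's co-operative effort is parochial — channelled into aiding peers and opposing out-groups — then boosting the overall disposition to co-operate raises within-group co-operation while making net between-group relations worse:

```python
def group_effects(coop, parochialism=0.8):
    """Toy decomposition of a co-operative disposition (arbitrary units).

    coop:         overall disposition to co-operate (0..1)
    parochialism: fraction of co-operative effort restricted to one's
                  own group, which also fuels hostility toward out-groups
    """
    # All co-operative effort benefits the in-group.
    within = coop
    # Only the non-parochial share reaches outsiders; the parochial
    # share counts against them (aid to peers in conflict with out-groups).
    between = coop * (1 - parochialism) - coop * parochialism
    return within, between

within_before, between_before = group_effects(0.4)  # before enhancement
within_after, between_after = group_effects(0.9)    # after a co-operation drug

assert within_after > within_before    # within-group co-operation rises...
assert between_after < between_before  # ...while between-group relations worsen
```

The sketch only shows that summing individually enhanced dispositions need not sum to group-level co-operation when part of each disposition is directed against outsiders; it makes no claim about the actual magnitudes involved.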

  10. Their model revealed that: (1) groups of non-parochial co-operators are at a disadvantage relative to other groups and thus would not have evolved in the first place; (2) groups of parochial co-operators, who are willing to sacrifice themselves fighting against out-groups in order to benefit their peers, have an evolutionary advantage; and (3) groups that are merely parochial are at a general disadvantage.

  11. Moreover, if the trait of being co-operative were a first-order desire, it would have to correspond to the entire set of desires for each co-operative outcome, including outcomes that are yet to happen. It is more plausible that co-operativeness is a desire to desire co-operative outcomes, which gives birth to specific first-order desires in different situations.

  12. I will use a fictitious person with oversimplified values here, but I expect similar examples can be found whenever there are competing moral traits that can be enhanced unevenly.

  13. Such a scenario is not solely the result of mistakenly thinking decreasing violence is the only relevant dimension to be improved. Even if the initial intention was to decrease the inclination towards violence just a moderate amount, Steve would not stop with just one single modification.

  14. To clarify, by irreversible I do not mean that it would be technically unfeasible to revert the modification, but merely that one would be unwilling to revert it. One could still be coerced into reverting.

  15. To see how these differ, consider a perfectly secure prison for serial killers. Few would argue its inmates are no longer aggressive.

  16. It is even harder to conceive desiring to desire something but desiring not to desire to desire it. One may have conflicting second-order desires, but how can one desire to desire orange juice, but desire not to desire to desire orange juice? If they exist, third-order desires would likely only mirror second-order desires. Of course, properly settling this question lies outside of the scope of this article, but it is enough that these desires are unlikely to play a role here.

  17. If the groups had been even more isolated by geographical accidents – ceteris paribus – we would be more strongly inclined to think it is permissible to ignore the suffering of those far away, because our minds would have adapted to an environment in which we could hardly affect distant people and in which, accordingly, they simply did not matter. On the other hand, if the groups had been less isolated by geographical accidents – ceteris paribus – we would be more strongly inclined to think it is impermissible to ignore the suffering of those far away, for analogous reasons. Admittedly, the influence of these innate intuitions over extensive ethical reflection might not be so straightforward, but my goal here is to analyse moral traits, not ethical theories.

  18. For instance, the astronomical laws explaining the apparent retrograde motion of planets became substantially simpler after Kepler’s laws were proposed.

  19. Of course, it might be that we need not and that evolutionary explanations are completely unnecessary. My arguments rely on the assumption that we need evolutionary theory, even if sparingly, to explain human morality.

  20. Redundancy would happen regardless because there are more codons than encodable amino acids. But this does not explain why the third nucleotide is often the redundant one.

References

  1. Crockett, Molly J. 2014. Moral bioenhancement: A neuroscientific perspective. Journal of Medical Ethics 40 (6): 370–371. https://doi.org/10.1136/medethics-2012-101096.


  2. Douglas, T. 2008. Moral Enhancement. Journal of Applied Philosophy 25 (3): 228–245. https://doi.org/10.1111/j.1468-5930.2008.00412.x.


  3. Persson, I., and J. Savulescu. 2008. The perils of cognitive enhancement and the urgent imperative to enhance the moral character of humanity. Journal of Applied Philosophy 25 (3): 162–177. https://doi.org/10.1111/j.1468-5930.2008.00410.x.


  4. Raus, K., F. Focquaert, M. Schermer, J. Specker, and S. Sterckx. 2014. On defining moral enhancement: A Clarificatory taxonomy. Neuroethics 7 (3): 263–273. https://doi.org/10.1007/s12152-014-9205-4.


  5. Google Scholar. 2018. Moral Enhancement. https://scholar.google.com/scholar?q=%22moralenhancement%22. Accessed 10 June 2018.

  6. Fabiano, J. 2020. Technological moral enhancement or traditional moral progress? Why not both? Journal of Medical Ethics 46 (6): 405–411. https://doi.org/10.1136/medethics-2019-105915

  7. Portenoy, R.K., J.O. Jarden, J.J. Sidtis, R.B. Lipton, K.M. Foley, and D.A. Rottenberg. 1986. Compulsive thalamic self-stimulation: A case with metabolic, electrophysiologic and behavioral correlates. Pain 27 (3): 277–290. https://doi.org/10.1016/0304-3959(86)90155-7.


  8. Carter, A. 2011. Some groundwork for a multidimensional axiology. Philosophical Studies 154 (3): 389–408. https://doi.org/10.1007/s11098-010-9557-5.


  9. Douglas, T. 2013. Moral enhancement via direct emotion modulation: A reply to John Harris. Bioethics 27 (3): 160–168. https://doi.org/10.1111/j.1467-8519.2011.01919.x.


  10. Cobb-Clark, D.A., and S. Schurer. 2012. The stability of big-five personality traits. Economics Letters 115 (1): 11–15. https://doi.org/10.1016/j.econlet.2011.11.015.


  11. Douglas, T. 2014a. The morality of moral Neuroenhancement. In Handbook of Neuroethics, ed. J. Clausen and N. Levy. Dordrecht: Springer.


  12. Agar, N. 2013. Moral bioenhancement is dangerous. Journal of Medical Ethics 41 (4): 1–4. https://doi.org/10.1136/medethics-2013-101325.

  13. Harris, J. 2013. ‘Ethics is for bad guys!’ Putting the ‘moral’ into moral enhancement. Bioethics 27 (3): 169–173. https://doi.org/10.1111/j.1467-8519.2011.01946.x.


  14. Sparrow, R. 2014. Unfit for the future: The need for moral enhancement, by Persson, Ingmar, and Julian Savulescu. Australasian Journal of Philosophy 92 (2): 404–407. https://doi.org/10.1080/00048402.2013.860180.


  15. Persson, I., and J. Savulescu. 2014. Against fetishism about egalitarianism and in defense of cautious moral bioenhancement. The American Journal of Bioethics 14 (4): 39–42. https://doi.org/10.1080/15265161.2014.889248.


  16. Douglas, T. 2014b. The relationship between effort and moral worth: Three amendments to Sorensen’s model. Ethical Theory and Moral Practice 17 (2): 325–334. https://doi.org/10.1007/s10677-013-9441-4.


  17. Crockett, Molly J., L. Clark, M.D. Hauser, and T.W. Robbins. 2010. Serotonin selectively influences moral judgment and behavior through effects on harm aversion. Proceedings of the National Academy of Sciences 107 (40): 17433–17438. https://doi.org/10.1073/pnas.1009396107.


  18. Tse, W., and A. Bond. 2002. Serotonergic intervention affects both social dominance and affiliative behaviour. Psychopharmacology 161 (3): 324–330. https://doi.org/10.1007/s00213-002-1049-7.


  19. Bilderbeck, A.C., G.D.A. Brown, J. Read, M. Woolrich, P.J. Cowen, T.E.J. Behrens, and R.D. Rogers. 2014. Serotonin and social norms. Psychological Science 25 (7): 1303–1313. https://doi.org/10.1177/0956797614527830.


  20. Crockett, M.J., L. Clark, G. Tabibnia, M.D. Lieberman, and T.W. Robbins. 2008. Serotonin modulates behavioral reactions to unfairness. Science 320 (5884): 1739–1739. https://doi.org/10.1126/science.1155577.


  21. Wood, R.M., J.K. Rilling, A.G. Sanfey, Z. Bhagwagar, and R.D. Rogers. 2006. Effects of tryptophan depletion on the performance of an iterated Prisoner’s dilemma game in healthy adults. Neuropsychopharmacology 31 (5): 1075–1084. https://doi.org/10.1038/sj.npp.1300932.


  22. Shook, J.R. 2012. Neuroethics and the possible types of moral enhancement. AJOB Neuroscience 3 (4): 3–14. https://doi.org/10.1080/21507740.2012.712602.


  23. Levy, N., T. Douglas, G. Kahane, S. Terbeck, P.J. Cowen, M. Hewstone, and J. Savulescu. 2014. Are you morally modified?: The moral effects of widely used pharmaceuticals. Philosophy, Psychiatry, and Psychology 21 (2): 111–125. https://doi.org/10.1353/ppp.2014.0023.


  24. Ostrom, E. 1990. Governing the commons: The evolution of institutions for collective action. Cambridge: Cambridge University Press.


  25. Balliet, D., J. Wu, and C.K.W. De Dreu. 2014. Ingroup favoritism in cooperation: A meta-analysis. Psychological Bulletin 140 (6): 1556–1581. https://doi.org/10.1037/a0037737.


  26. De Dreu, C.K.W., ed. 2014. Social conflict within and between groups. Current Issues in Social Psychology. London: Psychology Press. https://doi.org/10.4324/9781315772745.

  27. Cardenas, J.C., and C. Mantilla. 2015. Between-group competition, intra-group cooperation and relative performance. Frontiers in Behavioral Neuroscience 9: 1–9. https://doi.org/10.3389/fnbeh.2015.00033.


  28. Bornstein, G. 2003. Intergroup conflict: Individual, group, and collective interests. Personality and Social Psychology Review 7 (2): 129–145. https://doi.org/10.1207/S15327957PSPR0702_129-145.


  29. Bowles, S., and H. Gintis. 2013. The coevolution of institutions and behaviors. In A cooperative species: Human Reciprocity and Its Evolution, 119–146. Princeton: Princeton University Press.

  30. De Dreu, C.K.W., and M.E. Kret. 2016. Oxytocin conditions intergroup relations through Upregulated in-group empathy, cooperation, conformity, and defense. Biological Psychiatry 79 (3): 165–173. https://doi.org/10.1016/j.biopsych.2015.03.020.


  31. De Dreu, C.K.W. 2012. Oxytocin modulates cooperation within and competition between groups: An integrative review and research agenda. Hormones and Behavior 61 (3): 419–428. https://doi.org/10.1016/j.yhbeh.2011.12.009.


  32. Aaldering, H. 2014. Parochial and universal cooperation in intergroup conflicts. PhD Thesis, Amsterdam: Universiteit van Amsterdam.

  33. Smith, M., D. Lewis, and M. Johnston. 1989. Dispositional theories of value. Proceedings of the Aristotelian Society, Supplementary Volumes 63: 89–174.


  34. Van Lange, P.A.M., J. Joireman, C.D. Parks, and E. Van Dijk. 2013. The psychology of social dilemmas: A review. Organizational Behavior and Human Decision Processes 120 (2): 125–141. https://doi.org/10.1016/j.obhdp.2012.11.003.


  35. Douglas, T. 2010. Intertemporal disagreement and empirical slippery slope arguments. Utilitas 22 (2): 184–197. https://doi.org/10.1017/S0953820810000087.


  36. Krebs, D. 2015. The evolution of morality. In The handbook of evolutionary psychology, ed. D. Buss, 747–771. Hoboken: Wiley. https://doi.org/10.1002/9780470939376.ch26.


  37. Andrews, T., and F. Burke. 2007. What does it mean to think historically? Perspectives on History, American Historical Association, January 1.

  38. Gell-Mann, M. 1995. What is complexity? Complexity 1 (1).

  39. Bennett, A., and C. Elman. 2006. Complex causal relations and case study methods: The example of path dependence. Political Analysis 14: 250–267. https://doi.org/10.1093/Pan/Mpj020.


  40. Lewis, D. 1987. Causal Explanation. In Philosophical Papers, Volume II, 214–240. New York: Oxford University Press. https://doi.org/10.1093/0195036468.003.0007.

  41. Buchanan, A. 2011. Beyond humanity? Oxford: Oxford University Press.


  42. Zhang, J. 2012. Genetic redundancies and their evolutionary maintenance. In Evolutionary Systems Biology, ed. O.S. Soyer, vol. 751, 279–300. New York: Springer. https://doi.org/10.1007/978-1-4614-3567-9_13.


  43. De Vos, J.M., L.N. Joppa, J.L. Gittleman, P.R. Stephens, and S.L. Pimm. 2015. Estimating the normal background rate of species extinction. Conservation Biology 29 (2): 452–462. https://doi.org/10.1111/cobi.12380.


Author information


Correspondence to Joao Fabiano.



Cite this article

Fabiano, J. The Fragility of Moral Traits to Technological Interventions. Neuroethics 14, 269–281 (2021). https://doi.org/10.1007/s12152-020-09452-6
