
Perspective

Governing AI safety through independent audits

Abstract

Highly automated systems are becoming omnipresent. They range in function from self-driving vehicles to advanced medical diagnostics and afford many benefits. However, there are assurance challenges that have become increasingly visible in high-profile crashes and incidents. Governance of such systems is critical to garner widespread public trust. Governance principles have been previously proposed offering aspirational guidance to automated system developers; however, their implementation is often impractical given the excessive costs and processes required to enact and then enforce the principles. This Perspective, authored by an international and multidisciplinary team across government organizations, industry and academia, proposes a mechanism to drive widespread assurance of highly automated systems: independent audit. As proposed, independent audit of AI systems would embody three ‘AAA’ governance principles of prospective risk Assessments, operation Audit trails and system Adherence to jurisdictional requirements. Independent audit of AI systems serves as a pragmatic approach to an otherwise burdensome and unenforceable assurance challenge.
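
Of the three ‘AAA’ principles, the operation audit trail is the most directly implementable. The sketch below is purely illustrative — the Perspective does not prescribe any implementation, and every name in it (AuditTrail, append, verify) is hypothetical — but it shows, under the assumption of a hash-chained append-only log, how an independent auditor could detect after the fact whether recorded system events were altered or deleted:

```python
# Illustrative only: a hypothetical, minimal tamper-evident audit trail.
# The Perspective proposes audit trails as a governance principle; it does
# not prescribe this (or any) implementation. All names here are invented.
import hashlib
import json
import time


class AuditTrail:
    """Append-only log in which each record is chained to its predecessor
    by a SHA-256 hash, so an independent auditor can detect tampering."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first record

    def __init__(self):
        self.records = []

    def append(self, event: dict) -> dict:
        """Record an event (e.g. a sensor input, decision or override)."""
        prev_hash = self.records[-1]["hash"] if self.records else self.GENESIS
        body = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        record = {**body, "hash": digest}
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted record breaks it."""
        prev_hash = self.GENESIS
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["prev_hash"] != prev_hash:
                return False
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True


if __name__ == "__main__":
    trail = AuditTrail()
    trail.append({"type": "decision", "action": "brake", "confidence": 0.93})
    trail.append({"type": "override", "actor": "safety_driver"})
    assert trail.verify()  # an auditor replays the chain end to end
```

A production recorder would add tamper-resistant storage, standardized event schemas and secure timestamping, in the spirit of aviation flight data recorders, but the chain-verification idea is the same: a third party can check adherence without trusting the operator’s own records.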



Acknowledgements

We owe special thanks to the participants of the CCC Workshop on Assured Autonomy for their ideas, inspiration and discussion that contributed to this paper. C.H. is a former Chairman of the National Transportation Safety Board.

Author information


Corresponding author

Correspondence to Gregory Falco.

Ethics declarations

Competing interests

  • G.F. is a consultant for the World Bank Group on autonomous vehicle regulation and is a ForHumanity fellow; he thanks the US National Institute of Standards and Technology (NIST), the Icelandic Fulbright Commission and the National Science Foundation for research funding.

  • B.S. is a ForHumanity fellow.

  • J.B. is a US government civil servant working for and funded by NASA.

  • R.C. is the Executive Director of ForHumanity, a registered 501(c)(3) not-for-profit organization.

  • A.D. is a member of the Maryland Cybersecurity Council, established by the Maryland State Legislature to work with the National Institute of Standards and Technology and other federal agencies, private-sector businesses and private cybersecurity experts to improve cybersecurity in Maryland.

  • D.D. is an external member of the Salesforce Advisory Council on Ethical & Humane Use of Technology.

  • A.G. is a US government civil servant working for and funded by NASA; he is a voting member of the SAE G-34 working group on AI in Aviation, which is writing guidelines for AI in aviation.

  • C.H. is the Chairman of the Washington Metrorail Safety Commission; the opinions expressed in this article are his and not those of the Commission. He is bound by confidentiality agreements that prevent him from disclosing other competing interests in this work.

  • M.J. holds an EPSRC Fellowship investigating ethical data recorders in robots, leads a project on the legality and ethics of data recorders in autonomous vehicles and is a member of the All-Party Parliamentary Group on Data Analytics; she is a Director of ORBIT-RRI Ltd.

  • H.J. is the Swedish Science and Innovation Counsellor to the United States.

  • A.J.L. is Vice President and Global Outreach Director at Microsoft Research and currently serves as the Science Representative on the steering committee of the Global Partnership on Artificial Intelligence.

  • A.K.M. is a Director of Minerva Intelligence Inc.; he is a member of the Centre for AI Decision-making and Action (CAIDA) Steering Committee, the AI Network of British Columbia (AInBC) Board and the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) international advisory board.

  • C.M. is a Fellow of the Alan Turing Institute, a member of the Strategic Advisory Board of Zenzic and a member of the ENISA CarSec Expert Group.

  • S.E.P. is the Chief National Cyber Security Adviser of the Icelandic Government, chairs the Icelandic Cyber Security Council and is a member of the Board of the European Union Agency for Cybersecurity (ENISA); he leads the development and implementation of Iceland’s cybersecurity strategy, including the cybersecurity aspects of AI.

  • A.W. sits on the British Standards Institute committee AMT/010/01 Ethics for Robots and Autonomous Systems, the executive committee of the IEEE Standards Association Global Initiative on Ethics of Autonomous and Intelligent Systems and the WEF Global AI Council; he is a member of the Advisory Committee of the robotics company Karakuri Ltd.

  • Z.K.Y. leads the development and implementation of Singapore’s AI Governance Framework and is a member of the OECD Network of Experts in AI and the Global Partnership on Artificial Intelligence’s expert working group on Data Governance.

Authors not mentioned declare no competing interests.

Additional information

Peer review information: Nature Machine Intelligence thanks Ryan Calo and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Falco, G., Shneiderman, B., Badger, J. et al. Governing AI safety through independent audits. Nat Mach Intell 3, 566–571 (2021). https://doi.org/10.1038/s42256-021-00370-7

