Trusting artificial intelligence in cybersecurity is a double-edged sword
Nature Machine Intelligence ( IF 23.8 ) Pub Date : 2019-11-11 , DOI: 10.1038/s42256-019-0109-1
Mariarosaria Taddeo , Tom McCutcheon , Luciano Floridi

Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users' trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of 'reliable AI' for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity. Current national cybersecurity and defence strategies of several governments explicitly mention the use of AI. However, it will be important to develop standards and certification procedures, which involves continuous monitoring and assessment of threats. The focus should be on the reliability of AI-based systems, rather than on eliciting users' trust in AI.

Updated: 2020-01-14