Adversarial attacks make imperceptible changes to a neural network's inputs so that the network recognizes them as something entirely different. This flaw can give us insight into how these networks work and how to make them more robust.
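To make the idea concrete, here is a minimal sketch of one well-known attack of this kind, the fast gradient sign method (FGSM); it is an illustration only, not the method discussed in this piece, and it assumes a hypothetical PyTorch image classifier `model` with inputs `x` scaled to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    # Enable gradients with respect to the input image itself.
    x = x.clone().detach().requires_grad_(True)
    # Loss of the model's prediction against the true label y.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Nudge each pixel by +/- epsilon in the direction that
    # increases the loss; a small epsilon keeps the change
    # imperceptible to a human observer.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Even with a perturbation budget of only a few grey levels per pixel, such a signed-gradient step can flip the predicted class while the image looks unchanged to a human.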