Obfuscation of images via differential privacy: From facial images to general images

Published in Peer-to-Peer Networking and Applications

Abstract

Due to the pervasiveness of image capturing devices in everyday life, images of individuals are routinely captured. Although this has enabled many benefits, it also infringes on personal privacy. A promising direction in research on obfuscation of facial images has been the work in the k-same family of methods which employ the concept of k-anonymity from database privacy. However, there are a number of deficiencies of k-anonymity that carry over to the k-same methods, detracting from their usefulness in practice. In this paper, we first outline several of these deficiencies and discuss their implications in the context of facial obfuscation. We then develop a framework through which we obtain a formal differentially private guarantee for the obfuscation of facial images in generative machine learning models. Our approach provides a provable privacy guarantee that is not susceptible to the outlined deficiencies of k-same obfuscation and produces photo-realistic obfuscated output. In addition, we demonstrate through experimental comparisons that our approach can achieve comparable utility to k-same obfuscation in terms of preservation of useful features in the images. Furthermore, we propose a method to achieve differential privacy for any image (i.e., without restriction to facial images) through the direct modification of pixel intensities. Although the addition of noise to pixel intensities does not provide the high visual quality obtained via generative machine learning models, it offers greater versatility by eliminating the need for a trained model. We demonstrate that our proposed use of the exponential mechanism in this context is able to provide superior visual quality to pixel-space obfuscation using the Laplace mechanism.
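To make the pixel-space baseline concrete: the Laplace mechanism adds noise drawn from a Laplace distribution with scale Δ/ε to each pixel intensity. The sketch below is a simplified illustration only, not the paper's implementation; the function name `laplace_obfuscate` and the per-pixel sensitivity of 255 (the full intensity range, one simple way to bound the effect of replacing a single image) are assumptions made here for clarity.

```python
import numpy as np

def laplace_obfuscate(image, epsilon, sensitivity=255.0, rng=None):
    """Add independent Laplace noise to each pixel intensity.

    Illustrative sketch: each pixel is perturbed with scale
    b = sensitivity / epsilon, then clipped back to [0, 255].
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon  # Laplace scale b = delta / epsilon
    noisy = image.astype(np.float64) + rng.laplace(0.0, scale, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

As the abstract notes, this per-pixel noise degrades visual quality quickly at small ε, which motivates both the generative-model approach and the exponential-mechanism variant studied in the paper.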

References

  1. Abadi M, Chu A, Goodfellow IJ, McMahan HB, Mironov I, Talwar K, Zhang L (2016) Deep Learning with Differential Privacy. In: ACM SIGSAC conference on computer and communications security, pp 308–318. https://doi.org/10.1145/2976749.2978318

  2. Acharya D, Huang Z, Paudel DP, Van Gool L (2018) Covariance pooling for facial expression recognition. In: IEEE/CVF conference on computer vision and pattern recognition workshops, pp 480–487. https://doi.org/10.1109/CVPRW.2018.00077

  3. Almasi MM, Siddiqui TR, Mohammed N, Hemmati H (2016) The risk-utility tradeoff for data privacy models. In: 8th IFIP international conference on new technologies, mobility and security, pp 1–5. https://doi.org/10.1109/NTMS.2016.7792481

  4. Andrés ME, Bordenabe NE, Chatzikokolakis K, Palamidessi C (2013) Geo-indistinguishability: Differential privacy for location-based systems. In: ACM SIGSAC Conference on Computer & Communications Security, pp 901–914, https://doi.org/10.1145/2508859.2516735

  5. Anwar F, Petrounias I, Morris T, Kodogiannis V (2012) Mining anomalous events against frequent sequences in surveillance videos from commercial environments. Expert Syst Appl 39(4):4511–4531. https://doi.org/10.1016/j.eswa.2011.09.134

  6. Basu A, Nakamura T, Hidano S, Kiyomoto S (2015) k-anonymity: Risks and the reality. In: IEEE Trustcom/BigDataSE/ISPA, vol 1, pp 983–989, https://doi.org/10.1109/Trustcom.2015.473

  7. Bitouk D, Kumar N, Dhillon S, Belhumeur P, Nayar SK (2008) Face swapping: Automatically replacing faces in photographs. ACM Trans Graph 27(3):39:1–39:8. https://doi.org/10.1145/1360612.1360638

  8. Brand A, Lal JA (2012) European best practice for quality assurance, provision and use of genome-based information and technologies. Drug Metabolism and Drug Interactions 27:177–82. https://doi.org/10.1515/dmdi-2012-0026

  9. Cavallaro A (2007) Privacy in video surveillance [In the Spotlight]. IEEE Signal Processing Magazine 24(2):168–166. https://doi.org/10.1109/MSP.2007.323270

  10. Chatzikokolakis K, Andrés M E, Bordenabe N E, Palamidessi C (2013) Broadening the scope of differential privacy using metrics. In: Privacy Enhancing Technologies, pp 82–102, https://doi.org/10.1007/978-3-642-39077-7_5

  11. Chen J, Konrad J, Ishwar P (2018) VGAN-based image representation learning for privacy-preserving facial expression recognition. In: IEEE conference on computer vision and pattern recognition workshops, pp 1570–1579. https://doi.org/10.1109/CVPRW.2018.00207

  12. Chi H, Hu YH (2015) Face de-identification using facial identity preserving features. In: IEEE global conference on signal and information processing, pp 586–590. https://doi.org/10.1109/GlobalSIP.2015.7418263

  13. Cootes TF, Edwards GJ, Taylor CJ (2001) Active appearance models. IEEE Trans Pattern Anal Mach Intell 23(6):681–685. https://doi.org/10.1109/34.927467

  14. Cormode G, Procopiuc CM, Shen E, Srivastava D, Yu T (2013) Empirical privacy and empirical utility of anonymized data. In: IEEE 29th international conference on data engineering workshops, pp 77–82. https://doi.org/10.1109/ICDEW.2013.6547431

  15. Croft WL, Sack J, Shi W (2019) Differentially private obfuscation of facial images. In: Machine learning and knowledge extraction, pp 229–249. https://doi.org/10.1007/978-3-030-29726-8_15

  16. Dhall A, Goecke R, Lucey S, Gedeon T (2011) Static facial expression analysis in tough conditions: Data, evaluation protocol and benchmark. In: IEEE international conference on computer vision workshops, pp 2106–2112, https://doi.org/10.1109/ICCVW.2011.6130508

  17. Dosovitskiy A, Springenberg JT, Tatarchenko M, Brox T (2017) Learning to generate chairs, tables and cars with convolutional networks. IEEE Trans Pattern Anal Mach Intell 39(4):692–705. https://doi.org/10.1109/TPAMI.2016.2567384

  18. Du L, Yi M, Blasch E, Ling H (2014) GARP-face: balancing privacy protection and utility preservation in face de-identification. In: IEEE international joint conference on biometrics, pp 1–8. https://doi.org/10.1109/BTAS.2014.6996249

  19. Dwork C (2006) Differential privacy. In: 33rd international conference on automata, languages and programming - volume part II, pp 1–12, https://doi.org/10.1007/11787006_1

  20. Dwork C, Roth A (2014) The algorithmic foundations of differential privacy. Found Trends Theor Comput Sci 9(3–4):211–407. https://doi.org/10.1561/0400000042

  21. Fan L (2018) Image pixelization with differential privacy. In: Data and applications security and privacy, pp 148–162, https://doi.org/10.1007/978-3-319-95729-6_10

  22. Fan L (2019) Practical image obfuscation with provable privacy. In: IEEE international conference on multimedia and expo, pp 784–789, https://doi.org/10.1109/ICME.2019.00140

  23. Ferguson T S (2019) Linear programming: A concise introduction. https://www.math.ucla.edu/tom/LP.pdf. Accessed 9 Sept 2019

  24. Flynn M (2016) Generating faces with deconvolution networks. https://github.com/zo7/deconvfaces. Accessed: 1 Nov 2018

  25. Fredrikson M, Jha S, Ristenpart T (2015) Model inversion attacks that exploit confidence information and basic countermeasures. In: 22nd ACM SIGSAC conference on computer and communications security, pp 1322–1333, https://doi.org/10.1145/2810103.2813677

  26. Frome A, Cheung G, Abdulkader A, Zennaro M, Wu B, Bissacco A, Adam H, Neven H, Vincent L (2009) Large-scale privacy protection in google street view. In: IEEE 12th international conference on computer vision, pp 2373–2380, https://doi.org/10.1109/ICCV.2009.5459413

  27. Ganta SR, Kasiviswanathan SP, Smith A (2008) Composition attacks and auxiliary information in data privacy. In: 14th ACM SIGKDD International conference on knowledge discovery and data mining, pp 265–273, https://doi.org/10.1145/1401890.1401926

  28. Glorot X, Bengio Y (2010) Understanding the difficulty of training deep feedforward neural networks. In: 13th international conference on artificial intelligence and statistics, vol 9, pp 249–256

  29. Google (2019) Google Maps. https://www.google.be/maps. Accessed 27 Feb 2019

  30. Gross R, Airoldi E, Malin B, Sweeney L (2006) Integrating utility into face de-identification. In: Privacy enhancing technologies, pp 227–242, https://doi.org/10.1007/11767831_15

  31. Gross R, Sweeney L, de la Torre F, Baker S (2006) Model-based face de-identification. In: Computer vision and pattern recognition workshop, pp 161–161, https://doi.org/10.1109/CVPRW.2006.125

  32. Harmon L (1973) The recognition of faces. Scientific American 229(5):71–82

  33. Harmon L, Julesz B (1973) Masking in visual recognition: Effects of two-dimensional filtered noise. Science 180(4091):1194–1197. https://doi.org/10.1126/science.180.4091.1194

  34. Hitaj B, Ateniese G, Pérez-Cruz F (2017) Deep models under the GAN: Information leakage from collaborative deep learning. In: ACM SIGSAC conference on computer and communications security, pp 603–618, https://doi.org/10.1145/3133956.3134012

  35. Hu X, Hu D, Zheng S, Li W, Chen F, Shu Z, Wang L (2018) How people share digital images in social networks: A questionnaire-based study of privacy decisions and access control. Multimed Tools Appl 77(14):18163–18185. https://doi.org/10.1007/s11042-017-4402-x

  36. Hudson SE, Smith I (1996) Techniques for addressing fundamental privacy and disruption tradeoffs in awareness support systems. In: ACM Conference on computer supported cooperative work, pp 248–257, https://doi.org/10.1145/240080.240295

  37. Korshunov P, Ebrahimi T (2013) Using face morphing to protect privacy. In: 10th IEEE international conference on advanced video and signal based surveillance, pp 208–213, https://doi.org/10.1109/AVSS.2013.6636641

  38. Korshunov P, Ebrahimi T (2013) Using warping for privacy protection in video surveillance. In: 18th international conference on digital signal processing, pp 1–6, https://doi.org/10.1109/ICDSP.2013.6622791

  39. Kröckel J, Bodendorf F (2012) Customer tracking and tracing data as a basis for service innovations at the point of sale. In: Annual SRII global conference, pp 691–696, https://doi.org/10.1109/SRII.2012.115

  40. Langner O, Dotsch R, Bijlstra G, Wigboldus D, Hawk S, van Knippenberg A (2010) Presentation and validation of the radboud faces database. Cognition and Emotion 24(8):1377–1388. https://doi.org/10.1080/02699930903485076

  41. Letournel G, Bugeau A, Ta V, Domenger J (2015) Face de-identification with expressions preservation. In: IEEE International conference on image processing, pp 4366–4370, https://doi.org/10.1109/ICIP.2015.7351631

  42. Levi G, Hassner T (2015) Age and gender classification using convolutional neural networks. In: IEEE computer vision and pattern recognition workshops, pp 34–42, https://doi.org/10.1109/CVPRW.2015.7301352

  43. Li S, Deng W, Du J (2017) Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild. In: IEEE Conference on computer vision and pattern recognition, vol 28, pp 2584–2593, https://doi.org/10.1109/TIP.2018.2868382

  44. Li Y, Vishwamitra N, Knijnenburg BP, Hu H, Caine K (2017) Blur vs. Block: Investigating the effectiveness of privacy-enhancing obfuscation for images. In: IEEE conference on computer vision and pattern recognition workshops, pp 1343–1351, https://doi.org/10.1109/CVPRW.2017.176

  45. Li Y, Vishwamitra N, Knijnenburg BP, Hu H, Caine K (2017) Effectiveness and users’ experience of obfuscation as a privacy-enhancing technology for sharing photos. Proc ACM Hum.-Comput Interact 1:1–24. https://doi.org/10.1145/3134702

  46. Liu X, Krahnstoever N, Yu T, Tu P (2007) What are customers looking at?. In: IEEE conference on advanced video and signal based surveillance, pp 405–410, https://doi.org/10.1109/AVSS.2007.4425345

  47. Lundqvist D, Flykt A, Öhman A (1998) The Karolinska directed emotional faces – KDEF. ISBN 91-630-7164-9

  48. Martin K, Shilton K (2016) Putting mobile application privacy in context: An empirical study of user privacy expectations for mobile devices. The Information Society 32(3):200–216. https://doi.org/10.1080/01972243.2016.1153012

  49. McSherry F, Talwar K (2007) Mechanism design via differential privacy. In: 48th Annual IEEE symposium on foundations of computer science, pp 94–103, https://doi.org/10.1109/FOCS.2007.66

  50. Meden B, Emersic Z, Struc V, Peer P (2017) k-Same-Net: Neural-network-based face deidentification. In: International conference and workshop on bioinspired intelligence, pp 1–7, https://doi.org/10.1109/IWOBI.2017.7985521

  51. Melis L, Song C, Cristofaro ED, Shmatikov V (2019) Exploiting unintended feature leakage in collaborative learning. In: IEEE symposium on security and privacy, pp 691–706, https://doi.org/10.1109/SP.2019.00029

  52. Meng L, Sun Z (2014) Face de-identification with perfect privacy protection. In: 37th international convention on information and communication technology, electronics and microelectronics, pp 1234–1239, https://doi.org/10.1109/MIPRO.2014.6859756

  53. Meng L, Sun Z, Ariyaeeinia A, Bennett KL (2014) Retaining expressions on de-identified faces. In: 37th international convention on information and communication technology, electronics and microelectronics, pp 1252–1257, https://doi.org/10.1109/MIPRO.2014.6859759

  54. Mosaddegh S, Simon L, Jurie F (2015) Photorealistic face de-identification by aggregating donors’ face components. In: Asian conference on computer vision, pp 159–174, https://doi.org/10.1007/978-3-319-16811-1_11

  55. Newton EM, Sweeney L, Malin B (2005) Preserving privacy by de-identifying face images. IEEE Trans Knowl Data Eng 17(2):232–243. https://doi.org/10.1109/TKDE.2005.32

  56. Ng CB, Tay YH, Goi BM (2012) Recognizing human gender in computer vision: A survey. In: PRICAI 2012: Trends in artificial intelligence, pp 335–346, https://doi.org/10.1007/978-3-642-32695-0_31

  57. Ng H, Winkler S (2014) A data-driven approach to cleaning large face datasets. In: IEEE international conference on image processing, pp 343–347, https://doi.org/10.1109/ICIP.2014.7025068

  58. Nissim K, Bembenek A, Wood A, Bun M, Gaboardi M, Gasser U, O’Brien DR, Vadhan S (2018) Bridging the gap between computer science and legal approaches to privacy. Harvard Journal of Law & Technology 31:687–780

  59. Nissim K, Wood A (2018) Is privacy privacy? Philosophical Transactions of the Royal Society A 376. https://doi.org/10.1098/rsta.2017.0358

  60. Padilla-López JR, Chaaraoui AA, Flórez-Revuelta F (2015) Visual privacy protection methods: A survey. Expert Syst Appl 42(9):4177–4195. https://doi.org/10.1016/j.eswa.2015.01.041

  61. Parkhi O M, Vedaldi A, Zisserman A (2015) Deep face recognition. In: British machine vision conference

  62. Cummings R, Desai D (2018) The role of differential privacy in GDPR compliance. http://pwp.gatech.edu/rachel-cummings/wp-content/uploads/sites/679/2018/09/GDPR_DiffPrivacy.pdf. Accessed 2 Oct 2019

  63. Ren Z, Lee YJ, Ryoo MS (2018) Learning to anonymize faces for privacy preserving action detection. In: European conference on computer vision, pp 639–655, https://doi.org/10.1007/978-3-030-01246-5_38

  64. Ribaric S, Ariyaeeinia A, Pavesic N (2016) De-identification for privacy protection in multimedia content: A survey. Signal Processing: Image Communication 47:131–151. https://doi.org/10.1016/j.image.2016.05.020

  65. Roy PC, Boddeti VN (2019) Mitigating information leakage in image representations: A maximum entropy approach. In: IEEE conference on computer vision and pattern recognition, pp 2586–2594, https://doi.org/10.1109/CVPR.2019.00269

  66. Samarati P, Sweeney L (1998) Protecting privacy when disclosing information: k-Anonymity and its enforcement through generalization and suppression. Tech. rep., Computer Science Laboratory, SRI International

  67. Sampat MP, Wang Z, Gupta S, Bovik AC, Markey MK (2009) Complex wavelet structural similarity: A new image similarity index. IEEE Trans Image Process 18(11):2385–2401. https://doi.org/10.1109/TIP.2009.2025923

  68. Schroff F, Kalenichenko D, Philbin J (2015) FaceNet: A unified embedding for face recognition and clustering. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR), pp 815–823, https://doi.org/10.1109/CVPR.2015.7298682

  69. Shokri R, Stronati M, Song C, Shmatikov V (2017) Membership inference attacks against machine learning models. In: IEEE symposium on security and privacy, pp 3–18, https://doi.org/10.1109/SP.2017.41

  70. Sim T, Zhang L (2015) Controllable face privacy. In: 11th IEEE international conference and workshops on automatic face and gesture recognition, vol 04, pp 1–8, https://doi.org/10.1109/FG.2015.7285018

  71. Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: 3rd international conference on learning representations

  72. Song C, Shmatikov V (2020) Overlearning reveals sensitive attributes. In: 8th international conference on learning representations

  73. Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: 2015 IEEE conference on computer vision and pattern recognition (CVPR), pp 1–9, https://doi.org/10.1109/CVPR.2015.7298594

  74. Tian YL, Kanade T, Cohn JF (2005) Facial expression analysis. In: Li S., Jain A. (eds) Handbook of face recognition, chap. 11, pp 247–275. Springer Science+Business Media Inc

  75. Triastcyn A, Faltings B (2018) Generating differentially private datasets using GANs. CoRR

  76. U.S. Department of Health & Human Services (2015) Guidance regarding methods for de-identification of protected health information in accordance with the Health Insurance Portability and Accountability Act (HIPAA) privacy rule. https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html. Accessed 9 Feb 2018

  77. Venetianer P L, Zhang Z, Scanlon A, Hu Y, Lipton A J (2007) Video verification of point of sale transactions. In: IEEE conference on advanced video and signal based surveillance, pp 411–416, https://doi.org/10.1109/AVSS.2007.4425346

  78. Winkler T, Rinner B (2014) Security and privacy protection in visual sensor networks: A survey. ACM Comput Surv 47(1):1–42. https://doi.org/10.1145/2545883

  79. Wu Y, Yang F, Xu Y, Ling H (2019) Privacy-protective-GAN for privacy preserving face de-identification. J Comput Sci Technol 34(1):47–60. https://doi.org/10.1007/s11390-019-1898-8

  80. Xie L, Lin K, Wang S, Wang F, Zhou J (2018) Differentially private generative adversarial network. CoRR

  81. Zhang X, Ji S, Wang T (2018) Differentially private releasing via deep generative model. CoRR

  82. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: From error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612. https://doi.org/10.1109/TIP.2003.819861

Acknowledgments

We gratefully acknowledge the financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC) under Grants No. RGPIN-2020-06482, No. RGPIN-2016-06253 and No. CGSD2-503941-2017.

Author information

Corresponding author

Correspondence to William L. Croft.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Special Issue on Privacy-Preserving Computing

Guest Editors: Kaiping Xue, Zhe Liu, Haojin Zhu, Miao Pan and David S.L. Wei

Appendices

Appendix A: Facial classification network architectures

Table 1 VGG D architecture [61]. We set the number of filters on the final fully connected layer to the number of identities in the dataset on which the network is applied
Table 2 FaceNet architecture [68]. Inception refers to the use of inception blocks as described in [73]. Inception blocks listed as using L2 pooling do so in place of the standard max pooling

Appendix B

Fig. 16

A comparison of the MSE achieved by the mechanisms over a wide range of privacy budgets. The left graph compares variants of the exponential mechanism and the right graph compares variants of the Laplace mechanism
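For reference, the MSE utility measure used in these comparisons can be computed as follows. This is a minimal sketch; the function name `mse` is our own, and lower values indicate an obfuscated image that is more faithful to the original.

```python
import numpy as np

def mse(original, obfuscated):
    """Mean squared error between two images of identical shape,
    computed over all pixel intensities."""
    a = original.astype(np.float64)
    b = obfuscated.astype(np.float64)
    return float(np.mean((a - b) ** 2))
```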

Fig. 17

A comparison between the blur variants of the exponential and Laplace mechanism with respect to utility measured using MSE. The left graph compares all three pixelization settings for the mechanisms and the right graph focuses on the mechanisms using a pixelization grid of size 16 for efficient use of low privacy budgets
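The pixelization referred to above averages intensities over non-overlapping b×b cells, which also shrinks the sensitivity of each released value since any single pixel contributes only 1/b² to its cell average. The sketch below follows the general idea of differentially private pixelization [21] but is not the exact mechanism evaluated in the paper; the names `pixelize` and `dp_pixelize`, the neighbourhood parameter `m` (at most m differing pixels), and the sensitivity bound 255·m/b² are simplifying assumptions.

```python
import numpy as np

def pixelize(image, b=16):
    """Average intensities over non-overlapping b x b cells, then
    upsample back to the original size (assumes the image dimensions
    are multiples of b for simplicity)."""
    h, w = image.shape
    cells = image.reshape(h // b, b, w // b, b).mean(axis=(1, 3))
    return np.kron(cells, np.ones((b, b)))

def dp_pixelize(image, b=16, epsilon=0.1, m=1, rng=None):
    """Pixelize, then add Laplace noise to each cell average.

    If neighbouring images differ in at most m pixels, each cell
    average changes by at most 255*m/b^2, which is the sensitivity
    used for the noise scale here (a simplifying assumption)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape
    cells = image.reshape(h // b, b, w // b, b).mean(axis=(1, 3))
    cells = cells + rng.laplace(0.0, (255.0 * m) / (b * b * epsilon), cells.shape)
    return np.kron(np.clip(cells, 0, 255), np.ones((b, b)))
```

A larger grid size b spends the privacy budget on fewer, less sensitive values, which is why the grid of size 16 in Fig. 17 remains usable at low budgets.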

About this article

Cite this article

Croft, W.L., Sack, JR. & Shi, W. Obfuscation of images via differential privacy: From facial images to general images. Peer-to-Peer Netw. Appl. 14, 1705–1733 (2021). https://doi.org/10.1007/s12083-021-01091-9
