Privacy-Preserving Cost-Sensitive Learning.
IEEE Transactions on Neural Networks and Learning Systems (IF 10.4), Pub Date: 2020-06-12, DOI: 10.1109/tnnls.2020.2996972
Yi Yang, Shuai Huang, Wei Huang, Xiangyu Chang

Cost-sensitive learning methods that guarantee privacy are becoming crucial in many applications where sensitive personal information is increasingly used. However, no optimal learning scheme has been developed in the literature for learning cost-sensitive classifiers under the constraint of differential privacy. Our approach is to first develop a unified framework for existing cost-sensitive learning methods by incorporating the weight constant and weight functions into the classical regularized empirical risk minimization framework. Then, we propose two privacy-preserving algorithms, based on output perturbation and objective perturbation, respectively, to be integrated with the cost-sensitive learning framework. We showcase how this general framework can be used analytically by deriving the privacy-preserving cost-sensitive extensions of logistic regression and the support vector machine. Experimental evidence on both synthetic and real data sets verifies that the proposed algorithms can reduce the misclassification cost effectively while satisfying the privacy requirement. A theoretical investigation is also conducted, revealing an interesting analytic relation: the choice of the weight constant and weight functions not only influences the Fisher-consistency property (the population minimizer of the expected risk under a specific loss function yields the Bayes-optimal decision rule) but also interacts with the privacy-preserving level to significantly affect classifier performance.
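To make the abstract's pipeline concrete, the following is a minimal sketch (not the authors' implementation) of cost-sensitive regularized empirical risk minimization combined with output perturbation, instantiated for logistic regression. The weight constants c_pos/c_neg, the noise scale, and the sensitivity bound are illustrative assumptions in the spirit of standard output-perturbation analyses, not the exact quantities derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def cs_logistic_objective(theta, X, y, c_pos, c_neg, lam):
    """Weighted (cost-sensitive) regularized empirical risk for logistic loss.
    Labels y are in {-1, +1}; c_pos / c_neg weight the two misclassification costs."""
    w = np.where(y == 1, c_pos, c_neg)              # per-example cost weights
    margins = y * (X @ theta)
    loss = np.mean(w * np.log1p(np.exp(-margins)))  # weighted logistic loss
    return loss + 0.5 * lam * theta @ theta         # L2 regularization

def train_private_cs_logreg(X, y, c_pos=2.0, c_neg=1.0, lam=0.1,
                            epsilon=1.0, rng=None):
    """Output perturbation: solve the weighted ERM, then add noise calibrated to an
    ASSUMED L2 sensitivity of 2 * max(c_pos, c_neg) / (n * lam * epsilon),
    with features assumed scaled so that ||x|| <= 1."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    res = minimize(cs_logistic_objective, np.zeros(d),
                   args=(X, y, c_pos, c_neg, lam), method="L-BFGS-B")
    theta_hat = res.x
    # Spherical Laplace-style noise: uniform direction, norm drawn from Gamma(d, scale).
    scale = 2.0 * max(c_pos, c_neg) / (n * lam * epsilon)  # assumed sensitivity
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    noise = direction * rng.gamma(shape=d, scale=scale)
    return theta_hat + noise

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    X = rng.normal(size=(500, 5))
    X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # enforce ||x|| <= 1
    y = np.where(X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0, 1, -1)
    theta_priv = train_private_cs_logreg(X, y, epsilon=0.5)
    acc = np.mean(np.sign(X @ theta_priv) == y)
    print(f"private cost-sensitive logistic regression accuracy: {acc:.3f}")
```

The objective-perturbation variant described in the abstract would instead add a random linear term to the objective before optimization; swapping the loss for the hinge loss gives the support vector machine extension.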

Updated: 2020-06-12