Absent Multiple Kernel Learning Algorithms
IEEE Transactions on Pattern Analysis and Machine Intelligence (IF 20.8), Pub Date: 2019-01-29, DOI: 10.1109/tpami.2019.2895608
Xinwang Liu, Lei Wang, Xinzhong Zhu, Miaomiao Li, En Zhu, Tongliang Liu, Li Liu, Yong Dou, Jianping Yin

Multiple kernel learning (MKL) has been intensively studied during the past decade. It optimally combines the multiple channels of each sample to improve classification performance. However, existing MKL algorithms cannot effectively handle the situation where some channels of a sample are missing, which is not uncommon in practical applications. This paper proposes three absent MKL (AMKL) algorithms to address this issue. Unlike existing approaches, which first impute the missing channels and then deploy a standard MKL algorithm on the imputed data, our algorithms directly classify each sample based on its observed channels, without performing imputation. Specifically, we define a margin for each sample in its own relevant space, i.e., the space corresponding to that sample's observed channels. The proposed AMKL algorithms then maximize the minimum of all sample-based margins, which leads to a difficult optimization problem. We first provide two two-step iterative algorithms that solve this problem approximately. We then show that the problem can be reformulated as a convex one by applying the representer theorem, which allows it to be readily solved with existing convex optimization packages. In addition, we provide a generalization error bound that justifies the proposed AMKL algorithms from a theoretical perspective. Extensive experiments are conducted on nine UCI and six MKL benchmark datasets to compare the proposed algorithms with existing imputation-based methods. As demonstrated, our algorithms achieve superior performance, and the improvement becomes more significant as the missing ratio increases.
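The core idea above, evaluating each sample's margin only over the base kernels whose channels it actually observes, rather than imputing the missing ones, can be illustrated with a small sketch. This is not the paper's implementation; the function name, the dense kernel arrays, and the fixed dual coefficients `alpha`, bias `b`, and kernel weights `mu` are all illustrative assumptions for a binary SVM-style decision function.

```python
import numpy as np

def sample_margins(kernels, observed, alpha, y, b, mu):
    """Per-sample margins where each sample uses only its observed channels.

    kernels : (P, n, n) array, one precomputed base kernel per channel
    observed: (n, P) boolean mask; observed[i, p] is True if channel p
              is present for sample i
    alpha   : (n,) dual coefficients of a trained classifier
    y       : (n,) labels in {-1, +1}
    b       : scalar bias
    mu      : (P,) kernel combination weights
    """
    P, n, _ = kernels.shape
    margins = np.empty(n)
    for i in range(n):
        # Combine only the base kernels observed for sample i: this is the
        # "relevant space" of the sample; absent channels are simply skipped,
        # not imputed.
        k_i = sum(mu[p] * kernels[p][:, i] for p in range(P) if observed[i, p])
        # Margin of sample i under the decision function
        # f(x_i) = sum_j alpha_j y_j K_i(x_j, x_i) + b
        margins[i] = y[i] * ((alpha * y) @ k_i + b)
    return margins
```

The quantity the AMKL objective maximizes would then correspond to `margins.min()` over the training set; making that inner minimum large is what leads to the min-max optimization described above.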

Updated: 2024-08-22