Ensemble Adaptation Networks with low-cost unsupervised hyper-parameter search
Pattern Analysis and Applications (IF 3.7) Pub Date: 2019-08-28, DOI: 10.1007/s10044-019-00846-8
Haotian Zhang, Shifei Ding, Weikuan Jia

The development of deep learning has given learning models ever more parameters to fit, which means that sufficient training samples are required. On the other hand, it is extremely difficult to obtain large numbers of labels to support the model training process. Existing methods can extend a model to a new domain by looking for domain-invariant features across different domains. In this paper, we propose a novel deep domain adaptation model. First, we apply several statistics to the high-level feature layers simultaneously to obtain better performance. Moreover, inspired by active learning, we propose an ‘uncertainty’ metric to search for hyper-parameters in the unsupervised setting. The ‘uncertainty’ uses entropy to describe the learning status of the current discriminator: the smaller the ‘uncertainty’, the more stably the discriminator predicts the data. Finally, the network parameters are obtained by fine-tuning a generic pre-trained deep network. In conclusion, the performance of our algorithm further improves over the compared algorithms on standard benchmarks.
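The abstract does not spell out how the entropy-based ‘uncertainty’ is computed, but the idea it describes, scoring a discriminator by the Shannon entropy of its predictions on unlabeled target data and preferring the hyper-parameter setting that minimizes it, can be sketched as follows. This is a minimal illustration assuming softmax class-probability outputs; the function name and toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def prediction_uncertainty(probs):
    """Mean Shannon entropy of per-sample class probabilities.

    probs: (n_samples, n_classes) array of softmax outputs on
    unlabeled target-domain data. Lower values indicate that the
    discriminator predicts the data more confidently/stably.
    """
    eps = 1e-12  # avoid log(0)
    per_sample = -np.sum(probs * np.log(probs + eps), axis=1)
    return float(np.mean(per_sample))

# Toy comparison: a confident discriminator scores lower than an uncertain one.
confident = np.array([[0.95, 0.05], [0.02, 0.98]])
uncertain = np.array([[0.55, 0.45], [0.48, 0.52]])
assert prediction_uncertainty(confident) < prediction_uncertainty(uncertain)
```

Under this reading, an unsupervised hyper-parameter search would train one model per candidate setting and keep the candidate whose discriminator yields the lowest mean entropy on the target data, requiring no target labels at all.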
