Abstract
As deep learning has developed, learning models have acquired ever more parameters, which in turn require large numbers of training samples; in practice, however, it is extremely difficult to collect enough labels to support model training. Existing methods extend a model to a new domain by seeking domain-invariant features across domains. In this paper, we propose a novel deep domain adaptation model. First, we apply several statistics to the high-level feature layers simultaneously to obtain better performance. Moreover, inspired by active learning, we propose an 'uncertainty' metric to search for hyper-parameters in the unsupervised setting. The 'uncertainty' uses entropy to describe the learning status of the current discriminator: the smaller the 'uncertainty', the more stable the discriminator's predictions on the data. Finally, the network parameters are obtained by fine-tuning a generic pre-trained deep network. Experiments show that our algorithm further improves over the compared algorithms on standard benchmarks.
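As a rough illustration of the entropy-based 'uncertainty' idea described above, the sketch below computes the mean prediction entropy of a discriminator's softmax outputs; smaller values indicate more stable predictions. The function name `uncertainty` and the NumPy formulation are our own assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def uncertainty(probs: np.ndarray) -> float:
    """Mean entropy of discriminator softmax outputs.

    probs: array of shape (n_samples, n_classes), rows summing to 1.
    Lower return values suggest the discriminator predicts more stably.
    """
    eps = 1e-12  # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return float(np.mean(entropy))

# Confident predictions yield low uncertainty; uniform ones yield high.
confident = np.array([[0.99, 0.01], [0.98, 0.02]])
uniform = np.array([[0.5, 0.5], [0.5, 0.5]])
```

Under this reading, an unsupervised hyper-parameter search would evaluate candidate settings and prefer the one whose discriminator yields the smaller `uncertainty` value.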
Acknowledgements
This work is supported by the Fundamental Research Funds for the Central Universities (No. 2017XKZD03).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Zhang, H., Ding, S. & Jia, W. Ensemble Adaptation Networks with low-cost unsupervised hyper-parameter search. Pattern Anal Applic 23, 1215–1224 (2020). https://doi.org/10.1007/s10044-019-00846-8