Hyper Autoencoders
Neural Processing Letters (IF 3.1) Pub Date: 2020-07-31, DOI: 10.1007/s11063-020-10310-y
Derya Soydaner

We introduce the hyper autoencoder, an architecture in which a secondary hypernetwork generates the weights of the encoder and decoder layers of the primary autoencoder. The hyper autoencoder uses a one-layer linear hypernetwork that predicts all weights of the autoencoder from a single embedding vector. Because the hypernetwork is smaller than the autoencoder it generates, it acts as a regularizer. Like the vanilla autoencoder, the hyper autoencoder can be used for unsupervised or semi-supervised learning. In this study, we also present a semi-supervised model that combines convolutional neural networks and autoencoders with the hypernetwork. Our experiments on five image datasets, namely MNIST, Fashion MNIST, LFW, STL-10 and CelebA, show that the hyper autoencoder performs well on both unsupervised and semi-supervised learning problems.
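To make the weight-generation mechanism concrete, here is a minimal PyTorch sketch of the idea as stated in the abstract: a one-layer linear hypernetwork maps a single learned embedding vector to every weight of a small fully connected autoencoder. All layer sizes, the embedding dimension, and the class name HyperAutoencoder are illustrative assumptions, not the paper's configuration; the paper's exact parameterization of the hypernetwork (which keeps it smaller than the autoencoder) may differ from this naive version.

```python
# Minimal sketch, assuming a two-layer fully connected autoencoder;
# sizes and names are illustrative, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperAutoencoder(nn.Module):
    def __init__(self, in_dim=784, hid_dim=64, embed_dim=32):
        super().__init__()
        # Shapes of the primary autoencoder's weights and biases.
        self.shapes = [
            (hid_dim, in_dim), (hid_dim,),   # encoder: W1, b1
            (in_dim, hid_dim), (in_dim,),    # decoder: W2, b2
        ]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        # Learned embedding vector: the only input to the hypernetwork.
        self.z = nn.Parameter(torch.randn(embed_dim))
        # One-layer linear hypernetwork that emits all autoencoder weights.
        self.hyper = nn.Linear(embed_dim, n_params)

    def forward(self, x):
        flat = self.hyper(self.z)            # all weights in one flat vector
        params, offset = [], 0
        for shape in self.shapes:            # slice and reshape per layer
            numel = torch.Size(shape).numel()
            params.append(flat[offset:offset + numel].view(shape))
            offset += numel
        w1, b1, w2, b2 = params
        h = torch.relu(F.linear(x, w1, b1))        # encoder
        return torch.sigmoid(F.linear(h, w2, b2))  # decoder

x = torch.rand(16, 784)          # e.g. a batch of MNIST-sized images
model = HyperAutoencoder()
recon = model(x)
loss = F.mse_loss(recon, x)      # reconstruction objective
loss.backward()                  # gradients reach the hypernetwork and z
```

Note the design consequence: the autoencoder itself holds no trainable parameters. Gradients from the reconstruction loss flow through the generated weights into the hypernetwork's linear layer and the embedding vector, which are the only quantities being optimized.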


