Encoding features robust to unseen modes of variation with attentive long short-term memory
Pattern Recognition (IF 5.898), Pub Date: 2019-12-18, DOI: 10.1016/j.patcog.2019.107159
Wissam J. Baddar; Yong Man Ro

Long short-term memory (LSTM) is a type of recurrent neural network that is efficient at encoding spatio-temporal features in dynamic sequences. Recent work has shown that an LSTM retains information related to the mode of variation in the input dynamic sequence, which reduces the discriminability of the encoded features. To encode features robust to unseen modes of variation, we devise an LSTM adaptation named attentive mode variational LSTM. The proposed attentive mode variational LSTM uses attention to separate the input dynamic sequence into two parts: (1) task-relevant dynamic sequence features and (2) task-irrelevant static sequence features. The task-relevant dynamic features are used to encode and emphasize the dynamics in the input sequence. The task-irrelevant static features are used to encode the mode of variation in the input dynamic sequence. Finally, the attentive mode variational LSTM suppresses the effect of mode variation through a shared output gate, yielding a spatio-temporal feature robust to unseen variations. The effectiveness of the proposed attentive mode variational LSTM has been verified on two tasks: facial expression recognition and human action recognition. Comprehensive and extensive experiments verify that the proposed method encodes spatio-temporal features robust to variations unseen during training.
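The abstract describes the mechanism at a high level only; the sketch below is one plausible reading, not the paper's actual implementation. It assumes a per-feature attention mask that splits each input frame into a dynamic part (which drives the LSTM gates) and a static part (which encodes the mode of variation and is suppressed under a shared output gate). All weight names and the mode-suppression step (subtracting the static encoding before the output gate) are my own illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

D, H, T = 8, 4, 5  # input size, hidden size, sequence length (illustrative)

# Randomly initialised parameters; in the paper these would be learned.
Wa = rng.standard_normal((D, D + H)) * 0.1      # attention over input features
Wg = rng.standard_normal((4 * H, D + H)) * 0.1  # LSTM gates on the dynamic path
Ws = rng.standard_normal((H, D)) * 0.1          # static (mode-of-variation) encoder

h = np.zeros(H)
c = np.zeros(H)
for t in range(T):
    x = rng.standard_normal(D)

    # (1) Attention separates task-relevant dynamics from static content.
    alpha = sigmoid(Wa @ np.concatenate([x, h]))  # per-feature attention mask
    x_dyn = alpha * x                             # task-relevant dynamic features
    x_sta = (1.0 - alpha) * x                     # task-irrelevant static features

    # (2) Standard LSTM gating driven by the dynamic features only.
    g = Wg @ np.concatenate([x_dyn, h])
    i, f, o, u = np.split(g, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(u)

    # (3) The static features encode the mode of variation; the shared
    #     output gate suppresses its effect on the emitted feature
    #     (subtraction here is an assumption, not the paper's formula).
    m = np.tanh(Ws @ x_sta)
    h = sigmoid(o) * (np.tanh(c) - m)

print(h.shape)
```

The final hidden state `h` is the mode-suppressed spatio-temporal feature that would be fed to a classifier for expression or action recognition.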
Updated: 2020-01-04
