-
A Multi-Stage Visual Perception Approach for Image Emotion Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-03-08 Jicai Pan, Jinqiao Lu, Shangfei Wang
-
Can Large Language Models Assess Personality from Asynchronous Video Interviews? A Comprehensive Evaluation of Validity, Reliability, Fairness, and Rating Patterns IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-03-08 Tianyi Zhang, Antonis Koutsoumpis, Janneke K. Oostrom, Djurre Holtrop, Sina Ghassemi, Reinout E. de Vries
-
Analyzing Continuous-Time and Sentence-Level Annotations for Speech Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-03-01 Luz Martinez-Lucas, Wei-Cheng Lin, Carlos Busso
-
GDDN: Graph Domain Disentanglement Network for Generalizable EEG Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-29 Bianna Chen, C. L. Philip Chen, Tong Zhang
-
Guest Editorial: Ethics in Affective Computing IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-29 Jonathan Gratch, Gretchen Greene, Rosalind Picard, Lachlan Urquhart, Michel Valstar
Stunning advances in machine learning are heralding a new era in sensing, interpreting, simulating and stimulating human emotion. In the human sciences, research is increasingly highlighting the explanatory power of emotions, feelings, and other affective processes to predict how we think and behave. This is beginning to translate into an explosion of applications that can improve human wellbeing including
-
Dep-FER: Facial Expression Recognition in Depressed Patients Based on Voluntary Facial Expression Mimicry IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-27 Jiayu Ye, Yanhong Yu, Yunshao Zheng, Yang Liu, Qingxiang Wang
-
Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-26 Weidong Chen, Xiaofen Xing, Peihao Chen, Xiangmin Xu
-
An Analysis of Physiological and Psychological Responses in Virtual Reality and Flat Screen Gaming IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-22 Ritik Vatsal, Shrivatsa Mishra, Rushil Thareja, Mrinmoy Chakrabarty, Ojaswa Sharma, Jainendra Shukla
-
Continuous Emotion Ambiguity Prediction: Modeling with Beta Distributions IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-20 Deboshree Bose, Vidhyasaharan Sethu, Eliathamby Ambikairajah
-
Facial Action Unit Detection and Intensity Estimation from Self-supervised Representation IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-19 Bowen Ma, Rudong An, Wei Zhang, Yu Ding, Zeng Zhao, Rongsheng Zhang, Tangjie Lv, Changjie Fan, Zhipeng Hu
-
Cross-Task Inconsistency Based Active Learning (CTIAL) for Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-16 Yifan Xu, Xue Jiang, Dongrui Wu
-
Bodily Sensation Map vs. Bodily Motion Map: Visualizing and Analyzing Emotional Body Motions IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-14 Myeongul Jung, Youngwug Cho, Jejoong Kim, Hyungsook Kim, Kwanguk Kim
-
Looking into Gait for Perceiving Emotions via Bilateral Posture and Movement Graph Convolutional Networks IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-13 Yingjie Zhai, Guoli Jia, Yu-Kun Lai, Jing Zhang, Jufeng Yang, Dacheng Tao
-
Multi-Modal Hierarchical Empathetic Framework for Social Robots With Affective Body Control IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-12 Yue Gao, Yangqing Fu, Ming Sun, Feng Gao
-
Avatar-Based Feedback in Job Interview Training Impacts Action Identities and Anxiety IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-08 Sarinasadat Hosseini, Jingyu Quan, Xiaoqi Deng, Yoshihiro Miyake, Takayuki Nozawa
-
Novel VR-Based Biofeedback Systems: A Comparison Between Heart Rate Variability- and Electrodermal Activity-Driven Approaches IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-08 Andrea Baldini, Elisabetta Patron, Claudio Gentili, Enzo Pasquale Scilingo, Alberto Greco
-
An Open-source Benchmark of Deep Learning Models for Audio-visual Apparent and Self-reported Personality Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-08 Rongfan Liao, Siyang Song, Hatice Gunes
-
Emotion Recognition in Conversation Based on a Dynamic Complementary Graph Convolutional Network IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-02-01 Zhenyu Yang, Xiaoyang Li, Yuhu Cheng, Tong Zhang, Xuesong Wang
-
Learning With Rater-Expanded Label Space to Improve Speech Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-31 Shreya G. Upadhyay, Woan-Shiuan Chien, Bo-Hao Su, Chi-Chun Lee
-
A Multi-Level Alignment and Cross-Modal Unified Semantic Graph Refinement Network for Conversational Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-31 Xiaoheng Zhang, Weigang Cui, Bin Hu, Yang Li
-
Modeling the Interplay Between Cohesion Dimensions: a Challenge for Group Affective Emergent States IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-30 Lucien Maman, Nale Lehmann-Willenbrock, Mohamed Chetouani, Laurence Likforman-Sulem, Giovanna Varni
-
Exploring Retrospective Annotation in Long-videos for Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-29 Patricia Bota, Pablo Cesar, Ana Fred, Hugo Placido da Silva
-
CFDA-CSF: A Multi-modal Domain Adaptation Method for Cross-subject Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-23 Magdiel Jiménez-Guarneros, Gibran Fuentes-Pineda
-
Show me How You Use Your Mouse and I Tell You How You Feel? Sensing Affect with the Computer Mouse IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-23 Paul Freihaut, Anja S. Göritz
-
How Virtual Reality Therapy Affects Refugees from Ukraine - acute stress reduction pilot study IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-10 Dorota Kamińska, Grzegorz Zwoliński, Dorota Merecz-Kot
-
Anthropomorphism and Affective Perception: Dimensions, Measurements, and Interdependencies in Aerial Robotics IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-04 Viviane Herdel, Anastasia Kuzminykh, Yisrael Parmet, Jessica R. Cauchard
-
Gusa: Graph-based Unsupervised Subdomain Adaptation for Cross-Subject EEG Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-04 Xiaojun Li, C. L. Philip Chen, Bianna Chen, Tong Zhang
-
MASANet: Multi-Aspect Semantic Auxiliary Network for Visual Sentiment Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2024-01-04 Jinglun Cen, Chunmei Qing, Haochun Ou, Xiangmin Xu, Junpeng Tan
-
Editorial: Special Issue on Unobtrusive Physiological Measurement Methods for Affective Applications IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-11-28 Ioannis T. Pavlidis, Theodora Chaspari, Daniel McDuff
In the formative years of Affective Computing [1], from the late 1990s and into the early 2000s, a significant fraction of research attention was focused on the development of methods for unobtrusive physiological measurement. It quickly became obvious that wiring people with electrodes and strapping cumbersome hardware to their bodies was not only restricting the types of experiments that could be
-
Automatic Deceit Detection Through Multimodal Analysis of High-Stake Court-Trials IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-10-05 Berat Biçer, Hamdi Dibeklioğlu
In this article, we propose the use of convolutional self-attention for attention-based representation learning, while replacing traditional vectorization methods with a transformer as the backbone of our speech model for transfer learning within our automatic deceit detection framework. This design performs a multimodal data analysis and applies fusion to merge visual, vocal, and speech (textual) channels;
-
Guest Editorial Neurosymbolic AI for Sentiment Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-09-18 Frank Xing, Björn Schuller, Iti Chaturvedi, Erik Cambria, Amir Hussain
Neural network-based methods, especially deep learning, have been a burgeoning area in AI research and have been successful in tackling the expanding data volume as we move into a digital age. Today, the neural network-based methods are not only used for low-level cognitive tasks, such as recognizing objects and spotting keywords, but they have also been deployed in various industrial information systems
-
Spatio-Temporal Graph Analytics on Secondary Affect Data for Improving Trustworthy Emotional AI IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-07-20 Md Taufeeq Uddin, Lijun Yin, Shaun Canavan
Ethical affective computing (AC) requires maximizing the benefits to users while minimizing its harm to obtain trust from users. This requires responsible development and deployment to ensure fairness, bias mitigation, privacy preservation, and accountability. To obtain this, we require methodologies that can quantify, visualize, analyze, and mine insights from affect data. Hence, in this article,
-
WiFE: WiFi and Vision Based Unobtrusive Emotion Recognition via Gesture and Facial Expression IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-06-13 Yu Gu, Xiang Zhang, Huan Yan, Jingyang Huang, Zhi Liu, Mianxiong Dong, Fuji Ren
Emotion plays a critical role in making the computer more human-like. As the first and most essential step, emotion recognition emerges recently as a hot but relatively nascent topic, i.e., current research mainly focuses on single modality (e.g., facial expression) while human emotion expressions are multi-modal in nature. To this end, we propose an unobtrusive emotion recognition system leveraging
-
Two Birds With One Stone: Knowledge-Embedded Temporal Convolutional Transformer for Depression Detection and Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-06-05 Wenbo Zheng, Lan Yan, Fei-Yue Wang
Depression is a critical problem in modern society that affects an estimated 350 million people worldwide, causing feelings of sadness and a lack of interest and pleasure. Emotional disorders are gaining interest and are closely entwined with depression, because one contributes to an understanding of the other. Despite the achievements in the two separate tasks of emotion recognition and depression
-
An (E)Affective Bind: Situated Affectivity and the Prospect of Affect Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-29 Jason Branford
Several prominent criticisms have recently challenged the possibility of algorithmically determining or recognising human affect. This paper ethically evaluates one underexplored avenue for overcoming such deficiencies in categorical affect recognition technologies (ARTs). Specifically, the emerging literature on “situated affectivity” offers valuable guidance on three fronts. First, it conceptually
-
Opacity, Transparency, and the Ethics of Affective Computing IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-23 Manohar Kumar, Aisha Aijaz, Omkar Chattar, Jainendra Shukla, Raghava Mutharaju
Human opacity is the intrinsic quality of unknowability of human beings with respect to machines. The descriptive relationship between humans and machines, which captures how much information one can gather about the other, can be explicated using an opacity-transparency relationship. This relationship allows us to describe and normatively evaluate a spectrum of opacity where humans and machines may
-
A Quantum Probability Driven Framework for Joint Multi-Modal Sarcasm, Sentiment and Emotion Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-23 Yaochen Liu, Yazhou Zhang, Dawei Song
Sarcasm, sentiment, and emotion are three typical kinds of spontaneous affective responses of humans to external events and they are tightly intertwined with each other. Such events may be expressed in multiple modalities (e.g., linguistic, visual and acoustic), e.g., multi-modal conversations. Joint analysis of humans’ multi-modal sarcasm, sentiment, and emotion is an important yet challenging topic
-
The Ethics of AI in Games IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-16 David Melhart, Julian Togelius, Benedikte Mikkelsen, Christoffer Holmgård, Georgios N. Yannakakis
Video games are one of the richest and most popular forms of human-computer interaction and, hence, their role is critical for our understanding of human behaviour and affect at a large scale. As artificial intelligence (AI) tools are gradually adopted by the game industry a series of ethical concerns arise. Such concerns, however, have so far not been extensively discussed in a video game context
-
Facial Expression Recognition in Classrooms: Ethical Considerations and Proposed Guidelines for Affect Detection in Educational Settings IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-15 Allison Macey Banzon, Jonathan Beever, Michelle Taub
Recent technological and educational shifts have made it possible to capture students’ facial expressions during learning with the goal of detecting learners’ emotional states. Those interested in affect detection argue these tools will support automated emotions-based learning interventions, providing educational professionals with the opportunity to develop individualized, emotionally responsive
-
Efficient Multimodal Transformer With Dual-Level Feature Restoration for Robust Multimodal Sentiment Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-10 Licai Sun, Zheng Lian, Bin Liu, Jianhua Tao
With the proliferation of user-generated online videos, Multimodal Sentiment Analysis (MSA) has attracted increasing attention recently. Despite significant progress, there are still two major challenges on the way towards robust MSA: 1) inefficiency when modeling cross-modal interactions in unaligned multimodal data; and 2) vulnerability to random modality feature missing which typically occurs in
-
Crowdsourcing Affective Annotations Via fNIRS-BCI IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-08 Tuukka Ruotsalo, Kalle Mäkelä, Michiel Spapé
Affective annotation refers to the process of labeling media content based on the emotions they evoke. Since such experiences are inherently subjective and depend on individual differences, the central challenge is associating digital content with its affective, interindividual experience. Here, we present a first-of-its-kind methodology for affective annotation directly from brain signals by monitoring
-
A Review of Tools and Methods for Detection, Analysis, and Prediction of Allostatic Load Due to Workplace Stress IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-04 Karl Magtibay, Karthikeyan Umapathy
Chronic stress risks an individual's overall well-being. Chronic stress is associated with allostatic load, the body's wear-and-tear due to prolonged heightened physiological and psychological states. Increased allostatic load among workers increases their risk of injuries and the likelihood of diseases and illnesses. An allostatic load model could explain the basis of a stress response. Stress research
-
WavDepressionNet: Automatic Depression Level Prediction via Raw Speech Signals IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-03 Mingyue Niu, Jianhua Tao, Yongwei Li, Yong Qin, Ya Li
Physiological reports have confirmed that there are differences in speech signals between depressed and healthy individuals. Therefore, as an application in the field of affective computing, automatic depression level prediction through speech signals has received the attention of researchers, which often estimate the depression severity of individuals by the Fourier or Mel spectrograms of speech signals
-
Empirical Validation of an Agent-Based Model of Emotion Contagion IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-05-01 Erik Stefan van Haeringen, Emmeke Anna Veltmeijer, Charlotte Gerritsen
In recent years, many agent-based models of human groups have implemented a mechanism of emotion contagion, yet empirical validation is lagging behind. The aim of the present paper is to validate an agent-based model of emotion contagion at the level of group emotion, by comparing simulations against the emotional development of real people in small groups. To study the effect of emotion contagion
-
The Role of Preprocessing for Word Representation Learning in Affective Tasks IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-25 Nastaran Babanejad, Heidar Davoudi, Ameeta Agrawal, Aijun An, Manos Papagelis
Affective tasks, including sentiment analysis, emotion classification, and sarcasm detection have drawn a lot of attention in recent years due to a broad range of useful applications in various domains. The main goal of affect detection tasks is to recognize states such as mood, sentiment, and emotions from textual data (e.g., news articles or product reviews). Despite the importance of utilizing preprocessing
-
Pose-Aware Facial Expression Recognition Assisted by Expression Descriptions IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-17 Shangfei Wang, Yi Wu, Yanan Chang, Guoming Li, Meng Mao
Although expression descriptions provide additional information about facial behaviors despite of different poses, and pose features are beneficial to adapt to pose variety, neither has been fully leveraged in facial expression recognition. This paper proposes a pose-aware text-assisted facial expression recognition method using cross-modality attention. Specifically, the method contains three components
-
Cross-Day Data Diversity Improves Inter-Individual Emotion Commonality of Spatio-Spectral EEG Signatures Using Independent Component Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-13 Yi-Wei Shen, Yuan-Pin Lin
Electroencephalogram (EEG) variability poses a great challenge to the affective brain-computer interface (aBCI) for practical applications. Most aBCI frameworks have been demonstrated successfully but deliberated on single-day data, which can be realistically susceptible to psychophysiological changes and further hinder the exploration of inter-individual EEG commonality. This study proposes a multiple-day
-
LGSNet: A Two-Stream Network for Micro- and Macro-Expression Spotting With Background Modeling IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-13 Wang-Wang Yu, Jingwen Jiang, Kai-Fu Yang, Hong-Mei Yan, Yong-Jie Li
Micro- and macro-expression spotting in an untrimmed video is a challenging task, due to the mass generation of false positive samples. Most existing methods localize higher response areas by extracting hand-crafted features or cropping specific regions from all or some key raw images. However, these methods either neglect the continuous temporal information or model the inherent human motion paradigms
-
Modeling Uncertainty for Low-Resolution Facial Expression Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-10 Ling Lo, Bo-Kai Ruan, Hong-Han Shuai, Wen-Huang Cheng
Recently, facial expression recognition techniques have made significant progress on high-resolution web images. However, in real-world applications, the obtained images are often with low resolution since they are mostly captured in a wide range of public spaces. As a result, the ambiguity of the expression labels hinders recognition performance due to not only subjective emotion annotations but also
-
Multimodal Sentiment Analysis Based on Attentional Temporal Convolutional Network and Multi-Layer Feature Fusion IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-07 Hongju Cheng, Zizhen Yang, Xiaoqi Zhang, Yang Yang
Multimodal sentiment analysis aims to extract and integrate information from different modalities to accurately identify the sentiment expressed in multimodal data. How to effectively capture the relevant information within a specific modality and how to fully exploit the complementary information among multiple modalities are two major challenges in multimodal sentiment analysis. Traditional approaches
-
Group Synchrony for Emotion Recognition Using Physiological Signals IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-07 Patrícia Bota, Tianyi Zhang, Abdallah El Ali, Ana Fred, Hugo Plácido da Silva, Pablo Cesar
During group interactions, we react and modulate our emotions and behaviour to the group through phenomena including emotion contagion and physiological synchrony. Previous work on emotion recognition through video/image has shown that group context information improves the classification performance. However, when using physiological data, literature mostly focuses on intrapersonal models that leave-out
-
Data Leakage and Evaluation Issues in Micro-Expression Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-06 Tuomas Varanka, Yante Li, Wei Peng, Guoying Zhao
Micro-expressions have drawn increasing interest lately due to various potential applications. The task is, however, difficult as it incorporates many challenges from the fields of computer vision, machine learning and emotional sciences. Due to the spontaneous and subtle characteristics of micro-expressions, the available training and testing data are limited, which make evaluation complex. We show
-
Emotion Arousal Assessment Based on Multimodal Physiological Signals for Game Users IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-06 Rongyang Li, Jianguo Ding, Huansheng Ning
Emotional arousal, an essential dimension of game users’ experience, plays a crucial role in determining whether a game is successful. Game users’ emotion arousal assessment (GUEA) is of great importance. However, GUEA often faces challenges, such as selecting emotion-inducing games, labeling emotional arousal, and improving accuracy. In this study, the scheme for verifying the effectiveness of emotion-induced
-
Transformer-Based Self-Supervised Multimodal Representation Learning for Wearable Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-03 Yujin Wu, Mohamed Daoudi, Ali Amad
Recently, wearable emotion recognition based on peripheral physiological signals has drawn massive attention due to its less invasive nature and its applicability in real-life scenarios. However, how to effectively fuse multimodal data remains a challenging problem. Moreover, traditional fully-supervised based approaches suffer from overfitting given limited labeled data. To address the above issues
-
Unconstrained Facial Expression Recognition With No-Reference De-Elements Learning IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-04-03 Hangyu Li, Nannan Wang, Xi Yang, Xiaoyu Wang, Xinbo Gao
Most unconstrained facial expression recognition (FER) methods take original facial images as inputs to learn discriminative features by well-designed loss functions, which cannot reflect important visual information in faces. Although existing methods have explored the visual information of constrained facial expressions, there is no explicit modeling of what visual information is important for unconstrained
-
Context-Aware Dynamic Word Embeddings for Aspect Term Extraction IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-03-29 Jingyun Xu, Jiayuan Xie, Yi Cai, Zehang Lin, Ho-Fung Leung, Qing Li, Tat-Seng Chua
The aspect term extraction (ATE) task aims to extract aspect terms describing a part or an attribute of a product from review sentences. Most existing works rely on either general or domain embedding to address this problem. Despite the promising results, the importance of general and domain embeddings is still ignored by most methods, resulting in degraded performances. Besides, word embedding is
-
GA2MIF: Graph and Attention Based Two-Stage Multi-Source Information Fusion for Conversational Emotion Detection IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-03-24 Jiang Li, Xiaoping Wang, Guoqing Lv, Zhigang Zeng
Multimodal Emotion Recognition in Conversation (ERC) plays an influential role in the field of human-computer interaction and conversational robotics since it can motivate machines to provide empathetic services. Multimodal data modeling is an up-and-coming research area in recent years, which is inspired by human capability to integrate multiple senses. Several graph-based approaches claim to capture
-
MIA-Net: Multi-Modal Interactive Attention Network for Multi-Modal Affective Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-03-20 Shuzhen Li, Tong Zhang, Bianna Chen, C. L. Philip Chen
When a multi-modal affective analysis model generalizes from a bimodal task to a trimodal or multi-modal task, it is usually transformed into a hierarchical fusion model based on every two pairwise modalities, similar to a binary tree structure. This easily leads to large growth in model parameters and computation as the number of modalities increases, which limits the model's generalization. Moreover
-
A High-Quality Landmarked Infrared Eye Video Dataset (IREye4Task): Eye Behaviors, Insights and Benchmarks for Wearable Mental State Analysis IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-03-20 Siyuan Chen, Julien Epps
Sensing the mental state induced by different task contexts, where cognition is a focus, is as important as sensing the affective state where emotion is induced in the foreground of consciousness, because completing tasks is part of every waking moment of life. However, few datasets are publicly available to advance mental state analysis, especially those using the eye as the sensing modality with
-
Applying Segment-Level Attention on Bi-Modal Transformer Encoder for Audio-Visual Emotion Recognition IEEE Trans. Affect. Comput. (IF 11.2) Pub Date : 2023-03-17 Jia-Hao Hsu, Chung-Hsien Wu
Emotions can be expressed through multiple complementary modalities. This study selected speech and facial expressions as modalities by which to recognize emotions. Current audiovisual emotion recognition models perform supervised learning using signal-level inputs. Such models are presumed to characterize the temporal relationships in signals. In this study, supervised learning was performed on segment-level