Improving the Interpretability through Maximizing Mutual Information for EEG Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-18 Hua Yang, C. L. Philip Chen, Bianna Chen, Tong Zhang
-
Decoding Musical Neural Activity in Patients With Disorders of Consciousness Through Self-Supervised Contrastive Domain Generalization IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-17 Honghua Cai, Jiahui Pan, Qiuyi Xiao, Jiarui Jin, Yuanqing Li, Qiuyou Xie
-
Dynamic Emotion-Dependent Network with Relational Subgraph Interaction for Multimodal Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-16 Ye Wang, Wei Zhang, Ke Liu, Wei Wu, Feng Hu, Hong Yu, Guoyin Wang
-
Multi-scale Promoted Self-adjusting Correlation Learning for Facial Action Unit Detection IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-13 Xin Liu, Kaishen Yuan, Xuesong Niu, Jingang Shi, Zitong Yu, Huanjing Yue, Jingyu Yang
-
Multiscale Facial Expression Recognition Based on Dynamic Global and Static Local Attention IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-11 Jie Xu, Yang Li, Guanci Yang, Ling He, Kexin Luo
-
Deep Learning Approaches for Stress Detection: A Survey IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-09 Maria Kyrou, Ioannis Kompatsiaris, Panagiotis C. Petrantonakis
-
Hierarchical Knowledge Stripping for Multimodal Sentiment Analysis IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-09 Aolin Xiong, Ying Zeng, Haifeng Hu
-
JADFER: Exploring Spatial-Contextual Interaction with Joint Attention Dropping for Facial Expression Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-05 Yu Gao, Weihong Ren, Weibo Jiang, Qian Dong, Wei Nie, Wenhao Wu, Honghai Liu
-
FERMixNet: An Occlusion Robust Facial Expression Recognition Model with Facial Mixing Augmentation and Mid-Level Representation Learning IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-03 Yansong Huang, Junjie Peng, Wenqiang Zhang, Tong Zhao, Gan Chen, Shuhua Tan, Fen Yi, Lu Wang
-
From Static to Dynamic: Adapting Landmark-Aware Image Models for Facial Expression Recognition in Videos IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-09-03 Yin Chen, Jia Li, Shiguang Shan, Meng Wang, Richang Hong
-
Mobile Virtual Assistant for Multi-Modal Depression-Level Stratification IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-08-28 Eric Hsiao-Kuang Wu, Ting-Yu Gao, Chia-Ru Chung, Chun-Chuan Chen, Chia-Fen Tsai, Shih-Ching Yeh
-
IMGWOFS: A Feature Selector with Trade-off between Conflict Objectives for EEG-based Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-08-27 Gang Luo, Shuting Sun, Chang Yan, Shanshan Qu, Dixin Wang, Na Chu, Xuesong Liu, Fuze Tian, Kun Qian, Xiaowei Li, Bin Hu
-
Bayesian Optimization with Tree Ensembles to Improve Depression Screening on Textual Datasets IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-08-13 Tingting Zhao, ML Tlachac
-
Hierarchical Encoding and Fusion of Brain Functions for Depression Subtype Classification IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-05-15 Mengjun Liu, Huifeng Zhang, Mianxin Liu, Dongdong Chen, Rubai Zhou, Wenxian Lu, Lichi Zhang, Dinggang Shen, Qian Wang, Daihui Peng
Depression is a serious mental disorder with complex etiology, exhibiting strong heterogeneity in clinical manifestations such as various subtypes. Research on depression subtypes may deepen the understanding of the disease, contributing to the diagnosis and prognosis. While brain functional network and graph neural networks (GNNs) provide such a means, the task is still challenged by limited feature
-
Joint Training on Multiple Datasets With Inconsistent Labeling Criteria for Facial Expression Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-04-02 Chengyan Yu, Dong Zhang, Wei Zou, Ming Li
One potential way to enhance the performance of facial expression recognition (FER) is to augment the training set by increasing the number of samples. By incorporating multiple FER datasets, deep learning models can extract more discriminative features. However, the inconsistent labeling criteria and subjective biases found in annotated FER datasets can significantly hinder the recognition accuracy
-
VAD: A Video Affective Dataset with Danmu IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-03-28 Shangfei Wang, Xin Li, Feiyi Zheng, Jicai Pan, Xuewei Li, Yanan Chang, Zhou'an Zhu, Qiong Li, Jiahe Wang, Yufei Xiao
Although video affective content analysis has great potential in many applications, it has not been thoroughly studied due to limited datasets. In this paper, we construct a large-scale video affective dataset with danmu (VAD). It consists of 19,267 elaborately segmented video clips from user-generated videos. The VAD dataset is annotated by the crowdsourcing platform with discrete valence, arousal
-
Fusion and Discrimination: A Multimodal Graph Contrastive Learning Framework for Multimodal Sarcasm Detection IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-03-21 Bin Liang, Lin Gui, Yulan He, Erik Cambria, Ruifeng Xu
-
Emotion-Aware Multimodal Fusion for Meme Emotion Detection IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-03-20 Shivam Sharma, Ramaneswaran S, Md. Shad Akhtar, Tanmoy Chakraborty
The ever-evolving social media discourse has witnessed an overwhelming use of memes to express opinions or dissent. Besides being misused for spreading malcontent, they are mined by corporations and political parties to glean the public's opinion. Therefore, memes predominantly offer affect-enriched insights towards ascertaining the societal psyche. However, the current approaches are yet to model
-
Contrastive Learning based Modality-Invariant Feature Acquisition for Robust Multimodal Emotion Recognition with Missing Modalities IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-03-18 Rui Liu, Haolin Zuo, Zheng Lian, Bjorn W. Schuller, Haizhou Li
Multimodal emotion recognition (MER) aims to understand the way that humans express their emotions by exploring complementary information across modalities. However, it is hard to guarantee that full-modality data is always available in real-world scenarios. To deal with missing modalities, researchers focused on meaningful joint multimodal representation learning during cross-modal missing modality
-
A Multi-Stage Visual Perception Approach for Image Emotion Analysis IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-03-08 Jicai Pan, Jinqiao Lu, Shangfei Wang
Most current methods for image emotion analysis suffer from the affective gap, in which features directly extracted from images are supervised by a single emotional label, which may not align with users’ perceived emotions. To effectively address this limitation, this article introduces a novel multi-stage perception approach inspired by the human staged emotion perception process. The proposed approach
-
Can Large Language Models Assess Personality From Asynchronous Video Interviews? A Comprehensive Evaluation of Validity, Reliability, Fairness, and Rating Patterns IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-03-08 Tianyi Zhang, Antonis Koutsoumpis, Janneke K. Oostrom, Djurre Holtrop, Sina Ghassemi, Reinout E. de Vries
The advent of Artificial Intelligence (AI) technologies has precipitated the rise of asynchronous video interviews (AVIs) as an alternative to conventional job interviews. These one-way video interviews are conducted online and can be analyzed using AI algorithms to automate and speed up the selection procedure. In particular, the swift advancement of Large Language Models (LLMs) has significantly
-
Analyzing Continuous-Time and Sentence-Level Annotations for Speech Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-03-01 Luz Martinez-Lucas, Wei-Cheng Lin, Carlos Busso
The emotional content of several databases is annotated with continuous-time (CT) annotations, providing traces with frame-by-frame scores describing the instantaneous value of an emotional attribute. However, having a single score describing the global emotion of a short segment is more convenient for several emotion recognition formulations. A common approach is to derive sentence-level (SL) labels
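A minimal sketch of the common SL-derivation baseline this abstract alludes to: average the frame-by-frame CT scores that fall inside a segment's boundaries. The function name, frame rate, and trace below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def sentence_level_label(trace, frame_rate, start_s, end_s):
    """Collapse a continuous-time (CT) annotation trace into a single
    sentence-level (SL) score by averaging the frames inside the segment
    [start_s, end_s]. Illustrative baseline only."""
    lo = int(start_s * frame_rate)
    hi = int(end_s * frame_rate)
    return float(np.mean(trace[lo:hi]))

# Example: a 10 s arousal trace sampled at 25 fps, segment from 2 s to 6 s.
rng = np.random.default_rng(0)
trace = np.clip(rng.normal(0.3, 0.1, 250), -1.0, 1.0)
print(sentence_level_label(trace, frame_rate=25, start_s=2.0, end_s=6.0))
```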
-
GDDN: Graph Domain Disentanglement Network for Generalizable EEG Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-29 Bianna Chen, C. L. Philip Chen, Tong Zhang
Cross-subject EEG emotion recognition suffers a major setback due to high inter-subject variability in emotional responses. Many prior studies have endeavored to alleviate the inter-subject discrepancies of EEG feature distributions, ignoring the variable EEG connectivity and prediction deviation caused by individual differences, which may cause poor generalization to the unseen subject. This article
-
Guest Editorial: Ethics in Affective Computing IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-29 Jonathan Gratch, Gretchen Greene, Rosalind Picard, Lachlan Urquhart, Michel Valstar
Stunning advances in machine learning are heralding a new era in sensing, interpreting, simulating and stimulating human emotion. In the human sciences, research is increasingly highlighting the explanatory power of emotions, feelings, and other affective processes to predict how we think and behave. This is beginning to translate into an explosion of applications that can improve human wellbeing including
-
Dep-FER: Facial Expression Recognition in Depressed Patients Based on Voluntary Facial Expression Mimicry IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-27 Jiayu Ye, Yanhong Yu, Yunshao Zheng, Yang Liu, Qingxiang Wang
Facial expressions are important nonverbal behaviors that humans use to express their feelings. Clinical research has shown that depressed patients have poor facial expressiveness and mimicry. As a result, we propose a VFEM experiment with seven expressions to explore variations in facial expression features between depressed patients and normal people, including anger, disgust, fear, happiness, neutrality
-
Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-26 Weidong Chen, Xiaofen Xing, Peihao Chen, Xiangmin Xu
This article presents a paradigm that adapts general large-scale pretrained models (PTMs) to the speech emotion recognition task. Although PTMs shed new light on artificial general intelligence, they are constructed with general tasks in mind, and thus, their efficacy for specific tasks can be further improved. Additionally, employing PTMs in practical applications can be challenging due to their considerable
-
An Analysis of Physiological and Psychological Responses in Virtual Reality and Flat Screen Gaming IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-22 Ritik Vatsal, Shrivatsa Mishra, Rushil Thareja, Mrinmoy Chakrabarty, Ojaswa Sharma, Jainendra Shukla
Recent research has focused on the effectiveness of Virtual Reality (VR) in games as a more immersive method of interaction. However, there is a lack of robust analysis of the physiological effects between VR and flatscreen (FS) gaming. This paper introduces the first systematic comparison and analysis of emotional and physiological responses to commercially available games in VR and FS environments
-
Continuous Emotion Ambiguity Prediction: Modeling With Beta Distributions IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-20 Deboshree Bose, Vidhyasaharan Sethu, Eliathamby Ambikairajah
Conventional continuous emotion prediction systems are typically trained to predict the ‘average’ of affect ratings obtained from multiple human annotators. These systems, however, ignore the ambiguity inherent in the perceived emotions, which is not captured by the ‘average rating’. This paper presents a novel ambiguity-aware continuous emotion prediction system that predicts the time-varying emotion
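To make the Beta-distribution idea in this title concrete, here is a hedged sketch that fits a Beta to a set of annotator ratings by the method of moments, so the spread (ambiguity) is retained instead of being collapsed into an 'average rating'. The function name and sample ratings are assumptions for illustration; the paper's prediction system is not reproduced.

```python
import numpy as np

def fit_beta_moments(ratings):
    """Method-of-moments Beta fit to annotator ratings scaled to (0, 1).
    The fitted (alpha, beta) captures both the central tendency and the
    spread (ambiguity) of the ratings, unlike a plain average."""
    r = np.asarray(ratings, dtype=float)
    m, v = r.mean(), r.var()
    assert 0.0 < m < 1.0 and v < m * (1.0 - m), "moments must be feasible"
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Six annotators rate the same segment; a wide spread yields a small
# alpha + beta, i.e., a flatter (more ambiguous) distribution.
print(fit_beta_moments([0.2, 0.35, 0.5, 0.55, 0.7, 0.8]))
```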
-
Facial Action Unit Detection and Intensity Estimation From Self-Supervised Representation IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-19 Bowen Ma, Rudong An, Wei Zhang, Yu Ding, Zeng Zhao, Rongsheng Zhang, Tangjie Lv, Changjie Fan, Zhipeng Hu
As a fine-grained and local expression behavior measurement, facial action unit (FAU) analysis (e.g., detection and intensity estimation) has been documented for its time-consuming, labor-intensive, and error-prone annotation. Thus a long-standing challenge of FAU analysis arises from the data scarcity of manual annotations, limiting the generalization ability of trained models to a large extent. Amounts
-
Cross-Task Inconsistency Based Active Learning (CTIAL) for Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-16 Yifan Xu, Xue Jiang, Dongrui Wu
Emotion recognition is a critical component of affective computing. Training accurate machine learning models for emotion recognition typically requires a large amount of labeled data. Due to the subtleness and complexity of emotions, multiple evaluators are usually needed for each affective sample to obtain its ground-truth label, which is expensive. To save the labeling cost, this paper proposes
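As a rough illustration of how active learning saves labeling cost, the sketch below selects the most uncertain pool samples (highest predictive entropy) to send to human raters. This is a generic uncertainty-sampling baseline, not the paper's cross-task inconsistency criterion, and all names are assumed.

```python
import numpy as np

def select_for_labeling(probs, k):
    """Pick the k most uncertain samples (highest predictive entropy)
    from an unlabeled pool for human annotation. Generic active-learning
    step; illustrative only."""
    probs = np.asarray(probs)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-k:]

pool = [[0.9, 0.05, 0.05], [0.4, 0.3, 0.3], [0.34, 0.33, 0.33]]
print(select_for_labeling(pool, k=2))  # indices of the 2 least certain
```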
-
Bodily Sensation Map vs. Bodily Motion Map: Visualizing and Analyzing Emotional Body Motions IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-14 Myeongul Jung, Youngwug Cho, Jejoong Kim, Hyungsook Kim, Kwanguk Kim
Emotion detection using features presented in the body has been understudied compared to other emotional modalities. This study investigated and compared how emotions are revealed through bodily sensations and body movement information. We propose a novel visualization method for addressing body part activation or deactivation associated with different emotions using motion capture data
-
Looking Into Gait for Perceiving Emotions via Bilateral Posture and Movement Graph Convolutional Networks IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-13 Yingjie Zhai, Guoli Jia, Yu-Kun Lai, Jing Zhang, Jufeng Yang, Dacheng Tao
Emotions can be perceived from a person's gait, i.e., their walking style. Existing methods on gait emotion recognition mainly leverage the posture information as input, but ignore the body movement, which contains complementary information for recognizing emotions evoked in the gait. In this paper, we propose a Bilateral Posture and Movement Graph Convolutional Network (BPM-GCN) that consists of two
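A hedged sketch of the posture/movement split this abstract describes: posture as raw per-frame joint coordinates, movement approximated by first-order temporal differences. The array shapes and function name are illustrative; the paper's actual graph construction is not reproduced.

```python
import numpy as np

def posture_and_movement(joints):
    """Split a gait clip into two complementary streams: posture = raw
    joint coordinates per frame, movement = frame-to-frame displacement
    (first-order temporal difference). Sketch only."""
    posture = joints                    # (frames, joints, 3)
    movement = np.diff(joints, axis=0)  # (frames - 1, joints, 3)
    return posture, movement

clip = np.random.rand(120, 16, 3)       # 120 frames, 16 joints, xyz
p, m = posture_and_movement(clip)
print(p.shape, m.shape)                 # (120, 16, 3) (119, 16, 3)
```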
-
Multi-Modal Hierarchical Empathetic Framework for Social Robots With Affective Body Control IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-12 Yue Gao, Yangqing Fu, Ming Sun, Feng Gao
Social robots require the ability to understand human emotions and provide affective and behavioral responses during human-robot interactions. However, current social robots lack empathy capabilities. In this work, we propose a novel Multi-modal Hierarchical Empathetic (MHE) framework for generating empathetic responses for social robots. MHE is composed of a multi-modal fusion and emotion recognition
-
Avatar-Based Feedback in Job Interview Training Impacts Action Identities and Anxiety IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-08 Sarinasadat Hosseini, Jingyu Quan, Xiaoqi Deng, Yoshihiro Miyake, Takayuki Nozawa
This study examined the use of avatars to provide feedback to influence action identities, anxiety, mood, and performance during job interview training. We recruited 36 university students for the experiment and divided them into two groups. The first group received avatar-based feedback whereas the other group received self-feedback after the first interview session. Results showed that the avatar-based
-
Novel VR-Based Biofeedback Systems: A Comparison Between Heart Rate Variability- and Electrodermal Activity-Driven Approaches IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-08 Andrea Baldini, Elisabetta Patron, Claudio Gentili, Enzo Pasquale Scilingo, Alberto Greco
Anxiety symptoms are important contributors to the global health-related burden. Low-intensity interventions have been proposed to reduce anxiety symptoms in the population. Among these, biofeedback (BF) offers an effective approach to reducing anxiety. In the present study, BF was integrated into a novel virtual reality (VR) architecture to enhance BF's effectiveness to 1) evaluate the feasibility
-
An Open-Source Benchmark of Deep Learning Models for Audio-Visual Apparent and Self-Reported Personality Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-08 Rongfan Liao, Siyang Song, Hatice Gunes
Personality determines various human daily and working behaviours. Recently, a large number of automatic personality computing approaches have been developed to predict either the apparent or self-reported personality of the subject based on non-verbal audio-visual behaviours. However, most of them suffer from complex and dataset-specific pre-processing steps and model training tricks. In the absence
-
Emotion Recognition in Conversation Based on a Dynamic Complementary Graph Convolutional Network IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-02-01 Zhenyu Yang, Xiaoyang Li, Yuhu Cheng, Tong Zhang, Xuesong Wang
Emotion recognition in conversation (ERC) is a widely used technology in both affective dialogue bots and dialogue recommendation scenarios, where motivating a system to correctly recognize human emotions is crucial. Uncovering as much contextual information as possible with a limited amount of dialogue information is essential for eventually identifying the correct emotion of each sentence. The integration
-
Learning With Rater-Expanded Label Space to Improve Speech Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-31 Shreya G. Upadhyay, Woan-Shiuan Chien, Bo-Hao Su, Chi-Chun Lee
Automatic sensing of emotional information in speech is important for numerous everyday applications. Conventional Speech Emotion Recognition (SER) models rely on averaging or consensus of human annotations for training, but emotions and raters’ interpretations are subjective in nature, leading to diverse variations in perceptions. To address this, our proposed approach integrates the rater's subjectivity
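The contrast this abstract draws — a single consensus label versus keeping each rater's interpretation — can be sketched as follows. The function name and vote set are illustrative assumptions, and the paper's actual training scheme is not shown.

```python
from collections import Counter

def consensus_and_soft_label(votes):
    """Given one utterance's rater votes, return (a) the conventional
    majority-vote label and (b) a soft label that preserves every
    rater's interpretation. Illustrative only."""
    counts = Counter(votes)
    majority = counts.most_common(1)[0][0]
    soft = {emo: n / len(votes) for emo, n in counts.items()}
    return majority, soft

print(consensus_and_soft_label(["happy", "happy", "neutral", "excited"]))
# ('happy', {'happy': 0.5, 'neutral': 0.25, 'excited': 0.25})
```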
-
A Multi-Level Alignment and Cross-Modal Unified Semantic Graph Refinement Network for Conversational Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-31 Xiaoheng Zhang, Weigang Cui, Bin Hu, Yang Li
Emotion recognition in conversation (ERC) based on multiple modalities has attracted enormous attention. However, most research simply concatenated multimodal representations, generally neglecting the impact of cross-modal correspondences and uncertain factors, and leading to the cross-modal misalignment problems. Furthermore, recent methods only considered simple contextual features, commonly ignoring
-
Modeling the Interplay Between Cohesion Dimensions: A Challenge for Group Affective Emergent States IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-30 Lucien Maman, Nale Lehmann-Willenbrock, Mohamed Chetouani, Laurence Likforman-Sulem, Giovanna Varni
Emergent states are temporal group phenomena that arise from collective affective, behavioral, and cognitive processes shared among the group's members during their interactions. Cohesion is one such state, mainly conceptualized by scholars as affective in nature, and frequently distinguished into the two dimensions social and task cohesion. Whereas social cohesion is related to the need of belonging
-
Exploring Retrospective Annotation in Long-Videos for Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-29 Patrícia Bota, Pablo Cesar, Ana Fred, Hugo Plácido da Silva
Emotion recognition systems are typically trained to classify a given psychophysiological state into emotion categories. Current platforms for emotion ground-truth collection show limitations for real-world scenarios of long-duration content (e.g., >10 minutes), namely: 1) Real-time annotation tools are distracting and become exhausting; 2) Perform retrospective annotation of the whole content in
-
CFDA-CSF: A Multi-Modal Domain Adaptation Method for Cross-Subject Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-23 Magdiel Jiménez-Guarneros, Gibran Fuentes-Pineda
Multi-modal classifiers for emotion recognition have become prominent, as the emotional states of subjects can be more comprehensively inferred from Electroencephalogram (EEG) signals and eye movements. However, existing classifiers experience a decrease in performance due to the distribution shift when applied to new users. Unsupervised domain adaptation (UDA) emerges as a solution to address the
-
Show me How You Use Your Mouse and I Tell You How You Feel? Sensing Affect With the Computer Mouse IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-23 Paul Freihaut, Anja S. Göritz
Computer mouse tracking is a simple and cost-efficient way to gather continuous behavioral data. As theory suggests a relationship between affect and sensorimotor processes, the computer mouse might be usable for affect sensing. However, the processes underlying a connection between mouse usage and affect are complex, empirical evidence to date is ambiguous, and the research area lacks longitudinal
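A minimal sketch of the kind of kinematic summaries mouse-based affect-sensing studies typically derive from raw cursor samples (distance, speed, acceleration). The feature set and names are assumptions for illustration, not this paper's protocol.

```python
import numpy as np

def mouse_kinematics(xs, ys, ts):
    """Derive simple kinematic summaries from raw cursor samples
    (x, y, timestamp): total path length, mean speed, and mean absolute
    acceleration. Illustrative feature set only."""
    xs, ys, ts = map(np.asarray, (xs, ys, ts))
    dt = np.diff(ts)
    dist = np.hypot(np.diff(xs), np.diff(ys))
    speed = dist / dt
    accel = np.diff(speed) / dt[1:]
    return {"total_distance": dist.sum(),
            "mean_speed": speed.mean(),
            "mean_abs_accel": np.abs(accel).mean()}

print(mouse_kinematics([0, 3, 9, 18], [0, 4, 12, 24], [0.0, 0.1, 0.2, 0.3]))
```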
-
How Virtual Reality Therapy Affects Refugees From Ukraine - Acute Stress Reduction Pilot Study IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-10 Dorota Kamińska, Grzegorz Zwoliński, Dorota Merecz-Kot
This article extends and builds upon our previous research concerning Virtual Reality (VR) with bilateral stimulation as an automated stress-reduction therapy tool. The study coincided with Russia's invasion of Ukraine, so the software was tailored to reduce the stress of war refugees. We created a 28-minute relaxation training program in a virtual, relaxing environment in the form of a cozy apartment
-
Anthropomorphism and Affective Perception: Dimensions, Measurements, and Interdependencies in Aerial Robotics IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-04 Viviane Herdel, Anastasia Kuzminykh, Yisrael Parmet, Jessica R. Cauchard
Assigning lifelike qualities to robotic agents (Anthropomorphism) is associated with complex affective interpretations of their behavior. These anthropomorphized perceptions are traditionally elicited through robots’ designs. Yet, aerial robots (or drones) present a special case due to their – traditionally – non-anthropomorphic design, and prior research shows conflicting evidence on their perception
-
Gusa: Graph-Based Unsupervised Subdomain Adaptation for Cross-Subject EEG Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-04 Xiaojun Li, C. L. Philip Chen, Bianna Chen, Tong Zhang
EEG emotion recognition has been hampered by the clear individual differences in the electroencephalogram (EEG). Nowadays, domain adaptation is a good way to deal with this issue because it aligns the distribution of data across subjects. However, performance is limited in existing research, which mainly focuses on the global alignment between the source domain and
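The 'global alignment' this abstract refers to is commonly instantiated as a Maximum Mean Discrepancy (MMD) penalty between source and target features; below is a hedged NumPy sketch of squared MMD with an RBF kernel, under assumed feature shapes, not the Gusa subdomain formulation.

```python
import numpy as np

def mmd_rbf(source, target, gamma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel: a standard
    criterion for global source/target distribution alignment.
    Minimizing it pulls the two feature distributions together."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return (k(source, source).mean() + k(target, target).mean()
            - 2 * k(source, target).mean())

rng = np.random.default_rng(1)
src = rng.normal(0.0, 1.0, (64, 8))  # e.g., source-subject EEG features
tgt = rng.normal(0.5, 1.0, (64, 8))  # target subject, shifted mean
print(mmd_rbf(src, tgt, gamma=0.1))
```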
-
MASANet: Multi-Aspect Semantic Auxiliary Network for Visual Sentiment Analysis IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2024-01-04 Jinglun Cen, Chunmei Qing, Haochun Ou, Xiangmin Xu, Junpeng Tan
Recently, multi-modal affective computing has demonstrated that introducing multi-modal information can enhance performance. However, multi-modal research faces significant challenges due to its high requirements regarding data acquisition, modal integrity, and feature alignment. The widespread use of multi-modal pre-training methods offers the possibility of aiding visual sentiment analysis by introducing
-
Interaction Between Dynamic Affection and Arithmetic Cognitive Ability: A Practical Investigation With EEG Measurement IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-12-26 Xiaonan Yang, Yilu Peng, Yuyang Han, Fangyi Li, Qin Zhang, Shuo Wu, Xia Wu
Emotions play an essential role in affecting the performance of cognitive abilities in continuous cognitive tasks. Most previous studies share a common issue in that the evoked emotions are simply presumed to be real emotions, without taking into account the observation that emotions may be changed when carrying out cognitive activities. This may lead to the inaccurate detection of true emotions, which
-
Research on the Association Mechanism and Evaluation Model Between fNIRS Data and Aesthetic Quality in Product Aesthetic Quality Evaluation IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-12-19 Yong Wang, Fanghao Song, Yan Liu, Yaying Li, Weihao Wang, Qiqi Huang, Yang Hu
Aesthetic quality evaluation has been an important research question in the field of user experience in product design. However, the feasibility and accuracy of using fNIRS data for product aesthetic quality evaluation are unknown. In this article, we analyze the correlation and association between fNIRS data and aesthetic quality and design a product aesthetic quality evaluation model to answer
-
Continuously Controllable Facial Expression Editing in Talking Face Videos IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-12-18 Zhiyao Sun, Yu-Hui Wen, Tian Lv, Yanan Sun, Ziyang Zhang, Yaoyuan Wang, Yong-Jin Liu
Recently audio-driven talking face video generation has attracted considerable attention. However, very little research addresses the issue of emotional editing of these talking face videos with continuously controllable expressions, which is a strong demand in the industry. The challenge is that speech-related expressions and emotion-related expressions are often highly coupled. Meanwhile, traditional
-
A Classification Framework for Depressive Episode Using R-R Intervals From Smartwatch IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-12-15 Fenghua Li, Guoxiong Liu, Zhiling Zou, Yang Yan, Xin Huang, Xuanang Liu, Zhengkui Liu
Depressive episodes are a key symptom cluster of mood disorders. Early intervention can prevent them from happening or reduce their impact, and close monitoring can greatly improve medical management. However, most current monitoring methods are ex post facto, coarse in time granularity, and resource consuming. In this article, we aimed to develop a cost-friendly, highly usable depressive episode detection
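Smartwatch R-R pipelines of the kind this abstract describes typically start from standard time-domain heart-rate-variability summaries; the sketch below computes SDNN and RMSSD and is illustrative only — the paper's classification framework is not reproduced.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV summaries from R-R intervals in
    milliseconds: SDNN (overall variability) and RMSSD (short-term,
    beat-to-beat variability). Typical inputs to a downstream
    classifier; illustrative only."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return {"sdnn_ms": sdnn, "rmssd_ms": rmssd}

print(hrv_time_domain([812, 790, 805, 830, 798, 815, 822]))
```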
-
DFME: A New Benchmark for Dynamic Facial Micro-Expression Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-12-12 Sirui Zhao, Huaying Tang, Xinglong Mao, Shifeng Liu, Yiming Zhang, Hao Wang, Tong Xu, Enhong Chen
One of the most important subconscious reactions, micro-expression (ME), is a spontaneous, subtle, and transient facial expression that reveals human beings’ genuine emotion. Therefore, automatically recognizing ME (MER) is becoming increasingly crucial in the field of affective computing, providing essential technical support for lie detection, clinical psychological diagnosis, and public safety.
-
Dynamic Confidence-Aware Multi-Modal Emotion Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-12-08 Qi Zhu, Chuhang Zheng, Zheng Zhang, Wei Shao, Daoqiang Zhang
Multi-modal emotion recognition has attracted increasing attention in human-computer interaction, as it extracts complementary information from physiological and behavioral features. Compared to single modal approaches, multi-modal fusion methods are more susceptible to uncertainty in emotion recognition, such as heterogeneity and inconsistent predictions across different modalities. Previous multi-modal
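One generic way to make late fusion 'confidence-aware' is to weight each modality's class probabilities by one minus its normalized predictive entropy; the sketch below assumes that scheme purely for illustration and is not the paper's model.

```python
import numpy as np

def confidence_weighted_fusion(prob_per_modality):
    """Late-fuse per-modality class probabilities, weighting each
    modality by a confidence score (1 - normalized predictive entropy),
    so uncertain modalities contribute less. Illustrative only."""
    probs = np.asarray(prob_per_modality)      # (modalities, classes)
    n_cls = probs.shape[1]
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    conf = 1.0 - entropy / np.log(n_cls)       # in [0, 1]
    weights = conf / conf.sum()
    return weights @ probs

eeg = [0.70, 0.20, 0.10]    # confident modality
face = [0.40, 0.35, 0.25]   # uncertain modality gets a smaller weight
print(confidence_weighted_fusion([eeg, face]))
```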
-
Geometric Graph Representation With Learnable Graph Structure and Adaptive AU Constraint for Micro-Expression Recognition IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-12-06 Jinsheng Wei, Wei Peng, Guanming Lu, Yante Li, Jingjie Yan, Guoying Zhao
Micro-expression recognition (MER) holds significance in uncovering hidden emotions. Most works take image sequences as input and cannot effectively explore ME information because subtle ME-related motions are easily submerged in unrelated information. Instead, the facial landmark is a low-dimensional and compact modality, which achieves lower computational cost and potentially concentrates on ME-related
-
Olfactory-Enhanced VR: What's the Difference in Brain Activation Compared to Traditional VR for Emotion Induction? IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-11-29 Xinyue Zhong, Wanqing Liu, Jialan Xie, Yun Gu, Guangyuan Liu
Olfactory-enhanced virtual reality (OVR) creates a complex and rich emotional experience, thus promoting a new generation of human-computer interaction experiences in real-world scenarios. However, with the rise of virtual reality (VR) as a mood induction procedure (MIP), few studies have incorporated olfactory stimuli into emotion induction in three-dimensional (3D) environments. Considering the differences
-
Editorial: Special Issue on Unobtrusive Physiological Measurement Methods for Affective Applications IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-11-28 Ioannis T. Pavlidis, Theodora Chaspari, Daniel McDuff
In the formative years of Affective Computing [1], from the late 1990s into the early 2000s, a significant fraction of research attention was focused on the development of methods for unobtrusive physiological measurement. It quickly became obvious that wiring people with electrodes and strapping cumbersome hardware to their bodies was not only restricting the types of experiments that could be
-
Frustration Recognition Using Spatio Temporal Data: A Novel Dataset and GCN Model to Recognize In-Vehicle Frustration IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-11-28 Esther Bosch, Raquel Le Houcq Corbí, Klas Ihme, Stefan Hörmann, Meike Jipp, David Käthner
Frustration is an unpleasant emotion prevalent in several target applications of affective computing, such as human-machine interaction, learning, (online) customer interaction, and gaming. One way to address this issue is to recognize frustration and offer help or mitigation in real time, e.g., via a personal assistant. However, the recognition of frustration is not limited to these applied contexts
-
Teardrops on My Face: Automatic Weeping Detection From Nonverbal Behavior IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-11-28 Dennis Küster, Lars Steinert, Marc Baker, Nikhil Bhardwaj, Eva G. Krumhuber
Human emotional tears are a powerful socio-emotional signal. Yet, they have received relatively little attention in empirical research compared to facial expressions or body posture. While humans are highly sensitive to others’ tears, to date, no automatic means exist for detecting spontaneous weeping. This article employed facial and postural features extracted using four pre-trained classifiers (FACET
-
Emotion Recognition From Few-Channel EEG Signals by Integrating Deep Feature Aggregation and Transfer Learning IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-11-24 Fang Liu, Pei Yang, Yezhi Shu, Niqi Liu, Jenny Sheng, Junwen Luo, Xiaoan Wang, Yong-Jin Liu
Electroencephalogram (EEG) signals have been widely studied in human emotion recognition. The majority of existing EEG emotion recognition algorithms utilize dozens or hundreds of electrodes covering the whole scalp region (denoted as full-channel EEG devices in this paper). Nowadays, more and more portable and miniature EEG devices with only a few electrodes (denoted as few-channel EEG devices in
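Assuming a conventional hand-crafted baseline for few-channel EEG (the paper itself aggregates deep features, which are not shown here), per-channel band power via Welch's PSD is a typical starting feature; all names and parameters below are illustrative.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Average power of one EEG channel in a frequency band, estimated
    from Welch's power spectral density and integrated with a simple
    rectangle rule. Generic hand-crafted feature; illustrative only."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df

fs = 128
t = np.arange(0, 4, 1 / fs)
channel = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(band_power(channel, fs, band=(8, 13)))  # alpha-band power
```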
-
Emotion Dictionary Learning With Modality Attentions for Mixed Emotion Exploration IEEE Trans. Affect. Comput. (IF 9.6) Pub Date : 2023-11-20 Fang Liu, Pei Yang, Yezhi Shu, Fei Yan, Guanhua Zhang, Yong-Jin Liu
Most existing multi-modal emotion recognition studies are targeted at a classification task that aims to assign a specific emotion category to a combination of several heterogeneous input data, including multimedia signals and physiological signals. A growing number of recent psychological evidence suggests that different discrete emotions may co-exist at the same time, which promotes the development