-
Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-26 Jun He, Xiaocui Yu, Bo Sun, Lejun Yu
Understanding human facial expressions is one of the key steps towards achieving human–computer interaction. Owing to the anatomical mechanisms that govern facial muscular interactions, there are strong dependencies between expressions and action units (AUs), and such knowledge can be exploited to guide the model learning process. However, these dependencies have not yet been represented directly
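The abstract's core idea, encoding known expression-AU dependencies as a graph and propagating features over it, can be illustrated with a single graph-convolution step. The sketch below is a minimal NumPy illustration, not the authors' model; the node set, adjacency entries, and feature sizes are hypothetical.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution step: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Hypothetical graph: 6 basic expressions + 12 AUs as nodes; an edge marks
# a presumed expression-AU co-occurrence dependency.
n_nodes, in_dim, out_dim = 18, 64, 32
A = np.zeros((n_nodes, n_nodes))
A[0, 6] = A[6, 0] = 1.0                         # e.g. an expression linked to one AU (illustrative)
H = np.random.randn(n_nodes, in_dim)            # node features (e.g. image embeddings)
W = np.random.randn(in_dim, out_dim) * 0.1      # learnable layer weights
H_out = gcn_layer(H, A, W)
print(H_out.shape)                              # (18, 32)
```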
-
Neighborhood based decision theoretic rough set under dynamic granulation for BCI motor imagery classification J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-25 K. Renuga Devi, H. Hannah Inbarani
Brain Computer Interface is an interesting and important research field that has contributed to widespread application systems. In the medical field, it is important for aiding physically challenged persons in rehabilitation and restoration. In a Brain Computer Interface, the computer acts as an interface between brain signals and an external device. The computer processes the brain signals and sends necessary instructions
-
Comparing mind perception in strategic exchanges: human-agent negotiation, dictator and ultimatum games J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-21 Minha Lee, Gale Lucas, Jonathan Gratch
Recent research shows that how we respond to other social actors depends on what sort of mind we ascribe to them. In a comparative manner, we observed how perceived minds of agents shape people’s behavior in the dictator game, ultimatum game, and negotiation against artificial agents. To do so, we varied agents’ minds on two dimensions of the mind perception theory: agency (cognitive aptitude) and
-
Verbal empathy and explanation to encourage behaviour change intention J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-18 Amal Abdulrahman, Deborah Richards, Hedieh Ranjbartabar, Samuel Mascarenhas
Inspired by the role of the therapist-patient relationship in fostering behaviour change, the agent-human relationship has been an active research area. This trusted relationship could be a result of the agent’s behavioural cues or of the content it delivers that shows its knowledge. However, the impact of the resulting relationship, built using these various strategies, on behaviour change is understudied. In this paper
-
A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-17 Usman Malik, Mukesh Barange, Julien Saunier, Alexandre Pauchet
Addressee detection is a fundamental task for seamless dialogue management and turn taking in human-agent interaction. Though addressee detection is implicit in dyadic interaction, it becomes a challenging task when more than two participants are involved. This article proposes multiple addressee detection models based on smart feature selection and focus encoding schemes. The models are trained using
-
PLAAN: Pain Level Assessment with Anomaly-detection based Network J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-06 Yi Li, Shreya Ghosh, Jyoti Joshi
Automatic chronic pain assessment and pain intensity estimation have been attracting growing attention due to their widespread applications. One of the prevalent issues in automatic pain analysis is the lack of adequately balanced expert-labelled data for pain estimation. This work proposes an anomaly-detection-based network addressing one of the existing limitations of automatic pain assessment. The evaluation of
-
Correction to: The Augmented Movement Platform For Embodied Learning (AMPEL): development and reliability J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-04 Lousin Moumdjian, Thomas Vervust, Joren Six, Ivan Schepers, Micheline Lesaffre, Peter Feys, Marc Leman
There was an error in the affiliations of the co-authors Dr. Thomas Vervust and Prof. Peter Feys. Their correct affiliations are given in this correction
-
Does an agent’s touch always matter? Study on virtual Midas touch, masculinity, social status, and compliance in Polish men J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-03 Justyna Świdrak, Grzegorz Pochwatko, Andrea Insabato
Traditional gender roles that define what is feminine and masculine also imply that men have higher social status than women. These stereotypes still influence how people interact with each other and with computers. Touch behaviour, essential in social interactions, is an interesting example of such social behaviours. The Midas touch effect describes a situation when a brief touch is used to influence
-
Internet-based tailored virtual human health intervention to promote colorectal cancer screening: design guidelines from two user studies J. Multimodal User Interfaces (IF 1.511) Pub Date : 2021-01-02 Mohan Zalake, Fatemeh Tavassoli, Kyle Duke, Thomas George, Francois Modave, Jordan Neil, Janice Krieger, Benjamin Lok
To influence user behaviors, Internet-based virtual humans (VHs) have been used to deliver health interventions. When developing Internet-based VH health interventions, developers have to make several design decisions about the VH’s appearance, role, language, or medium. These design decisions can affect the outcomes of the Internet-based VH health intervention. To help make design decisions, the current
-
The Augmented Movement Platform For Embodied Learning (AMPEL): development and reliability J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-11-25 Lousin Moumdjian, Thomas Vervust, Joren Six, Ivan Schepers, Micheline Lesaffre, Peter Feys, Marc Leman
Balance and gait impairments are highly prevalent in the neurological population. Although current rehabilitation strategies focus on motor learning principles, it is of interest to expand into embodied sensorimotor learning; that is, learning through continuous interaction between cognitive and motor systems within an enriched sensory environment. Current developments in engineering allow for the
-
Words of encouragement: how praise delivered by a social robot changes children’s mindset for learning J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-11-24 Daniel P. Davison, Frances M. Wijnen, Vicky Charisi, Jan van der Meij, Dennis Reidsma, Vanessa Evers
This paper describes a longitudinal study in which children could interact unsupervised and at their own initiative with a fully autonomous computer aided learning (CAL) system situated in their classroom. The focus of this study was to investigate how the mindset of children is affected when delivering effort-related praise through a social robot. We deployed two versions: a CAL system that delivered
-
An audiovisual interface-based drumming system for multimodal human–robot interaction J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-11-13 Gökhan Ince, Rabia Yorganci, Ahmet Ozkul, Taha Berkay Duman, Hatice Köse
This paper presents a study of an audiovisual interface-based drumming system for multimodal human–robot interaction. The interactive multimodal drumming game is used in conjunction with humanoid robots to establish an audiovisual interactive interface. This study is part of a project to design robot and avatar assistants for education and therapy, especially for children with special needs. It specifically
-
Virtual agents as supporting media for scientific presentations J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-11-06 Timothy Bickmore, Everlyne Kimani, Ameneh Shamekhi, Prasanth Murali, Dhaval Parmar, Ha Trinh
The quality of scientific oral presentations is often poor, owing to a number of factors, including public speaking anxiety. We present DynamicDuo, a system that uses an automated, life-sized, animated agent to help inexperienced scientists deliver their presentations in front of an audience. The design of the system was informed by an analysis of TED talks given by pairs of human presenters to identify
-
Multimodal analysis of personality traits on videos of self-presentation and induced behavior J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-11-02 Dersu Giritlioğlu, Burak Mandira, Selim Firat Yilmaz, Can Ufuk Ertenli, Berhan Faruk Akgür, Merve Kınıklıoğlu, Aslı Gül Kurt, Emre Mutlu, Şeref Can Gürel, Hamdi Dibeklioğlu
Personality analysis is an important area of research in several fields, including psychology, psychiatry, and neuroscience. With the recent dramatic improvements in machine learning, it has also become a popular research area in computer science. While current computational methods are able to interpret behavioral cues (e.g., facial expressions, gestures, and voice) to estimate the level of (apparent)
-
Developing a scenario-based video game generation framework for computer and virtual reality environments: a comparative usability study J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-10-31 Elif Surer, Mustafa Erkayaoğlu, Zeynep Nur Öztürk, Furkan Yücel, Emin Alp Bıyık, Burak Altan, Büşra Şenderin, Zeliha Oğuz, Servet Gürer, H. Şebnem Düzgün
Serious games—games that have additional purposes rather than only entertainment—aim to educate people and to solve and plan several real-life tasks and circumstances in an interactive, efficient, and user-friendly way. Emergency training and planning provide structured curricula, rule-based action items, and interdisciplinary collaborative entities to imitate and teach real-life tasks. This rule-based
-
Evaluation of avatar and voice transform in programming e-learning lectures J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-10-26 Rex Hsieh, Hisashi Sato
This article reports the effectiveness of high-frame-rate facially animated avatars and a voice transformer in eLearning. Three avatars (a real male professor, a male avatar, and a female avatar) were combined with the male professor’s voice or a VT-4 vocoder-transformed voice to create six distinct videos, which were then viewed by university freshman students. A total of 186 students divided into 15 groups participated
-
Multimodal interfaces and communication cues for remote collaboration J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-10-03 Seungwon Kim, Mark Billinghurst, Kangsoo Kim
Remote collaboration has been studied for more than two decades, and there are now possibilities for new types of collaboration thanks to recent advances in immersive technologies such as Virtual, Augmented, and Mixed Reality (VR/AR/MR). However, despite the increasing research interest in remote collaboration with VR/AR/MR technologies, there is still a lack of academic venues specifically focusing
-
Circus in Motion: a multimodal exergame supporting vestibular therapy for children with autism J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-08-16 Oscar Peña, Franceli L. Cibrian, Monica Tentori
Exergames are serious games that involve physical exertion and are thought of as a form of exercise using novel input models. Exergames are promising for improving the vestibular differences of children with autism but often lack adaptation mechanisms that adjust the difficulty level of the exergame. In this paper, we present the design and development of Circus in Motion, a multimodal exergame
-
Haptic and audio interaction design J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-08-07 Thomas Pietrzak; Marcelo M. Wanderley
Haptics, audio and human–computer interaction are three scientific disciplines that share interests, issues and methodologies. Despite these common points, interactions between these communities are sparse, because each of them has its own publication venues, meeting places, etc. A venue to foster interaction between these three communities was created in 2006, the Haptic and Audio Interaction Design
-
Empirical evaluation and pathway modeling of visual attention to virtual humans in an appearance fidelity continuum J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-08-06 Matias Volonte, Reza Ghaiumy Anaraky, Rohith Venkatakrishnan, Roshan Venkatakrishnan, Bart P. Knijnenburg, Andrew T. Duchowski, Sabarish V. Babu
In this contribution we studied how different rendering styles of a virtual human impacted users’ visual attention in an interactive medical training simulator. In a mixed-design experiment, 78 participants interacted with a virtual human representing samples from the non-photorealistic (NPR) to the photorealistic (PR) rendering continuum. We presented five rendering-style scenarios, namely
-
A BCI video game using neurofeedback improves the attention of children with autism J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-31 Jose Mercado, Lizbeth Escobedo, Monica Tentori
Major usability and technical challenges have created mistrust of the potential of brain computer interfaces used to control video games in challenging environments like healthcare. Despite several studies showing that low-cost commercial headsets can read the brainwave patterns of their users, with great potential for long-term adoption, there are limited studies showing their efficacy in concrete healthcare
-
Defining a vibrotactile toolkit for digital musical instruments: characterizing voice coil actuators, effects of loading, and equalization of the frequency response J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-26 Aditya Tirumala Bukkapatnam; Philippe Depalle; Marcelo M. Wanderley
The integration of vibrotactile feedback in digital music instruments (DMIs) is thought to improve the instrument’s response and make it more suitable for expert musical interactions. However, given the extreme requirements of musical performances, there is a need for solutions allowing for independent control of frequency and amplitude over a wide frequency bandwidth (40–1000 Hz) and low harmonic
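One way to meet such a flat-response requirement over the 40–1000 Hz band is to measure the loaded actuator's magnitude response and apply an inverse-filter gain per frequency. The sketch below is a minimal illustration under that assumption; the measured values, flat target, and interpolation scheme are hypothetical, not the characterization reported in the paper.

```python
import numpy as np

# Hypothetical measured magnitude response of a loaded voice-coil actuator,
# sampled at a few frequencies (Hz -> dB). Real data would come from
# accelerometer measurements on the instrument interface itself.
freqs_hz    = np.array([40, 80, 160, 250, 500, 750, 1000], dtype=float)
measured_db = np.array([-9.0, -3.0, 0.0, 2.5, 4.0, 1.0, -5.0])

target_db  = 0.0                               # flat target response
eq_gain_db = target_db - measured_db           # inverse-filter gain per band

def eq_gain_at(f_hz: float) -> float:
    """Interpolate the correction curve so any playback frequency in the
    40-1000 Hz band can be equalized before driving the actuator."""
    return float(np.interp(f_hz, freqs_hz, eq_gain_db))

for f in (60, 300, 900):
    print(f"{f} Hz -> apply {eq_gain_at(f):+.1f} dB")
```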
-
Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-25 Theophilus Teo, Mitchell Norman, Gun A. Lee, Mark Billinghurst, Matt Adcock
Remote collaboration using mixed reality (MR) enables two separated workers to collaborate by sharing visual cues. A local worker can share his/her environment with the remote worker for better contextual understanding. However, prior techniques used either 360 video sharing or a complicated 3D reconstruction configuration, which limits the interactivity and practicality of the system. In this
-
Developing a mobile activity game for stroke survivors—lessons learned J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-23 Charlotte Magnusson; Kirsten Rassmus-Gröhn; Bitte Rydeman
Persons who have survived a stroke might lower the risk of having recurrent strokes by adopting a healthier lifestyle with more exercise. One way to promote exercising is by fitness or exergame apps for mobile phones. Health and fitness apps are used by a significant portion of the consumers, but these apps are not targeted to stroke survivors, who may experience cognitive limitations (like fatigue
-
Guidelines for the design of a virtual patient for psychiatric interview training J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-21 Lucile Dupuy, Etienne de Sevin, Hélène Cassoudesalle, Orlane Ballot, Patrick Dehail, Bruno Aouizerate, Emmanuel Cuny, Jean-Arthur Micoulaud-Franchi, Pierre Philip
A psychiatric diagnosis involves the physician’s ability to create an empathic interaction with the patient in order to accurately extract symptomatology (i.e., clinical manifestations). Virtual patients (VPs) can be used to train these skills but need to propose a structured and multimodal interaction situation, in order to simulate a realistic psychiatric interview. In this study we present a simulated
-
Psychophysical comparison of the auditory and tactile perception: a survey J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-21 Sebastian Merchel; M. Ercan Altinsoy
In this paper, the psychophysical abilities and limitations of the auditory and vibrotactile modalities are discussed. A direct comparison reveals similarities and differences. Knowledge of these is the basis for the design of perceptually optimized auditory-tactile human–machine interfaces or multimodal music applications. Literature data and the authors’ own results for psychophysical characteristics are
-
The combination of visual communication cues in mixed reality remote collaboration J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-15 Seungwon Kim, Gun Lee, Mark Billinghurst, Weidong Huang
Many researchers have studied various visual communication cues (e.g. pointer, sketch, and hand gesture) in Mixed Reality remote collaboration systems for real-world tasks. However, the effect of combining them has not been so well explored. We studied the effect of these cues in four combinations: hand only, hand + pointer, hand + sketch, and hand + pointer + sketch, within the two user studies when
-
Exploring crossmodal perceptual enhancement and integration in a sequence-reproducing task with cognitive priming J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-13 Feng Feng, Puhong Li, Tony Stockman
Crossmodal correspondence, a perceptual phenomenon which has been extensively studied in cognitive science, has been shown to play a critical role in people’s information processing performance. However, the evidence has been collected mostly based on strictly-controlled stimuli and displayed in a noise-free environment. In real-world interaction scenarios, background noise may blur crossmodal effects
-
Tactile discrimination of material properties: application to virtual buttons for professional appliances J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-13 Yuri De Pra; Stefano Papetti; Federico Fontana; Hanna Järveläinen; Michele Simonato
An experiment is described that tested the possibility to classify wooden, plastic, and metallic objects based on reproduced auditory and vibrotactile stimuli. The results show that recognition rates are considerably above chance level with either unimodal auditory or vibrotactile feedback. Supported by those findings, the possibility to render virtual buttons for professional appliances with different
-
“Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-09 Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias Huber, Elisabeth André
While the research area of artificial intelligence benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore the effects of incorporating virtual agents into explainable artificial intelligence (XAI) designs on the perceived trust of end-users
-
Virtual intimacy in human-embodied conversational agent interactions: the influence of multimodality on its perception J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-09 Delphine Potdevin, Céline Clavel, Nicolas Sabouret
Interacting with an embodied conversational agent (ECA) in a professional context raises social considerations for a satisfying customer relationship. This paper presents an experimental study of the perception of virtual intimacy in human-ECA interactions. We explore how an ECA’s multimodal communication affects our perception of virtual intimacy. To this end, we developed a virtual Tourism Information
-
Sharing gaze rays for visual target identification tasks in collaborative augmented reality J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-09 Austin Erickson, Nahal Norouzi, Kangsoo Kim, Ryan Schubert, Jonathan Jules, Joseph J. LaViola, Gerd Bruder, Gregory F. Welch
Augmented reality (AR) technologies provide a shared platform for users to collaborate in a physical context involving both real and virtual content. To enhance the quality of interaction between AR users, researchers have proposed augmenting users’ interpersonal space with embodied cues such as their gaze direction. While beneficial in achieving improved interpersonal spatial communication, such shared
-
Multisensory instrumental dynamics as an emergent paradigm for digital musical creation J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-07-07 James Leonard; Jérôme Villeneuve; Alexandros Kontogeorgakopoulos
The nature of human/instrument interaction is a long-standing area of study, drawing interest from fields as diverse as philosophy, cognitive sciences, anthropology, human–computer-interaction, and artistic creation. In particular, the case of the interaction between performer and musical instrument provides an enticing framework for studying the instrumental dynamics that allow for embodiment, skill
-
The effects of spatial auditory and visual cues on mixed reality remote collaboration J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-06-28 Jing Yang, Prasanth Sasikumar, Huidong Bai, Amit Barde, Gábor Sörös, Mark Billinghurst
Collaborative Mixed Reality (MR) technologies enable remote people to work together by sharing communication cues intrinsic to face-to-face conversations, such as eye gaze and hand gestures. While the role of visual cues has been investigated in many collaborative MR systems, the use of spatial auditory cues remains underexplored. In this paper, we present an MR remote collaboration system that shares
-
Improving robot’s perception of uncertain spatial descriptors in navigational instructions by evaluating influential gesture notions J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-06-12 M. A. Viraj J. Muthugala, P. H. D. Arjuna S. Srimal, A. G. Buddhika P. Jayasekara
Human-friendly interactive features are preferred for service robots used in emerging areas of robotic applications such as caretaking, health care, assistance, education and entertainment since they are intended to be operated by non-expert users. Humans prefer to use voice instructions, responses, and suggestions in their daily interactions. Such voice instructions and responses often include uncertain
-
Multimodal, visuo-haptic games for abstract theory instruction: grabbing charged particles J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-06-06 Felix G. Hamza-Lup, Ioana R. Goldbach
An extensive metamorphosis is currently taking place in the education industry due to the rapid adoption of different technologies and the proliferation of new student-instructor and student–student interaction models. While traditional face-to-face interaction is still the norm, mobile, online and virtual augmentations are increasingly adopted worldwide. Moreover, with the advent of gaming technology
-
fNIRS-based classification of mind-wandering with personalized window selection for multimodal learning interfaces J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-06-02 Ruixue Liu, Erin Walker, Leah Friedman, Catherine M. Arrington, Erin T. Solovey
Automatic detection of an individual’s mind-wandering state has implications for designing and evaluating engaging and effective learning interfaces. While it is difficult to differentiate whether an individual is mind-wandering or focusing on the task only based on externally observable behavior, brain-based sensing offers unique insights to internal states. To explore the feasibility, we conducted
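A common way to realize personalized window selection is to extract window-level features from the fNIRS stream and keep the window length that maximizes cross-validated classification accuracy for each participant. The sketch below illustrates that idea with synthetic data and a generic SVM; the sampling rate, features, candidate window lengths, and labels are hypothetical stand-ins, not the study's pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(signal, labels, win):
    """Cut an fNIRS channel into fixed-length windows and compute simple statistics."""
    X, y = [], []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        X.append([seg.mean(), seg.std(), seg.max() - seg.min()])
        y.append(labels[start])                 # mental-state label at window onset
    return np.array(X), np.array(y)

# Hypothetical single-channel recording (10 Hz, ~10 min) with alternating
# on-task (0) / mind-wandering (1) blocks standing in for real annotations.
rng = np.random.default_rng(0)
signal = rng.standard_normal(6000)
labels = np.tile(np.repeat([0, 1], 300), 10)

# "Personalized" window selection: keep the window length that maximizes
# cross-validated accuracy for this participant's data.
candidates = (50, 100, 200, 400)
best = max(candidates,
           key=lambda w: cross_val_score(SVC(), *window_features(signal, labels, w), cv=3).mean())
print("selected window length (samples):", best)
```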
-
Effects of personality traits on user trust in human–machine collaborations J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-05-29 Jianlong Zhou, Simon Luo, Fang Chen
Data analytics-driven solutions are widely used in various intelligent systems, where humans and machines make decisions collaboratively based on predictions. Human factors such as personality and trust have significant effects on such human–machine collaborations. This paper investigates effects of personality traits on user trust in human–machine collaborations under uncertainty and cognitive load
-
Auditory displays and auditory user interfaces: art, design, science, and research J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-04-23 Myounghoon Jeon; Areti Andreopoulou; Brian F. G. Katz
For almost three decades, research on auditory displays and sonification has advanced considerably. The auditory display community has now arrived at the stage of sonic information design, with a more systematic, refined necessity, going beyond random mappings between referents and sounds. Due to the innately transdisciplinary nature of auditory display, it would be difficult to unify the methods to study
-
Speech and web-based technology to enhance education for pupils with visual impairment J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-04-19 Jindřich Matoušek; Zdeněk Krňoul; Michal Campr; Zbyněk Zajíc; Zdeněk Hanzlíček; Martin Grůber; Marie Kocurová
This paper describes a new web-based system specially adapted to the education of Czech pupils with visual impairment. The system integrates speech and language technologies with a web framework in lower secondary education, especially in mathematics and physics. A new interface utilizes text-to-speech (TTS) synthesis for the online automatic reading of educational texts. The interface provides
-
Correction to: Haptic feedback combined with movement sonification using a friction sound improves task performance in a virtual throwing task J. Multimodal User Interfaces (IF 1.511) Pub Date : 2018-09-14 Emma Frid, Jonas Moll, Roberto Bresin, Eva-Lotta Sallnäs Pysander
The original version of this article unfortunately contained mistakes. The presentation order of Fig. 5 and Fig. 6 was incorrect. The plots should have been presented according to the order of the sections in the text: the “Mean Task Duration” plot should have been presented first, followed by the “Perceived Intuitiveness” plot.
-
Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-03-30 Joseph Newbold; Nicolas E. Gold; Nadia Bianchi-Berthouze
When designing movement sonifications, their effect on people’s movement must be considered. Recent work has shown how real-time sonification can be designed to alter the way people move. However, the mechanisms through which these sonifications alter people’s expectations of their movement are not well explained. This is especially important when considering musical sonifications, to which people bring
-
Interactive sonification strategies for the motion and emotion of dance performances J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-03-14 Steven Landry; Myounghoon Jeon
Sonification has the potential to communicate a variety of data types to listeners including not just cognitive information, but also emotions and aesthetics. The goal of our dancer sonification project is to “sonify emotions as well as motions” of a dance performance via musical sonification. To this end, we developed and evaluated sonification strategies for adding a layer of emotional mappings to
-
A multimodal auditory equal-loudness comparison of air and bone conducted sounds J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-02-20 Rafael N. C. Patrick; Tomasz R. Letowski; Maranda E. McBride
The term ‘multimodal’ typically refers to the combination of two or more sensory modalities; however, through the advancement of technology, modality variations within specific sensory systems are being discovered and compared with regard to physiological perception and response. The ongoing evaluation of air vs. bone conduction auditory perception modalities is one such comparison. Despite an increased
-
ECG sonification to support the diagnosis and monitoring of myocardial infarction J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-02-19 Andrea Lorena Aldana Blanco; Steffen Grautoff; Thomas Hermann
This paper presents the design and evaluation of four sonification methods to support monitoring and diagnosis in electrocardiography (ECG). In particular, we focus on an ECG abnormality called ST-elevation, which is an important indicator of a myocardial infarction. Since myocardial infarction is a life-threatening condition, it is essential to detect an ST-elevation as early as possible
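A simple parameter-mapping example of ST-elevation sonification is to map each beat's ST-segment deviation to the pitch of a short tone, so a developing elevation is heard as a rising sequence. The sketch below only illustrates that mapping idea with made-up per-beat values; it is not one of the four methods evaluated in the paper.

```python
import numpy as np

SAMPLE_RATE = 44100

def beat_tone(st_deviation_mv, base_hz=440.0, dur=0.15):
    """Map one beat's ST-segment deviation (mV) to the pitch of a short tone:
    larger elevation -> higher pitch (0.25 mV ~ one octave, illustrative scaling)."""
    freq = base_hz * 2 ** (st_deviation_mv * 4.0)
    t = np.linspace(0, dur, int(SAMPLE_RATE * dur), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t)

# Hypothetical per-beat ST measurements (mV): normal beats, then a rising elevation.
st_values = [0.02, 0.03, 0.02, 0.12, 0.22, 0.31]
audio = np.concatenate([beat_tone(v) for v in st_values])
print(audio.shape, "samples at", SAMPLE_RATE, "Hz")
```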
-
Mapping for meaning: the embodied sonification listening model and its implications for the mapping problem in sonic information design J. Multimodal User Interfaces (IF 1.511) Pub Date : 2020-02-03 Stephen Roddy; Brian Bridges
This is a theoretical paper that considers the mapping problem, a foundational issue which arises when designing a sonification, as it applies to sonic information design. We argue that this problem can be addressed by using models from the field of embodied cognitive science, including embodied image schema theory, conceptual metaphor theory and conceptual blends, and from research which treats sound
-
Focused Audification and the optimization of its parameters J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-12-18 Katharina Groß-Vogt; Matthias Frank; Robert Höldrich
We present a sonification method that we call Focused Audification (FA; previously: Augmented Audification), which allows pure audification to be expanded in a flexible way. It is based on a combination of single-sideband modulation and a pitch modulation of the original data stream. Based on two free parameters, the sonification’s frequency range is adjustable to the human hearing range and allows the user to interactively
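The single-sideband component of such a method can be illustrated with the analytic signal: multiplying the Hilbert-derived analytic signal by a complex carrier shifts the data spectrum upward without mirroring. The sketch below shows that step only, with a hypothetical carrier frequency and data stream; it is not the published FA parameterization and omits the pitch-modulation stage.

```python
import numpy as np
from scipy.signal import hilbert

FS = 44100                                      # audio sample rate

def ssb_shift(data, carrier_hz, fs=FS):
    """Single-sideband modulation: shift the data signal's spectrum up by
    carrier_hz, moving a low-frequency data stream into the audible range."""
    analytic = hilbert(data)                    # data + j * Hilbert(data)
    t = np.arange(len(data)) / fs
    return np.real(analytic * np.exp(2j * np.pi * carrier_hz * t))

# Hypothetical data stream: a slow 5 Hz oscillation sampled at audio rate.
t = np.arange(0, 2.0, 1.0 / FS)
data = np.sin(2 * np.pi * 5 * t)
audible = ssb_shift(data, carrier_hz=300.0)     # now centred near 300 Hz
print(audible.shape)
```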
-
Interactive gaze and finger controlled HUD for cars J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-11-23 Gowdham Prabhakar; Aparna Ramakrishnan; Modiksha Madan; L. R. D. Murthy; Vinay Krishna Sharma; Sachin Deshmukh; Pradipta Biswas
Modern infotainment systems in automobiles facilitate driving at the cost of secondary tasks in addition to the primary task of driving. These secondary tasks have a considerable chance of distracting the driver from the primary driving task, thereby reducing safety or increasing cognitive workload. This paper presents an intelligent interactive head-up display (HUD) on the windscreen of the driver that does
-
Comparison of spatial and temporal interaction techniques for 3D audio trajectory authoring J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-11-20 Justin D. Mathew; Stéphane Huot; Brian F. G. Katz
With the popularity of immersive media, developing usable tools for content development is important for the production process. In the context of 3D audio production, user interfaces for authoring and editing 3D audio trajectories enable content developers, composers, practitioners, and recording and mixing engineers to define how audio sources travel in time. However, common interaction techniques
-
A comparative assessment of Wi-Fi and acoustic signal-based HCI methods on the practicality J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-11-20 Hayoung Jeong; Taeho Kang; Jiwon Choi; Jong Kim
Wi-Fi and acoustic signal-based human–computer interaction (HCI) methods have received growing attention in academia. However, despite this flourishing, there are still issues to be addressed. In this work, we evaluate the practicality of state-of-the-art signal-based HCI research in terms of the following six aspects: granularity, robustness, usability, efficiency, stability, and deployability
-
Analysis of conversational listening skills toward agent-based social skills training J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-10-16 Hiroki Tanaka; Hidemi Iwasaka; Hideki Negoro; Satoshi Nakamura
Listening skills are critical for human communication. Social skills training (SST), performed by human trainers, is a well-established method for obtaining appropriate skills in social interaction. Previous work automated the process of social skills training by developing a dialogue system that teaches speaking skills through interaction with a computer agent. Even though previous work that simulated
-
Are older people any different from younger people in the way they want to interact with robots? Scenario based survey J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-07-24 Mriganka Biswas; Marta Romeo; Angelo Cangelosi; Ray B. Jones
Numerous projects, normally run by younger people, are exploring robot use by older people. But are older people any different from younger people in the way they want to interact with robots? Understanding older compared to younger people’s preferences will give researchers more insight into good design. We compared views on multi-modal human–robot interfaces of older people living independently with students
-
Time Well Spent with multimodal mobile interactions J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-07-22 Nadia Elouali
Mobile users lose a lot of time on their smartphones. They interact even with busy hands (hands-free interactions), distracted eyes (eyes-free interactions) and in different life situations (while walking, eating, working, etc.). Time Well Spent (TWS) is a movement that aims to design applications which respect the users’ choices and availability. In this paper, we discuss how the multimodal mobile
-
Elderly users’ acceptance of mHealth user interface (UI) design-based culture: the moderator role of age J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-07-20 Ahmed Alsswey; Hosam Al-Samarraie
In the Arab world, mobile health (mHealth) applications are an effective way to provide health benefits to the medically needy in the absence of health services. However, end users around the world use technology to perform tasks in a way that appears more natural and closer to their cultural and personal preferences. Evidence from prior studies shows that culture is a vital factor in the success of a
-
Gaze-based interactions in the cockpit of the future: a survey J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-07-19 David Rudi; Peter Kiefer; Ioannis Giannopoulos; Martin Raubal
Flying an aircraft is a mentally demanding task where pilots must process a vast amount of visual, auditory and vestibular information. They have to control the aircraft by pulling, pushing and turning different knobs and levers, while knowing that mistakes in doing so can have fatal outcomes. Therefore, attempts to improve and optimize these interactions should not increase pilots’ mental workload
-
GG Interaction: a gaze–grasp pose interaction for 3D virtual object selection J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-07-19 Kunhee Ryu; Joong-Jae Lee; Jung-Min Park
During the last two decades, the development of 3D object selection techniques has been widely studied because it is critical for providing an interactive virtual environment to users. Previous techniques encounter difficulties with selecting small or distant objects, as well as with naturalness and physical fatigue. Although eye-hand based interaction techniques have been promoted as the ideal solution to
-
Sonification supports perception of brightness contrast J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-07-18 Niklas Rönnberg
In complex visual representations, there are several possible challenges for visual perception that might be eased by adding sound as a second modality (i.e. sonification). It was hypothesized that sonification would support visual perception when facing challenges such as the simultaneous brightness contrast or Mach band phenomena. This hypothesis was investigated with an interactive sonification
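An interactive sonification for brightness-contrast illusions can, for example, map the physical luminance at the inspected location to tone pitch, so two patches that look different but are physically equal produce the same sound. The sketch below illustrates that mapping with hypothetical values; it is not the mapping used in the reported study.

```python
import numpy as np

def luminance_to_freq(luminance, f_min=220.0, f_max=880.0):
    """Map a luminance value in [0, 1] linearly to a tone frequency, so regions
    with identical luminance produce identical pitches even when they look
    different due to simultaneous brightness contrast."""
    return f_min + float(luminance) * (f_max - f_min)

def tone(freq, dur=0.3, fs=44100):
    t = np.linspace(0, dur, int(fs * dur), endpoint=False)
    return 0.3 * np.sin(2 * np.pi * freq * t)

# Hypothetical image: two grey patches of identical luminance (0.5) placed on
# dark and bright surrounds, which makes them appear different to the eye.
patch_on_dark, patch_on_bright = 0.5, 0.5
for lum in (patch_on_dark, patch_on_bright):
    print(f"luminance {lum:.2f} -> {luminance_to_freq(lum):.1f} Hz tone")
```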
-
Multi-modal facial expression feature based on deep-neural networks J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-07-17 Wei Wei; Qingxuan Jia; Yongli Feng; Gang Chen; Ming Chu
Emotion recognition based on facial expression is a challenging research topic and has attracted a great deal of attention in the past few years. This paper presents a novel method, utilizing multi-modal strategy to extract emotion features from facial expression images. The basic idea is to combine the low-level empirical feature and the high-level self-learning feature into a multi-modal feature
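The multi-modal idea of combining a low-level empirical descriptor with a high-level self-learned feature can be illustrated as simple feature concatenation followed by a conventional classifier. The sketch below uses random placeholder features of plausible dimensionality; it is not the authors' network or fusion scheme.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_samples = 200

# Placeholder features: in this setting the low-level part would be an empirical
# descriptor of the face (e.g. LBP or HOG) and the high-level part a deep-network
# embedding; here both are random stand-ins of plausible size.
low_level  = rng.standard_normal((n_samples, 59))    # hand-crafted descriptor
high_level = rng.standard_normal((n_samples, 128))   # self-learned embedding
labels     = rng.integers(0, 7, n_samples)           # 7 basic emotion classes

# Multi-modal fusion by concatenation, then a conventional classifier.
fused = np.concatenate([low_level, high_level], axis=1)
clf = SVC(kernel="rbf").fit(fused, labels)
print("fused feature dimension:", fused.shape[1])
```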
-
Spatial-temporal dynamic hand gesture recognition via hybrid deep learning model J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-05-14 Jinghua Li; Huarui Huai; Junbin Gao; Dehui Kong; Lichun Wang
Hand gestures are a natural way of interaction, and hand gesture recognition has recently become more and more popular in human–computer interaction. However, the complexity and variations of hand gestures, such as varying illumination, views, and self-structural characteristics, still make hand gesture recognition challenging. How to design an appropriate feature representation and classifier are
-
Object acquisition and selection using automatic scanning and eye blinks in an HCI system J. Multimodal User Interfaces (IF 1.511) Pub Date : 2019-04-25 Hari Singh; Jaswinder Singh
This paper presents an object acquisition and selection approach in human computer interaction systems. In this approach, objects placed on the computer screen are automatically scanned, and the user performs voluntary eye blinks for object selection when the focus comes over the object of interest. Here, scanning means moving the focus over the objects placed on the computer screen one by one, and the scanning
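The scanning-and-blink interaction can be summarized as a loop: the focus advances over the on-screen objects at a fixed dwell interval, and a voluntary blink while an object is highlighted selects it. The sketch below abstracts blink detection behind a stub; the dwell time, object list, and detector are hypothetical.

```python
import time

def blink_detected() -> bool:
    """Stub for the blink detector; a real system would threshold the
    eye-aspect ratio from a webcam or use an EOG/IR sensor."""
    return False

def scan_and_select(objects, dwell_s=1.0, max_cycles=3):
    """Move the focus over the objects one by one; a voluntary blink while an
    object is highlighted selects it."""
    for _ in range(max_cycles):
        for obj in objects:
            print(f"focus -> {obj}")
            end = time.time() + dwell_s
            while time.time() < end:
                if blink_detected():
                    return obj
                time.sleep(0.05)
    return None                                 # no selection made

selected = scan_and_select(["Open", "Save", "Print", "Close"], dwell_s=0.2, max_cycles=1)
print("selected:", selected)
```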
Contents have been reproduced by permission of the publishers.