-
generAItor: Tree-in-the-Loop Text Generation for Language Model Explainability and Adaptation ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-03-14 Thilo Spinner, Rebecca Kehlbeck, Rita Sevastjanova, Tobias Stähle, Daniel A. Keim, Oliver Deussen, Mennatallah El-Assady
Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation. However, the considered output candidates of the underlying search algorithm are under-explored and under-explained. We tackle this shortcoming by proposing a tree-in-the-loop approach, where a visual representation of the beam search tree is the central
-
“It would work for me too”: How Online Communities Shape Software Developers’ Trust in AI-Powered Code Generation Tools ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-03-09 Ruijia Cheng, Ruotong Wang, Thomas Zimmermann, Denae Ford
While revolutionary AI-powered code generation tools have been rising rapidly, we know little about how and how to help software developers form appropriate trust in those AI tools. Through a two-phase formative study, we investigate how online communities shape developers’ trust in AI tools and how we can leverage community features to facilitate appropriate user trust. Through interviewing 17 developers
-
Insights into Natural Language Database Query Errors: From Attention Misalignment to User Handling Strategies ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-03-02 Zheng Ning, Yuan Tian, Zheng Zhang, Tianyi Zhang, Toby Jia-Jun Li
Querying structured databases with natural language (NL2SQL) has remained a difficult problem for years. Recently, the advancement of machine learning (ML), natural language processing (NLP), and large language models (LLM) have led to significant improvements in performance, with the best model achieving ~85% accuracy on the benchmark Spider dataset. However, there is a lack of a systematic
-
Man and the Machine: Effects of AI-assisted Human Labeling on Interactive Annotation of Real-Time Video Streams ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-02-29 Marko Radeta, Ruben Freitas, Claudio Rodrigues, Agustin Zuniga, Ngoc Thi Nguyen, Huber Flores, Petteri Nurmi
AI-assisted interactive annotation is a powerful way to facilitate data annotation – a prerequisite for constructing robust AI models. While AI-assisted interactive annotation has been extensively studied in static settings, less is known about its usage in dynamic scenarios where the annotators operate under time and cognitive constraints, e.g., while detecting suspicious or dangerous activities from
-
Talk2Data : A Natural Language Interface for Exploratory Visual Analysis via Question Decomposition ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-02-07 Yi Guo, Danqing Shi, Mingjuan Guo, Yanqiu Wu, Nan Cao, Qing Chen
Through a natural language interface (NLI) for exploratory visual analysis, users can directly “ask” analytical questions about the given tabular data. This process greatly improves user experience and lowers the technical barriers of data analysis. Existing techniques focus on generating a visualization from a concrete question. However, complex questions, requiring multiple data queries and visualizations
-
Entity Footprinting: Modeling Contextual User States via Digital Activity Monitoring ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-02-05 Zeinab R. Yousefi, Tung Vuong, Marie AlGhossein, Tuukka Ruotsalo, Giulio Jacucci, Samuel Kaski
Our digital life consists of activities that are organized around tasks and exhibit different user states in the digital contexts around these activities. Previous works have shown that digital activity monitoring can be used to predict entities that users will need to perform digital tasks. There have been methods developed to automatically detect the tasks of a user. However, these studies typically
-
I Know This Looks Bad, But I Can Explain: Understanding When AI Should Explain Actions In Human-AI Teams ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-02-05 Rui Zhang, Christopher Flathmann, Geoff Musick, Beau Schelble, Nathan J. McNeese, Bart Knijnenburg, Wen Duan
Explanation of artificial intelligence (AI) decision-making has become an important research area in human–computer interaction (HCI) and computer-supported teamwork research. While plenty of research has investigated AI explanations with an intent to improve AI transparency and human trust in AI, how AI explanations function in teaming environments remains unclear. Given that a major benefit of AI
-
Predicting Group Choices from Group Profiles ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-02-05 Hanif Emamgholizadeh, Amra Delić, Francesco Ricci
Group recommender systems (GRSs) identify items to recommend to a group of people by aggregating group members’ individual preferences into a group profile and selecting the items that have the largest score in the group profile. The GRS predicts that these recommendations would be chosen by the group by assuming that the group is applying the same preference aggregation strategy as the one adopted
-
Simulation-based Optimization of User Interfaces for Quality-assuring Machine Learning Model Predictions ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-01-09 Yu Zhang, Martijn Tennekes, Tim De Jong, Lyana Curier, Bob Coecke, Min Chen
Quality-sensitive applications of machine learning (ML) require quality assurance (QA) by humans before the predictions of an ML model can be deployed. QA for ML (QA4ML) interfaces require users to view a large amount of data and perform many interactions to correct errors made by the ML model. An optimized user interface (UI) can significantly reduce interaction costs. While UI optimization can be
-
Toward Addressing Ambiguous Interactions and Inferring User Intent with Dimension Reduction and Clustering Combinations in Visual Analytics ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-01-09 John Wenskovitch, Michelle Dowling, Chris North
Direct manipulation interactions on projections are often incorporated in visual analytics applications. These interactions enable analysts to provide incremental feedback to the system in a semi-supervised manner, demonstrating relationships that the analyst wishes to find within the data. However, determining the precise intent of the analyst is a challenge. When an analyst interacts with a projection
-
VERB: Visualizing and Interpreting Bias Mitigation Techniques Geometrically for Word Representations ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-01-09 Archit Rathore, Sunipa Dev, Jeff M. Phillips, Vivek Srikumar, Yan Zheng, Chin-Chia Michael Yeh, Junpeng Wang, Wei Zhang, Bei Wang
Word vector embeddings have been shown to contain and amplify biases in the data they are extracted from. Consequently, many techniques have been proposed to identify, mitigate, and attenuate these biases in word representations. In this article, we utilize interactive visualization to increase the interpretability and accessibility of a collection of state-of-the-art debiasing techniques. To aid this
-
Integrity-based Explanations for Fostering Appropriate Trust in AI Agents ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-01-09 Siddharth Mehrotra, Carolina Centeio Jorge, Catholijn M. Jonker, Myrthe L. Tielman
Appropriate trust is an important component of the interaction between people and AI systems, in that “inappropriate” trust can cause disuse, misuse, or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from their users. Out of the aspects that influence trust, this article focuses on the effect of showing integrity. In particular
-
How Should an AI Trust its Human Teammates? Exploring Possible Cues of Artificial Trust ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2024-01-09 Carolina Centeio Jorge, Catholijn M. Jonker, Myrthe L. Tielman
In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthiness as the combination of (1) whether someone will do a task and (2) whether they can do it. With building beliefs in trustworthiness
-
Meaningful Explanation Effect on User’s Trust in an AI Medical System: Designing Explanations for Non-Expert Users ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-20 Retno Larasati, Anna De Liddo, Enrico Motta
Whereas most research in AI system explanation for healthcare applications looks at developing algorithmic explanations targeted at AI experts or medical professionals, the question we raise is: How do we build meaningful explanations for laypeople? And how does a meaningful explanation affect user’s trust perceptions? Our research investigates how the key factors affecting human-AI trust change in
-
Does this Explanation Help? Designing Local Model-agnostic Explanation Representations and an Experimental Evaluation Using Eye-tracking Technology ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Miguel Angel Meza Martínez, Mario Nadj, Moritz Langner, Peyman Toreini, Alexander Maedche
In Explainable Artificial Intelligence (XAI) research, various local model-agnostic methods have been proposed to explain individual predictions to users in order to increase the transparency of the underlying Artificial Intelligence (AI) systems. However, the user perspective has received less attention in XAI research, leading to a (1) lack of involvement of users in the design process of local model-agnostic
-
Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Yiran Li, Junpeng Wang, Takanori Fujiwara, Kwan-Liu Ma
Adversarial attacks on a convolutional neural network (CNN)—injecting human-imperceptible perturbations into an input image—could fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs, and prevents them from being used in safety-critical applications, such as medical diagnosis and autonomous driving. Our
-
Co-design of Human-centered, Explainable AI for Clinical Decision Support ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Cecilia Panigutti, Andrea Beretta, Daniele Fadda, Fosca Giannotti, Dino Pedreschi, Alan Perotti, Salvatore Rinzivillo
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing
-
Effects of AI and Logic-Style Explanations on Users’ Decisions Under Different Levels of Uncertainty ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Federico Maria Cau, Hanna Hauptmann, Lucio Davide Spano, Nava Tintarev
Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, although previous work evaluates the users’ understanding of explanations, factors influencing the decision support are largely overlooked in the literature. This article addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty
-
Directive Explanations for Actionable Explainability in Machine Learning Applications ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Ronal Singh, Tim Miller, Henrietta Lyons, Liz Sonenberg, Eduardo Velloso, Frank Vetere, Piers Howe, Paul Dourish
In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of
-
LIMEADE: From AI Explanations to Advice Taking ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Benjamin Charles Germain Lee, Doug Downey, Kyle Lo, Daniel S. Weld
Research in human-centered AI has shown the benefits of systems that can explain their predictions. Methods that allow AI to take advice from humans in response to explanations are similarly useful. While both capabilities are well developed for transparent learning models (e.g., linear models and GA2Ms) and recent techniques (e.g., LIME and SHAP) can generate explanations for opaque models, little
-
How Do Users Experience Traceability of AI Systems? Examining Subjective Information Processing Awareness in Automated Insulin Delivery (AID) Systems ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Tim Schrills, Thomas Franke
When interacting with artificial intelligence (AI) in the medical domain, users frequently face automated information processing, which can remain opaque to them. For example, users with diabetes may interact daily with automated insulin delivery (AID). However, effective AID therapy requires traceability of automated decisions for diverse users. Grounded in research on human-automation interaction
-
The Role of Explainable AI in the Research Field of AI Ethics ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Heidi Vainio-Pekka, Mamia Ori-Otse Agbese, Marianna Jantunen, Ville Vakkuri, Tommi Mikkonen, Rebekah Rousi, Pekka Abrahamsson
Ethics of Artificial Intelligence (AI) is a growing research field that has emerged in response to the challenges related to AI. Transparency poses a key challenge for implementing AI ethics in practice. One solution to transparency issues is AI systems that can explain their decisions. Explainable AI (XAI) refers to AI systems that are interpretable or understandable to humans. The research fields
-
XAutoML: A Visual Analytics Tool for Understanding and Validating Automated Machine Learning ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Marc-André Zöller, Waldemar Titov, Thomas Schlegel, Marco F. Huber
In the last 10 years, various automated machine learning (AutoML) systems have been proposed to build end-to-end machine learning (ML) pipelines with minimal human interaction. Even though such automatically synthesized ML pipelines are able to achieve competitive performance, recent studies have shown that users do not trust models constructed by AutoML due to missing transparency of AutoML systems
-
Explainable Activity Recognition in Videos using Deep Learning and Tractable Probabilistic Models ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-12-08 Chiradeep Roy, Mahsan Nourani, Shivvrat Arya, Mahesh Shanbhag, Tahrima Rahman, Eric D. Ragan, Nicholas Ruozzi, Vibhav Gogate
We consider the following video activity recognition (VAR) task: given a video, infer the set of activities being performed in the video and assign each frame to an activity. Although VAR can be solved accurately using existing deep learning techniques, deep networks are neither interpretable nor explainable and as a result their use is problematic in high stakes decision-making applications (in healthcare
-
The Impact of Intelligent Pedagogical Agents’ Interventions on Student Behavior and Performance in Open-Ended Game Design Environments ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Özge Nilay Yalçın, Sébastien Lallé, Cristina Conati
Research has shown that free-form Game-Design (GD) environments can be very effective in fostering Computational Thinking (CT) skills at a young age. However, some students can still need some guidance during the learning process due to the highly open-ended nature of these environments. Intelligent Pedagogical Agents (IPAs) can be used to provide personalized assistance in real-time to alleviate this
-
Learning and Understanding User Interface Semantics from Heterogeneous Networks with Multimodal and Positional Attributes ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Gary Ang, Ee-Peng Lim
User interfaces (UI) of desktop, web, and mobile applications involve a hierarchy of objects (e.g., applications, screens, view class, and other types of design objects) with multimodal (e.g., textual and visual) and positional (e.g., spatial location, sequence order, and hierarchy level) attributes. We can therefore represent a set of application UIs as a heterogeneous network with multimodal and
-
Enabling Efficient Web Data-Record Interaction for People with Visual Impairments via Proxy Interfaces ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Javedul Ferdous, Hae-Na Lee, Sampath Jayarathna, Vikas Ashok
Web data records are usually accompanied by auxiliary webpage segments, such as filters, sort options, search form, and multi-page links, to enhance interaction efficiency and convenience for end users. However, blind and visually impaired (BVI) persons are presently unable to fully exploit the auxiliary segments like their sighted peers, since these segments are scattered all across the screen, and
-
Crowdsourcing Thumbnail Captions: Data Collection and Validation ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Carlos Aguirre, Shiye Cao, Amama Mahmood, Chien-Ming Huang
Speech interfaces, such as personal assistants and screen readers, read image captions to users. Typically, however, only one caption is available per image, which may not be adequate for all situations (e.g., browsing large quantities of images). Long captions provide a deeper understanding of an image but require more time to listen to, whereas shorter captions may not allow for such thorough comprehension
-
Conversational Context-sensitive Ad Generation with a Few Core-Queries ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Ryoichi Shibata, Shoya Matsumori, Yosuke Fukuchi, Tomoyuki Maekawa, Mitsuhiko Kimoto, Michita Imai
When people are talking together in front of digital signage, advertisements that are aware of the context of the dialogue will work the most effectively. However, it has been challenging for computer systems to retrieve the appropriate advertisement from among the many options presented in large databases. Our proposed system, the Conversational Context-sensitive Advertisement generator (CoCoA), is
-
RadarSense: Accurate Recognition of Mid-air Hand Gestures with Radar Sensing and Few Training Examples ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Arthur Sluÿters, Sébastien Lambot, Jean Vanderdonckt, Radu-Daniel Vatavu
Microwave radars bring many benefits to mid-air gesture sensing due to their large field of view and independence from environmental conditions, such as ambient light and occlusion. However, radar signals are highly dimensional and usually require complex deep learning approaches. To understand this landscape, we report results from a systematic literature review of (N=118) scientific papers on radar
-
When Biased Humans Meet Debiased AI: A Case Study in College Major Recommendation ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Clarice Wang, Kathryn Wang, Andrew Y. Bian, Rashidul Islam, Kamrun Naher Keya, James Foulds, Shimei Pan
Currently, there is a surge of interest in fair Artificial Intelligence (AI) and Machine Learning (ML) research which aims to mitigate discriminatory bias in AI algorithms, e.g., along lines of gender, age, and race. While most research in this domain focuses on developing fair AI algorithms, in this work, we examine the challenges which arise when humans and fair AI interact. Our results show that
-
Generalisable Dialogue-based Approach for Active Learning of Activities of Daily Living ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-09-11 Ronnie Smith, Mauro Dragone
While Human Activity Recognition systems may benefit from Active Learning by allowing users to self-annotate their Activities of Daily Living (ADLs), many proposed methods for collecting such annotations are for short-term data collection campaigns for specific datasets. We present a reusable dialogue-based approach to user interaction for active learning in activity recognition systems, which utilises
-
Visual Analytics of Co-Occurrences to Discover Subspaces in Structured Data ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-06-19 Wolfgang Jentner, Giuliana Lindholz, Hanna Hauptmann, Mennatallah El-Assady, Kwan-Liu Ma, Daniel Keim
We present an approach that shows all relevant subspaces of categorical data condensed in a single picture. We model the categorical values of the attributes as co-occurrences with data partitions generated from structured data using pattern mining. We show that these co-occurrences are a-priori, allowing us to greatly reduce the search space, effectively generating the condensed picture where conventional
-
Explainable Activity Recognition for Smart Home Systems ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-05-05 Devleena Das, Yasutaka Nishimura, Rajan P. Vivek, Naoto Takeda, Sean T. Fish, Thomas Plötz, Sonia Chernova
Smart home environments are designed to provide services that help improve the quality of life for the occupant via a variety of sensors and actuators installed throughout the space. Many automated actions taken by a smart home are governed by the output of an underlying activity recognition system. However, activity recognition systems may not be perfectly accurate, and therefore inconsistencies in
-
Combining the Projective Consciousness Model and Virtual Humans for Immersive Psychological Research: A Proof-of-concept Simulating a ToM Assessment ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-05-05 D. Rudrauf, G. Sergeant-Perthuis, Y. Tisserand, T. Monnor, V. De Gevigney, O. Belli
Relating explicit psychological mechanisms and observable behaviours is a central aim of psychological and behavioural science. One of the challenges is to understand and model the role of consciousness and, in particular, its subjective perspective as an internal level of representation (including for social cognition) in the governance of behaviour. Toward this aim, we implemented the principles
-
GRAFS: Graphical Faceted Search System to Support Conceptual Understanding in Exploratory Search ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-05-05 Mengtian Guo, Zhilan Zhou, David Gotz, Yue Wang
When people search for information about a new topic within large document collections, they implicitly construct a mental model of the unfamiliar information space to represent what they currently know and guide their exploration into the unknown. Building this mental model can be challenging as it requires not only finding relevant documents but also synthesizing important concepts and the relationships
-
Explaining Recommendations through Conversations: Dialog Model and the Effects of Interface Type and Degree of Interactivity ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-04-12 Diana C. Hernandez-Bocanegra, Jürgen Ziegler
Explaining system-generated recommendations based on user reviews can foster users’ understanding and assessment of the recommended items and the recommender system (RS) as a whole. While up to now explanations have mostly been static, shown in a single presentation unit, some interactive explanatory approaches have emerged in explainable artificial intelligence (XAI), making it easier for users to
-
The Influence of Personality Traits on User Interaction with Recommendation Interfaces ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-03-10 Dongning Yan, Li Chen
Users’ personality traits can take an active role in affecting their behavior when they interact with a computer interface. However, in the area of recommender systems (RS), although personality-based RSs have been extensively studied, most work focuses on algorithm design, with little attention paid to studying whether and how personality may influence users’ interaction with the recommendation interface
-
EDAssistant: Supporting Exploratory Data Analysis in Computational Notebooks with In Situ Code Search and Recommendation ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-03-09 Xingjun Li, Yizhi Zhang, Justin Leung, Chengnian Sun, Jian Zhao
Using computational notebooks (e.g., Jupyter Notebook), data scientists rationalize their exploratory data analysis (EDA) based on their prior experience and external knowledge, such as online examples. For novices or data scientists who lack specific knowledge about the dataset or problem to investigate, effectively obtaining and understanding the external information is critical to carrying out EDA
-
Synthesizing Game Levels for Collaborative Gameplay in a Shared Virtual Environment ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-03-09 Huimin Liu, Minsoo Choi, Dominic Kao, Christos Mousas
We developed a method to synthesize game levels that accounts for the degree of collaboration required by two players to finish a given game level. We first asked a game level designer to create playable game level chunks. Then, two artificial intelligence (AI) virtual agents driven by behavior trees played each game level chunk. We recorded the degree of collaboration required to accomplish each game
-
A Personalized Interaction Mechanism Framework for Micro-moment Recommender Systems ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-03-09 Yi-Ling Lin, Shao-Wei Lee
The emergence of the micro-moment concept highlights the influence of context; recommender system design should reflect this trend. In response to different contexts, a micro-moment recommender system (MMRS) requires an effective interaction mechanism that allows users to easily interact with the system in a way that supports autonomy and promotes the creation and expression of self. We study four
-
Visualization and Visual Analytics Approaches for Image and Video Datasets: A Survey ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-03-09 Shehzad Afzal, Sohaib Ghani, Mohamad Mazen Hittawe, Sheikh Faisal Rashid, Omar M. Knio, Markus Hadwiger, Ibrahim Hoteit
Image and video data analysis has become an increasingly important research area with applications in different domains such as security surveillance, healthcare, augmented and virtual reality, video and image editing, activity analysis and recognition, synthetic content generation, distance education, telepresence, remote sensing, sports analytics, art, non-photorealistic rendering, search engines
-
Directive Explanations for Actionable Explainability in Machine Learning Applications ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-01-12 Ronal Singh, Tim Miller, Henrietta Lyons, Liz Sonenberg, Eduardo Velloso, Frank Vetere, Piers Howe, Paul Dourish
In this paper, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also by explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of
-
Enabling Efficient Web Data-Record Interaction for People with Visual Impairments via Proxy Interfaces ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-01-10 Javedul Ferdous, Hae-Na Lee, Sampath Jayarathna, Vikas Ashok
Web data records are usually accompanied by auxiliary webpage segments, such as filters, sort options, search forms, and multi-page links, to enhance interaction efficiency and convenience for end users. However, blind and visually impaired (BVI) persons are presently unable to fully exploit the auxiliary segments like their sighted peers, since these segments are scattered all across the screen, and
-
The Impact of Intelligent Pedagogical Agents’ Interventions on Student Behavior and Performance in Open-Ended Game Design Environments ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2023-01-04 Özge Nilay Yalçın, Sébastien Lallé, Cristina Conati
Research has shown that free-form Game-Design (GD) environments can be very effective in fostering Computational Thinking (CT) skills at a young age. However, some students may still need guidance during the learning process due to the highly open-ended nature of these environments. Intelligent Pedagogical Agents (IPAs) can be used to provide personalized assistance in real-time to alleviate this
-
Learning and Understanding User Interface Semantics from Heterogeneous Networks with Multimodal and Positional Attributes ACM Trans. Interact. Intell. Syst. (IF 3.4) Pub Date : 2022-12-23 Gary Ang, Ee-Peng Lim
User interfaces (UIs) of desktop, web, and mobile applications involve a hierarchy of objects (e.g., applications, screens, view classes, and other types of design objects) with multimodal (e.g., textual, visual) and positional (e.g., spatial location, sequence order, and hierarchy level) attributes. We can therefore represent a set of application UIs as a heterogeneous network with multimodal and positional