Evaluation of haptic virtual reality user interfaces for medical marking on 3D models

https://doi.org/10.1016/j.ijhcs.2020.102561

Highlights

  • Two haptic VR interfaces are proposed for interacting with volumetric medical images, to improve current computer-aided medical diagnosis and planning.

  • Compared with the traditional 2D interface that uses a screen-based 2D display and a mouse, the haptic VR interfaces provide a flexible, head movement-based viewing perspective and accept 3D hand gestures as the input modality, with haptic (vibrotactile or kinesthetic) feedback.

  • The study evaluated the two haptic VR interfaces in a medical marking experiment on 3D models of human anatomic structures, in terms of task completion time, marking accuracy and user experience, and demonstrated their practical usability by comparing them with the traditional 2D interface as the baseline.

Abstract

Three-dimensional (3D) visualization has been widely used in computer-aided medical diagnosis and planning. To interact with 3D models, current user interfaces in medical systems mainly rely on traditional 2D interaction techniques, employing a mouse and a 2D display. Haptic virtual reality (VR) interfaces, which combine VR equipment with haptic devices, promise intuitive and realistic 3D interaction. However, the practical usability of haptic VR interfaces in this medical field remains unexplored. In this study, we propose two haptic VR interfaces, a vibrotactile VR interface and a kinesthetic VR interface, for medical diagnosis and planning on volumetric medical images. Both interfaces used a head-mounted VR display as the visual output channel; the vibrotactile VR interface used a VR controller with vibrotactile feedback as the manipulation tool, whereas the kinesthetic VR interface used a kinesthetic force-feedback device. We evaluated these two VR interfaces in an experiment involving medical marking on 3D models, comparing them with the current state-of-the-art 2D interface as the baseline. The results showed that the kinesthetic VR interface performed the best in terms of marking accuracy, whereas the vibrotactile VR interface performed the best in terms of task completion time. Overall, the participants preferred the kinesthetic VR interface for the medical task.

Introduction

Three-dimensional (3D) visualization has been widely used in many professional fields, such as medicine (Preim and Bartz, 2007), architecture and industrial manufacturing (Bouchlaghem et al., 2005). In medicine, 3D visualizations of the human skeleton, organs and other anatomic structures are generated from radiological imaging, such as computed tomography (CT) and magnetic resonance imaging (MRI) scans (Sutton, 1993). The 3D visualization technique offers numerous benefits. For example, volumetric medical images can enable medical students to better understand the spatial anatomy of body organs (Silén et al., 2008), improve the accuracy of medical diagnoses (Satava and Robb, 1997) and help surgeons plan and simulate surgical procedures (Gross, 1998).

Highlighting relevant points on volumetric medical images from CT and MRI scans is an important 3D manipulation task performed by medical practitioners in computer-aided medical diagnosis and planning. Medical practitioners manipulate the models (i.e., rotate, pan and zoom) and mark critical points for later inspection, for measurement and analysis of skeletal relationships (Kula and Ghoneima, 2018), for treatment planning (Harrell, 2007) and as a tool for discussing and developing treatment consensus (Reinschluessel et al., 2019). For example, during cephalometric tracing, medical practitioners select and mark a point on the skeleton model or surrounding soft tissue as a point of reference for operations related to positioning, measurement and orientation. The task difficulty depends on the marking locations and the structural complexity of the virtual models (Medellín-Castillo et al., 2016). The accuracy of the markers directly influences the results of the medical analyses and, thus, the overall quality of the medical services (Lindner et al., 2016).
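At its core, placing such a marker means finding where a pointing ray hits the model's surface. As a minimal illustration (our sketch with hypothetical names, not the authors' implementation), the following Python snippet casts a ray through the scene and returns the nearest intersection with a triangle mesh, using the classic Möller–Trumbore test:

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                  # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def pick_marker(origin, direction, triangles):
    """Return the nearest surface point hit by the ray, i.e. the marker position."""
    hits = [t for tri in triangles
            if (t := ray_triangle_intersect(origin, direction, *tri)) is not None]
    return origin + min(hits) * direction if hits else None
```

In a 2D interface the ray would come from unprojecting the mouse cursor; in a VR interface it can originate from the tracked controller or the stylus tip of a force-feedback device.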

Despite the recent advances in 3D visualization technology, the tools used to present and interact with these volumetric images have not changed in the field of medicine. A conventional 2D display is still the main visual channel for presenting volumetric data from CT and MRI scans, providing the user with a fixed screen-based viewing perspective. Further, it is still common practice to use a mouse with the rotate-pan-zoom technique to indirectly manipulate 3D models (Jankowski and Hachet, 2013). However, previous studies (Hinckley et al., 1997; Bowman et al., 2004) have argued that using a mouse-based interface for 3D manipulation is difficult. Some researchers have investigated mouse-based rotation techniques to understand their limitations for 3D manipulation (Bade et al., 2005). Others have conducted comparative studies to examine the usability of alternative user interfaces, such as the tangible interface (Besançon et al., 2017) and the touchscreen-based interface (Yu et al., 2010). However, these interaction methods either did not exceed the performance of the mouse (Yu et al., 2010) or had a limited application area (Besançon et al., 2017).
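To make the indirectness of mouse-based rotation concrete: a common realization of the rotate part of rotate-pan-zoom is the virtual trackball (arcball), which maps a 2D mouse drag onto a 3D rotation. Below is a minimal sketch of the general technique (our illustration, not code from the cited studies):

```python
import numpy as np

def arcball_vector(x, y, width, height):
    """Project a 2D window coordinate onto a virtual unit sphere."""
    # Normalize to [-1, 1], with y pointing up.
    p = np.array([2.0 * x / width - 1.0, 1.0 - 2.0 * y / height, 0.0])
    d = p[0]**2 + p[1]**2
    if d <= 1.0:
        p[2] = np.sqrt(1.0 - d)   # cursor projects onto the sphere
    else:
        p /= np.sqrt(d)           # outside: clamp to the sphere's equator
    return p

def arcball_rotation(x0, y0, x1, y1, width, height):
    """Axis-angle rotation induced by a mouse drag from (x0, y0) to (x1, y1)."""
    a = arcball_vector(x0, y0, width, height)
    b = arcball_vector(x1, y1, width, height)
    axis = np.cross(a, b)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return axis, angle
```

The difficulty criticized in the cited studies is visible here: the user never specifies a 3D rotation directly, only a 2D drag from which an axis and angle are inferred.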

Following the technical advances in virtual reality (VR), consumer VR equipment (e.g., Oculus Rift (Oculus, 2020) and HTC Vive (VIVE, 2020)) has been developed. A combination of a VR headset and a handheld VR controller can provide the user with an intuitive and immersive interaction environment. In this VR interface, 3D models are presented to the user through the head-mounted display, and the user can manipulate the models using the VR controller with six degrees of freedom (Oculus, 2020; VIVE, 2020). Compared with the traditional 2D interface, VR devices offer a flexible 3D view based on the position and orientation of the user's head and allow 3D hand gestures to manipulate the objects. In addition, the VR controller can provide vibrotactile feedback to the user's hand and enable tactile interaction. Vibrotactile feedback as an augmentative sensory channel has many medical applications, such as robot-assisted teleoperation (Peddamatham et al., 2008), minimally invasive surgery (Schoonmaker and Cao, 2006) and rehabilitation medicine (Shing et al., 2003). Because of the flexible viewing perspective and the natural hand-based input, the VR interface has been proposed for use in the field of medicine. For example, it has been employed to interact with skeleton and organ models for anatomy learning (Fahmi et al., 2019) and treatment planning (Reinschluessel et al., 2019). Multiple companies have employed it to develop software for medical diagnosis services (e.g., Surgical Theater, 2020; Adesante, 2020). However, the potentially beneficial vibrotactile feedback generated by the VR controller has not been used in these interactive VR systems.
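For a Vive-class controller, vibrotactile feedback is typically driven through the OpenVR runtime. The sketch below is a minimal example using the pyopenvr bindings; it assumes a right-hand controller is connected, and the pulse-loop workaround and all parameter values are our assumptions, not the authors' implementation. It emits a short vibration burst, e.g. to confirm that a marker was placed:

```python
import time
import openvr  # pyopenvr bindings for the OpenVR/SteamVR runtime

openvr.init(openvr.VRApplication_Other)  # attach to the running SteamVR
system = openvr.VRSystem()

# Resolve which tracked device is the right-hand controller (assumed present).
right_hand = system.getTrackedDeviceIndexForControllerRole(
    openvr.TrackedControllerRole_RightHand)

def vibrate(device_index, duration_s=0.2):
    """Emit a vibrotactile burst by retriggering short haptic pulses.
    OpenVR caps a single pulse at a few milliseconds, so longer feedback
    is commonly produced by repeating pulses in a loop."""
    end = time.time() + duration_s
    while time.time() < end:
        # axis id 0; pulse length in microseconds (max ~3999)
        system.triggerHapticPulse(device_index, 0, 3000)
        time.sleep(0.005)  # pulses cannot be retriggered back-to-back

vibrate(right_hand)  # e.g., feedback when a marking point is confirmed
```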

Further, force-feedback devices, such as the Geomagic Touch (3D Systems, 2020) and the Novint Falcon (Novint, 2020), have been proposed as another beneficial class of interaction devices for medical services (Ribeiro et al., 2016). These devices support bidirectional kinesthetic exploration. The mechanical arm of the device not only allows hand-based motions with six degrees of freedom for object manipulation, but also transfers the generated kinesthetic feedback to the hand to simulate the feeling of touch (Massie and Salisbury, 1994). Force-feedback devices have been used with a 2D display, for example, for anatomy education (Kinnison et al., 2009), surgery training (Steinberg et al., 2007; Webster et al., 2004) and medical analysis (Medellín-Castillo et al., 2016). To the best of our knowledge, the only study that has combined a force-feedback device with a VR headset is the work by Saad et al. (2018), which investigated the technical feasibility of connecting these devices.
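The kinesthetic feedback itself is classically rendered with a penalty-based spring-damper model, in the spirit of the PHANTOM work (Massie and Salisbury, 1994): when the device tip penetrates the virtual surface, a restoring force proportional to the penetration depth is sent back to the arm. A minimal sketch of this general technique, with illustrative constants (not the authors' rendering code):

```python
import numpy as np

def contact_force(tip_pos, tip_vel, surface_point, surface_normal,
                  k=300.0, b=1.5):
    """Penalty-based haptic rendering: a virtual spring-damper pushes the
    device tip back to the surface once it penetrates the model.
    k (N/m) and b (N*s/m) are illustrative stiffness/damping values."""
    n = surface_normal / np.linalg.norm(surface_normal)
    depth = np.dot(surface_point - tip_pos, n)   # > 0 when tip is inside
    if depth <= 0.0:
        return np.zeros(3)                       # free motion: no force
    f_spring = k * depth * n                     # restoring spring force
    f_damp = -b * np.dot(tip_vel, n) * n         # damping along the normal
    return f_spring + f_damp
```

A full implementation would typically run this loop at the device's ~1 kHz haptic rate and use a proxy (god-object) point to avoid pop-through on thin structures.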

We combined a VR headset with a VR controller and with a force-feedback device to create haptic VR interfaces that provide the user with a flexible viewing perspective, natural hand-based input and haptic feedback simultaneously. These VR interfaces can enable intuitive and realistic 3D interaction and are thus promising for tasks involving 3D manipulation (Bowman et al., 2004), such as medical diagnosis and planning. However, the usability of haptic VR interfaces for these medical tasks, covering effectiveness, efficiency and satisfaction (Issa and Isaias, 2015), has not been explored. Furthermore, the two VR interfaces are based on similar interaction models but employ different interaction devices with different types of haptic feedback. Their difference in usability remains unclear in the context of medical diagnosis and planning. A comparison of the two VR interfaces can help to better understand the suitability of their interaction methods for 3D manipulation and reveal the effects of different types of haptic feedback in these high-standard medical tasks. More importantly, the 2D interaction method using a mouse and a 2D display is still a powerful user interface and remains dominant in the field of medicine. A comparative study with the 2D interaction technique is therefore necessary to explore the potential of haptic VR interfaces to improve current medical diagnosis and planning work.

In the present study, we examined the two haptic VR interfaces, the kinesthetic VR interface using a force-feedback device and the vibrotactile VR interface using a VR controller, in an experiment involving medical marking on 3D models. To examine their practical usability, we compared the two VR interfaces with the traditional 2D interface, which uses a mouse and a 2D display, as the baseline. Because the structural complexity of the models and the marking locations can influence users' performance in the medical marking task, we employed three human anatomic structures as the experimental models, with two difficulty levels for the marking positions. To evaluate the three user interfaces, we collected both objective and subjective data. The objective data included task completion time and marking accuracy, and the subjective data included ratings of perceived mental effort, hand fatigue, naturalness, immersiveness and user preference. The aim of the study was to answer the following questions in the context of medical marking:

  • What are the differences between the kinesthetic VR interface and the vibrotactile VR interface, in terms of task completion time, marking accuracy and user experience? How do the marking locations affect users' performance with the two VR interfaces?

  • What are the differences between the two VR interfaces and the traditional 2D interface? How do the marking locations affect users' performance with the traditional 2D interface?

This study makes the following contributions: We proposed two haptic VR interfaces to interact with volumetric medical images for computer-aided medical diagnosis and planning. The vibrotactile VR interface and the kinesthetic VR interface were evaluated based on a medical diagnosis and planning task on virtual models of the human skeleton and organs. The results revealed the strengths and weaknesses of the two VR interfaces built on currently popular VR equipment and haptic devices, providing an empirical basis for developing efficient and user-friendly interactive VR systems. In addition, the comparison with the 2D interaction technique showed that the two VR interfaces performed better in terms of marking accuracy (the kinesthetic VR interface) and task completion time (the vibrotactile VR interface), demonstrating their potential to replace the traditional 2D interface for these medical tasks.

The paper is organized as follows. Relevant previous studies are introduced, and then the prototype system and the experiment are described. The results are presented in detail, followed by the discussion of the main findings and the conclusion.

Section snippets

2D and 3D visualization in the field of medicine

In current medical imaging systems, the most common information visualization is based on 2D slices. Slice-by-slice views support accurate exploration and diagnosis of medical imaging data (Tietjen et al., 2006). At the same time, 3D volume-rendering visualization has become a valuable technique in the diagnosis and planning phases. It helps medical staff understand 3D spatial relations and gain an overview of the model structure, as well as facilitates diagnostic analysis (Tietjen et al., 2006

Design of the prototype system

The experimental prototype system included three user interfaces: a vibrotactile VR interface (V), a kinesthetic VR interface (K) and the traditional 2D interface (T). The abbreviations (V, K, T) are used for the three interfaces in the figures.

The vibrotactile VR interface employed a VR headset as the visual output channel and a VR controller with vibrotactile feedback as the manipulation tool. The VR headset provided realistic visual feedback with a flexible head movement-based viewing perspective. While

Apparatus and environment

The host computer in the experiment was an MSI GS63VR 7RF Stealth Pro laptop with an Intel i7-7700HQ processor, a GeForce GTX 1060 graphics card and 16 GB of RAM. We used a Vive VR headset (VIVE, 2020) and a Samsung 245B Plus 24″ LCD display as the visual displays in the experiment. A standard computer mouse, a Vive controller and a Geomagic Touch X force-feedback device (3D Systems, 2020) were utilized as the manipulation tools for the three user interfaces. A standard keyboard was used for the

Results

We collected two types of objective data from the experiment: task completion times and the positions of the participants' markers. We used the task completion times to evaluate the interaction speed, and calculated the error distances between the participants' markers and the marking positions required by the tasks to evaluate the marking accuracy. The Shapiro–Wilk normality test showed that the data were not normally distributed (all p < .001). Thus, we analyzed the data using the 3 × 2
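As a concrete illustration of these measures (a sketch under our assumptions with placeholder data, not the authors' analysis scripts), marking accuracy reduces to the Euclidean distance between each placed marker and its target position, and the reported normality check can be reproduced with the Shapiro–Wilk test in SciPy:

```python
import numpy as np
from scipy import stats

def error_distances(marked, targets):
    """Euclidean error between each participant marker and its target.
    Both arguments are (N, 3) arrays of 3D coordinates."""
    return np.linalg.norm(np.asarray(marked) - np.asarray(targets), axis=1)

# Placeholder data standing in for the recorded marker/target positions.
rng = np.random.default_rng(0)
target_points = rng.normal(size=(30, 3))
marked_points = target_points + rng.exponential(0.5, size=(30, 3))

# Normality check as described in the text (p < .001 -> non-normal,
# motivating a non-parametric analysis of the 3 x 2 design).
errors = error_distances(marked_points, target_points)
statistic, p_value = stats.shapiro(errors)
print(f"Shapiro-Wilk: W = {statistic:.3f}, p = {p_value:.4f}")
```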

Discussion

This study experimentally compared the two haptic VR interfaces (vibrotactile and kinesthetic) with the traditional 2D interface. The interfaces varied in terms of the visual display as well as the manipulation tool. They were evaluated in an experiment involving a 3D medical marking task, and the results showed that the different user interfaces influenced the task performance in terms of interaction speed and accuracy. In addition, the difference in the marking locations modulated the task

Conclusion

Three-dimensional visualization has been widely used in computer-aided medical services such as highly accurate diagnosis and planning. The user interfaces for interacting with the 3D models are still largely based on the traditional 2D interaction method with a mouse and a 2D display. In this study, we proposed haptic VR interfaces to manipulate 3D models for the medical diagnosis and planning tasks and implemented a prototype system including two types of haptic VR interfaces (kinesthetic and

CRediT authorship contribution statement

Zhenxing Li: Conceptualization, Methodology, Software, Formal analysis, Investigation, Writing - original draft, Writing - review & editing, Visualization. Maria Kiiveri: Methodology, Software, Investigation, Writing - review & editing. Jussi Rantala: Validation, Writing - review & editing. Roope Raisamo: Resources, Writing - review & editing, Supervision, Project administration, Funding acquisition.

Declaration of competing interest

There is no conflict of interest regarding the publication of this article.

Zhenxing Li received his MSc degree in Radio Frequency Electronics from Tampere University of Technology and his MSc degree in User Interface Software Development from the University of Tampere, in 2009 and 2013 respectively. He is a researcher in the Tampere Unit for Computer-Human Interaction (TAUCHI) Research Center at Tampere University and is working toward a PhD degree in Interactive Technology. His research interests include haptics, VR, gaze, multimodal interaction, mathematical modelling and algorithms.

References (73)

  • Adesante, 2020. VR solution for Surgeons. https://www.surgeryvision.com/ (accessed 15 September...
  • A. Alaraj et al., Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback. Operative Neurosurgery (2015).
  • A. Al-Khalifah et al., Using virtual reality for medical diagnosis, training and education. Int. J. Dis. Human Dev. (2006).
  • R. Bade et al., Usability comparison of mouse-based interaction techniques for predictable 3D rotation.
  • N.E. Berthier et al., Visual information and object size in the control of reaching. J. Mot. Behav. (1996).
  • L. Besançon et al., Mouse, tactile, and tangible input for 3D manipulation.
  • W. Bholsithi et al., 3D vs. 2D cephalometric analysis comparisons with repeated measurements from 20 Thai males and 20 Thai females. Biomed. Imaging Interv. J. (2009).
  • D. Bielser et al., Interactive simulation of surgical cuts.
  • D.A. Bowman et al., 3D User Interfaces: Theory and Practice (2004).
  • M. Chen et al., A study in interactive 3-D rotation using 2-D control devices. ACM SIGGRAPH Computer Graphics (1988).
  • A. El Saddik et al., Haptics Technologies: Bringing Touch to Multimedia.
  • F. Fahmi et al., 3D anatomy learning system using virtual reality and VR controller. Journal of Physics: Conference Series (2019).
  • A. Fischer et al., PHANToM haptic device implemented in a projection screen virtual environment.
  • J.D. Foley et al., Computer Graphics: Principles and Practice in C (1995).
  • D.H. Gates et al., The effects of muscle fatigue and movement height on movement stability and variability. Exp. Brain Res. (2011).
  • M.H. Gross, Computer graphics in medicine: from visualization to surgery simulation. ACM SIGGRAPH Computer Graphics (1998).
  • W.E. Harrell, Three-dimensional diagnosis & treatment planning: the use of 3D facial imaging & ConeBeam CT (CBCT) in orthodontics & dentistry. Australasian Dental Practice (2007).
  • J.D. Hincapié-Ramos et al., Consumed endurance: a metric to quantify arm fatigue of mid-air interactions.
  • K. Hinckley et al., Usability analysis of 3D rotation techniques.
  • S. Holm, A simple sequentially rejective multiple test procedure. Scand. J. Statist. (1979).
  • The Next Generation 1996 Lexicon A to Z: TrackBall. Next Generation (1996).
  • T. Issa et al., Usability and human computer interaction (HCI). Sustainable Design (2015).
  • J. Jankowski et al., A survey of interaction techniques for interactive 3D environments.
  • T. Jones et al., The impact of virtual reality on chronic pain. PLoS ONE (2016).
  • H. Kim et al., Performance comparison of user interface devices for controlling mining software in virtual reality environments. Appl. Sci. (2019).
  • F. King et al., An immersive virtual reality environment for diagnostic imaging. J. Medical Robotics Research (2016).

Maria Kiiveri received her MSc degree in Computer Science from the University of Tampere in 2019. She has worked in the Tampere Unit for Computer-Human Interaction (TAUCHI) Research Center at Tampere University in a project that develops innovative technical solutions for medical and surgical use. She is currently employed in a company that provides software solutions for clinical laboratories.

Jussi Rantala received his MSc in Computer Science (2007) and PhD in Interactive Technology (2014) from the University of Tampere. He is a Senior Research Fellow at the Tampere Unit for Computer-Human Interaction (TAUCHI) at Tampere University. His research focuses on human-computer interaction, with special interest in haptics, olfaction, multisensory interfaces and XR.

Roope Raisamo is a professor of computer science at Tampere University, Faculty of Information Technology and Communication Sciences. He received his PhD degree in computer science from the University of Tampere in 1999. He is the head of the TAUCHI Research Center (Tampere Unit for Computer-Human Interaction), leading the Multimodal Interaction Research Group. His 25-year research experience in the field of human-technology interaction is focused on multimodal interaction, XR, haptics, gaze, gestures, interaction techniques and software architectures.
