Research Article · Open Access

Emerging ExG-based NUI Inputs in Extended Realities: A Bottom-up Survey

Published: 21 July 2021

Abstract

Incremental, quantitative improvements in two-way interaction with extended realities (XR) are contributing to a qualitative leap toward XR ecosystems that are efficient, user-friendly, and widely adopted. However, multiple barriers remain on the way to ubiquitous XR, among them the computational and power limitations of portable hardware, the social acceptance of novel interaction protocols, and the usability and efficiency of interfaces. In this article, we survey and analyse novel natural user interfaces based on the sensing of electrical bio-signals, which can be leveraged to tackle the challenges of XR input interaction. Electroencephalography-based brain-machine interfaces that enable thought-only, hands-free interaction; myoelectric input methods that track body gestures through electromyography; and gaze-tracking electrooculography interfaces are examples of electrical bio-signal sensing technologies united under the collective concept of ExG. ExG signal acquisition modalities provide a way to interact with computing systems through natural, intuitive actions, enriching interaction with XR. This survey provides a bottom-up overview of (i) the underlying biological aspects and signal acquisition techniques, (ii) ExG hardware solutions, (iii) ExG-enabled applications, (iv) the social acceptance of such applications and technologies, and (v) research challenges, application directions, and open problems, evidencing the benefits that ExG-based natural user interface inputs can bring to XR.
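To make the ExG input pipeline described above concrete, the following is a minimal, purely illustrative sketch (not taken from the survey) of the kind of front end a myoelectric (EMG) input method might use: compute a sliding root-mean-square envelope of the raw signal and threshold it to flag "muscle active" windows that a downstream gesture classifier could consume. All function names, the window length, and the threshold are hypothetical choices for illustration.

```python
# Hypothetical EMG-style activation detector: RMS envelope + threshold.
# Window size and threshold are illustrative, not values from the survey.

def moving_rms(signal, window):
    """Root-mean-square envelope over a sliding window of samples."""
    out = []
    for i in range(len(signal) - window + 1):
        chunk = signal[i:i + window]
        out.append((sum(x * x for x in chunk) / window) ** 0.5)
    return out

def detect_activation(signal, window=8, threshold=0.5):
    """Return envelope indices where RMS exceeds the threshold,
    i.e. candidate 'muscle active' events for a gesture classifier."""
    envelope = moving_rms(signal, window)
    return [i for i, v in enumerate(envelope) if v > threshold]

# Quiet baseline (low-amplitude noise) followed by a burst of activity:
quiet = [0.05, -0.04, 0.03, -0.05, 0.04, -0.03, 0.05, -0.04]
burst = [0.9, -1.1, 1.0, -0.8, 1.2, -0.9, 1.0, -1.1]

print(detect_activation(quiet))          # no activation in baseline
print(detect_activation(quiet + burst))  # activation once the burst enters the window
```

Real ExG pipelines would additionally band-pass filter the raw signal and replace the fixed threshold with a learned classifier, but the envelope-and-threshold stage shows the basic shape of turning a continuous bio-signal into discrete input events.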

  130. Lik Hang Lee, Tristan Braud, Farshid Hassani Bijarbooneh, and Pan Hui. 2020. UbiPoint: Towards non-intrusive mid-air interaction for hardware constrained smart glasses. In Proceedings of the 11th ACM Multimedia Systems Conference (MMSys’20). Association for Computing Machinery, New York, NY, 190–201. DOI:https://doi.org/10.1145/3339825.3391870 Google ScholarGoogle ScholarDigital LibraryDigital Library
  131. Lik-Hang Lee and Pan Hui. 2018. Interaction methods for smart glasses: A survey. IEEE Access 6 (2018), 28712–28732.Google ScholarGoogle ScholarCross RefCross Ref
  132. Lik Hang Lee, Kit Yung Lam, Tong Li, Tristan Braud, Xiang Su, and Pan Hui. 2019. Quadmetric optimized thumb-to-finger interaction for force assisted one-handed text entry on mobile headsets. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 3, 3, Article 94 (Sep. 2019), 27 pages. DOI:https://doi.org/10.1145/3351252 Google ScholarGoogle ScholarDigital LibraryDigital Library
  133. L. H. Lee, K. Yung Lam, Y. P. Yau, T. Braud, and P. Hui. 2019. HIBEY: Hide the keyboard in augmented reality. In Proceedings of the 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom’19). 1–10. DOI:https://doi.org/10.1109/PERCOM.2019.8767420Google ScholarGoogle ScholarCross RefCross Ref
  134. Lik-Hang Lee, Yiming Zhu, Yui-Pan Yau, Tristan Braud, Xiang Su, and Pan Hui. 2020. One-thumb text acquisition on force-assisted miniature interfaces for mobile headsets. In Proceedings of the 2020 IEEE International Conference on Pervasive Computing and Communications (PerCom’20). 1–10. Google ScholarGoogle ScholarCross RefCross Ref
  135. Chin-Teng Lin, Li-Wei Ko, Meng-Hsiu Chang, Jeng-Ren Duann, Jing-Ying Chen, Tung-Ping Su, and Tzyy-Ping Jung. 2010. Review of wireless and wearable electroencephalogram systems and brain-computer interfaces–a mini-review. Gerontology 56, 1 (2010), 112–119.Google ScholarGoogle ScholarCross RefCross Ref
  136. Feng Lin, Kun Woo Cho, Chen Song, Wenyao Xu, and Zhanpeng Jin. 2018. Brain password: A secure and truly cancelable brain biometrics for smart headwear. In Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services. 296–309. Google ScholarGoogle ScholarDigital LibraryDigital Library
  138. Zhonglin Lin, Changshui Zhang, Wei Wu, and Xiaorong Gao. 2006. Frequency recognition based on canonical correlation analysis for SSVEP-based BCIs. IEEE Trans. Biomed. Eng. 53, 12 (2006), 2610–2614.
  139. Kang Ling, Haipeng Dai, Yuntang Liu, and Alex X. Liu. 2018. UltraGesture: Fine-grained gesture sensing and recognition. In Proceedings of the 2018 15th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON’18). IEEE, 1–9.
  140. Ruibo Liu, Qijia Shao, Siqi Wang, Christina Ru, Devin Balkcom, and Xia Zhou. 2019. Reconstructing human joint motion with computational fabrics. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 3, 1, Article 19 (Mar. 2019), 26 pages. DOI:https://doi.org/10.1145/3314406
  141. Shanhong Liu. [n.d.]. Global Augmented/Virtual Reality Market Size 2016-2023 Statistic. Retrieved July 6, 2019 from https://www.statista.com/statistics/591181/global-augmented-virtual-reality-market-size/.
  142. Shiqing Luo, Anh Nguyen, Chen Song, Feng Lin, Wenyao Xu, and Zhisheng Yan. [n.d.]. OcuLock: Exploring human visual system for authentication in virtual reality head-mounted display.
  143. S. Vaisman, M. Abuhasira, and A. B. Geva. [n.d.]. Fast Training of Deep Neural Networks Using Brain-Generated Labels. Retrieved July 3, 2019 from https://developer.download.nvidia.com/video/gputechconf/gtc/2019/presentation/s9554-fast-training-of-deep-neural-networks-using-brain-generated-labels.pdf.
  144. Jiaxin Ma, Yu Zhang, Andrzej Cichocki, and Fumitoshi Matsuno. 2014. A novel EOG/EEG hybrid human–machine interface adopting eye movements and ERPs: Application to robot control. IEEE Trans. Biomed. Eng. 62, 3 (2014), 876–889.
  145. Zheren Ma, Brandon C. Li, and Zeyu Yan. 2016. Wearable driver drowsiness detection using electrooculography signal. In Proceedings of the 2016 IEEE Topical Conference on Wireless Sensors and Sensor Networks (WiSNet’16). IEEE, 41–43.
  146. Jory MacKay. 2019. Screen Time Stats 2019: Here’s How Much You Use Your Phone during the Workday. Retrieved October 21, 2019 from https://blog.rescuetime.com/screen-time-stats-2018/.
  147. Steve Mann, Tom Furness, Yu Yuan, Jay Iorio, and Zixin Wang. 2018. All reality: Virtual, augmented, mixed (x), mediated (x, y), and multimediated reality. arXiv:1804.08386. Retrieved from https://arxiv.org/abs/1804.08386.
  148. Octavio Marin-Pardo, Athanasios Vourvopoulos, Meghan Neureither, David Saldana, Esther Jahng, and Sook-Lei Liew. 2019. Electromyography as a suitable input for virtual reality-based biofeedback in stroke rehabilitation. In Proceedings of the International Conference on Human-Computer Interaction. Springer, 274–281.
  149. Diogo Marques, Luís Carriço, Tiago Guerreiro, Alexander De Luca, Pattie Maes, Ildar Muslukhov, Ian Oakley, and Emanuel von Zezschwitz. 2014. Workshop on inconspicuous interaction. In CHI’14 Extended Abstracts on Human Factors in Computing Systems. 91–94.
  150. Santosh Mathan. 2008. Image search at the speed of thought. Interactions 15, 4 (Jul. 2008), 76–77. DOI:https://doi.org/10.1145/1374489.1374509
  151. Fabrice Matulic, Brian Vogel, Naoki Kimura, and Daniel Vogel. 2019. Eliciting pen-holding postures for general input with suitability for EMG armband detection. In Proceedings of the 2019 ACM International Conference on Interactive Surfaces and Spaces (ISS’19). Association for Computing Machinery, New York, NY, 89–100. DOI:https://doi.org/10.1145/3343055.3359720
  152. Sven Mayer, Gierad Laput, and Chris Harrison. 2020. Enhancing mobile voice assistants with WorldGaze. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–10. DOI:https://doi.org/10.1145/3313831.3376479
  153. JINS MEME. 2019. JINS MEME ES: Eye Sensing. Retrieved October 17, 2019 from https://jins-meme.com/en/products/es/.
  154. Jonathan Mercier-Ganady, Maud Marchal, and Anatole Lécuyer. 2015. B-C-invisibility power: Introducing optical camouflage based on mental activity in augmented reality. In Proceedings of the 6th Augmented Human International Conference. 97–100. DOI:https://doi.org/10.1145/2735711.2735835
  155. Microsoft. 2019. Kinect for Windows. Retrieved October 17, 2019 from https://developer.microsoft.com/en-us/windows/kinect.
  156. Kusuma Mohanchandra, Snehanshu Saha, and G. M. Lingaraju. 2015. EEG based brain computer interface for speech communication: Principles and applications. In Brain-Computer Interfaces. Springer, 273–293.
  157. Ali Moin, Andy Zhou, Simone Benatti, Abbas Rahimi, George Alexandrov, Alisha Menon, Senam Tamakloe, Jonathan Ting, Natasha Yamamoto, Yasser Khan, Fred Burghardt, Ana C. Arias, Luca Benini, and Jan M. Rabaey. 2019. Adaptive EMG-based hand gesture recognition using hyperdimensional computing. arXiv:1901.00234. Retrieved from http://arxiv.org/abs/1901.00234.
  158. Calkin S. Montero, Jason Alexander, Mark T. Marshall, and Sriram Subramanian. 2010. Would you do that? Understanding social acceptance of gestural interfaces. In Proceedings of the 12th International Conference on Human Computer Interaction with Mobile Devices and Services. 275–278.
  159. Motion Lab Systems, Inc. 2019. Multi-Channel Electromyography Systems. Retrieved October 17, 2019 from https://www.motion-labs.com/.
  160. Olimex. 2019. Low Cost Open Source EEG Device Completely Assembled USB Interface. Retrieved October 17, 2019 from https://www.olimex.com/Products/EEG/OpenEEG/EEG-SMT/open-source-hardware.
  161. Masaki Nakanishi, Yijun Wang, Xiaogang Chen, Yu-Te Wang, Xiaorong Gao, and Tzyy-Ping Jung. 2017. Enhancing detection of SSVEPs for a high-speed brain speller using task-related component analysis. IEEE Trans. Biomed. Eng. 65, 1 (2017), 104–112.
  162. Masaki Nakanishi, Yijun Wang, Yu-Te Wang, Yasue Mitsukura, and Tzyy-Ping Jung. 2014. A high-speed brain speller using steady-state visual evoked potentials. Int. J. Neur. Syst. 24, 06 (2014), 1450019.
  163. Neuralink. 2019. Neuralink Launch Event. Retrieved October 17, 2019 from https://www.youtube.com/watch?v=r-vbh3t7WVI.
  164. ZEPTH. 2019. ZEPTH: Intelligent Fitness and Sport Shirts. Retrieved October 17, 2019 from https://www.xiaomiyoupin.com/detail?gid=108157.
  165. Trung-Hau Nguyen and Wan-Young Chung. 2018. A single-channel SSVEP-based BCI speller using deep learning. IEEE Access 7 (2018), 1752–1763.
  166. Arinobu Niijima, Takashi Isezaki, Ryosuke Aoki, Tomoki Watanabe, and Tomohiro Yamada. 2018. Biceps fatigue estimation with an E-Textile headband. In Proceedings of the 2018 ACM International Symposium on Wearable Computers (ISWC’18). Association for Computing Machinery, New York, NY, 222–223. DOI:https://doi.org/10.1145/3267242.3267281
  167. James J. S. Norton, Jessica Mullins, Birgit E. Alitz, and Timothy Bretl. 2018. The performance of 9–11-year-old children using an SSVEP-based BCI for target selection. J. Neur. Eng. 15, 5 (2018), 056012.
  168. Amin Nourmohammadi, Mohammad Jafari, and Thorsten O. Zander. 2018. A survey on unmanned aerial vehicle remote control using brain–computer interface. IEEE Trans. Hum.-Mach. Syst. 48, 4 (2018), 337–348.
  169. Michel Obbink, Hayrettin Gürkök, Danny Plass-Oude Bos, Gido Hakvoort, Mannes Poel, and Anton Nijholt. 2012. Social interaction in a cooperative brain-computer interface game. In Intelligent Technologies for Interactive Entertainment, Antonio Camurri and Cristina Costa (Eds.). Springer, Berlin, 183–192.
  170. OpenEEG. 2019. Retrieved October 17, 2019 from http://openeeg.sourceforge.net/doc/index.html.
  171. Mohammadreza Asghari Oskoei and Huosheng Hu. 2007. Myoelectric control systems survey. Biomed. Sign. Process. Contr. 2, 4 (2007), 275–294.
  172. Kseniia Palin, Anna Feit, Sunjun Kim, Per Ola Kristensson, and Antti Oulasvirta. 2019. How do people type on mobile devices? Observations from a study with 37,000 volunteers. In Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI’19).
  173. Shijia Pan, Carlos Ruiz, Jun Han, Adeola Bannis, Patrick Tague, Hae Young Noh, and Pei Zhang. 2018. UniverSense: IoT device pairing through heterogeneous sensing signals. In Proceedings of the 19th International Workshop on Mobile Computing Systems & Applications (HotMobile’18). Association for Computing Machinery, New York, NY, 55–60. DOI:https://doi.org/10.1145/3177102.3177108
  174. Nelusa Pathmanathan, Michael Becher, Nils Rodrigues, Guido Reina, Thomas Ertl, Daniel Weiskopf, and Michael Sedlmair. 2020. Eye vs. head: Comparing gaze methods for interaction in augmented reality. In Proceedings of the ACM Symposium on Eye Tracking Research and Applications (ETRA’20 Short Papers). Association for Computing Machinery, New York, NY, Article 50, 5 pages. DOI:https://doi.org/10.1145/3379156.3391829
  175. Ken Pfeuffer, Matthias J. Geiger, Sarah Prange, Lukas Mecke, Daniel Buschek, and Florian Alt. 2019. Behavioural biometrics in VR: Identifying people from body motion and relations in virtual reality. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–12.
  176. Esteban J. Pino, Yerko Arias, and Pablo Aqueveque. 2018. Wearable EMG shirt for upper limb training. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC’18). IEEE, 4406–4409.
  177. Leigh Ellen Potter and Alexandra Thompson. 2019. New and emerging technology: Ownership and adoption. In Proceedings of the 2019 Computers and People Research Conference. 57–66.
  178. Dominic Potts, Kate Loveys, HyunYoung Ha, Shaoyan Huang, Mark Billinghurst, and Elizabeth Broadbent. 2019. ZenG: AR neurofeedback for meditative mixed reality. In Proceedings of the 2019 Conference on Creativity and Cognition. ACM, 583–590.
  179. Jane Prophet, Yong Ming Kow, and Mark Hurry. 2018. Small trees, big data: Augmented reality model of air quality data via the Chinese art of “artificial” tray planting. In ACM SIGGRAPH 2018 Posters. Article 16, 2 pages. DOI:https://doi.org/10.1145/3230744.3230753
  180. Felix Putze. 2019. Methods and tools for using BCI with augmented and virtual reality. In Brain Art. Springer, 433–446.
  181. Thea Radüntz. 2018. Signal quality evaluation of emerging EEG devices. Front. Physiol. 9 (2018), 98.
  182. Rabie A. Ramadan, S. Refat, Marwa A. Elshahed, and Rasha A. Ali. 2015. Basics of brain computer interface. In Brain-Computer Interfaces. Springer, 31–50.
  183. Grigory Rashkov, Anatoly Bobe, Dmitry Fastovets, and Maria Komarova. 2019. Natural image reconstruction from brain waves: A novel visual BCI system with native feedback. bioRxiv. DOI:https://doi.org/10.1101/787101
  184. Elena Ratti, Shani Waninger, Chris Berka, Giulio Ruffini, and Ajay Verma. 2017. Comparison of medical and consumer wireless EEG systems for use in clinical trials. Front. Hum. Neurosci. 11 (2017), 398.
  185. Charlotte M. Reed and Nathaniel I. Durlach. 1998. Note on information transfer rates in human communication. Presence 7, 5 (1998), 509–518.
  186. Aya Rezeika, Mihaly Benda, Piotr Stawicki, Felix Gembler, Abdul Saboor, and Ivan Volosyak. 2018. Brain–computer interface spellers: A review. Brain Sci. 8, 4 (2018), 57.
  187. Michael Rietzler, Gabriel Haas, Thomas Dreja, Florian Geiselhart, and Enrico Rukzio. 2019. Virtual muscle force: Communicating kinesthetic forces through pseudo-haptic feedback and muscle input. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST’19). Association for Computing Machinery, New York, NY, 913–922. DOI:https://doi.org/10.1145/3332165.3347871
  188. Jaime Ruiz and Daniel Vogel. 2015. Soft-constraints to reduce legacy and performance bias to elicit whole-body gestures with low arm fatigue. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI’15). Association for Computing Machinery, New York, NY, 3347–3350. DOI:https://doi.org/10.1145/2702123.2702583
  189. PLUX Wireless Biosignals S.A. 2019. Bio-signal PLUX. Retrieved October 17, 2019 from https://plux.info/.
  190. Arun Sahayadhas, Kenneth Sundaraj, and Murugappan Murugappan. 2012. Detecting driver drowsiness based on sensors: A review. Sensors 12, 12 (2012), 16937–16953.
  191. Christina Schneegass, Thomas Kosch, Andrea Baumann, Marius Rusu, Mariam Hassib, and Heinrich Hussmann. 2020. BrainCoDe: Electroencephalography-based comprehension detection during reading and listening. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI’20). Association for Computing Machinery, New York, NY, 1–13. DOI:https://doi.org/10.1145/3313831.3376707
  192. Tanja Schultz, Michael Wand, Thomas Hueber, Dean J. Krusienski, Christian Herff, and Jonathan S. Brumberg. 2017. Biosignal-based spoken communication: A survey. IEEE/ACM Trans. Aud. Speech Lang. Process. 25, 12 (2017), 2257–2271.
  193. Valentin Schwind, Jens Reinhardt, Rufat Rzayev, Niels Henze, and Katrin Wolf. 2018. On the need for standardized methods to study the social acceptability of emerging technologies. In CHI’18 Workshop on (Un)Acceptable.
  194. Claude Elwood Shannon. 1948. A mathematical theory of communication. Bell Syst. Techn. J. 27, 3 (1948), 379–423.
  195. Adwait Sharma, Joan Sol Roo, and Jürgen Steimle. 2019. Grasping microgestures: Eliciting single-hand microgestures for handheld objects. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Paper 402, 13 pages.
  196. Kirill A. Shatilov, Dimitris Chatzopoulos, Alex Wong Tat Hang, and Pan Hui. 2019. Using deep learning and mobile offloading to control a 3D-printed prosthetic hand. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 3, 3, Article 102 (Sep. 2019), 19 pages. DOI:https://doi.org/10.1145/3351260
  197. Hakim Si-Mohammed, Jimmy Petit, Camille Jeunet, Ferran Argelaguet, Fabien Spindler, Andéol Evain, Nicolas Roussel, Géry Casiez, and Anatole Lécuyer. 2018. Towards BCI-based interfaces for augmented reality: Feasibility, design and evaluation. IEEE Trans. Vis. Comput. Graph. (2018).
  198. Hakim Si-Mohammed, Ferran Argelaguet Sanz, Géry Casiez, Nicolas Roussel, and Anatole Lécuyer. 2017. Brain-computer interfaces and augmented reality: A state of the art. In Proceedings of the Graz Brain-Computer Interface Conference.
  199. Hannah Slay, Bruce Thomas, and Rudi Vernik. 2002. Tangible user interaction using augmented reality. In Proceedings of the 3rd Australasian Conference on User Interfaces, Volume 7 (AUIC’02). Australian Computer Society, Inc., 13–20.
  200. R. S. Soundariya and R. Renuga. 2017. Eye movement based emotion recognition using electrooculography. In Proceedings of the 2017 Innovations in Power and Advanced Computing Technologies (i-PACT’17). IEEE, 1–5.
  201. Cometa srl. 2019. PicoEMG. Retrieved October 17, 2019 from https://www.cometasystems.com/products/picoemg.
  202. Liz Stinson. 2019. Stop the Endless Scroll. Delete Social Media from Your Phone. Retrieved October 21, 2019 from https://www.wired.com/story/rants-and-raves-desktop-social-media/.
  203. Dag Svanæs. 2013. Interaction design for and with the lived body: Some implications of Merleau-Ponty’s phenomenology. ACM Trans. Comput.-Hum. Interact. 20, 1, Article 8 (Apr. 2013), 30 pages. DOI:https://doi.org/10.1145/2442106.2442114
  204. Daisuke Tamaki, Hiromi Fujimori, and Hisaya Tanaka. 2019. An interface using electrooculography with closed eyes. In Proceedings of the International Symposium on Affective Science and Engineering (ISASE’19). Japan Society of Kansei Engineering, 1–4.
  205. Zied Tayeb, Nicolai Waniek, Juri Fedjaev, Nejla Ghaboosi, Leonard Rychly, Christian Widderich, Christoph Richter, Jonas Braun, Matteo Saveriano, Gordon Cheng, et al. 2018. Gumpy: A Python toolbox suitable for hybrid brain–computer interfaces. J. Neur. Eng. 15, 6 (2018), 065003.
  206. TMSi. 2019. Small Wearable Amplifier System. Retrieved October 17, 2019 from https://www.tmsi.com/products/mobi/.
  207. Boris B. Velichkovsky, Mikhail A. Rumyantsev, and Mikhail A. Morozov. 2014. New solution to the Midas touch problem: Identification of visual commands via extraction of focal fixations. Proc. Comput. Sci. 39 (2014), 75–82.
  208. Mélodie Vidal, Andreas Bulling, and Hans Gellersen. 2011. Analysing EOG signal features for the discrimination of eye movements with wearable devices. In Proceedings of the 1st International Workshop on Pervasive Eye Tracking & Mobile Eye-based Interaction. 15–20.
  209. Oleg Špakov and Päivi Majaranta. 2012. Enhanced gaze interaction using simple head gestures. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp’12). Association for Computing Machinery, New York, NY, 705–710. DOI:https://doi.org/10.1145/2370216.2370369
  210. Michael Wand and Jürgen Schmidhuber. 2016. Deep neural network frontend for continuous EMG-based speech recognition. In Proceedings of the Conference of the International Speech Communication Association (INTERSPEECH’16). 3032–3036.
  211. Meng Wang, Renjie Li, Ruofan Zhang, Guangye Li, and Dingguo Zhang. 2018. A wearable SSVEP-based BCI system for quadcopter control using head-mounted device. IEEE Access 6 (2018), 26789–26798.
  212. Yuntao Wang, Jianyu Zhou, Hanchuan Li, Tengxiang Zhang, Minxuan Gao, Zhuolin Cheng, Chun Yu, Shwetak Patel, and Yuanchun Shi. 2019. FlexTouch: Enabling large-scale interaction sensing beyond touchscreens using flexible and conductive materials. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 3, 3, Article 109 (Sep. 2019), 20 pages. DOI:https://doi.org/10.1145/3351267
  213. Daniel Wigdor and Dennis Wixon. 2011. Brave NUI World: Designing Natural User Interfaces for Touch and Gesture (1st ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA.
  214. Woontack Woo. 2018. Augmented human: Augmented reality and beyond. In Proceedings of the 3rd International Workshop on Multimedia Alternate Realities (AltMM’18). Association for Computing Machinery, New York, NY, 1. DOI:https://doi.org/10.1145/3268998.3268999
  215. Shang-Lin Wu, Lun-De Liao, Shao-Wei Lu, Wei-Ling Jiang, Shi-An Chen, and Chin-Teng Lin. 2013. Controlling a human–computer interface system with a novel classification method that uses electrooculography signals. IEEE Trans. Biomed. Eng. 60, 8 (2013), 2133–2141.
  216. Yang Xiao, Junsong Yuan, and Daniel Thalmann. 2013. Human-virtual human interaction by upper body gesture understanding. In Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology (VRST’13). Association for Computing Machinery, New York, NY, 133–142. DOI:https://doi.org/10.1145/2503713.2503727
  217. Minpeng Xu, Xiaolin Xiao, Yijun Wang, Hongzhi Qi, Tzyy-Ping Jung, and Dong Ming. 2018. A brain–computer interface based on miniature-event-related potentials induced by very small lateral visual stimuli. IEEE Trans. Biomed. Eng. 65, 5 (2018), 1166–1175.
  218. Zheer Xu, Pui Chung Wong, Jun Gong, Te-Yen Wu, Aditya Shekhar Nittala, Xiaojun Bi, Jürgen Steimle, Hongbo Fu, Kening Zhu, and Xing-Dong Yang. 2019. TipText: Eyes-free text entry on a fingertip keyboard. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST’19). Association for Computing Machinery, New York, NY, 883–899. DOI:https://doi.org/10.1145/3332165.3347865
  219. Yukang Yan, Yingtian Shi, Chun Yu, and Yuanchun Shi. 2020. HeadCross: Exploring head-based crossing selection on head-mounted displays. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 4, 1, Article 35 (Mar. 2020), 22 pages. DOI:https://doi.org/10.1145/3380983
  220. Chun Yu, Ke Sun, Mingyuan Zhong, Xincheng Li, Peijun Zhao, and Yuanchun Shi. 2016. One-dimensional handwriting: Inputting letters and words on smart glasses. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI’16). Association for Computing Machinery, New York, NY, 71–82. DOI:https://doi.org/10.1145/2858036.2858542
  221. Nan Yu, Wei Wang, Alex X. Liu, and Lingtao Kong. 2018. QGesture: Quantifying gesture distance and direction with WiFi signals. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 2, 1 (2018), 51.
  222. Hong Zeng, Yanxin Wang, Changcheng Wu, Aiguo Song, Jia Liu, Peng Ji, Baoguo Xu, Lifeng Zhu, Huijun Li, and Pengcheng Wen. 2017. Closed-loop hybrid gaze brain-machine interface based robotic arm control with augmented reality feedback. Front. Neurorobot. 11 (2017), 60.
  223. André Zenner and Antonio Krüger. 2019. Drag:On: A virtual reality controller providing haptic feedback based on drag and weight shift. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI’19). Association for Computing Machinery, New York, NY, 1–12. DOI:https://doi.org/10.1145/3290605.3300441
  224. Xiaolong Zhai, Beth Jelfs, Rosa H. M. Chan, and Chung Tin. 2017. Self-recalibrating surface EMG pattern recognition for neuroprosthesis control based on convolutional neural network. Front. Neurosci. 11 (2017), 379. DOI:https://doi.org/10.3389/fnins.2017.00379
  225. Jinhua Zhang, Baozeng Wang, Cheng Zhang, Yanqing Xiao, and Michael Yu Wang. 2019. An EEG/EMG/EOG-based multimodal human-machine interface to real-time control of a soft robot hand. Front. Neurorobot. 13 (2019).
  226. Wenxiao Zhang, Bo Han, and Pan Hui. 2017. On the networking challenges of mobile augmented reality. In Proceedings of the Workshop on Virtual Reality and Augmented Reality Network. ACM, 24–29.
  227. Wenxiao Zhang, Bo Han, and Pan Hui. 2018. Jaguar: Low latency mobile augmented reality with flexible tracking. In Proceedings of the 2018 ACM Multimedia Conference on Multimedia Conference. ACM, 355–363.
  228. Xiang Zhang, Lina Yao, Quan Z. Sheng, Salil S. Kanhere, Tao Gu, and Dalin Zhang. 2018. Converting your thoughts to texts: Enabling brain typing via deep feature learning of EEG signals. In Proceedings of the 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom’18). IEEE, 1–10.
  229. Xiang Zhang, Lina Yao, Shuai Zhang, Salil Kanhere, Michael Sheng, and Yunhao Liu. 2018. Internet of things meets brain–computer interface: A unified deep learning framework for enabling human-thing cognitive interactivity. IEEE IoT J. 6, 2 (2018), 2084–2092.
  230. Yu Zhang, Tao Gu, Chu Luo, Vassilis Kostakos, and Aruna Seneviratne. 2018. FinDroidHR: Smartwatch gesture input with optical heartrate monitor. Proc. ACM Interact. Mob. Wear. Ubiq. Technol. 2, 1 (2018), 56.
  231. Xuemin Zhu, Wei-Long Zheng, Bao-Liang Lu, Xiaoping Chen, Shanguang Chen, and Chunhui Wang. 2014. EOG-based drowsiness detection using convolutional neural networks. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN’14). IEEE, 128–134.
