Exploring the Privacy Concerns in Using Intelligent Virtual Assistants under Perspectives of Information Sensitivity and Anthropomorphism
International Journal of Human-Computer Interaction (IF 4.7) | Pub Date: 2020-10-27 | DOI: 10.1080/10447318.2020.1834728
Quang-An Ha, Jengchung Victor Chen, Ha Uy Uy, Erik Paolo Capistrano

ABSTRACT

Intelligent Virtual Assistants (IVAs) such as Apple Siri and Google Assistant are increasingly being used to help users perform a variety of tasks. However, their characteristics also raise user privacy concerns related to the provision of information to the IVA. Drawing upon communication privacy management theory, two experiments were conducted to investigate the impact of information sensitivity, the type of IVA (anthropomorphized versus objectified), and the role of the IVA (servant versus partner) on privacy concerns and users' willingness to disclose information to the IVA. Study 1 showed that information sensitivity and anthropomorphism significantly affect user privacy concerns. Study 2 revealed that when highly sensitive information was requested, a partner IVA triggered greater privacy concerns, whereas in low-sensitivity contexts it evoked a greater sense of security than a servant IVA. The theoretical and managerial implications of these studies are discussed accordingly.




Updated: 2020-10-27