Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms
Proceedings of the National Academy of Sciences of the United States of America (IF 9.4) | Pub Date: 2022-11-23 | DOI: 10.1073/pnas.2216035119
Matyáš Boháček, Hany Farid

Since their emergence a few years ago, artificial intelligence (AI)-synthesized media—so-called deep fakes—have dramatically increased in quality, sophistication, and ease of generation. Deep fakes have been weaponized for use in nonconsensual pornography, large-scale fraud, and disinformation campaigns. Of particular concern is how deep fakes will be weaponized against world leaders during election cycles or times of armed conflict. We describe an identity-based approach for protecting world leaders from deep-fake imposters. Trained on several hours of authentic video, this approach captures distinct facial, gestural, and vocal mannerisms that we show can distinguish a world leader from an impersonator or deep-fake imposter.
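The identity-based idea described above can be sketched as a simple anomaly check: summarize each video clip as a fixed-length vector of behavioral features, learn the leader's typical "signature" from authentic footage, and flag clips that fall far from it. The feature dimensions, the mean-vector signature, and the distance threshold below are illustrative placeholders, not the paper's actual pipeline (which builds on specific facial, gestural, and vocal measurements).

```python
import random
import math

random.seed(0)
N_FEATURES = 8  # placeholder for facial/gestural/vocal feature count

def fake_features(center, spread, n_clips):
    """Simulated per-clip feature vectors (illustration only)."""
    return [[random.gauss(center, spread) for _ in range(N_FEATURES)]
            for _ in range(n_clips)]

# Several hours of authentic video -> many per-clip feature vectors.
authentic_clips = fake_features(0.0, 0.5, 200)
candidate_real = fake_features(0.0, 0.5, 1)[0]
candidate_fake = fake_features(2.0, 0.5, 1)[0]  # imposter drifts off-signature

# Learn the identity signature: per-feature mean over authentic clips.
signature = [sum(clip[i] for clip in authentic_clips) / len(authentic_clips)
             for i in range(N_FEATURES)]

def distance(clip):
    """Euclidean distance from the learned behavioral signature."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clip, signature)))

# Set the threshold from the authentic clips' own spread.
threshold = max(distance(clip) for clip in authentic_clips)

def is_authentic(clip):
    return distance(clip) <= threshold

print(is_authentic(candidate_real))  # matches the leader's mannerisms
print(is_authentic(candidate_fake))  # behavioral signature mismatch
```

A real system would replace the simulated vectors with measured mannerisms and the nearest-signature rule with a trained classifier, but the structure (learn an identity profile from authentic video, reject clips that deviate from it) is the same.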

Updated: 2022-11-23