Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-06-16, DOI: arXiv:2106.08867
Tim Murray-Browne, Panagiotis Tigas

In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program it in a signal processing patch. Tools like Wekinator and MIMIC allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it, yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate it with Sonified Body, a system mapping full-body movement to sound which we explore in a residency with three dancers.
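To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a latent mapping: a small VAE is trained on unlabelled pose frames from the creator, and the encoder's latent coordinates are then used as continuous control signals for a synthesiser. The pose dimensionality, network sizes, and the tanh squashing of the latent vector are illustrative assumptions, and PyTorch is assumed as the framework.

```python
# Sketch of a latent mapping: VAE trained on unlabelled gestural data,
# with the learned latent space repurposed as an expressive controller.
import torch
import torch.nn as nn
import torch.nn.functional as F

POSE_DIM = 75    # e.g. 25 joints x 3D positions (assumed input format)
LATENT_DIM = 8   # latent dimensions exposed as expressive controls (assumption)

class PoseVAE(nn.Module):
    def __init__(self, pose_dim=POSE_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, pose_dim)
        )

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit-Gaussian prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

def train(model, poses, epochs=50, lr=1e-3):
    # `poses` is an (N, POSE_DIM) tensor of unlabelled frames recorded from the creator.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        recon, mu, logvar = model(poses)
        vae_loss(recon, poses, mu, logvar).backward()
        opt.step()

def latent_controls(model, pose_frame):
    # At performance time the posterior mean is the mapping's output:
    # each latent dimension becomes a continuous control for sound synthesis.
    with torch.no_grad():
        mu, _ = model.encode(pose_frame)
    return torch.tanh(mu)  # squash to roughly [-1, 1] for synth parameters
```

Because the model is trained only on the creator's own unlabelled movement data, the resulting controls reflect the structure of that movement rather than hand-specified input/output pairs, which is the open-ended aspect the abstract describes.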

Updated: 2021-06-17