Toward Content-Driven Intelligent Authoring of Mulsemedia Applications
IEEE Multimedia (IF 2.3), Pub Date: 2020-07-23, DOI: 10.1109/mmul.2020.3011383
Raphael Silva de Abreu 1 , Douglas Mattos 1 , Joel dos Santos 2 , Gheorghita Ghinea 3 , Débora Christina Muchaluat-Saade 1

Synchronization of sensory effects with multimedia content is a nontrivial and error-prone task that can discourage the authoring of mulsemedia applications. Although there are authoring tools that perform some automatic authoring of sensory effect metadata, the analysis techniques they use are generally not sufficient to identify complex components that may be related to sensory effects. In this article, we present a new method that allows the semiautomatic definition of sensory effects in an authoring tool. We outline a software component, to be integrated into authoring tools, that uses content analysis to indicate moments of sensory effect activation according to author preferences. The proposed method was implemented in the STEVE 2.0 authoring tool, and an evaluation was performed to assess the precision of the generated sensory effects in comparison with human authoring. This solution is expected to considerably reduce the effort of synchronizing audiovisual content with sensory effects, in particular by easing the author's repetitive task of synchronizing recurring effects with lengthy media.
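To illustrate the kind of content-driven synchronization the abstract describes, the following is a minimal, hypothetical sketch: given per-frame detections of an effect-related concept (e.g., a flame detector flagging frames for a heat effect), consecutive hits are merged into activation intervals that an authoring tool could propose to the author. The function name, parameters, and smoothing threshold are illustrative assumptions, not details from the paper.

```python
def activation_intervals(frame_labels, fps, min_gap_frames=2):
    """Merge consecutive detected frames into (start_s, end_s) intervals.

    frame_labels: list of bools, one per frame (True = concept detected).
    fps: frames per second of the analyzed video.
    min_gap_frames: detections separated by at most this many missed
        frames are merged into one interval (smooths detector noise).
    """
    intervals = []
    start = None   # index of first frame in the current run of hits
    gap = 0        # consecutive missed frames since the last hit
    for i, hit in enumerate(frame_labels):
        if hit:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap > min_gap_frames:
                end = i - gap  # last detected frame of the run
                intervals.append((start / fps, (end + 1) / fps))
                start = None
    if start is not None:  # run extends to the end of the video
        end = len(frame_labels) - 1 - gap
        intervals.append((start / fps, (end + 1) / fps))
    return intervals
```

For example, at 10 fps, detections on frames 1-2 and 4-5 (one missed frame between them) would be merged into a single proposed activation from 0.1 s to 0.6 s, which the author could then accept, adjust, or discard in the tool.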
