Layered Neural Rendering for Retiming People in Video
arXiv - CS - Graphics. Pub Date: 2020-09-16, DOI: arxiv-2009.07833
Erika Lu, Forrester Cole, Tali Dekel, Weidi Xie, Andrew Zisserman, David Salesin, William T. Freeman, Michael Rubinstein

We present a method for retiming people in an ordinary, natural video---manipulating and editing the time in which different motions of individuals in the video occur. We can temporally align different motions, change the speed of certain actions (speeding up/slowing down, or entirely "freezing" people), or "erase" selected people from the video altogether. We achieve these effects computationally via a dedicated learning-based layered video representation, where each frame in the video is decomposed into separate RGBA layers, representing the appearance of different people in the video. A key property of our model is that it not only disentangles the direct motions of each person in the input video, but also correlates each person automatically with the scene changes they generate---e.g., shadows, reflections, and motion of loose clothing. The layers can be individually retimed and recombined into a new video, allowing us to achieve realistic, high-quality renderings of retiming effects for real-world videos depicting complex actions and involving multiple individuals, including dancing, trampoline jumping, or group running.
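The core idea above is a per-person layered representation: each frame is decomposed into RGBA layers (one per person, plus background), each layer is retimed independently, and the layers are recomposited into an output frame. As a rough illustration only, not the authors' model or code, the sketch below shows how already-decomposed RGBA layers could be recombined with standard back-to-front "over" alpha compositing under a per-person time remapping; the data layout and the retime_maps interface are hypothetical.

```python
# Minimal sketch (assumed interfaces, not the paper's implementation) of
# recompositing retimed per-person RGBA layers over a background video.
import numpy as np

def composite_over(dst_rgb, layer_rgba):
    """Standard 'over' alpha compositing of one RGBA layer onto an RGB frame."""
    alpha = layer_rgba[..., 3:4]                      # (H, W, 1), values in [0, 1]
    return layer_rgba[..., :3] * alpha + dst_rgb * (1.0 - alpha)

def render_retimed_frame(background, layers, retime_maps, t):
    """Recombine layers for output frame index t.

    background:  list of (H, W, 3) float frames in [0, 1]
    layers:      layers[i] is a list of (H, W, 4) RGBA frames for person i
    retime_maps: retime_maps[i](t) -> source frame index for person i
                 (identity map = original timing; constant map = "frozen";
                  omitting a layer entirely = person "erased")
    """
    frame = background[t].astype(np.float64)
    for person_layer, retime in zip(layers, retime_maps):
        src_t = int(np.clip(retime(t), 0, len(person_layer) - 1))
        frame = composite_over(frame, person_layer[src_t].astype(np.float64))
    return frame

# Example retimings: slow person 0 to half speed, freeze person 1 at frame 40.
# retime_maps = [lambda t: t // 2, lambda t: 40]
```

Because shadows, reflections, and loose clothing are absorbed into each person's layer by the learned decomposition, this simple compositing step carries those correlated effects along with the retimed person.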

Updated: 2020-09-17