Editorial issue 32.1
Computer Animation and Virtual Worlds ( IF 1.1 ) Pub Date : 2021-02-25 , DOI: 10.1002/cav.1991
Nadia Magnenat Thalmann, Daniel Thalmann

This issue contains six papers. In the first paper, Shiguang Liu, Si Gao, and Siqi Xu, from Tianjin University, China, propose an automatic sound synthesis method for explosion scenes. An explosion animation consists of two major events: the explosive event and the interactive fracture event. A new spectral flux-based approach is developed to segment the extracted sounds into a grain dictionary. A cascade sound synthesis method based on impulse responses is then designed to synchronize the chain-reaction sound for the fracture event. For the exploding sound, whose frequency-domain characteristics are similar across examples, an example-based automatic sound synthesis method is adopted.
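
The spectral-flux segmentation idea can be illustrated with a minimal sketch. This is not the authors' implementation; the frame size, window, and thresholding scheme here are assumptions chosen for clarity. Spectral flux measures how much the magnitude spectrum grows from one frame to the next, and upward threshold crossings serve as candidate grain boundaries.

```python
import numpy as np

def spectral_flux(signal, frame_len=512, hop=256):
    """Per-frame spectral flux: summed positive change in magnitude spectrum."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(frame_len))) for f in frames]
    flux = [0.0]
    for prev, cur in zip(mags[:-1], mags[1:]):
        diff = cur - prev
        flux.append(float(np.sum(diff[diff > 0])))  # keep only energy increases
    return np.array(flux)

def segment_grains(flux, threshold):
    """Frame indices where flux crosses the threshold upward: grain onsets."""
    return [i for i in range(1, len(flux))
            if flux[i] >= threshold and flux[i - 1] < threshold]
```

Each detected onset would mark the start of a grain to be stored in the dictionary; the threshold is typically set relative to the flux maximum or a running average.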

In the second paper, Jian Zhu, Silong Li, Ruichu Cai, and Guoheng Huang, from Guangdong University of Technology, Guangzhou, China, Zhifeng Hao, from Foshan University, Guangdong, China, Bin Sheng, from Shanghai Jiao Tong University, Shanghai, China, and Enhua Wu, from University of Macau & Institute of Software, Academia Sinica, Beijing, China, propose an adaptive vorticity confinement method that compensates for the vorticity loss during advection at little extra cost. The main idea is to first calculate a scale factor whose value depends on the vorticity lost during advection, and then use it to adaptively control the vorticity confinement force, compensating the vorticity with high stability. The experimental results show the effectiveness and efficiency of their method.
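
A rough sketch of the idea on a 2D grid follows. The scale-factor formula and the constant names here are illustrative assumptions, not the paper's exact formulation; the confinement direction term N x w is the standard vorticity confinement construction, and the adaptive part scales it by the fraction of vorticity magnitude lost between the pre- and post-advection fields.

```python
import numpy as np

def curl_2d(u, v, dx):
    """Scalar vorticity w = dv/dx - du/dy on a periodic 2D grid (central diffs)."""
    dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2 * dx)
    dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2 * dx)
    return dvdx - dudy

def adaptive_confinement_force(u_pre, v_pre, u_post, v_post, dx, eps_base=0.5):
    """Confinement force scaled by the vorticity lost during advection."""
    w_pre = curl_2d(u_pre, v_pre, dx)
    w_post = curl_2d(u_post, v_post, dx)
    # Scale factor grows with the fraction of vorticity magnitude that was lost
    # (an assumed form: zero loss -> zero extra force).
    loss = np.abs(w_pre) - np.abs(w_post)
    scale = np.clip(loss / (np.abs(w_pre) + 1e-6), 0.0, 1.0)
    # Standard confinement direction: N = grad|w| / |grad|w||.
    aw = np.abs(w_post)
    gx = (np.roll(aw, -1, axis=1) - np.roll(aw, 1, axis=1)) / (2 * dx)
    gy = (np.roll(aw, -1, axis=0) - np.roll(aw, 1, axis=0)) / (2 * dx)
    norm = np.sqrt(gx**2 + gy**2) + 1e-6
    nx, ny = gx / norm, gy / norm
    # In 2D, N x w gives the in-plane force direction.
    fx = eps_base * scale * dx * (ny * w_post)
    fy = eps_base * scale * dx * (-nx * w_post)
    return fx, fy
```

The key property is that when advection loses no vorticity the scale factor vanishes, so no spurious energy is injected, which is what gives the method its stability.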

In the third paper, Yi Han and Xiaogang Jin, from Zhejiang University, Hangzhou, China, and Qianwen Chao, from Xidian University, Xi'an, China, present a simplified force-based heterogeneous traffic simulation model that facilitates consistent adjustment of the parameters involved. Unlike previous work, which requires tuning multiple ad hoc parameters to produce satisfactory results, their approach achieves similar results using clear, meaningful parameters to simulate interactions between various kinds of road users. To simulate diverse and realistic motions of road users, the authors parameterize the coefficients of the force model for finer motion control. Their approach is also scalable to new types of road users and lends itself to an object-oriented implementation with high performance. They validate their framework with extensive experiments.
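
The flavor of such a force-based heterogeneous model can be sketched as follows. The coefficient names and values, the repulsion law, and the road-user types are hypothetical placeholders; the point is that each type carries a small set of meaningful parameters (goal attraction, repulsion strength, interaction radius) rather than ad hoc per-scenario constants.

```python
import numpy as np

# Hypothetical per-type coefficients: goal attraction, repulsion, body radius.
PARAMS = {
    "pedestrian": dict(k_goal=1.0, k_rep=2.0, radius=0.5),
    "bicycle":    dict(k_goal=1.5, k_rep=3.0, radius=1.0),
    "car":        dict(k_goal=2.0, k_rep=5.0, radius=2.0),
}

def step(agents, dt=0.1):
    """One explicit Euler step of a force-based heterogeneous traffic model.

    agents: list of dicts with 'type', 'pos', 'vel', 'goal' (2D numpy arrays).
    """
    forces = []
    for a in agents:
        p = PARAMS[a["type"]]
        # Attraction toward the agent's goal.
        to_goal = a["goal"] - a["pos"]
        f = p["k_goal"] * to_goal / (np.linalg.norm(to_goal) + 1e-6)
        # Pairwise repulsion from other road users within interaction range.
        for b in agents:
            if b is a:
                continue
            d = a["pos"] - b["pos"]
            dist = np.linalg.norm(d)
            reach = p["radius"] + PARAMS[b["type"]]["radius"]
            if dist < reach:
                f += p["k_rep"] * (reach - dist) * d / (dist + 1e-6)
        forces.append(f)
    for a, f in zip(agents, forces):
        a["vel"] = a["vel"] + dt * f
        a["pos"] = a["pos"] + dt * a["vel"]
```

Adding a new road-user type then amounts to adding one entry to the parameter table, which is what makes the scheme naturally extensible and object-oriented.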

The fourth paper, by Hong Guo, Shanchen Zou, Chuyin Lai, and Hongxin Zhang, from Zhejiang University, Hangzhou, China, deals with a visualization-driven approach for analyzing dance videos. They first encode extracted video frames into a set of heat maps via a neural network, from which a skeleton structure is computed for pose estimation, with enhanced post-processing to help capture dance moves. A subsequent pose-similarity method allows users to quantify differences between student training videos and the standard one. Finally, an interactive visualization tool enables users and domain experts to interactively analyze the quality of dance moves along the timeline. The authors demonstrate the applicability and effectiveness of their proposed tool using case studies involving physical coordination research.
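
A minimal sketch of a pose-similarity score is shown below. This is an assumed formulation, not the paper's: each pose is translation- and scale-normalized, and per-frame similarity is derived from the mean joint distance between the student and reference skeletons.

```python
import numpy as np

def normalize_pose(joints):
    """Center a (J, 2) pose on its centroid and scale it to unit size."""
    centered = joints - joints.mean(axis=0)
    scale = np.linalg.norm(centered) + 1e-6
    return centered / scale

def pose_similarity(student, reference):
    """Per-frame similarity in (0, 1] between two (T, J, 2) pose sequences."""
    scores = []
    for s, r in zip(student, reference):
        d = np.linalg.norm(normalize_pose(s) - normalize_pose(r), axis=1).mean()
        scores.append(1.0 / (1.0 + d))  # distance 0 -> similarity 1
    return np.array(scores)
```

Plotting these per-frame scores along the timeline is exactly the kind of signal an interactive visualization tool can expose for frame-by-frame inspection of dance moves.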

In the fifth paper, Chenxu Xu, Wenjie Yu, and Meili Wang, from Northwest Agriculture and Forestry University, Yangling, China, Yanran Li and Xiaosong Yang, from Bournemouth University, in the UK, and Xuequan Lu, from Deakin University, Geelong, Australia, make two contributions. The first is a new keyframe extraction algorithm that reduces keyframe redundancy and lowers the motion-sequence reconstruction error. The second is a new motion-sequence reconstruction method that further reduces this error. Specifically, the authors treat the input motion sequence as curves and apply extended binomial fitting to locate points where the slope changes dramatically in their vicinity. They then take these points as input to density clustering to obtain the keyframes. Finally, the motion curves are segmented at the keyframes, and each segment is fitted with the binomial formula again to obtain the binomial parameters for motion reconstruction.
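
The slope-change-then-cluster pipeline can be sketched on a single motion channel. The sliding-window quadratic fit, the threshold, and the gap-based clustering below are simplifying assumptions standing in for the paper's binomial fitting and density clustering.

```python
import numpy as np

def slope_change_points(curve, window=5, thresh=0.5):
    """Frames where the slope of a locally fitted quadratic changes sharply.

    curve: (T,) motion channel. A quadratic y = a t^2 + b t + c is fitted
    over a sliding window, and frames are flagged where the fitted slope
    differs strongly between neighbouring windows.
    """
    t = np.arange(window)
    slopes = []
    for i in range(len(curve) - window + 1):
        a, b, _c = np.polyfit(t, curve[i:i + window], 2)
        mid = window // 2
        slopes.append(2 * a * mid + b)  # derivative of the fit at window center
    change = np.abs(np.diff(np.array(slopes)))
    return [i for i, c in enumerate(change) if c > thresh]

def cluster_keyframes(points, gap=3):
    """Group nearby candidate frames (simple gap-based density clustering)."""
    if not points:
        return []
    clusters, cur = [], [points[0]]
    for p in points[1:]:
        if p - cur[-1] <= gap:
            cur.append(p)
        else:
            clusters.append(cur)
            cur = [p]
    clusters.append(cur)
    return [int(np.median(c)) for c in clusters]
```

Reconstruction then refits each inter-keyframe segment with the same low-order model, so only the keyframes and per-segment coefficients need to be stored.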

In the last paper, Qiang Chen, Guoliang Luo, and Yang Tong, from East China Jiao Tong University, Nanchang, China, Xiaogang Jin, from Zhejiang University, Hangzhou, China, and Zhigang Deng, from University of Houston, USA, present a Lagrangian hydrodynamics method to simulate the fluid-like motion of crowds and a triggering approach to generate linear stop-and-go wave behavior. Specifically, the authors impose a self-propulsion force on the leading agents to push the crowd forward and introduce a Smoothed Particle Hydrodynamics (SPH)-based model to simulate the dynamics of dense crowds. In addition, they present a motion-signal propagation approach that triggers the rest of the crowd to respond to their immediate leaders linearly, producing the linear stop-and-go wave effect of the fluid-like crowd motion. Their experiments demonstrate that the model can simulate large-scale dense crowds with linear wave propagation.
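
The triggering idea in isolation can be sketched with a 1D agent chain; the SPH density and pressure terms are omitted here, and the fixed per-agent reaction delay is an assumption. Because each follower starts moving a fixed delay after its immediate leader, the motion onset travels backward through the crowd at a constant rate, which is the linear stop-and-go wave.

```python
import numpy as np

def simulate_stop_and_go(n_agents=10, steps=60, delay=3, speed=1.0, dt=0.1):
    """1D chain: the leader self-propels; each follower reacts to its
    immediate leader after a fixed delay, producing a linear wave.

    Returns an array of shape (steps, n_agents) of agent positions; agent 0
    is the leader, and start_time[i] is when agent i begins to move.
    """
    start_time = np.arange(n_agents) * delay  # signal propagates linearly
    pos = np.arange(n_agents, dtype=float)[::-1]  # leader in front
    history = np.zeros((steps, n_agents))
    for t in range(steps):
        for i in range(n_agents):
            if t >= start_time[i]:
                pos[i] += speed * dt  # triggered agents self-propel forward
        history[t] = pos
    return history
```

In the full SPH model the triggering signal would modulate the self-propulsion force of each particle rather than switch its velocity directly, but the linear propagation of the onset is the same.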
