NeuralVDB: High-resolution Sparse Volume Representation using Hierarchical Neural Networks
ACM Transactions on Graphics (IF 6.2), Pub Date: 2024-02-28, DOI: 10.1145/3641817
Doyub Kim, Minjae Lee, Ken Museth
We introduce NeuralVDB, which improves on an existing industry standard for efficient storage of sparse volumetric data, denoted VDB [Museth 2013], by leveraging recent advancements in machine learning. Our novel hybrid data structure can reduce the memory footprints of VDB volumes by orders of magnitude, while maintaining its flexibility and only incurring small (user-controlled) compression errors. Specifically, NeuralVDB replaces the lower nodes of a shallow and wide VDB tree structure with multiple hierarchical neural networks that separately encode topology and value information by means of neural classifiers and regressors respectively. This approach is proven to maximize the compression ratio while maintaining the spatial adaptivity offered by the higher-level VDB data structure. For sparse signed distance fields and density volumes, we have observed compression ratios on the order of 10× to more than 100× from already compressed VDB inputs, with little to no visual artifacts. Furthermore, NeuralVDB is shown to offer more effective compression performance compared to other neural representations such as Neural Geometric Level of Detail [Takikawa et al. 2021], Variable Bitrate Neural Fields [Takikawa et al. 2022a], and Instant Neural Graphics Primitives [Müller et al. 2022]. Finally, we demonstrate how warm-starting from previous frames can accelerate training, i.e., compression, of animated volumes as well as improve temporal coherency of model inference, i.e., decompression.
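To make the architecture concrete, here is a minimal, hedged sketch of the core idea described above: the lower levels of the VDB tree are replaced by a neural *classifier* that predicts which voxels are active (topology) and a neural *regressor* that predicts values (e.g. signed distances) for the active voxels. This is an illustrative toy, not the paper's implementation; the network sizes, the plain NumPy MLPs, and all names below are assumptions for demonstration only (the networks are randomly initialized, not trained).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Randomly initialized weights for layer sizes [in, h1, ..., out]."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(x, weights):
    """Forward pass of a small fully connected network with ReLU hiddens."""
    h = x
    for W, b in weights[:-1]:
        h = np.maximum(h @ W + b, 0.0)      # hidden layers: ReLU
    W, b = weights[-1]
    return h @ W + b                        # linear output head

# Topology classifier: 3-D coordinate -> probability the voxel is active.
classifier = make_mlp([3, 64, 64, 1])
# Value regressor: 3-D coordinate -> signed-distance value at that point.
regressor = make_mlp([3, 64, 64, 1])

coords = rng.uniform(-1.0, 1.0, size=(8, 3))          # query positions

logits = mlp_forward(coords, classifier)
active_prob = 1.0 / (1.0 + np.exp(-logits))           # sigmoid -> occupancy
mask = active_prob[:, 0] > 0.5                        # predicted topology

sdf = mlp_forward(coords, regressor)[:, 0]            # regressed values
sdf_active = sdf[mask]                                # decode only active voxels
```

Separating topology from values in this way lets the classifier act as a compact stand-in for the VDB bit masks while the regressor only needs to be accurate inside the active region, which is where the compression gains come from.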


