Web-based Exploration of Annotated Multi-Layered Relightable Image Models
ACM Journal on Computing and Cultural Heritage (IF 2.4) Pub Date: 2021-05-08, DOI: 10.1145/3430846
Alberto Jaspe-Villanueva 1, Moonisa Ahsan 1, Ruggero Pintus 1, Andrea Giachetti 2, Fabio Marton 1, Enrico Gobbetti 1
We introduce a novel approach for exploring image-based shape and material models registered with structured descriptive information fused into multi-scale overlays. We represent the objects of interest as a series of registered layers of image-based shape and material data. These layers are represented at different scales and can be produced by a variety of pipelines. They can include Reflectance Transformation Imaging representations as well as spatially varying normal and Bidirectional Reflectance Distribution Function fields, possibly obtained by fusing multi-spectral data. An overlay image pyramid associates visual annotations with the various scales. The overlay pyramid of each layer is created at data preparation time in one of three ways: (1) imported from other pipelines, (2) created with the simple annotation drawing toolkit available within the viewer, or (3) authored with external image editing tools. This makes it easy for users to draw annotations directly over the regions of interest. At runtime, clients access annotated multi-layered datasets through a standard web server. Users can explore these datasets on a wide range of devices, from small mobile devices to large-scale displays used in museum installations. On all these platforms, JavaScript/WebGL2 clients running in the browser perform layer selection, interactive relighting, enhanced visualization, and annotation display. We address clutter by embedding interactive lenses: a focus-and-context, multiple-layer exploration tool that supports viewing more than one representation in a single view, allowing presentation modes and annotation display to be mixed and matched. The capabilities of our approach are demonstrated on a variety of cultural heritage use-cases involving different kinds of annotated surface and material models.
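
To make the browser-side relighting and lens composition concrete, the sketch below shows how a WebGL2 fragment shader could evaluate a luminance (LRGB) Polynomial Texture Map layer, one common Reflectance Transformation Imaging representation, under an interactive light direction, and composite an annotation overlay only inside a circular lens. This is a minimal illustration, not the paper's actual client code: the coefficient packing into two RGB textures, the uniform names, and the symmetric scale/bias are all assumptions.

```glsl
#version 300 es
precision highp float;

// Six LRGB-PTM coefficients packed into two RGB textures (hypothetical layout).
uniform sampler2D uCoeff012;   // a0, a1, a2 in r, g, b
uniform sampler2D uCoeff345;   // a3, a4, a5 in r, g, b
uniform sampler2D uAlbedo;     // per-pixel chromaticity
uniform sampler2D uAnnotation; // overlay pyramid level, premultiplied alpha
uniform vec2  uLightDir;       // (lu, lv): projection of the unit light vector
uniform vec2  uLensCenter;     // lens centre in texture coordinates
uniform float uLensRadius;

in vec2 vUv;
out vec4 fragColor;

void main() {
  // Undo the [0,1] -> coefficient scale/bias applied at encoding time (assumed symmetric).
  vec3 c012 = texture(uCoeff012, vUv).rgb * 2.0 - 1.0;
  vec3 c345 = texture(uCoeff345, vUv).rgb * 2.0 - 1.0;
  float lu = uLightDir.x, lv = uLightDir.y;

  // Biquadratic PTM luminance model: a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5.
  float lum = c012.r * lu * lu + c012.g * lv * lv + c012.b * lu * lv
            + c345.r * lu + c345.g * lv + c345.b;
  vec3 relit = clamp(lum, 0.0, 1.0) * texture(uAlbedo, vUv).rgb;

  // Inside the lens, composite the annotation overlay ("over" operator, premultiplied alpha).
  vec4  note   = texture(uAnnotation, vUv);
  vec3  withNote = note.rgb + relit * (1.0 - note.a);
  float inLens = step(distance(vUv, uLensCenter), uLensRadius); // 1.0 inside the lens
  fragColor = vec4(mix(relit, withNote, inLens), 1.0);
}
```

A matching client-side helper might fetch one pyramid tile of a layer from the plain web server and upload it as a texture; the URL scheme is again purely illustrative.

```js
// Fetch one tile of a layer's image pyramid over HTTP and upload it to WebGL2.
// The tile URL layout is a hypothetical example, not the dataset format used in the paper.
async function loadTileTexture(gl, url) {
  const blob = await (await fetch(url)).blob();
  const bitmap = await createImageBitmap(blob);
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB8, gl.RGB, gl.UNSIGNED_BYTE, bitmap);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  return tex;
}

// Example use: loadTileTexture(gl, "layers/rti/level3/tile_12_7.jpg")
```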

Updated: 2021-05-08