SilhoNet-Fisheye: Adaptation of A ROI-Based Object Pose Estimation Network to Monocular Fisheye Images
IEEE Robotics and Automation Letters (IF 5.2), Pub Date: 2020-01-01, DOI: 10.1109/lra.2020.2994036
Gideon Billings , Matthew Johnson-Roberson

There has been much recent interest in deep learning methods for monocular image-based object pose estimation. While object pose estimation is an important problem for autonomous robot interaction with the physical world, and the application space for monocular-based methods is expansive, there has been little work on applying these methods to fisheye imaging systems. Moreover, few annotated fisheye image datasets exist on which such methods can be developed and tested. The research landscape is even sparser for object detection methods applied in the underwater domain, fisheye image based or otherwise. In this work, we present a novel framework for adapting a ROI-based 6D object pose estimation method to work on full fisheye images. The method incorporates the gnomonic projection of regions of interest from an intermediate spherical image representation to correct for the fisheye distortions. Further, we contribute a fisheye image dataset, called UWHandles, collected in natural underwater environments, with 6D object pose and 2D bounding box annotations.
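The central step described in the abstract, correcting fisheye distortion by gnomonically projecting each region of interest from an intermediate spherical image representation, can be sketched in Python as follows. This is a minimal illustration, not the authors' implementation: it assumes the spherical image is stored in equirectangular form, and the function name and parameters are hypothetical.

import numpy as np

def gnomonic_patch(sphere_img, center_lon, center_lat, fov_deg=60.0, size=224):
    """Sample a perspective (gnomonically projected) patch from an
    equirectangular spherical image, centered on a viewing direction.

    sphere_img             -- H x W x 3 equirectangular image (360 x 180 deg)
    center_lon, center_lat -- viewing direction of the ROI center, in radians
    fov_deg                -- field of view of the output patch
    size                   -- output patch is size x size pixels
    """
    H, W = sphere_img.shape[:2]

    # Tangent-plane grid covering the requested field of view.
    half_extent = np.tan(np.radians(fov_deg) / 2.0)
    xs = np.linspace(-half_extent, half_extent, size)
    x, y = np.meshgrid(xs, -xs)  # negate so the top row points "up"

    # Inverse gnomonic projection: tangent plane (x, y) -> sphere (lat, lon).
    rho = np.sqrt(x ** 2 + y ** 2)
    c = np.arctan(rho)
    sin_c, cos_c = np.sin(c), np.cos(c)
    rho = np.where(rho == 0.0, 1e-12, rho)  # avoid division by zero at center
    lat = np.arcsin(cos_c * np.sin(center_lat)
                    + y * sin_c * np.cos(center_lat) / rho)
    lon = center_lon + np.arctan2(
        x * sin_c,
        rho * np.cos(center_lat) * cos_c - y * np.sin(center_lat) * sin_c)

    # Sphere (lat, lon) -> equirectangular pixel coordinates, with wraparound.
    u = ((lon / (2.0 * np.pi) + 0.5) % 1.0) * (W - 1)
    v = (0.5 - lat / np.pi) * (H - 1)

    # Nearest-neighbour sampling keeps the sketch short; bilinear interpolation
    # would be preferable in practice.
    ui = np.clip(np.round(u).astype(int), 0, W - 1)
    vi = np.clip(np.round(v).astype(int), 0, H - 1)
    return sphere_img[vi, ui]

For each detected ROI, the viewing direction of its center would be passed as (center_lon, center_lat), yielding an approximately perspective, distortion-corrected patch that a standard ROI-based pose estimation network can consume.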
