3D-Guided Face Manipulation of 2D Images for the Prediction of Post-Operative Outcome After Cranio-Maxillofacial Surgery
IEEE Transactions on Image Processing ( IF 10.8 ) Pub Date : 2021-08-24 , DOI: 10.1109/tip.2021.3096081
Robin Andlauer , Andreas Wachter , Matthias Schaufelberger , Frederic Weichel , Reinald Kuhle , Christian Freudlsperger , Werner Nahm

Cranio-maxillofacial surgery often alters the aesthetics of the face, which can weigh heavily on a patient's decision whether or not to undergo surgery. Today, physicians can predict the post-operative face using surgery planning tools to support the patient's decision-making. While these planning tools allow a simulation of the post-operative face, the facial texture must usually be captured by an additional 3D texture scan and subsequently mapped onto the simulated face. This approach often results in face predictions that do not appear realistic or lifelike and are therefore ill-suited to guide the patient's decision-making. Instead, we propose a method that uses a generative adversarial network to modify a facial image according to a 3D soft-tissue estimation of the post-operative face. To circumvent the lack of available data pairs between pre- and post-operative measurements, we propose a semi-supervised training strategy using cycle losses that only requires paired open-source data of images and 3D surfaces of the face's shape. After training on "in-the-wild" images, we show that our model can realistically manipulate local regions of a face in a 2D image based on a modified 3D shape. We then test our model on four clinical examples, predicting the post-operative face according to a 3D soft-tissue prediction of the surgery outcome simulated by a surgery planning tool. With this, we aim to demonstrate the potential of our approach to predict realistic post-operative images of faces without the need for paired clinical data, physical models, or 3D texture scans.
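The cycle-loss idea underlying the training strategy — manipulate an image toward a target 3D shape, map it back, and penalize any deviation from the original — can be sketched in a few lines. This is a toy NumPy illustration, not the paper's implementation; the generators here are hypothetical stand-ins for the forward and backward networks:

```python
import numpy as np

def cycle_consistency_loss(x, g_forward, g_backward):
    """Per-pixel L1 cycle loss: mean |G_backward(G_forward(x)) - x|.

    g_forward stands in for the network that manipulates the image
    toward a target 3D shape; g_backward maps it back. Both are
    placeholders, not the paper's actual generator architectures.
    """
    x_cycled = g_backward(g_forward(x))
    return float(np.mean(np.abs(x_cycled - x)))

# Toy "generators": a brightness shift and its exact inverse
# (hypothetical stand-ins for learned mappings).
g_fwd = lambda img: img + 0.1
g_bwd = lambda img: img - 0.1

image = np.random.default_rng(0).random((64, 64, 3))
loss = cycle_consistency_loss(image, g_fwd, g_bwd)
# A perfect inverse pair drives the cycle loss to (numerically) zero;
# during training, minimizing this term pushes the two generators
# toward being inverses of each other.
```

In the semi-supervised setting described above, this term lets the model train on unpaired data: no post-operative ground-truth image is needed, because the supervision signal is recovering the original image after a round trip.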
