Multi-feature fusion network for road scene semantic segmentation
Computers & Electrical Engineering (IF 4.0) Pub Date: 2021-04-22, DOI: 10.1016/j.compeleceng.2021.107155
Jiaxing Sun, Yujie Li

Road scene semantic segmentation often requires a deeper neural network to achieve higher accuracy, which makes the segmentation model more complex and slower. In this paper, we instead use a shallow neural network to perform semantic segmentation for intelligent transportation systems. Specifically, we propose a lightweight semantic segmentation model. First, image features are extracted by a simple stack of convolutional layers and three ResNet branches, then refined by an attention mechanism. Next, element-wise multiplication and feature fusion are performed. Finally, the segmentation mask is obtained. Because the few convolutional layers and the ResNet branches consume little computation, the main resources can be devoted to computing the fusion between features. Experiments show that our method achieves high accuracy at comparable speed on the Cityscapes and CamVid datasets. On Cityscapes, our method reaches 75.0% mIoU, 0.2% higher than the well-performing BiSeNet.
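The pipeline in the abstract (attention-refined branch features, element-wise multiplication, fusion) can be sketched numerically. This is a minimal NumPy illustration, not the authors' implementation: the `channel_attention` gate (global average pooling followed by a sigmoid) and the multiplicative fusion rule are simplifying assumptions standing in for the paper's actual attention and fusion modules.

```python
import numpy as np

def channel_attention(feat):
    """Reweight each channel by a sigmoid gate of its global average.
    feat: (C, H, W) feature map. Hypothetical stand-in for the paper's
    attention mechanism."""
    gap = feat.mean(axis=(1, 2))             # (C,) global average pooling
    gate = 1.0 / (1.0 + np.exp(-gap))        # sigmoid weight per channel
    return feat * gate[:, None, None]        # broadcast back over H, W

def fuse_branches(branches):
    """Attention-refine each branch, then fuse by element-wise multiplication."""
    refined = [channel_attention(b) for b in branches]
    fused = refined[0]
    for b in refined[1:]:
        fused = fused * b                    # element-wise multiplication step
    return fused

# Three mock branch outputs with matching shapes (C=8, H=W=16)
rng = np.random.default_rng(0)
branches = [rng.standard_normal((8, 16, 16)) for _ in range(3)]
mask_logits = fuse_branches(branches)
print(mask_logits.shape)  # (8, 16, 16)
```

In a real model the fused map would be followed by a classifier head and upsampling to produce the per-pixel segmentation mask; the sketch only shows how cheap the fusion arithmetic itself is relative to deep feature extraction.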




Updated: 2021-04-23