Co-learning saliency detection with coupled channels and low-rank factorization
Signal, Image and Video Processing (IF 2.0), Pub Date: 2020-05-10, DOI: 10.1007/s11760-020-01683-7
Yuteng Gao, Shuyuan Yang

In this paper, a co-learning saliency detection method based on coupled channels and low-rank factorization is proposed, which imitates the structural sparse coding and cooperative processing mechanism of the dorsal "where" and ventral "what" pathways in the human visual system (HVS). First, each image is partitioned into superpixels, and their structural sparsity is explored to locate pure background regions along the image borders. Second, the image is processed by two cortical pathways that cooperatively learn a "where" feature map and a "what" feature map, taking the background as the dictionary and using sparse coding errors as an indication of saliency. Finally, the two feature maps are integrated to generate the saliency map. Because the "where" and "what" feature maps complement each other, the method highlights salient regions while suppressing the background. Experiments on several public benchmarks show that the method outperforms its counterparts.
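The core mechanism described above can be summarized as: segment the image into superpixels, treat the border superpixels as a background dictionary, and score each superpixel by how poorly sparse coding over that dictionary reconstructs it. Below is a minimal sketch of this background-dictionary idea, not the authors' implementation: the superpixel descriptor (mean Lab colour plus centroid), the SLIC parameters, and the sparsity level are illustrative assumptions, and the coupled "where"/"what" channels and the low-rank factorization step are omitted.

```python
# Illustrative sketch: saliency as sparse-coding reconstruction error
# over a background dictionary built from image-border superpixels.
import numpy as np
from skimage import data, segmentation, color
from sklearn.decomposition import sparse_encode

def saliency_from_background_coding(rgb, n_segments=200, n_nonzero=3):
    lab = color.rgb2lab(rgb)
    labels = segmentation.slic(rgb, n_segments=n_segments,
                               compactness=10, start_label=0)
    n_sp = labels.max() + 1

    # Simple superpixel descriptor: mean Lab colour + normalized centroid.
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.zeros((n_sp, 5))
    for k in range(n_sp):
        m = labels == k
        feats[k, :3] = lab[m].mean(axis=0)
        feats[k, 3] = ys[m].mean() / h
        feats[k, 4] = xs[m].mean() / w

    # Superpixels touching the image border serve as the background dictionary.
    border = np.unique(np.concatenate([labels[0], labels[-1],
                                       labels[:, 0], labels[:, -1]]))
    dictionary = feats[border]

    # Sparse-code every superpixel over the background dictionary; a large
    # reconstruction error means "poorly explained by background", i.e. salient.
    codes = sparse_encode(feats, dictionary, algorithm='omp',
                          n_nonzero_coefs=min(n_nonzero, dictionary.shape[1]))
    recon = codes @ dictionary
    err = np.linalg.norm(feats - recon, axis=1)
    err = (err - err.min()) / (err.max() - err.min() + 1e-12)

    return err[labels]  # per-pixel saliency map in [0, 1]

if __name__ == "__main__":
    sal = saliency_from_background_coding(data.astronaut())
    print(sal.shape, float(sal.min()), float(sal.max()))
```

In the paper, two such channels produce complementary "where" and "what" feature maps that are then fused; the sketch shows only a single-channel version of the sparse-coding-error principle.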

Updated: 2020-05-10