RGB-D saliency detection via cascaded mutual information minimization
Proceedings of the IEEE/CVF International Conference on …, 2021 • openaccess.thecvf.com
Abstract
Existing RGB-D saliency detection models do not explicitly encourage RGB and depth to achieve effective multi-modal learning. In this paper, we introduce a novel multi-stage cascaded learning framework via mutual information minimization to explicitly model the multi-modal information between the RGB image and depth data. Specifically, we first map the features of each modality to a lower-dimensional feature vector and adopt mutual information minimization as a regularizer to reduce the redundancy between appearance features from RGB and geometric features from depth. We then perform multi-stage cascaded learning to impose the mutual information minimization constraint at every stage of the network. Extensive experiments on benchmark RGB-D saliency datasets illustrate the effectiveness of our framework. Further, to foster the development of this field, we contribute the largest dataset to date, COME15K (7x larger than NJU2K), which contains 15,625 image pairs with high-quality polygon-, scribble-, object-, instance-, and rank-level annotations. Based on these rich labels, we additionally construct four new benchmarks (code, results, and benchmarks will be made publicly available) with strong baselines and observe some interesting phenomena that can motivate future model design.
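The abstract describes the idea (project each modality's stage features to a low-dimensional vector, then penalize the mutual information between the two vectors) but not the implementation. Below is a minimal PyTorch sketch under stated assumptions: it uses a CLUB-style variational upper bound on mutual information (Cheng et al., 2020) as one plausible estimator, and the projection sizes (256 to 64), the loss weight 0.1, and all module names are illustrative, not the paper's actual choices.

```python
import torch
import torch.nn as nn

class MIUpperBound(nn.Module):
    """CLUB-style variational upper bound on I(z_rgb; z_depth) (Cheng et al., 2020).

    A Gaussian network q(z_depth | z_rgb) approximates the true conditional;
    the MI estimate is E_pairs[log q] - E_shuffled[log q], which upper-bounds
    the mutual information when q fits the conditional well.
    """

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))
        self.logvar = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def log_likelihood(self, z_rgb, z_depth):
        # log N(z_depth; mu(z_rgb), diag(exp(logvar(z_rgb)))), up to constants
        mu, logvar = self.mu(z_rgb), self.logvar(z_rgb)
        return (-(z_depth - mu) ** 2 / logvar.exp() - logvar).sum(dim=1)

    def forward(self, z_rgb, z_depth):
        paired = self.log_likelihood(z_rgb, z_depth)
        # shuffling the batch approximates samples from the product of marginals
        shuffled = self.log_likelihood(z_rgb, z_depth[torch.randperm(z_depth.size(0))])
        return (paired - shuffled).mean()  # estimated MI; minimized as a regularizer


# Per-stage projection heads mapping backbone features to low-dim vectors
# (the 256 -> 64 sizes are assumptions, not the paper's settings).
proj_rgb, proj_depth = nn.Linear(256, 64), nn.Linear(256, 64)
mi_estimator = MIUpperBound(dim=64)

f_rgb, f_depth = torch.randn(8, 256), torch.randn(8, 256)  # stand-in stage features
z_rgb, z_depth = proj_rgb(f_rgb), proj_depth(f_depth)

saliency_loss = torch.tensor(0.0)  # placeholder for the stage's saliency loss
stage_loss = saliency_loss + 0.1 * mi_estimator(z_rgb, z_depth)  # weight is illustrative
stage_loss.backward()
```

In a cascaded setup, one such regularized loss would be computed at each decoder stage and summed, corresponding to the abstract's statement that the mutual information minimization constraint is imposed at every stage of the network.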