CASINet: Content-Adaptive Scale Interaction Networks for scene parsing
- Xin Jin,
- Cuiling Lan,
- Wenjun Zeng,
- Zhizheng Zhang,
- Zhibo Chen
Neurocomputing, Vol. 419, pp. 9-22
Abstract: Objects at different spatial positions in an image appear at different scales. Adaptive receptive fields are expected to capture suitable ranges of context for accurate pixel-level semantic prediction. Recently, atrous convolution with different dilation rates has been used to generate multi-scale features through several branches, which are then fused for prediction. However, the branches of different scales lack explicit interaction, which prevents them from adaptively making full use of the available context. In this paper, we propose a Content-Adaptive Scale Interaction Network (CASINet) to exploit multi-scale features for scene parsing. We build CASINet on the classic Atrous Spatial Pyramid Pooling (ASPP) module, followed by a proposed contextual scale interaction (CSI) module and a scale adaptation (SA) module. Specifically, in the CSI module, rather than restricting feature learning at each spatial position of a given scale to a fixed set of convolutional filters shared across all positions, we make the filters adaptive to spatial positions. We achieve this through context interaction among the features of different scales. The SA module explicitly and softly selects the suitable scale for each spatial position and each channel. Ablation studies demonstrate the effectiveness of the proposed modules. We achieve state-of-the-art performance on three scene parsing benchmarks: Cityscapes, ADE20K, and LIP.
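The abstract describes a pipeline of ASPP-style multi-branch atrous convolutions whose per-scale features are fused by a soft, per-position and per-channel scale selection. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the CSI interaction is simplified to a 1x1 convolution over the stacked scale features, and all module names, channel sizes, and dilation rates are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ASPPBranches(nn.Module):
    """Parallel atrous convolutions producing one feature map per scale (ASPP-like)."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # padding = dilation keeps the spatial size for 3x3 kernels
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])

    def forward(self, x):
        # returns a list of S tensors, each of shape (N, C, H, W)
        return [branch(x) for branch in self.branches]


class SoftScaleSelection(nn.Module):
    """Softly weights each scale at every spatial position and channel (SA-like)."""
    def __init__(self, ch, num_scales):
        super().__init__()
        # predict one attention logit per scale, channel, and position
        self.attn = nn.Conv2d(ch * num_scales, ch * num_scales, kernel_size=1)
        self.num_scales = num_scales

    def forward(self, feats):
        stacked = torch.stack(feats, dim=1)                       # (N, S, C, H, W)
        n, s, c, h, w = stacked.shape
        logits = self.attn(stacked.view(n, s * c, h, w)).view(n, s, c, h, w)
        weights = torch.softmax(logits, dim=1)                    # softmax over scales
        return (weights * stacked).sum(dim=1)                     # fused (N, C, H, W)


if __name__ == "__main__":
    x = torch.randn(2, 256, 64, 64)          # stand-in for backbone features
    branches = ASPPBranches(256, 256)
    fuse = SoftScaleSelection(256, num_scales=4)
    out = fuse(branches(x))
    print(out.shape)                          # torch.Size([2, 256, 64, 64])
```

The softmax over the scale dimension realizes the "explicit and soft" scale selection mentioned in the abstract; the fused map would then feed the segmentation head.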