This page provides detailed results covering the performance of all methods in terms of all metrics on all classes and categories. Please refer to the Benchmark Suite for details on the evaluation and the metrics. Jump to the individual tables via the following links:
Pixel-Level Semantic Labeling Task
Instance-Level Semantic Labeling Task
Panoptic Semantic Labeling Task
3D Vehicle Detection Task
Usage
Within each table, use the buttons in the first row to hide columns or to export the visible data to various formats. Use the widgets in the second row to filter the table by selecting values of interest (multiple selections are possible). Click a numeric column header to sort the table by that column.
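The per-class columns and the "average" column in the tables below report the intersection-over-union (IoU) metric, i.e. TP / (TP + FP + FN) with true positives, false positives, and false negatives accumulated per class over the whole test set. Purely as an illustration (this is not the official evaluation code of the Benchmark Suite), a minimal NumPy sketch of that computation could look as follows; the label-map shapes and the random inputs are placeholders.

```python
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix from label maps."""
    mask = gt < num_classes                      # skip void/ignore labels
    idx = num_classes * gt[mask].astype(np.int64) + pred[mask]
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def class_iou(conf):
    """Per-class IoU = TP / (TP + FP + FN) from an accumulated confusion matrix."""
    tp = np.diag(conf)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    return tp / np.maximum(tp + fp + fn, 1)

# Toy usage with random label maps for the 19 evaluated classes.
num_classes = 19
gt = np.random.randint(0, num_classes, size=(1024, 2048))
pred = np.random.randint(0, num_classes, size=(1024, 2048))
conf = confusion_matrix(gt, pred, num_classes)
print("per-class IoU:", class_iou(conf).round(3))
print("mean IoU (the 'average' column):", class_iou(conf).mean().round(3))
```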
Pixel-Level Semantic Labeling Task
IoU on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | road | sidewalk | building | wall | fence | pole | traffic light | traffic sign | vegetation | terrain | sky | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FCN 8s | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Fully Convolutional Networks for Semantic Segmentation | J. Long, E. Shelhamer, and T. Darrell | CVPR 2015 | Trained by Marius Cordts on a pre-release version of the dataset more details | 0.5 | 65.3 | 97.4 | 78.4 | 89.2 | 34.9 | 44.2 | 47.4 | 60.1 | 65.0 | 91.4 | 69.3 | 93.9 | 77.1 | 51.4 | 92.6 | 35.3 | 48.6 | 46.5 | 51.6 | 66.8 |
RRR-ResNet152-MultiScale | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | update: this submission actually used the coarse labels, which was previously not marked accordingly more details | n/a | 75.8 | 98.3 | 84.0 | 92.0 | 50.8 | 54.5 | 62.6 | 67.7 | 73.7 | 92.8 | 70.8 | 95.0 | 82.6 | 60.6 | 95.0 | 65.3 | 83.1 | 76.6 | 63.3 | 71.3 | ||
Dilation10 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Scale Context Aggregation by Dilated Convolutions | Fisher Yu and Vladlen Koltun | ICLR 2016 | Dilation10 is a convolutional network that consists of a front-end prediction module and a context aggregation module. Both are described in the paper. The combined network was trained jointly. The context module consists of 10 layers, each of which has C=19 feature maps. The larger number of layers in the context module (10 for Cityscapes versus 8 for Pascal VOC) is due to the high input resolution. The Dilation10 model is a pure convolutional network: there is no CRF and no structured prediction. Dilation10 can therefore be used as the baseline input for structured prediction models. Note that the reported results were produced by training on the training set only; the network was not retrained on train+val. more details | 4.0 | 67.1 | 97.6 | 79.2 | 89.9 | 37.3 | 47.6 | 53.2 | 58.6 | 65.2 | 91.8 | 69.4 | 93.7 | 78.9 | 55.0 | 93.3 | 45.5 | 53.4 | 47.7 | 52.2 | 66.0 |
Adelaide | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | G. Lin, C. Shen, I. Reid, and A. van den Hengel | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 35.0 | 66.4 | 97.3 | 78.5 | 88.4 | 44.5 | 48.3 | 34.1 | 55.5 | 61.7 | 90.1 | 69.5 | 92.2 | 72.5 | 52.3 | 91.0 | 54.6 | 61.6 | 51.6 | 55.0 | 63.1 |
DeepLab LargeFOV StrongWeak | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation | G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille | ICCV 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 64.8 | 97.4 | 78.3 | 88.1 | 47.5 | 44.2 | 29.5 | 44.4 | 55.4 | 89.4 | 67.3 | 92.8 | 71.0 | 49.3 | 91.4 | 55.9 | 66.6 | 56.7 | 48.1 | 58.1 |
DeepLab LargeFOV Strong | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs | L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille | ICLR 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 63.1 | 97.3 | 77.7 | 87.7 | 43.6 | 40.5 | 29.7 | 44.5 | 55.4 | 89.4 | 67.0 | 92.7 | 71.2 | 49.4 | 91.4 | 48.7 | 56.7 | 49.1 | 47.9 | 58.6 |
DPN | yes | yes | yes | yes | no | no | no | no | no | no | 3 | 3 | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | Trained on a pre-release version of the dataset more details | n/a | 59.1 | 96.3 | 71.7 | 86.7 | 43.7 | 31.7 | 29.2 | 35.8 | 47.4 | 88.4 | 63.1 | 93.9 | 64.7 | 38.7 | 88.8 | 48.0 | 56.4 | 49.4 | 38.3 | 50.0 |
Segnet basic | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 57.0 | 96.4 | 73.2 | 84.0 | 28.5 | 29.0 | 35.7 | 39.8 | 45.2 | 87.0 | 63.8 | 91.8 | 62.8 | 42.8 | 89.3 | 38.1 | 43.1 | 44.2 | 35.8 | 51.9 |
Segnet extended | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 56.1 | 95.6 | 70.1 | 82.8 | 29.9 | 31.9 | 38.1 | 43.1 | 44.6 | 87.3 | 62.3 | 91.7 | 67.3 | 50.7 | 87.9 | 21.7 | 29.0 | 34.7 | 40.5 | 56.6 |
CRFasRNN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Conditional Random Fields as Recurrent Neural Networks | S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr | ICCV 2015 | Trained on a pre-release version of the dataset more details | 0.7 | 62.5 | 96.3 | 73.9 | 88.2 | 47.6 | 41.3 | 35.2 | 49.5 | 59.7 | 90.6 | 66.1 | 93.5 | 70.4 | 34.7 | 90.1 | 39.2 | 57.5 | 55.4 | 43.9 | 54.6 |
Scale invariant CNN + CRF | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Convolutional Scale Invariance for Semantic Segmentation | I. Kreso, D. Causevic, J. Krapac, and S. Segvic | GCPR 2016 | We propose an effective technique to address large scale variation in images taken from a moving car by cross-breeding deep learning with stereo reconstruction. Our main contribution is a novel scale selection layer which extracts convolutional features at the scale which matches the corresponding reconstructed depth. The recovered scale-invariant representation disentangles appearance from scale and frees the pixel-level classifier from the need to learn the laws of the perspective. This results in improved segmentation results due to more efficient exploitation of representation capacity and training data. We perform experiments on two challenging stereoscopic datasets (KITTI and Cityscapes) and report competitive class-level IoU performance. more details | n/a | 66.3 | 96.3 | 76.8 | 88.8 | 40.0 | 45.4 | 50.1 | 63.3 | 69.6 | 90.6 | 67.1 | 92.2 | 77.6 | 55.9 | 90.1 | 39.2 | 51.3 | 44.4 | 54.4 | 66.1 |
DPN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | DPN trained on full resolution images more details | n/a | 66.8 | 97.5 | 78.5 | 89.5 | 40.4 | 45.9 | 51.1 | 56.8 | 65.3 | 91.5 | 69.4 | 94.5 | 77.5 | 54.2 | 92.5 | 44.5 | 53.4 | 49.9 | 52.1 | 64.8 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 64.3 | 97.4 | 77.7 | 88.8 | 27.7 | 40.1 | 51.5 | 60.1 | 64.7 | 91.1 | 67.6 | 93.5 | 77.7 | 54.2 | 92.4 | 33.7 | 42.0 | 42.5 | 52.5 | 66.5 |
Adelaide_context | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid | CVPR 2016 | We explore contextual information to improve semantic image segmentation. Details are described in the paper. We trained contextual networks for coarse level prediction and a refinement network for refining the coarse prediction. Our models are trained on the training set only (2975 images) without adding the validation set. more details | n/a | 71.6 | 98.0 | 82.6 | 90.6 | 44.0 | 50.7 | 51.1 | 65.0 | 71.7 | 92.0 | 72.0 | 94.1 | 81.5 | 61.1 | 94.3 | 61.1 | 65.1 | 53.8 | 61.6 | 70.6 |
NVSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | In the inference, we use the image of 2 different scales. The same for training! more details | 0.4 | 67.4 | 98.0 | 81.9 | 90.1 | 35.7 | 39.8 | 57.4 | 60.6 | 69.3 | 91.7 | 67.6 | 94.6 | 79.3 | 54.5 | 93.5 | 43.8 | 52.4 | 50.3 | 53.0 | 67.8 | ||
ENet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation | Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello | more details | 0.013 | 58.3 | 96.3 | 74.2 | 85.0 | 32.2 | 33.2 | 43.5 | 34.1 | 44.0 | 88.6 | 61.4 | 90.6 | 65.5 | 38.4 | 90.6 | 36.9 | 50.5 | 48.1 | 38.8 | 55.4 | |
DeepLabv2-CRF | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs | Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille | arXiv preprint | DeepLabv2-CRF is based on three main methods. First, we employ convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool to repurpose ResNet-101 (trained on image classification task) in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within DCNNs. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and fully connected Conditional Random Fields (CRFs). The model is only trained on train set. more details | n/a | 70.4 | 97.9 | 81.3 | 90.3 | 48.8 | 47.4 | 49.6 | 57.9 | 67.3 | 91.9 | 69.4 | 94.2 | 79.8 | 59.8 | 93.7 | 56.5 | 67.5 | 57.5 | 57.7 | 68.8 |
m-TCFs | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Convolutional Neural Network more details | 1.0 | 71.8 | 98.2 | 83.6 | 91.2 | 48.4 | 53.2 | 55.8 | 64.3 | 70.3 | 92.2 | 70.2 | 94.5 | 79.9 | 59.2 | 94.1 | 56.0 | 69.1 | 58.2 | 56.7 | 68.4 | ||
DeepLab+DynamicCRF | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ru.nl | more details | n/a | 64.5 | 96.8 | 77.9 | 88.9 | 37.5 | 45.4 | 39.1 | 51.5 | 61.6 | 90.8 | 58.0 | 93.6 | 76.6 | 53.8 | 92.6 | 41.8 | 52.5 | 50.5 | 53.2 | 64.2 | ||
LRR-4x | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained on the training set (2975 images). The segmentation predictions were not post-processed using CRF. (This is a revision of a previous submission in which we didn't use the correct basis functions; the method name changed from 'LLR-4x' to 'LRR-4x') more details | n/a | 69.7 | 97.7 | 79.9 | 90.7 | 44.4 | 48.6 | 58.6 | 68.2 | 72.0 | 92.5 | 69.3 | 94.7 | 81.6 | 60.0 | 94.0 | 43.6 | 56.8 | 47.2 | 54.8 | 69.7 |
LRR-4x | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained using both coarse and fine annotations. The segmentation predictions were not post-processed using CRF. more details | n/a | 71.8 | 97.9 | 81.5 | 91.4 | 50.5 | 52.7 | 59.4 | 66.8 | 72.7 | 92.5 | 70.1 | 95.0 | 81.3 | 60.1 | 94.3 | 51.2 | 67.7 | 54.6 | 55.6 | 69.6 |
Le_Selfdriving_VGG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 65.9 | 97.5 | 78.7 | 88.9 | 42.7 | 44.2 | 46.4 | 53.4 | 61.1 | 90.2 | 68.6 | 93.4 | 74.1 | 48.5 | 91.9 | 44.9 | 62.4 | 52.3 | 51.2 | 61.3 | ||
SQ | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Speeding up Semantic Segmentation for Autonomous Driving | Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, Bernhard Nessler, Sepp Hochreiter | NIPS 2016 Workshop - MLITS Machine Learning for Intelligent Transportation Systems Neural Information Processing Systems 2016, Barcelona, Spain | more details | 0.06 | 59.8 | 96.9 | 75.4 | 87.9 | 31.6 | 35.7 | 50.9 | 52.0 | 61.7 | 90.9 | 65.8 | 93.0 | 73.8 | 42.6 | 91.5 | 18.8 | 41.2 | 33.3 | 34.0 | 59.9 |
SAIT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous more details | 4.0 | 76.9 | 98.5 | 85.8 | 92.6 | 59.2 | 56.6 | 62.4 | 69.4 | 75.3 | 93.2 | 72.1 | 95.2 | 83.6 | 66.1 | 95.2 | 68.6 | 80.9 | 73.0 | 61.7 | 72.4 | ||
FoveaNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FoveaNet | Xin Li, Jiashi Feng | 1.caffe-master 2.resnet-101 3.single scale testing Previously listed as "LXFCRN". more details | n/a | 74.1 | 98.2 | 83.2 | 91.5 | 44.4 | 51.2 | 63.2 | 70.8 | 75.5 | 92.7 | 70.1 | 94.5 | 83.3 | 64.2 | 94.6 | 60.8 | 70.7 | 63.3 | 63.0 | 73.2 | |
RefineNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation | Guosheng Lin; Anton Milan; Chunhua Shen; Ian Reid; | Please refer to our technical report for details: "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation" (https://arxiv.org/abs/1611.06612). Our source code is available at: https://github.com/guosheng/refinenet 2975 images (training set with fine labels) are used for training. more details | n/a | 73.6 | 98.2 | 83.3 | 91.3 | 47.8 | 50.4 | 56.1 | 66.9 | 71.3 | 92.3 | 70.3 | 94.8 | 80.9 | 63.3 | 94.5 | 64.6 | 76.1 | 64.3 | 62.2 | 70.0 | |
SegModel | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Both train set (2975) and val set (500) are used to train model for this submission. more details | 0.8 | 78.5 | 98.6 | 86.4 | 92.8 | 52.4 | 59.7 | 59.6 | 72.5 | 78.3 | 93.3 | 72.8 | 95.5 | 85.4 | 70.0 | 95.7 | 75.4 | 84.1 | 75.1 | 68.7 | 75.0 | ||
TuSimple | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | more details | n/a | 77.6 | 98.5 | 85.5 | 92.8 | 58.6 | 55.5 | 65.0 | 73.5 | 77.9 | 93.3 | 72.0 | 95.2 | 84.8 | 68.5 | 95.4 | 70.9 | 78.8 | 68.7 | 65.9 | 73.8 | |
Global-Local-Refinement | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global-residual and Local-boundary Refinement Networks for Rectifying Scene Parsing Predictions | Rui Zhang, Sheng Tang, Min Lin, Jintao Li, Shuicheng Yan | International Joint Conference on Artificial Intelligence (IJCAI) 2017 | global-residual and local-boundary refinement The method was previously listed as "RefineNet". To avoid confusions with a recently appeared and similarly named approach, the submission name was updated. more details | n/a | 77.3 | 98.6 | 86.1 | 92.8 | 57.0 | 58.3 | 63.3 | 70.8 | 76.8 | 93.4 | 72.2 | 95.4 | 84.9 | 67.9 | 95.6 | 68.5 | 77.5 | 69.4 | 65.2 | 74.5 |
XPARSE | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 73.4 | 98.3 | 83.9 | 91.6 | 47.6 | 53.4 | 59.5 | 66.8 | 72.5 | 92.7 | 70.9 | 95.2 | 82.4 | 63.5 | 94.7 | 57.4 | 68.8 | 62.2 | 62.6 | 71.5 | ||
ResNet-38 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | Single model, single scale, no post-processing with CRFs. Model A2, 2 conv., fine only, single-scale testing. The submission was previously listed as "Model A2, 2 conv."; the name was changed for consistency with the other submission of the same work. more details | n/a | 78.4 | 98.5 | 85.7 | 93.1 | 55.5 | 59.1 | 67.1 | 74.8 | 78.7 | 93.7 | 72.6 | 95.5 | 86.6 | 69.2 | 95.7 | 64.5 | 78.8 | 74.1 | 69.0 | 76.7 |
SegModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 79.2 | 98.6 | 86.2 | 93.0 | 53.7 | 60.4 | 64.2 | 73.5 | 78.5 | 93.4 | 72.2 | 95.5 | 85.3 | 68.6 | 95.8 | 77.9 | 87.0 | 78.0 | 68.0 | 75.1 | ||
Deep Layer Cascade (LC) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via Deep Layer Cascade | Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang | CVPR 2017 | We propose a novel deep layer cascade (LC) method to improve the accuracy and speed of semantic segmentation. Unlike the conventional model cascade (MC) that is composed of multiple independent models, LC treats a single deep model as a cascade of several sub-models. Earlier sub-models are trained to handle easy and confident regions, and they progressively feed-forward harder regions to the next sub-model for processing. Convolutions are only calculated on these regions to reduce computations. The proposed method possesses several advantages. First, LC classifies most of the easy regions in the shallow stage and makes the deeper stages focus on a few hard regions. Such an adaptive and 'difficulty-aware' learning improves segmentation performance. Second, LC accelerates both training and testing of the deep network thanks to early decisions in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable framework, allowing joint learning of all sub-models. We evaluate our method on PASCAL VOC and more details | n/a | 71.1 | 98.1 | 82.8 | 91.2 | 47.1 | 52.8 | 57.3 | 63.9 | 70.7 | 92.5 | 70.5 | 94.2 | 81.2 | 57.9 | 94.1 | 50.1 | 59.6 | 57.0 | 58.6 | 71.1 |
FRRN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes | Tobias Pohlen, Alexander Hermans, Markus Mathias, Bastian Leibe | Arxiv | Full-Resolution Residual Networks (FRRN) combine multi-scale context with pixel-level accuracy by using two processing streams within one network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. more details | n/a | 71.8 | 98.2 | 83.3 | 91.6 | 45.8 | 51.1 | 62.2 | 69.4 | 72.4 | 92.6 | 70.0 | 94.9 | 81.6 | 62.7 | 94.6 | 49.1 | 67.1 | 55.3 | 53.5 | 69.5 |
MNet_MPRG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Chubu University, MPRG | without val dataset, external dataset (e.g. image net) and post-processing more details | 0.6 | 71.9 | 98.1 | 82.9 | 91.8 | 43.6 | 50.5 | 64.3 | 71.4 | 74.6 | 92.7 | 70.3 | 94.7 | 82.4 | 60.9 | 94.1 | 50.9 | 62.5 | 57.2 | 53.8 | 70.0 | ||
ResNet-38 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | single model, no post-processing with CRFs Model A2, 2 conv., fine+coarse, multi scale testing more details | n/a | 80.6 | 98.7 | 86.9 | 93.3 | 60.4 | 62.9 | 67.6 | 75.0 | 78.7 | 93.7 | 73.7 | 95.5 | 86.8 | 71.1 | 96.1 | 75.2 | 87.6 | 81.9 | 69.8 | 76.7 |
FCN8s-QunjieYu | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 57.4 | 96.7 | 74.5 | 88.0 | 30.7 | 37.8 | 45.5 | 8.3 | 63.1 | 91.7 | 68.5 | 93.3 | 75.8 | 45.4 | 92.0 | 15.4 | 30.5 | 25.7 | 42.5 | 64.9 | ||
RGB-D FCN | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Anonymous | GoogLeNet + depth branch, single model no data augmentation, no training on validation set, no graphical model Used coarse labels to initialize depth branch more details | n/a | 67.4 | 97.9 | 81.2 | 90.7 | 41.0 | 44.8 | 56.8 | 65.3 | 69.4 | 91.9 | 68.7 | 94.7 | 78.9 | 52.9 | 93.1 | 38.8 | 53.1 | 43.7 | 51.0 | 67.0 | ||
MultiBoost | yes | yes | yes | yes | no | no | yes | yes | no | no | 2 | 2 | no | no | Anonymous | Boosting based solution. Publication is under review. more details | 0.25 | 59.3 | 95.9 | 69.5 | 87.3 | 34.4 | 32.7 | 40.5 | 54.9 | 58.6 | 89.2 | 65.3 | 90.3 | 68.4 | 42.5 | 89.0 | 22.5 | 51.9 | 40.9 | 36.5 | 55.7 | ||
GoogLeNet FCN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Going Deeper with Convolutions | Christian Szegedy , Wei Liu , Yangqing Jia , Pierre Sermanet , Scott Reed , Dragomir Anguelov , Dumitru Erhan , Vincent Vanhoucke , Andrew Rabinovich | CVPR 2015 | GoogLeNet No data augmentation, no graphical model Trained by Lukas Schneider, following "Fully Convolutional Networks for Semantic Segmentation", Long et al. CVPR 2015 more details | n/a | 63.0 | 97.4 | 77.9 | 89.2 | 35.0 | 39.0 | 50.6 | 59.8 | 64.1 | 91.2 | 66.9 | 93.7 | 76.2 | 45.1 | 92.6 | 33.4 | 40.4 | 32.7 | 47.3 | 64.6 |
ERFNet (pretrained) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | Transactions on Intelligent Transportation Systems (T-ITS) | ERFNet pretrained on ImageNet and trained only on the fine train (2975) annotated images more details | 0.02 | 69.7 | 97.9 | 82.1 | 90.7 | 45.2 | 50.4 | 59.0 | 62.6 | 68.4 | 91.9 | 69.4 | 94.2 | 78.5 | 59.8 | 93.4 | 52.3 | 60.8 | 53.7 | 49.9 | 64.2 |
ERFNet (from scratch) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | IV2017 | ERFNet trained entirely on the fine train set (2975 images) without any pretraining nor coarse labels more details | 0.02 | 68.0 | 97.7 | 81.0 | 89.8 | 42.5 | 48.0 | 56.2 | 59.8 | 65.3 | 91.4 | 68.2 | 94.2 | 76.8 | 57.1 | 92.8 | 50.8 | 60.1 | 51.8 | 47.3 | 61.6 |
TuSimple_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are better for practical use. First, we implement dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a new state-of-the-art result of 80.1% mIOU in the test set. We are also state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Pretrained models are available at https://goo.gl/DQMeun. more details | n/a | 80.1 | 98.5 | 85.9 | 93.2 | 57.7 | 61.1 | 67.2 | 73.7 | 78.0 | 93.4 | 72.3 | 95.4 | 85.9 | 70.5 | 95.9 | 76.1 | 90.6 | 83.7 | 67.4 | 75.7 | |
SAC-multiple | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scale-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | International Conference on Computer Vision (ICCV) 2017 | more details | n/a | 78.1 | 98.7 | 86.5 | 93.1 | 56.3 | 59.5 | 65.1 | 73.0 | 78.2 | 93.5 | 72.6 | 95.6 | 85.9 | 70.8 | 95.9 | 71.2 | 78.6 | 66.2 | 67.7 | 76.0 |
NetWarp | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 80.5 | 98.6 | 86.7 | 93.4 | 60.6 | 62.6 | 68.6 | 75.9 | 80.0 | 93.5 | 72.0 | 95.3 | 86.5 | 72.1 | 95.9 | 72.9 | 89.9 | 77.4 | 70.5 | 76.4 | ||
depthAwareSeg_RNN_ff | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | training with fine-annotated training images only (val set is not used); flip-augmentation only in training; single GPU for train&test; softmax loss; resnet101 as front end; multiscale test. more details | n/a | 78.2 | 98.5 | 85.4 | 92.5 | 54.4 | 60.9 | 60.2 | 72.3 | 76.8 | 93.1 | 71.6 | 94.8 | 85.2 | 69.0 | 95.7 | 70.1 | 86.5 | 75.5 | 68.3 | 75.5 | ||
Ladder DenseNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Ladder-style DenseNets for Semantic Segmentation of Large Natural Images | Ivan Krešo, Josip Krapac, Siniša Šegvić | ICCV 2017 | https://ivankreso.github.io/publication/ladder-densenet/ more details | 0.45 | 74.3 | 97.4 | 80.2 | 92.0 | 47.6 | 53.9 | 64.6 | 72.8 | 76.3 | 92.8 | 66.4 | 95.5 | 83.8 | 66.1 | 94.3 | 55.6 | 70.3 | 67.0 | 62.1 | 73.0 |
Real-time FCN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Understanding Cityscapes: Efficient Urban Semantic Scene Understanding | Marius Cordts | Dissertation | Combines the following concepts: Network architecture: "Going deeper with convolutions", Szegedy et al., CVPR 2015; Framework and skip connections: "Fully convolutional networks for semantic segmentation", Long et al., CVPR 2015; Context modules: "Multi-scale context aggregation by dilated convolutions", Yu and Koltun, ICLR 2016. more details | 0.044 | 72.6 | 98.0 | 81.4 | 91.1 | 44.6 | 50.7 | 57.3 | 64.1 | 71.2 | 92.1 | 68.5 | 94.7 | 81.2 | 61.2 | 94.6 | 54.5 | 76.5 | 72.2 | 57.6 | 68.7 |
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Conv-Deconv Grid-Network for semantic segmentation. Using only the training set without extra coarse annotated data (only 2975 images). No pre-training (ImageNet). No post-processing (like CRF). more details | n/a | 69.5 | 98.0 | 82.8 | 90.8 | 41.8 | 48.3 | 59.3 | 65.4 | 69.4 | 92.4 | 69.2 | 93.8 | 81.8 | 62.3 | 93.1 | 41.8 | 56.2 | 49.0 | 55.2 | 69.1 | ||
PEARL | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Video Scene Parsing with Predictive Feature Learning | Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang, Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, and Shuicheng Yan | ICCV 2017 | We proposed a novel Parsing with prEdictive feAtuRe Learning (PEARL) model to address the following two problems in video scene parsing: firstly, how to effectively learn meaningful video representations for producing the temporally consistent labeling maps; secondly, how to overcome the problem of insufficient labeled video training data, i.e. how to effectively conduct unsupervised deep learning. To our knowledge, this is the first model to employ predictive feature learning in the video scene parsing. more details | n/a | 75.4 | 98.4 | 84.5 | 92.1 | 54.1 | 56.6 | 60.4 | 69.0 | 74.0 | 92.9 | 70.9 | 95.2 | 83.5 | 65.7 | 95.0 | 61.8 | 72.2 | 69.6 | 64.8 | 72.8 |
pruned & dilated inception-resnet-v2 (PD-IR2) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | 0.69 | 67.3 | 97.9 | 81.9 | 90.2 | 39.5 | 47.7 | 54.8 | 58.1 | 69.9 | 91.3 | 70.4 | 94.4 | 77.0 | 51.9 | 92.9 | 40.7 | 54.3 | 55.2 | 45.5 | 65.1 | ||
PSPNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Pyramid Scene Parsing Network | Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia | CVPR 2017 | This submission is trained on coarse+fine(train+val set, 2975+500 images). Former submission is trained on coarse+fine(train set, 2975 images) which gets 80.2 mIoU: https://www.cityscapes-dataset.com/method-details/?submissionID=314 Previous versions of this method were listed as "SenseSeg_1026". more details | n/a | 81.2 | 98.7 | 86.9 | 93.5 | 58.4 | 63.7 | 67.7 | 76.1 | 80.5 | 93.6 | 72.2 | 95.3 | 86.8 | 71.9 | 96.2 | 77.7 | 91.5 | 83.6 | 70.8 | 77.5 |
motovis | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | motovis.com | more details | n/a | 81.3 | 98.7 | 86.6 | 93.5 | 55.5 | 62.7 | 69.4 | 76.3 | 80.4 | 93.8 | 72.6 | 95.8 | 87.1 | 72.4 | 96.2 | 77.9 | 91.3 | 88.6 | 69.5 | 77.1 | ||
ML-CRNN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-level Contextual RNNs with Attention Model for Scene Labeling | Heng Fan, Xue Mei, Danil Prokhorov, Haibin Ling | arXiv | A framework based on CNNs and RNNs is proposed, in which the RNNs are used to model spatial dependencies among image units. Besides, to enrich deep features, we use different features from multiple levels, and adopt a novel attention model to fuse them. more details | n/a | 71.2 | 97.9 | 81.0 | 91.0 | 50.3 | 52.4 | 56.7 | 65.7 | 71.4 | 92.2 | 69.6 | 94.6 | 80.2 | 59.3 | 93.9 | 51.1 | 67.6 | 54.5 | 55.1 | 68.6 |
Hybrid Model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 65.8 | 97.5 | 78.5 | 89.0 | 39.0 | 46.1 | 48.6 | 58.7 | 64.0 | 91.2 | 68.3 | 91.8 | 76.8 | 51.9 | 92.2 | 40.0 | 50.6 | 44.9 | 54.3 | 66.6 | ||
tek-Ifly | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Iflytek | Iflytek-yin | Using a fusion strategy of three single models; the best result of a single model is 80.01%; multi-scale testing. more details | n/a | 81.1 | 98.6 | 86.3 | 93.5 | 61.2 | 64.1 | 66.0 | 75.6 | 79.1 | 93.7 | 72.8 | 95.6 | 86.3 | 69.9 | 96.0 | 76.8 | 90.7 | 86.8 | 71.0 | 77.1 | |
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Residual Conv-Deconv Grid Network for Semantic Segmentation | Damien Fourure, Rémi Emonet, Elisa Fromont, Damien Muselet, Alain Tremeau & Christian Wolf | BMVC 2017 | We used a new architecture for semantic image segmentation called GridNet, following a grid pattern allowing multiple interconnected streams to work at different resolutions (see paper). We used only the training set without extra coarse annotated data (only 2975 images) and no pre-training (ImageNet) nor pre or post-processing. more details | n/a | 69.8 | 98.1 | 83.0 | 90.9 | 41.4 | 49.2 | 60.1 | 66.5 | 70.2 | 92.5 | 69.8 | 93.8 | 82.3 | 63.2 | 93.2 | 42.6 | 55.8 | 48.5 | 55.4 | 69.8 |
firenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 68.2 | 94.1 | 74.2 | 87.4 | 40.1 | 44.6 | 54.2 | 65.4 | 65.1 | 90.0 | 66.5 | 92.1 | 76.7 | 61.8 | 92.8 | 45.0 | 64.9 | 59.3 | 54.4 | 67.5 | ||
DeepLabv3 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Rethinking Atrous Convolution for Semantic Image Segmentation | Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam | arXiv preprint | In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we employ a module, called Atrous Spatial Pyramid Pooling (ASPP), which adopts atrous convolution in parallel to capture multi-scale context with multiple atrous rates. Furthermore, we propose to augment the ASPP module with image-level features encoding global context and further boost performance. Results obtained with a single model (no ensemble), trained with fine + coarse annotations. More details will be shown in the updated arXiv report. more details | n/a | 81.3 | 98.6 | 86.2 | 93.5 | 55.2 | 63.2 | 70.0 | 77.1 | 81.3 | 93.8 | 72.3 | 95.9 | 87.6 | 73.4 | 96.3 | 75.1 | 90.4 | 85.1 | 72.1 | 78.3 |
EdgeSenseSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Deep segmentation network with hard negative mining and other tricks. more details | n/a | 76.8 | 98.4 | 84.8 | 92.5 | 52.0 | 58.1 | 61.5 | 73.0 | 76.1 | 93.3 | 71.8 | 95.0 | 85.2 | 68.5 | 95.4 | 62.4 | 77.6 | 70.7 | 66.8 | 75.5 | ||
ScaleNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ScaleNet: Scale Invariant Network for Semantic Segmentation in Urban Driving Scenes | Mohammad Dawud Ansari, Stephan Krauß, Oliver Wasenmüller and Didier Stricker | International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 2018 | The scale difference in driving scenarios is one of the essential challenges in semantic scene segmentation. Close objects cover significantly more pixels than far objects. In this paper, we address this challenge with a scale invariant architecture. Within this architecture, we explicitly estimate the depth and adapt the pooling field size accordingly. Our model is compact and can be extended easily to other research domains. Finally, the accuracy of our approach is comparable to the state-of-the-art and superior for scale problems. We evaluate on the widely used automotive dataset Cityscapes as well as a self-recorded dataset. more details | n/a | 75.1 | 98.3 | 84.8 | 92.4 | 50.1 | 59.6 | 62.8 | 71.8 | 76.8 | 93.2 | 71.4 | 94.6 | 83.6 | 65.2 | 95.1 | 56.0 | 71.6 | 59.9 | 66.3 | 73.6 |
K-net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | XinLiang Zhong | more details | n/a | 76.0 | 98.3 | 84.3 | 92.1 | 52.3 | 56.5 | 59.0 | 69.8 | 73.2 | 92.7 | 70.4 | 94.6 | 83.0 | 66.3 | 95.2 | 68.5 | 79.9 | 74.0 | 62.5 | 72.1 | ||
MSNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | previously also listed as "MultiPathJoin" and "MultiPath_Scale". more details | 0.2 | 76.8 | 98.3 | 85.0 | 92.5 | 48.6 | 56.7 | 67.5 | 75.5 | 78.4 | 93.3 | 72.8 | 94.9 | 85.8 | 71.1 | 95.3 | 59.8 | 73.3 | 65.9 | 68.5 | 76.4 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 78.5 | 98.4 | 85.2 | 92.8 | 54.2 | 60.8 | 62.4 | 73.4 | 77.5 | 93.3 | 71.5 | 95.1 | 84.9 | 69.5 | 95.3 | 68.5 | 86.2 | 80.0 | 67.8 | 75.6 | |
DeepMotion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We propose a novel method based on convnets to extract multi-scale features in a large range particularly for solving street scene segmentation. more details | n/a | 81.4 | 98.7 | 87.0 | 93.5 | 61.6 | 62.6 | 65.4 | 74.6 | 78.6 | 93.6 | 72.5 | 95.4 | 86.2 | 72.3 | 96.1 | 82.3 | 92.8 | 85.7 | 70.2 | 76.6 | ||
SR-AIC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.9 | 98.7 | 87.2 | 93.7 | 62.6 | 64.7 | 69.0 | 76.4 | 80.8 | 93.7 | 73.3 | 95.5 | 86.8 | 72.2 | 96.2 | 77.9 | 90.6 | 87.9 | 71.2 | 77.3 | ||
Roadstar.ai_CV(SFNet) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Roadstar.ai-CV | Maosheng Ye, Guang Zhou, Tongyi Cao, YongTao Huang, Yinzi Chen | Same focus net (SFNet), based on only fine labels, with focus on the loss distribution and the same focus on every layer of the feature map. more details | 0.2 | 79.2 | 98.4 | 85.4 | 93.0 | 59.6 | 59.2 | 67.5 | 76.4 | 79.3 | 93.7 | 73.6 | 95.3 | 86.8 | 73.8 | 95.7 | 67.5 | 81.2 | 72.1 | 69.2 | 77.1 | |
DFN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Learning a Discriminative Feature Network for Semantic Segmentation | Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, Nong Sang | arxiv | Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset. more details | n/a | 80.3 | 98.6 | 85.9 | 93.2 | 59.6 | 61.0 | 66.6 | 73.2 | 78.2 | 93.5 | 71.6 | 95.5 | 86.5 | 70.5 | 96.1 | 77.1 | 89.9 | 84.7 | 68.2 | 76.5 |
RelationNet_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RelationNet: Learning Deep-Aligned Representation for Semantic Image Segmentation | Yueqing Zhuang | ICPR | Semantic image segmentation, which assigns labels in pixel level, plays a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning. However, one central problem of these methods is that deep convolution neural network gives little consideration to the correlation among pixels. To handle this issue, in this paper, we propose a novel deep neural network named RelationNet, which utilizes CNN and RNN to aggregate context information. Besides, a spatial correlation loss is applied to supervise RelationNet to align features of spatial pixels belonging to same category. Importantly, since it is expensive to obtain pixel-wise annotations, we exploit a new training method for combining the coarsely and finely labeled data. Separate experiments show the detailed improvements of each proposal. Experimental results demonstrate the effectiveness of our proposed method to the problem of semantic image segmentation. more details | n/a | 82.4 | 98.8 | 87.9 | 94.0 | 67.7 | 64.4 | 70.2 | 77.1 | 81.1 | 93.9 | 73.5 | 95.8 | 87.8 | 73.4 | 96.4 | 75.3 | 89.4 | 88.1 | 72.0 | 78.3 |
ARSAIT | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | anonymous more details | 1.0 | 73.6 | 98.2 | 82.7 | 91.9 | 48.5 | 51.3 | 60.7 | 67.7 | 73.4 | 92.8 | 69.9 | 95.2 | 82.6 | 61.9 | 94.9 | 58.8 | 66.7 | 66.7 | 62.3 | 71.4 | ||
Mapillary Research: In-Place Activated BatchNorm | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | In-Place Activated BatchNorm for Memory-Optimized Training of DNNs | Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder | arXiv | In-Place Activated Batch Normalization (InPlace-ABN) is a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and by recovering required information during the backward pass through the inversion of stored forward results, with only minor increase (0.8-2%) in computation time. Test results are obtained using a single model. more details | n/a | 82.0 | 98.4 | 85.0 | 93.6 | 61.7 | 63.9 | 67.7 | 77.4 | 80.8 | 93.7 | 71.9 | 95.6 | 86.7 | 72.8 | 95.7 | 79.9 | 93.1 | 89.7 | 72.6 | 78.2 |
EFBNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.8 | 98.6 | 86.6 | 93.4 | 60.6 | 63.4 | 65.4 | 75.4 | 79.6 | 93.5 | 72.1 | 95.2 | 86.5 | 72.9 | 96.0 | 81.4 | 92.7 | 89.8 | 73.6 | 77.2 | ||
Ladder DenseNet v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Journal submission | Anonymous | DenseNet-121 model used in the downsampling path, with an upsampling path using ladder-style skip connections on top of it. more details | 1.0 | 78.4 | 98.7 | 86.6 | 93.0 | 52.9 | 56.9 | 67.2 | 74.4 | 78.9 | 93.6 | 73.1 | 95.5 | 86.2 | 69.7 | 95.9 | 65.9 | 82.1 | 76.3 | 66.8 | 75.7 | |
ESPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation | Sachin Mehta, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi | We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively. more details | 0.0089 | 60.3 | 95.7 | 73.3 | 86.6 | 32.8 | 36.4 | 47.1 | 46.9 | 55.4 | 89.8 | 66.0 | 92.5 | 68.5 | 45.8 | 89.9 | 40.0 | 47.7 | 40.7 | 36.4 | 54.9 | |
ENet with the Lovász-Softmax loss | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks | Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko | arxiv | The Lovász-Softmax loss is a novel surrogate for optimizing the IoU measure in neural networks. Here we finetune the weights provided by the authors of ENet (arXiv:1606.02147) with this loss, for 10'000 iterations on training dataset. The runtimes are unchanged with respect to the ENet architecture. more details | 0.013 | 63.1 | 97.3 | 77.2 | 87.2 | 36.1 | 39.0 | 48.5 | 52.0 | 58.1 | 89.9 | 67.7 | 92.7 | 71.4 | 49.6 | 91.0 | 39.4 | 49.3 | 50.5 | 41.6 | 59.8 |
DRN_CRL_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Relation Network: Learning Consistent and Context-Aware Representation For Semantic Image Segmentation | Yueqing Zhuang | ICIP | DRN_Coarse. Semantic image segmentation, which aims at assigning pixel-wise category, is one of the challenging image understanding problems. Global context plays an important role on local pixel-wise category assignment. To make the best of global context, in this paper, we propose dense relation network (DRN) and context-restricted loss (CRL) to aggregate global and local information. DRN uses Recurrent Neural Network (RNN) with different skip lengths in spatial directions to get context-aware representations while CRL helps aggregate them to learn consistency. Compared with previous methods, our proposed method takes full advantage of hierarchical contextual representations to produce high-quality results. Extensive experiments demonstrate that our method achieves significant state-of-the-art performances on Cityscapes and Pascal Context benchmarks, with mean-IoU of 82.8% and 49.0% respectively. more details | n/a | 82.8 | 98.8 | 87.7 | 94.0 | 65.1 | 64.2 | 70.1 | 77.4 | 81.6 | 93.9 | 73.5 | 95.8 | 88.0 | 74.9 | 96.5 | 80.8 | 92.1 | 88.5 | 72.1 | 78.8 |
ShuffleSeg | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ShuffleSeg: Real-time Semantic Segmentation Network | Mostafa Gamal, Mennatullah Siam, Mo'men Abdel-Razek | Under Review by ICIP 2018 | ShuffleSeg: An efficient realtime semantic segmentation network with skip connections and ShuffleNet units more details | n/a | 58.3 | 95.6 | 72.0 | 85.1 | 31.9 | 33.7 | 39.4 | 44.0 | 51.1 | 88.7 | 63.8 | 92.5 | 64.4 | 38.5 | 89.1 | 37.0 | 51.1 | 40.9 | 35.9 | 52.8 |
SkipNet-MobileNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RTSeg: Real-time Semantic Segmentation Framework | Mennatullah Siam, Mostafa Gamal, Moemen Abdel-Razek, Senthil Yogamani, Martin Jagersand | Under Review by ICIP 2018 | An efficient realtime semantic segmentation network with skip connections based on MobileNet. more details | n/a | 61.5 | 95.8 | 73.9 | 86.2 | 36.7 | 39.4 | 44.5 | 47.2 | 54.3 | 89.5 | 66.0 | 92.9 | 69.3 | 45.1 | 89.9 | 35.6 | 53.9 | 45.6 | 44.8 | 58.2 |
ThunderNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.0104 | 64.0 | 97.3 | 77.4 | 88.3 | 41.2 | 38.4 | 48.5 | 55.6 | 60.8 | 90.7 | 67.7 | 93.0 | 71.6 | 46.6 | 91.6 | 39.3 | 49.9 | 49.8 | 45.5 | 62.3 | ||
PAC: Perspective-adaptive Convolutions | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Perspective-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) | Many existing scene parsing methods adopt Convolutional Neural Networks with receptive fields of fixed sizes and shapes, which frequently results in inconsistent predictions of large objects and invisibility of small objects. To tackle this issue, we propose perspective-adaptive convolutions to acquire receptive fields of flexible sizes and shapes during scene parsing. Through adding a new perspective regression layer, we can dynamically infer the position-adaptive perspective coefficient vectors utilized to reshape the convolutional patches. Consequently, the receptive fields can be adjusted automatically according to the various sizes and perspective deformations of the objects in scene images. Our proposed convolutions are differentiable to learn the convolutional parameters and perspective coefficients in an end-to-end way without any extra training supervision of object sizes. Furthermore, considering that the standard convolutions lack contextual information and spatial dependencies, we propose a context adaptive bias to capture both local and global contextual information through average pooling on the local feature patches and global feature maps, followed by flexible attentive summing to the convolutional results. The attentive weights are position-adaptive and context-aware, and can be learned through adding an additional context regression layer. Experiments on Cityscapes and ADE20K datasets well demonstrate the effectiveness of the proposed methods. more details | n/a | 78.9 | 98.7 | 86.9 | 93.3 | 58.9 | 60.4 | 65.8 | 73.0 | 78.3 | 93.6 | 72.8 | 95.6 | 86.0 | 71.3 | 96.0 | 73.4 | 82.4 | 69.5 | 67.3 | 75.9 |
SU_Net | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 75.3 | 98.4 | 84.7 | 91.9 | 54.3 | 56.4 | 54.6 | 67.4 | 73.8 | 92.7 | 71.1 | 94.5 | 83.7 | 67.3 | 95.2 | 65.1 | 76.8 | 64.1 | 65.6 | 73.9 | ||
MobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | MobileNetV2Plus more details | n/a | 70.7 | 98.0 | 81.9 | 90.9 | 47.1 | 48.6 | 57.1 | 62.8 | 68.3 | 92.3 | 68.6 | 94.5 | 80.3 | 60.4 | 93.8 | 59.1 | 65.4 | 55.7 | 52.0 | 67.0 | ||
DeepLabv3+ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation | Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam | arXiv | Spatial pyramid pooling modules or encoder-decoder structures are used in deep neural networks for the semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We will provide more details in the coming update on the arXiv report. more details | n/a | 82.1 | 98.7 | 87.0 | 93.9 | 59.5 | 63.7 | 71.4 | 78.2 | 82.2 | 94.0 | 73.0 | 95.8 | 88.0 | 73.3 | 96.4 | 78.0 | 90.9 | 83.9 | 73.8 | 78.9 |
RFMobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | Receptive Filed MobileNetV2Plus for Semantic Segmentation more details | n/a | 70.7 | 97.8 | 80.6 | 90.9 | 44.3 | 47.0 | 59.5 | 65.6 | 72.3 | 92.5 | 68.6 | 94.1 | 81.6 | 60.6 | 94.3 | 51.3 | 61.2 | 57.1 | 54.0 | 69.5 | ||
GoogLeNetV1_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | GoogLeNet-v1 FCN trained on Cityscapes, KITTI, and ScanNet, as required by the Robust Vision Challenge at CVPR'18 (http://robustvision.net/) more details | n/a | 59.6 | 96.6 | 73.8 | 87.1 | 27.1 | 31.6 | 47.2 | 53.2 | 59.0 | 89.6 | 55.1 | 92.2 | 72.3 | 48.3 | 90.9 | 29.8 | 40.0 | 33.8 | 42.9 | 62.0 | ||
SAITv2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 70.0 | 97.9 | 80.8 | 89.6 | 54.3 | 45.8 | 47.7 | 52.9 | 60.0 | 90.5 | 67.8 | 93.7 | 73.2 | 54.5 | 92.8 | 67.2 | 80.6 | 72.0 | 48.1 | 60.5 | ||
GUNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Guided Upsampling Network for Real-Time Semantic Segmentation | Davide Mazzini | arxiv | Guided Upsampling Network for Real-Time Semantic Segmentation more details | 0.03 | 70.4 | 98.2 | 82.7 | 90.6 | 47.3 | 45.4 | 51.9 | 59.1 | 66.6 | 91.7 | 68.5 | 94.8 | 79.0 | 59.5 | 94.1 | 60.3 | 71.4 | 54.0 | 54.9 | 67.2 |
RMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | A fast and light net for semantic segmentation. more details | 0.014 | 64.5 | 97.5 | 79.3 | 88.5 | 37.0 | 40.2 | 52.6 | 55.7 | 58.2 | 90.5 | 67.2 | 93.3 | 72.8 | 51.2 | 91.4 | 42.5 | 53.7 | 52.7 | 41.9 | 58.5 | ||
ContextNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ContextNet: Exploring Context and Detail for Semantic Segmentation in Real-time | Rudra PK Poudel, Ujwal Bonde, Stephan Liwicki, Christopher Zach | arXiv | Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naive adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representations to produce competitive semantic segmentation in real-time with low memory requirements. ContextNet combines a deep branch at low resolution that captures global context information efficiently with a shallow branch that focuses on high-resolution segmentation details. We analyze our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024x2048) resolution. more details | 0.0238 | 66.1 | 97.6 | 79.2 | 88.8 | 43.8 | 42.9 | 37.9 | 52.0 | 58.9 | 90.0 | 66.9 | 92.0 | 72.2 | 53.9 | 91.7 | 54.0 | 66.5 | 58.4 | 48.9 | 61.1 |
RFLR | yes | yes | yes | yes | yes | yes | no | no | no | no | 4 | 4 | no | no | Random Forest with Learned Representations for Semantic Segmentation | Byeongkeun Kang, Truong Q. Nguyen | IEEE Transactions on Image Processing | Random Forest with Learned Representations for Semantic Segmentation more details | 0.03 | 30.0 | 82.0 | 32.9 | 58.9 | 9.9 | 7.1 | 10.1 | 11.9 | 17.9 | 74.0 | 43.6 | 84.6 | 26.7 | 2.1 | 64.9 | 8.9 | 10.6 | 4.4 | 2.4 | 17.7 |
DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Searching for Efficient Multi-Scale Architectures for Dense Image Prediction | Liang-Chieh Chen, Maxwell D. Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, Jonathon Shlens | NIPS 2018 | In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that achieve state-of-the-art performance. Additionally, the resulting architecture (called DPC for Dense Prediction Cell) is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems. more details | n/a | 82.7 | 98.7 | 87.1 | 93.8 | 57.7 | 63.5 | 71.0 | 78.0 | 82.1 | 94.0 | 73.3 | 95.4 | 88.2 | 74.5 | 96.5 | 81.2 | 93.3 | 89.0 | 74.1 | 79.0 |
NV-ADLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 83.2 | 98.7 | 87.6 | 94.1 | 63.5 | 66.7 | 72.0 | 78.8 | 82.4 | 94.3 | 75.3 | 96.1 | 87.8 | 73.3 | 96.4 | 78.9 | 93.0 | 90.4 | 72.7 | 78.6 | ||
Adaptive Affinity Field on PSPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Adaptive Affinity Field for Semantic Segmentation | Tsung-Wei Ke*, Jyh-Jing Hwang*, Ziwei Liu, Stella X. Yu | ECCV 2018 | Existing semantic segmentation methods mostly rely on per-pixel supervision, unable to capture structural regularity present in natural images. Instead of learning to enforce semantic labels on individual pixels, we propose to enforce affinity field patterns in individual pixel neighbourhoods, i.e., the semantic label patterns of whether neighbouring pixels are in the same segment should match between the prediction and the ground-truth. The affinity fields characterize geometric relationships within the image, such as "motorcycles have round wheels". We further develop a novel method for learning the optimal neighbourhood size for each semantic category, with an adversarial loss that optimizes over worst-case scenarios. Unlike the common Conditional Random Field (CRF) approaches, our adaptive affinity field (AAF) method has no extra parameters during inference, and is less sensitive to appearance changes in the image. more details | n/a | 79.1 | 98.5 | 85.6 | 93.0 | 53.8 | 59.0 | 65.9 | 75.0 | 78.4 | 93.7 | 72.4 | 95.6 | 86.4 | 70.5 | 95.9 | 73.9 | 82.7 | 76.9 | 68.7 | 76.4 |
APMoE_seg_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Pixel-wise Attentional Gating for Parsimonious Pixel Labeling | Shu Kong, Charless Fowlkes | arxiv | The Pixel-level Attentional Gating (PAG) unit is trained to choose for each pixel the pooling size to adopt to aggregate contextual region around it. There are multiple branches with different dilate rates for varied pooling size, thus varying receptive field. For this ROB challenge, PAG is expected to robustly aggregate information for final prediction. This is our entry for Robust Vision Challenge 2018 workshop (ROB). The model is based on ResNet50, trained over mixed dataset of Cityscapes, ScanNet and Kitti. more details | 0.9 | 56.5 | 96.1 | 73.9 | 87.6 | 30.7 | 29.5 | 45.4 | 48.0 | 60.2 | 90.5 | 66.8 | 92.3 | 69.9 | 20.6 | 91.2 | 27.8 | 35.6 | 19.4 | 30.2 | 57.9 |
BatMAN_ROB | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | batch-normalized multistage attention network more details | 1.0 | 55.4 | 97.4 | 75.7 | 88.2 | 29.0 | 34.8 | 48.4 | 49.4 | 60.6 | 90.3 | 63.3 | 94.2 | 66.3 | 16.8 | 91.2 | 20.9 | 35.2 | 16.6 | 21.1 | 52.7 | ||
HiSS_ROB | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.06 | 58.9 | 97.3 | 77.1 | 87.6 | 38.0 | 38.9 | 36.4 | 47.2 | 53.9 | 89.3 | 66.1 | 92.9 | 67.8 | 46.4 | 89.5 | 35.7 | 32.3 | 24.1 | 39.9 | 58.3 | ||
VENUS_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | VENUS_ROB more details | n/a | 66.4 | 96.9 | 77.9 | 89.6 | 48.1 | 45.7 | 44.1 | 55.0 | 62.5 | 91.1 | 67.2 | 93.8 | 75.2 | 47.1 | 92.7 | 52.6 | 61.8 | 51.6 | 46.8 | 63.0 | ||
VlocNet++_ROB | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 62.7 | 97.4 | 78.1 | 88.4 | 37.8 | 28.8 | 45.9 | 48.8 | 57.1 | 89.3 | 66.2 | 92.9 | 72.2 | 50.3 | 91.7 | 47.1 | 56.0 | 41.5 | 42.6 | 59.0 | ||
AHiSS_ROB | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Augmented Hierarchical Semantic Segmentation more details | 0.06 | 70.6 | 98.0 | 81.6 | 89.9 | 56.5 | 52.3 | 43.0 | 55.2 | 60.8 | 90.6 | 68.1 | 93.7 | 73.5 | 55.4 | 93.1 | 67.5 | 79.1 | 66.4 | 53.9 | 62.1 | ||
IBN-PSP-SA_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | IBN-PSP-SA_ROB more details | n/a | 75.1 | 98.4 | 84.8 | 92.2 | 48.0 | 55.5 | 60.3 | 67.7 | 73.5 | 92.7 | 71.7 | 95.4 | 83.0 | 63.7 | 95.1 | 71.7 | 77.6 | 63.6 | 60.7 | 71.5 | ||
LDN2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Ladder DenseNet: https://ivankreso.github.io/publication/ladder-densenet/ more details | 1.0 | 77.9 | 98.5 | 85.1 | 92.8 | 55.6 | 56.7 | 63.7 | 70.9 | 76.1 | 93.3 | 71.3 | 95.6 | 85.0 | 68.8 | 95.5 | 74.9 | 81.0 | 74.7 | 66.5 | 73.8 | ||
MiniNet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.004 | 40.7 | 94.2 | 59.9 | 77.6 | 15.5 | 11.3 | 20.5 | 20.7 | 27.4 | 82.1 | 58.9 | 89.0 | 41.9 | 16.1 | 80.0 | 14.5 | 18.4 | 11.0 | 11.5 | 23.0 | ||
AdapNetv2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 63.8 | 97.6 | 79.2 | 88.7 | 37.8 | 33.1 | 48.3 | 50.1 | 59.1 | 90.0 | 67.0 | 93.8 | 73.5 | 50.5 | 92.4 | 46.3 | 58.5 | 42.9 | 42.6 | 59.9 | ||
MapillaryAI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 80.5 | 98.5 | 85.7 | 93.6 | 60.6 | 64.7 | 67.6 | 76.4 | 80.2 | 93.8 | 73.6 | 95.8 | 86.4 | 71.0 | 95.8 | 72.3 | 83.9 | 80.8 | 71.3 | 76.7 | ||
FCN101_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 30.4 | 91.6 | 51.8 | 75.0 | 14.0 | 7.6 | 0.9 | 0.0 | 0.9 | 76.6 | 40.5 | 76.4 | 31.8 | 0.1 | 74.9 | 8.9 | 0.0 | 15.4 | 0.0 | 11.2 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name is firefly] | Bosch autodrive challenge more details | n/a | 73.9 | 97.7 | 80.4 | 90.2 | 49.0 | 51.7 | 60.9 | 60.1 | 69.8 | 92.2 | 65.4 | 88.3 | 82.0 | 64.5 | 94.5 | 71.9 | 84.2 | 72.3 | 56.8 | 71.2 | |
EnsembleModel_Bosch | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name was MaskRCNN_BOSH, firefly] | We ensembled three models (ERFNet, DeepLab-MobileNet, TuSimple) and gained a 0.57 improvement in the IoU Classes value. The best single model achieves 73.8549 more details | n/a | 74.4 | 98.2 | 83.6 | 91.4 | 48.5 | 53.7 | 59.3 | 66.8 | 71.9 | 92.6 | 70.9 | 94.5 | 82.4 | 64.5 | 94.7 | 61.8 | 77.5 | 71.2 | 59.9 | 71.1 | |
EVANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 69.8 | 98.0 | 82.4 | 90.8 | 43.4 | 49.0 | 60.0 | 64.2 | 68.9 | 92.0 | 69.0 | 94.6 | 79.2 | 58.8 | 93.5 | 52.1 | 55.7 | 53.6 | 53.5 | 67.3 | ||
CLRCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CLRCNet: Cascaded Low-Rank Convolutions for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method. more details | 0.013 | 63.3 | 97.2 | 77.3 | 88.0 | 35.9 | 40.1 | 50.1 | 55.1 | 60.0 | 90.9 | 66.7 | 93.4 | 73.1 | 49.8 | 90.8 | 37.4 | 51.5 | 45.1 | 41.9 | 58.0 | |
Edgenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | A lightweight semantic segmentation network combined with edge information and channel-wise attention mechanism. more details | 0.03 | 71.0 | 98.1 | 83.1 | 91.6 | 45.4 | 50.6 | 62.6 | 67.2 | 71.4 | 92.4 | 69.7 | 94.9 | 80.4 | 61.1 | 94.3 | 50.0 | 60.9 | 52.5 | 55.3 | 67.7 | ||
L2-SP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Explicit Inductive Bias for Transfer Learning with Convolutional Networks | Xuhong Li, Yves Grandvalet, Franck Davoine | ICML-2018 | With a simple variant of weight decay, L2-SP regularization (see the paper for details), we reproduced PSPNet based on the original ResNet-101 using "train_fine + val_fine + train_extra" set (2975 + 500 + 20000 images), with a small batch size 8. The sync batch normalization layer is implemented in Tensorflow (see the code). more details | n/a | 81.2 | 98.7 | 86.8 | 93.6 | 64.7 | 63.4 | 67.4 | 74.5 | 79.4 | 93.6 | 73.1 | 95.6 | 86.5 | 72.5 | 96.1 | 76.0 | 90.7 | 83.1 | 70.5 | 76.5 |
ALV303 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.2 | 72.2 | 98.3 | 84.2 | 92.0 | 42.1 | 51.8 | 65.9 | 73.7 | 77.5 | 92.9 | 71.6 | 95.0 | 83.0 | 61.6 | 94.6 | 46.8 | 60.0 | 54.3 | 55.5 | 71.8 | ||
NCTU-ITRI | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | For the purpose of fast semantic segmentation, we design a CNN-based encoder-decoder architecture, which is called DSNet. The encoder part is constructed based on the concept of DenseNet, and a simple decoder is adopted to make the network more efficient without degrading the accuracy. We pre-train the encoder network on the ImageNet dataset. Then, only the fine-annotated Cityscapes dataset (2975 training images) is used to train the complete DSNet. The DSNet demonstrates a good trade-off between accuracy and speed. It can process 68 frames per second on 1024x512 resolution images on a single GTX 1080 Ti GPU. more details | 0.0147 | 69.1 | 98.0 | 82.1 | 90.4 | 42.4 | 45.8 | 56.4 | 61.1 | 66.6 | 91.7 | 70.0 | 94.3 | 77.2 | 59.1 | 93.2 | 49.5 | 59.4 | 56.3 | 53.5 | 65.8 | ||
ADSCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ADSCNet: Asymmetric Depthwise Separable Convolution for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method for mobile devices. more details | 0.013 | 64.5 | 97.3 | 78.0 | 88.6 | 39.5 | 40.1 | 51.4 | 55.0 | 60.3 | 91.1 | 66.9 | 93.5 | 73.7 | 50.6 | 91.4 | 44.9 | 51.7 | 48.0 | 43.4 | 59.9 | |
SRC-B-MachineLearningLab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | Samsung Research Center MachineLearningLab. The result is tested with multi-scale and flip. The paper is in preparation. more details | n/a | 82.5 | 98.7 | 87.3 | 94.0 | 64.8 | 64.5 | 70.7 | 76.7 | 81.2 | 94.0 | 73.9 | 95.9 | 87.9 | 72.7 | 96.3 | 79.2 | 92.2 | 88.1 | 71.5 | 77.8 | |
Tencent AI Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.9 | 98.6 | 86.9 | 94.1 | 63.5 | 63.0 | 70.7 | 77.7 | 80.2 | 94.0 | 73.1 | 95.9 | 87.8 | 74.5 | 96.3 | 82.8 | 94.3 | 90.4 | 74.0 | 77.5 | ||
ERINet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Efficient residual inception networks for real-time semantic segmentation more details | 0.023 | 69.8 | 98.0 | 82.4 | 90.4 | 41.7 | 47.7 | 59.0 | 63.0 | 67.7 | 92.0 | 69.6 | 94.7 | 78.3 | 57.3 | 93.4 | 50.5 | 60.7 | 61.5 | 53.3 | 65.7 | ||
PGCNet_Res101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We choose ResNet101 pretrained on ImageNet as our backbone, then use both the train-fine and the val-fine data to train our model with batch size=8 for 80k iterations, without any bells and whistles. We will release our paper later. more details | n/a | 80.5 | 98.7 | 87.1 | 93.6 | 59.6 | 62.7 | 69.3 | 77.6 | 80.4 | 93.9 | 73.1 | 95.6 | 87.3 | 73.1 | 96.4 | 70.8 | 84.9 | 76.5 | 71.5 | 78.1 | |
EDANet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient Dense Modules of Asymmetric Convolution for Real-Time Semantic Segmentation | Shao-Yuan Lo (NCTU), Hsueh-Ming Hang (NCTU), Sheng-Wei Chan (ITRI), Jing-Jhih Lin (ITRI) | Training data: Fine annotations only (train+val. set, 2975+500 images) without any pretraining nor coarse annotations. For training on fine annotations (train set only, 2975 images), it attains a mIoU of 66.3%. Runtime: (resolution 512x1024) 0.0092s on a single GTX 1080Ti, 0.0123s on a single Titan X. more details | 0.0092 | 67.3 | 97.8 | 80.6 | 89.5 | 42.0 | 46.0 | 52.3 | 59.8 | 65.0 | 91.4 | 68.7 | 93.6 | 75.7 | 54.3 | 92.4 | 40.9 | 58.7 | 56.0 | 50.4 | 64.0 | |
OCNet_ResNet101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Context is essential for various computer vision tasks. The state-of-the-art scene parsing methods define the context as the prior of the scene categories (e.g., bathroom, bedroom, street). Such scene context is not suitable for street scene parsing tasks, as most of the scenes are similar. In this work, we propose the Object Context that captures the prior of the object's category that the pixel belongs to. We compute the object context by aggregating all the pixels' features according to an attention map that encodes the probability that each pixel belongs to the same category as the associated pixel. Specifically, we employ the self-attention method to compute the pixel-wise attention map. We further propose the Pyramid Object Context and Atrous Spatial Pyramid Object Context to handle the problem of multiple scales. more details | n/a | 81.2 | 98.7 | 87.1 | 93.7 | 59.4 | 62.3 | 69.6 | 78.0 | 80.8 | 93.9 | 72.6 | 95.8 | 87.5 | 73.5 | 96.4 | 73.6 | 88.2 | 80.6 | 71.9 | 78.3 | |
Knowledge-Aware | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Knowledge-Aware Semantic Segmentation more details | n/a | 79.3 | 97.6 | 80.1 | 93.0 | 56.8 | 57.3 | 66.0 | 72.6 | 78.2 | 93.4 | 71.0 | 95.7 | 85.9 | 69.7 | 95.9 | 74.9 | 88.5 | 86.3 | 68.1 | 75.5 | ||
CASIA_IVA_DANet_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Dual Attention Network for Scene Segmentation | Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu | CVPR2019 | We address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture contexts by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of a traditional dilated FCN, which model the semantic interdependencies in the spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve the feature representation, which contributes to more precise segmentation results. more details | n/a | 81.5 | 98.6 | 86.1 | 93.5 | 56.2 | 63.3 | 69.7 | 77.3 | 81.3 | 93.9 | 72.9 | 95.7 | 87.3 | 72.9 | 96.2 | 76.8 | 89.5 | 86.5 | 72.2 | 78.2
LDFNet | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Incorporating Luminance, Depth and Color Information by Fusion-based Networks for Semantic Segmentation | Shang-Wei Hung, Shao-Yuan Lo | We propose a solution which incorporates luminance, depth and color information via a fusion-based network named LDFNet. It includes a distinctive encoder sub-network to process the depth maps and further employs the luminance images to assist the depth information in the process. LDFNet achieves very competitive results compared to the other state-of-the-art systems on the challenging Cityscapes dataset, while maintaining an inference speed faster than most of the existing top-performing networks. The experimental results show the effectiveness of the proposed information-fused approach and the potential of LDFNet for road scene understanding tasks. more details | n/a | 71.3 | 98.1 | 83.5 | 91.1 | 45.9 | 49.5 | 62.0 | 67.1 | 70.7 | 92.5 | 70.7 | 94.8 | 81.0 | 62.9 | 93.9 | 52.6 | 61.0 | 55.1 | 54.5 | 68.0 |
CGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tianyi Wu et al | we propose a novel Context Guided Network for semantic segmentation on mobile devices. We first design a Context Guided (CG) block by considering the inherent characteristic of semantic segmentation. CG Block aggregates local feature, surrounding context feature and global context feature effectively and efficiently. Based on the CG block, we develop Context Guided Network (CGNet), which not only has a strong capacity of localization and recognition, but also has a low computational and memory footprint. Under a similar number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, the proposed approach achieves 64.8% mean IoU on Cityscapes test set with less than 0.5 M parameters, and has a frame-rate of 50 fps on one NVIDIA Tesla K80 card for 2048 × 1024 high-resolution image. more details | 0.02 | 64.8 | 95.9 | 73.9 | 89.9 | 43.9 | 46.0 | 52.9 | 55.9 | 63.8 | 91.7 | 68.3 | 94.1 | 76.7 | 54.2 | 91.3 | 41.3 | 56.0 | 32.8 | 41.1 | 60.9 | ||
SAITv2-light | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 73.0 | 98.2 | 83.5 | 91.2 | 55.2 | 52.8 | 56.5 | 61.9 | 68.4 | 92.4 | 70.8 | 94.5 | 78.8 | 59.4 | 93.9 | 61.6 | 78.0 | 67.6 | 54.8 | 67.4 | ||
Deform_ResNet_Balanced | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.258 | 51.3 | 94.8 | 69.0 | 83.0 | 17.4 | 18.6 | 38.5 | 28.0 | 36.5 | 88.6 | 56.0 | 88.2 | 63.0 | 29.3 | 89.4 | 23.6 | 36.8 | 31.4 | 28.4 | 54.6 | ||
NfS-Seg | yes | yes | yes | yes | no | no | yes | yes | yes | yes | no | no | no | no | Uncertainty-Aware Knowledge Distillation for Real-Time Scene Segmentation: 7.43 GFLOPs at Full-HD Image with 120 fps | Anonymous | more details | 0.00837312 | 73.1 | 98.2 | 83.7 | 91.3 | 55.6 | 53.1 | 57.2 | 62.6 | 68.7 | 92.4 | 70.9 | 94.6 | 78.9 | 59.7 | 94.0 | 60.7 | 76.3 | 67.3 | 55.4 | 67.8 | |
Improving Semantic Segmentation via Video Propagation and Label Relaxation | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | yes | yes | Improving Semantic Segmentation via Video Propagation and Label Relaxation | Yi Zhu, Karan Sapra, Fitsum A. Reda, Kevin J. Shih, Shawn Newsam, Andrew Tao, Bryan Catanzaro | CVPR 2019 | Semantic segmentation requires large amounts of pixel-wise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models' ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples lead to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. more details | n/a | 83.5 | 98.8 | 87.8 | 94.2 | 64.1 | 65.0 | 72.4 | 79.0 | 82.8 | 94.2 | 74.0 | 96.1 | 88.2 | 75.4 | 96.5 | 78.8 | 94.0 | 91.6 | 73.7 | 79.0 |
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 68.9 | 97.9 | 81.2 | 90.2 | 47.4 | 47.8 | 50.9 | 57.3 | 64.6 | 91.4 | 67.9 | 94.0 | 77.1 | 57.3 | 93.5 | 52.6 | 69.0 | 53.1 | 51.0 | 64.1 |
SwiftNetRN-18 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 0.0243 | 75.5 | 98.3 | 83.9 | 92.2 | 46.3 | 52.8 | 63.2 | 70.6 | 75.8 | 93.1 | 70.3 | 95.4 | 84.0 | 64.5 | 95.3 | 63.9 | 78.0 | 71.9 | 61.6 | 73.6 |
Fast-SCNN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra PK Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0081 | 68.0 | 97.9 | 81.6 | 89.7 | 46.4 | 48.6 | 48.3 | 53.1 | 60.5 | 90.7 | 67.2 | 94.3 | 74.0 | 54.6 | 93.0 | 57.4 | 65.5 | 58.2 | 50.0 | 61.2 | |
Fast-SCNN (Half-resolution) | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0035 | 62.8 | 97.4 | 77.8 | 87.4 | 39.7 | 41.8 | 35.0 | 39.4 | 50.5 | 88.5 | 63.3 | 92.7 | 65.7 | 46.4 | 91.0 | 56.9 | 70.3 | 56.5 | 40.9 | 52.6 | |
Fast-SCNN (Quarter-resolution) | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.00206 | 51.9 | 96.3 | 70.5 | 83.1 | 26.1 | 23.5 | 18.7 | 26.1 | 33.1 | 84.5 | 55.7 | 89.5 | 55.2 | 35.4 | 86.8 | 38.7 | 47.4 | 46.7 | 27.3 | 42.0 | |
DSNet | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | DSNet for Real-Time Driving Scene Semantic Segmentation | Wenfu Wang | DSNet for Real-Time Driving Scene Semantic Segmentation more details | 0.027 | 69.3 | 97.1 | 79.7 | 89.4 | 37.8 | 50.4 | 56.7 | 63.1 | 68.5 | 91.0 | 67.9 | 93.5 | 75.7 | 61.4 | 91.9 | 50.4 | 66.8 | 56.8 | 54.1 | 65.4 | |
SwiftNetRN-18 pyramid | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 74.4 | 98.4 | 84.7 | 92.0 | 48.2 | 50.9 | 62.7 | 69.1 | 74.5 | 92.6 | 69.8 | 95.3 | 83.8 | 66.2 | 95.4 | 64.8 | 75.4 | 55.9 | 60.7 | 72.4 | ||
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | DF1-Seg-d8 more details | 0.007 | 71.4 | 97.9 | 81.6 | 90.6 | 44.8 | 50.4 | 53.2 | 62.7 | 68.0 | 91.7 | 68.3 | 93.9 | 78.8 | 58.9 | 93.4 | 50.5 | 71.4 | 77.4 | 56.4 | 66.4 |
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DF2-Seg2 more details | 0.018 | 75.3 | 98.4 | 84.6 | 92.3 | 51.4 | 54.6 | 62.1 | 68.8 | 73.3 | 92.9 | 70.3 | 95.3 | 82.8 | 64.8 | 95.0 | 59.5 | 74.4 | 78.7 | 60.6 | 71.1 | ||
DDAR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | DiDi Labs, AR Group more details | n/a | 82.2 | 98.7 | 87.3 | 93.6 | 57.2 | 62.9 | 70.9 | 77.8 | 82.0 | 93.8 | 71.4 | 95.8 | 88.1 | 74.5 | 96.3 | 80.1 | 91.3 | 87.6 | 74.0 | 79.0 | ||
LDN-121 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-121 trained on train+val, fine labels only. Single-scale inference. more details | 0.048 | 79.3 | 98.6 | 85.8 | 93.1 | 58.1 | 59.4 | 66.0 | 74.1 | 77.7 | 93.5 | 71.5 | 95.7 | 85.7 | 69.4 | 95.9 | 71.3 | 86.5 | 83.5 | 66.4 | 75.2 | |
TKCN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tree-structured Kronecker Convolutional Network for Semantic Segmentation | Tianyi Wu, Sheng Tang, Rui Zhang, Juan Cao, Jintao Li | more details | n/a | 79.5 | 98.4 | 85.8 | 93.0 | 51.7 | 61.7 | 67.6 | 75.8 | 80.0 | 93.6 | 72.7 | 95.4 | 86.9 | 70.9 | 95.9 | 64.5 | 86.9 | 81.8 | 69.6 | 77.6 | |
RPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Residual Pyramid Learning for Single-Shot Semantic Segmentation | Xiaoyu Chen, Xiaotian Lou, Lianfa Bai, Jing Han | arXiv | we put forward a method for single-shot segmentation in a feature residual pyramid network (RPNet), which learns the main and residuals of segmentation by decomposing the label at different levels of residual blocks. more details | 0.008 | 68.3 | 97.9 | 81.2 | 89.8 | 40.2 | 45.7 | 56.3 | 61.6 | 67.8 | 91.7 | 68.0 | 94.5 | 78.2 | 57.4 | 92.9 | 48.3 | 57.8 | 56.1 | 49.6 | 62.2 |
navi | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | yuxb | multi-scale test more details | n/a | 81.0 | 98.5 | 85.7 | 93.5 | 61.8 | 60.1 | 68.1 | 75.3 | 78.7 | 93.7 | 73.5 | 95.6 | 86.8 | 69.6 | 96.1 | 78.3 | 92.7 | 87.0 | 68.8 | 75.6 | |
Auto-DeepLab-L | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation | Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan Yuille, Li Fei-Fei | arxiv | In this work, we study Neural Architecture Search for semantic image segmentation, an important computer vision task that assigns a semantic label to every pixel in an image. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Without any ImageNet pretraining, our architecture searched specifically for semantic image segmentation attains state-of-the-art performance. Please refer to https://arxiv.org/abs/1901.02985 for details. more details | n/a | 82.1 | 98.8 | 87.6 | 93.8 | 61.4 | 64.4 | 71.2 | 77.6 | 80.9 | 94.1 | 72.7 | 96.0 | 87.8 | 72.8 | 96.5 | 78.2 | 90.9 | 88.4 | 69.0 | 77.6 |
LiteSeg-Darknet19 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0102 | 70.8 | 98.0 | 82.2 | 91.3 | 49.0 | 50.9 | 58.8 | 64.4 | 72.1 | 92.7 | 69.5 | 95.0 | 80.7 | 57.0 | 94.2 | 49.3 | 61.8 | 55.4 | 53.6 | 68.4
AdapNet++ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | In this work, we propose the AdapNet++ architecture for semantic segmentation that aims to achieve the right trade-off between performance and computational complexity of the model. AdapNet++ incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling (eASPP) module that has a larger effective receptive field with more than 10x fewer parameters compared to the standard ASPP, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance while simultaneously being efficient in terms of both the number of parameters and inference time. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 81.3 | 98.6 | 86.2 | 93.3 | 57.8 | 62.0 | 67.3 | 75.0 | 79.6 | 93.6 | 72.3 | 95.3 | 86.4 | 72.2 | 96.2 | 81.5 | 92.4 | 88.0 | 71.2 | 76.6 |
SSMA | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real-world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a mutimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed SSMA fusion mechanism which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. Extensive experimental evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance in addition to providing exceptional robustness in adverse perceptual conditions. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 82.3 | 98.7 | 86.9 | 93.6 | 57.9 | 63.4 | 68.9 | 77.1 | 81.1 | 93.9 | 73.1 | 95.3 | 87.4 | 73.8 | 96.4 | 81.1 | 93.5 | 90.0 | 73.5 | 78.3 |
LiteSeg-Mobilenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0062 | 67.8 | 97.1 | 77.8 | 90.4 | 44.2 | 47.5 | 55.3 | 59.3 | 68.7 | 92.0 | 69.2 | 94.6 | 78.6 | 55.3 | 92.0 | 42.9 | 55.8 | 54.8 | 49.2 | 64.0
LiteSeg-Shufflenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.007518 | 65.2 | 97.1 | 77.9 | 89.5 | 41.8 | 42.5 | 49.8 | 52.9 | 65.6 | 91.5 | 67.8 | 94.0 | 76.1 | 50.1 | 91.4 | 43.4 | 51.8 | 48.0 | 44.3 | 62.7
Fast OCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.1 | 98.7 | 87.3 | 93.7 | 59.9 | 62.6 | 70.5 | 78.4 | 81.0 | 93.8 | 73.4 | 95.6 | 87.6 | 75.1 | 96.4 | 75.8 | 90.1 | 88.9 | 71.9 | 78.6 | ||
ShuffleNet v2 + DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | An efficient solution for semantic segmentation: ShuffleNet V2 with atrous separable convolutions | Sercan Turkmen, Janne Heikkila | ShuffleNet v2 with DPC at output_stride 16. more details | n/a | 70.3 | 98.1 | 82.5 | 90.7 | 51.3 | 50.9 | 51.5 | 61.2 | 66.9 | 91.7 | 68.5 | 93.9 | 78.5 | 59.7 | 94.0 | 59.1 | 68.1 | 48.1 | 54.3 | 67.5 | |
ERSNet-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.012 | 67.6 | 97.9 | 81.3 | 89.8 | 45.0 | 43.9 | 54.4 | 56.5 | 63.3 | 91.5 | 68.0 | 94.5 | 75.1 | 53.6 | 92.9 | 50.2 | 66.9 | 55.0 | 44.4 | 60.3 | ||
MiniNet-v2-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 67.8 | 98.0 | 81.5 | 90.0 | 44.6 | 44.1 | 54.6 | 57.7 | 63.9 | 91.6 | 68.2 | 94.5 | 75.4 | 54.5 | 93.2 | 50.1 | 66.8 | 53.1 | 44.0 | 61.3 | ||
SwiftNetRN-18 ensemble | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | n/a | 76.5 | 98.5 | 85.4 | 92.5 | 49.1 | 53.9 | 64.2 | 71.1 | 76.3 | 93.1 | 71.0 | 95.5 | 84.8 | 67.5 | 95.6 | 67.5 | 80.5 | 68.3 | 63.5 | 74.2 |
EFC_sync | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 80.2 | 98.6 | 86.2 | 93.3 | 58.0 | 62.9 | 66.4 | 74.4 | 79.2 | 93.6 | 73.0 | 95.6 | 86.5 | 71.5 | 96.0 | 75.8 | 88.6 | 77.4 | 69.4 | 76.7 | ||
PL-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | Following "partial order pruning", we conducted architecture search experiments on the Snapdragon 845 platform and obtained PL1A/PL1A-Seg. 1. Snapdragon 845; 2. NCNN library; 3. latency evaluated at 640x384 more details | 0.0192 | 69.1 | 97.9 | 80.8 | 90.2 | 42.0 | 44.4 | 52.3 | 59.8 | 66.3 | 91.8 | 68.7 | 94.7 | 77.8 | 56.9 | 93.2 | 54.4 | 67.7 | 60.6 | 48.4 | 65.0
MiniNet-v2-pretrained | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 68.0 | 98.0 | 81.6 | 89.9 | 43.5 | 44.3 | 54.8 | 58.0 | 64.1 | 91.7 | 68.8 | 94.5 | 75.9 | 54.3 | 93.2 | 49.1 | 67.7 | 57.0 | 45.0 | 61.0 | ||
GALD-Net | yes | yes | yes | yes | yes | yes | yes | yes | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively according to the local information around the position. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research ) more details | n/a | 83.3 | 98.8 | 87.7 | 94.2 | 65.0 | 66.7 | 73.1 | 79.3 | 82.4 | 94.2 | 72.9 | 96.0 | 88.4 | 76.2 | 96.5 | 79.8 | 89.6 | 87.7 | 74.1 | 79.9 |
GALD-net | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose the Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively, according to the local information surrounding the position. more details | n/a | 83.1 | 98.8 | 87.6 | 94.2 | 64.6 | 66.5 | 72.8 | 79.0 | 82.2 | 94.2 | 73.1 | 96.0 | 88.3 | 75.2 | 96.5 | 79.3 | 89.7 | 87.5 | 73.8 | 80.0
ndnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.024 | 64.8 | 97.7 | 79.5 | 89.0 | 39.6 | 41.2 | 47.0 | 51.8 | 58.7 | 90.7 | 66.8 | 93.7 | 74.6 | 53.3 | 92.4 | 48.2 | 61.9 | 43.3 | 44.0 | 58.5 | ||
HRNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions | Ke Sun, Yang Zhao, Borui Jiang, Tianheng Cheng, Bin Xiao, Dong Liu, Yadong Mu, Xinggang Wang, Wenyu Liu, Jingdong Wang | The high-resolution network (HRNet) recently developed for human pose estimation, maintains high-resolution representations through the whole process by connecting high-to-low resolution convolutions in parallel and produces strong high-resolution representations by repeatedly conducting fusions across parallel convolutions. more details | n/a | 81.8 | 98.8 | 87.9 | 93.9 | 61.3 | 63.1 | 72.1 | 79.3 | 82.4 | 94.0 | 73.4 | 96.0 | 88.5 | 75.1 | 96.5 | 72.5 | 88.1 | 79.9 | 73.1 | 79.2 | |
SPGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SPGNet: Semantic Prediction Guidance for Scene Parsing | Bowen Cheng, Liang-Chieh Chen, Yunchao Wei, Yukun Zhu, Zilong Huang, Jinjun Xiong, Thomas Huang, Wen-Mei Hwu, Honghui Shi | ICCV 2019 | Multi-scale context module and single-stage encoder-decoder structure are commonly employed for semantic segmentation. The multi-scale context module refers to the operations to aggregate feature responses from a large spatial extent, while the single-stage encoder-decoder structure encodes the high-level semantic information in the encoder path and recovers the boundary information in the decoder path. In contrast, multi-stage encoder-decoder networks have been widely used in human pose estimation and show superior performance than their single-stage counterpart. However, few efforts have been attempted to bring this effective design to semantic segmentation. In this work, we propose a Semantic Prediction Guidance (SPG) module which learns to re-weight the local features through the guidance from pixel-wise semantic prediction. We find that by carefully re-weighting features across stages, a two-stage encoder-decoder network coupled with our proposed SPG module can significantly outperform its one-stage counterpart with similar parameters and computations. Finally, we report experimental results on the semantic segmentation benchmark Cityscapes, in which our SPGNet attains 81.1% on the test set using only 'fine' annotations. more details | n/a | 81.1 | 98.8 | 87.6 | 93.8 | 56.5 | 61.9 | 71.9 | 80.0 | 82.1 | 94.1 | 73.5 | 96.1 | 88.7 | 74.9 | 96.5 | 67.3 | 84.8 | 81.8 | 71.1 | 79.4 |
LDN-161 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-161 trained on train+val, fine labels only. Inference on multi-scale inputs. more details | 2.0 | 80.6 | 98.7 | 86.5 | 93.6 | 61.8 | 60.9 | 68.3 | 75.6 | 80.1 | 93.7 | 72.4 | 95.8 | 86.8 | 72.2 | 96.1 | 72.3 | 88.8 | 80.7 | 69.9 | 77.1 | |
GGCF | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 83.2 | 98.8 | 87.8 | 94.1 | 66.0 | 66.1 | 71.1 | 78.4 | 82.2 | 94.1 | 74.5 | 95.8 | 88.1 | 74.0 | 96.5 | 79.9 | 92.4 | 90.8 | 71.8 | 78.4 | ||
GFF-Net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | GFF: Gated Fully Fusion for Semantic Segmentation | Xiangtai Li, Houlong Zhao, Yunhai Tong, Kuiyuan Yang | We propose Gated Fully Fusion (GFF) to fuse features from multiple levels through gates in a fully connected way. Specifically, features at each level are enhanced by higher-level features with stronger semantics and lower-level features with more details, and gates are used to control the passing of useful information, which significantly reduces noise propagation during fusion. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research) more details | n/a | 82.3 | 98.7 | 87.2 | 93.9 | 59.6 | 64.3 | 71.5 | 78.3 | 82.2 | 94.0 | 72.6 | 95.9 | 88.2 | 73.9 | 96.5 | 79.8 | 92.2 | 84.7 | 71.5 | 78.8 |
Gated-SCNN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Gated-SCNN: Gated Shape CNNs for Semantic Segmentation | Towaki Takikawa, David Acuna, Varun Jampani, Sanja Fidler | more details | n/a | 82.8 | 98.7 | 87.4 | 94.2 | 61.9 | 64.6 | 72.9 | 79.6 | 82.5 | 94.3 | 74.3 | 96.2 | 88.3 | 74.2 | 96.6 | 77.2 | 90.1 | 87.7 | 72.6 | 79.4 | |
ESPNetv2 | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network | Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi | CVPR 2019 | We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2 more details | n/a | 66.2 | 97.3 | 78.6 | 88.8 | 43.5 | 42.1 | 49.3 | 52.6 | 60.0 | 90.5 | 66.8 | 93.3 | 72.9 | 53.1 | 91.8 | 53.0 | 65.9 | 53.2 | 44.2 | 59.9 |
MRFM | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Multi Receptive Field Network for Semantic Segmentation | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | WACV2020 | Semantic segmentation is one of the key tasks in computer vision, which is to assign a category label to each pixel in an image. Despite significant progress achieved recently, most existing methods still suffer from two challenging issues: 1) the size of objects and stuff in an image can be very diverse, demanding for incorporating multi-scale features into the fully convolutional networks (FCNs); 2) the pixels close to or at the boundaries of object/stuff are hard to classify due to the intrinsic weakness of convolutional networks. To address the first issue, we propose a new Multi-Receptive Field Module (MRFM), explicitly taking multi-scale features into account. For the second issue, we design an edge-aware loss which is effective in distinguishing the boundaries of object/stuff. With these two designs, our Multi Receptive Field Network achieves new state-of-the-art results on two widely-used semantic segmentation benchmark datasets. Specifically, we achieve a mean IoU of 83.0% on the Cityscapes dataset and 88.4% mean IoU on the Pascal VOC2012 dataset. more details | n/a | 83.0 | 98.8 | 88.0 | 94.2 | 63.8 | 64.7 | 72.2 | 78.3 | 81.8 | 94.2 | 73.9 | 95.7 | 88.3 | 74.6 | 96.4 | 79.5 | 92.2 | 88.1 | 72.8 | 78.6
DGCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Dual Graph Convolutional Network for Semantic Segmentation | Li Zhang*, Xiangtai Li*, Anurag Arnab, Kuiyuan Yang, Yunhai Tong, Philip H.S. Torr | BMVC 2019 | We propose Dual Graph Convolutional Network (DGCNet) models the global context of the input feature by modelling two orthogonal graphs in a single framework. (Joint work: University of Oxford, Peking University and DeepMotion AI Research) more details | n/a | 82.0 | 98.7 | 87.4 | 93.9 | 62.4 | 63.4 | 70.9 | 78.7 | 81.3 | 94.0 | 73.3 | 95.8 | 87.8 | 73.7 | 96.4 | 76.0 | 91.6 | 81.6 | 71.5 | 78.7 |
dpcan_trainval_os16_225 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.6 | 98.7 | 86.9 | 93.5 | 56.6 | 61.5 | 69.9 | 76.8 | 80.9 | 93.8 | 71.5 | 95.7 | 87.8 | 73.0 | 96.3 | 77.6 | 90.6 | 89.6 | 71.2 | 78.1 | ||
Learnable Tree Filter | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Learnable Tree Filter for Structure-preserving Feature Transform | Lin Song; Yanwei Li; Zeming Li; Gang Yu; Hongbin Sun; Jian Sun; Nanning Zheng | NeurIPS 2019 | Learnable Tree Filter for Structure-preserving Feature Transform more details | n/a | 80.8 | 98.7 | 86.8 | 93.4 | 53.6 | 60.2 | 69.6 | 77.1 | 81.1 | 93.7 | 71.7 | 95.8 | 87.5 | 72.2 | 96.2 | 72.7 | 87.0 | 86.4 | 72.6 | 78.4 |
FreeNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 65.8 | 96.9 | 77.3 | 89.3 | 41.9 | 46.4 | 47.9 | 53.0 | 63.2 | 91.5 | 69.0 | 93.5 | 77.1 | 54.3 | 93.3 | 43.6 | 59.8 | 46.7 | 45.7 | 60.8 | ||
HRNetV2 + OCR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions; OCNet: Object Context Network for Scene Parsing | HRNet Team; OCR Team | HRNetV2W48 + OCR. OCR is an extension of object context networks https://arxiv.org/pdf/1809.00916.pdf more details | n/a | 83.3 | 98.8 | 88.2 | 94.2 | 67.6 | 65.3 | 72.2 | 79.1 | 82.4 | 94.1 | 73.8 | 96.0 | 88.1 | 75.0 | 96.4 | 76.9 | 92.3 | 90.9 | 72.8 | 78.9 | |
Valeo DAR Germany | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Valeo DAR Germany, New Algo Lab more details | n/a | 82.8 | 98.7 | 86.9 | 93.8 | 58.6 | 63.5 | 71.0 | 78.2 | 82.2 | 94.0 | 73.2 | 95.4 | 88.5 | 74.5 | 96.5 | 81.2 | 93.7 | 90.5 | 74.2 | 79.1 | ||
GLNet_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | The proposed network architecture combines spatial information with multi-scale context information, and repairs the boundaries and details of the segmented objects through channel attention modules. (Uses the train-fine and the val-fine data.) more details | n/a | 80.8 | 98.7 | 86.7 | 93.4 | 56.9 | 60.5 | 68.3 | 75.5 | 79.8 | 93.7 | 72.6 | 95.9 | 87.0 | 71.6 | 96.0 | 73.5 | 90.5 | 85.7 | 71.1 | 77.4 | |
MCDN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.1 | 98.7 | 87.0 | 93.5 | 60.8 | 65.7 | 69.8 | 77.1 | 80.8 | 93.8 | 72.4 | 95.8 | 87.1 | 71.9 | 96.2 | 77.6 | 88.7 | 77.4 | 70.4 | 76.6 | ||
AAF+GLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 78.2 | 98.4 | 84.8 | 92.8 | 50.8 | 57.8 | 68.0 | 74.9 | 78.5 | 93.6 | 71.5 | 95.6 | 86.0 | 68.8 | 95.8 | 71.3 | 81.8 | 74.0 | 66.7 | 75.4 | ||
HRNetV2 + OCR (w/ ASP) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | openseg-group (OCR team + HRNet team) | Our approach is based on a single HRNet48V2 and an OCR module combined with ASPP. We apply depth based multi-scale ensemble weights during testing (provided by DeepMotion AI Research) . more details | n/a | 83.7 | 98.8 | 88.3 | 94.3 | 66.9 | 66.7 | 73.3 | 80.2 | 83.0 | 94.2 | 74.1 | 96.0 | 88.5 | 75.8 | 96.5 | 78.5 | 91.8 | 90.1 | 73.4 | 79.3 | ||
CASIA_IVA_DRANet-101_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.9 | 98.8 | 87.6 | 94.1 | 61.7 | 62.7 | 72.9 | 80.0 | 83.0 | 94.2 | 73.8 | 96.0 | 88.8 | 76.1 | 96.6 | 76.5 | 89.8 | 88.0 | 73.8 | 80.0 | ||
Hyundai Mobis AD Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Hyundai Mobis AD Lab, DL-DB Group, AA (Automated Annotator) Team | more details | n/a | 83.8 | 98.9 | 88.4 | 94.3 | 65.2 | 65.9 | 72.8 | 79.5 | 83.0 | 94.3 | 74.3 | 96.1 | 88.6 | 75.9 | 96.6 | 79.3 | 93.8 | 91.5 | 74.8 | 79.7 | ||
EFRNet-13 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0146 | 72.8 | 98.2 | 83.0 | 90.8 | 56.5 | 51.4 | 54.9 | 60.4 | 66.4 | 92.1 | 69.6 | 94.7 | 78.6 | 59.4 | 93.5 | 62.8 | 79.2 | 72.7 | 53.6 | 65.9 | ||
FarSee-Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale Context Aggregation and Feature Space Super-resolution | Zhanpeng Zhang and Kaipeng Zhang | IEEE International Conference on Robotics and Automation (ICRA) 2020 | FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale Context Aggregation and Feature Space Super-resolution. Real-time semantic segmentation is desirable in many robotic applications with limited computation resources. One challenge of semantic segmentation is to deal with the object scale variations and leverage the context. How to perform multi-scale context aggregation within a limited computation budget is important. In this paper, firstly, we introduce a novel and efficient module called Cascaded Factorized Atrous Spatial Pyramid Pooling (CF-ASPP). It is a lightweight cascaded structure for Convolutional Neural Networks (CNNs) to efficiently leverage context information. On the other hand, for runtime efficiency, state-of-the-art methods will quickly decrease the spatial size of the inputs or feature maps in the early network stages. The final high-resolution result is usually obtained by a non-parametric up-sampling operation (e.g. bilinear interpolation). Differently, we rethink this pipeline and treat it as a super-resolution process. We use an optimized super-resolution operation in the up-sampling step and improve the accuracy, especially in the sub-sampled input image scenario for real-time applications. By fusing the above two improvements, our method provides a better latency-accuracy trade-off than the other state-of-the-art methods. In particular, we achieve 68.4% mIoU at 84 fps on the Cityscapes test set with a single Nvidia Titan X (Maxwell) GPU card. The proposed module can be plugged into any feature extraction CNN and benefits from the CNN structure development. more details | 0.0119 | 68.4 | 97.9 | 81.4 | 89.9 | 38.6 | 43.6 | 53.2 | 58.8 | 64.4 | 91.0 | 67.7 | 94.0 | 75.9 | 57.3 | 93.2 | 55.9 | 67.8 | 55.1 | 49.4 | 63.9
C3Net [2,3,7,13] | no | no | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | C3: Concentrated-Comprehensive Convolution and its application to semantic segmentation | Hyojin Park, Youngjoon Yoo, Geonseok Seo, Dongyoon Han, Sangdoo Yun, Nojun Kwak | more details | n/a | 64.8 | 97.2 | 79.0 | 88.7 | 40.9 | 43.6 | 52.8 | 54.8 | 61.3 | 90.8 | 67.6 | 93.5 | 72.3 | 50.8 | 90.7 | 47.3 | 56.1 | 43.6 | 41.9 | 58.0 | |
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. more details | n/a | 79.4 | 98.7 | 87.2 | 93.6 | 57.7 | 60.8 | 70.8 | 78.0 | 81.2 | 93.8 | 74.1 | 95.7 | 88.2 | 76.4 | 96.0 | 55.3 | 75.1 | 79.6 | 72.1 | 74.0 | |
EKENet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0229 | 74.3 | 98.2 | 83.5 | 91.1 | 59.5 | 53.5 | 55.3 | 61.1 | 66.9 | 92.2 | 70.3 | 94.7 | 78.9 | 59.8 | 93.7 | 64.3 | 85.5 | 81.0 | 55.6 | 66.3 | ||
SPSSN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Stage Pooling Semantic Segmentation Network more details | n/a | 69.4 | 97.7 | 80.8 | 89.8 | 43.9 | 46.5 | 53.1 | 58.8 | 64.7 | 91.5 | 68.7 | 94.2 | 76.2 | 59.0 | 92.7 | 53.5 | 71.0 | 59.2 | 52.9 | 63.8 | ||
FC-HarDNet-70 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | HarDNet: A Low Memory Traffic Network | Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, Youn-Long Lin | ICCV 2019 | Fully Convolutional Harmonic DenseNet 70. U-shape encoder-decoder structure with HarDNet blocks. Trained with single-scale loss at stride-4. Validation mIoU = 77.7. more details | 0.015 | 75.9 | 98.5 | 85.5 | 92.5 | 49.0 | 54.4 | 64.0 | 71.5 | 75.6 | 93.0 | 70.6 | 95.4 | 84.5 | 67.4 | 95.7 | 67.7 | 79.0 | 63.6 | 60.7 | 72.7
BFP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Boundary-Aware Feature Propagation for Scene Segmentation | Henghui Ding, Xudong Jiang, Ai Qun Liu, Nadia Magnenat Thalmann, and Gang Wang | IEEE International Conference on Computer Vision (ICCV), 2019 | Boundary-Aware Feature Propagation for Scene Segmentation more details | n/a | 81.4 | 98.7 | 87.0 | 93.5 | 59.8 | 63.4 | 68.9 | 76.8 | 80.9 | 93.7 | 72.8 | 95.5 | 87.0 | 72.1 | 96.0 | 77.6 | 89.0 | 86.9 | 69.2 | 77.6 |
FasterSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | FasterSeg: Searching for Faster Real-time Semantic Segmentation | Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang | ICLR 2020 | We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to "collapsing" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student model's accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy. more details | 0.00613 | 71.5 | 98.0 | 83.5 | 91.1 | 39.1 | 48.7 | 58.6 | 66.7 | 71.6 | 92.3 | 69.1 | 94.5 | 81.5 | 61.8 | 93.7 | 55.0 | 67.1 | 61.1 | 55.0 | 69.2 |
VCD-NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.3 | 98.8 | 88.0 | 93.8 | 56.9 | 61.9 | 72.9 | 80.0 | 82.6 | 94.1 | 73.0 | 95.9 | 88.6 | 76.1 | 96.5 | 75.5 | 88.6 | 87.8 | 73.4 | 79.6 | ||
NAVINFO_DLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pengfei Zhang | Weighted ASPP + OHEM + hard region refinement. more details | n/a | 83.8 | 98.8 | 87.9 | 94.2 | 65.2 | 63.5 | 73.4 | 80.2 | 83.0 | 94.1 | 73.5 | 95.9 | 89.0 | 76.8 | 96.7 | 82.1 | 93.7 | 90.5 | 74.1 | 80.0 | |
LBPSS | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | CVPR 2020 submission #5455 more details | 0.9 | 61.0 | 97.0 | 76.9 | 87.4 | 31.3 | 38.0 | 53.6 | 53.8 | 60.9 | 90.4 | 65.9 | 93.1 | 70.3 | 43.3 | 90.9 | 31.6 | 50.3 | 33.9 | 31.8 | 58.7 | ||
KANet_Res101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.8 | 98.6 | 86.4 | 93.6 | 61.9 | 63.5 | 69.6 | 77.5 | 81.0 | 93.7 | 72.8 | 95.7 | 87.5 | 74.7 | 96.2 | 75.6 | 88.2 | 87.2 | 72.7 | 78.4 | ||
Learnable Tree Filter V2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Rethinking Learnable Tree Filter for Generic Feature Transform | Lin Song, Yanwei Li, Zhengkai Jiang, Zeming Li, Xiangyu Zhang, Hongbin Sun, Jian Sun, Nanning Zheng | NeurIPS 2020 | Based on ResNet-101 backbone and FPN architecture. more details | n/a | 82.1 | 98.7 | 86.5 | 93.6 | 57.5 | 61.2 | 71.9 | 79.6 | 82.7 | 94.0 | 72.5 | 95.9 | 88.5 | 75.3 | 96.6 | 76.2 | 88.4 | 87.9 | 74.5 | 79.6 |
GPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.1 | 98.9 | 88.1 | 93.9 | 61.5 | 63.0 | 71.5 | 79.0 | 81.4 | 94.0 | 73.2 | 95.8 | 88.1 | 74.9 | 96.5 | 75.5 | 91.2 | 84.1 | 70.7 | 78.8 | ||
FTFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | An Efficient Network Focused on Tiny Feature Maps for Real-Time Semantic Segmentation more details | 0.0088 | 72.4 | 98.2 | 83.3 | 91.0 | 47.9 | 47.8 | 55.4 | 63.0 | 68.7 | 92.3 | 69.5 | 94.7 | 80.4 | 61.6 | 94.5 | 62.5 | 75.9 | 63.0 | 56.5 | 68.6 | ||
iFLYTEK-CV | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 84.4 | 98.9 | 88.5 | 94.4 | 69.0 | 66.9 | 73.1 | 79.7 | 83.3 | 94.4 | 74.3 | 96.0 | 88.8 | 76.3 | 96.7 | 84.0 | 94.3 | 91.7 | 74.7 | 79.3 | ||
F2MF-short | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 3 timesteps into the future. more details | n/a | 70.2 | 97.3 | 78.7 | 89.0 | 54.0 | 52.1 | 46.0 | 58.0 | 61.9 | 89.6 | 66.2 | 91.6 | 65.9 | 54.2 | 90.0 | 66.5 | 80.7 | 77.0 | 54.0 | 61.3 |
HPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | High-Order Paired-ASPP Networks for Semantic Segmentation | Yu Zhang, Xin Sun, Junyu Dong, Changrui Chen, Yue Shen | more details | n/a | 81.6 | 98.7 | 87.2 | 93.6 | 62.7 | 63.2 | 68.5 | 76.6 | 80.3 | 93.9 | 73.3 | 95.7 | 87.2 | 73.1 | 96.1 | 76.9 | 89.5 | 86.7 | 70.1 | 77.4 | |
HANet (fine-train only) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | TBA | Anonymous | We use only fine-training data. more details | n/a | 80.9 | 98.7 | 87.2 | 93.6 | 62.3 | 62.2 | 67.6 | 75.2 | 79.2 | 93.8 | 73.1 | 95.8 | 87.1 | 71.7 | 96.2 | 72.7 | 88.6 | 85.7 | 69.0 | 76.7 | |
F2MF-mid | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 9 timesteps into the future. more details | n/a | 59.1 | 95.1 | 69.2 | 83.5 | 47.2 | 43.8 | 22.9 | 41.8 | 41.3 | 84.2 | 58.5 | 86.0 | 46.7 | 33.9 | 80.3 | 53.8 | 72.5 | 79.0 | 39.7 | 44.3 |
EMANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Expectation Maximization Attention Networks for Semantic Segmentation | Xia Li, Zhisheng Zhong, Jianlong Wu, Yibo Yang, Zhouchen Lin, Hong Liu | ICCV 2019 | more details | n/a | 81.9 | 98.7 | 87.3 | 93.8 | 63.4 | 62.3 | 70.0 | 77.9 | 80.7 | 93.9 | 73.6 | 95.7 | 87.8 | 74.5 | 96.2 | 75.5 | 90.2 | 84.5 | 71.5 | 78.7 |
PartnerNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | PartnerNet: A Lightweight and Efficient Partner Network for Semantic Segmentation more details | 0.0058 | 72.4 | 98.1 | 83.0 | 91.2 | 47.7 | 52.0 | 59.2 | 67.2 | 70.4 | 92.5 | 70.2 | 94.6 | 82.2 | 63.5 | 94.4 | 53.4 | 67.7 | 59.8 | 58.3 | 69.8 | |
SwiftNet RN18 pyr sepBN MVD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient semantic segmentation with pyramidal fusion | M Oršić, S Šegvić | Pattern Recognition 2020 | more details | 0.029 | 76.4 | 98.5 | 85.3 | 92.7 | 53.4 | 55.2 | 66.1 | 73.1 | 77.1 | 93.4 | 70.4 | 95.8 | 83.9 | 65.7 | 95.4 | 64.6 | 78.6 | 66.0 | 63.0 | 73.2 |
Tencent YYB VisualAlgo | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Tencent YYB VisualAlgo Group more details | n/a | 83.6 | 98.8 | 88.1 | 94.2 | 64.1 | 65.0 | 72.1 | 78.8 | 82.7 | 94.2 | 73.6 | 96.1 | 88.2 | 75.8 | 96.5 | 82.1 | 94.3 | 91.9 | 73.9 | 79.0 | ||
MoKu Lab | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Alibaba, MoKu AI Lab, CV Group more details | n/a | 84.3 | 98.9 | 88.7 | 94.4 | 65.7 | 68.4 | 73.9 | 80.3 | 83.8 | 94.4 | 74.5 | 96.2 | 88.8 | 76.2 | 96.7 | 81.3 | 93.1 | 91.6 | 74.8 | 80.1 | ||
HRNetV2 + OCR + SegFix | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Object-Contextual Representations for Semantic Segmentation | Yuhui Yuan, Xilin Chen, Jingdong Wang | First, we pre-train "HRNet+OCR" method on the Mapillary training set (achieves 50.8% on the Mapillary val set). Second, we fine-tune the model with the Cityscapes training, validation and coarse set. Finally, we apply the "SegFix" scheme to further improve the results. more details | n/a | 84.5 | 98.9 | 88.3 | 94.4 | 68.0 | 67.8 | 73.6 | 80.6 | 83.9 | 94.3 | 74.5 | 96.1 | 89.2 | 75.9 | 96.8 | 83.6 | 94.2 | 91.3 | 74.0 | 80.1 | |
DecoupleSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improving Semantic Segmentation via Decoupled Body and Edge Supervision | Xiangtai Li, Xia Li, Li Zhang, Guangliang Cheng, Jianping Shi, Zhouchen Lin, Shaohua Tan, and Yunhai Tong | ECCV-2020 | In this paper, we propose a new paradigm for semantic segmentation. Our insight is that appealing performance of semantic segmentation requires explicitly modeling the object body and edge, which correspond to the high and low frequency of the image. To do so, we first warp the image feature by learning a flow field to make the object part more consistent. The resulting body feature and the residual edge feature are further optimized under decoupled supervision by explicitly sampling different parts (body or edge) pixels. The code and models have been released. more details | n/a | 83.7 | 98.8 | 87.8 | 94.4 | 66.1 | 64.8 | 72.3 | 78.8 | 82.6 | 94.2 | 74.0 | 96.1 | 88.7 | 75.9 | 96.6 | 80.2 | 93.8 | 91.6 | 74.3 | 79.5
LGE A&B Center: HANet (ResNet-101) | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val", No coarse, Backbone: ImageNet pretrained ResNet-101 more details | n/a | 82.1 | 98.8 | 88.0 | 93.9 | 60.5 | 63.3 | 71.3 | 78.1 | 81.3 | 94.0 | 72.9 | 96.1 | 87.9 | 74.5 | 96.5 | 77.0 | 88.0 | 85.9 | 72.7 | 79.0 |
DCNAS | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation | Xiong Zhang, Hongmin Xu, Hong Mo, Jianchao Tan, Cheng Yang, Wenqi Ren | Neural Architecture Search (NAS) has shown great potential in automatically designing scalable network architectures for dense image predictions. However, existing NAS algorithms usually compromise on a restricted search space and search on a proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between target and proxy dataset, we propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module to reduce the memory consumption of ample search space. more details | n/a | 83.6 | 98.8 | 88.0 | 94.2 | 66.0 | 66.1 | 72.2 | 78.7 | 82.7 | 94.2 | 73.9 | 94.0 | 88.2 | 75.1 | 96.5 | 82.6 | 94.1 | 90.9 | 73.6 | 79.3 |
GPNet-ResNet101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.5 | 98.8 | 87.9 | 93.9 | 61.9 | 63.3 | 70.4 | 78.9 | 81.8 | 94.0 | 72.5 | 95.9 | 88.3 | 74.9 | 96.5 | 80.5 | 91.1 | 85.5 | 72.1 | 78.6 | ||
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 79.9 | 98.7 | 87.0 | 93.5 | 57.9 | 60.4 | 70.9 | 77.9 | 81.4 | 93.7 | 72.8 | 95.6 | 87.9 | 75.3 | 96.1 | 65.8 | 80.5 | 78.7 | 72.8 | 70.8 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 83.1 | 98.8 | 87.8 | 94.2 | 59.8 | 68.1 | 73.4 | 79.5 | 82.6 | 93.9 | 72.7 | 96.1 | 88.9 | 77.5 | 96.5 | 76.9 | 91.2 | 92.6 | 74.7 | 74.5 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 79.5 | 98.6 | 86.5 | 93.4 | 52.0 | 61.3 | 70.2 | 77.5 | 81.1 | 93.5 | 72.3 | 95.7 | 87.9 | 75.6 | 96.0 | 68.5 | 81.4 | 77.1 | 71.0 | 70.7 |
LGE A&B Center: HANet (ResNext-101) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val + coarse", Backbone: Mapillary pretrained ResNext-101 more details | n/a | 83.2 | 98.8 | 88.0 | 94.2 | 66.6 | 64.8 | 72.0 | 78.2 | 81.4 | 94.2 | 74.5 | 96.1 | 88.1 | 75.6 | 96.5 | 80.3 | 93.2 | 86.6 | 72.5 | 78.7 |
ERINet-v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Residual Inception Network | Minjong Kim, Suyoung Chi | ongoing | more details | 0.00526316 | 67.4 | 97.7 | 80.6 | 89.6 | 41.7 | 45.0 | 52.2 | 55.9 | 62.9 | 91.5 | 69.1 | 93.9 | 75.9 | 53.7 | 92.3 | 41.8 | 66.3 | 64.2 | 44.3 | 62.1
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated for several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 85.2 | 98.8 | 88.3 | 94.6 | 65.3 | 69.6 | 75.2 | 80.9 | 84.4 | 94.3 | 74.5 | 96.2 | 90.0 | 79.7 | 96.7 | 83.0 | 95.6 | 93.4 | 78.4 | 79.6 | |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 84.1 | 98.9 | 88.3 | 94.5 | 69.1 | 67.8 | 74.5 | 80.3 | 83.9 | 94.1 | 72.8 | 96.1 | 89.3 | 78.2 | 96.4 | 76.4 | 93.0 | 91.3 | 75.7 | 76.9 |
TUE-5LSM0-g23 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | DeepLabv3+ decoder more details | n/a | 65.8 | 95.5 | 77.7 | 85.9 | 45.7 | 48.3 | 43.9 | 53.2 | 57.5 | 89.2 | 65.3 | 91.1 | 70.7 | 54.2 | 91.3 | 48.1 | 67.5 | 53.5 | 51.8 | 60.4 | |
PBRNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | modified MobileNetV2 backbone + Prediction and Boundary attention-based Refinement Module (PBRM) more details | 0.0107 | 72.4 | 98.0 | 81.8 | 91.3 | 46.3 | 51.5 | 58.5 | 69.1 | 74.1 | 92.9 | 71.0 | 94.7 | 82.6 | 64.7 | 94.5 | 57.7 | 62.5 | 49.0 | 61.9 | 72.9 | ||
ResNeSt200 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ResNeSt: Split-Attention Networks | Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola | DeepLabV3+ network with ResNeSt200 backbone. more details | n/a | 83.3 | 98.9 | 88.4 | 94.4 | 66.0 | 66.0 | 72.5 | 78.6 | 82.5 | 94.2 | 72.9 | 96.3 | 88.4 | 74.8 | 96.6 | 77.0 | 92.3 | 90.0 | 73.2 | 79.1 | |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 84.5 | 98.8 | 88.4 | 94.4 | 64.3 | 68.3 | 75.3 | 81.0 | 84.2 | 94.2 | 73.7 | 96.1 | 89.7 | 78.6 | 96.7 | 82.2 | 93.7 | 90.2 | 76.4 | 79.8 | |
EaNet-V1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Parsing Very High Resolution Urban Scene Images by Learning Deep ConvNets with Edge-Aware Loss | Xianwei Zheng, Linxi Huan, Gui-Song Xia, Jianya Gong | Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task necessary for interpreting and understanding urban scenes. However, due to the huge quantity of details contained in an image and the large variations of objects in scale and appearance, the existing semantic segmentation methods often break one object into pieces, or confuse adjacent objects and thus fail to depict these objects consistently. To address this issue, we propose a concise and effective edge-aware neural network (EaNet) for urban scene semantic segmentation. The proposed EaNet model is deployed as a standard balanced encoder-decoder framework. Specifically, we devised two plug-and-play modules that append on top of the encoder and decoder respectively, i.e., the large kernel pyramid pooling (LKPP) and the edge-aware loss (EA loss) function, to extend the model ability in learning discriminating features. The LKPP module captures rich multi-scale context with strong continuous feature relations to promote coherent labeling of multi-scale urban objects. The EA loss module learns edge information directly from semantic segmentation prediction, which avoids costly post-processing or extra edge detection. During training, EA loss imposes a strong geometric awareness to guide object structure learning at both the pixel- and image-level, and thus effectively separates confusing objects with sharp contours. more details | n/a | 81.7 | 98.8 | 87.2 | 93.8 | 67.3 | 64.1 | 67.5 | 75.6 | 79.9 | 93.8 | 72.4 | 95.7 | 86.9 | 73.4 | 96.0 | 80.4 | 89.4 | 80.5 | 71.7 | 77.6 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 84.2 | 98.8 | 88.2 | 94.3 | 67.6 | 67.7 | 73.4 | 80.2 | 83.3 | 94.3 | 74.4 | 96.0 | 88.7 | 75.3 | 96.6 | 83.5 | 94.0 | 91.1 | 73.5 | 79.7 | |
FSFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Accelerator-Aware Fast Spatial Feature Network for Real-Time Semantic Segmentation | Minjong Kim, Byungjae Park, Suyoung Chi | IEEE Access | Semantic segmentation is performed to understand an image at the pixel level; it is widely used in the field of autonomous driving. In recent years, deep neural networks have achieved good accuracy; however, there exist few models that have a good trade-off between high accuracy and low inference time. In this paper, we propose a fast spatial feature network (FSFNet), an optimized lightweight semantic segmentation model using an accelerator, offering high performance as well as faster inference speed than current methods. FSFNet employs the FSF and MRA modules. The FSF module has three different types of subset modules to extract spatial features efficiently. They are designed in consideration of the size of the spatial domain. The multi-resolution aggregation module combines features that are extracted at different resolutions to reconstruct the segmentation image accurately. Our approach is able to run at over 203 FPS at full resolution (1024 x 2048) on a single NVIDIA 1080Ti GPU, and obtains a result of 69.13% mIoU on the Cityscapes test dataset. Compared with existing models in real-time semantic segmentation, our proposed model retains remarkable accuracy while having high FPS that is over 30% faster than the state-of-the-art model. The experimental results proved that our model is an ideal approach for the Cityscapes dataset. more details | 0.0049261 | 69.1 | 97.7 | 81.2 | 90.2 | 41.8 | 47.1 | 54.2 | 61.1 | 65.4 | 91.9 | 69.4 | 94.2 | 77.9 | 57.9 | 92.9 | 47.4 | 64.4 | 59.4 | 53.2 | 66.3
Hierarchical Multi-Scale Attention for Semantic Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Hierarchical Multi-Scale Attention for Semantic Segmentation | Andrew Tao, Karan Sapra, Bryan Catanzaro | Multi-scale inference is commonly used to improve the results of semantic segmentation. Multiple image scales are passed through a network and then the results are combined with averaging or max pooling. In this work, we present an attention-based approach to combining multi-scale predictions. We show that predictions at certain scales are better at resolving particular failure modes and that the network learns to favor those scales for such cases in order to generate better predictions. Our attention mechanism is hierarchical, which enables it to be roughly 4x more memory efficient to train than other recent approaches. In addition to enabling faster training, this allows us to train with larger crop sizes which leads to greater model accuracy. We demonstrate the result of our method on two datasets: Cityscapes and Mapillary Vistas. For Cityscapes, which has a large number of weakly labelled images, we also leverage auto-labelling to improve generalization. Using our approach we achieve new state-of-the-art results in both Mapillary (61.1 IOU val) and Cityscapes (85.4 IOU test). more details | n/a | 85.4 | 99.0 | 89.4 | 94.9 | 71.8 | 68.4 | 75.9 | 82.2 | 85.3 | 94.5 | 75.0 | 96.3 | 90.1 | 79.7 | 97.0 | 82.6 | 94.6 | 87.8 | 77.2 | 81.7 |
SANet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 25.0 | 80.9 | 98.7 | 87.1 | 93.6 | 61.6 | 62.4 | 68.1 | 75.9 | 79.5 | 93.8 | 73.1 | 95.8 | 87.3 | 71.5 | 96.2 | 71.9 | 88.1 | 86.1 | 69.4 | 77.2 | ||
SJTU_hpm | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu*, Haohua Zhao, and Liqing Zhang | more details | n/a | 81.3 | 98.7 | 87.3 | 93.6 | 61.7 | 61.9 | 69.4 | 75.5 | 80.7 | 93.8 | 74.4 | 95.9 | 86.7 | 72.7 | 96.2 | 75.0 | 91.3 | 82.9 | 69.5 | 76.5 | |
FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FANet: Feature Aggregation Network for Semantic Segmentation | Tanmay Singha, Duc-Son Pham, and Aneesh Krishna | Feature Aggregation Network for Semantic Segmentation more details | n/a | 64.1 | 96.7 | 75.3 | 88.2 | 35.4 | 37.8 | 45.7 | 51.3 | 57.4 | 90.4 | 64.3 | 93.0 | 71.8 | 50.4 | 91.6 | 48.9 | 62.0 | 52.0 | 46.3 | 59.0 | |
Hard Pixel Mining for Depth Privileged Semantic Segmentation | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu, Haohua Zhao, and Liqing Zhang | Semantic segmentation has achieved remarkable progress but remains challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as a depth map to help RGB based semantic segmentation because the depth map could provide complementary geometric cues. However, due to the inaccessibility of depth sensors, depth information is usually unavailable for the test images. In this paper, we leverage only the depth of training images as the privileged information to mine the hard pixels in semantic segmentation, in which depth information is only available for training images but not available for test images. Specifically, we propose a novel Loss Weight Module, which outputs a loss weight map by employing two depth-related measurements of hard pixels: Depth Prediction Error and Depth-aware Segmentation Error. The loss weight map is then applied to segmentation loss, with the goal of learning a more robust model by paying more attention to the hard pixels. Besides, we also explore a curriculum learning strategy based on the loss weight map. Meanwhile, to fully mine the hard pixels on different scales, we apply our loss weight module to multi-scale side outputs. Our hard pixel mining method achieves the state-of-the-art results on three benchmark datasets, and even outperforms the methods which need depth input during testing. more details | n/a | 83.4 | 98.8 | 87.8 | 94.3 | 65.6 | 65.2 | 72.9 | 79.5 | 82.3 | 94.3 | 74.4 | 96.1 | 88.6 | 75.8 | 96.6 | 77.9 | 93.2 | 88.8 | 73.0 | 79.2 |
MSeg1080_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | MSeg: A Composite Dataset for Multi-domain Semantic Segmentation | John Lambert*, Zhuang Liu*, Ozan Sener, James Hays, Vladlen Koltun | CVPR 2020 | We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images, requiring more than 1.34 years of collective annotator effort. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model’s robustness and show that MSeg training yields substantially more robust models in comparison to training on individual datasets or naive mixing of datasets without the presented contributions. more details | 0.49 | 80.7 | 98.7 | 86.9 | 93.8 | 64.9 | 66.1 | 69.3 | 76.6 | 80.3 | 94.0 | 74.0 | 95.9 | 87.3 | 70.6 | 96.2 | 77.2 | 84.9 | 71.9 | 69.8 | 75.6 |
SA-Gate (ResNet-101,OS=16) | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation | Xiaokang Chen, Kwan-Yee Lin, Jingbo Wang, Wayne Wu, Chen Qian, Hongsheng Li, and Gang Zeng | European Conference on Computer Vision (ECCV), 2020 | RGB+HHA input, input resolution = 800x800, output stride = 16, training 240 epochs, no coarse data is used. more details | n/a | 82.8 | 98.7 | 87.3 | 93.9 | 63.8 | 62.7 | 70.8 | 77.9 | 82.2 | 93.9 | 72.8 | 95.9 | 88.2 | 75.2 | 96.5 | 80.4 | 91.6 | 89.0 | 73.2 | 78.9 |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation. ResNet101, pretrained on ImageNet; supplied with an altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 67.0 | 87.7 | 73.0 | 90.3 | 47.7 | 49.9 | 64.1 | 71.2 | 71.1 | 92.4 | 52.7 | 94.8 | 79.3 | 45.0 | 87.5 | 53.4 | 69.3 | 36.7 | 46.9 | 59.3
HRNet + LKPP + EA loss | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.6 | 98.8 | 87.5 | 93.8 | 67.3 | 63.7 | 68.5 | 75.9 | 80.5 | 93.7 | 71.9 | 95.6 | 87.2 | 73.6 | 96.4 | 83.0 | 92.7 | 88.5 | 72.8 | 77.7 | ||
SN_RN152pyrx8_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 1.0 | 74.7 | 98.4 | 84.8 | 92.6 | 56.0 | 53.6 | 61.1 | 70.3 | 74.2 | 93.1 | 71.1 | 95.5 | 82.8 | 63.1 | 95.3 | 66.0 | 72.0 | 53.7 | 63.1 | 71.6 |
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 63.2 | 97.5 | 77.4 | 89.8 | 40.4 | 38.4 | 49.0 | 47.4 | 61.8 | 91.2 | 61.9 | 94.7 | 73.2 | 28.9 | 93.3 | 54.2 | 64.1 | 42.0 | 35.0 | 60.0 | |
AttaNet_light | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | AttaNet: Attention-Augmented Network for Fast and Accurate Scene Parsing (AAAI 2021) | Anonymous | more details | n/a | 70.1 | 97.8 | 80.8 | 90.5 | 49.6 | 49.6 | 51.7 | 62.7 | 67.6 | 92.1 | 69.4 | 94.0 | 80.8 | 61.0 | 94.0 | 61.7 | 62.8 | 47.0 | 51.1 | 67.6 |
CFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 70.1 | 97.8 | 81.4 | 90.5 | 46.4 | 50.6 | 56.4 | 61.5 | 67.7 | 92.1 | 68.9 | 94.3 | 80.4 | 60.7 | 93.9 | 51.4 | 68.0 | 50.8 | 51.2 | 67.7 | ||
Seg_UJS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.3 | 99.0 | 89.2 | 94.8 | 69.7 | 67.1 | 73.7 | 76.4 | 82.2 | 94.4 | 74.8 | 96.2 | 89.7 | 77.4 | 96.9 | 81.2 | 95.1 | 86.4 | 76.0 | 80.8 | ||
Bilateral_attention_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We use a bilateral attention mechanism for semantic segmentation. more details | 0.0141 | 76.5 | 98.4 | 84.9 | 92.5 | 48.1 | 55.4 | 65.7 | 73.9 | 77.2 | 93.3 | 71.7 | 94.9 | 85.6 | 68.9 | 95.4 | 62.7 | 77.5 | 71.0 | 61.5 | 74.7 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 80.4 | 98.8 | 87.8 | 93.9 | 53.9 | 64.1 | 73.0 | 80.1 | 82.9 | 93.9 | 73.6 | 96.0 | 89.3 | 78.4 | 96.4 | 67.0 | 76.3 | 71.6 | 74.1 | 75.8 | |
ESANet RGB-D (small input) | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data with half the input resolution. more details | 0.0427 | 75.6 | 98.5 | 85.9 | 92.3 | 54.1 | 55.4 | 61.6 | 68.0 | 72.8 | 92.3 | 71.3 | 95.2 | 82.5 | 65.8 | 94.8 | 64.0 | 79.9 | 72.3 | 60.5 | 69.8 | |
ESANet RGB (small input) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images with half the input resolution. more details | 0.031 | 72.9 | 98.2 | 84.1 | 91.2 | 57.1 | 52.6 | 55.7 | 61.3 | 66.8 | 91.6 | 69.6 | 94.6 | 79.3 | 62.8 | 93.9 | 64.9 | 71.6 | 64.8 | 57.0 | 67.7 | |
ESANet RGB-D | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data. more details | 0.1613 | 78.4 | 98.7 | 87.1 | 93.3 | 49.8 | 60.2 | 69.1 | 76.1 | 79.4 | 93.6 | 72.7 | 95.9 | 87.2 | 71.3 | 96.1 | 66.2 | 78.4 | 71.5 | 67.1 | 76.2 | |
DAHUA-ARI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Multi-scale and RefineNet. more details | n/a | 85.8 | 99.0 | 89.5 | 95.0 | 72.0 | 68.8 | 75.9 | 82.3 | 85.1 | 94.5 | 75.3 | 96.3 | 90.0 | 79.4 | 97.0 | 83.9 | 95.2 | 91.9 | 78.2 | 81.7 | |
ESANet RGB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images only. more details | 0.1205 | 77.6 | 98.4 | 84.9 | 92.7 | 55.4 | 58.9 | 64.7 | 71.7 | 75.8 | 93.3 | 71.0 | 95.3 | 84.9 | 67.7 | 95.7 | 64.7 | 79.1 | 80.9 | 64.5 | 73.9 | |
DCNAS+ASPP [Mapillary Vistas] | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Existing NAS algorithms usually compromise on a restricted search space or search on a proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. Compared with contemporary works, experiments reveal that the proxyless searching scheme is capable of bridging the gap between searching and training environments. more details | n/a | 85.3 | 99.0 | 89.4 | 94.9 | 71.7 | 69.1 | 75.6 | 82.0 | 84.9 | 94.5 | 75.3 | 96.3 | 89.9 | 79.1 | 96.9 | 81.6 | 95.3 | 87.0 | 77.1 | 81.0 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 84.1 | 98.8 | 88.4 | 94.6 | 66.0 | 68.5 | 75.5 | 81.5 | 84.6 | 94.4 | 74.3 | 96.2 | 89.6 | 79.6 | 96.6 | 77.5 | 89.3 | 86.2 | 77.4 | 78.3 | |
DCNAS+ASPP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation | Anonymous | Existing NAS algorithms usually compromise on a restricted search space or search on a proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. more details | n/a | 84.3 | 98.9 | 88.7 | 94.5 | 68.0 | 67.0 | 73.9 | 80.3 | 83.8 | 94.3 | 74.8 | 96.2 | 89.3 | 78.0 | 96.7 | 79.6 | 95.0 | 86.9 | 75.2 | 80.7 |
ddl_seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.6 | 99.0 | 89.2 | 94.8 | 69.8 | 67.2 | 73.7 | 76.4 | 82.2 | 94.4 | 74.9 | 96.2 | 89.7 | 77.4 | 96.9 | 81.4 | 95.1 | 91.9 | 76.1 | 80.8 | ||
CABiNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CABiNet: Efficient Context Aggregation Network for Low-Latency Semantic Segmentation | Saumya Kumaar, Ye Lyu, Francesco Nex, Michael Ying Yang | With the increasing demand of autonomous machines, pixel-wise semantic segmentation for visual scene understanding needs to be not only accurate but also efficient for any potential real-time applications. In this paper, we propose CABiNet (Context Aggregated Bi-lateral Network), a dual branch convolutional neural network (CNN), with significantly lower computational costs as compared to the state-of-the-art, while maintaining a competitive prediction accuracy. Building upon the existing multi-branch architectures for high-speed semantic segmentation, we design a cheap high resolution branch for effective spatial detailing and a context branch with light-weight versions of global aggregation and local distribution blocks, potent to capture both long-range and local contextual dependencies required for accurate semantic segmentation, with low computational overheads. Specifically, we achieve 76.6% and 75.9% mIOU on Cityscapes validation and test sets respectively, at 76 FPS on an NVIDIA RTX 2080Ti and 8 FPS on a Jetson Xavier NX. Codes and training models will be made publicly available. more details | 0.013 | 76.0 | 98.2 | 83.2 | 92.9 | 44.1 | 62.0 | 71.1 | 78.4 | 80.8 | 93.9 | 70.9 | 95.7 | 84.5 | 67.1 | 95.6 | 60.0 | 70.3 | 57.7 | 61.4 | 75.5 | |
Margin calibration | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | The model is DeepLab v3+ on an SEResNeXt50 backbone. We used margin calibration with log-loss as the learning objective. more details | n/a | 82.1 | 98.7 | 87.3 | 94.0 | 63.4 | 63.8 | 71.7 | 78.1 | 81.8 | 94.1 | 73.6 | 96.0 | 88.1 | 72.8 | 96.5 | 74.9 | 89.3 | 88.1 | 68.9 | 78.3 | |
MT-SSSR | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 79.0 | 98.6 | 86.3 | 93.3 | 53.5 | 59.7 | 69.9 | 77.5 | 80.1 | 93.7 | 72.2 | 95.7 | 87.4 | 71.5 | 96.1 | 65.5 | 82.8 | 72.3 | 68.6 | 77.0 | ||
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 85.1 | 98.9 | 88.4 | 94.7 | 68.2 | 68.6 | 76.0 | 81.3 | 84.7 | 94.4 | 74.1 | 96.2 | 89.7 | 79.8 | 96.8 | 82.1 | 94.2 | 92.1 | 77.2 | 79.2 | |
DSANet: Dilated Spatial Attention for Real-time Semantic Segmentation in Urban Street Scenes | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We present a computationally efficient network named DSANet, which follows a two-branch strategy to tackle the problem of real-time semantic segmentation in urban scenes. We first design a Context branch, which employs Depth-wise Asymmetric ShuffleNet (DAS) as the main building block to acquire sufficient receptive fields. In addition, we propose a dual attention module consisting of dilated spatial attention and channel attention to make full use of the multi-level feature maps simultaneously, which helps predict the pixel-wise labels in each stage. Meanwhile, a Spatial Encoding Network is used to enhance semantic information by preserving the spatial details. Finally, to better combine context information and spatial information, we introduce a Simple Feature Fusion Module to combine the features from the two branches. more details | n/a | 71.4 | 96.8 | 78.5 | 91.2 | 50.5 | 50.8 | 59.4 | 64.0 | 71.7 | 92.6 | 70.0 | 94.5 | 81.3 | 61.8 | 92.9 | 56.1 | 75.6 | 50.6 | 51.0 | 66.9 | |
UJS_model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.3 | 99.0 | 88.9 | 94.8 | 71.2 | 67.8 | 75.6 | 82.0 | 85.2 | 94.5 | 75.0 | 96.2 | 90.0 | 79.3 | 97.0 | 81.1 | 95.1 | 89.3 | 76.8 | 81.2 | ||
Mobilenetv3-small-backbone real-time segmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | The model is a dual-path network with a MobileNetV3-small backbone. A PSP module was used as the context aggregation block. We also use feature fusion modules at x16 and x32. The features of the two branches are then concatenated and fused with a bottleneck conv. Only the training data is used to train the model, excluding the validation data, and evaluation was done on single-scale input images. more details | 0.02 | 63.9 | 97.0 | 75.2 | 88.7 | 41.1 | 44.2 | 45.0 | 53.2 | 61.3 | 90.9 | 68.4 | 93.2 | 74.4 | 51.0 | 92.4 | 44.6 | 46.4 | 39.6 | 45.3 | 61.9 | |
M2FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Urban street scene analysis using lightweight multi-level multi-path feature aggregation network | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Multiagent and Grid Systems Journal | more details | n/a | 68.3 | 97.4 | 78.6 | 90.1 | 37.1 | 42.9 | 56.8 | 63.8 | 68.3 | 92.1 | 66.5 | 94.5 | 79.3 | 58.8 | 93.6 | 48.5 | 62.1 | 53.2 | 48.7 | 65.9 |
AFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.03 | 76.4 | 98.3 | 84.1 | 92.0 | 50.8 | 55.0 | 61.2 | 70.6 | 74.5 | 92.9 | 71.5 | 94.9 | 83.5 | 66.5 | 95.3 | 68.0 | 81.2 | 78.4 | 60.9 | 72.2 | ||
YOLO V5s with Segmentation Head | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Anonymous | Multitask model. Fine-tuned from a COCO detection pretrained model; trains semantic segmentation and object detection (transferred from instance labels) at the same time. more details | 0.007 | 71.3 | 98.0 | 81.3 | 90.2 | 41.9 | 44.9 | 44.8 | 62.6 | 67.8 | 90.7 | 67.4 | 93.5 | 78.4 | 61.6 | 94.1 | 65.3 | 77.1 | 70.2 | 58.5 | 66.3 | |
FSFFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | A Lightweight Multi-scale Feature Fusion Network for Real-Time Semantic Segmentation | Tanmay Singha, Duc-Son Pham, Aneesh Krishna, Tom Gedeon | International Conference on Neural Information Processing 2021 | Feature Scaling Feature Fusion Network more details | n/a | 69.4 | 97.4 | 78.5 | 90.7 | 41.8 | 46.1 | 57.8 | 65.3 | 68.5 | 92.0 | 64.0 | 94.4 | 79.2 | 57.0 | 93.9 | 55.4 | 65.7 | 54.4 | 50.4 | 65.8 |
Qualcomm AI Research | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InverseForm: A Loss Function for Structured Boundary-Aware Segmentation | Shubhankar Borse, Ying Wang, Yizhe Zhang, Fatih Porikli | CVPR 2021 oral | more details | n/a | 85.8 | 98.8 | 89.6 | 94.8 | 71.8 | 69.2 | 75.8 | 82.3 | 85.5 | 94.3 | 75.0 | 96.3 | 90.2 | 79.8 | 97.0 | 84.4 | 95.7 | 90.5 | 77.2 | 81.7 |
HIK-CCSLT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 86.1 | 98.9 | 89.1 | 95.0 | 72.5 | 70.4 | 76.0 | 82.2 | 85.9 | 94.6 | 75.2 | 96.3 | 90.4 | 79.8 | 97.1 | 84.1 | 95.0 | 92.0 | 79.3 | 82.1 | ||
BFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BFNet | Jiaqi Fan | more details | n/a | 71.0 | 97.0 | 79.3 | 91.2 | 51.2 | 50.2 | 58.0 | 65.3 | 69.5 | 92.3 | 67.8 | 94.8 | 79.8 | 60.5 | 92.9 | 55.0 | 72.2 | 53.6 | 53.1 | 64.6 | |
Hai Wang+Yingfeng Cai-research group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.00164 | 85.3 | 99.0 | 89.1 | 94.9 | 71.8 | 68.7 | 75.6 | 82.1 | 84.8 | 94.5 | 74.4 | 96.3 | 90.0 | 79.4 | 97.0 | 81.3 | 95.2 | 89.4 | 76.9 | 81.3 | ||
Jiangsu_university_Intelligent_Drive_AI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.3 | 99.0 | 89.1 | 94.9 | 71.8 | 68.7 | 75.7 | 82.1 | 84.8 | 94.5 | 74.4 | 96.3 | 90.0 | 79.4 | 97.0 | 81.3 | 95.2 | 89.4 | 76.9 | 81.3 | ||
MCANet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | n/a | 73.4 | 98.3 | 84.2 | 92.0 | 49.5 | 51.6 | 62.1 | 67.8 | 73.2 | 92.6 | 70.3 | 95.3 | 81.4 | 59.2 | 94.6 | 57.8 | 75.9 | 64.2 | 56.3 | 69.1 | ||
UFONet (half-resolution) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | UFO RPN: A Region Proposal Network for Ultra Fast Object Detection | Wenkai Li, Andy Song | The 34th Australasian Joint Conference on Artificial Intelligence | more details | n/a | 48.0 | 95.0 | 65.1 | 83.0 | 21.4 | 25.9 | 36.3 | 40.5 | 48.3 | 88.7 | 60.2 | 90.6 | 59.4 | 13.1 | 85.5 | 4.8 | 14.2 | 13.9 | 17.3 | 48.8 |
SCMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 67.9 | 97.9 | 81.3 | 90.4 | 44.2 | 44.3 | 56.1 | 60.6 | 67.6 | 91.8 | 67.7 | 94.6 | 77.5 | 55.6 | 92.8 | 52.1 | 61.2 | 50.3 | 42.2 | 62.9 | ||
FsaNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | FsaNet: Frequency Self-attention for Semantic Segmentation | Anonymous | more details | n/a | 83.0 | 98.8 | 87.8 | 94.1 | 67.7 | 65.1 | 70.1 | 77.5 | 80.8 | 94.0 | 74.0 | 95.9 | 88.0 | 75.1 | 96.4 | 79.2 | 93.9 | 91.8 | 69.0 | 78.9 | |
SCMNet coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | SCMNet: Shared Context Mining Network for Real-time Semantic Segmentation | Tanmay Singha; Moritz Bergemann; Duc-Son Pham; Aneesh Krishna | 2021 Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 68.3 | 97.9 | 81.6 | 90.5 | 39.8 | 44.7 | 57.1 | 62.2 | 68.7 | 91.9 | 67.4 | 94.7 | 78.0 | 56.1 | 92.9 | 52.4 | 62.0 | 51.8 | 44.1 | 63.7 |
SAIT SeeThroughNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 86.2 | 99.0 | 89.2 | 95.1 | 73.9 | 71.6 | 76.3 | 82.5 | 85.2 | 94.5 | 75.4 | 96.3 | 90.1 | 79.6 | 97.0 | 83.6 | 95.2 | 92.8 | 77.9 | 81.8 | ||
JSU_IDT_group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.9 | 99.0 | 89.4 | 94.9 | 73.1 | 69.2 | 75.7 | 82.2 | 85.0 | 94.5 | 75.9 | 96.3 | 90.1 | 79.3 | 97.0 | 83.7 | 94.9 | 92.3 | 77.4 | 81.5 | ||
DLA_HRNet48OCR_MSFLIP_000 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | This set of predictions is from DLA (differentiable lattice assignment network) with "HRNet48+OCR-Head" as the base segmentation model. The model is first trained on the coarse data, and then trained on the fine-annotated train/val sets. A multi-scale (0.5, 0.75, 1.0, 1.25, 1.5, 1.75) and flip scheme is adopted during inference. more details | n/a | 84.8 | 98.9 | 89.2 | 94.8 | 71.0 | 67.5 | 75.5 | 81.9 | 84.9 | 94.4 | 74.7 | 96.2 | 89.7 | 78.9 | 96.9 | 80.1 | 93.0 | 87.0 | 76.3 | 80.9 | |
MYBank-AIoT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 86.3 | 99.0 | 89.4 | 95.0 | 73.4 | 71.2 | 76.1 | 82.2 | 85.3 | 94.6 | 75.8 | 96.4 | 90.7 | 80.8 | 97.0 | 84.3 | 94.5 | 90.9 | 79.9 | 82.6 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 83.2 | 98.8 | 88.3 | 94.3 | 63.5 | 67.5 | 71.9 | 79.7 | 83.3 | 94.1 | 73.5 | 96.2 | 88.7 | 77.5 | 96.7 | 83.1 | 90.3 | 82.6 | 75.4 | 74.5 |
LeapAI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Using advanced AI techniques. more details | n/a | 86.4 | 99.0 | 89.9 | 95.1 | 73.8 | 71.6 | 76.0 | 82.4 | 85.9 | 94.6 | 76.0 | 96.4 | 90.3 | 80.3 | 97.0 | 84.2 | 95.1 | 92.1 | 79.3 | 82.5 | ||
adlab_iiau_ldz | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | meticulous-caiman_2022.05.01_03.32 more details | n/a | 85.6 | 99.0 | 89.5 | 95.0 | 73.0 | 68.5 | 75.7 | 82.1 | 85.3 | 94.5 | 75.7 | 96.3 | 89.9 | 79.3 | 96.9 | 84.3 | 95.1 | 88.4 | 76.6 | 81.4 | ||
SFRSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | A Real-Time Semantic Segmentation Model Using Iteratively Shared Features In Multiple Sub-Encoders | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | Pattern Recognition | more details | n/a | 70.6 | 98.0 | 82.6 | 90.9 | 45.9 | 49.9 | 49.0 | 62.1 | 66.8 | 91.4 | 68.3 | 94.7 | 78.0 | 54.5 | 93.1 | 57.9 | 69.1 | 67.8 | 54.7 | 66.0 |
PIDNet-S | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | PIDNet: A Real-time Semantic Segmentation Network Inspired from PID Controller | Anonymous | more details | 0.0107 | 78.6 | 98.5 | 85.6 | 92.8 | 52.8 | 59.0 | 65.5 | 73.8 | 76.4 | 93.5 | 71.8 | 95.6 | 85.4 | 70.3 | 95.5 | 68.1 | 84.2 | 85.2 | 64.7 | 74.8 | |
Vision Transformer Adapter for Dense Predictions | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Vision Transformer Adapter for Dense Predictions | Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, Yu Qiao | ViT-Adapter-L, BEiT pre-train, multi-scale testing more details | n/a | 85.2 | 98.9 | 88.5 | 94.5 | 66.7 | 70.2 | 74.5 | 80.2 | 83.6 | 94.4 | 73.7 | 96.2 | 89.7 | 79.0 | 96.7 | 85.5 | 94.4 | 90.5 | 79.9 | 81.8 | |
SSNet | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 72.5 | 97.9 | 81.7 | 92.1 | 44.9 | 50.7 | 65.0 | 72.3 | 76.6 | 92.6 | 68.0 | 95.1 | 84.8 | 65.3 | 94.7 | 52.2 | 63.7 | 47.0 | 58.3 | 73.6 | ||
SDBNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | SDBNet: Lightweight Real-time Semantic Segmentation Using Short-term Dense Bottleneck | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 70.8 | 97.9 | 81.4 | 91.0 | 47.4 | 47.6 | 55.2 | 64.5 | 69.7 | 92.0 | 68.5 | 94.4 | 79.4 | 59.1 | 93.6 | 53.2 | 70.4 | 62.9 | 51.6 | 66.1 |
MeiTuan-BaseModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 86.5 | 99.0 | 89.6 | 95.1 | 73.5 | 72.6 | 76.7 | 82.2 | 85.3 | 94.7 | 75.6 | 96.4 | 90.6 | 80.9 | 96.8 | 84.5 | 95.5 | 92.4 | 78.8 | 82.7 | ||
SDBNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improved Short-term Dense Bottleneck network for efficient scene analysis | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Computer Vision and Image Understanding | more details | n/a | 72.4 | 98.1 | 83.0 | 91.6 | 53.3 | 50.7 | 57.0 | 66.6 | 71.4 | 92.2 | 68.6 | 94.8 | 80.5 | 61.4 | 93.8 | 58.2 | 69.1 | 61.8 | 56.0 | 66.9 |
mogo_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.3 | 99.0 | 89.0 | 94.9 | 71.3 | 68.1 | 76.1 | 82.3 | 85.0 | 94.4 | 74.5 | 96.3 | 90.2 | 79.4 | 96.9 | 82.2 | 95.5 | 88.6 | 76.1 | 81.5 | ||
UDSSEG_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | UDSSEG_RVC more details | n/a | 79.4 | 98.4 | 86.7 | 93.4 | 63.4 | 64.5 | 67.3 | 73.6 | 76.9 | 93.7 | 72.4 | 95.9 | 85.1 | 67.9 | 95.9 | 73.4 | 83.6 | 77.1 | 66.0 | 74.0 | ||
MIX6D_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MIX6D_RVC more details | n/a | 79.8 | 98.1 | 82.5 | 92.6 | 61.9 | 62.3 | 61.7 | 70.8 | 74.7 | 93.1 | 69.6 | 95.0 | 84.4 | 68.9 | 95.0 | 78.4 | 92.3 | 90.6 | 69.6 | 74.4 | ||
FAN_NV_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hybrid-Base + Segformer more details | n/a | 82.0 | 98.4 | 84.8 | 93.8 | 65.1 | 66.8 | 67.3 | 74.5 | 78.0 | 94.0 | 72.2 | 96.1 | 86.8 | 71.3 | 96.3 | 81.1 | 93.3 | 88.6 | 72.2 | 76.9 | ||
UNIV_CNP_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | RVC 2022 more details | n/a | 75.1 | 97.5 | 80.2 | 91.6 | 62.4 | 58.6 | 46.8 | 66.8 | 70.1 | 91.9 | 65.8 | 94.3 | 79.7 | 64.6 | 92.0 | 76.0 | 85.0 | 76.9 | 59.8 | 66.8 | ||
AntGroup-AI-VisionAlgo | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | AntGroup AI vision algo more details | n/a | 86.4 | 98.9 | 89.2 | 95.0 | 74.6 | 72.6 | 76.3 | 82.1 | 84.8 | 94.6 | 75.2 | 96.3 | 90.0 | 80.5 | 96.8 | 84.9 | 95.3 | 93.5 | 78.8 | 82.2 | ||
InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, Xiaogang Wang, Yu Qiao | CVPR 2023 | We use Mask2Former as the segmentation framework, and initialize our InternImage-H model with the pre-trained weights on the 427M joint dataset of public Laion-400M, YFCC-15M, and CC12M. Following common practices, we first pre-train on Mapillary Vistas for 80k iterations, and then fine-tune on Cityscapes for 80k iterations. The crop size is set to 1024×1024 in this experiment. As a result, our InternImage-H achieves 87.0 multi-scale mIoU on the validation set, and 86.1 multi-scale mIoU on the test set. more details | n/a | 86.1 | 98.9 | 88.8 | 94.9 | 72.5 | 71.2 | 75.4 | 80.9 | 84.7 | 94.5 | 75.5 | 96.3 | 90.1 | 79.9 | 96.8 | 85.3 | 95.5 | 92.6 | 80.0 | 82.2 |
Dense Prediction with Attentive Feature aggregation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Prediction with Attentive Feature Aggregation | Yung-Hsu Yang, Thomas E. Huang, Min Sun, Samuel Rota Bulò, Peter Kontschieder, Fisher Yu | WACV 2023 | We propose Attentive Feature Aggregation (AFA) to exploit both spatial and channel information for semantic segmentation and boundary detection. more details | n/a | 83.6 | 98.9 | 88.6 | 94.4 | 68.4 | 64.9 | 72.7 | 80.7 | 83.4 | 94.2 | 74.5 | 96.1 | 89.1 | 76.9 | 96.7 | 76.4 | 90.4 | 89.0 | 73.3 | 79.6 |
W3_FAFM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Junyan Yang, Qian Xu, Lei La | Team: BOSCH-XC-DX-WAVE3 more details | 0.029309 | 80.5 | 98.7 | 86.5 | 93.4 | 59.4 | 59.4 | 67.4 | 76.9 | 79.2 | 93.6 | 72.1 | 95.5 | 86.8 | 73.6 | 95.9 | 70.3 | 88.1 | 86.1 | 69.7 | 77.3 | ||
HRN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hierarchical residual network more details | 45.0 | 77.7 | 98.4 | 85.1 | 92.6 | 55.9 | 56.8 | 63.9 | 72.2 | 75.9 | 93.2 | 72.3 | 95.3 | 84.3 | 68.1 | 95.5 | 66.7 | 80.7 | 81.6 | 63.3 | 73.5 | ||
HRN+DCNv2_for_DOAS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRN with DCNv2 for DOAS in paper "Dynamic Obstacle Avoidance System based on Rapid Instance Segmentation Network" more details | 0.032 | 81.2 | 98.6 | 86.7 | 93.7 | 62.1 | 63.9 | 70.4 | 77.0 | 79.5 | 93.8 | 72.9 | 95.6 | 87.7 | 73.4 | 96.0 | 71.2 | 87.7 | 82.9 | 71.4 | 78.0 | ||
GEELY-ATC-SEG | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 86.7 | 98.9 | 89.2 | 95.0 | 74.0 | 73.8 | 76.3 | 81.5 | 84.9 | 94.6 | 75.6 | 96.3 | 90.3 | 80.4 | 96.8 | 86.9 | 95.7 | 93.3 | 80.4 | 82.5 | ||
PMSDSEN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Parallel Multi-Scale Detail and Semantic Encoding Network for Lightweight Semantic Segmentation | Xiao Liu, Xiuya Shi, Lufei Chen, Linbo Qing, Chao Ren | ACM International Conference on Multimedia 2023 | MM '23: Proceedings of the 31st ACM International Conference on Multimedia more details | n/a | 74.0 | 98.2 | 83.9 | 91.4 | 44.4 | 53.1 | 62.8 | 67.4 | 72.6 | 92.6 | 70.2 | 94.8 | 81.2 | 63.1 | 94.4 | 65.2 | 75.2 | 67.0 | 58.1 | 70.1
ECFD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | backbone: ConvNext-Large more details | n/a | 82.1 | 98.7 | 87.6 | 94.1 | 63.6 | 67.2 | 72.1 | 78.6 | 82.5 | 94.1 | 73.2 | 96.0 | 88.8 | 76.4 | 96.3 | 79.3 | 85.0 | 71.3 | 75.6 | 79.5 | ||
DWGSeg-L75 | yes | yes | no | no | no | no | no | no | no | no | 1.3 | 1.3 | no | no | Anonymous | more details | 0.00755 | 76.7 | 98.5 | 85.5 | 92.4 | 54.3 | 55.6 | 61.6 | 70.5 | 74.3 | 92.9 | 70.7 | 95.2 | 83.5 | 66.6 | 95.4 | 71.7 | 82.1 | 72.0 | 61.1 | 72.5 | ||
VLTSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | VLTSeg: Simple Transfer of CLIP-Based Vision-Language Representations for Domain Generalized Semantic Segmentation | Christoph Hümmer, Manuel Schwonberg, Liangwei Zhou, Hu Cao, Alois Knoll, Hanno Gottschalk | more details | n/a | 86.4 | 98.9 | 89.0 | 94.8 | 70.9 | 72.8 | 75.6 | 81.3 | 84.6 | 94.5 | 75.7 | 96.3 | 90.6 | 81.4 | 96.8 | 84.2 | 95.4 | 93.2 | 82.4 | 82.7 | |
CGMANet_v1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene | Saquib Mazhar | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene more details | n/a | 73.3 | 97.8 | 82.1 | 91.6 | 52.1 | 49.9 | 61.8 | 68.3 | 72.6 | 91.9 | 68.4 | 94.3 | 82.7 | 63.7 | 93.9 | 56.4 | 75.3 | 60.8 | 59.0 | 70.8 | |
SERNet-Former_v2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.4 | 98.7 | 87.4 | 94.3 | 68.7 | 68.5 | 72.0 | 78.2 | 81.9 | 93.9 | 71.7 | 95.9 | 88.7 | 78.3 | 96.1 | 83.2 | 95.4 | 92.6 | 78.5 | 80.1 |
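The per-class scores above are standard intersection-over-union values, and the "average" column is the mean over the 19 classes; the next table reports the instance-weighted iIoU for the eight classes that have instances. As a rough reference only (this is not the official evaluation code), the sketch below shows how both metrics are commonly computed; it assumes NumPy label maps and a precomputed per-pixel instance-size weight map, which the official Cityscapes scripts derive from the instance annotations.

```python
# Minimal sketch of the two class-level metrics reported here (IoU and iIoU).
# NOT the official evaluation code: assumes `pred` and `gt` are integer label
# maps of equal shape, and `inst_weight` is a precomputed per-pixel weight
# (average instance size of the class divided by the size of the ground-truth
# instance the pixel belongs to).
import numpy as np


def class_iou(pred: np.ndarray, gt: np.ndarray, class_id: int) -> float:
    """IoU = TP / (TP + FP + FN) for a single class."""
    pred_c, gt_c = pred == class_id, gt == class_id
    tp = np.logical_and(pred_c, gt_c).sum()
    fp = np.logical_and(pred_c, ~gt_c).sum()
    fn = np.logical_and(~pred_c, gt_c).sum()
    denom = tp + fp + fn
    return float(tp) / float(denom) if denom > 0 else float("nan")


def class_iiou(pred: np.ndarray, gt: np.ndarray, class_id: int,
               inst_weight: np.ndarray) -> float:
    """iIoU = iTP / (iTP + FP + iFN): ground-truth pixels are weighted so that
    each instance contributes equally regardless of its size; false positives
    keep a weight of 1 because they belong to no ground-truth instance."""
    pred_c, gt_c = pred == class_id, gt == class_id
    itp = (inst_weight * np.logical_and(pred_c, gt_c)).sum()
    fp = np.logical_and(pred_c, ~gt_c).sum()
    ifn = (inst_weight * np.logical_and(~pred_c, gt_c)).sum()
    denom = itp + fp + ifn
    return float(itp) / float(denom) if denom > 0 else float("nan")
```

In the benchmark, these counts are accumulated over the whole test set before the ratio is taken, and the "average" column of the following table is the mean iIoU over the eight instance classes.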
iIoU on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FCN 8s | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Fully Convolutional Networks for Semantic Segmentation | J. Long, E. Shelhamer, and T. Darrell | CVPR 2015 | Trained by Marius Cordts on a pre-release version of the dataset more details | 0.5 | 41.7 | 55.9 | 33.4 | 83.9 | 22.2 | 30.8 | 26.7 | 31.1 | 49.6 |
RRR-ResNet152-MultiScale | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | update: this submission actually used the coarse labels, which was previously not marked accordingly more details | n/a | 48.5 | 60.6 | 38.0 | 88.4 | 26.1 | 39.6 | 43.1 | 38.8 | 53.6 | ||
Dilation10 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Scale Context Aggregation by Dilated Convolutions | Fisher Yu and Vladlen Koltun | ICLR 2016 | Dilation10 is a convolutional network that consists of a front-end prediction module and a context aggregation module. Both are described in the paper. The combined network was trained jointly. The context module consists of 10 layers, each of which has C=19 feature maps. The larger number of layers in the context module (10 for Cityscapes versus 8 for Pascal VOC) is due to the high input resolution. The Dilation10 model is a pure convolutional network: there is no CRF and no structured prediction. Dilation10 can therefore be used as the baseline input for structured prediction models. Note that the reported results were produced by training on the training set only; the network was not retrained on train+val. more details | 4.0 | 42.0 | 56.3 | 34.5 | 85.8 | 21.8 | 32.7 | 27.6 | 28.0 | 49.1 |
Adelaide | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | G. Lin, C. Shen, I. Reid, and A. van den Hengel | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 35.0 | 46.7 | 56.2 | 38.0 | 77.1 | 34.0 | 47.0 | 33.4 | 38.1 | 49.9 |
DeepLab LargeFOV StrongWeak | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation | G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille | ICCV 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 34.9 | 40.7 | 23.1 | 78.6 | 21.4 | 32.4 | 27.6 | 20.8 | 34.6 |
DeepLab LargeFOV Strong | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs | L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille | ICLR 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 34.5 | 40.5 | 23.3 | 78.8 | 20.3 | 31.9 | 24.8 | 21.1 | 35.2 |
DPN | yes | yes | yes | yes | no | no | no | no | no | no | 3 | 3 | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | Trained on a pre-release version of the dataset more details | n/a | 28.1 | 38.9 | 12.8 | 78.6 | 13.4 | 24.0 | 19.2 | 10.7 | 27.2 |
Segnet basic | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 32.0 | 44.3 | 22.7 | 78.4 | 16.1 | 24.3 | 20.7 | 15.8 | 33.6 |
Segnet extended | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 34.2 | 49.9 | 27.1 | 81.1 | 15.3 | 23.7 | 18.5 | 19.6 | 38.4 |
CRFasRNN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Conditional Random Fields as Recurrent Neural Networks | S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr | ICCV 2015 | Trained on a pre-release version of the dataset more details | 0.7 | 34.4 | 50.6 | 17.8 | 81.1 | 18.0 | 25.0 | 30.3 | 22.3 | 30.1 |
Scale invariant CNN + CRF | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Convolutional Scale Invariance for Semantic Segmentation | I. Kreso, D. Causevic, J. Krapac, and S. Segvic | GCPR 2016 | We propose an effective technique to address large scale variation in images taken from a moving car by cross-breeding deep learning with stereo reconstruction. Our main contribution is a novel scale selection layer which extracts convolutional features at the scale which matches the corresponding reconstructed depth. The recovered scale-invariant representation disentangles appearance from scale and frees the pixel-level classifier from the need to learn the laws of perspective. This results in improved segmentation results due to more efficient exploitation of representation capacity and training data. We perform experiments on two challenging stereoscopic datasets (KITTI and Cityscapes) and report competitive class-level IoU performance. more details | n/a | 44.9 | 59.0 | 40.0 | 84.0 | 19.7 | 35.8 | 33.0 | 36.0 | 51.4
DPN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | DPN trained on full resolution images more details | n/a | 39.1 | 53.6 | 28.9 | 85.0 | 20.1 | 28.3 | 24.9 | 24.8 | 46.9 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 41.6 | 60.6 | 33.4 | 86.7 | 19.5 | 25.6 | 25.8 | 30.5 | 50.5 |
Adelaide_context | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid | CVPR 2016 | We explore contextual information to improve semantic image segmentation. Details are described in the paper. We trained contextual networks for coarse level prediction and a refinement network for refining the coarse prediction. Our models are trained on the training set only (2975 images) without adding the validation set. more details | n/a | 51.7 | 61.5 | 41.2 | 86.3 | 35.8 | 47.7 | 42.0 | 42.1 | 57.4 |
NVSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | For inference, we use the image at 2 different scales. The same for training! more details | 0.4 | 41.4 | 51.9 | 33.3 | 85.0 | 25.6 | 34.5 | 25.3 | 27.8 | 47.6 | ||
ENet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation | Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello | more details | 0.013 | 34.4 | 47.6 | 20.8 | 80.0 | 17.5 | 26.8 | 21.8 | 20.9 | 39.4 | |
DeepLabv2-CRF | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs | Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille | arXiv preprint | DeepLabv2-CRF is based on three main methods. First, we employ convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool to repurpose ResNet-101 (trained on image classification task) in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within DCNNs. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and fully connected Conditional Random Fields (CRFs). The model is only trained on train set. more details | n/a | 42.6 | 51.5 | 31.2 | 85.4 | 26.5 | 37.8 | 34.5 | 27.4 | 46.5 |
m-TCFs | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Convolutional Neural Network more details | 1.0 | 43.6 | 55.8 | 33.6 | 86.6 | 24.8 | 36.5 | 31.9 | 30.5 | 48.8 | ||
DeepLab+DynamicCRF | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ru.nl | more details | n/a | 38.3 | 44.5 | 29.8 | 81.3 | 19.8 | 31.3 | 30.4 | 28.7 | 40.7 | ||
LRR-4x | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained on the training set (2975 images). The segmentation predictions were not post-processed using CRF. (This is a revision of a previous submission in which we didn't use the correct basis functions; the method name changed from 'LLR-4x' to 'LRR-4x') more details | n/a | 48.0 | 61.5 | 40.1 | 87.3 | 31.2 | 41.9 | 28.4 | 36.4 | 57.3 |
LRR-4x | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained using both coarse and fine annotations. The segmentation predictions were not post-processed using CRF. more details | n/a | 47.9 | 61.0 | 39.7 | 86.7 | 30.4 | 40.1 | 34.5 | 35.4 | 55.2 |
Le_Selfdriving_VGG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 35.6 | 48.5 | 25.3 | 81.0 | 16.5 | 26.8 | 21.6 | 25.0 | 39.9 | ||
SQ | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Speeding up Semantic Segmentation for Autonomous Driving | Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, Bernhard Nessler, Sepp Hochreiter | NIPS 2016 Workshop - MLITS Machine Learning for Intelligent Transportation Systems Neural Information Processing Systems 2016, Barcelona, Spain | more details | 0.06 | 32.3 | 47.8 | 21.7 | 84.6 | 7.8 | 21.0 | 16.6 | 14.9 | 43.6 |
SAIT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous more details | 4.0 | 51.8 | 63.4 | 41.2 | 88.9 | 33.4 | 48.2 | 44.4 | 38.7 | 56.4 | ||
FoveaNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FoveaNet | Xin Li, Jiashi Feng | 1.caffe-master 2.resnet-101 3.single scale testing Previously listed as "LXFCRN". more details | n/a | 52.4 | 66.7 | 46.6 | 88.5 | 32.8 | 41.2 | 40.9 | 42.9 | 59.9 | |
RefineNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation | Guosheng Lin; Anton Milan; Chunhua Shen; Ian Reid; | Please refer to our technical report for details: "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation" (https://arxiv.org/abs/1611.06612). Our source code is available at: https://github.com/guosheng/refinenet 2975 images (training set with fine labels) are used for training. more details | n/a | 47.2 | 55.6 | 35.8 | 86.9 | 30.1 | 42.6 | 42.4 | 34.3 | 50.0 | |
SegModel | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Both train set (2975) and val set (500) are used to train model for this submission. more details | 0.8 | 56.1 | 63.3 | 47.0 | 89.8 | 41.4 | 56.2 | 48.4 | 45.4 | 56.7 | ||
TuSimple | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | more details | n/a | 53.6 | 62.7 | 43.9 | 88.2 | 38.5 | 49.7 | 40.3 | 43.7 | 61.4 | |
Global-Local-Refinement | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global-residual and Local-boundary Refinement Networks for Rectifying Scene Parsing Predictions | Rui Zhang, Sheng Tang, Min Lin, Jintao Li, Shuicheng Yan | International Joint Conference on Artificial Intelligence (IJCAI) 2017 | global-residual and local-boundary refinement The method was previously listed as "RefineNet". To avoid confusions with a recently appeared and similarly named approach, the submission name was updated. more details | n/a | 53.4 | 65.3 | 45.3 | 89.1 | 36.5 | 50.5 | 42.7 | 39.4 | 58.2 |
XPARSE | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 49.2 | 61.7 | 40.5 | 87.5 | 31.3 | 42.7 | 35.9 | 38.3 | 55.4 | ||
ResNet-38 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | single model, single scale, no post-processing with CRFs Model A2, 2 conv., fine only, single scale testing The submission was previously listed as "Model A2, 2 conv.". The name was changed for consistency with the other submission of the same work. more details | n/a | 59.1 | 71.9 | 50.6 | 90.5 | 42.0 | 54.2 | 51.4 | 48.1 | 64.2
SegModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 56.4 | 65.2 | 46.7 | 90.1 | 41.5 | 55.1 | 48.3 | 45.3 | 59.2 | ||
Deep Layer Cascade (LC) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via Deep Layer Cascade | Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang | CVPR 2017 | We propose a novel deep layer cascade (LC) method to improve the accuracy and speed of semantic segmentation. Unlike the conventional model cascade (MC) that is composed of multiple independent models, LC treats a single deep model as a cascade of several sub-models. Earlier sub-models are trained to handle easy and confident regions, and they progressively feed-forward harder regions to the next sub-model for processing. Convolutions are only calculated on these regions to reduce computations. The proposed method possesses several advantages. First, LC classifies most of the easy regions in the shallow stage and makes the deeper stages focus on a few hard regions. Such an adaptive and 'difficulty-aware' learning improves segmentation performance. Second, LC accelerates both training and testing of the deep network thanks to early decisions in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable framework, allowing joint learning of all sub-models. We evaluate our method on PASCAL VOC and more details | n/a | 47.0 | 60.5 | 36.1 | 87.9 | 26.1 | 42.1 | 32.3 | 35.1 | 56.0
FRRN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes | Tobias Pohlen, Alexander Hermans, Markus Mathias, Bastian Leibe | Arxiv | Full-Resolution Residual Networks (FRRN) combine multi-scale context with pixel-level accuracy by using two processing streams within one network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. more details | n/a | 45.5 | 62.9 | 39.0 | 87.9 | 22.0 | 35.6 | 28.1 | 35.1 | 53.0 |
MNet_MPRG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Chubu University, MPRG | without val dataset, external dataset (e.g. image net) and post-processing more details | 0.6 | 46.6 | 66.9 | 40.6 | 89.2 | 22.5 | 32.9 | 30.1 | 33.3 | 57.2 | ||
ResNet-38 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | single model, no post-processing with CRFs Model A2, 2 conv., fine+coarse, multi scale testing more details | n/a | 57.8 | 68.5 | 48.8 | 90.5 | 42.0 | 51.9 | 50.7 | 47.0 | 62.8 |
FCN8s-QunjieYu | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 34.5 | 53.2 | 28.1 | 84.9 | 9.4 | 18.2 | 13.3 | 22.0 | 46.6 | ||
RGB-D FCN | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Anonymous | GoogLeNet + depth branch, single model no data augmentation, no training on validation set, no graphical model Used coarse labels to initialize depth branch more details | n/a | 42.1 | 56.0 | 35.5 | 85.8 | 22.3 | 33.5 | 23.3 | 30.0 | 50.6 | ||
MultiBoost | yes | yes | yes | yes | no | no | yes | yes | no | no | 2 | 2 | no | no | Anonymous | Boosting based solution. Publication is under review. more details | 0.25 | 32.5 | 41.2 | 25.8 | 77.8 | 11.2 | 23.6 | 24.9 | 21.8 | 33.4 | ||
GoogLeNet FCN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Going Deeper with Convolutions | Christian Szegedy , Wei Liu , Yangqing Jia , Pierre Sermanet , Scott Reed , Dragomir Anguelov , Dumitru Erhan , Vincent Vanhoucke , Andrew Rabinovich | CVPR 2015 | GoogLeNet No data augmentation, no graphical model Trained by Lukas Schneider, following "Fully Convolutional Networks for Semantic Segmentation", Long et al. CVPR 2015 more details | n/a | 38.6 | 54.0 | 28.6 | 85.0 | 16.9 | 29.6 | 19.3 | 25.7 | 49.8 |
ERFNet (pretrained) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | Transactions on Intelligent Transportation Systems (T-ITS) | ERFNet pretrained on ImageNet and trained only on the fine train (2975) annotated images more details | 0.02 | 44.1 | 60.1 | 34.7 | 86.1 | 22.6 | 37.6 | 31.2 | 29.0 | 51.4 |
ERFNet (from scratch) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | IV2017 | ERFNet trained entirely on the fine train set (2975 images) without any pretraining nor coarse labels more details | 0.02 | 40.4 | 56.7 | 31.5 | 84.9 | 19.4 | 35.1 | 25.0 | 24.3 | 46.6 |
TuSimple_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are better for practical use. First, we implement dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a new state-of-art result of 80.1% mIOU in the test set. We also are state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Pretrained models are available at https://goo.gl/DQMeun. more details | n/a | 56.9 | 67.6 | 47.3 | 89.2 | 38.3 | 52.5 | 54.8 | 44.8 | 60.9 | |
SAC-multiple | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scale-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | International Conference on Computer Vision (ICCV) 2017 | more details | n/a | 55.2 | 67.0 | 48.4 | 90.3 | 39.4 | 50.6 | 42.0 | 43.5 | 60.7 |
NetWarp | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 59.5 | 69.7 | 52.3 | 90.8 | 41.9 | 55.7 | 52.7 | 51.4 | 61.8 | ||
depthAwareSeg_RNN_ff | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | training with fine-annotated training images only (val set is not used); flip-augmentation only in training; single GPU for train&test; softmax loss; resnet101 as front end; multiscale test. more details | n/a | 56.0 | 66.3 | 46.7 | 88.4 | 37.3 | 50.7 | 52.0 | 45.5 | 60.8 | ||
Ladder DenseNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Ladder-style DenseNets for Semantic Segmentation of Large Natural Images | Ivan Krešo, Josip Krapac, Siniša Šegvić | ICCV 2017 | https://ivankreso.github.io/publication/ladder-densenet/ more details | 0.45 | 51.6 | 68.8 | 42.9 | 90.1 | 29.0 | 42.5 | 40.7 | 38.2 | 60.4 |
Real-time FCN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Understanding Cityscapes: Efficient Urban Semantic Scene Understanding | Marius Cordts | Dissertation | Combines the following concepts: Network architecture: "Going deeper with convolutions". Szegedy et al., CVPR 2015 Framework and skip connections: "Fully convolutional networks for semantic segmentation". Long et al., CVPR 2015 Context modules: "Multi-scale context aggregation by dilated convolutions". Yu and Koltun, ICLR 2016 more details | 0.044 | 45.5 | 59.2 | 36.1 | 85.2 | 25.6 | 40.0 | 35.6 | 32.7 | 49.7
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Conv-Deconv Grid-Network for semantic segmentation. Using only the training set without extra coarse annotated data (only 2975 images). No pre-training (ImageNet). No post-processing (like CRF). more details | n/a | 44.1 | 57.3 | 36.4 | 85.6 | 21.4 | 37.8 | 29.3 | 32.0 | 52.7 | ||
PEARL | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Video Scene Parsing with Predictive Feature Learning | Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang, Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, and Shuicheng Yan | ICCV 2017 | We proposed a novel Parsing with prEdictive feAtuRe Learning (PEARL) model to address the following two problems in video scene parsing: firstly, how to effectively learn meaningful video representations for producing the temporally consistent labeling maps; secondly, how to overcome the problem of insufficient labeled video training data, i.e. how to effectively conduct unsupervised deep learning. To our knowledge, this is the first model to employ predictive feature learning in the video scene parsing. more details | n/a | 51.6 | 63.2 | 42.5 | 88.0 | 33.2 | 45.8 | 42.6 | 40.8 | 56.8 |
pruned & dilated inception-resnet-v2 (PD-IR2) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | 0.69 | 42.1 | 53.1 | 35.3 | 82.9 | 20.8 | 36.5 | 32.1 | 28.5 | 47.4 | ||
PSPNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Pyramid Scene Parsing Network | Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia | CVPR 2017 | This submission is trained on coarse+fine(train+val set, 2975+500 images). Former submission is trained on coarse+fine(train set, 2975 images) which gets 80.2 mIoU: https://www.cityscapes-dataset.com/method-details/?submissionID=314 Previous versions of this method were listed as "SenseSeg_1026". more details | n/a | 59.6 | 69.3 | 51.2 | 90.3 | 42.6 | 55.1 | 56.2 | 48.7 | 63.5 |
motovis | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | motovis.com | more details | n/a | 57.7 | 71.3 | 48.1 | 91.1 | 40.9 | 50.8 | 50.5 | 46.6 | 62.1 | ||
ML-CRNN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-level Contextual RNNs with Attention Model for Scene Labeling | Heng Fan, Xue Mei, Danil Prokhorov, Haibin Ling | arXiv | A framework based on CNNs and RNNs is proposed, in which the RNNs are used to model spatial dependencies among image units. Besides, to enrich deep features, we use different features from multiple levels, and adopt a novel attention model to fuse them. more details | n/a | 47.1 | 59.1 | 39.0 | 85.8 | 30.3 | 40.1 | 34.1 | 34.7 | 53.6 |
Hybrid Model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 41.2 | 54.1 | 32.2 | 83.3 | 22.0 | 33.4 | 24.3 | 31.2 | 49.4 | ||
tek-Ifly | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Iflytek | Iflytek-yin | using a fusion strategy of three single models, the best result of a single model is 80.01%, multi-scale more details | n/a | 60.1 | 69.9 | 48.3 | 90.1 | 44.6 | 55.4 | 57.7 | 50.2 | 64.5 | |
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Residual Conv-Deconv Grid Network for Semantic Segmentation | Damien Fourure, Rémi Emonet, Elisa Fromont, Damien Muselet, Alain Tremeau & Christian Wolf | BMVC 2017 | We used a new architecture for semantic image segmentation called GridNet, following a grid pattern allowing multiple interconnected streams to work at different resolutions (see paper). We used only the training set without extra coarse annotated data (only 2975 images) and no pre-training (ImageNet) nor pre or post-processing. more details | n/a | 44.5 | 57.7 | 37.1 | 85.9 | 22.0 | 38.8 | 29.2 | 32.0 | 53.2 |
firenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 47.8 | 64.9 | 40.1 | 85.9 | 27.6 | 40.5 | 31.7 | 37.4 | 54.1 | ||
DeepLabv3 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Rethinking Atrous Convolution for Semantic Image Segmentation | Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam | arXiv preprint | In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we employ a module, called Atrous Spatial Pyramid Pooling (ASPP), which adopts atrous convolution in parallel to capture multi-scale context with multiple atrous rates. Furthermore, we propose to augment the ASPP module with image-level features encoding global context and further boost performance. Results obtained with a single model (no ensemble), trained with fine + coarse annotations. More details will be shown in the updated arXiv report. more details | n/a | 62.1 | 72.9 | 53.9 | 91.2 | 46.0 | 57.8 | 55.9 | 52.9 | 65.9
EdgeSenseSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Deep segmentation network with hard negative mining and other tricks. more details | n/a | 57.1 | 68.4 | 49.1 | 88.8 | 41.8 | 53.2 | 46.9 | 46.7 | 62.3 | ||
ScaleNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ScaleNet: Scale Invariant Network for Semantic Segmentation in Urban Driving Scenes | Mohammad Dawud Ansari, Stephan Krauß, Oliver Wasenmüller and Didier Stricker | International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 2018 | The scale difference in driving scenarios is one of the essential challenges in semantic scene segmentation. Close objects cover significantly more pixels than far objects. In this paper, we address this challenge with a scale invariant architecture. Within this architecture, we explicitly estimate the depth and adapt the pooling field size accordingly. Our model is compact and can be extended easily to other research domains. Finally, the accuracy of our approach is comparable to the state-of-the-art and superior for scale problems. We evaluate on the widely used automotive dataset Cityscapes as well as a self-recorded dataset. more details | n/a | 53.1 | 65.6 | 44.7 | 88.6 | 34.6 | 47.2 | 42.4 | 43.2 | 58.6
K-net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | XinLiang Zhong | more details | n/a | 52.8 | 62.8 | 46.7 | 88.3 | 37.0 | 51.7 | 42.6 | 38.6 | 55.0 | ||
MSNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | previously also listed as "MultiPathJoin" and "MultiPath_Scale". more details | 0.2 | 57.1 | 74.2 | 51.4 | 89.7 | 37.6 | 52.8 | 38.3 | 46.6 | 66.6 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 57.4 | 66.9 | 49.3 | 89.4 | 40.0 | 50.8 | 54.1 | 47.2 | 61.8 | |
DeepMotion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We propose a novel method based on convnets to extract multi-scale features in a large range particularly for solving street scene segmentation. more details | n/a | 58.6 | 67.5 | 49.2 | 89.9 | 44.2 | 55.2 | 55.6 | 46.3 | 60.9 | ||
SR-AIC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 60.7 | 69.3 | 51.3 | 90.7 | 47.0 | 57.7 | 54.2 | 50.9 | 64.2 | ||
Roadstar.ai_CV(SFNet) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Roadstar.ai-CV | Maosheng Ye, Guang Zhou, Tongyi Cao, YongTao Huang, Yinzi Chen | same focus net (SFNet), based on only fine labels, with focus on the loss distribution and the same focus on every layer of the feature map more details | 0.2 | 60.8 | 75.8 | 52.2 | 89.9 | 48.9 | 59.5 | 44.8 | 47.8 | 67.8 | |
DFN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Learning a Discriminative Feature Network for Semantic Segmentation | Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, Nong Sang | arxiv | Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset. more details | n/a | 58.3 | 69.7 | 47.5 | 90.3 | 43.6 | 54.1 | 52.4 | 46.2 | 62.5 |
RelationNet_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RelationNet: Learning Deep-Aligned Representation for Semantic Image Segmentation | Yueqing Zhuang | ICPR | Semantic image segmentation, which assigns labels in pixel level, plays a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning. However, one central problem of these methods is that deep convolution neural network gives little consideration to the correlation among pixels. To handle this issue, in this paper, we propose a novel deep neural network named RelationNet, which utilizes CNN and RNN to aggregate context information. Besides, a spatial correlation loss is applied to supervise RelationNet to align features of spatial pixels belonging to same category. Importantly, since it is expensive to obtain pixel-wise annotations, we exploit a new training method for combining the coarsely and finely labeled data. Separate experiments show the detailed improvements of each proposal. Experimental results demonstrate the effectiveness of our proposed method to the problem of semantic image segmentation. more details | n/a | 61.9 | 71.9 | 55.8 | 91.2 | 44.1 | 58.6 | 55.5 | 52.6 | 65.5 |
ARSAIT | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | anonymous more details | 1.0 | 48.2 | 61.7 | 38.4 | 88.2 | 27.6 | 42.8 | 33.6 | 37.0 | 56.7 | ||
Mapillary Research: In-Place Activated BatchNorm | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | In-Place Activated BatchNorm for Memory-Optimized Training of DNNs | Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder | arXiv | In-Place Activated Batch Normalization (InPlace-ABN) is a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and by recovering required information during the backward pass through the inversion of stored forward results, with only minor increase (0.8-2%) in computation time. Test results are obtained using a single model. more details | n/a | 65.9 | 73.6 | 54.2 | 90.1 | 55.9 | 65.3 | 66.5 | 56.4 | 65.6 |
EFBNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 59.9 | 68.7 | 49.8 | 90.2 | 43.0 | 55.5 | 61.0 | 49.0 | 62.3 | ||
Ladder DenseNet v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Journal submission | Anonymous | DenseNet-121 model used in downsampling path with ladder-style skip connections upsampling path on top of it. more details | 1.0 | 54.6 | 68.3 | 45.5 | 90.5 | 35.5 | 48.5 | 45.4 | 41.7 | 61.2 | |
ESPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation | Sachin Mehta, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi | We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively more details | 0.0089 | 31.8 | 45.8 | 19.2 | 81.7 | 15.2 | 24.3 | 16.8 | 16.2 | 35.5 | |
ENet with the Lovász-Softmax loss | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks | Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko | arxiv | The Lovász-Softmax loss is a novel surrogate for optimizing the IoU measure in neural networks. Here we finetune the weights provided by the authors of ENet (arXiv:1606.02147) with this loss, for 10'000 iterations on training dataset. The runtimes are unchanged with respect to the ENet architecture. more details | 0.013 | 34.1 | 42.8 | 26.3 | 79.4 | 18.1 | 25.9 | 21.7 | 22.1 | 36.3 |
DRN_CRL_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Relation Network: Learning Consistent and Context-Aware Representation For Semantic Image Segmentation | Yueqing Zhuang | ICIP | DRN_Coarse. Semantic image segmentation, which aims at assigning pixel-wise categories, is one of the challenging image understanding problems. Global context plays an important role in local pixel-wise category assignment. To make the best of global context, in this paper, we propose dense relation network (DRN) and context-restricted loss (CRL) to aggregate global and local information. DRN uses Recurrent Neural Network (RNN) with different skip lengths in spatial directions to get context-aware representations while CRL helps aggregate them to learn consistency. Compared with previous methods, our proposed method takes full advantage of hierarchical contextual representations to produce high-quality results. Extensive experiments demonstrate that our method achieves significant state-of-the-art performance on Cityscapes and Pascal Context benchmarks, with mean-IoU of 82.8% and 49.0% respectively. more details | n/a | 61.1 | 71.4 | 53.9 | 90.9 | 48.2 | 53.9 | 56.0 | 50.2 | 64.3
ShuffleSeg | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ShuffleSeg: Real-time Semantic Segmentation Network | Mostafa Gamal, Mennatullah Siam, Mo'men Abdel-Razek | Under Review by ICIP 2018 | ShuffleSeg: An efficient realtime semantic segmentation network with skip connections and ShuffleNet units more details | n/a | 32.4 | 44.0 | 19.9 | 80.0 | 16.2 | 22.7 | 22.2 | 16.2 | 37.7 |
SkipNet-MobileNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RTSeg: Real-time Semantic Segmentation Framework | Mennatullah Siam, Mostafa Gamal, Moemen Abdel-Razek, Senthil Yogamani, Martin Jagersand | Under Review by ICIP 2018 | An efficient realtime semantic segmentation network with skip connections based on MobileNet. more details | n/a | 35.2 | 45.4 | 26.1 | 80.1 | 17.6 | 27.6 | 23.5 | 22.1 | 38.9 |
ThunderNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.0104 | 40.4 | 53.7 | 30.3 | 84.0 | 21.0 | 34.9 | 32.6 | 26.5 | 40.6 | ||
PAC: Perspective-adaptive Convolutions | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Perspective-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) | Many existing scene parsing methods adopt Convolutional Neural Networks with receptive fields of fixed sizes and shapes, which frequently results in inconsistent predictions of large objects and invisibility of small objects. To tackle this issue, we propose perspective-adaptive convolutions to acquire receptive fields of flexible sizes and shapes during scene parsing. Through adding a new perspective regression layer, we can dynamically infer the position-adaptive perspective coefficient vectors utilized to reshape the convolutional patches. Consequently, the receptive fields can be adjusted automatically according to the various sizes and perspective deformations of the objects in scene images. Our proposed convolutions are differentiable to learn the convolutional parameters and perspective coefficients in an end-to-end way without any extra training supervision of object sizes. Furthermore, considering that the standard convolutions lack contextual information and spatial dependencies, we propose a context adaptive bias to capture both local and global contextual information through average pooling on the local feature patches and global feature maps, followed by flexible attentive summing to the convolutional results. The attentive weights are position-adaptive and context-aware, and can be learned through adding an additional context regression layer. Experiments on Cityscapes and ADE20K datasets well demonstrate the effectiveness of the proposed methods. more details | n/a | 55.7 | 67.2 | 48.5 | 90.4 | 42.3 | 51.7 | 42.2 | 43.2 | 60.0 |
SU_Net | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 52.3 | 62.4 | 42.7 | 89.0 | 35.5 | 46.6 | 43.4 | 42.4 | 56.4 | ||
MobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | MobileNetV2Plus more details | n/a | 46.8 | 59.8 | 40.9 | 85.7 | 30.3 | 37.1 | 31.3 | 34.4 | 54.8 | ||
DeepLabv3+ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation | Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam | arXiv | Spatial pyramid pooling modules or encoder-decoder structures are used in deep neural networks for the semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We will provide more details in the coming update on the arXiv report. more details | n/a | 62.4 | 73.1 | 53.7 | 91.4 | 47.1 | 58.8 | 56.3 | 53.2 | 65.8
RFMobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | Receptive Filed MobileNetV2Plus for Semantic Segmentation more details | n/a | 49.4 | 64.2 | 44.3 | 87.3 | 31.4 | 39.6 | 34.1 | 37.6 | 57.1 | ||
GoogLeNetV1_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | GoogLeNet-v1 FCN trained on Cityscapes, KITTI, and ScanNet, as required by the Robust Vision Challenge at CVPR'18 (http://robustvision.net/) more details | n/a | 35.1 | 46.9 | 27.2 | 83.4 | 16.1 | 24.4 | 18.3 | 22.6 | 42.2 | ||
SAITv2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 36.5 | 44.4 | 24.1 | 81.5 | 23.5 | 32.6 | 30.0 | 19.1 | 36.4 | ||
GUNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Guided Upsampling Network for Real-Time Semantic Segmentation | Davide Mazzini | arxiv | Guided Upsampling Network for Real-Time Semantic Segmentation more details | 0.03 | 40.8 | 53.7 | 29.9 | 85.6 | 21.3 | 33.8 | 30.4 | 25.0 | 47.1 |
RMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | A fast and light net for semantic segmentation. more details | 0.014 | 37.3 | 51.5 | 29.0 | 83.9 | 19.4 | 28.8 | 24.2 | 21.7 | 40.2 | ||
ContextNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ContextNet: Exploring Context and Detail for Semantic Segmentation in Real-time | Rudra PK Poudel, Ujwal Bonde, Stephan Liwicki, Christopher Zach | arXiv | Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naive adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representations to produce competitive semantic segmentation in real-time with low memory requirements. ContextNet combines a deep branch at low resolution that captures global context information efficiently with a shallow branch that focuses on high-resolution segmentation details. We analyze our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024x2048) resolution. more details | 0.0238 | 36.8 | 47.1 | 24.7 | 82.9 | 19.3 | 30.6 | 28.3 | 21.5 | 39.9 |
RFLR | yes | yes | yes | yes | yes | yes | no | no | no | no | 4 | 4 | no | no | Random Forest with Learned Representations for Semantic Segmentation | Byeongkeun Kang, Truong Q. Nguyen | IEEE Transactions on Image Processing | Random Forest with Learned Representations for Semantic Segmentation more details | 0.03 | 7.8 | 18.5 | 0.4 | 23.6 | 2.4 | 4.6 | 2.0 | 0.6 | 10.0 |
DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Searching for Efficient Multi-Scale Architectures for Dense Image Prediction | Liang-Chieh Chen, Maxwell D. Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, Jonathon Shlens | NIPS 2018 | In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that achieve state-of-the-art performance. Additionally, the resulting architecture (called DPC for Dense Prediction Cell) is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems. more details | n/a | 63.3 | 73.9 | 54.0 | 91.6 | 49.3 | 59.5 | 58.8 | 52.9 | 66.7 |
NV-ADLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 64.2 | 72.8 | 52.4 | 92.4 | 51.8 | 63.9 | 58.3 | 54.4 | 67.3 | ||
Adaptive Affinity Field on PSPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Adaptive Affinity Field for Semantic Segmentation | Tsung-Wei Ke*, Jyh-Jing Hwang*, Ziwei Liu, Stella X. Yu | ECCV 2018 | Existing semantic segmentation methods mostly rely on per-pixel supervision, unable to capture structural regularity present in natural images. Instead of learning to enforce semantic labels on individual pixels, we propose to enforce affinity field patterns in individual pixel neighbourhoods, i.e., the semantic label patterns of whether neighbouring pixels are in the same segment should match between the prediction and the ground-truth. The affinity fields characterize geometric relationships within the image, such as "motorcycles have round wheels". We further develop a novel method for learning the optimal neighbourhood size for each semantic category, with an adversarial loss that optimizes over worst-case scenarios. Unlike the common Conditional Random Field (CRF) approaches, our adaptive affinity field (AAF) method has no extra parameters during inference, and is less sensitive to appearance changes in the image. more details | n/a | 56.1 | 68.1 | 48.5 | 89.9 | 38.8 | 49.3 | 49.4 | 42.3 | 62.8 |
APMoE_seg_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Pixel-wise Attentional Gating for Parsimonious Pixel Labeling | Shu Kong, Charless Fowlkes | arxiv | The Pixel-level Attentional Gating (PAG) unit is trained to choose, for each pixel, the pooling size used to aggregate the contextual region around it. There are multiple branches with different dilation rates for varied pooling sizes, and thus varying receptive fields. For this ROB challenge, PAG is expected to robustly aggregate information for the final prediction. This is our entry for the Robust Vision Challenge 2018 workshop (ROB). The model is based on ResNet50, trained over a mixed dataset of Cityscapes, ScanNet and KITTI. more details | 0.9 | 30.6 | 47.2 | 10.8 | 84.4 | 23.3 | 25.3 | 9.1 | 11.8 | 32.7
BatMAN_ROB | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | batch-normalized multistage attention network more details | 1.0 | 29.3 | 44.4 | 9.4 | 83.5 | 10.6 | 24.8 | 8.2 | 13.7 | 39.7 | ||
HiSS_ROB | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.06 | 32.1 | 40.9 | 19.8 | 79.9 | 17.5 | 27.2 | 21.6 | 13.4 | 36.7 | ||
VENUS_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | VENUS_ROB more details | n/a | 37.1 | 51.0 | 22.9 | 83.3 | 21.0 | 31.8 | 24.7 | 20.1 | 42.2 | ||
VlocNet++_ROB | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 33.9 | 40.8 | 23.9 | 82.4 | 18.1 | 30.0 | 16.1 | 21.1 | 38.6 | ||
AHiSS_ROB | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Augmented Hierarchical Semantic Segmentation more details | 0.06 | 39.8 | 45.0 | 27.0 | 82.7 | 29.7 | 34.9 | 31.7 | 24.1 | 43.1 | ||
IBN-PSP-SA_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | IBN-PSP-SA_ROB more details | n/a | 46.3 | 57.5 | 35.1 | 88.0 | 30.2 | 44.8 | 33.2 | 29.4 | 52.4 | ||
LDN2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Ladder DenseNet: https://ivankreso.github.io/publication/ladder-densenet/ more details | 1.0 | 52.3 | 65.1 | 42.9 | 90.1 | 34.9 | 48.2 | 39.8 | 38.9 | 58.2 | ||
MiniNet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.004 | 15.8 | 22.7 | 5.8 | 67.3 | 4.4 | 7.1 | 3.8 | 3.8 | 11.6 | ||
AdapNetv2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 34.9 | 42.9 | 23.7 | 83.5 | 18.4 | 31.6 | 17.0 | 21.7 | 40.5 | ||
MapillaryAI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 60.1 | 70.2 | 50.6 | 90.5 | 42.9 | 58.5 | 54.8 | 49.4 | 63.5 | ||
FCN101_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 11.3 | 12.2 | 0.0 | 65.5 | 4.1 | 0.0 | 4.1 | 0.0 | 4.2 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name is firefly] | Bosch autodrive challenge more details | n/a | 48.5 | 58.3 | 41.8 | 81.9 | 34.3 | 43.1 | 41.5 | 31.5 | 55.4 | |
EnsembleModel_Bosch | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name was MaskRCNN_BOSH,firefly] | We ensembled three models (ERFNet, DeepLab-MobileNet, TuSimple) and gained a 0.57 improvement in the IoU Classes value. The best single model achieves 73.8549. more details | n/a | 48.9 | 60.4 | 41.1 | 86.2 | 30.6 | 44.0 | 40.1 | 33.7 | 55.4 | |
EVANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 44.0 | 60.3 | 35.5 | 86.6 | 23.0 | 35.6 | 31.2 | 31.0 | 48.8 | ||
CLRCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CLRCNet: Cascaded Low-Rank Convolutions for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method. more details | 0.013 | 35.9 | 51.8 | 26.6 | 84.2 | 13.6 | 27.5 | 22.0 | 18.3 | 43.4 | |
Edgenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | A lightweight semantic segmentation network combined with edge information and channel-wise attention mechanism. more details | 0.03 | 46.6 | 62.5 | 38.1 | 88.4 | 25.4 | 38.3 | 33.0 | 34.4 | 52.5 | ||
L2-SP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Explicit Inductive Bias for Transfer Learning with Convolutional Networks | Xuhong Li, Yves Grandvalet, Franck Davoine | ICML 2018 | With a simple variant of weight decay, L2-SP regularization (see the paper for details), we reproduced PSPNet based on the original ResNet-101 using the "train_fine + val_fine + train_extra" set (2975 + 500 + 20000 images), with a small batch size of 8. The sync batch normalization layer is implemented in TensorFlow (see the code). more details | n/a | 58.1 | 67.9 | 49.8 | 90.1 | 42.6 | 52.8 | 51.9 | 47.7 | 62.3
ALV303 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.2 | 52.0 | 67.7 | 47.4 | 90.2 | 31.1 | 46.1 | 34.0 | 40.1 | 59.5 | ||
NCTU-ITRI | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | For the purpose of fast semantic segmentation, we design a CNN-based encoder-decoder architecture, which is called DSNet. The encoder part is constructed based on the concept of DenseNet, and a simple decoder is adopted to make the network more efficient without degrading the accuracy. We pre-train the encoder network on the ImageNet dataset. Then, only the fine-annotated Cityscapes dataset (2975 training images) is used to train the complete DSNet. The DSNet demonstrates a good trade-off between accuracy and speed. It can process 68 frames per second on 1024x512 resolution images on a single GTX 1080 Ti GPU. more details | 0.0147 | 41.4 | 56.9 | 33.2 | 85.6 | 19.3 | 33.3 | 31.9 | 27.0 | 44.4 | ||
ADSCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ADSCNet: Asymmetric Depthwise Separable Convolution for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method for mobile devices. more details | 0.013 | 36.8 | 53.5 | 25.2 | 84.4 | 16.6 | 25.5 | 24.4 | 19.9 | 44.6 | |
SRC-B-MachineLearningLab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | Samsung Research Center MachineLearningLab. The result is tested with multi-scale and flip. The paper is in preparation. more details | n/a | 60.7 | 72.9 | 51.4 | 91.0 | 44.2 | 57.0 | 53.9 | 49.5 | 65.3 | |
Tencent AI Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 63.9 | 71.5 | 55.8 | 90.1 | 50.6 | 63.2 | 56.1 | 57.1 | 66.5 | ||
ERINet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Efficient residual inception networks for real-time semantic segmentation more details | 0.023 | 44.1 | 59.7 | 35.4 | 87.6 | 23.2 | 31.9 | 32.1 | 33.2 | 49.6 | ||
PGCNet_Res101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We choose ResNet101 pretrained on ImageNet as our backbone, then use both the train-fine and the val-fine data to train our model with batch size 8 for 80k iterations, without any bells and whistles. We will release our paper later. more details | n/a | 60.7 | 72.2 | 53.8 | 90.6 | 43.7 | 57.7 | 49.5 | 52.0 | 66.5 | |
EDANet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient Dense Modules of Asymmetric Convolution for Real-Time Semantic Segmentation | Shao-Yuan Lo (NCTU), Hsueh-Ming Hang (NCTU), Sheng-Wei Chan (ITRI), Jing-Jhih Lin (ITRI) | Training data: Fine annotations only (train+val. set, 2975+500 images) without any pretraining nor coarse annotations. For training on fine annotations (train set only, 2975 images), it attains a mIoU of 66.3%. Runtime: (resolution 512x1024) 0.0092s on a single GTX 1080Ti, 0.0123s on a single Titan X. more details | 0.0092 | 41.8 | 54.9 | 32.7 | 85.1 | 18.6 | 35.4 | 31.9 | 28.4 | 47.1 | |
OCNet_ResNet101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Context is essential for various computer vision tasks. The state-of-the-art scene parsing methods define the context as the prior of the scene categories (e.g., bathroom, bedroom, street). Such scene context is not suitable for street scene parsing tasks as most of the scenes are similar. In this work, we propose the Object Context, which captures the prior of the object category that each pixel belongs to. We compute the object context by aggregating all pixels' features according to an attention map that encodes the probability that each pixel belongs to the same category as the associated pixel. Specifically, we employ the self-attention method to compute the pixel-wise attention map. We further propose the Pyramid Object Context and Atrous Spatial Pyramid Object Context to handle the multi-scale problem. more details | n/a | 61.3 | 72.1 | 54.6 | 90.7 | 43.9 | 57.5 | 53.3 | 51.3 | 66.7 | |
Knowledge-Aware | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Knowledge-Aware Semantic Segmentation more details | n/a | 55.6 | 67.6 | 45.6 | 89.3 | 36.3 | 49.4 | 52.6 | 43.6 | 59.9 | ||
CASIA_IVA_DANet_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Dual Attention Network for Scene Segmentation | Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu | CVPR 2019 | We address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture contexts by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of a traditional dilated FCN, which model the semantic interdependencies in the spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve the feature representation, which contributes to more precise segmentation results. more details | n/a | 62.3 | 73.8 | 54.6 | 91.9 | 46.2 | 60.3 | 52.5 | 51.9 | 66.7
LDFNet | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Incorporating Luminance, Depth and Color Information by Fusion-based Networks for Semantic Segmentation | Shang-Wei Hung, Shao-Yuan Lo | We propose a solution that incorporates luminance, depth and color information via a fusion-based network named LDFNet. It includes a distinctive encoder sub-network to process the depth maps and further employs the luminance images to assist the processing of the depth information. LDFNet achieves very competitive results compared to the other state-of-the-art systems on the challenging Cityscapes dataset, while maintaining an inference speed faster than most of the existing top-performing networks. The experimental results show the effectiveness of the proposed information-fusion approach and the potential of LDFNet for road scene understanding tasks. more details | n/a | 46.3 | 61.4 | 38.4 | 87.7 | 27.9 | 39.1 | 31.4 | 33.0 | 51.8 |
CGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tianyi Wu et al | we propose a novel Context Guided Network for semantic segmentation on mobile devices. We first design a Context Guided (CG) block by considering the inherent characteristic of semantic segmentation. CG Block aggregates local feature, surrounding context feature and global context feature effectively and efficiently. Based on the CG block, we develop Context Guided Network (CGNet), which not only has a strong capacity of localization and recognition, but also has a low computational and memory footprint. Under a similar number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, the proposed approach achieves 64.8% mean IoU on Cityscapes test set with less than 0.5 M parameters, and has a frame-rate of 50 fps on one NVIDIA Tesla K80 card for 2048 × 1024 high-resolution image. more details | 0.02 | 35.9 | 52.1 | 29.3 | 84.4 | 15.8 | 30.0 | 16.0 | 21.0 | 38.7 | ||
SAITv2-light | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 44.0 | 53.4 | 34.9 | 86.0 | 27.2 | 38.2 | 35.3 | 29.2 | 47.8 | ||
Deform_ResNet_Balanced | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.258 | 31.2 | 48.6 | 19.3 | 77.4 | 7.9 | 23.2 | 14.6 | 18.8 | 40.0 | ||
NfS-Seg | yes | yes | yes | yes | no | no | yes | yes | yes | yes | no | no | no | no | Uncertainty-Aware Knowledge Distillation for Real-Time Scene Segmentation: 7.43 GFLOPs at Full-HD Image with 120 fps | Anonymous | more details | 0.00837312 | 44.4 | 54.4 | 35.7 | 86.3 | 26.2 | 37.9 | 36.5 | 29.9 | 48.5 | |
Improving Semantic Segmentation via Video Propagation and Label Relaxation | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | yes | yes | Improving Semantic Segmentation via Video Propagation and Label Relaxation | Yi Zhu, Karan Sapra, Fitsum A. Reda, Kevin J. Shih, Shawn Newsam, Andrew Tao, Bryan Catanzaro | CVPR 2019 | Semantic segmentation requires large amounts of pixel-wise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models' ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples leads to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. more details | n/a | 64.4 | 72.9 | 55.2 | 92.2 | 50.1 | 61.0 | 63.0 | 53.8 | 66.8
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 39.0 | 51.2 | 29.4 | 83.6 | 20.3 | 34.0 | 24.6 | 25.3 | 43.6 |
SwiftNetRN-18 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 0.0243 | 52.0 | 65.5 | 43.5 | 89.2 | 33.0 | 43.8 | 44.4 | 37.7 | 58.8 |
Fast-SCNN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra PK Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0081 | 37.9 | 45.1 | 25.5 | 83.3 | 20.0 | 32.0 | 34.2 | 20.7 | 42.3 | |
Fast-SCNN (Half-resolution) | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0035 | 31.9 | 36.7 | 17.2 | 79.6 | 16.8 | 27.0 | 29.9 | 14.9 | 33.1 | |
Fast-SCNN (Quarter-resolution) | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.00206 | 23.0 | 28.2 | 10.2 | 70.7 | 10.4 | 18.1 | 18.2 | 6.9 | 21.5 | |
DSNet | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | DSNet for Real-Time Driving Scene Semantic Segmentation | Wenfu Wang | DSNet for Real-Time Driving Scene Semantic Segmentation more details | 0.027 | 42.8 | 57.1 | 35.0 | 85.4 | 21.9 | 37.1 | 28.4 | 30.1 | 47.5 | |
SwiftNetRN-18 pyramid | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 48.4 | 62.7 | 41.3 | 87.7 | 30.3 | 39.9 | 34.3 | 35.5 | 55.6 | ||
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | DF1-Seg-d8 more details | 0.007 | 45.0 | 55.9 | 34.7 | 84.3 | 26.2 | 39.3 | 37.9 | 29.9 | 51.8 |
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DF2-Seg2 more details | 0.018 | 50.2 | 59.4 | 41.3 | 87.9 | 31.2 | 46.0 | 45.2 | 34.8 | 55.7 | ||
DDAR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | DiDi Labs, AR Group more details | n/a | 62.7 | 73.0 | 54.5 | 90.8 | 47.7 | 58.7 | 57.5 | 52.6 | 66.7 | ||
LDN-121 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-121 trained on train+val, fine labels only. Single-scale inference. more details | 0.048 | 54.7 | 67.3 | 45.2 | 90.0 | 38.1 | 46.7 | 48.0 | 41.1 | 60.9 | |
TKCN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tree-structured Kronecker Convolutional Network for Semantic Segmentation | Tianyi Wu, Sheng Tang, Rui Zhang, Juan Cao, Jintao Li | more details | n/a | 61.3 | 72.4 | 52.9 | 90.9 | 43.5 | 58.3 | 54.5 | 51.1 | 66.7 | |
RPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Residual Pyramid Learning for Single-Shot Semantic Segmentation | Xiaoyu Chen, Xiaotian Lou, Lianfa Bai, Jing Han | arXiv | we put forward a method for single-shot segmentation in a feature residual pyramid network (RPNet), which learns the main and residuals of segmentation by decomposing the label at different levels of residual blocks. more details | 0.008 | 43.6 | 57.7 | 34.2 | 87.2 | 24.5 | 34.0 | 28.7 | 30.9 | 51.8 |
navi | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | yuxb | multi-scale test more details | n/a | 60.1 | 67.0 | 49.7 | 91.5 | 47.4 | 52.2 | 60.7 | 47.5 | 64.8 | |
Auto-DeepLab-L | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation | Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan Yuille, Li Fei-Fei | arxiv | In this work, we study Neural Architecture Search for semantic image segmentation, an important computer vision task that assigns a semantic label to every pixel in an image. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Without any ImageNet pretraining, our architecture searched specifically for semantic image segmentation attains state-of-the-art performance. Please refer to https://arxiv.org/abs/1901.02985 for details. more details | n/a | 61.0 | 73.4 | 51.3 | 91.6 | 43.8 | 56.8 | 55.7 | 50.1 | 65.3 |
LiteSeg-Darknet19 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0102 | 49.1 | 64.3 | 40.9 | 88.3 | 29.7 | 41.1 | 35.7 | 37.0 | 55.6
AdapNet++ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | In this work, we propose the AdapNet++ architecture for semantic segmentation that aims to achieve the right trade-off between performance and computational complexity of the model. AdapNet++ incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling (eASPP) module that has a larger effective receptive field with more than 10x fewer parameters compared to the standard ASPP, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance while simultaneously being efficient in terms of both the number of parameters and inference time. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 59.5 | 70.3 | 50.0 | 90.3 | 44.6 | 55.1 | 54.4 | 48.2 | 63.3 |
SSMA | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real-world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed SSMA fusion mechanism which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. Extensive experimental evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance in addition to providing exceptional robustness in adverse perceptual conditions. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 62.3 | 72.6 | 52.4 | 91.4 | 47.8 | 58.1 | 58.6 | 51.7 | 65.4
LiteSeg-Mobilenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0062 | 45.3 | 60.6 | 37.2 | 84.0 | 26.4 | 35.0 | 33.8 | 33.2 | 51.9
LiteSeg-Shufflenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.007518 | 41.0 | 52.7 | 33.3 | 81.7 | 22.7 | 33.7 | 30.7 | 26.6 | 46.7
Fast OCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 61.0 | 71.6 | 54.3 | 91.0 | 43.3 | 57.5 | 54.5 | 51.8 | 64.0 | ||
ShuffleNet v2 + DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | An efficient solution for semantic segmentation: ShuffleNet V2 with atrous separable convolutions | Sercan Turkmen, Janne Heikkila | ShuffleNet v2 with DPC at output_stride 16. more details | n/a | 43.6 | 55.5 | 34.9 | 85.7 | 27.0 | 35.6 | 30.4 | 30.6 | 49.0 | |
ERSNet-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.012 | 39.2 | 51.6 | 28.1 | 85.1 | 20.4 | 33.5 | 29.5 | 24.0 | 41.3 | ||
MiniNet-v2-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 39.2 | 52.9 | 27.8 | 84.7 | 19.6 | 34.0 | 30.6 | 23.1 | 41.3 | ||
SwiftNetRN-18 ensemble | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | n/a | 51.4 | 65.0 | 43.5 | 89.1 | 32.5 | 44.0 | 41.2 | 37.8 | 58.3 |
EFC_sync | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 56.7 | 66.5 | 47.3 | 90.6 | 39.8 | 51.0 | 52.6 | 43.8 | 61.7 | ||
PL-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | Following "partial order pruning", we conduct architecture search experiments on the Snapdragon 845 platform and obtain PL1A/PL1A-Seg. 1. Snapdragon 845; 2. NCNN library; 3. latency evaluated at 640x384. more details | 0.0192 | 41.2 | 51.5 | 32.5 | 85.1 | 22.1 | 35.9 | 30.2 | 24.7 | 48.0
MiniNet-v2-pretrained | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 39.7 | 52.1 | 29.7 | 85.4 | 20.1 | 33.5 | 30.8 | 23.4 | 42.8 | ||
GALD-Net | yes | yes | yes | yes | yes | yes | yes | yes | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively according to the local information around the position. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research ) more details | n/a | 64.5 | 73.7 | 56.7 | 91.1 | 52.8 | 59.6 | 60.4 | 53.7 | 68.1 |
GALD-net | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively according to the local information surrounding the position. more details | n/a | 63.5 | 72.9 | 55.8 | 90.9 | 51.2 | 58.5 | 58.7 | 52.5 | 67.3
ndnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.024 | 34.7 | 47.4 | 24.1 | 83.9 | 19.2 | 26.4 | 18.7 | 18.9 | 39.3 | ||
HRNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions | Ke Sun, Yang Zhao, Borui Jiang, Tianheng Cheng, Bin Xiao, Dong Liu, Yadong Mu, Xinggang Wang, Wenyu Liu, Jingdong Wang | The high-resolution network (HRNet) recently developed for human pose estimation, maintains high-resolution representations through the whole process by connecting high-to-low resolution convolutions in parallel and produces strong high-resolution representations by repeatedly conducting fusions across parallel convolutions. more details | n/a | 61.2 | 73.8 | 56.3 | 91.3 | 41.6 | 56.3 | 51.5 | 52.2 | 66.2 | |
SPGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SPGNet: Semantic Prediction Guidance for Scene Parsing | Bowen Cheng, Liang-Chieh Chen, Yunchao Wei, Yukun Zhu, Zilong Huang, Jinjun Xiong, Thomas Huang, Wen-Mei Hwu, Honghui Shi | ICCV 2019 | Multi-scale context module and single-stage encoder-decoder structure are commonly employed for semantic segmentation. The multi-scale context module refers to the operations to aggregate feature responses from a large spatial extent, while the single-stage encoder-decoder structure encodes the high-level semantic information in the encoder path and recovers the boundary information in the decoder path. In contrast, multi-stage encoder-decoder networks have been widely used in human pose estimation and show superior performance than their single-stage counterpart. However, few efforts have been attempted to bring this effective design to semantic segmentation. In this work, we propose a Semantic Prediction Guidance (SPG) module which learns to re-weight the local features through the guidance from pixel-wise semantic prediction. We find that by carefully re-weighting features across stages, a two-stage encoder-decoder network coupled with our proposed SPG module can significantly outperform its one-stage counterpart with similar parameters and computations. Finally, we report experimental results on the semantic segmentation benchmark Cityscapes, in which our SPGNet attains 81.1% on the test set using only 'fine' annotations. more details | n/a | 61.4 | 73.5 | 54.8 | 91.6 | 42.6 | 57.6 | 53.7 | 51.3 | 66.6 |
LDN-161 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-161 trained on train+val, fine labels only. Inference on multi-scale inputs. more details | 2.0 | 56.4 | 68.2 | 48.4 | 91.2 | 39.1 | 50.2 | 48.8 | 43.8 | 61.6 | |
GGCF | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 63.0 | 72.2 | 54.2 | 91.1 | 50.2 | 59.4 | 59.7 | 51.6 | 65.6 | ||
GFF-Net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | GFF: Gated Fully Fusion for Semantic Segmentation | Xiangtai Li, Houlong Zhao, Yunhai Tong, Kuiyuan Yang | We proposed Gated Fully Fusion (GFF) to fuse features from multiple levels through gates in a fully connected way. Specifically, features at each level are enhanced by higher-level features with stronger semantics and lower-level features with more details, and gates are used to control the passing of useful information, which significantly reduces noise propagation during fusion. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research ) more details | n/a | 62.1 | 72.7 | 53.7 | 91.1 | 46.0 | 58.1 | 56.8 | 51.6 | 67.1 |
Gated-SCNN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Gated-SCNN: Gated Shape CNNs for Semantic Segmentation | Towaki Takikawa, David Acuna, Varun Jampani, Sanja Fidler | more details | n/a | 64.3 | 73.7 | 55.6 | 92.3 | 49.1 | 61.9 | 61.4 | 52.9 | 67.4 | |
ESPNetv2 | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network | Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi | CVPR 2019 | We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2 more details | n/a | 36.0 | 50.5 | 23.5 | 83.4 | 19.3 | 31.5 | 22.5 | 19.2 | 38.3 |
MRFM | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Multi Receptive Field Network for Semantic Segmentation | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | WACV2020 | Semantic segmentation is one of the key tasks in computer vision, which is to assign a category label to each pixel in an image. Despite significant progress achieved recently, most existing methods still suffer from two challenging issues: 1) the size of objects and stuff in an image can be very diverse, demanding for incorporating multi-scale features into the fully convolutional networks (FCNs); 2) the pixels close to or at the boundaries of object/stuff are hard to classify due to the intrinsic weakness of convolutional networks. To address the first issue, we propose a new Multi-Receptive Field Module (MRFM), explicitly taking multi-scale features into account. For the second issue, we design an edge-aware loss which is effective in distinguishing the boundaries of object/stuff. With these two designs, our Multi Receptive Field Network achieves new state-of-the-art results on two widely-used semantic segmentation benchmark datasets. Specifically, we achieve a mean IoU of 83.0% on the Cityscapes dataset and 88.4% mean IoU on the Pascal VOC2012 dataset. more details | n/a | 62.2 | 74.1 | 54.7 | 91.3 | 47.7 | 57.0 | 53.9 | 51.4 | 67.6
DGCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Dual Graph Convolutional Network for Semantic Segmentation | Li Zhang*, Xiangtai Li*, Anurag Arnab, Kuiyuan Yang, Yunhai Tong, Philip H.S. Torr | BMVC 2019 | We propose Dual Graph Convolutional Network (DGCNet) models the global context of the input feature by modelling two orthogonal graphs in a single framework. (Joint work: University of Oxford, Peking University and DeepMotion AI Research) more details | n/a | 61.7 | 72.0 | 53.9 | 91.2 | 47.1 | 57.6 | 54.0 | 51.9 | 66.2 |
dpcan_trainval_os16_225 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 61.1 | 72.1 | 52.1 | 90.9 | 45.0 | 56.4 | 57.8 | 49.5 | 64.8 | ||
Learnable Tree Filter | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Learnable Tree Filter for Structure-preserving Feature Transform | Lin Song; Yanwei Li; Zeming Li; Gang Yu; Hongbin Sun; Jian Sun; Nanning Zheng | NeurIPS 2019 | Learnable Tree Filter for Structure-preserving Feature Transform more details | n/a | 60.7 | 71.9 | 51.1 | 91.1 | 44.2 | 55.4 | 56.4 | 51.5 | 64.4 |
FreeNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 40.3 | 56.8 | 33.9 | 84.5 | 25.0 | 31.2 | 25.3 | 26.6 | 39.4 | ||
HRNetV2 + OCR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions; OCNet: Object Context Network for Scene Parsing | HRNet Team; OCR Team | HRNetV2W48 + OCR. OCR is an extension of object context networks https://arxiv.org/pdf/1809.00916.pdf more details | n/a | 62.0 | 73.0 | 54.1 | 91.2 | 43.8 | 59.1 | 57.8 | 51.0 | 65.6 | |
Valeo DAR Germany | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Valeo DAR Germany, New Algo Lab more details | n/a | 62.9 | 73.3 | 55.1 | 91.6 | 47.3 | 58.6 | 57.8 | 53.2 | 66.8 | ||
GLNet_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | The proposed network architecture combines spatial information with multi-scale context information, and repairs the boundaries and details of the segmented objects through channel attention modules. (Uses the train-fine and the val-fine data.) more details | n/a | 58.3 | 69.4 | 50.0 | 90.5 | 39.7 | 54.6 | 52.7 | 47.3 | 62.1 | |
MCDN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 62.4 | 71.2 | 53.8 | 91.5 | 50.5 | 62.9 | 53.4 | 51.3 | 64.7 | ||
AAF+GLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 54.9 | 68.8 | 48.5 | 90.4 | 35.6 | 47.1 | 45.8 | 41.2 | 62.0 | ||
HRNetV2 + OCR (w/ ASP) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | openseg-group (OCR team + HRNet team) | Our approach is based on a single HRNet48V2 and an OCR module combined with ASPP. We apply depth-based multi-scale ensemble weights during testing (provided by DeepMotion AI Research). more details | n/a | 64.8 | 76.0 | 57.5 | 91.7 | 49.6 | 62.0 | 58.4 | 55.3 | 68.2 | |
CASIA_IVA_DRANet-101_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 66.1 | 76.9 | 58.2 | 92.3 | 50.2 | 65.5 | 58.6 | 56.5 | 70.4 | ||
Hyundai Mobis AD Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Hyundai Mobis AD Lab, DL-DB Group, AA (Automated Annotator) Team | more details | n/a | 65.0 | 73.7 | 55.3 | 92.2 | 52.2 | 61.3 | 63.9 | 54.6 | 67.0 | ||
EFRNet-13 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0146 | 43.6 | 56.2 | 32.0 | 85.4 | 23.4 | 38.6 | 36.7 | 29.8 | 47.0 | ||
FarSee-Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale Context Aggregation and Feature Space Super-resolution | Zhanpeng Zhang and Kaipeng Zhang | IEEE International Conference on Robotics and Automation (ICRA) 2020 | FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale Context Aggregation and Feature Space Super-resolution. Real-time semantic segmentation is desirable in many robotic applications with limited computation resources. One challenge of semantic segmentation is to deal with the object scale variations and leverage the context. How to perform multi-scale context aggregation within a limited computation budget is important. In this paper, firstly, we introduce a novel and efficient module called Cascaded Factorized Atrous Spatial Pyramid Pooling (CF-ASPP). It is a lightweight cascaded structure for Convolutional Neural Networks (CNNs) to efficiently leverage context information. On the other hand, for runtime efficiency, state-of-the-art methods will quickly decrease the spatial size of the inputs or feature maps in the early network stages. The final high-resolution result is usually obtained by a non-parametric up-sampling operation (e.g. bilinear interpolation). Differently, we rethink this pipeline and treat it as a super-resolution process. We use an optimized super-resolution operation in the up-sampling step and improve the accuracy, especially in the sub-sampled input image scenario for real-time applications. By fusing the above two improvements, our methods provide a better latency-accuracy trade-off than the other state-of-the-art methods. In particular, we achieve 68.4% mIoU at 84 fps on the Cityscapes test set with a single Nvidia Titan X (Maxwell) GPU card. The proposed module can be plugged into any feature extraction CNN and benefits from the CNN structure development. more details | 0.0119 | 39.3 | 53.9 | 29.0 | 86.1 | 19.3 | 30.2 | 28.1 | 23.4 | 44.7
C3Net [2,3,7,13] | no | no | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | C3: Concentrated-Comprehensive Convolution and its application to semantic segmentation | Hyojin Park, Youngjoon Yoo, Geonseok Seo, Dongyoon Han, Sangdoo Yun, Nojun Kwak | more details | n/a | 37.3 | 52.1 | 28.4 | 83.8 | 18.3 | 29.6 | 23.7 | 20.6 | 42.2 | |
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. more details | n/a | 58.5 | 70.6 | 55.1 | 86.7 | 34.5 | 55.2 | 55.7 | 52.0 | 57.8 | |
EKENet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0229 | 44.1 | 56.0 | 31.8 | 85.3 | 24.2 | 41.2 | 37.8 | 30.1 | 46.7 | ||
SPSSN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Stage Pooling Semantic Segmentation Network more details | n/a | 41.8 | 55.3 | 32.2 | 85.9 | 22.0 | 35.9 | 29.7 | 28.5 | 45.0 | ||
FC-HarDNet-70 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | HarDNet: A Low Memory Traffic Network | Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, Youn-Long Lin | ICCV 2019 | Fully Convolutional Harmonic DenseNet 70. U-shape encoder-decoder structure with HarDNet blocks. Trained with single-scale loss at stride-4; validation mIoU = 77.7. more details | 0.015 | 51.4 | 65.0 | 43.0 | 89.2 | 34.0 | 45.2 | 37.4 | 38.8 | 58.4
BFP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Boundary-Aware Feature Propagation for Scene Segmentation | Henghui Ding, Xudong Jiang, Ai Qun Liu, Nadia Magnenat Thalmann, and Gang Wang | IEEE International Conference on Computer Vision (ICCV), 2019 | Boundary-Aware Feature Propagation for Scene Segmentation more details | n/a | 62.3 | 72.1 | 52.4 | 91.2 | 49.9 | 58.4 | 59.5 | 49.9 | 65.1 |
FasterSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | FasterSeg: Searching for Faster Real-time Semantic Segmentation | Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang | ICLR 2020 | We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to "collapsing" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student model's accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy. more details | 0.00613 | 44.3 | 61.0 | 37.9 | 87.6 | 21.0 | 34.3 | 28.5 | 30.5 | 53.9 |
VCD-NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 64.2 | 75.5 | 57.0 | 91.4 | 46.9 | 59.7 | 61.5 | 53.7 | 68.0 | ||
NAVINFO_DLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | pengfei zhang | Weighted ASPP + OHEM + hard-region refinement. more details | n/a | 65.6 | 76.5 | 57.8 | 91.4 | 53.2 | 61.8 | 61.2 | 54.6 | 68.3 | |
LBPSS | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | CVPR 2020 submission #5455 more details | 0.9 | 34.4 | 52.0 | 22.2 | 82.3 | 14.5 | 26.9 | 20.9 | 14.8 | 41.3 | ||
KANet_Res101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 63.1 | 74.1 | 55.7 | 91.5 | 49.0 | 57.0 | 57.8 | 52.6 | 67.1 | ||
Learnable Tree Filter V2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Rethinking Learnable Tree Filter for Generic Feature Transform | Lin Song, Yanwei Li, Zhengkai Jiang, Zeming Li, Xiangyu Zhang, Hongbin Sun, Jian Sun, Nanning Zheng | NeurIPS 2020 | Based on ResNet-101 backbone and FPN architecture. more details | n/a | 64.0 | 76.3 | 57.4 | 91.5 | 49.2 | 58.4 | 56.6 | 54.4 | 67.9 |
GPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 62.4 | 72.9 | 55.1 | 91.3 | 44.7 | 58.3 | 57.9 | 52.1 | 67.2 | ||
FTFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | An Efficient Network Focused on Tiny Feature Maps for Real-Time Semantic Segmentation more details | 0.0088 | 45.8 | 57.2 | 35.4 | 86.1 | 26.4 | 41.2 | 39.3 | 31.0 | 50.1 | ||
iFLYTEK-CV | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 64.9 | 73.9 | 55.7 | 91.8 | 51.3 | 61.5 | 62.9 | 54.7 | 67.8 | ||
F2MF-short | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 3 timesteps into the future. more details | n/a | 43.6 | 46.3 | 31.3 | 77.7 | 30.4 | 43.0 | 47.0 | 30.4 | 43.1 |
HPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | High-Order Paired-ASPP Networks for Semantic Segmentation | Yu Zhang, Xin Sun, Junyu Dong, Changrui Chen, Yue Shen | more details | n/a | 60.6 | 70.5 | 51.6 | 91.0 | 45.3 | 58.1 | 53.2 | 49.5 | 65.4 | |
HANet (fine-train only) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | TBA | Anonymous | We use only fine-training data. more details | n/a | 58.6 | 68.8 | 48.6 | 91.5 | 39.0 | 54.1 | 55.9 | 47.8 | 63.3 | |
F2MF-mid | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 9 timesteps into the future. more details | n/a | 32.9 | 31.7 | 17.1 | 65.5 | 24.1 | 34.3 | 41.8 | 20.6 | 28.3 |
EMANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Expectation Maximization Attention Networks for Semantic Segmentation | Xia Li, Zhisheng Zhong, Jianlong Wu, Yibo Yang, Zhouchen Lin, Hong Liu | ICCV 2019 | more details | n/a | 61.3 | 71.3 | 53.6 | 90.4 | 44.5 | 59.6 | 54.0 | 51.1 | 66.1 |
PartnerNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | PartnerNet: A Lightweight and Efficient Partner Network for Semantic Segmentation more details | 0.0058 | 48.3 | 61.0 | 40.1 | 86.8 | 28.8 | 42.6 | 36.2 | 34.9 | 56.0 | |
SwiftNet RN18 pyr sepBN MVD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient semantic segmentation with pyramidal fusion | M Oršić, S Šegvić | Pattern Recognition 2020 | more details | 0.029 | 52.9 | 66.2 | 43.6 | 90.0 | 33.2 | 48.3 | 42.3 | 41.6 | 58.1 |
Tencent YYB VisualAlgo | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Tencent YYB VisualAlgo Group more details | n/a | 64.8 | 72.8 | 55.5 | 92.2 | 52.4 | 61.7 | 63.1 | 53.8 | 66.7 | ||
MoKu Lab | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Alibaba, MoKu AI Lab, CV Group more details | n/a | 65.1 | 74.6 | 55.9 | 92.4 | 50.6 | 63.0 | 62.6 | 53.7 | 68.1 | ||
HRNetV2 + OCR + SegFix | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Object-Contextual Representations for Semantic Segmentation | Yuhui Yuan, Xilin Chen, Jingdong Wang | First, we pre-train "HRNet+OCR" method on the Mapillary training set (achieves 50.8% on the Mapillary val set). Second, we fine-tune the model with the Cityscapes training, validation and coarse set. Finally, we apply the "SegFix" scheme to further improve the results. more details | n/a | 65.9 | 76.3 | 56.9 | 91.9 | 51.3 | 65.2 | 62.7 | 54.9 | 68.3 | |
DecoupleSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improving Semantic Segmentation via Decoupled Body and Edge Supervision | Xiangtai Li, Xia Li, Li Zhang, Guangliang Cheng, Jianping Shi, Zhouchen Lin, Shaohua Tan, and Yunhai Tong | ECCV-2020 | In this paper, we propose a new paradigm for semantic segmentation. Our insight is that appealing performance of semantic segmentation requires explicitly modeling the object body and edge, which correspond to the high and low frequency of the image. To do so, we first warp the image feature by learning a flow field to make the object part more consistent. The resulting body feature and the residual edge feature are further optimized under decoupled supervision by explicitly sampling different parts (body or edge) pixels. The code and models have been released. more details | n/a | 64.4 | 72.1 | 55.6 | 92.0 | 50.9 | 61.7 | 62.1 | 54.4 | 66.6
LGE A&B Center: HANet (ResNet-101) | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val", No coarse, Backbone: ImageNet pretrained ResNet-101 more details | n/a | 62.2 | 71.4 | 54.7 | 91.6 | 46.2 | 60.5 | 57.0 | 50.6 | 65.5 |
DCNAS | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation | Xiong Zhang, Hongmin Xu, Hong Mo, Jianchao Tan, Cheng Yang, Wenqi Ren | Neural Architecture Search (NAS) has shown great potential in automatically designing scalable network architectures for dense image predictions. However, existing NAS algorithms usually compromise on a restricted search space and search on a proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between target and proxy datasets, we propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module to reduce the memory consumption of ample search space. more details | n/a | 65.0 | 74.6 | 57.0 | 91.2 | 46.1 | 61.5 | 65.3 | 55.7 | 68.7 |
GPNet-ResNet101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 62.4 | 73.1 | 54.2 | 91.7 | 46.8 | 59.1 | 54.5 | 52.2 | 67.3 | ||
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 57.3 | 69.0 | 55.1 | 85.2 | 37.3 | 54.4 | 52.3 | 50.7 | 54.4 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 64.0 | 71.2 | 58.2 | 87.0 | 51.3 | 64.1 | 63.8 | 57.9 | 58.6 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 57.5 | 68.3 | 55.5 | 84.8 | 39.8 | 55.9 | 51.2 | 51.0 | 53.5 |
LGE A&B Center: HANet (ResNext-101) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val + coarse", Backbone: Mapillary pretrained ResNext-101 more details | n/a | 62.6 | 70.6 | 55.1 | 91.8 | 47.7 | 61.1 | 56.2 | 53.2 | 65.4 |
ERINet-v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Residual Inception Network | MINJONG KIM, SUYOUNG CHI | ongoing | more details | 0.00526316 | 39.2 | 55.4 | 29.5 | 85.3 | 18.0 | 31.6 | 26.1 | 22.3 | 45.8 |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 68.8 | 75.5 | 62.1 | 89.0 | 57.2 | 71.0 | 67.2 | 62.2 | 66.3 |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 66.0 | 72.0 | 59.7 | 87.5 | 51.7 | 69.3 | 66.7 | 59.6 | 61.2 |
TUE-5LSM0-g23 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Deeplabv3+decoder more details | n/a | 42.4 | 53.8 | 32.6 | 82.9 | 23.4 | 37.8 | 32.9 | 28.8 | 47.1 | ||
PBRNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | modified MobileNetV2 backbone + Prediction and Boundary attention-based Refinement Module (PBRM) more details | 0.0107 | 51.9 | 65.9 | 46.0 | 89.4 | 33.9 | 45.2 | 30.9 | 44.6 | 59.3 | ||
ResNeSt200 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ResNeSt: Split-Attention Networks | Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola | DeepLabV3+ network with ResNeSt200 backbone. more details | n/a | 63.0 | 71.9 | 53.7 | 91.8 | 47.2 | 63.4 | 58.9 | 52.1 | 65.0 | |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 68.7 | 75.8 | 61.9 | 90.1 | 57.6 | 69.5 | 64.1 | 61.7 | 68.4 | |
EaNet-V1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Parsing Very High Resolution Urban Scene Images by Learning Deep ConvNets with Edge-Aware Loss | Xianwei Zheng, Linxi Huan, Gui-Song Xia, Jianya Gong | Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task necessary for interpreting and understanding urban scenes. However, due to the huge quantity of details contained in an image and the large variations of objects in scale and appearance, the existing semantic segmentation methods often break one object into pieces, or confuse adjacent objects and thus fail to depict these objects consistently. To address this issue, we propose a concise and effective edge-aware neural network (EaNet) for urban scene semantic segmentation. The proposed EaNet model is deployed as a standard balanced encoder-decoder framework. Specifically, we devised two plug-and-play modules appended on top of the encoder and decoder, respectively, i.e., the large kernel pyramid pooling (LKPP) and the edge-aware loss (EA loss) function, to extend the model's ability in learning discriminating features. The LKPP module captures rich multi-scale context with strong continuous feature relations to promote coherent labeling of multi-scale urban objects. The EA loss module learns edge information directly from semantic segmentation prediction, which avoids costly post-processing or extra edge detection. During training, EA loss imposes a strong geometric awareness to guide object structure learning at both the pixel- and image-level, and thus effectively separates confusing objects with sharp contours. more details | n/a | 59.6 | 65.9 | 49.4 | 90.7 | 44.9 | 54.8 | 58.9 | 48.8 | 63.3 |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 65.2 | 75.7 | 56.7 | 92.0 | 50.0 | 64.4 | 61.5 | 53.9 | 67.6 | |
FSFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Accelerator-Aware Fast Spatial Feature Network for Real-Time Semantic Segmentation | Minjong Kim, Byungjae Park, Suyoung Chi | IEEE Access | Semantic segmentation is performed to understand an image at the pixel level; it is widely used in the field of autonomous driving. In recent years, deep neural networks achieve good accuracy performance; however, there exist few models that have a good trade-off between high accuracy and low inference time. In this paper, we propose a fast spatial feature network (FSFNet), an optimized lightweight semantic segmentation model using an accelerator, offering high performance as well as faster inference speed than current methods. FSFNet employs the FSF and MRA modules. The FSF module has three different types of subset modules to extract spatial features efficiently. They are designed in consideration of the size of the spatial domain. The multi-resolution aggregation module combines features that are extracted at different resolutions to reconstruct the segmentation image accurately. Our approach is able to run at over 203 FPS at full resolution (1024 x 2048) on a single NVIDIA 1080Ti GPU, and obtains a result of 69.13% mIoU on the Cityscapes test dataset. Compared with existing models in real-time semantic segmentation, our proposed model retains remarkable accuracy while having high FPS that is over 30% faster than the state-of-the-art model. The experimental results proved that our model is an ideal approach for the Cityscapes dataset. more details | 0.0049261 | 43.0 | 59.0 | 34.4 | 86.7 | 22.4 | 33.7 | 29.0 | 27.8 | 51.1
Hierarchical Multi-Scale Attention for Semantic Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Hierarchical Multi-Scale Attention for Semantic Segmentation | Andrew Tao, Karan Sapra, Bryan Catanzaro | Multi-scale inference is commonly used to improve the results of semantic segmentation. Multiple image scales are passed through a network and then the results are combined with averaging or max pooling. In this work, we present an attention-based approach to combining multi-scale predictions. We show that predictions at certain scales are better at resolving particular failure modes and that the network learns to favor those scales for such cases in order to generate better predictions. Our attention mechanism is hierarchical, which enables it to be roughly 4x more memory efficient to train than other recent approaches. In addition to enabling faster training, this allows us to train with larger crop sizes which leads to greater model accuracy. We demonstrate the result of our method on two datasets: Cityscapes and Mapillary Vistas. For Cityscapes, which has a large number of weakly labelled images, we also leverage auto-labelling to improve generalization. Using our approach we achieve new state-of-the-art results in both Mapillary (61.1 IOU val) and Cityscapes (85.4 IOU test). more details | n/a | 70.4 | 78.4 | 63.5 | 92.7 | 58.7 | 70.0 | 65.6 | 62.3 | 72.3 |
SANet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 25.0 | 59.6 | 69.9 | 49.6 | 91.6 | 40.1 | 55.4 | 57.2 | 49.0 | 64.1 | ||
SJTU_hpm | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu*, Haohua Zhao, and Liqing Zhang | more details | n/a | 59.1 | 68.3 | 48.8 | 90.8 | 45.4 | 55.6 | 49.5 | 51.3 | 62.8 | |
FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FANet: Feature Aggregation Network for Semantic Segmentation | Tanmay Singha, Duc-Son Pham, and Aneesh Krishna | Feature Aggregation Network for Semantic Segmentation more details | n/a | 33.2 | 42.1 | 22.3 | 81.8 | 14.2 | 26.4 | 20.8 | 20.1 | 38.0 | |
Hard Pixel Mining for Depth Privileged Semantic Segmentation | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu, Haohua Zhao, and Liqing Zhang | Semantic segmentation has achieved remarkable progress but remains challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as a depth map to help RGB based semantic segmentation because the depth map could provide complementary geometric cues. However, due to the inaccessibility of depth sensors, depth information is usually unavailable for the test images. In this paper, we leverage only the depth of training images as the privileged information to mine the hard pixels in semantic segmentation, in which depth information is only available for training images but not available for test images. Specifically, we propose a novel Loss Weight Module, which outputs a loss weight map by employing two depth-related measurements of hard pixels: Depth Prediction Error and Depth-aware Segmentation Error. The loss weight map is then applied to segmentation loss, with the goal of learning a more robust model by paying more attention to the hard pixels. Besides, we also explore a curriculum learning strategy based on the loss weight map. Meanwhile, to fully mine the hard pixels on different scales, we apply our loss weight module to multi-scale side outputs. Our hard pixel mining method achieves state-of-the-art results on three benchmark datasets, and even outperforms the methods which need depth input during testing. more details | n/a | 65.2 | 74.0 | 57.8 | 91.9 | 49.4 | 63.8 | 60.2 | 56.4 | 68.5 |
MSeg1080_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | MSeg: A Composite Dataset for Multi-domain Semantic Segmentation | John Lambert*, Zhuang Liu*, Ozan Sener, James Hays, Vladlen Koltun | CVPR 2020 | We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images, requiring more than 1.34 years of collective annotator effort. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model’s robustness and show that MSeg training yields substantially more robust models in comparison to training on individual datasets or naive mixing of datasets without the presented contributions. more details | 0.49 | 57.7 | 69.9 | 47.8 | 90.4 | 43.5 | 57.9 | 47.3 | 46.4 | 58.1 |
SA-Gate (ResNet-101,OS=16) | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation | Xiaokang Chen, Kwan-Yee Lin, Jingbo Wang, Wayne Wu, Chen Qian, Hongsheng Li, and Gang Zeng | European Conference on Computer Vision (ECCV), 2020 | RGB+HHA input, input resolution = 800x800, output stride = 16, training 240 epochs, no coarse data is used. more details | n/a | 63.5 | 75.2 | 55.4 | 91.6 | 49.2 | 59.8 | 55.2 | 54.8 | 66.5 |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation Resnet101, pretrained on Imagenet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 44.8 | 57.6 | 34.2 | 77.2 | 40.0 | 46.4 | 22.0 | 39.5 | 41.6 |
HRNet + LKPP + EA loss | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 61.8 | 66.6 | 51.0 | 91.3 | 47.0 | 61.6 | 60.6 | 51.4 | 64.7 | ||
SN_RN152pyrx8_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 1.0 | 50.5 | 63.7 | 36.6 | 88.8 | 42.3 | 47.9 | 32.3 | 38.2 | 54.3 |
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 33.5 | 51.1 | 7.4 | 86.7 | 26.6 | 30.0 | 12.1 | 15.8 | 38.2 | |
AttaNet_light | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | AttaNet: Attention-Augmented Network for Fast and Accurate Scene Parsing (AAAI21) | Anonymous | more details | n/a | 42.9 | 58.0 | 33.5 | 88.2 | 22.5 | 34.5 | 27.2 | 28.4 | 51.1 |
CFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 43.7 | 58.8 | 37.0 | 86.3 | 23.9 | 37.9 | 25.7 | 27.6 | 52.6 | ||
Seg_UJS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 69.0 | 77.1 | 60.7 | 91.9 | 57.6 | 70.6 | 62.0 | 61.2 | 70.7 | ||
Bilateral_attention_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | we use bilateral attention mechanism for semantic segmentation more details | 0.0141 | 55.9 | 69.7 | 47.3 | 89.7 | 34.8 | 50.6 | 47.2 | 44.0 | 64.2 | ||
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 62.2 | 74.9 | 60.7 | 89.2 | 45.9 | 57.1 | 50.8 | 56.4 | 62.3 | |
ESANet RGB-D (small input) | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data with half the input resolution. more details | 0.0427 | 44.9 | 56.1 | 35.4 | 87.4 | 23.0 | 38.1 | 39.6 | 32.1 | 47.6 | |
ESANet RGB (small input) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images with half the input resolution. more details | 0.031 | 40.5 | 49.6 | 31.0 | 85.6 | 23.3 | 33.3 | 33.0 | 25.8 | 42.5 | |
ESANet RGB-D | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data. more details | 0.1613 | 56.4 | 68.4 | 48.7 | 90.7 | 37.0 | 49.8 | 48.7 | 46.2 | 61.9 | |
DAHUA-ARI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | multi-scale and refineNet more details | n/a | 70.6 | 78.3 | 63.2 | 92.7 | 58.9 | 71.4 | 65.7 | 62.6 | 72.4 | ||
ESANet RGB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images only. more details | 0.1205 | 53.1 | 64.1 | 43.6 | 89.0 | 34.1 | 46.7 | 45.8 | 42.9 | 58.8 | |
DCNAS+ASPP [Mapillary Vistas] | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Existing NAS algorithms usually compromise on restricted search space or search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. Compared with contemporary works, experiments reveal that the proxyless searching scheme is capable of bridging the gap between searching and training environments. more details | n/a | 70.0 | 78.3 | 62.8 | 92.6 | 58.0 | 71.5 | 62.8 | 61.9 | 71.8 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 68.6 | 76.7 | 63.4 | 90.2 | 55.2 | 69.5 | 65.1 | 61.9 | 66.3 | |
DCNAS+ASPP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic ImageSegmentation | Anonymous | Existing NAS algorithms usually compromise on restricted search space or search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. more details | n/a | 68.5 | 77.3 | 61.0 | 92.5 | 55.5 | 69.7 | 62.1 | 59.3 | 70.7 | |
ddl_seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 69.4 | 77.1 | 60.7 | 92.0 | 57.8 | 70.8 | 65.2 | 61.1 | 70.7 | ||
CABiNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CABiNet: Efficient Context Aggregation Network for Low-Latency Semantic Segmentation | Saumya Kumaar, Ye Lyu, Francesco Nex, Michael Ying Yang | With the increasing demand of autonomous machines, pixel-wise semantic segmentation for visual scene understanding needs to be not only accurate but also efficient for any potential real-time applications. In this paper, we propose CABiNet (Context Aggregated Bi-lateral Network), a dual branch convolutional neural network (CNN), with significantly lower computational costs as compared to the state-of-the-art, while maintaining a competitive prediction accuracy. Building upon the existing multi-branch architectures for high-speed semantic segmentation, we design a cheap high resolution branch for effective spatial detailing and a context branch with light-weight versions of global aggregation and local distribution blocks, potent to capture both long-range and local contextual dependencies required for accurate semantic segmentation, with low computational overheads. Specifically, we achieve 76.6% and 75.9% mIOU on Cityscapes validation and test sets respectively, at 76 FPS on an NVIDIA RTX 2080Ti and 8 FPS on a Jetson Xavier NX. Codes and training models will be made publicly available. more details | 0.013 | 49.0 | 61.3 | 41.7 | 90.9 | 27.9 | 38.3 | 32.7 | 34.8 | 64.2 | |
Margin calibration | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | The model is DeepLab v3+ backend on SEResNeXt50. We used the margin calibration with log-loss as the learning objective. more details | n/a | 62.5 | 72.8 | 52.1 | 91.6 | 46.5 | 61.6 | 60.5 | 49.8 | 65.4 | ||
MT-SSSR | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 57.6 | 73.6 | 50.7 | 90.3 | 38.6 | 53.0 | 45.8 | 44.0 | 64.8 | ||
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 71.2 | 79.4 | 64.3 | 91.3 | 60.0 | 73.2 | 68.2 | 64.6 | 68.5 | |
DSANet: Dilated Spatial Attention for Real-time Semantic Segmentation in Urban Street Scenes | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We present a computationally efficient network named DSANet, which follows a two-branch strategy to tackle the problem of real-time semantic segmentation in urban scenes. We first design a Context branch, which employs Depth-wise Asymmetric ShuffleNet (DAS) as the main building block to acquire sufficient receptive fields. In addition, we propose a dual attention module consisting of dilated spatial attention and channel attention to make full use of the multi-level feature maps simultaneously, which helps predict the pixel-wise labels in each stage. Meanwhile, a Spatial Encoding Network is used to enhance semantic information by preserving the spatial details. Finally, to better combine context information and spatial information, we introduce a Simple Feature Fusion Module to combine the features from the two branches. more details | n/a | 42.9 | 61.2 | 34.6 | 85.7 | 23.8 | 39.2 | 24.5 | 27.6 | 46.9 | |
UJS_model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 70.5 | 78.0 | 63.0 | 92.4 | 60.0 | 72.9 | 65.2 | 61.3 | 71.1 | ||
Mobilenetv3-small-backbone real-time segmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | The model is a dual-path network with a mobilenetv3-small backbone. A PSP module was used as the context aggregation block. We also use feature fusion modules at x16 and x32. The features of the two branches are then concatenated and fused with a bottleneck conv. Only the training data is used to train the model (validation data excluded), and evaluation was done on single-scale input images. more details | 0.02 | 37.8 | 52.1 | 29.1 | 83.9 | 17.0 | 27.9 | 20.1 | 23.1 | 49.2 | |
M2FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Urban street scene analysis using lightweight multi-level multi-path feature aggregation network | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Multiagent and Grid Systems Journal | more details | n/a | 38.7 | 52.7 | 30.4 | 84.7 | 18.4 | 29.9 | 23.4 | 23.6 | 46.7 |
AFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.03 | 50.1 | 63.3 | 41.1 | 88.2 | 30.5 | 42.4 | 42.6 | 36.2 | 56.1 | ||
YOLO V5s with Segmentation Head | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Anonymous | Multitask model. Fine-tuned from a COCO detection pretrained model; semantic segmentation and object detection (transferred from instance labels) are trained at the same time. more details | 0.007 | 46.3 | 56.0 | 36.4 | 86.1 | 28.5 | 38.8 | 40.9 | 32.9 | 50.5 | |
FSFFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | A Lightweight Multi-scale Feature Fusion Network for Real-Time Semantic Segmentation | Tanmay Singha, Duc-Son Pham, Aneesh Krishna, Tom Gedeon | International Conference on Neural Information Processing 2021 | Feature Scaling Feature Fusion Network more details | n/a | 40.4 | 54.0 | 31.5 | 85.0 | 22.3 | 32.0 | 26.5 | 26.1 | 45.5 |
Qualcomm AI Research | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InverseForm: A Loss Function for Structured Boundary-Aware Segmentation | Shubhankar Borse, Ying Wang, Yizhe Zhang, Fatih Porikli | CVPR 2021 oral | more details | n/a | 72.0 | 78.9 | 64.8 | 92.6 | 61.8 | 73.2 | 68.2 | 63.2 | 73.0 |
HIK-CCSLT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 70.3 | 77.9 | 63.1 | 92.5 | 56.2 | 69.7 | 67.2 | 63.9 | 71.5 | ||
BFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BFNet | Jiaqi Fan | more details | n/a | 43.1 | 57.8 | 34.5 | 86.0 | 24.8 | 37.7 | 29.5 | 30.3 | 44.1 | |
Hai Wang+Yingfeng Cai-research group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.00164 | 70.1 | 77.1 | 62.9 | 92.2 | 59.5 | 72.7 | 64.3 | 60.9 | 70.9 | ||
Jiangsu_university_Intelligent_Drive_AI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 70.1 | 77.1 | 62.9 | 92.2 | 59.5 | 72.7 | 64.3 | 60.9 | 70.9 | ||
MCANet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | n/a | 45.8 | 58.5 | 36.2 | 87.7 | 27.2 | 41.5 | 29.7 | 32.6 | 52.7 | ||
UFONet (half-resolution) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | UFO RPN: A Region Proposal Network for Ultra Fast Object Detection | Wenkai Li, Andy Song | The 34th Australasian Joint Conference on Artificial Intelligence | more details | n/a | 22.1 | 35.2 | 5.4 | 77.9 | 2.1 | 10.7 | 6.4 | 8.2 | 31.1 |
SCMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 37.1 | 52.2 | 27.6 | 84.9 | 16.9 | 31.0 | 22.2 | 17.8 | 44.2 | ||
FsaNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | FsaNet: Frequency Self-attention for Semantic Segmentation | Anonymous | more details | n/a | 62.2 | 69.5 | 53.4 | 90.8 | 46.8 | 60.3 | 63.0 | 48.1 | 65.5 | |
SCMNet coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | SCMNet: Shared Context Mining Network for Real-time Semantic Segmentation | Tanmay Singha; Moritz Bergemann; Duc-Son Pham; Aneesh Krishna | 2021 Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 38.3 | 53.7 | 28.7 | 85.6 | 18.2 | 31.2 | 23.2 | 20.4 | 45.3 |
SAIT SeeThroughNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 71.5 | 78.7 | 65.2 | 92.6 | 61.7 | 72.4 | 64.8 | 63.4 | 73.4 | ||
JSU_IDT_group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 69.0 | 77.5 | 63.3 | 92.4 | 48.2 | 74.0 | 63.5 | 61.6 | 71.9 | ||
DLA_HRNet48OCR_MSFLIP_000 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | This set of predictions is from DLA (differentiable lattice assignment network) with "HRNet48+OCR-Head" as the base segmentation model. The model is first trained on coarse data, and then trained on the fine-annotated train/val sets. A multi-scale (0.5, 0.75, 1.0, 1.25, 1.5, 1.75) and flip scheme is adopted during inference. more details | n/a | 68.6 | 76.9 | 62.0 | 92.3 | 55.5 | 67.9 | 63.2 | 60.1 | 70.7 | |
MYBank-AIoT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 72.9 | 79.3 | 64.9 | 92.1 | 62.8 | 76.2 | 69.0 | 65.6 | 73.5 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 65.9 | 76.1 | 58.4 | 89.4 | 54.4 | 67.0 | 60.4 | 58.7 | 62.4 |
LeapAI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Using advanced AI techniques. more details | n/a | 70.9 | 76.7 | 63.5 | 92.6 | 62.9 | 72.6 | 63.4 | 63.8 | 71.7 | ||
adlab_iiau_ldz | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | meticulous-caiman_2022.05.01_03.32 more details | n/a | 69.5 | 77.9 | 62.5 | 92.7 | 58.2 | 70.2 | 61.5 | 61.3 | 71.9 | ||
SFRSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | A Real-Time Semantic Segmentation Model Using Iteratively Shared Features In Multiple Sub-Encoders | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | Pattern Recognition | more details | n/a | 41.3 | 50.2 | 29.8 | 82.4 | 24.1 | 36.8 | 33.8 | 27.1 | 46.5 |
PIDNet-S | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | PIDNet: A Real-time Semantic Segmentation Network Inspired from PID Controller | Anonymous | more details | 0.0107 | 55.0 | 67.3 | 46.6 | 89.2 | 32.3 | 47.1 | 53.1 | 43.3 | 60.9 | |
Vision Transformer Adapter for Dense Predictions | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Vision Transformer Adapter for Dense Predictions | Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, Yu Qiao | ViT-Adapter-L, BEiT pre-train, multi-scale testing more details | n/a | 68.3 | 75.9 | 58.9 | 91.5 | 54.6 | 68.2 | 66.7 | 60.4 | 69.9 | |
SSNet | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 51.7 | 67.6 | 48.0 | 88.6 | 32.1 | 43.9 | 31.7 | 42.6 | 59.0 | ||
SDBNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | SDBNet: Lightweight Real-time Semantic Segmentation Using Short-term Dense Bottleneck | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 42.0 | 54.7 | 32.1 | 85.5 | 21.2 | 34.2 | 31.8 | 27.8 | 48.9 |
MeiTuan-BaseModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 73.2 | 78.9 | 65.8 | 91.7 | 62.4 | 74.8 | 72.8 | 65.9 | 73.3 | ||
SDBNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improved Short-term Dense Bottleneck network for efficient scene analysis | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Computer Vision and Image Understanding | more details | n/a | 43.8 | 56.1 | 35.4 | 86.1 | 26.0 | 36.2 | 31.8 | 29.5 | 49.7 |
mogo_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 69.6 | 78.1 | 63.3 | 92.5 | 56.4 | 69.1 | 64.5 | 60.6 | 72.3 | ||
UDSSEG_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | UDSSEG_RVC more details | n/a | 55.9 | 64.9 | 42.7 | 89.9 | 45.4 | 55.0 | 50.2 | 42.5 | 56.7 | ||
MIX6D_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MIX6D_RVC more details | n/a | 58.6 | 64.1 | 45.1 | 86.0 | 45.9 | 58.2 | 63.9 | 46.7 | 58.9 | ||
FAN_NV_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hybrid-Base + Segformer more details | n/a | 59.0 | 67.2 | 46.2 | 90.6 | 44.2 | 58.4 | 55.1 | 49.3 | 60.9 | ||
UNIV_CNP_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | RVC 2022 more details | n/a | 50.2 | 57.5 | 42.7 | 83.3 | 36.8 | 54.3 | 49.9 | 32.6 | 44.9 | ||
AntGroup-AI-VisionAlgo | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | AntGroup AI vision algo more details | n/a | 71.3 | 77.3 | 63.3 | 91.9 | 62.5 | 72.3 | 69.1 | 64.4 | 69.9 | ||
InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, Xiaogang Wang, Yu Qiao | CVPR 2023 | We use Mask2Former as the segmentation framework, and initialize our InternImage-H model with the pre-trained weights on the 427M joint dataset of public Laion-400M, YFCC-15M, and CC12M. Following common practices, we first pre-train on Mapillary Vistas for 80k iterations, and then fine-tune on Cityscapes for 80k iterations. The crop size is set to 1024×1024 in this experiment. As a result, our InternImage-H achieves 87.0 multi-scale mIoU on the validation set, and 86.1 multi-scale mIoU on the test set. more details | n/a | 73.6 | 78.8 | 66.8 | 90.8 | 66.9 | 76.9 | 69.2 | 66.2 | 73.3 |
Dense Prediction with Attentive Feature aggregation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Prediction with Attentive Feature Aggregation | Yung-Hsu Yang, Thomas E. Huang, Min Sun, Samuel Rota Bulò, Peter Kontschieder, Fisher Yu | WACV 2023 | We propose Attentive Feature Aggregation (AFA) to exploit both spatial and channel information for semantic segmentation and boundary detection. more details | n/a | 65.1 | 73.5 | 57.2 | 91.1 | 51.1 | 64.0 | 57.3 | 57.9 | 69.0 |
W3_FAFM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Junyan Yang, Qian Xu, Lei La | Team: BOSCH-XC-DX-WAVE3 more details | 0.029309 | 59.3 | 71.1 | 52.9 | 90.3 | 40.1 | 54.0 | 54.3 | 46.3 | 65.0 | ||
HRN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hierarchical residual network more details | 45.0 | 53.4 | 66.3 | 44.8 | 89.0 | 33.4 | 44.2 | 49.9 | 42.2 | 57.7 | ||
HRN+DCNv2_for_DOAS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRN with DCNv2 for DOAS in paper "Dynamic Obstacle Avoidance System based on Rapid Instance Segmentation Network" more details | 0.032 | 59.8 | 72.0 | 51.9 | 90.3 | 40.3 | 52.9 | 55.7 | 48.6 | 66.5 | ||
GEELY-ATC-SEG | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 73.9 | 79.1 | 65.1 | 91.3 | 66.8 | 74.8 | 73.5 | 66.8 | 73.8 | ||
PMSDSEN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Parallel Multi-Scale Detail and Semantic Encoding Network for Lightweight Semantic Segmentation | Xiao Liu, Xiuya Shi, Lufei Chen, Linbo Qing, Chao Ren | ACM International Conference on Multimedia 2023 | MM '23: Proceedings of the 31st ACM International Conference on Multimedia more details | n/a | 49.9 | 63.7 | 40.6 | 88.1 | 29.4 | 45.0 | 43.8 | 34.6 | 53.7
ECFD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | backbone: ConvNext-Large more details | n/a | 63.6 | 74.0 | 55.2 | 90.8 | 48.3 | 61.9 | 55.6 | 56.0 | 67.0 | ||
DWGSeg-L75 | yes | yes | no | no | no | no | no | no | no | no | 1.3 | 1.3 | no | no | Anonymous | more details | 0.00755 | 50.3 | 62.5 | 42.7 | 88.5 | 29.3 | 44.1 | 39.9 | 38.9 | 56.3 | ||
VLTSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | VLTSeg: Simple Transfer of CLIP-Based Vision-Language Representations for Domain Generalized Semantic Segmentation | Christoph Hümmer, Manuel Schwonberg, Liangwei Zhou, Hu Cao, Alois Knoll, Hanno Gottschalk | more details | n/a | 73.3 | 79.4 | 66.5 | 91.4 | 63.0 | 73.3 | 71.5 | 67.9 | 73.4 | |
CGMANet_v1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene | Saquib Mazhar | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene more details | n/a | 48.3 | 63.7 | 40.1 | 86.9 | 26.5 | 41.8 | 34.7 | 36.9 | 56.0 | |
SERNet-Former_v2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 67.1 | 72.7 | 58.8 | 89.7 | 53.3 | 66.0 | 67.8 | 60.3 | 68.2 |
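All scores in the tables above and below are intersection-over-union (IoU) values computed per class from a confusion matrix accumulated over all evaluated pixels, with the "average" column being the unweighted mean. As a rough illustration only (this is not the official Cityscapes evaluation code, which should be used for actual submissions), here is a minimal Python sketch; the function names, array names, and the `ignore_label=255` convention are assumptions made for this example:

```python
import numpy as np

# Illustrative sketch only -- official submissions are scored by the Cityscapes
# evaluation scripts. `pred` and `gt` are assumed to be integer label maps of
# identical shape, with labels in [0, num_classes) and an ignore value of 255.

def confusion_matrix(pred, gt, num_classes, ignore_label=255):
    """Accumulate a (num_classes x num_classes) confusion matrix (rows = ground truth)."""
    valid = gt != ignore_label
    idx = num_classes * gt[valid].astype(np.int64) + pred[valid].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def per_class_iou(conf):
    """IoU_c = TP_c / (TP_c + FP_c + FN_c), computed from the confusion matrix."""
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    denom = tp + fp + fn
    return np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)

# The "average" column corresponds to the unweighted mean over the evaluated classes:
# mean_iou = float(np.nanmean(per_class_iou(conf)))
```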
IoU on category-level
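Category-level IoU uses the same formula, but the evaluated classes are grouped into the seven categories shown in the table below (flat, nature, object, sky, construction, human, vehicle). A minimal sketch of one way to derive such scores follows, assuming the class-level confusion matrix from the snippet above is pooled into category bins; the names `CLASS_TO_CATEGORY` and `pool_to_categories` are hypothetical, and the grouping listed is the standard Cityscapes class-to-category mapping for the 19 evaluated classes:

```python
import numpy as np

# Sketch of category-level scoring: pool the class-level confusion matrix into
# category bins and apply the same IoU formula as in the previous snippet.
CLASS_TO_CATEGORY = {
    "road": "flat", "sidewalk": "flat",
    "building": "construction", "wall": "construction", "fence": "construction",
    "pole": "object", "traffic light": "object", "traffic sign": "object",
    "vegetation": "nature", "terrain": "nature",
    "sky": "sky",
    "person": "human", "rider": "human",
    "car": "vehicle", "truck": "vehicle", "bus": "vehicle",
    "train": "vehicle", "motorcycle": "vehicle", "bicycle": "vehicle",
}

def pool_to_categories(conf, class_names, mapping=CLASS_TO_CATEGORY):
    """Sum the rows/columns of a class-level confusion matrix into category bins."""
    categories = sorted(set(mapping.values()))
    index = {c: i for i, c in enumerate(categories)}
    pooled = np.zeros((len(categories), len(categories)), dtype=np.int64)
    for i, ci in enumerate(class_names):
        for j, cj in enumerate(class_names):
            pooled[index[mapping[ci]], index[mapping[cj]]] += conf[i, j]
    return categories, pooled

# Category IoUs and their mean then follow from per_class_iou(pooled) as sketched above.
```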
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | flat | nature | object | sky | construction | human | vehicle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FCN 8s | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Fully Convolutional Networks for Semantic Segmentation | J. Long, E. Shelhamer, and T. Darrell | CVPR 2015 | Trained by Marius Cordts on a pre-release version of the dataset more details | 0.5 | 85.7 | 98.2 | 91.1 | 57.0 | 93.9 | 89.6 | 78.6 | 91.3 |
RRR-ResNet152-MultiScale | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | update: this submission actually used the coarse labels, which was previously not marked accordingly more details | n/a | 89.3 | 98.5 | 92.5 | 69.0 | 95.0 | 92.3 | 83.2 | 94.3 | ||
Dilation10 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Scale Context Aggregation by Dilated Convolutions | Fisher Yu and Vladlen Koltun | ICLR 2016 | Dilation10 is a convolutional network that consists of a front-end prediction module and a context aggregation module. Both are described in the paper. The combined network was trained jointly. The context module consists of 10 layers, each of which has C=19 feature maps. The larger number of layers in the context module (10 for Cityscapes versus 8 for Pascal VOC) is due to the high input resolution. The Dilation10 model is a pure convolutional network: there is no CRF and no structured prediction. Dilation10 can therefore be used as the baseline input for structured prediction models. Note that the reported results were produced by training on the training set only; the network was not retrained on train+val. more details | 4.0 | 86.5 | 98.3 | 91.4 | 60.5 | 93.7 | 90.2 | 79.8 | 91.8 |
Adelaide | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | G. Lin, C. Shen, I. Reid, and A. van den Hengel | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 35.0 | 82.8 | 97.8 | 89.7 | 48.2 | 92.2 | 88.7 | 73.1 | 89.6 |
DeepLab LargeFOV StrongWeak | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation | G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille | ICCV 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 81.3 | 97.8 | 89.0 | 40.4 | 92.8 | 88.2 | 70.9 | 90.0 |
DeepLab LargeFOV Strong | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs | L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille | ICLR 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 81.2 | 97.8 | 89.0 | 40.4 | 92.7 | 88.0 | 71.0 | 89.7 |
DPN | yes | yes | yes | yes | no | no | no | no | no | no | 3 | 3 | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | Trained on a pre-release version of the dataset more details | n/a | 79.5 | 97.3 | 88.0 | 37.7 | 93.9 | 86.6 | 65.4 | 87.7 |
Segnet basic | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 79.1 | 97.4 | 86.7 | 42.5 | 91.8 | 83.8 | 64.7 | 87.2 |
Segnet extended | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 79.8 | 97.5 | 87.1 | 43.7 | 91.7 | 82.8 | 68.6 | 87.5 |
CRFasRNN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Conditional Random Fields as Recurrent Neural Networks | S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr | ICCV 2015 | Trained on a pre-release version of the dataset more details | 0.7 | 82.7 | 97.7 | 90.3 | 46.5 | 93.5 | 88.5 | 73.6 | 88.9 |
Scale invariant CNN + CRF | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Convolutional Scale Invariance for Semantic Segmentation | I. Kreso, D. Causevic, J. Krapac, and S. Segvic | GCPR 2016 | We propose an effective technique to address large scale variation in images taken from a moving car by cross-breeding deep learning with stereo reconstruction. Our main contribution is a novel scale selection layer which extracts convolutional features at the scale which matches the corresponding reconstructed depth. The recovered scale-invariant representation disentangles appearance from scale and frees the pixel-level classifier from the need to learn the laws of the perspective. This results in improved segmentation results due to more efficient exploitation of representation capacity and training data. We perform experiments on two challenging stereoscopic datasets (KITTI and Cityscapes) and report competitive class-level IoU performance. more details | n/a | 85.0 | 97.2 | 90.2 | 59.9 | 92.2 | 89.0 | 78.2 | 88.4
DPN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | DPN trained on full resolution images more details | n/a | 86.0 | 98.2 | 91.1 | 58.9 | 94.5 | 89.8 | 78.4 | 91.2 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 85.9 | 98.2 | 90.8 | 59.3 | 93.5 | 89.2 | 79.2 | 91.1 |
Adelaide_context | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid | CVPR 2016 | We explore contextual information to improve semantic image segmentation. Details are described in the paper. We trained contextual networks for coarse level prediction and a refinement network for refining the coarse prediction. Our models are trained on the training set only (2975 images) without adding the validation set. more details | n/a | 87.3 | 98.4 | 91.7 | 60.8 | 94.1 | 90.9 | 82.0 | 93.3 |
NVSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | For inference, we use the image at 2 different scales; the same applies for training. more details | 0.4 | 87.2 | 98.4 | 91.6 | 63.5 | 94.6 | 90.5 | 80.2 | 92.0 | |
ENet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation | Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello | more details | 0.013 | 80.4 | 97.3 | 88.3 | 46.8 | 90.6 | 85.4 | 65.5 | 88.9 | |
DeepLabv2-CRF | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs | Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille | arXiv preprint | DeepLabv2-CRF is based on three main methods. First, we employ convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool to repurpose ResNet-101 (trained on image classification task) in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within DCNNs. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and fully connected Conditional Random Fields (CRFs). The model is only trained on train set. more details | n/a | 86.4 | 98.3 | 91.5 | 57.3 | 94.2 | 90.8 | 80.2 | 92.6 |
m-TCFs | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Convolutional Neural Network more details | 1.0 | 87.6 | 98.4 | 91.9 | 63.2 | 94.5 | 91.5 | 80.3 | 93.2 | ||
DeepLab+DynamicCRF | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ru.nl | more details | n/a | 83.7 | 97.3 | 89.4 | 48.2 | 93.6 | 88.8 | 77.1 | 91.1 | ||
LRR-4x | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained on the training set (2975 images). The segmentation predictions were not post-processed using CRF. (This is a revision of a previous submission in which we didn't use the correct basis functions; the method name changed from 'LLR-4x' to 'LRR-4x') more details | n/a | 88.2 | 98.4 | 92.2 | 66.2 | 94.7 | 91.1 | 82.4 | 92.5 |
LRR-4x | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained using both coarse and fine annotations. The segmentation predictions were not post-processed using CRF. more details | n/a | 88.4 | 98.4 | 92.2 | 66.9 | 95.0 | 91.5 | 81.9 | 93.1 |
Le_Selfdriving_VGG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.4 | 98.0 | 89.9 | 54.7 | 93.4 | 88.9 | 75.2 | 90.7 | ||
SQ | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Speeding up Semantic Segmentation for Autonomous Driving | Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, Bernhard Nessler, Sepp Hochreiter | NIPS 2016 Workshop on Machine Learning for Intelligent Transportation Systems (MLITS), Barcelona, Spain | more details | 0.06 | 84.3 | 96.7 | 90.4 | 57.0 | 93.0 | 87.5 | 75.6 | 89.9
SAIT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous more details | 4.0 | 89.6 | 98.6 | 92.9 | 69.1 | 95.2 | 92.8 | 83.6 | 94.6 | ||
FoveaNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FoveaNet | Xin Li, Jiashi Feng | 1. caffe-master 2. ResNet-101 3. single-scale testing. Previously listed as "LXFCRN". more details | n/a | 89.3 | 98.5 | 92.4 | 69.8 | 94.5 | 91.9 | 84.3 | 93.4 |
RefineNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation | Guosheng Lin; Anton Milan; Chunhua Shen; Ian Reid; | Please refer to our technical report for details: "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation" (https://arxiv.org/abs/1611.06612). Our source code is available at: https://github.com/guosheng/refinenet 2975 images (training set with fine labels) are used for training. more details | n/a | 87.9 | 98.4 | 91.9 | 63.8 | 94.8 | 91.7 | 81.3 | 93.6 | |
SegModel | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Both train set (2975) and val set (500) are used to train model for this submission. more details | 0.8 | 89.8 | 98.6 | 93.0 | 68.1 | 95.5 | 93.3 | 85.2 | 95.0 | ||
TuSimple | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | more details | n/a | 90.1 | 98.6 | 93.0 | 71.9 | 95.2 | 93.0 | 84.8 | 94.5 | |
Global-Local-Refinement | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global-residual and Local-boundary Refinement Networks for Rectifying Scene Parsing Predictions | Rui Zhang, Sheng Tang, Min Lin, Jintao Li, Shuicheng Yan | International Joint Conference on Artificial Intelligence (IJCAI) 2017 | Global-residual and local-boundary refinement. The method was previously listed as "RefineNet". To avoid confusion with a recently appeared and similarly named approach, the submission name was updated. more details | n/a | 90.0 | 98.7 | 93.0 | 70.1 | 95.4 | 93.1 | 85.2 | 94.8
XPARSE | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 88.7 | 98.5 | 92.4 | 66.5 | 95.2 | 91.9 | 82.8 | 93.7 | ||
ResNet-38 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | Single model, single scale, no post-processing with CRFs. Model A2, 2 conv., fine only, single-scale testing. The submission was previously listed as "Model A2, 2 conv.". The name was changed for consistency with the other submission of the same work. more details | n/a | 90.9 | 98.7 | 93.4 | 73.4 | 95.5 | 93.5 | 87.0 | 95.1
SegModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 90.4 | 98.7 | 93.2 | 71.2 | 95.5 | 93.5 | 85.3 | 95.2 | ||
Deep Layer Cascade (LC) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via Deep Layer Cascade | Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang | CVPR 2017 | We propose a novel deep layer cascade (LC) method to improve the accuracy and speed of semantic segmentation. Unlike the conventional model cascade (MC) that is composed of multiple independent models, LC treats a single deep model as a cascade of several sub-models. Earlier sub-models are trained to handle easy and confident regions, and they progressively feed-forward harder regions to the next sub-model for processing. Convolutions are only calculated on these regions to reduce computations. The proposed method possesses several advantages. First, LC classifies most of the easy regions in the shallow stage and makes the deeper stages focus on a few hard regions. Such an adaptive and 'difficulty-aware' learning improves segmentation performance. Second, LC accelerates both training and testing of the deep network thanks to early decisions in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable framework, allowing joint learning of all sub-models. We evaluate our method on PASCAL VOC and more details | n/a | 88.1 | 98.4 | 92.1 | 64.5 | 94.2 | 91.5 | 82.5 | 93.5
FRRN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes | Tobias Pohlen, Alexander Hermans, Markus Mathias, Bastian Leibe | Arxiv | Full-Resolution Residual Networks (FRRN) combine multi-scale context with pixel-level accuracy by using two processing streams within one network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. more details | n/a | 88.9 | 98.5 | 92.3 | 68.4 | 94.9 | 91.8 | 82.5 | 93.8 |
MNet_MPRG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Chubu University, MPRG | Trained without the val dataset, external datasets (e.g. ImageNet), or post-processing. more details | 0.6 | 89.3 | 98.5 | 92.4 | 70.4 | 94.7 | 91.9 | 83.3 | 93.6 | |
ResNet-38 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | Single model, no post-processing with CRFs. Model A2, 2 conv., fine+coarse, multi-scale testing. more details | n/a | 91.0 | 98.7 | 93.4 | 73.6 | 95.5 | 93.6 | 86.9 | 95.5
FCN8s-QunjieYu | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.8 | 96.5 | 91.2 | 35.0 | 93.3 | 88.3 | 77.9 | 90.1 | ||
RGB-D FCN | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Anonymous | GoogLeNet + depth branch, single model no data augmentation, no training on validation set, no graphical model Used coarse labels to initialize depth branch more details | n/a | 87.5 | 98.4 | 91.6 | 64.3 | 94.7 | 90.9 | 80.3 | 92.3 | ||
MultiBoost | yes | yes | yes | yes | no | no | yes | yes | no | no | 2 | 2 | no | no | Anonymous | Boosting based solution. Publication is under review. more details | 0.25 | 81.9 | 97.5 | 88.7 | 50.9 | 90.3 | 87.1 | 71.4 | 87.3 | ||
GoogLeNet FCN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Going Deeper with Convolutions | Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich | CVPR 2015 | GoogLeNet. No data augmentation, no graphical model. Trained by Lukas Schneider, following "Fully Convolutional Networks for Semantic Segmentation", Long et al., CVPR 2015. more details | n/a | 85.8 | 98.2 | 90.9 | 58.6 | 93.7 | 89.5 | 78.4 | 91.2
ERFNet (pretrained) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | Transactions on Intelligent Transportation Systems (T-ITS) | ERFNet pretrained on ImageNet and trained only on the fine train (2975) annotated images more details | 0.02 | 87.3 | 98.2 | 91.5 | 65.1 | 94.2 | 90.6 | 78.9 | 92.3 |
ERFNet (from scratch) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | IV2017 | ERFNet trained entirely on the fine train set (2975 images) without any pretraining nor coarse labels more details | 0.02 | 86.5 | 98.2 | 91.1 | 62.4 | 94.2 | 90.1 | 77.4 | 91.9 |
TuSimple_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are better for practical use. First, we implement dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a new state-of-the-art result of 80.1% mIOU on the test set. We are also state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Pretrained models are available at https://goo.gl/DQMeun. more details | n/a | 90.7 | 98.7 | 93.1 | 73.1 | 95.4 | 93.4 | 86.0 | 95.4 |
SAC-multiple | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scale-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | International Conference on Computer Vision (ICCV) 2017 | more details | n/a | 90.6 | 98.7 | 93.1 | 71.8 | 95.6 | 93.4 | 86.1 | 95.3 |
NetWarp | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 91.0 | 98.6 | 93.2 | 74.8 | 95.3 | 93.5 | 86.6 | 95.3 | ||
depthAwareSeg_RNN_ff | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | training with fine-annotated training images only (val set is not used); flip-augmentation only in training; single GPU for train&test; softmax loss; resnet101 as front end; multiscale test. more details | n/a | 89.7 | 98.6 | 92.8 | 68.3 | 94.8 | 93.0 | 85.5 | 95.0 | ||
Ladder DenseNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Ladder-style DenseNets for Semantic Segmentation of Large Natural Images | Ivan Krešo, Josip Krapac, Siniša Šegvić | ICCV 2017 | https://ivankreso.github.io/publication/ladder-densenet/ more details | 0.45 | 89.7 | 98.3 | 92.1 | 71.1 | 95.5 | 92.3 | 84.5 | 93.9 |
Real-time FCN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Understanding Cityscapes: Efficient Urban Semantic Scene Understanding | Marius Cordts | Dissertation | Combines the following concepts: Network architecture: "Going deeper with convolutions", Szegedy et al., CVPR 2015. Framework and skip connections: "Fully convolutional networks for semantic segmentation", Long et al., CVPR 2015. Context modules: "Multi-scale context aggregation by dilated convolutions", Yu and Koltun, ICLR 2016. more details | 0.044 | 87.9 | 98.4 | 91.5 | 64.3 | 94.7 | 91.4 | 81.6 | 93.7
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Conv-Deconv Grid-Network for semantic segmentation. Using only the training set without extra coarse annotated data (only 2975 images). No pre-training (ImageNet). No post-processing (like CRF). more details | n/a | 87.9 | 98.4 | 92.1 | 65.5 | 93.8 | 90.9 | 81.8 | 92.5 | ||
PEARL | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Video Scene Parsing with Predictive Feature Learning | Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang, Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, and Shuicheng Yan | ICCV 2017 | We proposed a novel Parsing with prEdictive feAtuRe Learning (PEARL) model to address the following two problems in video scene parsing: firstly, how to effectively learn meaningful video representations for producing the temporally consistent labeling maps; secondly, how to overcome the problem of insufficient labeled video training data, i.e. how to effectively conduct unsupervised deep learning. To our knowledge, this is the first model to employ predictive feature learning in the video scene parsing. more details | n/a | 89.2 | 98.5 | 92.6 | 67.6 | 95.2 | 92.3 | 83.7 | 94.2 |
pruned & dilated inception-resnet-v2 (PD-IR2) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | 0.69 | 86.5 | 98.3 | 91.0 | 61.9 | 94.4 | 90.2 | 78.4 | 91.6 | ||
PSPNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Pyramid Scene Parsing Network | Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia | CVPR 2017 | This submission is trained on coarse+fine(train+val set, 2975+500 images). Former submission is trained on coarse+fine(train set, 2975 images) which gets 80.2 mIoU: https://www.cityscapes-dataset.com/method-details/?submissionID=314 Previous versions of this method were listed as "SenseSeg_1026". more details | n/a | 91.2 | 98.7 | 93.3 | 74.2 | 95.3 | 93.8 | 87.1 | 95.7 |
motovis | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | motovis.com | more details | n/a | 91.5 | 98.7 | 93.4 | 75.4 | 95.8 | 93.8 | 87.3 | 95.8 | ||
ML-CRNN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-level Contextual RNNs with Attention Model for Scene Labeling | Heng Fan, Xue Mei, Danil Prokhorov, Haibin Ling | arXiv | A framework based on CNNs and RNNs is proposed, in which the RNNs are used to model spatial dependencies among image units. Besides, to enrich deep features, we use different features from multiple levels, and adopt a novel attention model to fuse them. more details | n/a | 87.7 | 98.3 | 91.8 | 64.7 | 94.6 | 91.2 | 80.7 | 92.7 |
Hybrid Model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.2 | 98.1 | 90.8 | 56.8 | 91.8 | 89.6 | 78.1 | 91.1 | ||
tek-Ifly | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Iflytek | Iflytek-yin | Uses a fusion strategy over three single models; the best result of a single model is 80.01%, with multi-scale testing. more details | n/a | 90.9 | 98.7 | 93.4 | 72.9 | 95.6 | 93.7 | 86.6 | 95.6 |
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Residual Conv-Deconv Grid Network for Semantic Segmentation | Damien Fourure, Rémi Emonet, Elisa Fromont, Damien Muselet, Alain Tremeau & Christian Wolf | BMVC 2017 | We used a new architecture for semantic image segmentation called GridNet, following a grid pattern allowing multiple interconnected streams to work at different resolutions (see paper). We used only the training set without extra coarse annotated data (only 2975 images) and no pre-training (ImageNet) nor pre or post-processing. more details | n/a | 88.1 | 98.4 | 92.1 | 66.2 | 93.8 | 91.1 | 82.3 | 92.6 |
firenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 84.9 | 95.0 | 89.8 | 60.6 | 92.1 | 87.2 | 77.6 | 91.7 | ||
DeepLabv3 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Rethinking Atrous Convolution for Semantic Image Segmentation | Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam | arXiv preprint | In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we employ a module, called Atrous Spatial Pyramid Pooling (ASPP), which adopts atrous convolution in parallel to capture multi-scale context with multiple atrous rates. Furthermore, we propose to augment the ASPP module with image-level features encoding global context and further boost performance. Results obtained with a single model (no ensemble), trained with fine + coarse annotations. More details will be shown in the updated arXiv report. more details | n/a | 91.6 | 98.7 | 93.5 | 76.0 | 95.9 | 93.9 | 87.9 | 95.7
EdgeSenseSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Deep segmentation network with hard negative mining and other tricks. more details | n/a | 89.8 | 98.6 | 92.9 | 68.9 | 95.0 | 92.9 | 85.6 | 94.5 | ||
ScaleNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ScaleNet: Scale Invariant Network for Semantic Segmentation in Urban Driving Scenes | Mohammad Dawud Ansari, Stephan Krauß, Oliver Wasenmüller and Didier Stricker | International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 2018 | The scale difference in driving scenarios is one of the essential challenges in semantic scene segmentation. Close objects cover significantly more pixels than far objects. In this paper, we address this challenge with a scale invariant architecture. Within this architecture, we explicitly estimate the depth and adapt the pooling field size accordingly. Our model is compact and can be extended easily to other research domains. Finally, the accuracy of our approach is comparable to the state-of-the-art and superior for scale problems. We evaluate on the widely used automotive dataset Cityscapes as well as a self-recorded dataset. more details | n/a | 89.6 | 98.5 | 92.8 | 69.9 | 94.6 | 92.9 | 84.1 | 94.4
K-net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | XinLiang Zhong | more details | n/a | 88.8 | 98.5 | 92.3 | 66.4 | 94.6 | 92.4 | 83.0 | 94.4 | ||
MSNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | previously also listed as "MultiPathJoin" and "MultiPath_Scale". more details | 0.2 | 90.6 | 98.6 | 93.1 | 73.5 | 94.9 | 93.0 | 86.2 | 94.8 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 89.9 | 98.5 | 93.0 | 69.8 | 95.1 | 93.2 | 85.3 | 94.7 | |
DeepMotion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We propose a novel method based on convnets to extract multi-scale features in a large range particularly for solving street scene segmentation. more details | n/a | 90.7 | 98.7 | 93.3 | 72.1 | 95.4 | 93.6 | 86.2 | 95.5 | ||
SR-AIC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.3 | 98.7 | 93.4 | 75.1 | 95.5 | 94.0 | 86.9 | 95.7 | ||
Roadstar.ai_CV(SFNet) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Roadstar.ai-CV | Maosheng Ye, Guang Zhou, Tongyi Cao, YongTao Huang, Yinzi Chen | Same focus net (SFNet), based only on fine labels, with focus on the loss distribution and the same focus on every layer of the feature map. more details | 0.2 | 91.0 | 98.7 | 93.4 | 73.8 | 95.3 | 93.5 | 87.0 | 95.3 |
DFN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Learning a Discriminative Feature Network for Semantic Segmentation | Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, Nong Sang | arxiv | Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance 86.2% mean IOU on PASCAL VOC 2012 and 80.3% mean IOU on Cityscapes dataset. more details | n/a | 90.8 | 98.7 | 93.1 | 72.7 | 95.5 | 93.4 | 86.7 | 95.6 |
RelationNet_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RelationNet: Learning Deep-Aligned Representation for Semantic Image Segmentation | Yueqing Zhuang | ICPR | Semantic image segmentation, which assigns labels in pixel level, plays a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning. However, one central problem of these methods is that deep convolution neural network gives little consideration to the correlation among pixels. To handle this issue, in this paper, we propose a novel deep neural network named RelationNet, which utilizes CNN and RNN to aggregate context information. Besides, a spatial correlation loss is applied to supervise RelationNet to align features of spatial pixels belonging to same category. Importantly, since it is expensive to obtain pixel-wise annotations, we exploit a new training method for combining the coarsely and finely labeled data. Separate experiments show the detailed improvements of each proposal. Experimental results demonstrate the effectiveness of our proposed method to the problem of semantic image segmentation. more details | n/a | 91.8 | 98.8 | 93.6 | 76.1 | 95.8 | 94.1 | 87.9 | 95.9 |
ARSAIT | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | anonymous more details | 1.0 | 89.0 | 98.5 | 92.4 | 67.6 | 95.2 | 92.1 | 83.2 | 93.9 | ||
Mapillary Research: In-Place Activated BatchNorm | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | In-Place Activated BatchNorm for Memory-Optimized Training of DNNs | Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder | arXiv | In-Place Activated Batch Normalization (InPlace-ABN) is a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and by recovering required information during the backward pass through the inversion of stored forward results, with only minor increase (0.8-2%) in computation time. Test results are obtained using a single model. more details | n/a | 91.2 | 98.6 | 93.3 | 74.4 | 95.6 | 93.8 | 86.9 | 95.4 |
EFBNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 90.7 | 98.6 | 93.2 | 72.5 | 95.2 | 93.6 | 86.4 | 95.5 | ||
Ladder DenseNet v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Journal submission | Anonymous | DenseNet-121 model used in downsampling path with ladder-style skip connections upsampling path on top of it. more details | 1.0 | 90.8 | 98.7 | 93.3 | 73.3 | 95.5 | 93.4 | 86.4 | 95.3 | |
ESPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation | Sachin Mehta, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi | We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively. more details | 0.0089 | 82.2 | 95.5 | 89.5 | 52.9 | 92.5 | 86.7 | 69.8 | 88.4 |
ENet with the Lovász-Softmax loss | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks | Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko | arxiv | The Lovász-Softmax loss is a novel surrogate for optimizing the IoU measure in neural networks. Here we finetune the weights provided by the authors of ENet (arXiv:1606.02147) with this loss, for 10'000 iterations on training dataset. The runtimes are unchanged with respect to the ENet architecture. more details | 0.013 | 83.6 | 98.0 | 89.6 | 54.5 | 92.7 | 87.6 | 72.8 | 89.7 |
DRN_CRL_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Relation Network: Learning Consistent and Context-Aware Representation For Semantic Image Segmentation | Yueqing Zhuang | ICIP | DRN_Coarse. Semantic image segmentation, which aims at assigning pixel-wise categories, is one of the challenging image understanding problems. Global context plays an important role in local pixel-wise category assignment. To make the best of global context, in this paper, we propose dense relation network (DRN) and context-restricted loss (CRL) to aggregate global and local information. DRN uses Recurrent Neural Network (RNN) with different skip lengths in spatial directions to get context-aware representations while CRL helps aggregate them to learn consistency. Compared with previous methods, our proposed method takes full advantage of hierarchical contextual representations to produce high-quality results. Extensive experiments demonstrate that our method achieves significant state-of-the-art performance on Cityscapes and Pascal Context benchmarks, with mean-IoU of 82.8% and 49.0% respectively. more details | n/a | 91.8 | 98.8 | 93.7 | 76.0 | 95.8 | 94.2 | 88.1 | 96.1
ShuffleSeg | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ShuffleSeg: Real-time Semantic Segmentation Network | Mostafa Gamal, Mennatullah Siam, Mo'men Abdel-Razek | Under Review by ICIP 2018 | ShuffleSeg: An efficient realtime semantic segmentation network with skip connections and ShuffleNet units more details | n/a | 80.2 | 95.4 | 88.2 | 46.9 | 92.5 | 84.7 | 66.4 | 87.4 |
SkipNet-MobileNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RTSeg: Real-time Semantic Segmentation Framework | Mennatullah Siam, Mostafa Gamal, Moemen Abdel-Razek, Senthil Yogamani, Martin Jagersand | Under Review by ICIP 2018 | An efficient realtime semantic segmentation network with skip connections based on MobileNet. more details | n/a | 82.0 | 95.9 | 89.1 | 51.5 | 92.9 | 86.0 | 70.4 | 88.3 |
ThunderNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.0104 | 84.1 | 97.9 | 90.3 | 56.0 | 93.0 | 88.4 | 73.4 | 89.8 | ||
PAC: Perspective-adaptive Convolutions | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Perspective-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) | Many existing scene parsing methods adopt Convolutional Neural Networks with receptive fields of fixed sizes and shapes, which frequently results in inconsistent predictions of large objects and invisibility of small objects. To tackle this issue, we propose perspective-adaptive convolutions to acquire receptive fields of flexible sizes and shapes during scene parsing. Through adding a new perspective regression layer, we can dynamically infer the position-adaptive perspective coefficient vectors utilized to reshape the convolutional patches. Consequently, the receptive fields can be adjusted automatically according to the various sizes and perspective deformations of the objects in scene images. Our proposed convolutions are differentiable to learn the convolutional parameters and perspective coefficients in an end-to-end way without any extra training supervision of object sizes. Furthermore, considering that the standard convolutions lack contextual information and spatial dependencies, we propose a context adaptive bias to capture both local and global contextual information through average pooling on the local feature patches and global feature maps, followed by flexible attentive summing to the convolutional results. The attentive weights are position-adaptive and context-aware, and can be learned through adding an additional context regression layer. Experiments on Cityscapes and ADE20K datasets well demonstrate the effectiveness of the proposed methods. more details | n/a | 90.7 | 98.7 | 93.2 | 72.2 | 95.6 | 93.5 | 86.1 | 95.4 |
SU_Net | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 88.5 | 98.5 | 92.4 | 63.6 | 94.5 | 92.3 | 84.0 | 94.4 | ||
MobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | MobileNetV2Plus more details | n/a | 87.6 | 98.4 | 91.9 | 64.0 | 94.5 | 91.2 | 80.4 | 93.1 | ||
DeepLabv3+ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation | Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam | arXiv | Spatial pyramid pooling modules or encoder-decoder structures are used in deep neural networks for the semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We will provide more details in the coming update on the arXiv report. more details | n/a | 92.0 | 98.8 | 93.7 | 77.1 | 95.8 | 94.2 | 88.3 | 96.1
RFMobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | Receptive Field MobileNetV2Plus for Semantic Segmentation more details | n/a | 88.3 | 98.4 | 92.2 | 66.6 | 94.1 | 91.5 | 82.0 | 93.1 | |
GoogLeNetV1_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | GoogLeNet-v1 FCN trained on Cityscapes, KITTI, and ScanNet, as required by the Robust Vision Challenge at CVPR'18 (http://robustvision.net/) more details | n/a | 83.0 | 97.3 | 88.5 | 53.9 | 92.2 | 87.4 | 73.5 | 88.4 | ||
SAITv2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 84.5 | 98.0 | 90.1 | 54.7 | 93.7 | 89.6 | 73.5 | 91.9 | ||
GUNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Guided Upsampling Network for Real-Time Semantic Segmentation | Davide Mazzini | arxiv | Guided Upsampling Network for Real-Time Semantic Segmentation more details | 0.03 | 86.8 | 98.4 | 91.4 | 59.2 | 94.8 | 90.8 | 79.7 | 93.4 |
RMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | A fast and light net for semantic segmentation. more details | 0.014 | 84.6 | 98.0 | 90.2 | 58.1 | 93.3 | 88.5 | 73.6 | 90.3 | ||
ContextNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ContextNet: Exploring Context and Detail for Semantic Segmentation in Real-time | Rudra PK Poudel, Ujwal Bonde, Stephan Liwicki, Christopher Zach | arXiv | Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naive adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representations to produce competitive semantic segmentation in real-time with low memory requirements. ContextNet combines a deep branch at low resolution that captures global context information efficiently with a shallow branch that focuses on high-resolution segmentation details. We analyze our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024x2048) resolution. more details | 0.0238 | 82.8 | 97.8 | 89.6 | 47.7 | 92.0 | 88.9 | 72.6 | 90.8 |
RFLR | yes | yes | yes | yes | yes | yes | no | no | no | no | 4 | 4 | no | no | Random Forest with Learned Representations for Semantic Segmentation | Byeongkeun Kang, Truong Q. Nguyen | IEEE Transactions on Image Processing | Random Forest with Learned Representations for Semantic Segmentation more details | 0.03 | 60.2 | 93.8 | 73.6 | 15.7 | 84.6 | 61.0 | 29.8 | 63.3 |
DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Searching for Efficient Multi-Scale Architectures for Dense Image Prediction | Liang-Chieh Chen, Maxwell D. Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, Jonathon Shlens | NIPS 2018 | In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that achieve state-of-the-art performance. Additionally, the resulting architecture (called DPC for Dense Prediction Cell) is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems. more details | n/a | 92.0 | 98.8 | 93.6 | 76.9 | 95.4 | 94.1 | 88.7 | 96.2 |
NV-ADLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 92.1 | 98.8 | 94.0 | 77.6 | 96.1 | 94.4 | 88.0 | 96.1 | ||
Adaptive Affinity Field on PSPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Adaptive Affinity Field for Semantic Segmentation | Tsung-Wei Ke*, Jyh-Jing Hwang*, Ziwei Liu, Stella X. Yu | ECCV 2018 | Existing semantic segmentation methods mostly rely on per-pixel supervision, unable to capture structural regularity present in natural images. Instead of learning to enforce semantic labels on individual pixels, we propose to enforce affinity field patterns in individual pixel neighbourhoods, i.e., the semantic label patterns of whether neighbouring pixels are in the same segment should match between the prediction and the ground-truth. The affinity fields characterize geometric relationships within the image, such as "motorcycles have round wheels". We further develop a novel method for learning the optimal neighbourhood size for each semantic category, with an adversarial loss that optimizes over worst-case scenarios. Unlike the common Conditional Random Field (CRF) approaches, our adaptive affinity field (AAF) method has no extra parameters during inference, and is less sensitive to appearance changes in the image. more details | n/a | 90.8 | 98.7 | 93.4 | 72.5 | 95.6 | 93.5 | 86.9 | 95.3 |
APMoE_seg_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Pixel-wise Attentional Gating for Parsimonious Pixel Labeling | Shu Kong, Charless Fowlkes | arxiv | The Pixel-level Attentional Gating (PAG) unit is trained to choose for each pixel the pooling size to adopt to aggregate contextual region around it. There are multiple branches with different dilate rates for varied pooling size, thus varying receptive field. For this ROB challenge, PAG is expected to robustly aggregate information for final prediction. This is our entry for Robust Vision Challenge 2018 workshop (ROB). The model is based on ResNet50, trained over mixed dataset of Cityscapes, ScanNet and Kitti. more details | 0.9 | 83.5 | 96.9 | 90.2 | 53.3 | 92.3 | 88.1 | 74.6 | 88.9 |
BatMAN_ROB | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | batch-normalized multistage attention network more details | 1.0 | 83.9 | 97.9 | 89.5 | 55.0 | 94.2 | 88.1 | 72.0 | 90.3 | ||
HiSS_ROB | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.06 | 81.4 | 97.8 | 88.9 | 45.1 | 92.9 | 87.5 | 68.8 | 88.9 | ||
VENUS_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | VENUS_ROB more details | n/a | 84.5 | 96.7 | 90.7 | 52.9 | 93.8 | 89.5 | 76.7 | 91.6 | ||
VlocNet++_ROB | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 83.4 | 98.1 | 89.3 | 51.6 | 92.9 | 88.4 | 72.9 | 90.4 | ||
AHiSS_ROB | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Augmented Hierarchical Semantic Segmentation more details | 0.06 | 84.2 | 98.0 | 90.2 | 51.7 | 93.7 | 90.0 | 73.6 | 92.0 | ||
IBN-PSP-SA_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | IBN-PSP-SA_ROB more details | n/a | 89.1 | 98.6 | 92.5 | 67.1 | 95.4 | 92.6 | 83.3 | 94.6 | ||
LDN2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Ladder DenseNet: https://ivankreso.github.io/publication/ladder-densenet/ more details | 1.0 | 90.1 | 98.6 | 93.0 | 70.2 | 95.6 | 93.0 | 85.2 | 95.0 | ||
MiniNet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.004 | 70.5 | 96.0 | 81.8 | 25.8 | 89.0 | 77.2 | 46.5 | 77.3 | ||
AdapNetv2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.3 | 98.2 | 90.0 | 53.9 | 93.8 | 88.8 | 74.1 | 91.2 | ||
MapillaryAI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.1 | 98.7 | 93.5 | 74.1 | 95.8 | 93.8 | 86.7 | 95.3 | ||
FCN101_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 61.1 | 92.7 | 75.2 | 1.5 | 76.4 | 74.4 | 33.9 | 73.3 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name is firefly] | Bosch autodrive challenge more details | n/a | 87.2 | 98.3 | 91.6 | 65.8 | 88.3 | 90.7 | 82.4 | 93.7 | |
EnsembleModel_Bosch | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name was MaskRCNN_BOSH, firefly] | We've ensembled three models (ERFNet, DeepLab-MobileNet, TuSimple) and gained a 0.57 improvement in the IoU Classes value. The best single model achieves 73.8549. more details | n/a | 88.5 | 98.5 | 92.2 | 66.0 | 94.5 | 91.8 | 82.6 | 93.7 | |
EVANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 87.7 | 98.3 | 91.7 | 65.9 | 94.6 | 90.9 | 80.1 | 92.4 | ||
CLRCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CLRCNet: Cascaded Low-Rank Convolutions for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method. more details | 0.013 | 84.4 | 98.0 | 90.6 | 56.7 | 93.4 | 88.5 | 74.1 | 89.7 | |
Edgenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | A lightweight semantic segmentation network combined with edge information and channel-wise attention mechanism. more details | 0.03 | 88.5 | 98.4 | 92.1 | 68.5 | 94.9 | 91.5 | 80.9 | 93.1 | ||
L2-SP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Explicit Inductive Bias for Transfer Learning with Convolutional Networks | Xuhong Li, Yves Grandvalet, Franck Davoine | ICML-2018 | With a simple variant of weight decay, L2-SP regularization (see the paper for details), we reproduced PSPNet based on the original ResNet-101 using "train_fine + val_fine + train_extra" set (2975 + 500 + 20000 images), with a small batch size 8. The sync batch normalization layer is implemented in Tensorflow (see the code). more details | n/a | 91.0 | 98.7 | 93.4 | 73.6 | 95.6 | 93.7 | 86.6 | 95.6 |
ALV303 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.2 | 89.8 | 98.6 | 92.5 | 72.2 | 95.0 | 92.1 | 84.4 | 93.8 | ||
NCTU-ITRI | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | For the purpose of fast semantic segmentation, we design a CNN-based encoder-decoder architecture, which is called DSNet. The encoder part is constructed based on the concept of DenseNet, and a simple decoder is adopted to make the network more efficient without degrading the accuracy. We pre-train the encoder network on the ImageNet dataset. Then, only the fine-annotated Cityscapes dataset (2975 training images) is used to train the complete DSNet. The DSNet demonstrates a good trade-off between accuracy and speed. It can process 68 frames per second on 1024x512 resolution images on a single GTX 1080 Ti GPU. more details | 0.0147 | 86.8 | 98.3 | 91.4 | 62.9 | 94.3 | 90.4 | 77.6 | 92.5 | ||
ADSCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ADSCNet: Asymmetric Depthwise Separable Convolution for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method for mobile devices. more details | 0.013 | 84.9 | 98.0 | 90.7 | 57.7 | 93.5 | 88.9 | 74.9 | 90.3 | |
SRC-B-MachineLearningLab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | Samsung Research Center MachineLearningLab. The result is tested with multi-scale and flip. The paper is in preparation. more details | n/a | 91.8 | 98.8 | 93.7 | 76.4 | 95.9 | 94.1 | 87.9 | 95.9 | |
Tencent AI Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.8 | 98.7 | 93.6 | 77.0 | 95.9 | 94.1 | 87.6 | 95.8 | ||
ERINet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Efficient residual inception networks for real-time semantic segmentation more details | 0.023 | 87.4 | 98.2 | 91.6 | 64.9 | 94.7 | 90.6 | 79.2 | 92.3 | ||
PGCNet_Res101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We choose ResNet101 pretrained on ImageNet as our backbone, then use both the train-fine and the val-fine data to train our model with batch size 8 for 80k iterations without any bells and whistles. We will release our paper later. more details | n/a | 91.5 | 98.8 | 93.6 | 75.3 | 95.6 | 94.0 | 87.5 | 95.8 | |
EDANet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient Dense Modules of Asymmetric Convolution for Real-Time Semantic Segmentation | Shao-Yuan Lo (NCTU), Hsueh-Ming Hang (NCTU), Sheng-Wei Chan (ITRI), Jing-Jhih Lin (ITRI) | Training data: Fine annotations only (train+val. set, 2975+500 images) without any pretraining nor coarse annotations. For training on fine annotations (train set only, 2975 images), it attains a mIoU of 66.3%. Runtime: (resolution 512x1024) 0.0092s on a single GTX 1080Ti, 0.0123s on a single Titan X. more details | 0.0092 | 85.8 | 98.1 | 91.0 | 59.6 | 93.6 | 89.8 | 76.5 | 91.6 | |
OCNet_ResNet101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Context is essential for various computer vision tasks. The state-of-the-art scene parsing methods define the context as the prior of the scene categories (e.g., bathroom, bedroom, street). Such scene context is not suitable for the street scene parsing tasks as most of the scenes are similar. In this work, we propose the Object Context that captures the prior of the object's category that the pixel belongs to. We compute the object context by aggregating all the pixels' features according to an attention map that encodes, for each pixel, the probability that it belongs to the same category as the associated pixel. Specifically, we employ the self-attention method to compute the pixel-wise attention map. We further propose the Pyramid Object Context and Atrous Spatial Pyramid Object Context to handle the problem of multiple scales. more details | n/a | 91.6 | 98.8 | 93.6 | 75.7 | 95.8 | 94.0 | 87.7 | 95.9 | |
Knowledge-Aware | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Knowledge-Aware Semantic Segmentation more details | n/a | 90.7 | 98.7 | 93.1 | 72.6 | 95.7 | 93.4 | 86.0 | 95.5 | ||
CASIA_IVA_DANet_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Dual Attention Network for Scene Segmentation | Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu | CVPR2019 | We address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture contexts by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation, which contributes to more precise segmentation results. more details | n/a | 91.6 | 98.7 | 93.5 | 75.8 | 95.7 | 93.9 | 87.7 | 95.8
LDFNet | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Incorporating Luminance, Depth and Color Information by Fusion-based Networks for Semantic Segmentation | Shang-Wei Hung, Shao-Yuan Lo | We propose a preferred solution, which incorporates Luminance, Depth and color information by a Fusion-based network named LDFNet. It includes a distinctive encoder sub-network to process the depth maps and further employs the luminance images to assist the depth information in the process. LDFNet achieves very competitive results compared to the other state-of-the-art systems on the challenging Cityscapes dataset, while it maintains an inference speed faster than most of the existing top-performing networks. The experimental results show the effectiveness of the proposed information-fused approach and the potential of LDFNet for road scene understanding tasks. more details | n/a | 88.5 | 98.4 | 92.2 | 68.0 | 94.8 | 91.7 | 81.3 | 93.1 |
CGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tianyi Wu et al | we propose a novel Context Guided Network for semantic segmentation on mobile devices. We first design a Context Guided (CG) block by considering the inherent characteristic of semantic segmentation. CG Block aggregates local feature, surrounding context feature and global context feature effectively and efficiently. Based on the CG block, we develop Context Guided Network (CGNet), which not only has a strong capacity of localization and recognition, but also has a low computational and memory footprint. Under a similar number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, the proposed approach achieves 64.8% mean IoU on Cityscapes test set with less than 0.5 M parameters, and has a frame-rate of 50 fps on one NVIDIA Tesla K80 card for 2048 × 1024 high-resolution image. more details | 0.02 | 85.7 | 97.7 | 91.3 | 59.3 | 94.1 | 90.2 | 77.4 | 90.3 | ||
SAITv2-light | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 87.4 | 98.4 | 92.0 | 63.4 | 94.5 | 91.4 | 78.9 | 93.1 | ||
Deform_ResNet_Balanced | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.258 | 78.0 | 94.5 | 88.1 | 41.3 | 88.2 | 83.3 | 63.4 | 87.5 | ||
NfS-Seg | yes | yes | yes | yes | no | no | yes | yes | yes | yes | no | no | no | no | Uncertainty-Aware Knowledge Distillation for Real-Time Scene Segmentation: 7.43 GFLOPs at Full-HD Image with 120 fps | Anonymous | more details | 0.00837312 | 87.5 | 98.4 | 92.0 | 64.0 | 94.6 | 91.4 | 79.0 | 93.2 | |
Improving Semantic Segmentation via Video Propagation and Label Relaxation | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | yes | yes | Improving Semantic Segmentation via Video Propagation and Label Relaxation | Yi Zhu, Karan Sapra, Fitsum A. Reda, Kevin J. Shih, Shawn Newsam, Andrew Tao, Bryan Catanzaro | CVPR 2019 | Semantic segmentation requires large amounts of pixel-wise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models' ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples lead to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. more details | n/a | 92.2 | 98.8 | 93.9 | 78.0 | 96.1 | 94.4 | 88.2 | 96.1 |
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as a post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 85.9 | 98.2 | 91.1 | 58.3 | 94.0 | 90.4 | 77.3 | 92.4
SwiftNetRN-18 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 0.0243 | 89.8 | 98.6 | 92.8 | 70.0 | 95.4 | 92.6 | 84.7 | 94.7 |
Fast-SCNN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra PK Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0081 | 84.7 | 98.2 | 90.3 | 55.1 | 94.3 | 89.6 | 74.2 | 91.4 | |
Fast-SCNN (Half-resolution) | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0035 | 80.5 | 97.7 | 88.0 | 42.5 | 92.7 | 87.3 | 65.9 | 89.4 | |
Fast-SCNN (Quarter-resolution) | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.00206 | 74.2 | 96.8 | 83.9 | 25.4 | 89.5 | 82.9 | 55.6 | 85.1 | |
DSNet | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | DSNet for Real-Time Driving Scene Semantic Segmentation | Wenfu Wang | DSNet for Real-Time Driving Scene Semantic Segmentation more details | 0.027 | 86.0 | 97.7 | 90.5 | 63.1 | 93.5 | 89.9 | 76.1 | 91.3 | |
SwiftNetRN-18 pyramid | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 89.5 | 98.6 | 92.3 | 69.1 | 95.3 | 92.2 | 84.3 | 94.6 | ||
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | DF1-Seg-d8 more details | 0.007 | 86.6 | 98.1 | 91.3 | 61.0 | 93.9 | 91.0 | 78.9 | 92.3 |
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DF2-Seg2 more details | 0.018 | 89.2 | 98.5 | 92.5 | 68.7 | 95.3 | 92.5 | 82.9 | 94.3 | ||
DDAR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | DiDi Labs, AR Group more details | n/a | 91.9 | 98.7 | 93.5 | 76.7 | 95.8 | 94.1 | 88.3 | 95.9 | ||
LDN-121 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-121 trained on train+val, fine labels only. Single-scale inference. more details | 0.048 | 90.7 | 98.7 | 93.2 | 72.5 | 95.7 | 93.3 | 86.1 | 95.4 | |
TKCN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tree-structured Kronecker Convolutional Network for Semantic Segmentation | Tianyi Wu, Sheng Tang, Rui Zhang, Juan Cao, Jintao Li | more details | n/a | 91.1 | 98.6 | 93.3 | 74.1 | 95.4 | 93.5 | 87.3 | 95.2 | |
RPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Residual Pyramid Learning for Single-Shot Semantic Segmentation | Xiaoyu Chen, Xiaotian Lou, Lianfa Bai, Jing Han | arXiv | We put forward a method for single-shot segmentation in a feature residual pyramid network (RPNet), which learns the main and residuals of segmentation by decomposing the label at different levels of residual blocks. more details | 0.008 | 86.8 | 98.2 | 91.3 | 63.2 | 94.5 | 90.2 | 78.6 | 91.7
navi | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | yuxb | multi-scale test more details | n/a | 91.2 | 98.8 | 93.5 | 74.0 | 95.6 | 93.7 | 87.0 | 95.6 | |
Auto-DeepLab-L | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation | Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan Yuille, Li Fei-Fei | arxiv | In this work, we study Neural Architecture Search for semantic image segmentation, an important computer vision task that assigns a semantic label to every pixel in an image. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Without any ImageNet pretraining, our architecture searched specifically for semantic image segmentation attains state-of-the-art performance. Please refer to https://arxiv.org/abs/1901.02985 for details. more details | n/a | 91.9 | 98.8 | 93.7 | 76.7 | 96.0 | 94.2 | 87.8 | 96.0 |
LiteSeg-Darknet19 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0102 | 88.3 | 98.4 | 92.3 | 65.9 | 95.0 | 91.7 | 81.7 | 93.0
AdapNet++ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | In this work, we propose the AdapNet++ architecture for semantic segmentation that aims to achieve the right trade-off between performance and computational complexity of the model. AdapNet++ incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling (eASPP) module that has a larger effective receptive field with more than 10x fewer parameters compared to the standard ASPP, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance while simultaneously being efficient in terms of both the number of parameters and inference time. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 91.0 | 98.7 | 93.2 | 73.7 | 95.3 | 93.6 | 86.7 | 95.8 |
SSMA | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real-world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed SSMA fusion mechanism which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. Extensive experimental evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance in addition to providing exceptional robustness in adverse perceptual conditions. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 91.5 | 98.7 | 93.5 | 75.2 | 95.3 | 93.9 | 87.8 | 96.1
LiteSeg-Mobilenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0062 | 86.8 | 97.9 | 91.7 | 62.8 | 94.6 | 90.4 | 79.3 | 90.9
LiteSeg-Shufflenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.007518 | 85.4 | 97.9 | 91.2 | 57.4 | 94.0 | 89.7 | 77.3 | 90.3
Fast OCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.7 | 98.8 | 93.5 | 76.2 | 95.6 | 94.0 | 87.6 | 95.9 | ||
ShuffleNet v2 + DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | An efficient solution for semantic segmentation: ShuffleNet V2 with atrous separable convolutions | Sercan Turkmen, Janne Heikkila | ShuffleNet v2 with DPC at output_stride 16. more details | n/a | 86.5 | 98.3 | 91.3 | 59.5 | 93.9 | 90.9 | 78.6 | 93.0 | |
ERSNet-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.012 | 85.9 | 98.2 | 91.1 | 60.5 | 94.5 | 89.9 | 75.5 | 91.8 | ||
MiniNet-v2-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 86.1 | 98.2 | 91.2 | 60.7 | 94.5 | 90.1 | 75.9 | 92.1 | ||
SwiftNetRN-18 ensemble | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | n/a | 90.1 | 98.6 | 92.8 | 70.6 | 95.5 | 92.8 | 85.2 | 95.0 |
EFC_sync | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 90.9 | 98.7 | 93.3 | 72.9 | 95.6 | 93.6 | 86.8 | 95.5 | ||
PL-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | Following "partial order pruning", we conducted architecture search experiments on the Snapdragon 845 platform and obtained PL1A/PL1A-Seg. 1) Snapdragon 845; 2) NCNN library; 3) latency evaluated at 640x384. more details | 0.0192 | 86.4 | 98.2 | 91.4 | 59.8 | 94.7 | 90.4 | 78.2 | 92.0
MiniNet-v2-pretrained | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 86.2 | 98.3 | 91.3 | 60.9 | 94.5 | 90.2 | 76.2 | 92.1 | ||
GALD-Net | yes | yes | yes | yes | yes | yes | yes | yes | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively according to the local information around the position. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research ) more details | n/a | 92.3 | 98.8 | 93.8 | 78.3 | 96.0 | 94.4 | 88.5 | 96.1 |
GALD-net | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively according to the local information surrounding the position. more details | n/a | 92.2 | 98.8 | 93.8 | 78.0 | 96.0 | 94.4 | 88.5 | 96.1
ndnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.024 | 84.5 | 98.1 | 90.5 | 53.6 | 93.7 | 89.2 | 74.7 | 91.4 | ||
HRNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions | Ke Sun, Yang Zhao, Borui Jiang, Tianheng Cheng, Bin Xiao, Dong Liu, Yadong Mu, Xinggang Wang, Wenyu Liu, Jingdong Wang | The high-resolution network (HRNet) recently developed for human pose estimation, maintains high-resolution representations through the whole process by connecting high-to-low resolution convolutions in parallel and produces strong high-resolution representations by repeatedly conducting fusions across parallel convolutions. more details | n/a | 92.2 | 98.8 | 93.7 | 77.7 | 96.0 | 94.3 | 88.6 | 96.0 | |
SPGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SPGNet: Semantic Prediction Guidance for Scene Parsing | Bowen Cheng, Liang-Chieh Chen, Yunchao Wei, Yukun Zhu, Zilong Huang, Jinjun Xiong, Thomas Huang, Wen-Mei Hwu, Honghui Shi | ICCV 2019 | Multi-scale context module and single-stage encoder-decoder structure are commonly employed for semantic segmentation. The multi-scale context module refers to the operations to aggregate feature responses from a large spatial extent, while the single-stage encoder-decoder structure encodes the high-level semantic information in the encoder path and recovers the boundary information in the decoder path. In contrast, multi-stage encoder-decoder networks have been widely used in human pose estimation and show superior performance compared to their single-stage counterparts. However, few attempts have been made to bring this effective design to semantic segmentation. In this work, we propose a Semantic Prediction Guidance (SPG) module which learns to re-weight the local features through the guidance from pixel-wise semantic prediction. We find that by carefully re-weighting features across stages, a two-stage encoder-decoder network coupled with our proposed SPG module can significantly outperform its one-stage counterpart with similar parameters and computations. Finally, we report experimental results on the semantic segmentation benchmark Cityscapes, in which our SPGNet attains 81.1% on the test set using only 'fine' annotations. more details | n/a | 92.1 | 98.8 | 93.8 | 77.5 | 96.1 | 94.2 | 88.8 | 95.9
LDN-161 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-161 trained on train+val, fine labels only. Inference on multi-scale inputs. more details | 2.0 | 91.3 | 98.7 | 93.4 | 74.3 | 95.8 | 93.8 | 87.0 | 95.8 | |
GGCF | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 92.0 | 98.8 | 93.8 | 77.0 | 95.8 | 94.3 | 88.2 | 96.0 | ||
GFF-Net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | GFF: Gated Fully Fusion for Semantic Segmentation | Xiangtai Li, Houlong Zhao, Yunhai Tong, Kuiyuan Yang | We proposed Gated Fully Fusion (GFF) to fuse features from multiple levels through gates in a fully connected way. Specifically, features at each level are enhanced by higher-level features with stronger semantics and lower-level features with more details, and gates are used to control the passing of useful information, which significantly reduces noise propagation during fusion. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research) more details | n/a | 92.0 | 98.8 | 93.7 | 77.2 | 95.9 | 94.2 | 88.4 | 96.0 |
Gated-SCNN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Gated-SCNN: Gated Shape CNNs for Semantic Segmentation | Towaki Takikawa, David Acuna, Varun Jampani, Sanja Fidler | more details | n/a | 92.3 | 98.8 | 94.0 | 78.3 | 96.2 | 94.5 | 88.4 | 96.2 | |
ESPNetv2 | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network | Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi | CVPR 2019 | We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the PenTree bank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2 more details | n/a | 84.3 | 97.9 | 90.1 | 56.2 | 93.3 | 88.9 | 73.3 | 90.7 |
MRFM | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Multi Receptive Field Network for Semantic Segmentation | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | WACV 2020 | Semantic segmentation is one of the key tasks in computer vision, which is to assign a category label to each pixel in an image. Despite significant progress achieved recently, most existing methods still suffer from two challenging issues: 1) the size of objects and stuff in an image can be very diverse, demanding for incorporating multi-scale features into the fully convolutional networks (FCNs); 2) the pixels close to or at the boundaries of object/stuff are hard to classify due to the intrinsic weakness of convolutional networks. To address the first issue, we propose a new Multi-Receptive Field Module (MRFM), explicitly taking multi-scale features into account. For the second issue, we design an edge-aware loss which is effective in distinguishing the boundaries of object/stuff. With these two designs, our Multi Receptive Field Network achieves new state-of-the-art results on two widely-used semantic segmentation benchmark datasets. Specifically, we achieve a mean IoU of 83.0% on the Cityscapes dataset and 88.4% mean IoU on the Pascal VOC2012 dataset. more details | n/a | 92.0 | 98.8 | 93.8 | 77.6 | 95.7 | 94.1 | 88.4 | 95.9
DGCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Dual Graph Convolutional Network for Semantic Segmentation | Li Zhang*, Xiangtai Li*, Anurag Arnab, Kuiyuan Yang, Yunhai Tong, Philip H.S. Torr | BMVC 2019 | We propose the Dual Graph Convolutional Network (DGCNet), which models the global context of the input feature by modelling two orthogonal graphs in a single framework. (Joint work: University of Oxford, Peking University and DeepMotion AI Research) more details | n/a | 91.8 | 98.8 | 93.6 | 76.6 | 95.8 | 94.1 | 88.0 | 95.9
dpcan_trainval_os16_225 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.7 | 98.8 | 93.5 | 75.9 | 95.7 | 94.0 | 88.0 | 95.8 | ||
Learnable Tree Filter | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Learnable Tree Filter for Structure-preserving Feature Transform | Lin Song; Yanwei Li; Zeming Li; Gang Yu; Hongbin Sun; Jian Sun; Nanning Zheng | NeurIPS 2019 | Learnable Tree Filter for Structure-preserving Feature Transform more details | n/a | 91.6 | 98.7 | 93.4 | 75.7 | 95.8 | 93.8 | 88.0 | 95.7 |
FreeNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.1 | 96.6 | 91.2 | 56.1 | 93.5 | 89.3 | 77.3 | 92.0 | ||
HRNetV2 + OCR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions; OCNet: Object Context Network for Scene Parsing | HRNet Team; OCR Team | HRNetV2W48 + OCR. OCR is an extension of object context networks https://arxiv.org/pdf/1809.00916.pdf more details | n/a | 92.1 | 98.8 | 93.8 | 77.7 | 96.0 | 94.4 | 88.2 | 96.1 | |
Valeo DAR Germany | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Valeo DAR Germany, New Algo Lab more details | n/a | 92.0 | 98.8 | 93.7 | 76.9 | 95.4 | 94.1 | 88.9 | 96.2 | ||
GLNet_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | The proposed network architecture combines spatial information with multi-scale context information, and repairs the boundaries and details of segmented objects through channel attention modules. (Uses the train-fine and the val-fine data.) more details | n/a | 91.3 | 98.7 | 93.4 | 74.4 | 95.9 | 93.8 | 87.3 | 95.7 | |
MCDN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.5 | 98.7 | 93.5 | 75.8 | 95.8 | 93.9 | 87.2 | 95.6 | ||
AAF+GLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 90.9 | 98.7 | 93.3 | 73.9 | 95.6 | 93.3 | 86.6 | 95.2 | ||
HRNetV2 + OCR (w/ ASP) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | openseg-group (OCR team + HRNet team) | Our approach is based on a single HRNet48V2 and an OCR module combined with ASPP. We apply depth-based multi-scale ensemble weights during testing (provided by DeepMotion AI Research). more details | n/a | 92.4 | 98.8 | 93.9 | 78.7 | 96.0 | 94.5 | 88.6 | 96.1 | |
CASIA_IVA_DRANet-101_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 92.4 | 98.8 | 93.9 | 78.4 | 96.0 | 94.4 | 88.9 | 96.1 | ||
Hyundai Mobis AD Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Hyundai Mobis AD Lab, DL-DB Group, AA (Automated Annotator) Team | more details | n/a | 92.4 | 98.8 | 94.0 | 78.3 | 96.1 | 94.5 | 88.7 | 96.3 | ||
EFRNet-13 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0146 | 87.0 | 98.3 | 91.8 | 61.5 | 94.7 | 91.0 | 78.9 | 92.7 | ||
FarSee-Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale Context Aggregation and Feature Space Super-resolution | Zhanpeng Zhang and Kaipeng Zhang | IEEE International Conference on Robotics and Automation (ICRA) 2020 | Real-time semantic segmentation is desirable in many robotic applications with limited computation resources. One challenge of semantic segmentation is to deal with the object scale variations and leverage the context. How to perform multi-scale context aggregation within a limited computation budget is important. In this paper, firstly, we introduce a novel and efficient module called Cascaded Factorized Atrous Spatial Pyramid Pooling (CF-ASPP). It is a lightweight cascaded structure for Convolutional Neural Networks (CNNs) to efficiently leverage context information. On the other hand, for runtime efficiency, state-of-the-art methods will quickly decrease the spatial size of the inputs or feature maps in the early network stages. The final high-resolution result is usually obtained by a non-parametric up-sampling operation (e.g. bilinear interpolation). Differently, we rethink this pipeline and treat it as a super-resolution process. We use an optimized super-resolution operation in the up-sampling step and improve the accuracy, especially in the sub-sampled input image scenario for real-time applications. By fusing the above two improvements, our methods provide a better latency-accuracy trade-off than the other state-of-the-art methods. In particular, we achieve 68.4% mIoU at 84 fps on the Cityscapes test set with a single Nvidia Titan X (Maxwell) GPU card. The proposed module can be plugged into any feature extraction CNN and benefits from the CNN structure development. more details | 0.0119 | 85.9 | 98.1 | 90.6 | 60.1 | 94.0 | 89.9 | 76.5 | 92.2
C3Net [2,3,7,13] | no | no | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | C3: Concentrated-Comprehensive Convolution and its application to semantic segmentation | Hyojin Park, Youngjoon Yoo, Geonseok Seo, Dongyoon Han, Sangdoo Yun, Nojun Kwak | more details | n/a | 84.7 | 97.8 | 90.5 | 58.8 | 93.5 | 88.8 | 73.5 | 89.8 | |
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. more details | n/a | 91.7 | 98.7 | 93.5 | 76.5 | 95.7 | 93.9 | 88.4 | 95.5 | |
EKENet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0229 | 87.2 | 98.4 | 91.8 | 62.0 | 94.7 | 91.2 | 79.1 | 93.0 | ||
SPSSN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Stage Pooling Semantic Segmentation Network more details | n/a | 86.0 | 98.1 | 91.1 | 60.3 | 94.2 | 90.0 | 76.6 | 91.8 | ||
FC-HarDNet-70 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | HarDNet: A Low Memory Traffic Network | Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, Youn-Long Lin | ICCV 2019 | Fully Convolutional Harmonic DenseNet 70. U-shape encoder-decoder structure with HarDNet blocks. Trained with single-scale loss at stride 4; validation mIoU = 77.7. more details | 0.015 | 89.9 | 98.7 | 92.8 | 70.5 | 95.4 | 92.7 | 84.6 | 94.9
BFP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Boundary-Aware Feature Propagation for Scene Segmentation | Henghui Ding, Xudong Jiang, Ai Qun Liu, Nadia Magnenat Thalmann, and Gang Wang | IEEE International Conference on Computer Vision (ICCV), 2019 | Boundary-Aware Feature Propagation for Scene Segmentation more details | n/a | 91.4 | 98.7 | 93.4 | 75.2 | 95.5 | 93.9 | 87.3 | 95.6 |
FasterSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | FasterSeg: Searching for Faster Real-time Semantic Segmentation | Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang | ICLR 2020 | We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomena that the searched networks are prone to "collapsing" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student model's accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy. more details | 0.00613 | 88.1 | 98.2 | 92.1 | 65.6 | 94.5 | 91.6 | 81.8 | 92.8
VCD-NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 92.3 | 98.8 | 93.8 | 78.3 | 95.9 | 94.3 | 88.8 | 96.1 | ||
NAVINFO_DLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | pengfei zhang | Weighted ASPP + OHEM + hard region refinement. more details | n/a | 92.4 | 98.8 | 93.9 | 78.7 | 95.9 | 94.4 | 89.1 | 96.3 | |
LBPSS | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | CVPR 2020 submission #5455 more details | 0.9 | 84.3 | 97.9 | 90.0 | 58.9 | 93.1 | 88.1 | 72.8 | 89.4 | ||
KANet_Res101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.6 | 98.7 | 93.4 | 75.7 | 95.7 | 93.9 | 87.6 | 95.8 | ||
Learnable Tree Filter V2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Rethinking Learnable Tree Filter for Generic Feature Transform | Lin Song, Yanwei Li, Zhengkai Jiang, Zeming Li, Xiangyu Zhang, Hongbin Sun, Jian Sun, Nanning Zheng | NeurIPS 2020 | Based on ResNet-101 backbone and FPN architecture. more details | n/a | 92.1 | 98.7 | 93.6 | 77.5 | 95.9 | 94.1 | 88.7 | 96.0 |
GPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 92.0 | 98.8 | 93.8 | 77.0 | 95.8 | 94.2 | 88.1 | 96.1 | ||
FTFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | An Efficient Network Focused on Tiny Feature Maps for Real-Time Semantic Segmentation more details | 0.0088 | 87.6 | 98.4 | 92.0 | 62.7 | 94.7 | 91.4 | 80.5 | 93.6 | ||
iFLYTEK-CV | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 92.4 | 98.8 | 94.1 | 78.5 | 96.0 | 94.6 | 88.8 | 96.3 | ||
F2MF-short | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 3 timesteps into the future. more details | n/a | 82.5 | 97.3 | 89.2 | 54.5 | 91.6 | 89.2 | 66.2 | 89.2 |
HPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | High-Order Paired-ASPP Networks for Semantic Segmentation | Yu Zhang, Xin Sun, Junyu Dong, Changrui Chen, Yue Shen | more details | n/a | 91.4 | 98.7 | 93.6 | 74.7 | 95.7 | 93.9 | 87.3 | 95.6 | |
HANet (fine-train only) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | TBA | Anonymous | We use only fine-training data. more details | n/a | 91.2 | 98.8 | 93.6 | 73.8 | 95.8 | 93.9 | 87.1 | 95.8 | |
F2MF-mid | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 9 timesteps into the future. more details | n/a | 72.4 | 95.3 | 83.7 | 32.4 | 86.0 | 83.4 | 46.3 | 79.4 |
EMANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Expectation Maximization Attention Networks for Semantic Segmentation | Xia Li, Zhisheng Zhong, Jianlong Wu, Yibo Yang, Zhouchen Lin, Hong Liu | ICCV 2019 | more details | n/a | 91.6 | 98.7 | 93.6 | 75.9 | 95.7 | 94.0 | 87.8 | 95.6 |
PartnerNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | PartnerNet: A Lightweight and Efficient Partner Network for Semantic Segmentation more details | 0.0058 | 88.2 | 98.4 | 92.1 | 65.7 | 94.6 | 91.5 | 82.1 | 93.2 | |
SwiftNet RN18 pyr sepBN MVD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient semantic segmentation with pyramidal fusion | M Oršić, S Šegvić | Pattern Recognition 2020 | more details | 0.029 | 90.3 | 98.6 | 93.0 | 72.3 | 95.8 | 93.1 | 84.3 | 94.9 |
Tencent YYB VisualAlgo | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Tencent YYB VisualAlgo Group more details | n/a | 92.2 | 98.8 | 93.9 | 77.7 | 96.1 | 94.4 | 88.1 | 96.1 | ||
MoKu Lab | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Alibaba, MoKu AI Lab, CV Group more details | n/a | 92.6 | 98.9 | 94.1 | 79.3 | 96.2 | 94.7 | 89.0 | 96.3 | ||
HRNetV2 + OCR + SegFix | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Object-Contextual Representations for Semantic Segmentation | Yuhui Yuan, Xilin Chen, Jingdong Wang | First, we pre-train "HRNet+OCR" method on the Mapillary training set (achieves 50.8% on the Mapillary val set). Second, we fine-tune the model with the Cityscapes training, validation and coarse set. Finally, we apply the "SegFix" scheme to further improve the results. more details | n/a | 92.7 | 98.8 | 94.0 | 79.2 | 96.1 | 94.6 | 89.4 | 96.5 | |
DecoupleSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improving Semantic Segmentation via Decoupled Body and Edge Supervision | Xiangtai Li, Xia Li, Li Zhang, Guangliang Cheng, Jianping Shi, Zhouchen Lin, Shaohua Tan, and Yunhai Tong | ECCV-2020 | In this paper, we propose a new paradigm for semantic segmentation. Our insight is that appealing performance of semantic segmentation requires explicitly modeling the object body and edge, which correspond to the high and low frequency of the image. To do so, we first warp the image feature by learning a flow field to make the object part more consistent. The resulting body feature and the residual edge feature are further optimized under decoupled supervision by explicitly sampling different parts (body or edge) pixels. The code and models have been released. more details | n/a | 92.3 | 98.8 | 94.0 | 77.8 | 96.1 | 94.5 | 88.6 | 96.3
LGE A&B Center: HANet (ResNet-101) | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val", No coarse, Backbone: ImageNet pretrained ResNet-101 more details | n/a | 92.0 | 98.8 | 93.7 | 76.8 | 96.1 | 94.2 | 88.1 | 96.0 |
DCNAS | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation | Xiong Zhang, Hongmin Xu, Hong Mo, Jianchao Tan, Cheng Yang, Wenqi Ren | Neural Architecture Search (NAS) has shown great potentials in automatically designing scalable network architectures for dense image predictions. However, existing NAS algorithms usually compromise on restricted search space and search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between target and proxy dataset, we propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module to reduce the memory consumption of ample search space. more details | n/a | 91.9 | 98.8 | 93.8 | 77.7 | 94.0 | 94.4 | 88.3 | 96.1 | |
GPNet-ResNet101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.9 | 98.8 | 93.7 | 76.4 | 95.9 | 94.2 | 88.4 | 96.1 | ||
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 91.6 | 98.7 | 93.4 | 76.5 | 95.6 | 93.8 | 88.1 | 95.3 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 92.3 | 98.7 | 93.7 | 78.6 | 96.1 | 94.4 | 88.9 | 95.9 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 91.5 | 98.7 | 93.2 | 76.0 | 95.7 | 93.6 | 88.1 | 95.2 |
LGE A&B Center: HANet (ResNext-101) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val + coarse", Backbone: Mapillary pretrained ResNext-101 more details | n/a | 92.1 | 98.8 | 93.8 | 77.3 | 96.1 | 94.3 | 88.0 | 96.1 |
ERINet-v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Residual Inception Network | MINJONG KIM, SUYOUNG CHI | ongoing | more details | 0.00526316 | 85.7 | 98.2 | 91.2 | 58.9 | 93.9 | 89.8 | 76.6 | 91.6 |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated for several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 92.9 | 98.8 | 94.1 | 80.2 | 96.2 | 94.8 | 89.8 | 96.4 | |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 92.6 | 98.8 | 93.9 | 79.6 | 96.1 | 94.7 | 89.3 | 96.1 |
TUE-5LSM0-g23 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Deeplabv3+decoder more details | n/a | 81.8 | 95.7 | 88.8 | 51.5 | 91.1 | 85.7 | 70.6 | 89.4 | ||
PBRNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | modified MobileNetV2 backbone + Prediction and Boundary attention-based Refinement Module (PBRM) more details | 0.0107 | 88.7 | 98.5 | 92.6 | 66.2 | 94.7 | 91.9 | 83.2 | 93.4 | ||
ResNeSt200 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ResNeSt: Split-Attention Networks | Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola | DeepLabV3+ network with ResNeSt200 backbone. more details | n/a | 92.3 | 98.8 | 93.9 | 77.8 | 96.3 | 94.6 | 88.5 | 96.2 | |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 92.9 | 98.8 | 93.9 | 80.2 | 96.1 | 94.7 | 89.7 | 96.4 | |
EaNet-V1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Parsing Very High Resolution Urban Scene Images by Learning Deep ConvNets with Edge-Aware Loss | Xianwei Zheng, Linxi Huan, Gui-Song Xia, Jianya Gong | Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task necessary for interpreting and understanding urban scenes. However, due to the huge quantity of details contained in an image and the large variations of objects in scale and appearance, the existing semantic segmentation methods often break one object into pieces, or confuse adjacent objects and thus fail to depict these objects consistently. To address this issue, we propose a concise and effective edge-aware neural network (EaNet) for urban scene semantic segmentation. The proposed EaNet model is deployed as a standard balanced encoder-decoder framework. Specifically, we devised two plug-and-play modules that append on top of the encoder and decoder respectively, i.e., the large kernel pyramid pooling (LKPP) and the edge-aware loss (EA loss) function, to extend the model ability in learning discriminating features. The LKPP module captures rich multi-scale context with strong continuous feature relations to promote coherent labeling of multi-scale urban objects. The EA loss module learns edge information directly from semantic segmentation prediction, which avoids costly post-processing or extra edge detection. During training, EA loss imposes a strong geometric awareness to guide object structure learning at both the pixel- and image-level, and thus effectively separates confusing objects with sharp contours. more details | n/a | 91.2 | 98.8 | 93.5 | 73.8 | 95.7 | 93.9 | 86.8 | 95.8 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 92.5 | 98.8 | 93.9 | 78.8 | 96.0 | 94.6 | 88.8 | 96.3 | |
FSFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Accelerator-Aware Fast Spatial Feature Network for Real-Time Semantic Segmentation | Minjong Kim, Byungjae Park, Suyoung Chi | IEEE Access | Semantic segmentation is performed to understand an image at the pixel level; it is widely used in the field of autonomous driving. In recent years, deep neural networks achieve good accuracy performance; however, there exist few models that have a good trade-off between high accuracy and low inference time. In this paper, we propose a fast spatial feature network (FSFNet), an optimized lightweight semantic segmentation model using an accelerator, offering high performance as well as faster inference speed than current methods. FSFNet employs the FSF and MRA modules. The FSF module has three different types of subset modules to extract spatial features efficiently. They are designed in consideration of the size of the spatial domain. The multi-resolution aggregation module combines features that are extracted at different resolutions to reconstruct the segmentation image accurately. Our approach is able to run at over 203 FPS at full resolution (1024 x 2048) on a single NVIDIA 1080Ti GPU, and obtains a result of 69.13% mIoU on the Cityscapes test dataset. Compared with existing models in real-time semantic segmentation, our proposed model retains remarkable accuracy while having high FPS that is over 30% faster than the state-of-the-art model. The experimental results proved that our model is an ideal approach for the Cityscapes dataset. more details | 0.0049261 | 86.6 | 98.2 | 91.6 | 61.2 | 94.2 | 90.4 | 78.5 | 92.0
Hierarchical Multi-Scale Attention for Semantic Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Hierarchical Multi-Scale Attention for Semantic Segmentation | Andrew Tao, Karan Sapra, Bryan Catanzaro | Multi-scale inference is commonly used to improve the results of semantic segmentation. Multiple image scales are passed through a network and then the results are combined with averaging or max pooling. In this work, we present an attention-based approach to combining multi-scale predictions. We show that predictions at certain scales are better at resolving particular failure modes and that the network learns to favor those scales for such cases in order to generate better predictions. Our attention mechanism is hierarchical, which enables it to be roughly 4x more memory efficient to train than other recent approaches. In addition to enabling faster training, this allows us to train with larger crop sizes, which leads to greater model accuracy. We demonstrate the result of our method on two datasets: Cityscapes and Mapillary Vistas. For Cityscapes, which has a large number of weakly labelled images, we also leverage auto-labelling to improve generalization. Using our approach we achieve new state-of-the-art results in both Mapillary (61.1 IOU val) and Cityscapes (85.4 IOU test). more details | n/a | 93.2 | 98.9 | 94.3 | 80.9 | 96.3 | 94.9 | 90.2 | 96.6 |
SANet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 25.0 | 91.4 | 98.8 | 93.6 | 74.3 | 95.8 | 93.9 | 87.4 | 95.8 | ||
SJTU_hpm | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu*, Haohua Zhao, and Liqing Zhang | more details | n/a | 91.4 | 98.8 | 93.5 | 75.2 | 95.9 | 93.9 | 86.9 | 95.8 | |
FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FANet: Feature Aggregation Network for Semantic Segmentation | Tanmay Singha, Duc-Son Pham, and Aneesh Krishna | Feature Aggregation Network for Semantic Segmentation more details | n/a | 83.1 | 96.2 | 89.8 | 52.2 | 93.0 | 88.1 | 72.2 | 90.1 | |
Hard Pixel Mining for Depth Privileged Semantic Segmentation | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu, Haohua Zhao, and Liqing Zhang | Semantic segmentation has achieved remarkable progress but remains challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as a depth map to help RGB based semantic segmentation because the depth map could provide complementary geometric cues. However, due to the inaccessibility of depth sensors, depth information is usually unavailable for the test images. In this paper, we leverage only the depth of training images as the privileged information to mine the hard pixels in semantic segmentation, in which depth information is only available for training images but not available for test images. Specifically, we propose a novel Loss Weight Module, which outputs a loss weight map by employing two depth-related measurements of hard pixels: Depth Prediction Error and Depth-aware Segmentation Error. The loss weight map is then applied to the segmentation loss, with the goal of learning a more robust model by paying more attention to the hard pixels. Besides, we also explore a curriculum learning strategy based on the loss weight map. Meanwhile, to fully mine the hard pixels on different scales, we apply our loss weight module to multi-scale side outputs. Our hard pixel mining method achieves the state-of-the-art results on three benchmark datasets, and even outperforms the methods which need depth input during testing. more details | n/a | 92.3 | 98.8 | 93.9 | 78.2 | 96.1 | 94.4 | 88.6 | 96.3 |
MSeg1080_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | MSeg: A Composite Dataset for Multi-domain Semantic Segmentation | John Lambert*, Zhuang Liu*, Ozan Sener, James Hays, Vladlen Koltun | CVPR 2020 | We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images, requiring more than 1.34 years of collective annotator effort. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model’s robustness and show that MSeg training yields substantially more robust models in comparison to training on individual datasets or naive mixing of datasets without the presented contributions. more details | 0.49 | 91.5 | 98.8 | 93.7 | 75.3 | 95.9 | 94.1 | 87.2 | 95.7 |
SA-Gate (ResNet-101,OS=16) | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation | Xiaokang Chen, Kwan-Yee Lin, Jingbo Wang, Wayne Wu, Chen Qian, Hongsheng Li, and Gang Zeng | European Conference on Computer Vision (ECCV), 2020 | RGB+HHA input, input resolution = 800x800, output stride = 16, training 240 epochs, no coarse data is used. more details | n/a | 91.9 | 98.8 | 93.6 | 76.8 | 95.9 | 94.1 | 88.4 | 96.1 |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation Resnet101, pretrained on Imagenet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 86.0 | 88.9 | 91.3 | 68.7 | 94.8 | 89.8 | 82.4 | 86.4 |
HRNet + LKPP + EA loss | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 91.4 | 98.8 | 93.5 | 74.6 | 95.6 | 94.0 | 87.1 | 95.9 | ||
SN_RN152pyrx8_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 1.0 | 89.4 | 98.5 | 92.7 | 68.1 | 95.5 | 92.8 | 83.7 | 94.6 |
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 85.3 | 98.1 | 90.4 | 54.5 | 94.7 | 90.0 | 77.2 | 92.7 | |
AttaNet_light | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | AttaNet: Attention-Augmented Network for Fast and Accurate Scene Parsing(AAAI21) | Anonymous | more details | n/a | 87.0 | 98.4 | 91.9 | 59.7 | 94.0 | 91.0 | 81.1 | 93.1 | |
CFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 87.4 | 98.4 | 91.8 | 62.8 | 94.3 | 91.0 | 80.8 | 93.0 | ||
Seg_UJS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 92.7 | 98.9 | 94.2 | 78.8 | 96.2 | 94.8 | 89.6 | 96.6 | ||
Bilateral_attention_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | we use bilateral attention mechanism for semantic segmentation more details | 0.0141 | 90.4 | 98.6 | 92.9 | 72.0 | 94.9 | 92.9 | 86.1 | 95.0 | ||
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 92.3 | 98.8 | 93.7 | 78.4 | 96.0 | 94.3 | 89.3 | 95.9 | |
ESANet RGB-D (small input) | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data with half the input resolution. more details | 0.0427 | 89.0 | 98.5 | 92.0 | 68.0 | 95.2 | 92.4 | 82.4 | 94.2 | |
ESANet RGB (small input) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images with half the input resolution. more details | 0.031 | 87.1 | 98.3 | 91.2 | 61.6 | 94.6 | 91.3 | 79.3 | 93.2 | |
ESANet RGB-D | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data. more details | 0.1613 | 91.3 | 98.7 | 93.3 | 75.1 | 95.9 | 93.7 | 87.1 | 95.5 | |
DAHUA-ARI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | multi-scale and refineNet more details | n/a | 93.2 | 98.9 | 94.3 | 80.8 | 96.3 | 95.0 | 90.1 | 96.7 | ||
ESANet RGB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images only. more details | 0.1205 | 90.2 | 98.6 | 93.0 | 71.0 | 95.3 | 93.1 | 85.1 | 95.0 | |
DCNAS+ASPP [Mapillary Vistas] | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Existing NAS algorithms usually compromise on restricted search space or search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. Compared with contemporary works, experiments reveal that the proxyless searching scheme is capable of bridging the gap between searching and training environments. more details | n/a | 93.1 | 98.9 | 94.3 | 80.6 | 96.3 | 94.9 | 90.0 | 96.6 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 92.9 | 98.8 | 94.1 | 80.5 | 96.2 | 94.9 | 89.7 | 96.4 | |
DCNAS+ASPP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic ImageSegmentation | Anonymous | Existing NAS algorithms usually compromise on restricted search space or search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. more details | n/a | 92.7 | 98.9 | 94.1 | 79.2 | 96.2 | 94.6 | 89.3 | 96.4 | |
ddl_seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 92.8 | 98.9 | 94.3 | 78.8 | 96.2 | 94.9 | 89.6 | 96.7 | ||
CABiNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CABiNet: Efficient Context Aggregation Network for Low-Latency Semantic Segmentation | Saumya Kumaar, Ye Lyu, Francesco Nex, Michael Ying Yang | With the increasing demand of autonomous machines, pixel-wise semantic segmentation for visual scene understanding needs to be not only accurate but also efficient for any potential real-time applications. In this paper, we propose CABiNet (Context Aggregated Bi-lateral Network), a dual branch convolutional neural network (CNN), with significantly lower computational costs as compared to the state-of-the-art, while maintaining a competitive prediction accuracy. Building upon the existing multi-branch architectures for high-speed semantic segmentation, we design a cheap high resolution branch for effective spatial detailing and a context branch with light-weight versions of global aggregation and local distribution blocks, potent to capture both long-range and local contextual dependencies required for accurate semantic segmentation, with low computational overheads. Specifically, we achieve 76.6% and 75.9% mIOU on Cityscapes validation and test sets respectively, at 76 FPS on an NVIDIA RTX 2080Ti and 8 FPS on a Jetson Xavier NX. Codes and training models will be made publicly available. more details | 0.013 | 91.1 | 98.6 | 93.3 | 76.6 | 95.7 | 93.5 | 85.0 | 94.8 | |
Margin calibration | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | The model is DeepLab v3+ backend on SEResNeXt50. We used the margin calibration with log-loss as the learning objective. more details | n/a | 92.1 | 98.8 | 93.9 | 77.2 | 96.0 | 94.3 | 88.2 | 96.1 | ||
MT-SSSR | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 91.4 | 98.7 | 93.4 | 75.8 | 95.7 | 93.7 | 87.3 | 95.5 | ||
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 93.0 | 98.8 | 94.1 | 80.9 | 96.2 | 94.9 | 89.8 | 96.4 | |
DSANet: Dilated Spatial Attention for Real-time Semantic Segmentation in Urban Street Scenes | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We present a computationally efficient network named DSANet, which follows a two-branch strategy to tackle the problem of real-time semantic segmentation in urban scenes. We first design a Context branch, which employs Depth-wise Asymmetric ShuffleNet (DAS) as the main building block to acquire sufficient receptive fields. In addition, we propose a dual attention module consisting of dilated spatial attention and channel attention to make full use of the multi-level feature maps simultaneously, which helps predict the pixel-wise labels in each stage. Meanwhile, a Spatial Encoding Network is used to enhance semantic information by preserving the spatial details. Finally, to better combine context information and spatial information, we introduce a Simple Feature Fusion Module to combine the features from the two branches. more details | n/a | 88.0 | 98.0 | 92.3 | 65.6 | 94.5 | 91.8 | 81.6 | 92.0 | |
UJS_model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.1 | 98.9 | 94.3 | 80.7 | 96.2 | 94.8 | 90.1 | 96.7 | ||
Mobilenetv3-small-backbone real-time segmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | The model is a dual-path network with a mobilenetv3-small backbone. A PSP module was used as the context aggregation block. We also use a feature fusion module at x16 and x32. The features of the two branches are then concatenated and fused with a bottleneck conv. Only the train data is used to train the model, excluding the validation data. Evaluation was done with single-scale input images. more details | 0.02 | 84.3 | 98.1 | 90.6 | 53.1 | 93.2 | 88.9 | 75.8 | 90.8 | |
M2FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Urban street scene analysis using lightweight multi-level multi-path feature aggregation network | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Multiagent and Grid Systems Journal | more details | n/a | 86.9 | 96.9 | 91.6 | 63.2 | 94.5 | 90.1 | 79.6 | 92.2 |
AFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.03 | 89.3 | 98.6 | 92.5 | 68.3 | 94.9 | 92.4 | 83.7 | 94.4 | ||
YOLO V5s with Segmentation Head | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Anonymous | Multitask model. fine tune from COCO detection pretrained model, train semantic segmentation and object detection(transfer from instance label) at the same time more details | 0.007 | 85.7 | 98.2 | 90.4 | 55.6 | 93.5 | 90.3 | 78.5 | 93.2 | ||
FSFFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | A Lightweight Multi-scale Feature Fusion Network for Real-Time Semantic Segmentation | Tanmay Singha, Duc-Son Pham, Aneesh Krishna, Tom Gedeon | International Conference on Neural Information Processing 2021 | Feature Scaling Feature Fusion Network more details | n/a | 87.1 | 96.8 | 91.5 | 64.1 | 94.4 | 90.2 | 79.8 | 92.7 |
Qualcomm AI Research | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InverseForm: A Loss Function for Structured Boundary-Aware Segmentation | Shubhankar Borse, Ying Wang, Yizhe Zhang, Fatih Porikli | CVPR 2021 oral | more details | n/a | 93.1 | 98.7 | 94.1 | 80.9 | 96.3 | 94.9 | 90.3 | 96.7 |
HIK-CCSLT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.3 | 98.9 | 94.4 | 81.1 | 96.3 | 95.1 | 90.4 | 96.8 | ||
BFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BFNet | Jiaqi Fan | more details | n/a | 87.6 | 97.9 | 91.9 | 64.6 | 94.8 | 91.5 | 80.2 | 92.2 | |
Hai Wang+Yingfeng Cai-research group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.00164 | 93.1 | 98.9 | 94.2 | 80.6 | 96.3 | 94.9 | 90.1 | 96.7 | ||
Jiangsu_university_Intelligent_Drive_AI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.1 | 98.9 | 94.2 | 80.6 | 96.3 | 94.9 | 90.1 | 96.7 | ||
MCANet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | n/a | 88.9 | 98.5 | 92.2 | 68.4 | 95.3 | 92.0 | 82.3 | 93.8 | ||
UFONet (half-resolution) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | UFO RPN: A Region Proposal Network for Ultra Fast Object Detection | Wenkai Li, Andy Song | The 34th Australasian Joint Conference on Artificial Intelligence | more details | n/a | 78.6 | 97.2 | 88.6 | 43.4 | 90.6 | 84.2 | 62.9 | 83.3 |
SCMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 86.8 | 98.2 | 91.5 | 62.4 | 94.6 | 90.7 | 78.3 | 92.0 | ||
FsaNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | FsaNet: Frequency Self-attention for Semantic Segmentation | Anonymous | more details | n/a | 91.8 | 98.8 | 93.7 | 75.8 | 95.9 | 94.2 | 88.0 | 96.0 | |
SCMNet coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | SCMNet: Shared Context Mining Network for Real-time Semantic Segmentation | Tanmay Singha; Moritz Bergemann; Duc-Son Pham; Aneesh Krishna | 2021 Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 87.2 | 98.3 | 91.7 | 63.5 | 94.7 | 90.9 | 79.1 | 92.2 |
SAIT SeeThroughNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.2 | 98.9 | 94.3 | 81.1 | 96.3 | 95.0 | 90.2 | 96.8 | ||
JSU_IDT_group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.2 | 98.9 | 94.3 | 80.7 | 96.3 | 95.0 | 90.2 | 96.7 | ||
DLA_HRNet48OCR_MSFLIP_000 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | This set of predictions is from DLA (differentiable lattice assignment network) with "HRNet48+OCR-Head" as the base segmentation model. The model is first trained on the coarse data, and then trained on the fine-annotated train/val sets. A multi-scale (0.5, 0.75, 1.0, 1.25, 1.5, 1.75) and flip scheme is adopted during inference. more details | n/a | 93.0 | 98.9 | 94.2 | 80.6 | 96.2 | 94.8 | 89.8 | 96.6 | |
MYBank-AIoT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.3 | 98.9 | 94.5 | 81.1 | 96.4 | 95.1 | 90.6 | 96.7 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 92.3 | 98.8 | 93.8 | 77.8 | 96.2 | 94.5 | 88.9 | 96.2 |
LeapAI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Using advanced AI techniques. more details | n/a | 93.2 | 99.0 | 94.4 | 81.2 | 96.4 | 94.9 | 90.0 | 96.7 | ||
adlab_iiau_ldz | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | meticulous-caiman_2022.05.01_03.32 more details | n/a | 93.1 | 98.9 | 94.4 | 80.8 | 96.3 | 95.0 | 90.0 | 96.6 | ||
SFRSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | A Real-Time Semantic Segmentation Model Using Iteratively Shared Features In Multiple Sub-Encoders | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | Pattern Recognition | more details | n/a | 86.3 | 98.2 | 91.1 | 58.0 | 94.7 | 90.9 | 78.5 | 92.4 |
PIDNet-S | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | PIDNet: A Real-time Semantic Segmentation Network Inspired from PID Controller | Anonymous | more details | 0.0107 | 90.5 | 98.7 | 93.2 | 71.8 | 95.6 | 93.3 | 85.7 | 95.1 | |
Vision Transformer Adapter for Dense Predictions | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Vision Transformer Adapter for Dense Predictions | Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, Yu Qiao | ViT-Adapter-L, BEiT pre-train, multi-scale testing more details | n/a | 92.8 | 98.8 | 94.0 | 79.6 | 96.2 | 94.9 | 89.9 | 96.4 | |
SSNet | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 89.9 | 98.4 | 92.3 | 71.7 | 95.1 | 92.2 | 85.3 | 94.3 | ||
SDBNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | SDBNet: Lightweight Real-time Semantic Segmentation Using Short-term Dense Bottleneck | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 87.2 | 98.2 | 91.7 | 62.8 | 94.4 | 91.1 | 79.7 | 92.7 |
MeiTuan-BaseModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.4 | 98.9 | 94.5 | 81.4 | 96.4 | 95.2 | 90.6 | 96.6 | ||
SDBNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improved Short-term Dense Bottleneck network for efficient scene analysis | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Computer Vision and Image Understanding | more details | n/a | 87.9 | 98.3 | 91.9 | 64.5 | 94.8 | 91.7 | 80.8 | 93.1 |
mogo_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.2 | 98.9 | 94.2 | 81.0 | 96.3 | 94.9 | 90.2 | 96.6 | ||
UDSSEG_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | UDSSEG_RVC more details | n/a | 90.7 | 98.3 | 93.3 | 73.0 | 95.9 | 93.7 | 85.5 | 95.3 | ||
MIX6D_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MIX6D_RVC more details | n/a | 89.3 | 97.7 | 92.6 | 68.3 | 95.0 | 92.4 | 84.7 | 94.4 | ||
FAN_NV_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hybrid-Base + Segformer more details | n/a | 91.0 | 97.9 | 93.6 | 73.0 | 96.1 | 93.6 | 87.1 | 95.8 | ||
UNIV_CNP_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | RVC 2022 more details | n/a | 86.2 | 97.7 | 91.3 | 57.4 | 94.3 | 91.7 | 79.7 | 91.4 | ||
AntGroup-AI-VisionAlgo | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | AntGroup AI vision algo more details | n/a | 93.2 | 98.9 | 94.3 | 81.0 | 96.3 | 95.1 | 90.1 | 96.5 | ||
InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, Xiaogang Wang, Yu Qiao | CVPR 2023 | We use Mask2Former as the segmentation framework, and initialize our InternImage-H model with the pre-trained weights on the 427M joint dataset of public Laion-400M, YFCC-15M, and CC12M. Following common practices, we first pre-train on Mapillary Vistas for 80k iterations, and then fine-tune on Cityscapes for 80k iterations. The crop size is set to 1024×1024 in this experiment. As a result, our InternImage-H achieves 87.0 multi-scale mIoU on the validation set, and 86.1 multi-scale mIoU on the test set. more details | n/a | 93.0 | 98.9 | 94.3 | 80.4 | 96.3 | 95.0 | 90.0 | 96.4 |
Dense Prediction with Attentive Feature aggregation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Prediction with Attentive Feature Aggregation | Yung-Hsu Yang, Thomas E. Huang, Min Sun, Samuel Rota Bulò, Peter Kontschieder, Fisher Yu | WACV 2023 | We propose Attentive Feature Aggregation (AFA) to exploit both spatial and channel information for semantic segmentation and boundary detection. more details | n/a | 92.5 | 98.9 | 94.0 | 78.4 | 96.1 | 94.5 | 89.3 | 96.3 |
W3_FAFM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Junyan Yang, Qian Xu, Lei La | Team: BOSCH-XC-DX-WAVE3 more details | 0.029309 | 91.1 | 98.7 | 93.3 | 73.9 | 95.5 | 93.7 | 87.0 | 95.6 | ||
HRN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hierarchical residual network more details | 45.0 | 90.0 | 98.6 | 93.0 | 70.5 | 95.3 | 93.0 | 84.7 | 94.9 | ||
HRN+DCNv2_for_DOAS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRN with DCNv2 for DOAS in paper "Dynamic Obstacle Avoidance System based on Rapid Instance Segmentation Network" more details | 0.032 | 91.6 | 98.7 | 93.6 | 75.9 | 95.6 | 94.0 | 87.9 | 95.7 | ||
GEELY-ATC-SEG | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 93.3 | 98.9 | 94.4 | 81.1 | 96.3 | 95.2 | 90.3 | 96.5 | ||
PMSDSEN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Parallel Multi-Scale Detail and Semantic Encoding Network for Lightweight Semantic Segmentation | Xiao Liu, Xiuya Shi, Lufei Chen, Linbo Qing, Chao Ren | ACM International Conference on Multimedia 2023 | MM '23: Proceedings of the 31st ACM International Conference on Multimedia more details | n/a | 88.8 | 98.4 | 92.3 | 68.7 | 94.8 | 91.8 | 81.6 | 93.7
ECFD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | backbone: ConvNext-Large more details | n/a | 92.2 | 98.7 | 93.8 | 77.8 | 96.0 | 94.4 | 88.9 | 95.9 | ||
DWGSeg-L75 | yes | yes | no | no | no | no | no | no | no | no | 1.3 | 1.3 | no | no | Anonymous | more details | 0.00755 | 89.4 | 98.5 | 92.6 | 68.4 | 95.2 | 92.7 | 83.6 | 94.7 | ||
VLTSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | VLTSeg: Simple Transfer of CLIP-Based Vision-Language Representations for Domain Generalized Semantic Segmentation | Christoph Hümmer, Manuel Schwonberg, Liangwei Zhou, Hu Cao, Alois Knoll, Hanno Gottschalk | more details | n/a | 93.1 | 98.9 | 94.2 | 80.5 | 96.3 | 95.0 | 90.5 | 96.5 | |
CGMANet_v1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene | Saquib Mazhar | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene more details | n/a | 88.5 | 97.7 | 91.5 | 68.1 | 94.3 | 91.6 | 82.9 | 93.3 | |
SERNet-Former_v2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 92.1 | 98.7 | 93.5 | 77.3 | 95.9 | 94.4 | 88.7 | 95.9 |
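All scores in the table above are intersection-over-union (IoU) values reported as percentages, and the average column is the unweighted mean over the categories. The snippet below is a minimal illustrative sketch (not the benchmark's own evaluation code) of how per-category IoU and its mean can be derived, assuming a precomputed confusion matrix whose rows are ground-truth categories and whose columns are predicted categories; the matrix values here are made up for the example.

```python
import numpy as np

def iou_scores(confusion):
    """Per-category IoU = TP / (TP + FP + FN) and their unweighted mean,
    computed from a square confusion matrix (rows: ground truth,
    columns: prediction). Illustrative sketch only, not the benchmark code."""
    confusion = np.asarray(confusion, dtype=np.float64)
    tp = np.diag(confusion)
    fp = confusion.sum(axis=0) - tp  # predicted as the category, labelled as another
    fn = confusion.sum(axis=1) - tp  # labelled as the category, predicted as another
    iou = tp / np.maximum(tp + fp + fn, 1.0)  # guard against empty categories
    return iou, float(iou.mean())

# Toy example with three categories (pixel counts are hypothetical).
conf = [[950, 30, 20],
        [40, 880, 80],
        [10, 60, 930]]
per_category, mean_iou = iou_scores(conf)
print((per_category * 100).round(1), round(mean_iou * 100, 1))
```

Multiplying the resulting fractions by 100 gives percentage values in the same form as the table columns.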
iIoU on category-level
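The iIoU scores below are the instance-weighted variant of IoU: the contribution of each pixel to iTP and iFN is weighted by the ratio of the class's average instance size to the size of the pixel's ground-truth instance, while false positives are counted unweighted, giving iIoU = iTP / (iTP + FP + iFN). This keeps large instances from dominating the score, and it is reported only for the categories with instance annotations, i.e. human and vehicle.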
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | human | vehicle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FCN 8s | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Fully Convolutional Networks for Semantic Segmentation | J. Long, E. Shelhamer, and T. Darrell | CVPR 2015 | Trained by Marius Cordts on a pre-release version of the dataset more details | 0.5 | 70.1 | 58.0 | 82.3 |
RRR-ResNet152-MultiScale | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | update: this submission actually used the coarse labels, which was previously not marked accordingly more details | n/a | 74.0 | 61.8 | 86.1 | ||
Dilation10 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Scale Context Aggregation by Dilated Convolutions | Fisher Yu and Vladlen Koltun | ICLR 2016 | Dilation10 is a convolutional network that consists of a front-end prediction module and a context aggregation module. Both are described in the paper. The combined network was trained jointly. The context module consists of 10 layers, each of which has C=19 feature maps. The larger number of layers in the context module (10 for Cityscapes versus 8 for Pascal VOC) is due to the high input resolution. The Dilation10 model is a pure convolutional network: there is no CRF and no structured prediction. Dilation10 can therefore be used as the baseline input for structured prediction models. Note that the reported results were produced by training on the training set only; the network was not retrained on train+val. more details | 4.0 | 71.1 | 58.3 | 83.9 |
Adelaide | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | G. Lin, C. Shen, I. Reid, and A. van den Hengel | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 35.0 | 67.4 | 58.2 | 76.7 |
DeepLab LargeFOV StrongWeak | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation | G. Papandreou, L.-C. Chen, K. Murphy, and A. L. Yuille | ICCV 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 58.7 | 41.4 | 75.9 |
DeepLab LargeFOV Strong | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs | L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille | ICLR 2015 | Trained on a pre-release version of the dataset more details | 4.0 | 58.7 | 41.3 | 76.1 |
DPN | yes | yes | yes | yes | no | no | no | no | no | no | 3 | 3 | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | Trained on a pre-release version of the dataset more details | n/a | 57.9 | 39.9 | 76.0 |
Segnet basic | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 61.9 | 47.0 | 76.8 |
Segnet extended | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | yes | yes | SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation | V. Badrinarayanan, A. Kendall, and R. Cipolla | arXiv preprint 2015 | Trained on a pre-release version of the dataset more details | 0.06 | 66.4 | 51.9 | 80.9 |
CRFasRNN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Conditional Random Fields as Recurrent Neural Networks | S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. S. Torr | ICCV 2015 | Trained on a pre-release version of the dataset more details | 0.7 | 66.0 | 53.4 | 78.6 |
Scale invariant CNN + CRF | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Convolutional Scale Invariance for Semantic Segmentation | I. Kreso, D. Causevic, J. Krapac, and S. Segvic | GCPR 2016 | We propose an effective technique to address large scale variation in images taken from a moving car by cross-breeding deep learning with stereo reconstruction. Our main contribution is a novel scale selection layer which extracts convolutional features at the scale which matches the corresponding reconstructed depth. The recovered scale-invariant representation disentangles appearance from scale and frees the pixel-level classifier from the need to learn the laws of the perspective. This results in improved segmentation results due to more efficient exploitation of representation capacity and training data. We perform experiments on two challenging stereoscopic datasets (KITTI and Cityscapes) and report competitive class-level IoU performance. more details | n/a | 71.2 | 60.6 | 81.7
DPN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Semantic Image Segmentation via Deep Parsing Network | Z. Liu, X. Li, P. Luo, C. C. Loy, and X. Tang | ICCV 2015 | DPN trained on full resolution images more details | n/a | 69.1 | 55.0 | 83.1 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 73.9 | 62.6 | 85.2 |
Adelaide_context | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Piecewise Training of Deep Structured Models for Semantic Segmentation | Guosheng Lin, Chunhua Shen, Anton van den Hengel, Ian Reid | CVPR 2016 | We explore contextual information to improve semantic image segmentation. Details are described in the paper. We trained contextual networks for coarse level prediction and a refinement network for refining the coarse prediction. Our models are trained on the training set only (2975 images) without adding the validation set. more details | n/a | 74.1 | 63.1 | 85.1 |
NVSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | In the inference, we use the image of 2 different scales. The same for training! more details | 0.4 | 68.1 | 53.5 | 82.7 | ||
ENet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation | Adam Paszke, Abhishek Chaurasia, Sangpil Kim, Eugenio Culurciello | more details | 0.013 | 64.0 | 49.3 | 78.7 | |
DeepLabv2-CRF | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs | Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, Alan L. Yuille | arXiv preprint | DeepLabv2-CRF is based on three main methods. First, we employ convolution with upsampled filters, or ‘atrous convolution’, as a powerful tool to repurpose ResNet-101 (trained on image classification task) in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within DCNNs. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and fully connected Conditional Random Fields (CRFs). The model is only trained on train set. more details | n/a | 67.7 | 52.5 | 82.9 |
m-TCFs | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Convolutional Neural Network more details | 1.0 | 70.6 | 57.0 | 84.1 | ||
DeepLab+DynamicCRF | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ru.nl | more details | n/a | 62.4 | 45.8 | 79.0 | ||
LRR-4x | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained on the training set (2975 images). The segmentation predictions were not post-processed using CRF. (This is a revision of a previous submission in which we didn't use the correct basis functions; the method name changed from 'LLR-4x' to 'LRR-4x') more details | n/a | 74.7 | 63.3 | 86.2 |
LRR-4x | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Laplacian Pyramid Reconstruction and Refinement for Semantic Segmentation | Golnaz Ghiasi, Charless C. Fowlkes | ECCV 2016 | We introduce a CNN architecture that reconstructs high-resolution class label predictions from low-resolution feature maps using class-specific basis functions. Our multi-resolution architecture also uses skip connections from higher resolution feature maps to successively refine segment boundaries reconstructed from lower resolution maps. The model used for this submission is based on VGG-16 and it was trained using both coarse and fine annotations. The segmentation predictions were not post-processed using CRF. more details | n/a | 73.9 | 62.7 | 85.0 |
Le_Selfdriving_VGG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 64.3 | 50.0 | 78.5 | ||
SQ | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Speeding up Semantic Segmentation for Autonomous Driving | Michael Treml, José Arjona-Medina, Thomas Unterthiner, Rupesh Durgesh, Felix Friedmann, Peter Schuberth, Andreas Mayr, Martin Heusel, Markus Hofmarcher, Michael Widrich, Bernhard Nessler, Sepp Hochreiter | NIPS 2016 Workshop - MLITS Machine Learning for Intelligent Transportation Systems Neural Information Processing Systems 2016, Barcelona, Spain | more details | 0.06 | 66.0 | 50.0 | 82.0 |
SAIT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous more details | 4.0 | 75.5 | 64.3 | 86.7 | ||
FoveaNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FoveaNet | Xin Li, Jiashi Feng | 1.caffe-master 2.resnet-101 3.single scale testing Previously listed as "LXFCRN". more details | n/a | 77.6 | 68.3 | 86.9 | |
RefineNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation | Guosheng Lin; Anton Milan; Chunhua Shen; Ian Reid; | Please refer to our technical report for details: "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation" (https://arxiv.org/abs/1611.06612). Our source code is available at: https://github.com/guosheng/refinenet 2975 images (training set with fine labels) are used for training. more details | n/a | 70.6 | 56.8 | 84.5 | |
SegModel | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Both train set (2975) and val set (500) are used to train model for this submission. more details | 0.8 | 75.9 | 64.2 | 87.6 | ||
TuSimple | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | more details | n/a | 75.2 | 64.0 | 86.5 | |
Global-Local-Refinement | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global-residual and Local-boundary Refinement Networks for Rectifying Scene Parsing Predictions | Rui Zhang, Sheng Tang, Min Lin, Jintao Li, Shuicheng Yan | International Joint Conference on Artificial Intelligence (IJCAI) 2017 | global-residual and local-boundary refinement The method was previously listed as "RefineNet". To avoid confusions with a recently appeared and similarly named approach, the submission name was updated. more details | n/a | 76.8 | 66.7 | 86.9 |
XPARSE | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 74.2 | 63.0 | 85.4 | ||
ResNet-38 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | single model, single scale, no post-processing with CRFs Model A2, 2 conv., fine only, single scale testing The submissions was previously listed as "Model A2, 2 conv.". The name was changed for consistency with the other submission of the same work. more details | n/a | 81.1 | 73.2 | 89.0 |
SegModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 77.0 | 66.2 | 87.9 | ||
Deep Layer Cascade (LC) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Not All Pixels Are Equal: Difficulty-aware Semantic Segmentation via Deep Layer Cascade | Xiaoxiao Li, Ziwei Liu, Ping Luo, Chen Change Loy, Xiaoou Tang | CVPR 2017 | We propose a novel deep layer cascade (LC) method to improve the accuracy and speed of semantic segmentation. Unlike the conventional model cascade (MC) that is composed of multiple independent models, LC treats a single deep model as a cascade of several sub-models. Earlier sub-models are trained to handle easy and confident regions, and they progressively feed-forward harder regions to the next sub-model for processing. Convolutions are only calculated on these regions to reduce computations. The proposed method possesses several advantages. First, LC classifies most of the easy regions in the shallow stage and makes deeper stage focuses on a few hard regions. Such an adaptive and 'difficulty-aware' learning improves segmentation performance. Second, LC accelerates both training and testing of deep network thanks to early decisions in the shallow stage. Third, in comparison to MC, LC is an end-to-end trainable framework, allowing joint learning of all sub-models. We evaluate our method on PASCAL VOC and more details | n/a | 74.1 | 62.0 | 86.2 |
FRRN | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes | Tobias Pohlen, Alexander Hermans, Markus Mathias, Bastian Leibe | Arxiv | Full-Resolution Residual Networks (FRRN) combine multi-scale context with pixel-level accuracy by using two processing streams within one network: One stream carries information at the full image resolution, enabling precise adherence to segment boundaries. The other stream undergoes a sequence of pooling operations to obtain robust features for recognition. more details | n/a | 75.1 | 64.9 | 85.4 |
MNet_MPRG | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Chubu University, MPRG | Trained without the val dataset, external datasets (e.g. ImageNet), or post-processing more details | 0.6 | 77.9 | 68.6 | 87.1 | |
ResNet-38 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Wider or Deeper: Revisiting the ResNet Model for Visual Recognition | Zifeng Wu, Chunhua Shen, Anton van den Hengel | arxiv | single model, no post-processing with CRFs Model A2, 2 conv., fine+coarse, multi scale testing more details | n/a | 79.1 | 69.6 | 88.5 |
FCN8s-QunjieYu | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 68.7 | 55.6 | 81.7 | ||
RGB-D FCN | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Anonymous | GoogLeNet + depth branch, single model no data augmentation, no training on validation set, no graphical model Used coarse labels to initialize depth branch more details | n/a | 71.0 | 58.0 | 83.9 | ||
MultiBoost | yes | yes | yes | yes | no | no | yes | yes | no | no | 2 | 2 | no | no | Anonymous | Boosting based solution. Publication is under review. more details | 0.25 | 60.2 | 45.0 | 75.5 | ||
GoogLeNet FCN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Going Deeper with Convolutions | Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich | CVPR 2015 | GoogLeNet. No data augmentation, no graphical model. Trained by Lukas Schneider, following "Fully Convolutional Networks for Semantic Segmentation", Long et al., CVPR 2015 more details | n/a | 69.8 | 56.3 | 83.3
ERFNet (pretrained) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | Transactions on Intelligent Transportation Systems (T-ITS) | ERFNet pretrained on ImageNet and trained only on the fine train (2975) annotated images more details | 0.02 | 72.7 | 61.2 | 84.1 |
ERFNet (from scratch) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient ConvNet for Real-time Semantic Segmentation | Eduardo Romera, Jose M. Alvarez, Luis M. Bergasa and Roberto Arroyo | IV2017 | ERFNet trained entirely on the fine train set (2975 images) without any pretraining nor coarse labels more details | 0.02 | 70.4 | 58.0 | 82.8 |
TuSimple_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Understanding Convolution for Semantic Segmentation | Panqu Wang, Pengfei Chen, Ye Yuan, Ding Liu, Zehua Huang, Xiaodi Hou, Garrison Cottrell | Here we show how to improve pixel-wise semantic segmentation by manipulating convolution-related operations that are better for practical use. First, we implement dense upsampling convolution (DUC) to generate pixel-level prediction, which is able to capture and decode more detailed information that is generally missing in bilinear upsampling. Second, we propose a hybrid dilated convolution (HDC) framework in the encoding phase. This framework 1) effectively enlarges the receptive fields of the network to aggregate global information; 2) alleviates what we call the "gridding issue" caused by the standard dilated convolution operation. We evaluate our approaches thoroughly on the Cityscapes dataset, and achieve a new state-of-the-art result of 80.1% mIoU on the test set. We are also state-of-the-art overall on the KITTI road estimation benchmark and the PASCAL VOC2012 segmentation task. Pretrained models are available at https://goo.gl/DQMeun. more details | n/a | 77.8 | 68.6 | 87.1 |
SAC-multiple | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scale-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | International Conference on Computer Vision (ICCV) 2017 | more details | n/a | 78.3 | 68.4 | 88.2 |
NetWarp | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 79.8 | 70.7 | 88.9 | ||
depthAwareSeg_RNN_ff | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | training with fine-annotated training images only (val set is not used); flip-augmentation only in training; single GPU for train&test; softmax loss; resnet101 as front end; multiscale test. more details | n/a | 76.9 | 67.4 | 86.5 | ||
Ladder DenseNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Ladder-style DenseNets for Semantic Segmentation of Large Natural Images | Ivan Krešo, Josip Krapac, Siniša Šegvić | ICCV 2017 | https://ivankreso.github.io/publication/ladder-densenet/ more details | 0.45 | 79.5 | 70.4 | 88.6 |
Real-time FCN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Understanding Cityscapes: Efficient Urban Semantic Scene Understanding | Marius Cordts | Dissertation | Combines the following concepts: Network architecture: "Going deeper with convolutions", Szegedy et al., CVPR 2015; Framework and skip connections: "Fully convolutional networks for semantic segmentation", Long et al., CVPR 2015; Context modules: "Multi-scale context aggregation by dilated convolutions", Yu and Koltun, ICLR 2016 more details | 0.044 | 71.6 | 60.5 | 82.7
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Conv-Deconv Grid-Network for semantic segmentation. Using only the training set without extra coarse annotated data (only 2975 images). No pre-training (ImageNet). No post-processing (like CRF). more details | n/a | 71.1 | 58.3 | 84.0 | ||
PEARL | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Video Scene Parsing with Predictive Feature Learning | Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang, Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, and Shuicheng Yan | ICCV 2017 | We proposed a novel Parsing with prEdictive feAtuRe Learning (PEARL) model to address the following two problems in video scene parsing: firstly, how to effectively learn meaningful video representations for producing the temporally consistent labeling maps; secondly, how to overcome the problem of insufficient labeled video training data, i.e. how to effectively conduct unsupervised deep learning. To our knowledge, this is the first model to employ predictive feature learning in the video scene parsing. more details | n/a | 75.1 | 64.3 | 85.9 |
pruned & dilated inception-resnet-v2 (PD-IR2) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | 0.69 | 68.3 | 55.3 | 81.2 | ||
PSPNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Pyramid Scene Parsing Network | Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, Jiaya Jia | CVPR 2017 | This submission is trained on coarse+fine(train+val set, 2975+500 images). Former submission is trained on coarse+fine(train set, 2975 images) which gets 80.2 mIoU: https://www.cityscapes-dataset.com/method-details/?submissionID=314 Previous versions of this method were listed as "SenseSeg_1026". more details | n/a | 79.2 | 70.2 | 88.2 |
motovis | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | motovis.com | more details | n/a | 80.7 | 72.3 | 89.0 | ||
ML-CRNN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-level Contextual RNNs with Attention Model for Scene Labeling | Heng Fan, Xue Mei, Danil Prokhorov, Haibin Ling | arXiv | A framework based on CNNs and RNNs is proposed, in which the RNNs are used to model spatial dependencies among image units. Besides, to enrich deep features, we use different features from multiple levels, and adopt a novel attention model to fuse them. more details | n/a | 72.5 | 60.9 | 84.1 |
Hybrid Model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 68.5 | 55.6 | 81.5 | ||
tek-Ifly | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Iflytek | Iflytek-yin | Using a fusion strategy of three single models; the best result of a single model is 80.01%, multi-scale more details | n/a | 79.6 | 70.7 | 88.4 |
GridNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Residual Conv-Deconv Grid Network for Semantic Segmentation | Damien Fourure, Rémi Emonet, Elisa Fromont, Damien Muselet, Alain Tremeau & Christian Wolf | BMVC 2017 | We used a new architecture for semantic image segmentation called GridNet, following a grid pattern allowing multiple interconnected streams to work at different resolutions (see paper). We used only the training set without extra coarse annotated data (only 2975 images) and no pre-training (ImageNet) nor pre or post-processing. more details | n/a | 71.4 | 58.7 | 84.2 |
firenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 75.5 | 66.4 | 84.5 | ||
DeepLabv3 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Rethinking Atrous Convolution for Semantic Image Segmentation | Liang-Chieh Chen, George Papandreou, Florian Schroff, Hartwig Adam | arXiv preprint | In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we employ a module, called Atrous Spatial Pyramid Pooling (ASPP), which adopts atrous convolution in parallel to capture multi-scale context with multiple atrous rates. Furthermore, we propose to augment the ASPP module with image-level features encoding global context and further boost performance. Results obtained with a single model (no ensemble), trained with fine + coarse annotations. More details will be shown in the updated arXiv report. more details | n/a | 81.7 | 74.0 | 89.4
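To make the ASPP design mentioned in the DeepLabv3 entry above easier to picture, here is a minimal PyTorch sketch (illustrative only, not the authors' released code; channel counts and atrous rates are assumptions): several atrous convolutions with different rates run in parallel, an image-level pooling branch adds global context, and the concatenated responses are projected back to a fixed width.

```python
# Illustrative ASPP-style head: parallel atrous convolutions plus image-level
# pooling, concatenated and projected. Channel counts and rates are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPPSketch(nn.Module):
    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)] +
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
             for r in rates])
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1, bias=False)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode="bilinear", align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

# Example: features from a stride-16 backbone on a 1024x2048 input.
print(ASPPSketch()(torch.randn(1, 2048, 64, 128)).shape)  # [1, 256, 64, 128]
```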
EdgeSenseSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Deep segmentation network with hard negative mining and other tricks. more details | n/a | 78.5 | 69.7 | 87.3 | ||
ScaleNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ScaleNet: Scale Invariant Network for Semantic Segmentation in Urban Driving Scenes | Mohammad Dawud Ansari, Stephan Krauß, Oliver Wasenmüller and Didier Stricker | International Conference on Computer Vision Theory and Applications, Funchal, Portugal, 2018 | The scale difference in driving scenarios is one of the essential challenges in semantic scene segmentation. Close objects cover significantly more pixels than far objects. In this paper, we address this challenge with a scale invariant architecture. Within this architecture, we explicitly estimate the depth and adapt the pooling field size accordingly. Our model is compact and can be extended easily to other research domains. Finally, the accuracy of our approach is comparable to the state-of-the-art and superior for scale problems. We evaluate on the widely used automotive dataset Cityscapes as well as a self-recorded dataset. more details | n/a | 76.8 | 66.9 | 86.7
K-net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | XinLiang Zhong | more details | n/a | 75.4 | 64.6 | 86.3 | ||
MSNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | previously also listed as "MultiPathJoin" and "MultiPath_Scale". more details | 0.2 | 81.6 | 75.0 | 88.3 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 77.7 | 68.0 | 87.4 | |
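The uncertainty-based loss weighting summarised in the Multitask Learning entry above can be sketched in a few lines. The snippet below uses the common log-variance parameterisation (an assumption; the paper scales regression and classification terms slightly differently): each task loss is divided by a learned variance, and a log-variance penalty keeps the variances from growing unboundedly.

```python
# Hedged sketch of homoscedastic-uncertainty task weighting: one learnable
# log-variance per task, total = sum_i exp(-s_i) * loss_i + s_i, s_i = log(sigma_i^2).
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, s in zip(task_losses, self.log_vars):
            total = total + torch.exp(-s) * loss + s
        return total

weighting = UncertaintyWeighting(num_tasks=2)
seg_loss, depth_loss = torch.tensor(1.3), torch.tensor(0.4)  # placeholder losses
print(weighting([seg_loss, depth_loss]))
```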
DeepMotion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We propose a novel method based on convnets to extract multi-scale features in a large range particularly for solving street scene segmentation. more details | n/a | 78.1 | 68.5 | 87.8 | ||
SR-AIC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 79.6 | 70.2 | 89.0 | ||
Roadstar.ai_CV(SFNet) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Roadstar.ai-CV | Maosheng Ye, Guang Zhou, Tongyi Cao, YongTao Huang, Yinzi Chen | Same Focus Net (SFNet), based on fine labels only, with focus on the loss distribution and the same focus on every layer of the feature map. more details | 0.2 | 82.6 | 76.4 | 88.7
DFN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Learning a Discriminative Feature Network for Semantic Segmentation | Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, Nong Sang | arxiv | Most existing methods of semantic segmentation still suffer from two aspects of challenges: intra-class inconsistency and inter-class indistinction. To tackle these two problems, we propose a Discriminative Feature Network (DFN), which contains two sub-networks: Smooth Network and Border Network. Specifically, to handle the intra-class inconsistency problem, we specially design a Smooth Network with Channel Attention Block and global average pooling to select the more discriminative features. Furthermore, we propose a Border Network to make the bilateral features of boundary distinguishable with deep semantic boundary supervision. Based on our proposed DFN, we achieve state-of-the-art performance of 86.2% mean IoU on PASCAL VOC 2012 and 80.3% mean IoU on the Cityscapes dataset. more details | n/a | 79.6 | 70.6 | 88.5
RelationNet_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RelationNet: Learning Deep-Aligned Representation for Semantic Image Segmentation | Yueqing Zhuang | ICPR | Semantic image segmentation, which assigns labels in pixel level, plays a central role in image understanding. Recent approaches have attempted to harness the capabilities of deep learning. However, one central problem of these methods is that deep convolution neural network gives little consideration to the correlation among pixels. To handle this issue, in this paper, we propose a novel deep neural network named RelationNet, which utilizes CNN and RNN to aggregate context information. Besides, a spatial correlation loss is applied to supervise RelationNet to align features of spatial pixels belonging to same category. Importantly, since it is expensive to obtain pixel-wise annotations, we exploit a new training method for combining the coarsely and finely labeled data. Separate experiments show the detailed improvements of each proposal. Experimental results demonstrate the effectiveness of our proposed method to the problem of semantic image segmentation. more details | n/a | 81.4 | 73.3 | 89.4 |
ARSAIT | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | anonymous more details | 1.0 | 74.8 | 63.1 | 86.4 | ||
Mapillary Research: In-Place Activated BatchNorm | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | In-Place Activated BatchNorm for Memory-Optimized Training of DNNs | Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder | arXiv | In-Place Activated Batch Normalization (InPlace-ABN) is a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and by recovering required information during the backward pass through the inversion of stored forward results, with only minor increase (0.8-2%) in computation time. Test results are obtained using a single model. more details | n/a | 81.7 | 74.4 | 89.0 |
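The memory trick described in the InPlace-ABN entry above rests on the observation that BatchNorm followed by a (leaky) activation is invertible, so the block input can be recomputed from its stored output during the backward pass. The snippet below only demonstrates that inversion identity numerically, with assumed parameter values; it is not the memory-optimised autograd implementation.

```python
# Numerical check that (leaky ReLU o BatchNorm) can be inverted exactly.
import torch

def bn_act(x, mean, var, gamma, beta, slope=0.01, eps=1e-5):
    y = gamma * (x - mean) / torch.sqrt(var + eps) + beta
    return torch.where(y >= 0, y, slope * y)                   # leaky ReLU

def invert_bn_act(z, mean, var, gamma, beta, slope=0.01, eps=1e-5):
    y = torch.where(z >= 0, z, z / slope)                       # invert leaky ReLU
    return (y - beta) / gamma * torch.sqrt(var + eps) + mean    # invert BatchNorm

x = torch.randn(8)
mean, var = x.mean(), x.var(unbiased=False)
gamma, beta = torch.tensor(1.5), torch.tensor(-0.3)             # assumed affine parameters
z = bn_act(x, mean, var, gamma, beta)
print(torch.allclose(invert_bn_act(z, mean, var, gamma, beta), x, atol=1e-5))  # True
```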
EFBNET | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 78.8 | 69.4 | 88.2 | ||
Ladder DenseNet v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Journal submission | Anonymous | DenseNet-121 model used in downsampling path with ladder-style skip connections upsampling path on top of it. more details | 1.0 | 78.7 | 69.1 | 88.4 | |
ESPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation | Sachin Mehta, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi | We introduce a fast and efficient convolutional neural network, ESPNet, for semantic segmentation of high resolution images under resource constraints. ESPNet is based on a new convolutional module, efficient spatial pyramid (ESP), which is efficient in terms of computation, memory, and power. ESPNet is 22 times faster (on a standard GPU) and 180 times smaller than the state-of-the-art semantic segmentation network PSPNet, while its category-wise accuracy is only 8% less. We evaluated ESPNet on a variety of semantic segmentation datasets including Cityscapes, PASCAL VOC, and a breast biopsy whole slide image dataset. Under the same constraints on memory and computation, ESPNet outperforms all the current efficient CNN networks such as MobileNet, ShuffleNet, and ENet on both standard metrics and our newly introduced performance metrics that measure efficiency on edge devices. Our network can process high resolution images at a rate of 112 and 9 frames per second on a standard GPU and edge device, respectively. more details | 0.0089 | 63.1 | 47.1 | 79.0
ENet with the Lovász-Softmax loss | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks | Maxim Berman, Amal Rannen Triki, Matthew B. Blaschko | arxiv | The Lovász-Softmax loss is a novel surrogate for optimizing the IoU measure in neural networks. Here we finetune the weights provided by the authors of ENet (arXiv:1606.02147) with this loss, for 10'000 iterations on the training dataset. The runtimes are unchanged with respect to the ENet architecture. more details | 0.013 | 61.0 | 45.0 | 77.1
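For readers unfamiliar with the loss named above, the sketch below shows the single-class core of the Lovász extension of the Jaccard loss (a simplification for illustration; the authors' released Lovász-Softmax handles multiple classes and ignore labels): prediction errors are sorted in decreasing order and weighted by the discrete gradient of the Jaccard index.

```python
# Single-class sketch of the Lovász extension of the Jaccard (IoU) loss.
import torch

def lovasz_grad(gt_sorted):
    # Discrete gradient of the Jaccard loss w.r.t. sorted errors.
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    if gt_sorted.numel() > 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_one_class(prob, label):
    # prob: (N,) predicted foreground probabilities, label: (N,) binary {0, 1}.
    errors = (label - prob).abs()
    errors_sorted, perm = torch.sort(errors, descending=True)
    return torch.dot(errors_sorted, lovasz_grad(label[perm]))

prob = torch.tensor([0.9, 0.2, 0.7, 0.1])
label = torch.tensor([1.0, 0.0, 1.0, 1.0])
print(lovasz_one_class(prob, label))  # surrogate IoU loss for this toy prediction
```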
DRN_CRL_Coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Relation Network: Learning Consistent and Context-Aware Representation For Semantic Image Segmentation | Yueqing Zhuang | ICIP | DRN_Coarse. Semantic image segmentation, which aims at assigning a pixel-wise category, is one of the challenging image understanding problems. Global context plays an important role in local pixel-wise category assignment. To make the best of global context, in this paper, we propose dense relation network (DRN) and context-restricted loss (CRL) to aggregate global and local information. DRN uses Recurrent Neural Network (RNN) with different skip lengths in spatial directions to get context-aware representations while CRL helps aggregate them to learn consistency. Compared with previous methods, our proposed method takes full advantage of hierarchical contextual representations to produce high-quality results. Extensive experiments demonstrate that our method achieves significant state-of-the-art performance on Cityscapes and Pascal Context benchmarks, with mean-IoU of 82.8% and 49.0% respectively. more details | n/a | 80.7 | 72.4 | 89.0
ShuffleSeg | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | ShuffleSeg: Real-time Semantic Segmentation Network | Mostafa Gamal, Mennatullah Siam, Mo'men Abdel-Razek | Under Review by ICIP 2018 | ShuffleSeg: An efficient realtime semantic segmentation network with skip connections and ShuffleNet units more details | n/a | 62.2 | 46.5 | 77.9 |
SkipNet-MobileNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | RTSeg: Real-time Semantic Segmentation Framework | Mennatullah Siam, Mostafa Gamal, Moemen Abdel-Razek, Senthil Yogamani, Martin Jagersand | Under Review by ICIP 2018 | An efficient realtime semantic segmentation network with skip connections based on MobileNet. more details | n/a | 63.0 | 47.6 | 78.4 |
ThunderNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.0104 | 69.3 | 56.0 | 82.6 | ||
PAC: Perspective-adaptive Convolutions | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Perspective-adaptive Convolutions for Scene Parsing | Rui Zhang, Sheng Tang, Yongdong Zhang, Jintao Li, and Shuicheng Yan | IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) | Many existing scene parsing methods adopt Convolutional Neural Networks with receptive fields of fixed sizes and shapes, which frequently results in inconsistent predictions of large objects and invisibility of small objects. To tackle this issue, we propose perspective-adaptive convolutions to acquire receptive fields of flexible sizes and shapes during scene parsing. Through adding a new perspective regression layer, we can dynamically infer the position-adaptive perspective coefficient vectors utilized to reshape the convolutional patches. Consequently, the receptive fields can be adjusted automatically according to the various sizes and perspective deformations of the objects in scene images. Our proposed convolutions are differentiable to learn the convolutional parameters and perspective coefficients in an end-to-end way without any extra training supervision of object sizes. Furthermore, considering that the standard convolutions lack contextual information and spatial dependencies, we propose a context adaptive bias to capture both local and global contextual information through average pooling on the local feature patches and global feature maps, followed by flexible attentive summing to the convolutional results. The attentive weights are position-adaptive and context-aware, and can be learned through adding an additional context regression layer. Experiments on Cityscapes and ADE20K datasets well demonstrate the effectiveness of the proposed methods. more details | n/a | 78.3 | 68.4 | 88.3 |
SU_Net | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 75.0 | 63.4 | 86.6 | ||
MobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | MobileNetV2Plus more details | n/a | 72.9 | 61.8 | 83.9 | ||
DeepLabv3+ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation | Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, Hartwig Adam | arXiv | Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We will provide more details in the coming update on the arXiv report. more details | n/a | 81.9 | 74.1 | 89.8 |
RFMobileNetV2Plus | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Huijun Liu | Receptive Filed MobileNetV2Plus for Semantic Segmentation more details | n/a | 75.8 | 66.3 | 85.3 | ||
GoogLeNetV1_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | GoogLeNet-v1 FCN trained on Cityscapes, KITTI, and ScanNet, as required by the Robust Vision Challenge at CVPR'18 (http://robustvision.net/) more details | n/a | 64.4 | 48.6 | 80.3 | ||
SAITv2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 62.1 | 45.5 | 78.7 | ||
GUNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Guided Upsampling Network for Real-Time Semantic Segmentation | Davide Mazzini | arxiv | Guided Upsampling Network for Real-Time Semantic Segmentation more details | 0.03 | 69.1 | 55.2 | 83.0 |
RMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | A fast and light net for semantic segmentation. more details | 0.014 | 67.7 | 53.5 | 81.9 | ||
ContextNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ContextNet: Exploring Context and Detail for Semantic Segmentation in Real-time | Rudra PK Poudel, Ujwal Bonde, Stephan Liwicki, Christopher Zach | arXiv | Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naive adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representations to produce competitive semantic segmentation in real-time with low memory requirements. ContextNet combines a deep branch at low resolution that captures global context information efficiently with a shallow branch that focuses on high-resolution segmentation details. We analyze our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024x2048) resolution. more details | 0.0238 | 64.3 | 48.1 | 80.5 |
RFLR | yes | yes | yes | yes | yes | yes | no | no | no | no | 4 | 4 | no | no | Random Forest with Learned Representations for Semantic Segmentation | Byeongkeun Kang, Truong Q. Nguyen | IEEE Transactions on Image Processing | Random Forest with Learned Representations for Semantic Segmentation more details | 0.03 | 22.6 | 20.3 | 24.9 |
DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Searching for Efficient Multi-Scale Architectures for Dense Image Prediction | Liang-Chieh Chen, Maxwell D. Collins, Yukun Zhu, George Papandreou, Barret Zoph, Florian Schroff, Hartwig Adam, Jonathon Shlens | NIPS 2018 | In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that achieve state-of-the-art performance. Additionally, the resulting architecture (called DPC for Dense Prediction Cell) is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems. more details | n/a | 82.5 | 74.9 | 90.0 |
NV-ADLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 82.2 | 73.6 | 90.9 | ||
Adaptive Affinity Field on PSPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Adaptive Affinity Field for Semantic Segmentation | Tsung-Wei Ke*, Jyh-Jing Hwang*, Ziwei Liu, Stella X. Yu | ECCV 2018 | Existing semantic segmentation methods mostly rely on per-pixel supervision, unable to capture structural regularity present in natural images. Instead of learning to enforce semantic labels on individual pixels, we propose to enforce affinity field patterns in individual pixel neighbourhoods, i.e., the semantic label patterns of whether neighbouring pixels are in the same segment should match between the prediction and the ground-truth. The affinity fields characterize geometric relationships within the image, such as "motorcycles have round wheels". We further develop a novel method for learning the optimal neighbourhood size for each semantic category, with an adversarial loss that optimizes over worst-case scenarios. Unlike the common Conditional Random Field (CRF) approaches, our adaptive affinity field (AAF) method has no extra parameters during inference, and is less sensitive to appearance changes in the image. more details | n/a | 78.5 | 69.1 | 87.8 |
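The affinity idea summarised in the AAF entry above can be illustrated with a heavily simplified pairwise loss (an assumption-laden sketch, not the authors' method, which uses region-wise affinities and learns neighbourhood sizes adversarially): predicted class distributions at neighbouring pixels are pulled together where the ground-truth labels agree and pushed apart, up to a margin, where they differ.

```python
# Simplified pairwise affinity loss over right/bottom neighbours (illustrative).
import torch
import torch.nn.functional as F

def affinity_loss_sketch(logits, labels, margin=3.0):
    # logits: B x C x H x W class scores, labels: B x H x W integer labels.
    logp = F.log_softmax(logits, dim=1)
    p = logp.exp()
    H, W = labels.shape[1], labels.shape[2]
    loss = 0.0
    for dy, dx in [(0, 1), (1, 0)]:  # right and bottom neighbours
        p_c, logp_c = p[..., :H - dy, :W - dx], logp[..., :H - dy, :W - dx]
        logp_n = logp[..., dy:, dx:]
        kl = (p_c * (logp_c - logp_n)).sum(1)          # KL(centre || neighbour)
        same = (labels[..., :H - dy, :W - dx] == labels[..., dy:, dx:]).float()
        loss = loss + (same * kl + (1.0 - same) * F.relu(margin - kl)).mean()
    return loss

logits = torch.randn(2, 19, 64, 128, requires_grad=True)
labels = torch.randint(0, 19, (2, 64, 128))
print(affinity_loss_sketch(logits, labels))
```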
APMoE_seg_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Pixel-wise Attentional Gating for Parsimonious Pixel Labeling | Shu Kong, Charless Fowlkes | arxiv | The Pixel-level Attentional Gating (PAG) unit is trained to choose, for each pixel, the pooling size used to aggregate the contextual region around it. There are multiple branches with different dilation rates for varied pooling sizes, and thus varying receptive fields. For this ROB challenge, PAG is expected to robustly aggregate information for the final prediction. This is our entry for the Robust Vision Challenge 2018 workshop (ROB). The model is based on ResNet50, trained over a mixed dataset of Cityscapes, ScanNet and KITTI. more details | 0.9 | 66.1 | 50.9 | 81.4
BatMAN_ROB | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | batch-normalized multistage attention network more details | 1.0 | 65.0 | 48.2 | 81.9 | ||
HiSS_ROB | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.06 | 60.5 | 42.2 | 78.8 | ||
VENUS_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | VENUS_ROB more details | n/a | 66.7 | 52.7 | 80.8 | ||
VlocNet++_ROB | no | no | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 60.8 | 42.2 | 79.3 | ||
AHiSS_ROB | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Augmented Hierarchical Semantic Segmentation more details | 0.06 | 62.9 | 45.9 | 79.8 | ||
IBN-PSP-SA_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | IBN-PSP-SA_ROB more details | n/a | 72.0 | 58.6 | 85.4 | ||
LDN2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Ladder DenseNet: https://ivankreso.github.io/publication/ladder-densenet/ more details | 1.0 | 77.1 | 66.1 | 88.1 | ||
MiniNet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.004 | 44.8 | 25.1 | 64.4 | ||
AdapNetv2_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 62.4 | 44.3 | 80.5 | ||
MapillaryAI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 80.2 | 71.3 | 89.0 | ||
FCN101_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 38.5 | 12.9 | 64.0 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name is firefly] | Bosh autodrive challenge more details | n/a | 70.0 | 60.0 | 80.0 | ||
EnsembleModel_Bosch | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name was MaskRCNN_BOSH, firefly] | We ensembled three models (ERFNet, DeepLab-MobileNet, TuSimple) and gained a 0.57 improvement in the IoU Classes value. The best single model reaches 73.8549. more details | n/a | 72.9 | 61.7 | 84.1
EVANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 73.1 | 61.7 | 84.4 | ||
CLRCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CLRCNet: Cascaded Low-Rank Convolutions for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method. more details | 0.013 | 68.0 | 53.4 | 82.5 | |
Edgenet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | A lightweight semantic segmentation network combining edge information and a channel-wise attention mechanism. more details | 0.03 | 75.0 | 63.9 | 86.1
L2-SP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Explicit Inductive Bias for Transfer Learning with Convolutional Networks | Xuhong Li, Yves Grandvalet, Franck Davoine | ICML-2018 | With a simple variant of weight decay, L2-SP regularization (see the paper for details), we reproduced PSPNet based on the original ResNet-101 using "train_fine + val_fine + train_extra" set (2975 + 500 + 20000 images), with a small batch size 8. The sync batch normalization layer is implemented in Tensorflow (see the code). more details | n/a | 78.5 | 68.9 | 88.1 |
ALV303 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.2 | 79.2 | 69.9 | 88.6 | ||
NCTU-ITRI | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | For the purpose of fast semantic segmentation, we design a CNN-based encoder-decoder architecture, which is called DSNet. The encoder part is constructed based on the concept of DenseNet, and a simple decoder is adopted to make the network more efficient without degrading the accuracy. We pre-train the encoder network on the ImageNet dataset. Then, only the fine-annotated Cityscapes dataset (2975 training images) is used to train the complete DSNet. The DSNet demonstrates a good trade-off between accuracy and speed. It can process 68 frames per second on 1024x512 resolution images on a single GTX 1080 Ti GPU. more details | 0.0147 | 70.8 | 58.4 | 83.3 | ||
ADSCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ADSCNet: Asymmetric Depthwise Separable Convolution for Semantic Segmentation in Real-time | Anonymous | A lightweight and real-time semantic segmentation method for mobile devices. more details | 0.013 | 68.7 | 55.1 | 82.3 | |
SRC-B-MachineLearningLab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | Samsung Research Center MachineLearningLab. The result is tested with multi-scale and flip. The paper is in preparation. more details | n/a | 81.5 | 73.7 | 89.2
Tencent AI Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 80.4 | 72.3 | 88.4 | ||
ERINet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | Efficient residual inception networks for real-time semantic segmentation more details | 0.023 | 73.4 | 61.5 | 85.4 | ||
PGCNet_Res101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We choose ResNet101 pretrained on ImageNet as our backbone, then use both the train-fine and the val-fine data to train our model with batch size 8 for 80k iterations, without any bells and whistles. We will release our paper later. more details | n/a | 81.1 | 73.3 | 88.9
EDANet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient Dense Modules of Asymmetric Convolution for Real-Time Semantic Segmentation | Shao-Yuan Lo (NCTU), Hsueh-Ming Hang (NCTU), Sheng-Wei Chan (ITRI), Jing-Jhih Lin (ITRI) | Training data: Fine annotations only (train+val. set, 2975+500 images) without any pretraining nor coarse annotations. For training on fine annotations (train set only, 2975 images), it attains a mIoU of 66.3%. Runtime: (resolution 512x1024) 0.0092s on a single GTX 1080Ti, 0.0123s on a single Titan X. more details | 0.0092 | 69.9 | 56.6 | 83.3 | |
OCNet_ResNet101_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Context is essential for various computer vision tasks. The state-of-the-art scene parsing methods define the context as the prior of the scene categories (e.g., bathroom, bedroom, street). Such scene context is not suitable for the street scene parsing tasks as most of the scenes are similar. In this work, we propose the Object Context that captures the prior of the object's category that the pixel belongs to. We compute the object context by aggregating all the pixels' features according to an attention map that encodes, for each pixel, the probability that it belongs to the same category as the associated pixel. Specifically, we employ the self-attention method to compute the pixel-wise attention map. We further propose the Pyramid Object Context and Atrous Spatial Pyramid Object Context to handle the problem of multiple scales. more details | n/a | 81.1 | 73.2 | 89.1
Knowledge-Aware | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Knowledge-Aware Semantic Segmentation more details | n/a | 78.0 | 68.7 | 87.4 | ||
CASIA_IVA_DANet_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Dual Attention Network for Scene Segmentation | Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu | CVPR2019 | We address the scene segmentation task by capturing rich contextual dependencies based on the self-attention mechanism. Unlike previous works that capture contexts by multi-scale feature fusion, we propose a Dual Attention Network (DANet) to adaptively integrate local features with their global dependencies. Specifically, we append two types of attention modules on top of traditional dilated FCN, which model the semantic interdependencies in spatial and channel dimensions respectively. The position attention module selectively aggregates the features at each position by a weighted sum of the features at all positions. Similar features would be related to each other regardless of their distances. Meanwhile, the channel attention module selectively emphasizes interdependent channel maps by integrating associated features among all channel maps. We sum the outputs of the two attention modules to further improve feature representation, which contributes to more precise segmentation results. more details | n/a | 82.6 | 74.8 | 90.5
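The position-attention half of the dual-attention design described above roughly corresponds to the sketch below (illustrative PyTorch, not the authors' code; the channel-reduction factor is an assumption): each spatial location is updated with a similarity-weighted sum over all locations and added back through a learned residual weight.

```python
# Illustrative position-attention module: pairwise similarity over all locations.
import torch
import torch.nn as nn

class PositionAttentionSketch(nn.Module):
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // reduction, 1)
        self.key = nn.Conv2d(ch, ch // reduction, 1)
        self.value = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.key(x).flatten(2)                      # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)             # B x HW x HW
        v = self.value(x).flatten(2)                     # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

x = torch.randn(1, 64, 32, 64)
print(PositionAttentionSketch(64)(x).shape)  # torch.Size([1, 64, 32, 64])
```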
LDFNet | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Incorporating Luminance, Depth and Color Information by Fusion-based Networks for Semantic Segmentation | Shang-Wei Hung, Shao-Yuan Lo | We propose a preferred solution, which incorporates Luminance, Depth and color information by a Fusion-based network named LDFNet. It includes a distinctive encoder sub-network to process the depth maps and further employs the luminance images to assist the depth information in the process. LDFNet achieves very competitive results compared to other state-of-the-art systems on the challenging Cityscapes dataset, while it maintains an inference speed faster than most of the existing top-performing networks. The experimental results show the effectiveness of the proposed information-fused approach and the potential of LDFNet for road scene understanding tasks. more details | n/a | 74.2 | 62.6 | 85.8
CGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tianyi Wu et al | we propose a novel Context Guided Network for semantic segmentation on mobile devices. We first design a Context Guided (CG) block by considering the inherent characteristic of semantic segmentation. CG Block aggregates local feature, surrounding context feature and global context feature effectively and efficiently. Based on the CG block, we develop Context Guided Network (CGNet), which not only has a strong capacity of localization and recognition, but also has a low computational and memory footprint. Under a similar number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, the proposed approach achieves 64.8% mean IoU on Cityscapes test set with less than 0.5 M parameters, and has a frame-rate of 50 fps on one NVIDIA Tesla K80 card for 2048 × 1024 high-resolution image. more details | 0.02 | 67.5 | 53.7 | 81.3 | ||
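A loose reading of the Context Guided block described in the CGNet entry above might look like the following sketch (layer sizes and the exact fusion are assumptions; the released CGNet differs in detail): a local depth-wise path and a dilated surrounding-context path are fused and then re-weighted by a global-context gate obtained from average pooling.

```python
# Loose sketch of a Context Guided block: local + surrounding context + global gate.
import torch
import torch.nn as nn

class CGBlockSketch(nn.Module):
    def __init__(self, ch, dilation=2):
        super().__init__()
        self.local = nn.Conv2d(ch, ch, 3, padding=1, groups=ch, bias=False)
        self.surround = nn.Conv2d(ch, ch, 3, padding=dilation, dilation=dilation,
                                  groups=ch, bias=False)
        self.fuse = nn.Sequential(nn.BatchNorm2d(2 * ch), nn.PReLU(2 * ch),
                                  nn.Conv2d(2 * ch, ch, 1, bias=False))
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        joint = self.fuse(torch.cat([self.local(x), self.surround(x)], dim=1))
        return joint * self.gate(joint) + x  # global-context gating plus residual

x = torch.randn(1, 32, 64, 128)
print(CGBlockSketch(32)(x).shape)  # torch.Size([1, 32, 64, 128])
```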
SAITv2-light | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.025 | 69.4 | 55.1 | 83.7 | ||
Deform_ResNet_Balanced | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.258 | 63.2 | 50.6 | 75.8 | ||
NfS-Seg | yes | yes | yes | yes | no | no | yes | yes | yes | yes | no | no | no | no | Uncertainty-Aware Knowledge Distillation for Real-Time Scene Segmentation: 7.43 GFLOPs at Full-HD Image with 120 fps | Anonymous | more details | 0.00837312 | 70.1 | 56.1 | 84.0 | |
Improving Semantic Segmentation via Video Propagation and Label Relaxation | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | yes | yes | Improving Semantic Segmentation via Video Propagation and Label Relaxation | Yi Zhu, Karan Sapra, Fitsum A. Reda, Kevin J. Shih, Shawn Newsam, Andrew Tao, Bryan Catanzaro | CVPR 2019 | Semantic segmentation requires large amounts of pixel-wise annotations to learn accurate models. In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks. We exploit video prediction models' ability to predict future frames in order to also predict future labels. A joint propagation strategy is also proposed to alleviate mis-alignments in synthesized samples. We demonstrate that training segmentation models on datasets augmented by the synthesized samples lead to significant improvements in accuracy. Furthermore, we introduce a novel boundary label relaxation technique that makes training robust to annotation noise and propagation artifacts along object boundaries. Our proposed methods achieve state-of-the-art mIoUs of 83.5% on Cityscapes and 82.9% on CamVid. Our single model, without model ensembles, achieves 72.8% mIoU on the KITTI semantic segmentation test set, which surpasses the winning entry of the ROB challenge 2018. more details | n/a | 82.0 | 73.6 | 90.5 |
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 66.5 | 52.4 | 80.7 |
SwiftNetRN-18 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 0.0243 | 77.2 | 67.2 | 87.1 |
Fast-SCNN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra PK Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0081 | 63.5 | 46.1 | 80.8 | |
Fast-SCNN (Half-resolution) | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.0035 | 57.1 | 37.6 | 76.6 | |
Fast-SCNN (Quarter-resolution) | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Fast-SCNN: Fast Semantic Segmentation Network | Rudra P K Poudel, Stephan Liwicki, Roberto Cipolla | The encoder-decoder framework is state-of-the-art for offline semantic image segmentation. Since the rise in autonomous systems, real-time computation is increasingly desirable. In this paper, we introduce fast segmentation convolutional neural network (Fast-SCNN), an above real-time semantic segmentation model on high resolution image data (1024x2048px) suited to efficient computation on embedded devices with low memory. Building on existing two-branch methods for fast segmentation, we introduce our `learning to downsample' module which computes low-level features for multiple resolution branches simultaneously. Our network combines spatial detail at high resolution with deep features extracted at lower resolution, yielding an accuracy of 68.0% mean intersection over union at 123.5 frames per second on Cityscapes. We also show that large scale pre-training is unnecessary. We thoroughly validate our metric in experiments with ImageNet pre-training and the coarse labeled data of Cityscapes. Finally, we show even faster computation with competitive results on subsampled inputs, without any network modifications. more details | 0.00206 | 48.2 | 28.7 | 67.7 | |
DSNet | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | yes | yes | DSNet for Real-Time Driving Scene Semantic Segmentation | Wenfu Wang | DSNet for Real-Time Driving Scene Semantic Segmentation more details | 0.027 | 70.7 | 58.0 | 83.4 | |
SwiftNetRN-18 pyramid | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 74.6 | 64.2 | 85.0 | ||
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | DF1-Seg-d8 more details | 0.007 | 69.6 | 57.1 | 82.2 |
DF-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DF2-Seg2 more details | 0.018 | 73.3 | 60.7 | 86.0 | ||
DDAR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | DiDi Labs, AR Group more details | n/a | 81.5 | 74.0 | 89.1 | ||
LDN-121 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-121 trained on train+val, fine labels only. Single-scale inference. more details | 0.048 | 78.4 | 68.6 | 88.1 | |
TKCN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Tree-structured Kronecker Convolutional Network for Semantic Segmentation | Tianyi Wu, Sheng Tang, Rui Zhang, Juan Cao, Jintao Li | more details | n/a | 81.5 | 73.6 | 89.5 | |
RPNet | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Residual Pyramid Learning for Single-Shot Semantic Segmentation | Xiaoyu Chen, Xiaotian Lou, Lianfa Bai, Jing Han | arXiv | we put forward a method for single-shot segmentation in a feature residual pyramid network (RPNet), which learns the main and residuals of segmentation by decomposing the label at different levels of residual blocks. more details | 0.008 | 72.3 | 59.0 | 85.7 |
navi | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | yuxb | multi-scale test more details | n/a | 79.1 | 68.2 | 89.9
Auto-DeepLab-L | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic Image Segmentation | Chenxi Liu, Liang-Chieh Chen, Florian Schroff, Hartwig Adam, Wei Hua, Alan Yuille, Li Fei-Fei | arxiv | In this work, we study Neural Architecture Search for semantic image segmentation, an important computer vision task that assigns a semantic label to every pixel in an image. Existing works often focus on searching the repeatable cell structure, while hand-designing the outer network structure that controls the spatial resolution changes. This choice simplifies the search space, but becomes increasingly problematic for dense image prediction which exhibits a lot more network level architectural variations. Therefore, we propose to search the network level structure in addition to the cell level structure, which forms a hierarchical architecture search space. We present a network level search space that includes many popular designs, and develop a formulation that allows efficient gradient-based architecture search (3 P100 GPU days on Cityscapes images). We demonstrate the effectiveness of the proposed method on the challenging Cityscapes, PASCAL VOC 2012, and ADE20K datasets. Without any ImageNet pretraining, our architecture searched specifically for semantic image segmentation attains state-of-the-art performance. Please refer to https://arxiv.org/abs/1901.02985 for details. more details | n/a | 82.0 | 74.2 | 89.8 |
LiteSeg-Darknet19 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0102 | 76.1 | 65.9 | 86.3
AdapNet++ | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | In this work, we propose the AdapNet++ architecture for semantic segmentation that aims to achieve the right trade-off between performance and computational complexity of the model. AdapNet++ incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling (eASPP) module that has a larger effective receptive field with more than 10x fewer parameters compared to the standard ASPP, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance while simultaneously being efficient in terms of both the number of parameters and inference time. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 80.1 | 71.5 | 88.7 |
SSMA | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | yes | yes | Self-Supervised Model Adaptation for Multimodal Semantic Segmentation | Abhinav Valada, Rohit Mohan, Wolfram Burgard | IJCV 2019 | Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real-world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams, rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed SSMA fusion mechanism which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. Extensive experimental evaluations on the challenging Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest datasets demonstrate that our architecture achieves state-of-the-art performance in addition to providing exceptional robustness in adverse perceptual conditions. Please refer to https://arxiv.org/abs/1808.03833 for details. A live demo on various datasets can be viewed at http://deepscene.cs.uni-freiburg.de more details | n/a | 81.7 | 73.6 | 89.8
LiteSeg-Mobilenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.0062 | 72.0 | 62.1 | 82.0
LiteSeg-Shufflenet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | LiteSeg: A Lightweight ConvNet for Semantic Segmentation | Taha Emara, Hossam E. Abd El Munim, Hazem M. Abbas | DICTA 2019 | more details | 0.007518 | 67.3 | 55.0 | 79.5
Fast OCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 80.7 | 72.4 | 89.0 | ||
ShuffleNet v2 + DPC | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | An efficient solution for semantic segmentation: ShuffleNet V2 with atrous separable convolutions | Sercan Turkmen, Janne Heikkila | ShuffleNet v2 with DPC at output_stride 16. more details | n/a | 69.9 | 56.7 | 83.1 | |
ERSNet-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.012 | 67.8 | 53.1 | 82.6 | ||
MiniNet-v2-coarse | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 68.3 | 54.4 | 82.2 | ||
SwiftNetRN-18 ensemble | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | n/a | 76.5 | 66.4 | 86.6 |
EFC_sync | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Anonymous | more details | n/a | 77.8 | 67.3 | 88.4 | ||
PL-Seg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Partial Order Pruning: for Best Speed/Accuracy Trade-off in Neural Architecture Search | Xin Li, Yiming Zhou, Zheng Pan, Jiashi Feng | CVPR 2019 | Following "partial order pruning", we conduct architecture search experiments on the Snapdragon 845 platform and obtain PL1A/PL1A-Seg. 1. Snapdragon 845; 2. NCNN library; 3. latency evaluated at 640x384. more details | 0.0192 | 67.7 | 52.9 | 82.5
MiniNet-v2-pretrained | yes | yes | yes | yes | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 0.012 | 68.4 | 53.8 | 82.9 | ||
GALD-Net | yes | yes | yes | yes | yes | yes | yes | yes | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively according to the local information around the position. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research ) more details | n/a | 81.9 | 74.5 | 89.4 |
GALD-net | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Global Aggregation then Local Distribution in Fully Convolutional Networks | Xiangtai Li, Li Zhang, Ansheng You, Maoke Yang, Kuiyuan Yang, Yunhai Tong | BMVC 2019 | We propose a Global Aggregation then Local Distribution (GALD) scheme to distribute global information to each position adaptively according to the local information surrounding the position. more details | n/a | 81.4 | 73.8 | 89.1
ndnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.024 | 64.7 | 48.5 | 80.9 | ||
HRNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions | Ke Sun, Yang Zhao, Borui Jiang, Tianheng Cheng, Bin Xiao, Dong Liu, Yadong Mu, Xinggang Wang, Wenyu Liu, Jingdong Wang | The high-resolution network (HRNet), recently developed for human pose estimation, maintains high-resolution representations through the whole process by connecting high-to-low resolution convolutions in parallel, and produces strong high-resolution representations by repeatedly conducting fusions across the parallel convolutions. more details | n/a | 82.1 | 74.8 | 89.4
SPGNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SPGNet: Semantic Prediction Guidance for Scene Parsing | Bowen Cheng, Liang-Chieh Chen, Yunchao Wei, Yukun Zhu, Zilong Huang, Jinjun Xiong, Thomas Huang, Wen-Mei Hwu, Honghui Shi | ICCV 2019 | Multi-scale context module and single-stage encoder-decoder structure are commonly employed for semantic segmentation. The multi-scale context module refers to the operations to aggregate feature responses from a large spatial extent, while the single-stage encoder-decoder structure encodes the high-level semantic information in the encoder path and recovers the boundary information in the decoder path. In contrast, multi-stage encoder-decoder networks have been widely used in human pose estimation and show superior performance than their single-stage counterpart. However, few efforts have been attempted to bring this effective design to semantic segmentation. In this work, we propose a Semantic Prediction Guidance (SPG) module which learns to re-weight the local features through the guidance from pixel-wise semantic prediction. We find that by carefully re-weighting features across stages, a two-stage encoder-decoder network coupled with our proposed SPG module can significantly outperform its one-stage counterpart with similar parameters and computations. Finally, we report experimental results on the semantic segmentation benchmark Cityscapes, in which our SPGNet attains 81.1% on the test set using only 'fine' annotations. more details | n/a | 82.1 | 74.5 | 89.7 |
LDN-161 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Ladder-style DenseNets for Semantic Segmentation of Large Images | Ivan Kreso, Josip Krapac, Sinisa Segvic | Ladder DenseNet-161 trained on train+val, fine labels only. Inference on multi-scale inputs. more details | 2.0 | 79.1 | 69.1 | 89.2 | |
GGCF | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.3 | 73.1 | 89.4 | ||
GFF-Net | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | GFF: Gated Fully Fusion for Semantic Segmentation | Xiangtai Li, Houlong Zhao, Yunhai Tong, Kuiyuan Yang | We proposed Gated Fully Fusion (GFF) to fuse features from multiple levels through gates in a fully connected way. Specifically, features at each level are enhanced by higher-level features with stronger semantics and lower-level features with more details, and gates are used to control the passing of useful information, which significantly reduces noise propagation during fusion. (Joint work: Key Laboratory of Machine Perception, School of EECS @Peking University and DeepMotion AI Research) more details | n/a | 81.4 | 73.6 | 89.3
Gated-SCNN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Gated-SCNN: Gated Shape CNNs for Semantic Segmentation | Towaki Takikawa, David Acuna, Varun Jampani, Sanja Fidler | more details | n/a | 82.7 | 74.6 | 90.7 | |
ESPNetv2 | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network | Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi | CVPR 2019 | We introduce a light-weight, power efficient, and general purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on the ImageNet and language modeling on the Penn Treebank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2 more details | n/a | 66.3 | 51.4 | 81.2
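The "group point-wise and depth-wise dilated separable convolutions" mentioned in the ESPNetv2 entry above can be sketched as a single building block (channel counts, group count and dilation rate are illustrative; the full EESP unit additionally fuses several such branches hierarchically).

```python
# Group point-wise projection followed by a depth-wise dilated 3x3 convolution.
import torch
import torch.nn as nn

class DWDilatedSeparable(nn.Module):
    def __init__(self, in_ch, out_ch, dilation=2, groups=4):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, groups=groups, bias=False)
        self.depthwise = nn.Conv2d(out_ch, out_ch, 3, padding=dilation,
                                   dilation=dilation, groups=out_ch, bias=False)

    def forward(self, x):
        return self.depthwise(self.pointwise(x))

x = torch.randn(1, 32, 128, 256)
print(DWDilatedSeparable(32, 64)(x).shape)  # torch.Size([1, 64, 128, 256])
```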
MRFM | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Multi Receptive Field Network for Semantic Segmentation | Jianlong Yuan, Zelu Deng, Shu Wang, Zhenbo Luo | WACV2020 | Semantic segmentation is one of the key tasks in computer vision, which is to assign a category label to each pixel in an image. Despite significant progress achieved recently, most existing methods still suffer from two challenging issues: 1) the size of objects and stuff in an image can be very diverse, demanding for incorporating multi-scale features into the fully convolutional networks (FCNs); 2) the pixels close to or at the boundaries of object/stuff are hard to classify due to the intrinsic weakness of convolutional networks. To address the first issue, we propose a new Multi-Receptive Field Module (MRFM), explicitly taking multi-scale features into account. For the second issue, we design an edge-aware loss which is effective in distinguishing the boundaries of object/stuff. With these two designs, our Multi Receptive Field Network achieves new state-of-the-art results on two widely-used semantic segmentation benchmark datasets. Specifically, we achieve a mean IoU of 83.0% on the Cityscapes dataset and 88.4% mean IoU on the Pascal VOC2012 dataset. more details | n/a | 82.0 | 74.8 | 89.2
DGCNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Dual Graph Convolutional Network for Semantic Segmentation | Li Zhang*, Xiangtai Li*, Anurag Arnab, Kuiyuan Yang, Yunhai Tong, Philip H.S. Torr | BMVC 2019 | We propose Dual Graph Convolutional Network (DGCNet), which models the global context of the input feature by modelling two orthogonal graphs in a single framework. (Joint work: University of Oxford, Peking University and DeepMotion AI Research) more details | n/a | 81.1 | 72.9 | 89.2
dpcan_trainval_os16_225 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.1 | 73.1 | 89.1 | ||
Learnable Tree Filter | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Learnable Tree Filter for Structure-preserving Feature Transform | Lin Song; Yanwei Li; Zeming Li; Gang Yu; Hongbin Sun; Jian Sun; Nanning Zheng | NeurIPS 2019 | Learnable Tree Filter for Structure-preserving Feature Transform more details | n/a | 81.1 | 72.9 | 89.3 |
FreeNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 70.2 | 58.9 | 81.6 | ||
HRNetV2 + OCR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | High-Resolution Representations for Labeling Pixels and Regions; OCNet: Object Context Network for Scene Parsing | HRNet Team; OCR Team | HRNetV2W48 + OCR. OCR is an extension of object context networks https://arxiv.org/pdf/1809.00916.pdf more details | n/a | 81.7 | 73.9 | 89.4 | |
Valeo DAR Germany | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Valeo DAR Germany, New Algo Lab more details | n/a | 82.2 | 74.4 | 89.9 | ||
GLNet_fine | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | The proposed network architecture combines spatial information with multi-scale context information, and repairs the boundaries and details of the segmented objects through channel attention modules. (Uses the train-fine and val-fine data.) more details | n/a | 79.5 | 70.4 | 88.5 | |
MCDN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.2 | 72.4 | 90.1 | ||
AAF+GLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 79.3 | 70.3 | 88.3 | ||
HRNetV2 + OCR (w/ ASP) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | openseg-group (OCR team + HRNet team) | Our approach is based on a single HRNet48V2 and an OCR module combined with ASPP. We apply depth based multi-scale ensemble weights during testing (provided by DeepMotion AI Research) . more details | n/a | 83.5 | 76.8 | 90.1 | ||
CASIA_IVA_DRANet-101_NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.4 | 77.8 | 91.0 | ||
Hyundai Mobis AD Lab | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Hyundai Mobis AD Lab, DL-DB Group, AA (Automated Annotator) Team | more details | n/a | 82.4 | 74.4 | 90.4 | ||
EFRNet-13 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0146 | 70.1 | 57.5 | 82.7 | ||
FarSee-Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale Context Aggregation and Feature Space Super-resolution | Zhanpeng Zhang and Kaipeng Zhang | IEEE International Conference on Robotics and Automation (ICRA) 2020 | FarSee-Net: Real-Time Semantic Segmentation by Efficient Multi-scale Context Aggregation and Feature Space Super-resolution. Real-time semantic segmentation is desirable in many robotic applications with limited computation resources. One challenge of semantic segmentation is to deal with the object scale variations and leverage the context. How to perform multi-scale context aggregation within a limited computation budget is important. In this paper, firstly, we introduce a novel and efficient module called Cascaded Factorized Atrous Spatial Pyramid Pooling (CF-ASPP). It is a lightweight cascaded structure for Convolutional Neural Networks (CNNs) to efficiently leverage context information. On the other hand, for runtime efficiency, state-of-the-art methods will quickly decrease the spatial size of the inputs or feature maps in the early network stages. The final high-resolution result is usually obtained by a non-parametric up-sampling operation (e.g. bilinear interpolation). Differently, we rethink this pipeline and treat it as a super-resolution process. We use an optimized super-resolution operation in the up-sampling step and improve the accuracy, especially in the sub-sampled input image scenario for real-time applications. By fusing the above two improvements, our method provides a better latency-accuracy trade-off than other state-of-the-art methods. In particular, we achieve 68.4% mIoU at 84 fps on the Cityscapes test set with a single Nvidia Titan X (Maxwell) GPU card. The proposed module can be plugged into any feature extraction CNN and benefits from the CNN structure development. more details | 0.0119 | 69.7 | 55.8 | 83.7
C3Net [2,3,7,13] | no | no | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | C3: Concentrated-Comprehensive Convolution and its application to semantic segmentation | Hyojin Park, Youngjoon Yoo, Geonseok Seo, Dongyoon Han, Sangdoo Yun, Nojun Kwak | more details | n/a | 67.8 | 53.8 | 81.7 | |
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. more details | n/a | 78.4 | 71.8 | 84.9 | |
EKENet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.0229 | 69.9 | 57.2 | 82.6 | ||
SPSSN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Stage Pooling Semantic Segmentation Network more details | n/a | 70.2 | 56.6 | 83.9 | ||
FC-HarDNet-70 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | HarDNet: A Low Memory Traffic Network | Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, Youn-Long Lin | ICCV 2019 | Fully Convolutional Harmonic DenseNet 70. U-shape encoder-decoder structure with HarDNet blocks. Trained with single scale loss at stride-4; validation mIoU = 77.7. more details | 0.015 | 76.7 | 66.3 | 87.0
BFP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Boundary-Aware Feature Propagation for Scene Segmentation | Henghui Ding, Xudong Jiang, Ai Qun Liu, Nadia Magnenat Thalmann, and Gang Wang | IEEE International Conference on Computer Vision (ICCV), 2019 | Boundary-Aware Feature Propagation for Scene Segmentation more details | n/a | 81.4 | 73.0 | 89.9 |
FasterSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | FasterSeg: Searching for Faster Real-time Semantic Segmentation | Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang | ICLR 2020 | We present FasterSeg, an automatically designed semantic segmentation network with not only state-of-the-art performance but also faster speed than current methods. Utilizing neural architecture search (NAS), FasterSeg is discovered from a novel and broader search space integrating multi-resolution branches, that has been recently found to be vital in manually designed segmentation models. To better calibrate the balance between the goals of high accuracy and low latency, we propose a decoupled and fine-grained latency regularization, that effectively overcomes our observed phenomenons that the searched networks are prone to "collapsing" to low-latency yet poor-accuracy models. Moreover, we seamlessly extend FasterSeg to a new collaborative search (co-searching) framework, simultaneously searching for a teacher and a student network in the same single run. The teacher-student distillation further boosts the student model's accuracy. Experiments on popular segmentation benchmarks demonstrate the competency of FasterSeg. For example, FasterSeg can run over 30% faster than the closest manually designed competitor on Cityscapes, while maintaining comparable accuracy. more details | 0.00613 | 73.6 | 62.2 | 84.9 |
VCD-NoCoarse | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 83.2 | 76.5 | 89.8 | ||
NAVINFO_DLR | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | pengfei zhang | weighted aspp+ohem+hard region refine more details | n/a | 83.7 | 77.4 | 90.0 | ||
LBPSS | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | CVPR 2020 submission #5455 more details | 0.9 | 67.5 | 54.7 | 80.3 | ||
KANet_Res101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.5 | 74.9 | 90.1 | ||
Learnable Tree Filter V2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Rethinking Learnable Tree Filter for Generic Feature Transform | Lin Song, Yanwei Li, Zhengkai Jiang, Zeming Li, Xiangyu Zhang, Hongbin Sun, Jian Sun, Nanning Zheng | NeurIPS 2020 | Based on ResNet-101 backbone and FPN architecture. more details | n/a | 83.6 | 77.3 | 89.8 |
GPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.6 | 73.7 | 89.5 | ||
FTFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | An Efficient Network Focused on Tiny Feature Maps for Real-Time Semantic Segmentation more details | 0.0088 | 70.9 | 58.4 | 83.5 | ||
iFLYTEK-CV | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 82.4 | 74.6 | 90.2 | ||
F2MF-short | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 3 timesteps into the future. more details | n/a | 61.5 | 47.1 | 75.9 |
HPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | High-Order Paired-ASPP Networks for Semantic Segmentation | Yu Zhang, Xin Sun, Junyu Dong, Changrui Chen, Yue Shen | more details | n/a | 80.4 | 71.5 | 89.3 | |
HANet (fine-train only) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | TBA | Anonymous | We use only fine-training data. more details | n/a | 79.5 | 69.5 | 89.5 | |
F2MF-mid | yes | yes | no | no | no | no | no | no | yes | yes | no | no | yes | yes | Warp to the Future: Joint Forecasting of Features and Feature Motion | Josip Saric, Marin Orsic, Tonci Antunovic, Sacha Vrazic, Sinisa Segvic | The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | Our method forecasts semantic segmentation 9 timesteps into the future. more details | n/a | 48.0 | 32.0 | 64.0 |
EMANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Expectation Maximization Attention Networks for Semantic Segmentation | Xia Li, Zhisheng Zhong, Jianlong Wu, Yibo Yang, Zhouchen Lin, Hong Liu | ICCV 2019 | more details | n/a | 80.3 | 72.1 | 88.5 |
PartnerNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | PARTNERNET: A LIGHTWEIGHT AND EFFICIENT PARTNER NETWORK FOR SEMANTIC SEGMENTATION more details | 0.0058 | 73.6 | 62.2 | 85.0 | ||
SwiftNet RN18 pyr sepBN MVD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient semantic segmentation with pyramidal fusion | M Oršić, S Šegvić | Pattern Recognition 2020 | more details | 0.029 | 77.9 | 67.7 | 88.0 |
Tencent YYB VisualAlgo | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Tencent YYB VisualAlgo Group more details | n/a | 82.0 | 73.5 | 90.5 | ||
MoKu Lab | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Alibaba, MoKu AI Lab, CV Group more details | n/a | 83.1 | 75.5 | 90.7 | ||
HRNetV2 + OCR + SegFix | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Object-Contextual Representations for Semantic Segmentation | Yuhui Yuan, Xilin Chen, Jingdong Wang | First, we pre-train "HRNet+OCR" method on the Mapillary training set (achieves 50.8% on the Mapillary val set). Second, we fine-tune the model with the Cityscapes training, validation and coarse set. Finally, we apply the "SegFix" scheme to further improve the results. more details | n/a | 83.9 | 77.4 | 90.4 | |
DecoupleSegNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improving Semantic Segmentation via Decoupled Body and Edge Supervision | Xiangtai Li, Xia Li, Li Zhang, Guangliang Cheng, Jianping Shi, Zhouchen Lin, Shaohua Tan, and Yunhai Tong | ECCV-2020 | In this paper, we propose a new paradigm for semantic segmentation. Our insight is that appealing performance of semantic segmentation requires explicitly modeling the object body and edge, which correspond to the high and low frequency of the image. To do so, we first warp the image feature by learning a flow field to make the object part more consistent. The resulting body feature and the residual edge feature are further optimized under decoupled supervision by explicitly sampling different parts (body or edge) pixels. The code and models have been released. more details | n/a | 81.4 | 72.7 | 90.1
LGE A&B Center: HANet (ResNet-101) | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val", No coarse, Backbone: ImageNet pretrained ResNet-101 more details | n/a | 81.2 | 72.4 | 89.9 |
DCNAS | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation | Xiong Zhang, Hongmin Xu, Hong Mo, Jianchao Tan, Cheng Yang, Wenqi Ren | Neural Architecture Search (NAS) has shown great potentials in automatically designing scalable network architectures for dense image predictions. However, existing NAS algorithms usually compromise on restricted search space and search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between target and proxy dataset, we propose a Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module to reduce the memory consumption of ample search space. more details | n/a | 82.5 | 75.4 | 89.5 | |
GPNet-ResNet101 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 82.0 | 74.0 | 90.0 | ||
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 76.7 | 70.4 | 83.0 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 78.9 | 72.3 | 85.5 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 76.2 | 69.7 | 82.7 |
LGE A&B Center: HANet (ResNext-101) | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Cars Can’t Fly up in the Sky: Improving Urban-Scene Segmentation via Height-driven Attention Networks | Sungha Choi (LGE, Korea Univ.), Joanne T. Kim (Korea Univ.), Jaegul Choo (KAIST) | CVPR 2020 | Dataset: "fine train + fine val + coarse", Backbone: Mapillary pretrained ResNext-101 more details | n/a | 80.7 | 71.4 | 89.9 |
ERINet-v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Efficient Residual Inception Network | MINJONG KIM, SUYOUNG CHI | ongoing | more details | 0.00526316 | 70.0 | 56.8 | 83.3 |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated for several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 82.0 | 76.1 | 87.8 | |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 79.7 | 73.2 | 86.3 |
TUE-5LSM0-g23 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Deeplabv3+decoder more details | n/a | 67.8 | 54.7 | 81.0 | ||
PBRNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | modified MobileNetV2 backbone + Prediction and Boundary attention-based Refinement Module (PBRM) more details | 0.0107 | 77.3 | 67.0 | 87.6 | ||
ResNeSt200 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | ResNeSt: Split-Attention Networks | Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, and Alexander Smola | DeepLabV3+ network with ResNeSt200 backbone. more details | n/a | 81.3 | 72.6 | 89.9 | |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 82.8 | 76.8 | 88.9 | |
EaNet-V1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Parsing Very High Resolution Urban Scene Images by Learning Deep ConvNets with Edge-Aware Loss | Xianwei Zheng, Linxi Huan, Gui-Song Xia, Jianya Gong | Parsing very high resolution (VHR) urban scene images into regions with semantic meaning, e.g. buildings and cars, is a fundamental task necessary for interpreting and understanding urban scenes. However, due to the huge quantity of details contained in an image and the large variations of objects in scale and appearance, the existing semantic segmentation methods often break one object into pieces, or confuse adjacent objects and thus fail to depict these objects consistently. To address this issue, we propose a concise and effective edge-aware neural network (EaNet) for urban scene semantic segmentation. The proposed EaNet model is deployed as a standard balanced encoder-decoder framework. Specifically, we devised two plug-and-play modules that append on top of the encoder and decoder respectively, i.e., the large kernel pyramid pooling (LKPP) and the edge-aware loss (EA loss) function, to extend the model ability in learning discriminating features. The LKPP module captures rich multi-scale context with strong continuous feature relations to promote coherent labeling of multi-scale urban objects. The EA loss module learns edge information directly from semantic segmentation prediction, which avoids costly post-processing or extra edge detection. During training, EA loss imposes a strong geometric awareness to guide object structure learning at both the pixel- and image-level, and thus effectively separates confusing objects with sharp contours. more details | n/a | 77.8 | 66.8 | 88.9 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 83.5 | 76.8 | 90.3 | |
FSFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Accelerator-Aware Fast Spatial Feature Network for Real-Time Semantic Segmentation | Minjong Kim, Byungjae Park, Suyoung Chi | IEEE Access | Semantic segmentation is performed to understand an image at the pixel level; it is widely used in the field of autonomous driving. In recent years, deep neural networks achieve good accuracy performance; however, there exist few models that have a good trade-off between high accuracy and low inference time. In this paper, we propose a fast spatial feature network (FSFNet), an optimized lightweight semantic segmentation model using an accelerator, offering high performance as well as faster inference speed than current methods. FSFNet employs the FSF and MRA modules. The FSF module has three different types of subset modules to extract spatial features efficiently. They are designed in consideration of the size of the spatial domain. The multi-resolution aggregation module combines features that are extracted at different resolutions to reconstruct the segmentation image accurately. Our approach is able to run at over 203 FPS at full resolution (1024 x 2048) on a single NVIDIA 1080Ti GPU, and obtains a result of 69.13% mIoU on the Cityscapes test dataset. Compared with existing models in real-time semantic segmentation, our proposed model retains remarkable accuracy while having high FPS that is over 30% faster than the state-of-the-art model. The experimental results proved that our model is an ideal approach for the Cityscapes dataset. more details | 0.0049261 | 72.6 | 60.4 | 84.7
Hierarchical Multi-Scale Attention for Semantic Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Hierarchical Multi-Scale Attention for Semantic Segmentation | Andrew Tao, Karan Sapra, Bryan Catanzaro | Multi-scale inference is commonly used to improve the results of semantic segmentation. Multiple image scales are passed through a network and then the results are combined with averaging or max pooling. In this work, we present an attention-based approach to combining multi-scale predictions. We show that predictions at certain scales are better at resolving particular failure modes and that the network learns to favor those scales for such cases in order to generate better predictions. Our attention mechanism is hierarchical, which enables it to be roughly 4x more memory efficient to train than other recent approaches. In addition to enabling faster training, this allows us to train with larger crop sizes which leads to greater model accuracy. We demonstrate the result of our method on two datasets: Cityscapes and Mapillary Vistas. For Cityscapes, which has a large number of weakly labelled images, we also leverage auto-labelling to improve generalization. Using our approach we achieve new state-of-the-art results in both Mapillary (61.1 IOU val) and Cityscapes (85.4 IOU test). more details | n/a | 85.4 | 79.3 | 91.5 |
SANet | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 25.0 | 80.2 | 70.8 | 89.7 | ||
SJTU_hpm | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu*, Haohua Zhao, and Liqing Zhang | more details | n/a | 79.1 | 69.2 | 89.0 | |
FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | FANet: Feature Aggregation Network for Semantic Segmentation | Tanmay Singha, Duc-Son Pham, and Aneesh Krishna | Feature Aggregation Network for Semantic Segmentation more details | n/a | 61.1 | 43.4 | 78.8 | |
Hard Pixel Mining for Depth Privileged Semantic Segmentation | yes | yes | yes | yes | no | no | yes | yes | no | no | no | no | no | no | Hard Pixel Mining for Depth Privileged Semantic Segmentation | Zhangxuan Gu, Li Niu, Haohua Zhao, and Liqing Zhang | Semantic segmentation has achieved remarkable progress but remains challenging due to the complex scene, object occlusion, and so on. Some research works have attempted to use extra information such as a depth map to help RGB based semantic segmentation because the depth map could provide complementary geometric cues. However, due to the inaccessibility of depth sensors, depth information is usually unavailable for the test images. In this paper, we leverage only the depth of training images as the privileged information to mine the hard pixels in semantic segmentation, in which depth information is only available for training images but not available for test images. Specifically, we propose a novel Loss Weight Module, which outputs a loss weight map by employing two depth-related measurements of hard pixels: Depth Prediction Error and Depth-aware Segmentation Error. The loss weight map is then applied to segmentation loss, with the goal of learning a more robust model by paying more attention to the hard pixels. Besides, we also explore a curriculum learning strategy based on the loss weight map. Meanwhile, to fully mine the hard pixels on different scales, we apply our loss weight module to multi-scale side outputs. Our hard pixel mining method achieves state-of-the-art results on three benchmark datasets, and even outperforms the methods which need depth input during testing. more details | n/a | 82.6 | 74.9 | 90.3 |
MSeg1080_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | MSeg: A Composite Dataset for Multi-domain Semantic Segmentation | John Lambert*, Zhuang Liu*, Ozan Sener, James Hays, Vladlen Koltun | CVPR 2020 | We present MSeg, a composite dataset that unifies semantic segmentation datasets from different domains. A naive merge of the constituent datasets yields poor performance due to inconsistent taxonomies and annotation practices. We reconcile the taxonomies and bring the pixel-level annotations into alignment by relabeling more than 220,000 object masks in more than 80,000 images, requiring more than 1.34 years of collective annotator effort. The resulting composite dataset enables training a single semantic segmentation model that functions effectively across domains and generalizes to datasets that were not seen during training. We adopt zero-shot cross-dataset transfer as a benchmark to systematically evaluate a model’s robustness and show that MSeg training yields substantially more robust models in comparison to training on individual datasets or naive mixing of datasets without the presented contributions. more details | 0.49 | 79.5 | 70.9 | 88.2 |
SA-Gate (ResNet-101,OS=16) | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation | Xiaokang Chen, Kwan-Yee Lin, Jingbo Wang, Wayne Wu, Chen Qian, Hongsheng Li, and Gang Zeng | European Conference on Computer Vision (ECCV), 2020 | RGB+HHA input, input resolution = 800x800, output stride = 16, training 240 epochs, no coarse data is used. more details | n/a | 83.0 | 76.1 | 90.0 |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation Resnet101, pretrained on Imagenet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 67.9 | 60.8 | 75.0 |
HRNet + LKPP + EA loss | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 78.6 | 67.6 | 89.6 | ||
SN_RN152pyrx8_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images | Marin Oršić, Ivan Krešo, Petra Bevandić, Siniša Šegvić | CVPR 2019 | more details | 1.0 | 75.9 | 65.2 | 86.6 |
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 69.2 | 54.3 | 84.0 | |
AttaNet_light | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | AttaNet: Attention-Augmented Network for Fast and Accurate Scene Parsing (AAAI21) | Anonymous | more details | n/a | 72.1 | 58.8 | 85.4 |
CFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 72.2 | 60.4 | 84.0 | ||
Seg_UJS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.4 | 77.9 | 90.9 | ||
Bilateral_attention_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We use a bilateral attention mechanism for semantic segmentation. more details | 0.0141 | 79.5 | 71.0 | 87.9 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 81.7 | 75.8 | 87.6 | |
ESANet RGB-D (small input) | yes | yes | no | no | no | no | yes | yes | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data with half the input resolution. more details | 0.0427 | 70.7 | 56.8 | 84.6 | |
ESANet RGB (small input) | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images with half the input resolution. more details | 0.031 | 66.5 | 50.5 | 82.6 | |
ESANet RGB-D | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB-D data. more details | 0.1613 | 79.0 | 69.3 | 88.7 | |
DAHUA-ARI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | multi-scale and refineNet more details | n/a | 85.4 | 79.2 | 91.6 | ||
ESANet RGB | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis | Daniel Seichter, Mona Köhler, Benjamin Lewandowski, Tim Wengefeld and Horst-Michael Gross | ESANet: Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis. ESANet-R34-NBt1D using RGB images only. more details | 0.1205 | 76.2 | 65.5 | 86.9 | |
DCNAS+ASPP [Mapillary Vistas] | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Existing NAS algorithms usually compromise on restricted search space or search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. Compared with contemporary works, experiments reveal that the proxyless searching scheme is capable of bridging the gap between searching and training environments. more details | n/a | 85.3 | 79.2 | 91.5 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 83.3 | 77.7 | 89.0 | |
DCNAS+ASPP | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation | Anonymous | Existing NAS algorithms usually compromise on restricted search space or search on proxy task to meet the achievable computational demands. To allow as wide as possible network architectures and avoid the gap between realistic and proxy setting, we propose a novel Densely Connected NAS (DCNAS) framework, which directly searches the optimal network structures for the multi-scale representations of visual information, over a large-scale target dataset without proxy. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space to cover an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module and mixture layer to reduce the memory consumption of ample search space, hence favor the proxyless searching. more details | n/a | 84.6 | 78.1 | 91.2 |
ddl_seg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.4 | 77.9 | 90.9 | ||
CABiNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | CABiNet: Efficient Context Aggregation Network for Low-Latency Semantic Segmentation | Saumya Kumaar, Ye Lyu, Francesco Nex, Michael Ying Yang | With the increasing demand of autonomous machines, pixel-wise semantic segmentation for visual scene understanding needs to be not only accurate but also efficient for any potential real-time applications. In this paper, we propose CABiNet (Context Aggregated Bi-lateral Network), a dual branch convolutional neural network (CNN), with significantly lower computational costs as compared to the state-of-the-art, while maintaining a competitive prediction accuracy. Building upon the existing multi-branch architectures for high-speed semantic segmentation, we design a cheap high resolution branch for effective spatial detailing and a context branch with light-weight versions of global aggregation and local distribution blocks, potent to capture both long-range and local contextual dependencies required for accurate semantic segmentation, with low computational overheads. Specifically, we achieve 76.6% and 75.9% mIOU on Cityscapes validation and test sets respectively, at 76 FPS on an NVIDIA RTX 2080Ti and 8 FPS on a Jetson Xavier NX. Codes and training models will be made publicly available. more details | 0.013 | 75.7 | 62.8 | 88.7 | |
Margin calibration | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | The model is DeepLab v3+ with an SEResNeXt50 backbone. We used margin calibration with log-loss as the learning objective. more details | n/a | 81.8 | 73.6 | 90.0 | |
MT-SSSR | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | n/a | 81.6 | 74.6 | 88.6 | ||
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 85.1 | 80.1 | 90.1 | |
DSANet: Dilated Spatial Attention for Real-time Semantic Segmentation in Urban Street Scenes | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | We present a computationally efficient network named DSANet, which follows a two-branch strategy to tackle the problem of real-time semantic segmentation in urban scenes. We first design a Context branch, which employs a Depth-wise Asymmetric ShuffleNet (DAS) as its main building block to acquire sufficient receptive fields. In addition, we propose a dual attention module consisting of dilated spatial attention and channel attention to make full use of the multi-level feature maps simultaneously, which helps predict the pixel-wise labels in each stage. Meanwhile, a Spatial Encoding Network is used to enhance semantic information by preserving the spatial details. Finally, to better combine context information and spatial information, we introduce a Simple Feature Fusion Module to combine the features from the two branches. more details | n/a | 72.5 | 62.2 | 82.7 | |
UJS_model | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.1 | 78.9 | 91.4 | ||
Mobilenetv3-small-backbone real-time segmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | The model is a dual-path network with a MobileNetV3-small backbone. A PSP module was used as the context aggregation block. We also use feature fusion modules at x16 and x32. The features of the two branches are then concatenated and fused with a bottleneck conv. Only the train data is used to train the model, excluding the validation data, and evaluation was done with single-scale input images. more details | 0.02 | 67.5 | 53.5 | 81.4 | |
M2FANet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Urban street scene analysis using lightweight multi-level multi-path feature aggregation network | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Multiagent and Grid Systems Journal | more details | n/a | 67.8 | 53.8 | 81.7 |
AFPNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.03 | 75.2 | 64.3 | 86.0 | ||
YOLO V5s with Segmentation Head | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Anonymous | Multitask model. Fine-tuned from a COCO detection pretrained model; semantic segmentation and object detection (transferred from instance labels) are trained at the same time. more details | 0.007 | 70.4 | 57.1 | 83.7 | |
FSFFNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | A Lightweight Multi-scale Feature Fusion Network for Real-Time Semantic Segmentation | Tanmay Singha, Duc-Son Pham, Aneesh Krishna, Tom Gedeon | International Conference on Neural Information Processing 2021 | Feature Scaling Feature Fusion Network more details | n/a | 68.5 | 55.2 | 81.8 |
Qualcomm AI Research | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InverseForm: A Loss Function for Structured Boundary-Aware Segmentation | Shubhankar Borse, Ying Wang, Yizhe Zhang, Fatih Porikli | CVPR 2021 oral | more details | n/a | 85.6 | 79.8 | 91.5 |
HIK-CCSLT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.0 | 78.8 | 91.2 | ||
BFNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BFNet | Jiaqi Fan | more details | n/a | 71.3 | 59.1 | 83.4 | |
Hai Wang+Yingfeng Cai-research group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.00164 | 84.6 | 78.0 | 91.2 | ||
Jiangsu_university_Intelligent_Drive_AI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.6 | 78.0 | 91.2 | ||
MCANet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Anonymous | more details | n/a | 72.8 | 60.1 | 85.6 | ||
UFONet (half-resolution) | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | UFO RPN: A Region Proposal Network for Ultra Fast Object Detection | Wenkai Li, Andy Song | The 34th Australasian Joint Conference on Artificial Intelligence | more details | n/a | 56.7 | 37.4 | 76.0 |
SCMNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 68.0 | 53.6 | 82.3 | ||
FsaNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | FsaNet: Frequency Self-attention for Semantic Segmentation | Anonymous | more details | n/a | 79.5 | 70.2 | 88.8 | |
SCMNet coarse | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | SCMNet: Shared Context Mining Network for Real-time Semantic Segmentation | Tanmay Singha; Moritz Bergemann; Duc-Son Pham; Aneesh Krishna | 2021 Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 69.1 | 55.1 | 83.2 |
SAIT SeeThroughNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.7 | 79.8 | 91.6 | ||
JSU_IDT_group | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.8 | 78.4 | 91.1 | ||
DLA_HRNet48OCR_MSFLIP_000 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | This set of predictions is from DLA (differentiable lattice assignment network) with "HRNet48+OCR-Head" as the base segmentation model. The model is first trained on coarse data, and then trained on the fine-annotated train/val sets. A multi-scale (0.5, 0.75, 1.0, 1.25, 1.5, 1.75) and flip scheme is adopted during inference. more details | n/a | 84.5 | 77.8 | 91.2 | |
MYBank-AIoT | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.8 | 80.1 | 91.4 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 82.7 | 77.0 | 88.4 |
LeapAI | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | Using advanced AI techniques. more details | n/a | 84.2 | 77.1 | 91.3 | ||
adlab_iiau_ldz | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | meticulous-caiman_2022.05.01_03.32 more details | n/a | 85.1 | 78.7 | 91.4 | ||
SFRSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | A Real-Time Semantic Segmentation Model Using Iteratively Shared Features In Multiple Sub-Encoders | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | Pattern Recognition | more details | n/a | 65.8 | 51.5 | 80.2 |
PIDNet-S | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | PIDNet: A Real-time Semantic Segmentation Network Inspired from PID Controller | Anonymous | more details | 0.0107 | 77.9 | 68.5 | 87.3 | |
Vision Transformer Adapter for Dense Predictions | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Vision Transformer Adapter for Dense Predictions | Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, Yu Qiao | ViT-Adapter-L, BEiT pre-train, multi-scale testing more details | n/a | 83.4 | 76.6 | 90.2 | |
SSNet | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 78.2 | 69.2 | 87.2 | ||
SDBNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | SDBNet: Lightweight Real-time Semantic Segmentation Using Short-term Dense Bottleneck | Tanmay Singha, Duc-Son Pham, Aneesh Krishna | 2022 International Conference on Digital Image Computing: Techniques and Applications (DICTA) | more details | n/a | 69.4 | 55.7 | 83.1 |
MeiTuan-BaseModel | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.4 | 79.7 | 91.0 | ||
SDBNetV2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Improved Short-term Dense Bottleneck network for efficient scene analysis | Tanmay Singha; Duc-Son Pham; Aneesh Krishna | Computer Vision and Image Understanding | more details | n/a | 70.6 | 57.3 | 83.8 |
mogo_semantic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.2 | 79.1 | 91.3 | ||
UDSSEG_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | UDSSEG_RVC more details | n/a | 77.1 | 66.2 | 88.0 | ||
MIX6D_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MIX6D_RVC more details | n/a | 74.6 | 65.0 | 84.2 | ||
FAN_NV_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hybrid-Base + Segformer more details | n/a | 78.3 | 68.1 | 88.5 | ||
UNIV_CNP_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | RVC 2022 more details | n/a | 70.2 | 58.8 | 81.5 | ||
AntGroup-AI-VisionAlgo | yes | yes | yes | yes | no | no | no | no | yes | yes | no | no | no | no | Anonymous | AntGroup AI vision algo more details | n/a | 84.5 | 78.1 | 90.8 | ||
InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions | Wenhai Wang, Jifeng Dai, Zhe Chen, Zhenhang Huang, Zhiqi Li, Xizhou Zhu, Xiaowei Hu, Tong Lu, Lewei Lu, Hongsheng Li, Xiaogang Wang, Yu Qiao | CVPR 2023 | We use Mask2Former as the segmentation framework, and initialize our InternImage-H model with the pre-trained weights on the 427M joint dataset of public Laion-400M, YFCC-15M, and CC12M. Following common practices, we first pre-train on Mapillary Vistas for 80k iterations, and then fine-tune on Cityscapes for 80k iterations. The crop size is set to 1024×1024 in this experiment. As a result, our InternImage-H achieves 87.0 multi-scale mIoU on the validation set, and 86.1 multi-scale mIoU on the test set. more details | n/a | 85.0 | 79.7 | 90.2 |
Dense Prediction with Attentive Feature aggregation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | yes | yes | Dense Prediction with Attentive Feature Aggregation | Yung-Hsu Yang, Thomas E. Huang, Min Sun, Samuel Rota Bulò, Peter Kontschieder, Fisher Yu | WACV 2023 | We propose Attentive Feature Aggregation (AFA) to exploit both spatial and channel information for semantic segmentation and boundary detection. more details | n/a | 82.2 | 74.6 | 89.8 |
W3_FAFM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Junyan Yang, Qian Xu, Lei La | Team: BOSCH-XC-DX-WAVE3 more details | 0.029309 | 80.3 | 72.1 | 88.6 | ||
HRN | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Hierarchical residual network more details | 45.0 | 77.2 | 67.5 | 86.9 | ||
HRN+DCNv2_for_DOAS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRN with DCNv2 for DOAS in paper "Dynamic Obstacle Avoidance System based on Rapid Instance Segmentation Network" more details | 0.032 | 81.0 | 73.3 | 88.7 | ||
GEELY-ATC-SEG | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 85.4 | 80.0 | 90.8 | ||
PMSDSEN | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Parallel Multi-Scale Detail and Semantic Encoding Network for Lightweight Semantic Segmentation | Xiao Liu, Xiuya Shi, Lufei Chen, Linbo Qing, Chao Ren | ACM International Conference on Multimedia 2023 | MM '23: Proceedings of the 31st ACM International Conference on Multimedia more details | n/a | 75.7 | 65.3 | 86.1 |
ECFD | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | backbone: ConvNext-Large more details | n/a | 82.1 | 74.9 | 89.4 | ||
DWGSeg-L75 | yes | yes | no | no | no | no | no | no | no | no | 1.3 | 1.3 | no | no | Anonymous | more details | 0.00755 | 75.0 | 63.8 | 86.2 | ||
VLTSeg | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | VLTSeg: Simple Transfer of CLIP-Based Vision-Language Representations for Domain Generalized Semantic Segmentation | Christoph Hümmer, Manuel Schwonberg, Liangwei Zhou, Hu Cao, Alois Knoll, Hanno Gottschalk | more details | n/a | 85.3 | 80.0 | 90.7 | |
CGMANet_v1 | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene | Saquib Mazhar | Context Guided Multi-scale Attention for Real-time Semantic Segmentation of Road-scene more details | n/a | 74.9 | 64.9 | 84.8 | |
SERNet-Former_v2 | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 81.0 | 73.5 | 88.5 |
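Several entries above (e.g., DLA_HRNet48OCR_MSFLIP_000 and ViT-Adapter) report multi-scale and flip testing. As a rough illustration only, the following minimal sketch shows how such test-time augmentation is typically averaged; the helper names (`nn_resize`, `predict_probs`), the nearest-neighbour resizing, and the plain probability averaging are assumptions for illustration and do not reproduce any submission's exact inference code.

```python
import numpy as np

def nn_resize(arr, out_h, out_w):
    """Nearest-neighbour resize for (H, W) or (H, W, C) numpy arrays."""
    h, w = arr.shape[:2]
    rows = np.minimum(np.arange(out_h) * h // out_h, h - 1)
    cols = np.minimum(np.arange(out_w) * w // out_w, w - 1)
    return arr[rows][:, cols]

def multiscale_flip_inference(image, predict_probs,
                              scales=(0.5, 0.75, 1.0, 1.25, 1.5, 1.75)):
    """Average per-pixel class probabilities over rescaled and flipped copies.

    `predict_probs` is a stand-in for a single forward pass of a segmentation
    network: given an (h, w, 3) image it returns an (h, w, num_classes) array.
    """
    h, w = image.shape[:2]
    acc = None
    for s in scales:
        for flip in (False, True):
            img = nn_resize(image, max(1, int(round(h * s))), max(1, int(round(w * s))))
            if flip:
                img = img[:, ::-1]
            probs = predict_probs(img)
            if flip:
                probs = probs[:, ::-1]            # undo the horizontal flip
            probs = nn_resize(probs, h, w)        # map back to the original resolution
            acc = probs if acc is None else acc + probs
    return acc.argmax(axis=-1)                    # final per-pixel label map

# Example with a dummy 2-class "network" that scores pixels by their red channel:
dummy = lambda im: np.stack([im[..., 0], 255 - im[..., 0]], axis=-1).astype(float)
labels = multiscale_flip_inference(np.random.randint(0, 255, (64, 128, 3)), dummy)
```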
Instance-Level Semantic Labeling Task
AP on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
R-CNN + MCG convex hull | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | The Cityscapes Dataset for Semantic Urban Scene Understanding | M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele | CVPR 2016 | We compute MCG object proposals [1] and use their convex hulls as instance candidates. These proposals are scored by a Fast R-CNN detector [2]. [1] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marqués, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014. [2] R. Girshick. Fast R-CNN. In ICCV, 2015. more details | 60.0 | 4.6 | 1.3 | 0.6 | 10.5 | 6.1 | 9.7 | 5.9 | 1.7 | 0.5 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 8.9 | 12.5 | 11.7 | 22.5 | 3.3 | 5.9 | 3.2 | 6.9 | 5.1 |
Instance-level Segmentation of Vehicles by Deep Contours | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance-level Segmentation of Vehicles by Deep Contours | Jan van den Brand, Matthias Ochs and Rudolf Mester | Asian Conference on Computer Vision - Workshop on Computer Vision Technologies for Smart Vehicle | Our method uses the fully convolutional network (FCN) for semantic labeling and for estimating the boundary of each vehicle. Even though a contour is in general a one pixel wide structure which cannot be directly learned by a CNN, our network addresses this by providing areas around the contours. Based on these areas, we separate the individual vehicle instances. more details | 0.2 | 2.3 | 0.0 | 0.0 | 18.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
Boundary-aware Instance Segmentation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Boundary-aware Instance Segmentation | Zeeshan Hayder, Xuming He, Mathieu Salzmann | CVPR 2017 | End-to-end model for instance segmentation using VGG16 network Previously listed as "Shape-Aware Instance Segmentation" more details | n/a | 17.4 | 14.6 | 12.9 | 35.7 | 16.0 | 23.2 | 19.0 | 10.3 | 7.8 |
RecAttend | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | n/a | 9.5 | 9.2 | 3.1 | 27.5 | 8.0 | 12.1 | 7.9 | 4.8 | 3.3 | ||
Joint Graph Decomposition and Node Labeling | yes | yes | no | no | no | no | no | no | no | no | 8 | 8 | no | no | Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications | Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, Bjoern Andres | Computer Vision and Pattern Recognition (CVPR) 2017 | more details | n/a | 9.8 | 6.5 | 9.3 | 23.1 | 6.7 | 10.9 | 10.3 | 6.8 | 4.6 |
InstanceCut | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | InstanceCut: from Edges to Instances with MultiCut | A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy, C. Rother | Computer Vision and Pattern Recognition (CVPR) 2017 | InstanceCut represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard CNN for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. more details | n/a | 13.0 | 10.0 | 8.0 | 23.7 | 14.0 | 19.5 | 15.2 | 9.3 | 4.7 |
Semantic Instance Segmentation with a Discriminative Loss Function | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Instance Segmentation with a Discriminative Loss Function | Bert De Brabandere, Davy Neven, Luc Van Gool | Deep Learning for Robotic Vision, workshop at CVPR 2017 | This method uses a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step. The loss function encourages the network to map each pixel to a point in feature space so that pixels belonging to the same instance lie close together while different instances are separated by a wide margin. Previously listed as "PPLoss". more details | n/a | 17.5 | 13.5 | 16.2 | 24.4 | 16.8 | 23.9 | 19.2 | 15.2 | 10.7 |
SGN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | SGN: Sequential Grouping Networks for Instance Segmentation | Shu Liu, Jiaya Jia, Sanja Fidler, Raquel Urtasun | ICCV 2017 | Instance segmentation using a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. more details | n/a | 25.0 | 21.8 | 20.1 | 39.4 | 24.8 | 33.2 | 30.8 | 17.7 | 12.4 |
Mask R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes [fine-only] + COCO more details | n/a | 32.0 | 34.8 | 27.0 | 49.1 | 30.1 | 40.9 | 30.9 | 24.1 | 18.7 | |
Mask R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes fine-only more details | n/a | 26.2 | 30.5 | 23.7 | 46.9 | 22.8 | 32.2 | 18.6 | 19.1 | 16.0 | |
Deep Watershed Transformation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Deep Watershed Transformation for Instance Segmentation | Min Bai and Raquel Urtasun | CVPR 2017 | Instance segmentation using a watershed transformation inspired CNN. The input RGB image is augmented using the semantic segmentation from the recent PSPNet by H. Zhao et al. Previously named "DWT". more details | n/a | 19.4 | 15.5 | 14.1 | 31.5 | 22.5 | 27.0 | 22.9 | 13.9 | 8.0 |
Foveal Vision for Instance Segmentation of Road Images | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Foveal Vision for Instance Segmentation of Road Images | Benedikt Ortelt, Christian Herrmann, Dieter Willersinn, Jürgen Beyerer | VISAPP 2018 | Directly based on 'Pixel-level Encoding for Instance Segmentation'. Adds an improved angular distance measure and a foveal concept to better address small objects at the vanishing point of the road. more details | n/a | 12.5 | 13.4 | 11.4 | 24.5 | 9.4 | 14.5 | 12.2 | 8.0 | 6.7 |
SegNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.5 | 29.5 | 29.9 | 23.4 | 43.4 | 29.8 | 41.0 | 33.3 | 18.7 | 16.7 | ||
DCME | yes | yes | no | no | no | no | no | no | no | no | 3 | 3 | no | no | Distance to Center of Mass Encoding for Instance Segmentation | Thomio Watanabe and Denis Wolf | 2018 21st International Conference on Intelligent Transportation Systems (ITSC) | more details | n/a | 3.8 | 1.8 | 0.7 | 15.5 | 2.0 | 4.3 | 4.6 | 0.9 | 0.3 |
RRL | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 29.7 | 33.8 | 26.9 | 51.9 | 24.2 | 35.6 | 25.3 | 20.9 | 18.7 | ||
PANet [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 31.8 | 36.8 | 30.4 | 54.8 | 27.0 | 36.3 | 25.5 | 22.6 | 20.8 |
PANet [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only + COCO, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 36.4 | 41.5 | 33.6 | 58.2 | 31.8 | 45.3 | 28.7 | 28.2 | 24.1 |
LCIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 15.1 | 15.1 | 14.8 | 23.7 | 12.9 | 16.8 | 15.4 | 12.4 | 9.3 | ||
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Anurag Arnab and Philip H. S. Torr | Computer Vision and Pattern Recognition (CVPR) 2017 | We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label (this has recently been termed "Panoptic Segmentation"). Our method is based on an initial semantic segmentation module which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. As a result, it reasons about occlusions (unlike some related work, a single pixel cannot belong to multiple instances). more details | n/a | 23.4 | 21.0 | 18.4 | 31.7 | 22.8 | 31.1 | 31.0 | 19.6 | 11.7 |
PolygonRNN++ | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Annotation of Segmentation Datasets with Polygon-RNN++ | D. Acuna, H. Ling, A. Kar, and S. Fidler | CVPR 2018 | more details | n/a | 25.5 | 29.4 | 21.8 | 48.3 | 21.1 | 32.3 | 23.7 | 13.6 | 13.6 |
GMIS: Graph Merge for Instance Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Yiding Liu, Siyu Yang, Bin Li, Wengang Zhou, Jizheng Xu, Houqiang Li, Yan Lu | more details | n/a | 27.6 | 29.3 | 24.1 | 42.7 | 25.4 | 37.2 | 32.9 | 17.6 | 11.9 | ||
TCnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | TCnet more details | n/a | 32.6 | 37.3 | 29.9 | 51.5 | 28.7 | 41.1 | 28.7 | 24.9 | 19.1 | ||
MaskRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MaskRCNN Instance segmentation baseline for ROB challenge using default parameters from Matterport's implementation of Mask RCNN https://github.com/matterport/Mask_RCNN more details | n/a | 10.2 | 19.1 | 10.5 | 34.1 | 2.7 | 7.2 | 0.0 | 8.0 | 0.0 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | CVPR 2018 | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 21.6 | 19.2 | 21.4 | 36.6 | 18.8 | 26.8 | 15.9 | 19.4 | 14.5 |
Deep Coloring | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous ECCV submission #2955 more details | n/a | 24.9 | 22.3 | 21.3 | 40.9 | 23.1 | 33.6 | 28.3 | 17.8 | 12.3 | ||
MRCNN_VSCMLab_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Mask R-CNN + FPN with a COCO pre-trained model. Multi-scale training with the short edge in [800, 1024]; inference with short edge 800. ScanNet is randomly subsampled to a size close to Cityscapes. Optimizer: Adam; learning rate linearly warmed up from 1e-4 to 1e-3, then decreased by a factor of 0.1 at epochs 200 and 300. Epochs: 400; steps per epoch: 500; roi_per_im: 512 more details | 1.0 | 14.8 | 15.7 | 11.5 | 36.7 | 13.6 | 18.7 | 14.3 | 8.3 | 0.0 | ||
BAMRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 0.3 | 0.0 | 0.0 | 1.5 | 0.2 | 0.3 | 0.0 | 0.0 | 0.0 | ||
NL_ROI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Non-local ROI on Mask R-CNN more details | n/a | 24.0 | 28.3 | 20.6 | 45.5 | 22.3 | 30.6 | 17.3 | 15.1 | 12.1 | ||
RUSH_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 32.1 | 35.6 | 30.9 | 50.3 | 30.8 | 38.0 | 25.2 | 25.7 | 20.4 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin Shengtao, Yi Zhihao, Liu Wei [Our team name is firefly] | Mask R-CNN segmentation baseline for the Bosch autodrive challenge, using Matterport's implementation of Mask R-CNN https://github.com/matterport/Mask_RCNN 55k iterations, default parameters (backbone: ResNet-101), 19 hours of training more details | n/a | 12.8 | 14.3 | 9.4 | 25.2 | 10.9 | 15.2 | 13.6 | 7.6 | 6.3 | ||
NV-ADLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 35.3 | 39.5 | 29.5 | 56.3 | 34.2 | 44.7 | 30.3 | 27.1 | 21.1 | ||
Sogou_MM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global Concatenating Feature Enhancement for Instance Segmentation | Hang Yang, Xiaozhe Xin, Wenwen Yang, Bin Li | Global Concatenating Feature Enhancement for Instance Segmentation more details | n/a | 37.2 | 39.1 | 32.0 | 54.6 | 37.9 | 47.7 | 36.8 | 27.6 | 21.5 | |
Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | Davy Neven, Bert De Brabandere, Marc Proesmans and Luc Van Gool | CVPR 2019 | Fine only - ERFNet backbone more details | 0.1 | 27.7 | 34.5 | 26.1 | 52.4 | 21.7 | 31.2 | 16.4 | 20.1 | 18.9 |
Instance Annotation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance Segmentation as Image Segmentation Annotation | Thomio Watanabe and Denis F. Wolf | 2019 IEEE Intelligent Vehicles Symposium (IV) | Based on DCME more details | 4.416 | 7.7 | 6.7 | 3.1 | 24.1 | 6.0 | 9.8 | 6.4 | 3.6 | 2.1 |
NJUST | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Ang Li, Chongyang Zhang | Mask R-CNN based on FPN enhancement, mask re-scoring, etc. Only a single model (SE-ResNeXt-152) with COCO pre-training is used. more details | n/a | 38.9 | 44.0 | 35.2 | 57.9 | 36.2 | 48.7 | 35.1 | 30.5 | 23.9 | ||
BshapeNet+ [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+, ResNet-50-FPN as base model, Cityscapes [fine-only] more details | n/a | 27.3 | 29.7 | 23.4 | 46.7 | 26.1 | 33.3 | 24.8 | 20.3 | 14.1 | |
SSAP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SSAP: Single-Shot Instance Segmentation With Affinity Pyramid | Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang | ICCV 2019 | SSAP, ResNet-101, Cityscapes fine-only more details | n/a | 32.7 | 35.4 | 25.5 | 55.9 | 33.2 | 43.9 | 31.9 | 19.5 | 16.2 |
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 9.2 | 8.8 | 3.2 | 24.0 | 10.0 | 13.2 | 8.5 | 4.4 | 1.5 |
UPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | UPSNet: A Unified Panoptic Segmentation Network | Yuwen Xiong, Renjie Liao, Hengshuang Zhao, Rui Hu, Min Bai, Ersin Yumer, Raquel Urtasun | CVPR 2019 | more details | 0.227 | 33.0 | 35.9 | 27.4 | 51.9 | 31.8 | 43.1 | 31.4 | 23.8 | 19.1 |
Sem2Ins | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous NeurIPS19 submission #4671 more details | n/a | 19.3 | 17.7 | 17.4 | 27.2 | 21.1 | 26.2 | 20.5 | 14.1 | 10.1 | ||
BshapeNet+ [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+ single model, ResNet-50-FPN as base model, Cityscapes [fine-only + COCO] more details | n/a | 32.9 | 36.6 | 24.8 | 50.4 | 33.7 | 41.0 | 33.7 | 25.4 | 17.8 | |
AdaptIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point (x, y), it generates a mask for the object located at (x, y). The network adapts to the input point with a help of AdaIN layers, thus producing different masks for different objects on the same image. AdaptIS generates pixel-accurate object masks, therefore it accurately segments objects of complex shape or severely occluded ones. more details | n/a | 32.5 | 31.4 | 29.1 | 49.8 | 31.6 | 41.7 | 39.4 | 24.7 | 12.1 | ||
AInnoSegmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Faen Zhang, Jiahong Wu, Haotian Cao, Zhizheng Yang, Jianfei Song, Ze Huang, Jiashui Huang, Shenglan Ben | AInnoSegmentation uses SE-ResNet-152 as the backbone and an FPN to extract multi-level features, a self-developed method to combine the multi-level features, COCO for pre-training, and so on more details | n/a | 39.5 | 42.3 | 32.6 | 57.6 | 40.0 | 51.3 | 39.8 | 30.6 | 22.1 | ||
iFLYTEK-CV | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 42.3 | 44.1 | 36.9 | 59.2 | 45.4 | 52.8 | 43.2 | 31.7 | 24.6 | ||
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center (a sketch of this grouping scheme follows the table). This submission exploits only Cityscapes fine annotations. Compared to the previous submission, this entry fixes a minor inference bug for instance segmentation (the trained model is unchanged). more details | n/a | 34.6 | 34.3 | 28.9 | 55.1 | 32.8 | 41.5 | 36.6 | 26.3 | 21.6 | |
snake | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Deep Snake for Real-Time Instance Segmentation | Sida Peng, Wen Jiang, Huaijin Pi, Xiuli Li, Hujun Bao, Xiaowei Zhou | CVPR 2020 | more details | 0.217 | 31.7 | 37.2 | 27.0 | 56.0 | 29.5 | 40.5 | 28.2 | 19.0 | 16.4 |
PolyTransform | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | PolyTransform: Deep Polygon Transformer for Instance Segmentation | Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, Raquel Urtasun | more details | n/a | 40.1 | 42.4 | 34.8 | 58.5 | 39.8 | 50.0 | 41.3 | 30.9 | 23.4 | |
StixelPointNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Learning Stixel-Based Instance Segmentation | Monty Santarossa, Lukas Schneider, Claudius Zelenka, Lars Schmarje, Reinhard Koch, Uwe Franke | IV 2021 | An adapted version of the PointNet is trained on Stixels as input for instance segmentation. more details | 0.035 | 8.5 | 9.0 | 7.3 | 15.8 | 12.8 | 16.3 | 0.0 | 3.5 | 3.5 |
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 34.0 | 32.3 | 28.2 | 52.6 | 32.6 | 41.8 | 38.1 | 25.9 | 20.5 |
PolyTransform + SegFix | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | openseg | We simply apply a novel post-processing scheme based on the PolyTransform (thanks to the authors of PolyTransform for providing their segmentation results). The performance of the baseline PolyTransform is 40.1% and our method achieves 41.2%. Besides, our method also could improve the results of PointRend and PANet by more than 1.0% without any re-training or fine-tuning the segmentation models. more details | n/a | 41.2 | 44.3 | 35.9 | 60.5 | 40.5 | 51.2 | 41.6 | 31.7 | 24.1 | |
GAIS-Net | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Geometry-Aware Instance Segmentation with Disparity Maps | Cho-Ying Wu, Xiaoyan Hu, Michael Happold, Qiangeng Xu, Ulrich Neumann | Scalability in Autonomous Driving, workshop at CVPR 2020 | Geometry-Aware Instance Segmentation with Disparity Maps more details | n/a | 32.3 | 36.0 | 29.0 | 52.8 | 29.7 | 39.8 | 28.9 | 23.3 | 18.5 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 38.1 | 34.7 | 30.4 | 55.1 | 40.9 | 49.7 | 43.5 | 29.0 | 21.7 |
LevelSet R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 33.3 | 37.0 | 29.3 | 54.6 | 30.4 | 39.4 | 30.2 | 25.5 | 20.3 |
LevelSet R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 40.0 | 43.4 | 33.9 | 59.0 | 37.6 | 49.4 | 39.4 | 32.5 | 24.9 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 33.3 | 32.0 | 27.8 | 52.4 | 30.7 | 44.2 | 34.0 | 25.7 | 19.2 |
Deep Affinity Net [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Deep Affinity Net: Instance Segmentation via Affinity | Xingqian Xu, Mangtik Chiu, Thomas Huang, Honghui Shi | A proposal-free method that uses FPN generated features and network predicted 4-neighbor affinities to reconstruct instance segments. During inference time, an efficient graph partitioning algorithm, Cascade-GAEC, is introduced to overcome the long execution time in the high-resolution graph partitioning problem. more details | n/a | 27.5 | 24.5 | 22.2 | 43.7 | 29.5 | 38.3 | 31.9 | 18.0 | 12.1 | |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated for several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 42.6 | 40.5 | 35.3 | 60.0 | 44.7 | 53.4 | 44.1 | 35.8 | 26.7 | |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 39.6 | 36.6 | 32.5 | 56.6 | 41.0 | 52.4 | 43.7 | 30.8 | 23.5 |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 40.6 | 37.9 | 32.2 | 58.3 | 44.2 | 53.5 | 39.5 | 34.4 | 25.2 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 39.8 | 43.1 | 34.8 | 59.0 | 38.1 | 49.6 | 38.9 | 29.0 | 25.7 | |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation Resnet101, pretrained on Imagenet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 22.1 | 27.1 | 18.0 | 37.5 | 26.4 | 30.4 | 9.8 | 15.8 | 11.7 |
UniDet_RVC | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 300.0 | 29.8 | 31.4 | 22.4 | 45.9 | 30.5 | 41.5 | 31.8 | 20.7 | 14.4 | ||
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 21.3 | 25.6 | 19.5 | 44.2 | 23.8 | 29.4 | 2.0 | 13.5 | 12.1 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 38.0 | 36.8 | 33.2 | 57.2 | 38.8 | 45.0 | 38.9 | 30.2 | 23.8 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 42.2 | 37.7 | 34.6 | 58.2 | 45.1 | 54.8 | 47.2 | 34.0 | 26.0 | |
PolyTransform + SegFix + BPR | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation | Chufeng Tang*, Hang Chen*, Xiao Li, Jianmin Li, Zhaoxiang Zhang, Xiaolin Hu | CVPR 2021 | Tremendous efforts have been made on instance segmentation but the mask quality is still not satisfactory. The boundaries of predicted instance masks are usually imprecise due to the low spatial resolution of feature maps and the imbalance problem caused by the extremely low proportion of boundary pixels. To address these issues, we propose a conceptually simple yet effective post-processing refinement framework to improve the boundary quality based on the results of any instance segmentation model, termed BPR. Following the idea of looking closer to segment boundaries better, we extract and refine a series of small boundary patches along the predicted instance boundaries. The refinement is accomplished by a boundary patch refinement network at higher resolution. The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on Cityscapes benchmark, especially on the boundary-aware metrics. Moreover, by applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard. more details | n/a | 42.7 | 46.0 | 37.1 | 62.8 | 41.3 | 52.7 | 43.7 | 32.6 | 25.1 |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 43.4 | 39.3 | 34.9 | 59.6 | 47.9 | 57.4 | 45.9 | 35.8 | 26.8 | |
CenterPoly | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.045 | 15.5 | 17.5 | 11.5 | 33.9 | 13.1 | 16.6 | 15.3 | 9.0 | 7.3 | ||
HRI-INST | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI-INST more details | n/a | 43.8 | 42.2 | 37.0 | 59.2 | 46.9 | 57.3 | 47.3 | 33.9 | 26.4 | ||
DH-ARI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DH-ARI more details | n/a | 44.4 | 47.8 | 39.4 | 64.9 | 44.1 | 54.2 | 40.1 | 36.5 | 28.0 | ||
HRI-TRANS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI transformer instance segmentation more details | n/a | 44.5 | 44.6 | 37.0 | 62.4 | 46.9 | 57.3 | 47.3 | 34.0 | 26.4 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 39.7 | 36.2 | 33.8 | 55.2 | 40.4 | 53.1 | 47.4 | 29.9 | 21.9 |
QueryInst-Parallel Completion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Hai Wang; Shilin Zhu; PuPu; Meng; Le; Apple; Rong | We propose a novel feature-completion network framework, QueryInst Parallel Completion. First, a global context module is introduced into the backbone network to obtain instance information. Then, a parallel semantic branch and a parallel global branch are proposed to extract the semantic and global information of the feature layer, so as to complete the ROI features. In addition, we propose a feature transfer structure, which explicitly strengthens the connection between the detection and segmentation branches, changes the gradient back-propagation path, and indirectly complements the ROI features. more details | n/a | 35.4 | 41.4 | 31.5 | 58.4 | 29.2 | 44.0 | 31.6 | 25.0 | 21.9 | ||
CenterPoly v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Real-time instance segmentation with polygons using an Intersection-over-Union loss | Katia Jodogne-del Litto, Guillaume-Alexandre Bilodeau | more details | 0.045 | 16.6 | 18.0 | 10.7 | 32.8 | 17.7 | 23.9 | 13.7 | 10.0 | 6.3 | |
Jiangsu-University-Environmental-Perception | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 40.3 | 45.3 | 35.8 | 59.6 | 35.7 | 50.5 | 36.2 | 33.8 | 25.6 |
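Several bottom-up entries in this table (the Panoptic-DeepLab, SWideRNet, and Naive-Student submissions) describe forming instances by predicting a center heatmap plus per-pixel offsets and assigning every "thing" pixel to its nearest predicted center. The sketch below illustrates only that grouping step under assumed inputs; the function name, the max-pooling-style peak selection, and the thresholds are illustrative choices, not the authors' implementation.

```python
import numpy as np

def group_instances(center_heatmap, offsets, thing_mask,
                    score_thresh=0.1, nms_kernel=7, top_k=200):
    """Assign each 'thing' pixel to its nearest predicted instance center.

    center_heatmap: (H, W) center scores in [0, 1]
    offsets:        (2, H, W) predicted (dy, dx) from each pixel to its center
    thing_mask:     (H, W) bool mask of pixels in countable ("thing") classes
    Returns an (H, W) int map: 0 = background/stuff, 1..K = instance ids.
    """
    h, w = center_heatmap.shape
    # 1. Keep local maxima of the heatmap (a crude max-pooling-style NMS).
    pad = nms_kernel // 2
    padded = np.pad(center_heatmap, pad)
    pooled = np.max([padded[dy:dy + h, dx:dx + w]
                     for dy in range(nms_kernel) for dx in range(nms_kernel)], axis=0)
    ys, xs = np.nonzero((center_heatmap == pooled) & (center_heatmap > score_thresh))
    if len(ys) == 0:
        return np.zeros((h, w), dtype=np.int32)
    keep = np.argsort(center_heatmap[ys, xs])[::-1][:top_k]
    centers = np.stack([ys[keep], xs[keep]], axis=1).astype(float)        # (K, 2)

    # 2. Each thing pixel votes for the center closest to (pixel + offset).
    #    (For illustration; a real implementation would chunk this computation.)
    py, px = np.nonzero(thing_mask)
    votes = np.stack([py + offsets[0, py, px], px + offsets[1, py, px]], axis=1)
    nearest = np.linalg.norm(votes[:, None, :] - centers[None, :, :], axis=-1).argmin(axis=1)
    instance_ids = np.zeros((h, w), dtype=np.int32)
    instance_ids[py, px] = nearest + 1
    return instance_ids
```

In the actual systems the semantic class of each instance is then taken from the semantic segmentation branch (e.g., by majority vote over the instance's pixels); that step is omitted here.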
AP 50 % on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
R-CNN + MCG convex hull | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | The Cityscapes Dataset for Semantic Urban Scene Understanding | M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele | CVPR 2016 | We compute MCG object proposals [1] and use their convex hulls as instance candidates. These proposals are scored by a Fast R-CNN detector [2]. [1] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marqués, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014. [2] R. Girshick. Fast R-CNN. In ICCV, 2015. more details | 60.0 | 12.9 | 5.6 | 3.9 | 26.0 | 13.8 | 26.3 | 15.8 | 8.6 | 3.1 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 21.1 | 31.8 | 33.8 | 37.8 | 7.6 | 12.0 | 8.5 | 20.5 | 17.2 |
Instance-level Segmentation of Vehicles by Deep Contours | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance-level Segmentation of Vehicles by Deep Contours | Jan van den Brand, Matthias Ochs and Rudolf Mester | Asian Conference on Computer Vision - Workshop on Computer Vision Technologies for Smart Vehicle | Our method uses the fully convolutional network (FCN) for semantic labeling and for estimating the boundary of each vehicle. Even though a contour is in general a one pixel wide structure which cannot be directly learned by a CNN, our network addresses this by providing areas around the contours. Based on these areas, we separate the individual vehicle instances. more details | 0.2 | 3.7 | 0.0 | 0.0 | 29.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
Boundary-aware Instance Segmentation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Boundary-aware Instance Segmentation | Zeeshan Hayder, Xuming He, Mathieu Salzmann | CVPR 2017 | End-to-end model for instance segmentation using VGG16 network Previously listed as "Shape-Aware Instance Segmentation" more details | n/a | 36.7 | 34.0 | 40.4 | 54.7 | 27.2 | 40.1 | 38.9 | 32.2 | 26.0 |
RecAttend | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | n/a | 18.9 | 21.2 | 12.7 | 41.9 | 13.9 | 20.7 | 15.5 | 14.7 | 10.5 | ||
Joint Graph Decomposition and Node Labeling | yes | yes | no | no | no | no | no | no | no | no | 8 | 8 | no | no | Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications | Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, Bjoern Andres | Computer Vision and Pattern Recognition (CVPR) 2017 | more details | n/a | 23.2 | 18.4 | 29.5 | 38.3 | 16.1 | 21.5 | 24.5 | 21.4 | 16.0 |
InstanceCut | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | InstanceCut: from Edges to Instances with MultiCut | A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy, C. Rother | Computer Vision and Pattern Recognition (CVPR) 2017 | InstanceCut represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard CNN for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. more details | n/a | 27.9 | 28.0 | 26.8 | 44.8 | 22.2 | 30.4 | 30.1 | 25.1 | 15.7 |
Semantic Instance Segmentation with a Discriminative Loss Function | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Instance Segmentation with a Discriminative Loss Function | Bert De Brabandere, Davy Neven, Luc Van Gool | Deep Learning for Robotic Vision, workshop at CVPR 2017 | This method uses a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step. The loss function encourages the network to map each pixel to a point in feature space so that pixels belonging to the same instance lie close together while different instances are separated by a wide margin. Previously listed as "PPLoss". more details | n/a | 35.9 | 32.0 | 40.7 | 43.2 | 28.5 | 39.1 | 35.7 | 37.9 | 29.8 |
SGN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | SGN: Sequential Grouping Networks for Instance Segmentation | Shu Liu, Jiaya Jia, Sanja Fidler, Raquel Urtasun | ICCV 2017 | Instance segmentation using a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. more details | n/a | 44.9 | 45.2 | 47.7 | 59.7 | 36.3 | 45.4 | 53.7 | 39.5 | 31.8 |
Mask R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes [fine-only] + COCO more details | n/a | 58.1 | 67.1 | 65.4 | 71.8 | 42.3 | 61.0 | 53.9 | 54.3 | 49.0 | |
Mask R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes fine-only more details | n/a | 49.9 | 60.7 | 59.5 | 68.3 | 33.1 | 48.2 | 38.9 | 46.5 | 43.9 | |
Deep Watershed Transformation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Deep Watershed Transformation for Instance Segmentation | Min Bai and Raquel Urtasun | CVPR 2017 | Instance segmentation using a watershed transformation inspired CNN. The input RGB image is augmented using the semantic segmentation from the recent PSPNet by H. Zhao et al. Previously named "DWT". more details | n/a | 35.3 | 34.0 | 36.9 | 48.5 | 31.3 | 40.1 | 36.2 | 32.9 | 22.9 |
Foveal Vision for Instance Segmentation of Road Images | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Foveal Vision for Instance Segmentation of Road Images | Benedikt Ortelt, Christian Herrmann, Dieter Willersinn, Jürgen Beyerer | VISAPP 2018 | Directly based on 'Pixel-level Encoding for Instance Segmentation'. Adds an improved angular distance measure and a foveal concept to better address small objects at the vanishing point of the road. more details | n/a | 25.2 | 31.5 | 29.7 | 40.0 | 16.0 | 23.8 | 21.7 | 19.2 | 19.9 |
SegNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.5 | 55.6 | 57.9 | 56.5 | 67.5 | 46.2 | 65.1 | 63.0 | 43.9 | 44.8 | ||
DCME | yes | yes | no | no | no | no | no | no | no | no | 3 | 3 | no | no | Distance to Center of Mass Encoding for Instance Segmentation | Thomio Watanabe and Denis Wolf | 2018 21st International Conference on Intelligent Transportation Systems (ITSC) | more details | n/a | 7.7 | 5.9 | 3.3 | 25.6 | 4.0 | 8.3 | 10.0 | 3.4 | 1.4 |
RRL | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 56.1 | 67.1 | 63.7 | 78.4 | 35.8 | 54.3 | 49.6 | 50.9 | 48.8 | ||
PANet [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 57.1 | 68.2 | 66.3 | 78.5 | 38.7 | 55.2 | 48.5 | 51.9 | 49.9 |
PANet [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only + COCO, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 63.1 | 74.4 | 71.5 | 83.3 | 43.0 | 65.2 | 50.9 | 59.6 | 57.3 |
LCIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 30.8 | 33.3 | 37.4 | 42.5 | 21.9 | 27.6 | 27.9 | 32.0 | 23.9 | ||
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Anurag Arnab and Philip H. S. Torr | Computer Vision and Pattern Recognition (CVPR) 2017 | We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label (this has recently been termed "Panoptic Segmentation"). Our method is based on an initial semantic segmentation module which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. As a result, it reasons about occlusions (unlike some related work, a single pixel cannot belong to multiple instances). more details | n/a | 45.2 | 46.8 | 48.2 | 55.8 | 33.4 | 45.5 | 53.7 | 44.9 | 33.0 |
PolygonRNN++ | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Annotation of Segmentation Datasets with Polygon-RNN++ | D. Acuna, H. Ling, A. Kar, and S. Fidler | CVPR 2018 | more details | n/a | 45.5 | 55.0 | 49.3 | 70.0 | 29.7 | 47.5 | 41.4 | 34.1 | 36.8 |
GMIS: Graph Merge for Instance Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Yiding Liu, Siyu Yang, Bin Li, Wengang Zhou, Jizheng Xu, Houqiang Li, Yan Lu | more details | n/a | 44.6 | 50.6 | 47.4 | 56.9 | 33.9 | 47.0 | 52.8 | 39.2 | 29.2 | ||
TCnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | TCnet more details | n/a | 59.0 | 70.0 | 69.2 | 76.7 | 42.0 | 61.1 | 48.9 | 55.2 | 49.1 | ||
MaskRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MaskRCNN Instance segmentation baseline for ROB challenge using default parameters from Matterport's implementation of Mask RCNN https://github.com/matterport/Mask_RCNN more details | n/a | 25.2 | 50.0 | 41.8 | 60.2 | 7.3 | 15.2 | 0.0 | 27.4 | 0.0 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | CVPR 2018 | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 39.0 | 38.1 | 46.3 | 54.8 | 28.4 | 40.8 | 25.0 | 42.2 | 36.5 |
Deep Coloring | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous ECCV submission #2955 more details | n/a | 46.2 | 47.7 | 49.2 | 63.5 | 34.5 | 46.3 | 52.9 | 42.0 | 33.1 | ||
MRCNN_VSCMLab_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MaskRCNN+FPN with pre-trained COCO model. ms-training with short edge [800, 1024], inference with short edge size 800. ScanNet randomly subsampled to a size close to Cityscapes. Optimizer: Adam; learning rate: from 1e-4 to 1e-3 with linear warm-up schedule, decreased by a factor of 0.1 at epochs 200 and 300; epochs: 400; steps per epoch: 500; roi_per_im: 512 more details | 1.0 | 29.5 | 37.7 | 38.1 | 57.7 | 21.1 | 29.4 | 26.6 | 25.1 | 0.0 | |
BAMRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 0.9 | 0.0 | 0.0 | 5.3 | 0.7 | 0.9 | 0.0 | 0.2 | 0.0 | ||
NL_ROI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Non-local ROI on Mask R-CNN more details | n/a | 45.8 | 58.1 | 54.7 | 70.0 | 32.2 | 46.3 | 33.1 | 36.2 | 35.6 | ||
RUSH_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 55.5 | 62.9 | 64.9 | 71.9 | 42.4 | 53.8 | 45.9 | 53.9 | 48.2 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name is firefly] | MaskRCNN segmentation baseline for the Bosch autodrive challenge, using Matterport's implementation of Mask RCNN https://github.com/matterport/Mask_RCNN 55k iterations, default parameters (backbone: ResNet-101), 19 hours for training more details | n/a | 28.0 | 34.2 | 31.8 | 46.9 | 16.8 | 25.1 | 26.1 | 22.6 | 20.9 | |
NV-ADLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 61.5 | 73.3 | 67.7 | 81.2 | 46.4 | 61.9 | 52.6 | 56.8 | 52.2 | ||
Sogou_MM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global Concatenating Feature Enhancement for Instance Segmentation | Hang Yang, Xiaozhe Xin, Wenwen Yang, Bin Li | Global Concatenating Feature Enhancement for Instance Segmentation more details | n/a | 64.5 | 73.4 | 70.8 | 80.3 | 51.7 | 66.4 | 60.7 | 58.6 | 54.0 | |
Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | Davy Neven, Bert De Brabandere, Marc Proesmans and Luc Van Gool | CVPR 2019 | Fine only - ERFNet backbone more details | 0.1 | 50.9 | 65.1 | 58.8 | 75.3 | 33.1 | 45.2 | 32.4 | 48.4 | 48.8 |
Instance Annotation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance Segmentation as Image Segmentation Annotation | Thomio Watanabe and Denis F. Wolf | 2019 IEEE Intelligent Vehicles Symposium (IV) | Based on DCME more details | 4.416 | 14.9 | 17.1 | 8.8 | 38.1 | 10.7 | 15.1 | 12.7 | 10.7 | 6.5 |
NJUST | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Ang Li, Chongyang Zhang | Mask R-CNN based on FPN enhancement and Mask Rescore, etc. Only a single model, SE-ResNext-152 with COCO pre-training, is used. more details | n/a | 64.1 | 76.0 | 70.5 | 81.1 | 48.1 | 65.5 | 57.8 | 58.4 | 55.7 | |
BshapeNet+ [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+, ResNet-50-FPN as base model, Cityscapes [fine-only] more details | n/a | 50.4 | 57.8 | 57.2 | 68.7 | 37.2 | 48.0 | 48.6 | 46.5 | 39.3 | |
SSAP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SSAP: Single-Shot Instance Segmentation With Affinity Pyramid | Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang | ICCV 2019 | SSAP, ResNet-101, Cityscapes fine-only more details | n/a | 51.8 | 62.5 | 52.3 | 75.7 | 41.8 | 55.4 | 51.8 | 38.2 | 36.4 |
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 16.8 | 22.0 | 11.7 | 35.4 | 13.4 | 19.3 | 16.2 | 11.4 | 5.4 |
UPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | UPSNet: A Unified Panoptic Segmentation Network | Yuwen Xiong, Renjie Liao, Hengshuang Zhao, Rui Hu, Min Bai, Ersin Yumer, Raquel Urtasun | CVPR 2019 | more details | 0.227 | 59.6 | 69.1 | 66.4 | 76.9 | 44.6 | 62.4 | 54.6 | 53.9 | 49.2 |
Sem2Ins | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous NeurIPS19 submission #4671 more details | n/a | 36.4 | 39.9 | 42.7 | 46.3 | 29.9 | 37.1 | 35.2 | 32.2 | 27.7 | ||
BshapeNet+ [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+ single model, ResNet-50-FPN as base model, Cityscapes [fine-only + COCO] more details | n/a | 58.8 | 70.7 | 63.1 | 75.1 | 46.7 | 57.5 | 53.0 | 56.6 | 47.9 | |
AdaptIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point (x, y), it generates a mask for the object located at (x, y). The network adapts to the input point with the help of AdaIN layers, thus producing different masks for different objects in the same image. AdaptIS generates pixel-accurate object masks and therefore accurately segments objects of complex shape or severely occluded ones. more details | n/a | 52.5 | 59.5 | 56.4 | 75.1 | 39.0 | 52.8 | 56.6 | 47.5 | 33.2 | |
AInnoSegmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Faen Zhang, Jiahong Wu, Haotian Cao, Zhizheng Yang, Jianfei Song, Ze Huang, Jiashui Huang, Shenglan Ben | AInnoSegmentation uses SE-ResNet-152 as the backbone and an FPN model to extract multi-level features, a self-developed method to combine the multi-level features, and the COCO dataset to pre-train the model, among other techniques. more details | n/a | 66.0 | 75.6 | 69.4 | 83.3 | 53.7 | 70.0 | 62.0 | 59.8 | 53.9 | |
iFLYTEK-CV | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 71.1 | 79.4 | 76.1 | 85.2 | 61.0 | 73.2 | 70.8 | 63.8 | 59.3 | ||
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. This entry fixes a minor inference bug (i.e., same trained model) for instance segmentation, compared to the previous submission. more details | n/a | 57.3 | 63.9 | 61.1 | 77.9 | 42.0 | 53.4 | 56.1 | 53.3 | 50.5 | |
snake | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Deep Snake for Real-Time Instance Segmentation | Sida Peng, Wen Jiang, Huaijin Pi, Xiuli Li, Hujun Bao, Xiaowei Zhou | CVPR 2020 | more details | 0.217 | 58.4 | 71.7 | 66.0 | 81.5 | 41.2 | 58.8 | 51.4 | 50.4 | 46.6 |
PolyTransform | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | PolyTransform: Deep Polygon Transformer for Instance Segmentation | Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, Raquel Urtasun | more details | n/a | 65.9 | 75.8 | 71.8 | 82.5 | 52.2 | 68.7 | 63.3 | 58.7 | 54.4 | |
StixelPointNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Learning Stixel-Based Instance Segmentation | Monty Santarossa, Lukas Schneider, Claudius Zelenka, Lars Schmarje, Reinhard Koch, Uwe Franke | IV 2021 | An adapted version of the PointNet is trained on Stixels as input for instance segmentation. more details | 0.035 | 19.3 | 24.6 | 24.9 | 31.8 | 22.3 | 27.3 | 0.0 | 11.6 | 11.4 |
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 55.9 | 61.2 | 59.2 | 75.0 | 41.7 | 52.9 | 58.0 | 51.9 | 47.0 |
PolyTransform + SegFix | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | openseg | We simply apply a novel post-processing scheme based on the PolyTransform (thanks to the authors of PolyTransform for providing their segmentation results). The performance of the baseline PolyTransform is 40.1% and our method achieves 41.2%. Besides, our method also could improve the results of PointRend and PANet by more than 1.0% without any re-training or fine-tuning the segmentation models. more details | n/a | 66.1 | 76.2 | 72.1 | 82.8 | 52.4 | 68.7 | 63.3 | 58.8 | 54.3 | |
GAIS-Net | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Geometry-Aware Instance Segmentation with Disparity Maps | Cho-Ying Wu, Xiaoyan Hu, Michael Happold, Qiangeng Xu, Ulrich Neumann | Scalability in Autonomous Driving, workshop at CVPR 2020 | Geometry-Aware Instance Segmentation with Disparity Maps more details | n/a | 59.5 | 68.7 | 66.8 | 78.0 | 42.7 | 59.1 | 56.1 | 55.0 | 49.6 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 61.6 | 64.0 | 62.8 | 77.3 | 52.7 | 63.1 | 63.1 | 58.8 | 50.6 |
LevelSet R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 58.2 | 68.3 | 66.6 | 77.3 | 42.7 | 55.6 | 53.4 | 53.3 | 48.8 |
LevelSet R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 65.7 | 76.3 | 72.5 | 83.2 | 49.4 | 66.8 | 58.8 | 61.8 | 57.1 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 54.9 | 60.5 | 60.5 | 74.0 | 39.5 | 55.9 | 53.5 | 50.2 | 45.5 |
Deep Affinity Net [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Deep Affinity Net: Instance Segmentation via Affinity | Xingqian Xu, Mangtik Chiu, Thomas Huang, Honghui Shi | A proposal-free method that uses FPN generated features and network predicted 4-neighbor affinities to reconstruct instance segments. During inference time, an efficient graph partitioning algorithm, Cascade-GAEC, is introduced to overcome the long execution time in the high-resolution graph partitioning problem. more details | n/a | 48.0 | 51.4 | 53.2 | 66.7 | 38.8 | 51.2 | 49.8 | 40.2 | 32.8 | |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated for several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 67.6 | 71.8 | 69.1 | 82.9 | 56.6 | 68.8 | 65.6 | 66.2 | 59.7 | |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 64.2 | 66.1 | 65.5 | 79.0 | 52.9 | 67.8 | 66.0 | 62.5 | 53.6 |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 66.4 | 69.5 | 68.8 | 81.7 | 56.2 | 69.2 | 60.8 | 66.2 | 58.9 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 64.9 | 74.4 | 69.7 | 80.0 | 48.8 | 64.9 | 64.3 | 58.0 | 59.1 | |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation Resnet101, pretrained on Imagenet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 39.4 | 52.6 | 40.7 | 55.4 | 37.2 | 44.2 | 17.8 | 36.7 | 30.3 |
UniDet_RVC | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 300.0 | 52.4 | 59.7 | 56.2 | 68.4 | 41.0 | 57.6 | 52.8 | 46.1 | 37.7 | ||
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 38.5 | 48.4 | 49.4 | 63.8 | 32.9 | 41.3 | 3.7 | 35.3 | 33.1 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 61.0 | 67.6 | 66.0 | 80.6 | 49.4 | 56.8 | 54.8 | 58.5 | 54.3 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 67.5 | 69.8 | 69.0 | 82.8 | 57.0 | 69.0 | 69.0 | 65.2 | 58.1 | |
PolyTransform + SegFix + BPR | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation | Chufeng Tang*, Hang Chen*, Xiao Li, Jianmin Li, Zhaoxiang Zhang, Xiaolin Hu | CVPR 2021 | Tremendous efforts have been made on instance segmentation but the mask quality is still not satisfactory. The boundaries of predicted instance masks are usually imprecise due to the low spatial resolution of feature maps and the imbalance problem caused by the extremely low proportion of boundary pixels. To address these issues, we propose a conceptually simple yet effective post-processing refinement framework to improve the boundary quality based on the results of any instance segmentation model, termed BPR. Following the idea of looking closer to segment boundaries better, we extract and refine a series of small boundary patches along the predicted instance boundaries. The refinement is accomplished by a boundary patch refinement network at higher resolution. The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on Cityscapes benchmark, especially on the boundary-aware metrics. Moreover, by applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard. more details | n/a | 66.5 | 77.0 | 72.4 | 83.8 | 52.7 | 68.6 | 63.3 | 59.4 | 54.9 |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 68.7 | 71.9 | 69.0 | 83.0 | 59.4 | 72.3 | 67.0 | 67.2 | 60.2 | |
CenterPoly | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.045 | 39.5 | 49.7 | 46.7 | 61.2 | 24.8 | 35.0 | 33.5 | 35.5 | 29.6 | ||
HRI-INST | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI-INST more details | n/a | 70.4 | 74.9 | 72.8 | 84.5 | 60.6 | 74.8 | 69.1 | 64.8 | 61.8 | ||
DH-ARI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DH-ARI more details | n/a | 68.4 | 78.2 | 73.8 | 86.3 | 55.4 | 68.7 | 60.3 | 65.0 | 59.2 | ||
HRI-TRANS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI transformer instance segmentation more details | n/a | 71.4 | 78.2 | 72.8 | 87.3 | 60.6 | 74.8 | 69.1 | 66.9 | 61.8 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 61.3 | 63.5 | 64.9 | 75.2 | 50.9 | 65.9 | 64.4 | 56.0 | 49.7 |
QueryInst-Parallel Completion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Hai Wang; Shilin Zhu; PuPu; Meng; Le; Apple; Rong | We propose a novel feature-completion network framework, QueryInst-Parallel Completion. First, a global context module is introduced into the backbone network to obtain instance information. Then, a parallel semantic branch and a parallel global branch are proposed to extract the semantic and global information of the feature layers, so as to complete the ROI features. In addition, we propose a feature transfer structure, which explicitly increases the connection between the detection and segmentation branches, changes the gradient back-propagation path, and indirectly complements the ROI features. more details | n/a | 60.9 | 72.9 | 67.6 | 83.0 | 41.2 | 61.9 | 53.4 | 53.3 | 53.6 | |
CenterPoly v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Real-time instance segmentation with polygons using an Intersection-over-Union loss | Katia Jodogne-del Litto, Guillaume-Alexandre Bilodeau | more details | 0.045 | 39.4 | 51.6 | 42.8 | 59.9 | 31.5 | 41.9 | 27.9 | 34.8 | 25.0 | |
Jiangsu-University-Environmental-Perception | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 67.6 | 78.4 | 73.2 | 85.3 | 50.6 | 70.7 | 57.1 | 65.6 | 59.8 |
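The "average" column in these instance-level tables appears to be the arithmetic mean of the eight per-class AP values (person, rider, car, truck, bus, train, motorcycle, bicycle); see the Benchmark Suite for the authoritative definition. The snippet below is a minimal illustrative sketch of that averaging under this assumption, not the official evaluation script; recomputing from the rounded per-class entries shown here can occasionally differ from the listed average in the last digit.

```python
# Minimal illustrative sketch (not the official Cityscapes evaluation code),
# assuming the class-level "average" is the mean of the eight per-class AP values.
INSTANCE_CLASSES = [
    "person", "rider", "car", "truck", "bus", "train", "motorcycle", "bicycle"
]

def class_average_ap(per_class_ap: dict) -> float:
    """Arithmetic mean of per-class AP scores over all instance classes."""
    return sum(per_class_ap[c] for c in INSTANCE_CLASSES) / len(INSTANCE_CLASSES)

# Per-class AP values (in %) copied from the "Mask R-CNN [COCO]" row above.
mask_rcnn_coco = {
    "person": 67.1, "rider": 65.4, "car": 71.8, "truck": 42.3,
    "bus": 61.0, "train": 53.9, "motorcycle": 54.3, "bicycle": 49.0,
}
print(round(class_average_ap(mask_rcnn_coco), 1))  # 58.1, matching the listed average
```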
AP 100 m on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
R-CNN + MCG convex hull | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | The Cityscapes Dataset for Semantic Urban Scene Understanding | M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele | CVPR 2016 | We compute MCG object proposals [1] and use their convex hulls as instance candidates. These proposals are scored by a Fast R-CNN detector [2]. [1] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marqués, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014. [2] R. Girshick. Fast R-CNN. In ICCV, 2015. more details | 60.0 | 7.7 | 2.6 | 1.1 | 17.5 | 10.6 | 17.4 | 9.2 | 2.6 | 0.9 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 15.3 | 24.4 | 20.3 | 36.4 | 5.5 | 10.6 | 5.2 | 10.5 | 9.2 |
Instance-level Segmentation of Vehicles by Deep Contours | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance-level Segmentation of Vehicles by Deep Contours | Jan van den Brand, Matthias Ochs and Rudolf Mester | Asian Conference on Computer Vision - Workshop on Computer Vision Technologies for Smart Vehicle | Our method uses the fully convolutional network (FCN) for semantic labeling and for estimating the boundary of each vehicle. Even though a contour is in general a one pixel wide structure which cannot be directly learned by a CNN, our network addresses this by providing areas around the contours. Based on these areas, we separate the individual vehicle instances. more details | 0.2 | 3.9 | 0.0 | 0.0 | 31.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
Boundary-aware Instance Segmentation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Boundary-aware Instance Segmentation | Zeeshan Hayder, Xuming He, Mathieu Salzmann | CVPR 2017 | End-to-end model for instance segmentation using VGG16 network Previously listed as "Shape-Aware Instance Segmentation" more details | n/a | 29.3 | 30.3 | 22.7 | 58.2 | 24.9 | 38.6 | 29.9 | 15.3 | 14.3 |
RecAttend | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | n/a | 16.8 | 19.6 | 5.5 | 46.8 | 14.2 | 21.5 | 13.1 | 7.2 | 6.1 | ||
Joint Graph Decomposition and Node Labeling | yes | yes | no | no | no | no | no | no | no | no | 8 | 8 | no | no | Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications | Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, Bjoern Andres | Computer Vision and Pattern Recognition (CVPR) 2017 | more details | n/a | 16.8 | 13.5 | 16.6 | 38.4 | 11.3 | 19.2 | 16.9 | 10.4 | 8.3 |
InstanceCut | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | InstanceCut: from Edges to Instances with MultiCut | A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy, C. Rother | Computer Vision and Pattern Recognition (CVPR) 2017 | InstanceCut represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard CNN for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. more details | n/a | 22.1 | 19.7 | 14.0 | 38.9 | 24.8 | 34.4 | 23.1 | 13.7 | 8.0 |
Semantic Instance Segmentation with a Discriminative Loss Function | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Instance Segmentation with a Discriminative Loss Function | Bert De Brabandere, Davy Neven, Luc Van Gool | Deep Learning for Robotic Vision, workshop at CVPR 2017 | This method uses a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step. The loss function encourages the network to map each pixel to a point in feature space so that pixels belonging to the same instance lie close together while different instances are separated by a wide margin. Previously listed as "PPLoss". more details | n/a | 27.8 | 25.1 | 27.5 | 40.0 | 24.4 | 39.4 | 26.5 | 22.2 | 17.9 |
SGN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | SGN: Sequential Grouping Networks for Instance Segmentation | Shu Liu, Jiaya Jia, Sanja Fidler, Raquel Urtasun | ICCV 2017 | Instance segmentation using a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. more details | n/a | 38.9 | 36.7 | 32.7 | 60.1 | 39.9 | 53.7 | 44.1 | 24.4 | 20.0 |
Mask R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes [fine-only] + COCO more details | n/a | 45.8 | 51.3 | 39.3 | 67.9 | 42.8 | 58.8 | 46.8 | 31.4 | 27.9 | |
Mask R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes fine-only more details | n/a | 37.6 | 46.2 | 35.6 | 65.5 | 31.1 | 46.0 | 27.5 | 24.9 | 24.3 | |
Deep Watershed Transformation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Deep Watershed Transformation for Instance Segmentation | Min Bai and Raquel Urtasun | CVPR 2017 | Instance segmentation using a watershed transformation inspired CNN. The input RGB image is augmented using the semantic segmentation from the recent PSPNet by H. Zhao et al. Previously named "DWT". more details | n/a | 31.4 | 27.7 | 23.1 | 50.8 | 37.9 | 46.4 | 33.7 | 19.4 | 12.7 |
Foveal Vision for Instance Segmentation of Road Images | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Foveal Vision for Instance Segmentation of Road Images | Benedikt Ortelt, Christian Herrmann, Dieter Willersinn, Jürgen Beyerer | VISAPP 2018 | Directly based on 'Pixel-level Encoding for Instance Segmentation'. Adds an improved angular distance measure and a foveal concept to better address small objects at the vanishing point of the road. more details | n/a | 20.4 | 24.5 | 19.6 | 39.3 | 14.5 | 24.2 | 18.5 | 11.1 | 11.1 |
SegNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.5 | 43.2 | 48.1 | 35.1 | 61.4 | 43.2 | 61.7 | 44.6 | 26.1 | 25.4 | ||
DCME | yes | yes | no | no | no | no | no | no | no | no | 3 | 3 | no | no | Distance to Center of Mass Encoding for Instance Segmentation | Thomio Watanabe and Denis Wolf | 2018 21st International Conference on Intelligent Transportation Systems (ITSC) | more details | n/a | 6.6 | 3.7 | 1.3 | 26.6 | 3.6 | 8.1 | 7.7 | 1.3 | 0.6 |
RRL | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 40.9 | 49.7 | 38.4 | 69.3 | 31.3 | 49.6 | 34.5 | 26.5 | 27.6 | ||
PANet [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 44.2 | 53.9 | 43.2 | 73.4 | 37.2 | 50.7 | 36.0 | 29.0 | 30.6 |
PANet [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only + COCO, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 49.2 | 58.7 | 46.7 | 75.8 | 41.7 | 61.9 | 38.9 | 35.7 | 34.4 |
LCIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 24.2 | 28.6 | 24.5 | 36.8 | 21.1 | 27.0 | 21.6 | 17.6 | 16.2 | ||
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Anurag Arnab and Philip H. S. Torr | Computer Vision and Pattern Recognition (CVPR) 2017 | We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label (this has recently been termed "Panoptic Segmentation"). Our method is based on an initial semantic segmentation module which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. As a result, it reasons about occlusions (unlike some related work, a single pixel cannot belong to multiple instances). more details | n/a | 36.8 | 38.4 | 30.4 | 47.6 | 35.6 | 49.8 | 44.8 | 27.5 | 20.3 |
PolygonRNN++ | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Annotation of Segmentation Datasets with Polygon-RNN++ | D. Acuna, H. Ling, A. Kar, and S. Fidler | CVPR 2018 | more details | n/a | 39.3 | 49.6 | 34.6 | 69.3 | 32.4 | 52.5 | 36.1 | 18.3 | 21.7 |
GMIS: Graph Merge for Instance Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Yiding Liu, Siyu Yang, Bin Li, Wengang Zhou, Jizheng Xu, Houqiang Li, Yan Lu | more details | n/a | 42.7 | 47.9 | 38.0 | 66.3 | 39.5 | 60.8 | 47.2 | 23.9 | 18.1 | ||
TCnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | TCnet more details | n/a | 45.0 | 53.1 | 41.9 | 69.1 | 38.4 | 56.8 | 41.4 | 31.3 | 27.7 | ||
MaskRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MaskRCNN Instance segmentation baseline for ROB challenge using default parameters from Matterport's implementation of Mask RCNN https://github.com/matterport/Mask_RCNN more details | n/a | 14.6 | 29.6 | 15.8 | 48.7 | 2.5 | 11.4 | 0.0 | 8.8 | 0.0 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | CVPR 2018 | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 35.0 | 38.2 | 35.5 | 60.4 | 26.7 | 42.7 | 24.4 | 27.5 | 24.7 |
Deep Coloring | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous ECCV submission #2955 more details | n/a | 39.0 | 38.8 | 34.4 | 61.1 | 38.5 | 54.2 | 39.2 | 25.3 | 20.1 | ||
MRCNN_VSCMLab_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MaskRCNN+FPN with pre-trained COCO model. ms-training with short edge [800, 1024], inference with short edge size 800. ScanNet randomly subsampled to a size close to Cityscapes. Optimizer: Adam; learning rate: from 1e-4 to 1e-3 with linear warm-up schedule, decreased by a factor of 0.1 at epochs 200 and 300; epochs: 400; steps per epoch: 500; roi_per_im: 512 more details | 1.0 | 24.8 | 30.3 | 19.6 | 57.7 | 22.4 | 32.4 | 24.1 | 12.1 | 0.0 | |
BAMRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 0.2 | 0.0 | 0.0 | 1.2 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ||
NL_ROI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Non-local ROI on Mask R-CNN more details | n/a | 36.1 | 45.1 | 31.8 | 65.6 | 33.3 | 47.6 | 26.1 | 19.9 | 19.1 | ||
RUSH_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 45.2 | 53.2 | 44.3 | 69.3 | 40.8 | 54.9 | 34.7 | 33.8 | 30.8 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name is firefly] | MaskRCNN segmentation baseline for the Bosch autodrive challenge, using Matterport's implementation of Mask RCNN https://github.com/matterport/Mask_RCNN 55k iterations, default parameters (backbone: ResNet-101), 19 hours for training more details | n/a | 22.1 | 28.6 | 16.6 | 40.3 | 18.5 | 26.8 | 23.0 | 11.5 | 11.5 | |
NV-ADLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 49.3 | 56.7 | 42.8 | 75.2 | 46.7 | 63.2 | 43.0 | 35.7 | 30.9 | ||
Sogou_MM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global Concatenating Feature Enhancement for Instance Segmentation | Hang Yang, Xiaozhe Xin, Wenwen Yang, Bin Li | Global Concatenating Feature Enhancement for Instance Segmentation more details | n/a | 51.1 | 54.8 | 45.0 | 72.6 | 52.2 | 66.5 | 51.4 | 35.3 | 30.8 | |
Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | Davy Neven, Bert De Brabandere, Marc Proesmans and Luc Van Gool | CVPR 2019 | Fine only - ERFNet backbone more details | 0.1 | 37.8 | 49.9 | 36.9 | 71.1 | 26.2 | 43.6 | 22.7 | 25.2 | 27.0 |
Instance Annotation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance Segmentation as Image Segmentation Annotation | Thomio Watanabe and Denis F. Wolf | 2019 IEEE Intelligent Vehicles Symposium (IV) | Based on DCME more details | 4.416 | 13.6 | 14.1 | 5.6 | 41.4 | 10.1 | 17.7 | 10.9 | 5.5 | 3.7 |
NJUST | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Ang Li, Chongyang Zhang | Mask R-CNN based on FPN enhancement and Mask Rescore, etc. Only a single model, SE-ResNext-152 with COCO pre-training, is used. more details | n/a | 53.0 | 60.9 | 49.3 | 75.8 | 48.3 | 68.4 | 47.8 | 39.8 | 33.7 | |
BshapeNet+ [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+, ResNet-50-FPN as base model, Cityscapes [fine-only] more details | n/a | 40.5 | 47.9 | 36.8 | 67.8 | 36.7 | 49.4 | 36.2 | 27.3 | 21.9 | |
SSAP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SSAP: Single-Shot Instance Segmentation With Affinity Pyramid | Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang | ICCV 2019 | SSAP, ResNet-101, Cityscapes fine-only more details | n/a | 47.3 | 54.3 | 39.2 | 78.3 | 48.1 | 67.6 | 42.7 | 25.5 | 22.5 |
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 16.4 | 18.6 | 5.8 | 41.0 | 18.1 | 24.4 | 14.3 | 6.5 | 2.6 |
UPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | UPSNet: A Unified Panoptic Segmentation Network | Yuwen Xiong, Renjie Liao, Hengshuang Zhao, Rui Hu, Min Bai, Ersin Yumer, Raquel Urtasun | CVPR 2019 | more details | 0.227 | 46.8 | 52.6 | 39.9 | 71.1 | 44.4 | 62.0 | 45.0 | 31.0 | 28.5 |
Sem2Ins | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous NeurIPS19 submission #4671 more details | n/a | 29.3 | 28.3 | 27.8 | 42.1 | 31.7 | 41.8 | 28.9 | 19.4 | 14.5 | ||
BshapeNet+ [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+ single model, ResNet-50-FPN as base model, Cityscapes [fine-only + COCO] more details | n/a | 47.3 | 53.7 | 37.2 | 70.3 | 46.1 | 61.1 | 49.0 | 33.6 | 27.4 | |
AdaptIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point (x, y), it generates a mask for the object located at (x, y). The network adapts to the input point with the help of AdaIN layers, thus producing different masks for different objects on the same image. AdaptIS generates pixel-accurate object masks; therefore, it accurately segments objects of complex shape or severely occluded ones. more details | n/a | 48.2 | 49.7 | 45.9 | 69.4 | 45.6 | 65.0 | 58.0 | 33.8 | 18.3 | |
AInnoSegmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Faen Zhang, Jiahong Wu, Haotian Cao, Zhizheng Yang, Jianfei Song, Ze Huang, Jiashui Huang, Shenglan Ben | AInnoSegmentation uses SE-ResNet-152 as the backbone and an FPN model to extract multi-level features, a self-developed method to combine the multi-level features, COCO datasets to pre-train the model, and so on more details | n/a | 53.9 | 59.6 | 46.7 | 76.4 | 53.6 | 70.8 | 52.8 | 38.8 | 32.4 | |
iFLYTEK-CV | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 55.7 | 60.9 | 51.0 | 77.4 | 57.3 | 70.2 | 55.3 | 39.2 | 34.6 | ||
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. This entry fixes a minor inference bug (i.e., same trained model) for instance segmentation, compared to the previous submission. more details | n/a | 50.5 | 54.2 | 43.8 | 76.4 | 47.7 | 63.4 | 49.3 | 35.4 | 33.6 | |
snake | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Deep Snake for Real-Time Instance Segmentation | Sida Peng, Wen Jiang, Huaijin Pi, Xiuli Li, Hujun Bao, Xiaowei Zhou | CVPR 2020 | more details | 0.217 | 43.2 | 52.2 | 37.6 | 74.7 | 38.6 | 57.1 | 38.9 | 23.5 | 23.0 |
PolyTransform | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | PolyTransform: Deep Polygon Transformer for Instance Segmentation | Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, Raquel Urtasun | more details | n/a | 54.8 | 60.0 | 49.0 | 77.3 | 53.9 | 68.0 | 57.2 | 40.1 | 33.2 | |
StixelPointNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Learning Stixel-Based Instance Segmentation | Monty Santarossa, Lukas Schneider, Claudius Zelenka, Lars Schmarje, Reinhard Koch, Uwe Franke | IV 2021 | An adapted version of the PointNet is trained on Stixels as input for instance segmentation. more details | 0.035 | 15.1 | 18.6 | 13.1 | 26.5 | 21.4 | 29.3 | 0.0 | 5.6 | 6.5 |
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 49.6 | 51.4 | 42.5 | 74.4 | 47.5 | 63.6 | 50.2 | 35.6 | 31.6 |
PolyTransform + SegFix | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | openseg | We simply apply a novel post-processing scheme based on PolyTransform (thanks to the authors of PolyTransform for providing their segmentation results). The performance of the baseline PolyTransform is 40.1% and our method achieves 41.2%. Besides, our method could also improve the results of PointRend and PANet by more than 1.0% without any re-training or fine-tuning of the segmentation models. more details | n/a | 56.0 | 61.8 | 50.4 | 79.2 | 54.8 | 69.3 | 57.5 | 40.8 | 34.3 |
GAIS-Net | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Geometry-Aware Instance Segmentation with Disparity Maps | Cho-Ying Wu, Xiaoyan Hu, Michael Happold, Qiangeng Xu, Ulrich Neumann | Scalability in Autonomous Driving, workshop at CVPR 2020 | Geometry-Aware Instance Segmentation with Disparity Maps more details | n/a | 44.6 | 51.8 | 41.3 | 71.2 | 40.0 | 54.9 | 41.5 | 29.4 | 26.3 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 54.3 | 54.2 | 45.1 | 76.6 | 57.4 | 73.1 | 58.1 | 37.0 | 33.1 |
LevelSet R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 47.5 | 55.2 | 42.5 | 74.9 | 41.7 | 57.5 | 44.5 | 33.4 | 30.1 |
LevelSet R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 54.5 | 60.8 | 47.8 | 77.6 | 51.4 | 68.6 | 53.2 | 41.0 | 35.4 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 48.8 | 51.3 | 42.4 | 74.5 | 45.2 | 66.6 | 45.3 | 34.8 | 30.0 |
Deep Affinity Net [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Deep Affinity Net: Instance Segmentation via Affinity | Xingqian Xu, Mangtik Chiu, Thomas Huang, Honghui Shi | A proposal-free method that uses FPN generated features and network predicted 4-neighbor affinities to reconstruct instance segments. During inference time, an efficient graph partitioning algorithm, Cascade-GAEC, is introduced to overcome the long execution time in the high-resolution graph partitioning problem. more details | n/a | 41.5 | 39.4 | 35.1 | 63.0 | 43.4 | 61.2 | 46.3 | 24.9 | 18.8 | |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated for several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 57.9 | 59.9 | 50.8 | 80.2 | 59.5 | 73.2 | 55.6 | 45.1 | 39.2 | |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 55.5 | 56.2 | 47.9 | 77.8 | 55.9 | 73.5 | 57.3 | 39.7 | 35.6 |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 55.6 | 56.5 | 46.2 | 78.6 | 59.6 | 72.7 | 50.3 | 43.6 | 37.0 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 52.9 | 58.5 | 48.5 | 77.1 | 48.5 | 69.3 | 48.3 | 36.3 | 36.3 | |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation Resnet101, pretrained on Imagenet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 31.2 | 40.5 | 27.5 | 53.2 | 33.2 | 41.7 | 15.6 | 19.8 | 18.3 |
UniDet_RVC | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 300.0 | 44.1 | 49.8 | 34.3 | 65.9 | 45.3 | 62.0 | 45.3 | 27.5 | 22.9 | ||
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 31.7 | 41.7 | 29.6 | 62.2 | 33.4 | 46.0 | 3.3 | 18.0 | 19.2 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 53.7 | 56.1 | 48.3 | 77.7 | 51.9 | 67.0 | 53.1 | 39.7 | 35.5 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 57.8 | 56.6 | 49.9 | 77.9 | 60.0 | 76.1 | 59.0 | 43.8 | 38.7 | |
PolyTransform + SegFix + BPR | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation | Chufeng Tang*, Hang Chen*, Xiao Li, Jianmin Li, Zhaoxiang Zhang, Xiaolin Hu | CVPR 2021 | Tremendous efforts have been made on instance segmentation but the mask quality is still not satisfactory. The boundaries of predicted instance masks are usually imprecise due to the low spatial resolution of feature maps and the imbalance problem caused by the extremely low proportion of boundary pixels. To address these issues, we propose a conceptually simple yet effective post-processing refinement framework to improve the boundary quality based on the results of any instance segmentation model, termed BPR. Following the idea of looking closer to segment boundaries better, we extract and refine a series of small boundary patches along the predicted instance boundaries. The refinement is accomplished by a boundary patch refinement network at higher resolution. The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on Cityscapes benchmark, especially on the boundary-aware metrics. Moreover, by applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard. more details | n/a | 57.5 | 63.4 | 51.6 | 81.1 | 55.8 | 71.0 | 59.9 | 41.9 | 35.3 |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 58.9 | 58.6 | 50.7 | 79.8 | 62.7 | 77.6 | 56.0 | 45.5 | 39.8 | |
CenterPoly | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.045 | 23.3 | 29.4 | 18.3 | 50.7 | 18.2 | 23.8 | 22.2 | 12.2 | 11.7 | ||
HRI-INST | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI-INST more details | n/a | 57.5 | 58.5 | 50.8 | 77.6 | 59.4 | 76.9 | 58.5 | 42.3 | 36.0 | ||
DH-ARI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DH-ARI more details | n/a | 58.5 | 64.3 | 54.6 | 82.7 | 57.3 | 73.1 | 51.9 | 45.4 | 38.9 | ||
HRI-TRANS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI transformer instance segmentation more details | n/a | 57.9 | 60.4 | 50.8 | 80.1 | 59.4 | 76.9 | 58.5 | 41.0 | 36.0 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 57.2 | 56.9 | 50.6 | 78.4 | 57.1 | 76.5 | 64.9 | 39.8 | 33.1 |
QueryInst-Parallel Completion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Hai Wang; Shilin Zhu; PuPu; Meng; Le; Apple; Rong | We propose a novel feature-completion network framework, QueryInst Parallel Completion. First, a global context module is introduced into the backbone network to obtain instance information. Then, a parallel semantic branch and a parallel global branch are proposed to extract the semantic and global information of the feature layers, so as to complete the ROI features. In addition, we also propose a feature transfer structure that explicitly strengthens the connection between the detection and segmentation branches, changes the gradient back-propagation path, and indirectly complements the ROI features. more details | n/a | 48.1 | 58.3 | 43.7 | 76.8 | 38.8 | 61.6 | 43.6 | 31.8 | 30.7 | |
CenterPoly v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Real-time instance segmentation with polygons using an Intersection-over-Union loss | Katia Jodogne-del Litto, Guillaume-Alexandre Bilodeau | more details | 0.045 | 24.8 | 28.3 | 16.2 | 49.0 | 26.1 | 35.9 | 19.3 | 13.1 | 10.2 | |
Jiangsu-University-Environmental-Perception | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 53.4 | 61.2 | 49.0 | 77.8 | 46.5 | 69.0 | 46.3 | 42.2 | 35.5 |
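Several bottom-up entries in the table above (Panoptic-DeepLab, its SWideRNet variants, and Naive-Student) obtain instances by predicting a class-agnostic instance center heatmap together with per-pixel offsets to the corresponding center, and then assigning every "thing" pixel to its nearest predicted center. The following is a minimal sketch of that grouping step, assuming NumPy arrays for the predicted quantities; the function and variable names are illustrative and not taken from the authors' code.

```python
import numpy as np

def group_pixels(thing_mask, centers, offsets):
    """Minimal sketch of center-and-offset instance grouping (not the authors' code).

    thing_mask : (H, W) bool array of pixels predicted as countable ("thing") classes.
    centers    : (K, 2) array of detected instance centers, each as (y, x).
    offsets    : (2, H, W) array of per-pixel offsets (dy, dx) toward the center.
    Returns an (H, W) int array of instance ids (0 = stuff / no instance).
    """
    ids = np.zeros(thing_mask.shape, dtype=np.int32)
    if len(centers) == 0:
        return ids
    ys, xs = np.nonzero(thing_mask)
    # Each pixel "votes" for a center location: its own coordinate plus its offset.
    votes = np.stack([ys + offsets[0, ys, xs],
                      xs + offsets[1, ys, xs]], axis=1)                      # (N, 2)
    # Distance from every vote to every detected center, then pick the closest one.
    dists = np.linalg.norm(votes[:, None, :] - centers[None, :, :], axis=2)  # (N, K)
    ids[ys, xs] = dists.argmin(axis=1) + 1                                   # 1-based ids
    return ids

# Toy example: two centers and zero offsets, so pixels attach to the nearer center.
mask = np.ones((4, 4), dtype=bool)
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
offsets = np.zeros((2, 4, 4))
print(group_pixels(mask, centers, offsets))
```

The semantic prediction then decides which class each grouped instance belongs to; that fusion step is omitted here.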
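The Axial-DeepLab entries above are built on attention layers that factorize 2D self-attention into two consecutive 1D self-attentions, one along the height axis and one along the width axis. A rough sketch of that factorization is given below using standard PyTorch multi-head attention; the published models additionally use a position-sensitive attention design that is not reproduced here, so this is only an illustration of the idea.

```python
import torch
import torch.nn as nn

class AxialAttention2d(nn.Module):
    """Sketch of axial attention: 2D self-attention factorized into two 1D passes."""

    def __init__(self, dim, heads=8):
        super().__init__()
        # One attention module per axis; batch_first expects (batch, seq, feature).
        self.width_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.height_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Pass 1: attend along the width axis; every image row is its own sequence.
        rows = x.permute(0, 2, 3, 1).reshape(B * H, W, C)
        rows, _ = self.width_attn(rows, rows, rows)
        x = rows.reshape(B, H, W, C)
        # Pass 2: attend along the height axis; every image column is its own sequence.
        cols = x.permute(0, 2, 1, 3).reshape(B * W, H, C)
        cols, _ = self.height_attn(cols, cols, cols)
        return cols.reshape(B, W, H, C).permute(0, 3, 2, 1)   # back to (B, C, H, W)

# Tiny smoke test on a feature map.
feats = torch.randn(2, 64, 16, 32)
print(AxialAttention2d(dim=64)(feats).shape)                  # torch.Size([2, 64, 16, 32])
```

Attending along one axis at a time reduces the cost from quadratic in H×W to roughly linear in the longer side, which is what makes attention over a larger or even global region affordable at Cityscapes resolutions.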
AP 50 m on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | average | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
R-CNN + MCG convex hull | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | The Cityscapes Dataset for Semantic Urban Scene Understanding | M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, B. Schiele | CVPR 2016 | We compute MCG object proposals [1] and use their convex hulls as instance candidates. These proposals are scored by a Fast R-CNN detector [2]. [1] P. Arbelaez, J. Pont-Tuset, J. Barron, F. Marqués, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014. [2] R. Girshick. Fast R-CNN. In ICCV, 2015. more details | 60.0 | 10.3 | 2.7 | 1.1 | 21.2 | 14.0 | 25.2 | 14.2 | 2.7 | 1.0 |
Pixel-level Encoding for Instance Segmentation | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Pixel-level Encoding and Depth Layering for Instance-level Semantic Labeling | J. Uhrig, M. Cordts, U. Franke, and T. Brox | GCPR 2016 | We predict three encoding channels from a single image using an FCN: semantic labels, depth classes, and an instance-aware representation based on directions towards instance centers. Using low-level computer vision techniques, we obtain pixel-level and instance-level semantic labeling paired with a depth estimate of the instances. more details | n/a | 16.7 | 25.0 | 21.0 | 40.7 | 6.7 | 13.5 | 6.4 | 11.2 | 9.3 |
Instance-level Segmentation of Vehicles by Deep Contours | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance-level Segmentation of Vehicles by Deep Contours | Jan van den Brand, Matthias Ochs and Rudolf Mester | Asian Conference on Computer Vision - Workshop on Computer Vision Technologies for Smart Vehicle | Our method uses the fully convolutional network (FCN) for semantic labeling and for estimating the boundary of each vehicle. Even though a contour is in general a one pixel wide structure which cannot be directly learned by a CNN, our network addresses this by providing areas around the contours. Based on these areas, we separate the individual vehicle instances. more details | 0.2 | 4.9 | 0.0 | 0.0 | 39.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
Boundary-aware Instance Segmentation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Boundary-aware Instance Segmentation | Zeeshan Hayder, Xuming He, Mathieu Salzmann | CVPR 2017 | End-to-end model for instance segmentation using VGG16 network Previously listed as "Shape-Aware Instance Segmentation" more details | n/a | 34.0 | 31.5 | 23.4 | 63.1 | 32.2 | 50.5 | 40.4 | 16.5 | 14.6 |
RecAttend | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | n/a | 20.9 | 20.7 | 5.8 | 54.2 | 17.9 | 32.1 | 21.9 | 7.8 | 6.4 | ||
Joint Graph Decomposition and Node Labeling | yes | yes | no | no | no | no | no | no | no | no | 8 | 8 | no | no | Joint Graph Decomposition and Node Labeling: Problem, Algorithms, Applications | Evgeny Levinkov, Jonas Uhrig, Siyu Tang, Mohamed Omran, Eldar Insafutdinov, Alexander Kirillov, Carsten Rother, Thomas Brox, Bernt Schiele, Bjoern Andres | Computer Vision and Pattern Recognition (CVPR) 2017 | more details | n/a | 20.3 | 14.0 | 17.4 | 43.9 | 15.0 | 26.1 | 26.2 | 11.6 | 8.5 |
InstanceCut | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | InstanceCut: from Edges to Instances with MultiCut | A. Kirillov, E. Levinkov, B. Andres, B. Savchynskyy, C. Rother | Computer Vision and Pattern Recognition (CVPR) 2017 | InstanceCut represents the problem by two output modalities: (i) an instance-agnostic semantic segmentation and (ii) all instance-boundaries. The former is computed from a standard CNN for semantic segmentation, and the latter is derived from a new instance-aware edge detection model. To reason globally about the optimal partitioning of an image into instances, we combine these two modalities into a novel MultiCut formulation. more details | n/a | 26.1 | 20.1 | 14.6 | 42.5 | 32.3 | 44.7 | 31.7 | 14.3 | 8.2 |
Semantic Instance Segmentation with a Discriminative Loss Function | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | yes | yes | Semantic Instance Segmentation with a Discriminative Loss Function | Bert De Brabandere, Davy Neven, Luc Van Gool | Deep Learning for Robotic Vision, workshop at CVPR 2017 | This method uses a discriminative loss function, operating at the pixel level, that encourages a convolutional network to produce a representation of the image that can easily be clustered into instances with a simple post-processing step. The loss function encourages the network to map each pixel to a point in feature space so that pixels belonging to the same instance lie close together while different instances are separated by a wide margin. Previously listed as "PPLoss". more details | n/a | 31.0 | 25.1 | 28.2 | 44.0 | 28.6 | 47.7 | 32.5 | 23.5 | 18.0 |
SGN | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | SGN: Sequential Grouping Networks for Instance Segmentation | Shu Liu, Jiaya Jia, Sanja Fidler, Raquel Urtasun | ICCV 2017 | Instance segmentation using a sequence of neural networks, each solving a sub-grouping problem of increasing semantic complexity in order to gradually compose objects out of pixels. more details | n/a | 44.5 | 36.8 | 33.3 | 63.2 | 50.7 | 67.4 | 59.2 | 25.3 | 20.0 |
Mask R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes [fine-only] + COCO more details | n/a | 49.5 | 51.5 | 40.1 | 69.9 | 49.3 | 69.2 | 55.9 | 31.9 | 27.9 | |
Mask R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Mask R-CNN | Kaiming He, Georgia Gkioxari, Piotr Dollár, Ross Girshick | Mask R-CNN, ResNet-50-FPN, Cityscapes fine-only more details | n/a | 40.1 | 46.2 | 35.9 | 67.4 | 37.8 | 51.2 | 32.9 | 25.3 | 24.3 | |
Deep Watershed Transformation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Deep Watershed Transformation for Instance Segmentation | Min Bai and Raquel Urtasun | CVPR 2017 | Instance segmentation using a watershed transformation inspired CNN. The input RGB image is augmented using the semantic segmentation from the recent PSPNet by H. Zhao et al. Previously named "DWT". more details | n/a | 36.8 | 27.4 | 23.7 | 53.5 | 47.1 | 64.3 | 45.1 | 20.2 | 13.1 |
Foveal Vision for Instance Segmentation of Road Images | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Foveal Vision for Instance Segmentation of Road Images | Benedikt Ortelt, Christian Herrmann, Dieter Willersinn, Jürgen Beyerer | VISAPP 2018 | Directly based on 'Pixel-level Encoding for Instance Segmentation'. Adds an improved angular distance measure and a foveal concept to better address small objects at the vanishing point of the road. more details | n/a | 22.1 | 24.7 | 20.2 | 42.5 | 17.2 | 27.6 | 21.8 | 11.7 | 11.3 |
SegNet | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.5 | 45.8 | 48.0 | 35.2 | 62.5 | 50.8 | 70.0 | 48.1 | 26.3 | 25.3 | ||
DCME | yes | yes | no | no | no | no | no | no | no | no | 3 | 3 | no | no | Distance to Center of Mass Encoding for Instance Segmentation | Thomio Watanabe and Denis Wolf | 2018 21st International Conference on Intelligent Transportation Systems (ITSC) | more details | n/a | 9.5 | 4.1 | 1.4 | 35.5 | 5.1 | 14.7 | 12.9 | 1.5 | 0.6 |
RRL | yes | yes | no | no | no | no | yes | yes | no | no | no | no | no | no | Anonymous | more details | n/a | 41.8 | 49.4 | 38.4 | 70.8 | 32.0 | 53.6 | 36.7 | 26.5 | 27.2 | ||
PANet [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 46.0 | 53.7 | 43.5 | 75.5 | 40.1 | 56.2 | 39.0 | 29.5 | 30.2 |
PANet [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Path Aggregation Network for Instance Segmentation | Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, Jiaya Jia | CVPR 2018 | PANet, ResNet-50 as base model, Cityscapes fine-only + COCO, training hyper-parameters are adopted from Mask R-CNN. more details | n/a | 51.8 | 58.7 | 46.9 | 77.9 | 47.6 | 70.8 | 42.3 | 36.0 | 34.2 |
LCIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 25.8 | 29.3 | 25.3 | 38.1 | 27.0 | 27.7 | 24.3 | 18.1 | 16.3 | ||
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Anurag Arnab and Philip H. S. Torr | Computer Vision and Pattern Recognition (CVPR) 2017 | We propose an Instance Segmentation system that produces a segmentation map where each pixel is assigned an object class and instance identity label (this has recently been termed "Panoptic Segmentation"). Our method is based on an initial semantic segmentation module which feeds into an instance subnetwork. This subnetwork uses the initial category-level segmentation, along with cues from the output of an object detector, within an end-to-end CRF to predict instances. This part of our model is dynamically instantiated to produce a variable number of instances per image. Our end-to-end approach requires no post-processing and considers the image holistically, instead of processing independent proposals. As a result, it reasons about occlusions (unlike some related work, a single pixel cannot belong to multiple instances). more details | n/a | 40.9 | 39.2 | 31.2 | 49.7 | 43.3 | 61.6 | 53.5 | 28.7 | 20.4 |
PolygonRNN++ | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Efficient Annotation of Segmentation Datasets with Polygon-RNN++ | D. Acuna, H. Ling, A. Kar, and S. Fidler | CVPR 2018 | more details | n/a | 43.4 | 50.1 | 35.2 | 72.7 | 41.3 | 62.1 | 44.8 | 19.1 | 22.1 |
GMIS: Graph Merge for Instance Segmentation | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Yiding Liu, Siyu Yang, Bin Li, Wengang Zhou, Jizheng Xu, Houqiang Li, Yan Lu | more details | n/a | 47.9 | 47.9 | 38.4 | 70.6 | 48.0 | 75.1 | 59.7 | 25.1 | 18.3 | ||
TCnet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | TCnet more details | n/a | 47.8 | 53.1 | 42.3 | 70.9 | 44.6 | 65.1 | 47.5 | 31.5 | 27.5 | ||
MaskRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | MaskRCNN Instance segmentation baseline for ROB challenge using default parameters from Matterport's implementation of Mask RCNN https://github.com/matterport/Mask_RCNN more details | n/a | 14.6 | 29.3 | 15.5 | 49.6 | 1.6 | 12.8 | 0.0 | 8.1 | 0.0 | ||
Multitask Learning | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics | Alex Kendall, Yarin Gal and Roberto Cipolla | CVPR 2018 | Numerous deep learning applications benefit from multi-task learning with multiple regression and classification objectives. In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task's loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task. This allows us to simultaneously learn various quantities with different units or scales in both classification and regression settings. We demonstrate our model learning per-pixel depth regression, semantic and instance segmentation from a monocular input image. Perhaps surprisingly, we show our model can learn multi-task weightings and outperform separate models trained individually on each task. more details | n/a | 37.0 | 38.9 | 36.3 | 64.0 | 29.1 | 46.6 | 28.2 | 28.5 | 24.7 |
Deep Coloring | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous ECCV submission #2955 more details | n/a | 44.0 | 38.9 | 35.0 | 63.6 | 48.5 | 68.7 | 50.4 | 26.6 | 20.2 | ||
MRCNN_VSCMLab_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Mask R-CNN + FPN with a COCO pre-trained model. Multi-scale training with short edge in [800, 1024]; inference with short edge 800. ScanNet is randomly subsampled to a size close to Cityscapes. Optimizer: Adam; learning rate linearly warmed up from 1e-4 to 1e-3, decreased by a factor of 0.1 at epochs 200 and 300. Epochs: 400, steps per epoch: 500, roi_per_im: 512 more details | 1.0 | 29.3 | 30.9 | 20.3 | 62.7 | 28.1 | 40.9 | 38.4 | 13.1 | 0.0 | |
BAMRCNN_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 0.1 | 0.0 | 0.0 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | ||
NL_ROI_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Non-local ROI on Mask R-CNN more details | n/a | 40.8 | 45.2 | 32.6 | 69.1 | 41.2 | 59.7 | 38.7 | 20.8 | 19.2 | ||
RUSH_ROB | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 46.3 | 53.2 | 44.6 | 70.5 | 42.6 | 59.5 | 35.4 | 34.0 | 30.4 | ||
MaskRCNN_BOSH | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Jin shengtao, Yi zhihao, Liu wei [Our team name is firefly] | MaskRCNN segmentation baseline for the Bosch AutoDrive challenge, using Matterport's implementation of Mask RCNN https://github.com/matterport/Mask_RCNN. 55k iterations, default parameters (backbone: ResNet-101), 19 hours of training more details | n/a | 26.7 | 29.5 | 17.4 | 43.7 | 25.3 | 41.1 | 32.5 | 12.7 | 11.8 | |
NV-ADLR | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | NVIDIA Applied Deep Learning Research more details | n/a | 53.5 | 56.8 | 43.7 | 77.9 | 54.9 | 73.2 | 54.1 | 36.2 | 31.0 | ||
Sogou_MM | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Global Concatenating Feature Enhancement for Instance Segmentation | Hang Yang, Xiaozhe Xin, Wenwen Yang, Bin Li | Global Concatenating Feature Enhancement for Instance Segmentation more details | n/a | 54.5 | 54.9 | 45.5 | 74.5 | 60.1 | 75.4 | 59.5 | 35.6 | 30.7 | |
Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Instance Segmentation by Jointly Optimizing Spatial Embeddings and Clustering Bandwidth | Davy Neven, Bert De Brabandere, Marc Proesmans and Luc Van Gool | CVPR 2019 | Fine only - ERFNet backbone more details | 0.1 | 37.3 | 49.4 | 37.0 | 72.5 | 24.2 | 43.5 | 19.6 | 26.0 | 26.5 |
Instance Annotation | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Instance Segmentation as Image Segmentation Annotation | Thomio Watanabe and Denis F. Wolf | 2019 IEEE Intelligent Vehicles Symposium (IV) | Based on DCME more details | 4.416 | 16.6 | 15.3 | 6.0 | 50.6 | 11.6 | 22.5 | 16.5 | 6.4 | 3.9 |
NJUST | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Ang Li, Chongyang Zhang | Mask R-CNN based on FPN enhancement, mask re-scoring, etc. Only a single SE-ResNeXt-152 model with COCO pre-training was used. more details | n/a | 55.4 | 60.9 | 50.0 | 77.6 | 53.2 | 75.9 | 52.0 | 40.0 | 33.6 | |
BshapeNet+ [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+, ResNet-50-FPN as base model, Cityscapes [fine-only] more details | n/a | 43.1 | 48.2 | 37.2 | 70.5 | 42.3 | 56.7 | 39.5 | 28.8 | 21.7 | |
SSAP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SSAP: Single-Shot Instance Segmentation With Affinity Pyramid | Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang | ICCV 2019 | SSAP, ResNet-101, Cityscapes fine-only more details | n/a | 51.4 | 54.5 | 39.8 | 81.8 | 54.9 | 77.8 | 54.4 | 25.9 | 22.3 |
Spatial Sampling Net | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Spatial Sampling Network for Fast Scene Understanding | Davide Mazzini, Raimondo Schettini | CVPR 2019 Workshop on Autonomous Driving | We propose a network architecture to perform efficient scene understanding. This work presents three main novelties: the first is an Improved Guided Upsampling Module that can replace in toto the decoder part in common semantic segmentation networks. Our second contribution is the introduction of a new module based on spatial sampling to perform Instance Segmentation. It provides a very fast instance segmentation, needing only thresholding as a post-processing step at inference time. Finally, we propose a novel efficient network design that includes the new modules and we test it against different datasets for outdoor scene understanding. more details | 0.00884 | 21.4 | 19.9 | 6.2 | 49.4 | 25.2 | 36.8 | 23.4 | 7.5 | 2.7
UPSNet | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | UPSNet: A Unified Panoptic Segmentation Network | Yuwen Xiong, Renjie Liao, Hengshuang Zhao, Rui Hu, Min Bai, Ersin Yumer, Raquel Urtasun | CVPR 2019 | more details | 0.227 | 50.7 | 52.9 | 40.7 | 73.6 | 52.5 | 72.9 | 53.3 | 31.7 | 28.4 |
Sem2Ins | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous NeurIPS19 submission #4671 more details | n/a | 32.8 | 27.9 | 28.2 | 43.1 | 38.8 | 50.8 | 39.4 | 19.4 | 14.6 | ||
BshapeNet+ [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | BshapeNet: Object Detection and Instance Segmentation with Bounding Shape Masks | Ba Rom Kang, Ha Young Kim | BshapeNet+ single model, ResNet-50-FPN as base model, Cityscapes [fine-only + COCO] more details | n/a | 50.7 | 54.0 | 38.1 | 72.9 | 53.0 | 69.4 | 56.8 | 33.9 | 27.3 | |
AdaptIS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Adaptive Instance Selection network architecture for class-agnostic instance segmentation. Given an input image and a point (x, y), it generates a mask for the object located at (x, y). The network adapts to the input point with the help of AdaIN layers, thus producing different masks for different objects on the same image. AdaptIS generates pixel-accurate object masks; therefore, it accurately segments objects of complex shape or severely occluded ones. more details | n/a | 52.1 | 49.8 | 46.8 | 71.3 | 54.3 | 77.2 | 63.8 | 35.4 | 18.1 | |
AInnoSegmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Faen Zhang, Jiahong Wu, Haotian Cao, Zhizheng Yang, Jianfei Song, Ze Huang, Jiashui Huang, Shenglan Ben | AInnoSegmentation uses SE-ResNet-152 as the backbone and an FPN model to extract multi-level features, a self-developed method to combine the multi-level features, COCO datasets to pre-train the model, and so on more details | n/a | 56.7 | 59.9 | 47.5 | 78.9 | 58.5 | 80.7 | 56.5 | 39.3 | 32.6 | |
iFLYTEK-CV | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 58.7 | 61.2 | 51.8 | 79.7 | 63.4 | 78.8 | 60.9 | 39.6 | 34.6 | ||
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. This entry fixes a minor inference bug (i.e., same trained model) for instance segmentation, compared to the previous submission. more details | n/a | 53.1 | 54.3 | 44.6 | 78.9 | 52.7 | 70.2 | 54.3 | 36.3 | 33.6 | |
snake | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Deep Snake for Real-Time Instance Segmentation | Sida Peng, Wen Jiang, Huaijin Pi, Xiuli Li, Hujun Bao, Xiaowei Zhou | CVPR 2020 | more details | 0.217 | 44.7 | 52.1 | 37.7 | 77.1 | 39.9 | 62.6 | 42.2 | 23.4 | 22.7 |
PolyTransform | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | PolyTransform: Deep Polygon Transformer for Instance Segmentation | Justin Liang, Namdar Homayounfar, Wei-Chiu Ma, Yuwen Xiong, Rui Hu, Raquel Urtasun | more details | n/a | 58.0 | 60.1 | 49.4 | 80.1 | 62.3 | 75.3 | 63.0 | 40.8 | 33.3 | |
StixelPointNet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Learning Stixel-Based Instance Segmentation | Monty Santarossa, Lukas Schneider, Claudius Zelenka, Lars Schmarje, Reinhard Koch, Uwe Franke | IV 2021 | An adapted version of the PointNet is trained on Stixels as input for instance segmentation. more details | 0.035 | 18.3 | 19.3 | 13.8 | 29.5 | 27.7 | 42.8 | 0.0 | 6.3 | 6.9 |
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 53.1 | 51.6 | 42.9 | 77.1 | 57.5 | 73.4 | 54.1 | 36.6 | 31.7 |
PolyTransform + SegFix | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Anonymous | openseg | We simply apply a novel post-processing scheme based on PolyTransform (thanks to the authors of PolyTransform for providing their segmentation results). The performance of the baseline PolyTransform is 40.1% and our method achieves 41.2%. Besides, our method could also improve the results of PointRend and PANet by more than 1.0% without any re-training or fine-tuning of the segmentation models. more details | n/a | 59.2 | 61.9 | 51.0 | 81.8 | 63.5 | 76.4 | 63.5 | 41.5 | 34.2 |
GAIS-Net | yes | yes | no | no | no | no | yes | yes | no | no | no | no | yes | yes | Geometry-Aware Instance Segmentation with Disparity Maps | Cho-Ying Wu, Xiaoyan Hu, Michael Happold, Qiangeng Xu, Ulrich Neumann | Scalability in Autonomous Driving, workshop at CVPR 2020 | Geometry-Aware Instance Segmentation with Disparity Maps more details | n/a | 46.6 | 51.9 | 41.9 | 73.4 | 45.4 | 58.4 | 45.8 | 29.8 | 26.3 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 57.3 | 54.3 | 45.6 | 78.9 | 63.5 | 83.6 | 61.7 | 37.6 | 33.2 |
LevelSet R-CNN [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 50.3 | 55.3 | 42.8 | 78.0 | 48.4 | 62.1 | 51.3 | 34.7 | 30.0 |
LevelSet R-CNN [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | LevelSet R-CNN: A Deep Variational Method for Instance Segmentation | Namdar Homayounfar*, Yuwen Xiong*, Justin Liang*, Wei-Chiu Ma, Raquel Urtasun | ECCV 2020 | Obtaining precise instance segmentation masks is of high importance in many modern applications such as robotic manipulation and autonomous driving. Currently, many state of the art models are based on the Mask R-CNN framework which, while very powerful, outputs masks at low resolutions which could result in imprecise boundaries. On the other hand, classic variational methods for segmentation impose desirable global and local data and geometry constraints on the masks by optimizing an energy functional. While mathematically elegant, their direct dependence on good initialization, non-robust image cues and manual setting of hyperparameters renders them unsuitable for modern applications. We propose LevelSet R-CNN, which combines the best of both worlds by obtaining powerful feature representations that are combined in an end-to-end manner with a variational segmentation framework. We demonstrate the effectiveness of our approach on COCO and Cityscapes datasets. more details | n/a | 58.1 | 61.1 | 48.7 | 80.4 | 60.2 | 76.6 | 61.0 | 41.9 | 35.3 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 52.0 | 51.3 | 43.0 | 77.1 | 54.4 | 76.7 | 47.4 | 35.7 | 30.1 |
Deep Affinity Net [fine-only] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Deep Affinity Net: Instance Segmentation via Affinity | Xingqian Xu, Mangtik Chiu, Thomas Huang, Honghui Shi | A proposal-free method that uses FPN generated features and network predicted 4-neighbor affinities to reconstruct instance segments. During inference time, an efficient graph partitioning algorithm, Cascade-GAEC, is introduced to overcome the long execution time in the high-resolution graph partitioning problem. more details | n/a | 46.9 | 39.6 | 36.0 | 66.5 | 54.6 | 76.6 | 56.8 | 26.2 | 18.8 | |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 59.8 | 60.1 | 51.4 | 82.6 | 65.7 | 79.6 | 53.8 | 46.0 | 39.1 | 
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 57.4 | 56.4 | 48.4 | 80.2 | 62.2 | 78.2 | 58.1 | 40.5 | 35.4 |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 57.0 | 56.6 | 46.9 | 80.7 | 63.1 | 79.5 | 48.6 | 44.2 | 36.8 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 55.8 | 58.1 | 49.0 | 78.6 | 55.6 | 79.0 | 54.1 | 36.1 | 35.9 | |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation ResNet-101, pretrained on ImageNet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 32.1 | 40.0 | 27.2 | 53.7 | 33.3 | 41.8 | 23.3 | 19.2 | 18.0 |
UniDet_RVC | yes | yes | no | no | no | no | no | no | no | no | 2 | 2 | no | no | Anonymous | more details | 300.0 | 48.0 | 50.0 | 35.1 | 68.2 | 55.6 | 70.4 | 52.6 | 28.8 | 22.9 | ||
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 35.1 | 41.4 | 29.8 | 63.4 | 42.0 | 59.9 | 6.0 | 18.8 | 19.2 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 55.4 | 56.2 | 48.8 | 80.1 | 57.2 | 73.2 | 51.3 | 40.7 | 35.6 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 59.6 | 56.8 | 50.4 | 80.3 | 63.3 | 83.7 | 58.8 | 45.1 | 38.6 | |
PolyTransform + SegFix + BPR | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Look Closer to Segment Better: Boundary Patch Refinement for Instance Segmentation | Chufeng Tang*, Hang Chen*, Xiao Li, Jianmin Li, Zhaoxiang Zhang, Xiaolin Hu | CVPR 2021 | Tremendous efforts have been made on instance segmentation but the mask quality is still not satisfactory. The boundaries of predicted instance masks are usually imprecise due to the low spatial resolution of feature maps and the imbalance problem caused by the extremely low proportion of boundary pixels. To address these issues, we propose a conceptually simple yet effective post-processing refinement framework to improve the boundary quality based on the results of any instance segmentation model, termed BPR. Following the idea of looking closer to segment boundaries better, we extract and refine a series of small boundary patches along the predicted instance boundaries. The refinement is accomplished by a boundary patch refinement network at higher resolution. The proposed BPR framework yields significant improvements over the Mask R-CNN baseline on Cityscapes benchmark, especially on the boundary-aware metrics. Moreover, by applying the BPR framework to the PolyTransform + SegFix baseline, we reached 1st place on the Cityscapes leaderboard. more details | n/a | 60.7 | 63.5 | 52.2 | 83.7 | 64.2 | 77.7 | 66.6 | 42.7 | 35.2 |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 60.9 | 58.9 | 51.1 | 82.3 | 68.4 | 85.1 | 55.4 | 46.2 | 39.9 | |
CenterPoly | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Anonymous | more details | 0.045 | 24.5 | 29.5 | 18.6 | 54.3 | 21.1 | 23.8 | 24.0 | 12.7 | 11.6 | ||
HRI-INST | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI-INST more details | n/a | 61.0 | 58.9 | 51.8 | 81.0 | 67.9 | 86.7 | 61.7 | 43.7 | 36.1 | ||
DH-ARI | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | DH-ARI more details | n/a | 61.6 | 64.4 | 55.3 | 85.2 | 62.1 | 82.0 | 58.9 | 46.3 | 38.7 | ||
HRI-TRANS | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | HRI transformer instance segmentation more details | n/a | 61.3 | 60.7 | 51.8 | 83.0 | 67.9 | 86.7 | 61.7 | 42.2 | 36.1 | ||
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 61.2 | 57.1 | 51.4 | 81.4 | 65.1 | 85.6 | 75.9 | 40.3 | 33.1 |
QueryInst-Parallel Completion | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Hai Wang; Shilin Zhu; PuPu; Meng; Le; Apple; Rong | We propose a novel feature-completion network framework, QueryInst Parallel Completion. First, a global context module is introduced into the backbone network to obtain instance information. Then, a parallel semantic branch and a parallel global branch are proposed to extract semantic and global information from the feature layers, so as to complete the ROI features. In addition, we also propose a feature transfer structure, which explicitly strengthens the connection between the detection and segmentation branches, changes the gradient back-propagation path, and indirectly complements the ROI features. more details | n/a | 50.8 | 58.5 | 43.9 | 79.3 | 43.5 | 68.7 | 49.9 | 32.5 | 30.6 | 
CenterPoly v2 | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Real-time instance segmentation with polygons using an Intersection-over-Union loss | Katia Jodogne-del Litto, Guillaume-Alexandre Bilodeau | more details | 0.045 | 27.2 | 28.0 | 16.3 | 51.8 | 31.6 | 43.2 | 23.4 | 13.4 | 10.0 | |
Jiangsu-University-Environmental-Perception | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 55.9 | 61.5 | 49.1 | 80.3 | 47.3 | 76.3 | 54.4 | 43.1 | 35.3 |
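As a quick sanity check on the instance-level table above: the "average" column is the unweighted mean of the eight per-class AP values (person through bicycle). The following minimal sketch assumes only that relationship; the helper name is illustrative and the numbers are copied from the LevelSet R-CNN [fine-only] row.

```python
# Minimal sketch: recompute the "average" column from the eight per-class
# AP values. The helper name is illustrative; values are copied from the
# "LevelSet R-CNN [fine-only]" row of the table above.

def mean_ap(per_class_ap):
    """Unweighted mean over the eight 'thing' classes."""
    return sum(per_class_ap) / len(per_class_ap)

# person, rider, car, truck, bus, train, motorcycle, bicycle
levelset_rcnn_fine = [55.3, 42.8, 78.0, 48.4, 62.1, 51.3, 34.7, 30.0]
print(round(mean_ap(levelset_rcnn_fine), 1))  # 50.3, matching the "average" column
```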
Panoptic Semantic Labeling Task
PQ on class-level
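The per-class values below follow the standard panoptic quality (PQ) definition of Kirillov et al.: predicted and ground-truth segments of the same class are matched when their IoU exceeds 0.5, and PQ factorizes into segmentation quality (SQ, tabulated further below) and recognition quality (RQ). The sketch below restates that definition for a single class; variable names and example numbers are illustrative and not taken from any submission.

```python
# Sketch of the standard per-class panoptic quality decomposition
# (PQ = SQ * RQ). A prediction and a ground-truth segment of the same
# class count as a true positive (TP) when their IoU > 0.5.
# Example numbers are illustrative only.

def panoptic_quality(tp_ious, num_fp, num_fn):
    """Return (PQ, SQ, RQ) for one class from the IoUs of matched segments."""
    tp = len(tp_ious)
    if tp == 0:
        return 0.0, 0.0, 0.0
    sq = sum(tp_ious) / tp                         # mean IoU over matched segments
    rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)   # F1-style detection term
    return sq * rq, sq, rq

pq, sq, rq = panoptic_quality([0.9, 0.8, 0.6], num_fp=1, num_fn=1)
print(round(pq, 3), round(sq, 3), round(rq, 3))  # 0.575 0.767 0.75
```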
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | all | things | stuff | road | sidewalk | building | wall | fence | pole | traffic light | traffic sign | vegetation | terrain | sky | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Holistic Attention Network for End-to-End Panoptic Segmentation more details | n/a | 51.2 | 40.4 | 59.0 | 97.6 | 69.9 | 84.9 | 17.6 | 21.0 | 42.7 | 42.5 | 60.7 | 89.0 | 34.5 | 87.9 | 46.0 | 33.9 | 57.1 | 39.4 | 46.8 | 31.3 | 36.3 | 32.5 | ||
TASCNet-enhanced | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Learning to Fuse Things and Stuff | Jie Li, Allan Raventos, Arjun Bhargava, Takaaki Tagawa, Adrien Gaidon | Arxiv | We proposed a joint network for panoptic segmentation, which is a variation of our previous work, TASCNet. (https://arxiv.org/pdf/1812.01192.pdf) A shared backbone (ResNeXt-101) pretrained on COCO detection is used. more details | n/a | 60.7 | 53.4 | 66.0 | 98.2 | 75.5 | 87.8 | 33.3 | 35.2 | 53.9 | 52.5 | 66.8 | 90.0 | 43.1 | 89.9 | 55.2 | 52.6 | 66.9 | 49.4 | 57.6 | 55.7 | 46.9 | 43.2 |
Sem2Ins | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous NeurIPS19 submission #4671 more details | n/a | 52.3 | 37.8 | 62.8 | 89.0 | 74.2 | 86.0 | 30.5 | 35.5 | 51.9 | 47.7 | 66.9 | 88.3 | 37.3 | 83.8 | 39.8 | 39.7 | 47.7 | 36.8 | 43.2 | 31.6 | 33.9 | 29.5 | ||
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Anurag Arnab and Philip H.S. Torr | Computer Vision and Pattern Recognition (CVPR) 2017 | Results are produced using the method from our CVPR 2017 paper, "Pixelwise Instance Segmentation with a Dynamically Instantiated Network." On the instance segmentation benchmark, the identical model achieved a mean AP of 23.4. This model also served as the fully supervised baseline in our ECCV 2018 paper, "Weakly- and Semi-Supervised Panoptic Segmentation". more details | n/a | 55.4 | 44.0 | 63.7 | 98.3 | 75.2 | 87.6 | 31.2 | 35.7 | 43.3 | 47.7 | 65.1 | 89.6 | 38.1 | 88.7 | 44.7 | 43.1 | 50.8 | 40.6 | 49.9 | 46.2 | 42.4 | 34.7 |
Seamless Scene Segmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic and Peter Kontschieder | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 | Seamless Scene Segmentation is a CNN-based architecture that can be trained end-to-end to predict a complete class- and instance-specific labeling for each pixel in an image. To tackle this task, also known as "Panoptic Segmentation", we take advantage of a novel segmentation head that seamlessly integrates multi-scale features generated by a Feature Pyramid Network with contextual information conveyed by a light-weight DeepLab-like module. In this submission we use a single model, with a ResNet50 backbone, pre-trained on ImageNet and Mapillary Vistas Research Edition, and fine-tuned on Cityscapes' fine training set. Inference is single-shot, without any form of test-time augmentation. Validation scores of the submitted model are 64.97 PQ, 68.04 PQ stuff, 60.75 PQ thing, 80.73 IoU. more details | n/a | 62.6 | 56.0 | 67.5 | 98.3 | 76.8 | 88.8 | 36.6 | 39.7 | 59.0 | 51.6 | 65.7 | 90.4 | 45.8 | 89.6 | 57.7 | 53.5 | 68.9 | 52.6 | 62.2 | 54.7 | 51.2 | 47.0 |
SSAP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SSAP: Single-Shot Instance Segmentation With Affinity Pyramid | Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang | ICCV 2019 | SSAP, ResNet-101, Cityscapes fine-only more details | n/a | 58.9 | 48.4 | 66.5 | 98.0 | 75.9 | 88.1 | 34.7 | 38.0 | 56.1 | 51.6 | 68.8 | 89.7 | 43.6 | 87.3 | 50.3 | 46.1 | 65.6 | 43.2 | 54.8 | 48.1 | 41.9 | 37.3 |
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. more details | n/a | 62.3 | 52.1 | 69.7 | 98.5 | 78.1 | 89.0 | 38.8 | 38.6 | 64.3 | 61.5 | 70.9 | 90.8 | 46.0 | 90.4 | 54.1 | 50.3 | 66.8 | 44.9 | 58.1 | 51.1 | 47.4 | 44.4 | |
iFLYTEK-CV | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 66.0 | 58.0 | 71.8 | 98.7 | 79.6 | 90.1 | 45.5 | 46.3 | 65.4 | 59.6 | 74.5 | 91.5 | 47.9 | 90.3 | 59.2 | 56.6 | 70.0 | 54.9 | 63.7 | 61.5 | 51.8 | 46.2 | ||
Unifying Training and Inference for Panoptic Segmentation [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Unifying Training and Inference for Panoptic Segmentation | Qizhu Li, Xiaojuan Qi, Philip H.S. Torr | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | We present an end-to-end network to bridge the gap between training and inference pipeline for panoptic segmentation. In contrast to recent works, our network exploits a parametrised, yet lightweight panoptic segmentation submodule, powered by an end-to-end learnt dense instance affinity, to capture the probability that any pair of pixels belong to the same instance. This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both “stuff” and “thing” classes, without any post-processing. This model uses a ResNet-50 backbone, and is trained with only Cityscapes' fine data. more details | n/a | 61.0 | 52.7 | 67.1 | 98.2 | 76.0 | 87.5 | 33.5 | 37.5 | 56.3 | 56.0 | 69.1 | 89.9 | 44.3 | 89.5 | 54.0 | 52.7 | 64.3 | 49.5 | 57.4 | 52.4 | 47.8 | 43.9 |
Unifying Training and Inference for Panoptic Segmentation [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Unifying Training and Inference for Panoptic Segmentation | Qizhu Li, Xiaojuan Qi, Philip H.S. Torr | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | We present an end-to-end network to bridge the gap between training and inference pipeline for panoptic segmentation. In contrast to recent works, our network exploits a parametrised, yet lightweight panoptic segmentation submodule, powered by an end-to-end learnt dense instance affinity, to capture the probability that any pair of pixels belong to the same instance. This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both “stuff” and “thing” classes, without any post-processing. This model uses a ResNet-101 backbone, and is pretrained on COCO 2017 training images and finetuned on Cityscapes' fine data. more details | n/a | 63.3 | 56.0 | 68.5 | 98.4 | 77.3 | 88.7 | 40.1 | 39.1 | 58.6 | 56.3 | 69.8 | 90.2 | 45.7 | 89.8 | 56.5 | 54.7 | 65.8 | 51.5 | 62.5 | 61.2 | 50.6 | 45.4 |
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 62.8 | 53.8 | 69.3 | 98.5 | 78.5 | 88.8 | 38.1 | 41.4 | 62.4 | 56.5 | 71.8 | 90.8 | 45.7 | 90.3 | 54.5 | 50.8 | 67.8 | 50.2 | 59.3 | 55.6 | 48.3 | 43.8 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 65.6 | 56.9 | 71.9 | 98.6 | 79.5 | 89.9 | 44.1 | 47.5 | 66.8 | 59.7 | 72.9 | 91.3 | 49.4 | 90.9 | 55.9 | 52.9 | 68.8 | 55.2 | 64.5 | 62.2 | 50.6 | 44.8 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 62.7 | 53.4 | 69.5 | 98.4 | 77.7 | 88.6 | 37.9 | 40.6 | 63.3 | 59.4 | 71.6 | 90.7 | 45.2 | 90.4 | 53.8 | 50.8 | 66.7 | 47.6 | 61.1 | 57.7 | 47.0 | 42.4 |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 67.8 | 61.5 | 72.4 | 98.6 | 80.5 | 90.3 | 45.2 | 47.7 | 68.8 | 59.6 | 74.1 | 91.9 | 48.5 | 90.9 | 60.2 | 58.0 | 72.2 | 60.0 | 69.2 | 64.5 | 56.6 | 51.3 | 
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 66.6 | 58.7 | 72.3 | 98.7 | 79.8 | 90.3 | 47.6 | 50.0 | 67.6 | 58.9 | 73.4 | 91.5 | 46.4 | 91.0 | 57.2 | 54.9 | 69.8 | 57.3 | 69.0 | 62.7 | 52.5 | 46.6 |
EfficientPS [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 64.1 | 56.7 | 69.4 | 98.5 | 79.1 | 89.1 | 41.7 | 41.7 | 59.9 | 55.6 | 70.1 | 90.9 | 47.2 | 90.1 | 60.9 | 57.5 | 70.3 | 48.4 | 60.0 | 55.1 | 50.9 | 50.4 | |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 66.5 | 58.8 | 72.0 | 98.7 | 79.8 | 90.1 | 44.4 | 46.7 | 68.8 | 59.9 | 74.5 | 91.6 | 47.5 | 90.6 | 58.5 | 55.3 | 70.6 | 57.7 | 67.5 | 57.3 | 54.2 | 49.5 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 67.1 | 60.9 | 71.6 | 98.6 | 79.8 | 90.1 | 44.3 | 45.6 | 66.1 | 58.6 | 73.7 | 91.4 | 48.7 | 90.4 | 61.6 | 58.6 | 71.5 | 58.8 | 68.1 | 65.3 | 51.8 | 51.4 | |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation ResNet-101, pretrained on ImageNet; supplied with altered MVD to include WildDash2 classes; does not contain other RVC label policies (i.e. no ADE20K/COCO-specific classes -> rvcsubset and not a proper submission) more details | n/a | 51.9 | 41.4 | 59.5 | 87.0 | 59.8 | 85.1 | 30.8 | 30.2 | 50.2 | 47.0 | 58.5 | 88.7 | 29.0 | 87.8 | 49.7 | 40.7 | 57.9 | 39.2 | 49.1 | 25.6 | 36.9 | 32.5 |
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 48.0 | 40.8 | 53.2 | 97.2 | 64.6 | 84.1 | 22.1 | 20.5 | 28.6 | 20.4 | 42.8 | 86.8 | 30.9 | 87.9 | 47.9 | 45.5 | 62.0 | 42.7 | 48.6 | 8.8 | 36.7 | 34.5 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 64.8 | 56.5 | 70.9 | 98.6 | 79.3 | 89.5 | 39.5 | 47.0 | 66.5 | 58.6 | 73.3 | 91.2 | 45.9 | 90.7 | 58.4 | 56.3 | 70.7 | 51.3 | 62.0 | 53.5 | 51.6 | 48.2 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 67.8 | 60.9 | 72.8 | 98.7 | 79.9 | 90.3 | 46.4 | 48.8 | 69.3 | 60.7 | 76.1 | 91.7 | 47.7 | 91.7 | 59.8 | 58.3 | 72.0 | 59.4 | 69.1 | 62.9 | 55.2 | 50.7 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 68.5 | 61.9 | 73.3 | 98.7 | 80.8 | 90.6 | 46.9 | 53.7 | 70.6 | 60.0 | 74.6 | 91.9 | 47.6 | 90.9 | 60.6 | 57.4 | 72.6 | 61.7 | 72.1 | 62.2 | 56.5 | 51.7 | |
hri_panoptic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 68.0 | 61.0 | 73.1 | 98.8 | 81.0 | 90.5 | 47.9 | 50.3 | 67.4 | 60.4 | 75.2 | 91.7 | 49.0 | 91.7 | 60.6 | 56.3 | 72.4 | 62.2 | 68.6 | 61.7 | 55.5 | 50.4 | ||
COPS | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach | Ahmed Abbas, Paul Swoboda | NeurIPS 2021 | COPS fully differentiable with ResNet 50 backbone. more details | n/a | 60.0 | 51.8 | 65.9 | 98.1 | 75.1 | 87.6 | 35.0 | 37.1 | 51.3 | 53.5 | 66.2 | 89.8 | 42.7 | 88.8 | 51.8 | 49.0 | 64.0 | 45.8 | 57.2 | 59.6 | 45.3 | 41.8 |
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 66.2 | 59.6 | 70.9 | 98.6 | 79.6 | 89.8 | 45.6 | 45.9 | 60.1 | 57.6 | 72.1 | 91.2 | 48.7 | 91.2 | 56.0 | 56.6 | 68.9 | 57.7 | 69.2 | 67.8 | 53.4 | 47.2 |
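Regarding the aggregate columns in the table above: "things" averages the eight instance classes (person through bicycle), "stuff" averages the remaining eleven classes (road through sky), and "all" averages all nineteen; small deviations are possible because the per-class entries are rounded to one decimal. A minimal sketch, using the values of the TASCNet-enhanced row:

```python
# Sketch of how the aggregate PQ columns relate to the per-class entries.
# Values are copied from the "TASCNet-enhanced" row above; small rounding
# differences can occur since the table reports one decimal per class.

tascnet_stuff  = [98.2, 75.5, 87.8, 33.3, 35.2, 53.9, 52.5, 66.8, 90.0, 43.1, 89.9]  # road..sky
tascnet_things = [55.2, 52.6, 66.9, 49.4, 57.6, 55.7, 46.9, 43.2]                    # person..bicycle

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(tascnet_things), 1))                  # 53.4 ("things" column)
print(round(mean(tascnet_stuff), 1))                   # 66.0 ("stuff" column)
print(round(mean(tascnet_stuff + tascnet_things), 1))  # 60.7 ("all" column)
```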
SQ on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | all | things | stuff | road | sidewalk | building | wall | fence | pole | traffic light | traffic sign | vegetation | terrain | sky | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Holistic Attention Network for End-to-End Panoptic Segmentation more details | n/a | 77.7 | 75.3 | 79.4 | 97.7 | 82.3 | 88.4 | 69.7 | 69.4 | 65.4 | 69.0 | 74.1 | 90.5 | 74.8 | 92.2 | 73.8 | 64.3 | 81.4 | 82.5 | 82.8 | 77.9 | 71.6 | 67.7 | ||
TASCNet-enhanced | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Learning to Fuse Things and Stuff | Jie Li, Allan Raventos, Arjun Bhargava, Takaaki Tagawa, Adrien Gaidon | Arxiv | We proposed a joint network for panoptic segmentation, which is a variation of our previous work, TASCNet. (https://arxiv.org/pdf/1812.01192.pdf) A shared backbone (ResNeXt-101) pretrained on COCO detection is used. more details | n/a | 81.0 | 79.7 | 82.0 | 98.3 | 84.1 | 90.4 | 74.2 | 73.0 | 68.0 | 73.8 | 77.7 | 91.3 | 78.2 | 93.2 | 77.4 | 74.2 | 84.6 | 85.3 | 86.5 | 81.2 | 75.5 | 72.5 |
Sem2Ins | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous NeurIPS19 submission #4671 more details | n/a | 78.9 | 77.2 | 80.2 | 89.1 | 82.7 | 88.9 | 75.0 | 75.3 | 67.3 | 71.8 | 76.4 | 89.9 | 76.7 | 88.6 | 73.6 | 71.7 | 80.0 | 85.3 | 85.4 | 78.7 | 73.6 | 69.4 | ||
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Anurag Arnab and Philip H.S. Torr | Computer Vision and Pattern Recognition (CVPR) 2017 | Results are produced using the method from our CVPR 2017 paper, "Pixelwise Instance Segmentation with a Dynamically Instantiated Network." On the instance segmentation benchmark, the identical model achieved a mean AP of 23.4. This model also served as the fully supervised baseline in our ECCV 2018 paper, "Weakly- and Semi-Supervised Panoptic Segmentation". more details | n/a | 79.7 | 77.3 | 81.5 | 98.4 | 84.3 | 90.2 | 74.0 | 73.7 | 66.3 | 72.3 | 75.8 | 90.9 | 77.9 | 92.5 | 74.4 | 70.8 | 78.5 | 84.3 | 85.2 | 81.3 | 73.9 | 70.1 |
Seamless Scene Segmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic and Peter Kontschieder | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 | Seamless Scene Segmentation is a CNN-based architecture that can be trained end-to-end to predict a complete class- and instance-specific labeling for each pixel in an image. To tackle this task, also known as "Panoptic Segmentation", we take advantage of a novel segmentation head that seamlessly integrates multi-scale features generated by a Feature Pyramid Network with contextual information conveyed by a light-weight DeepLab-like module. In this submission we use a single model, with a ResNet50 backbone, pre-trained on ImageNet and Mapillary Vistas Research Edition, and fine-tuned on Cityscapes' fine training set. Inference is single-shot, without any form of test-time augmentation. Validation scores of the submitted model are 64.97 PQ, 68.04 PQ stuff, 60.75 PQ thing, 80.73 IoU. more details | n/a | 82.1 | 80.3 | 83.5 | 98.4 | 85.1 | 91.0 | 76.4 | 75.0 | 70.4 | 77.3 | 80.6 | 91.9 | 78.5 | 93.6 | 78.5 | 74.8 | 84.9 | 87.4 | 86.3 | 81.0 | 76.3 | 73.2 |
SSAP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SSAP: Single-Shot Instance Segmentation With Affinity Pyramid | Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang | ICCV 2019 | SSAP, ResNet-101, Cityscapes fine-only more details | n/a | 82.4 | 82.9 | 82.0 | 98.1 | 84.8 | 90.7 | 74.8 | 74.6 | 68.0 | 72.4 | 76.8 | 91.4 | 78.8 | 91.8 | 81.3 | 76.3 | 88.2 | 91.5 | 91.7 | 83.9 | 76.1 | 74.3 |
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. more details | n/a | 82.4 | 80.7 | 83.6 | 98.6 | 85.4 | 91.1 | 77.1 | 75.8 | 71.1 | 75.4 | 80.7 | 91.9 | 79.3 | 93.2 | 77.7 | 73.6 | 84.9 | 87.2 | 88.5 | 83.9 | 76.8 | 72.8 | |
iFLYTEK-CV | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 83.2 | 81.3 | 84.6 | 98.8 | 86.6 | 92.0 | 78.6 | 76.3 | 72.8 | 78.2 | 82.0 | 92.4 | 79.3 | 93.5 | 78.4 | 75.6 | 85.3 | 88.4 | 88.1 | 84.0 | 77.0 | 73.4 | ||
Unifying Training and Inference for Panoptic Segmentation [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Unifying Training and Inference for Panoptic Segmentation | Qizhu Li, Xiaojuan Qi, Philip H.S. Torr | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | We present an end-to-end network to bridge the gap between training and inference pipeline for panoptic segmentation. In contrast to recent works, our network exploits a parametrised, yet lightweight panoptic segmentation submodule, powered by an end-to-end learnt dense instance affinity, to capture the probability that any pair of pixels belong to the same instance. This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both “stuff” and “thing” classes, without any post-processing. This model uses a ResNet-50 backbone, and is trained with only Cityscapes' fine data. more details | n/a | 81.4 | 79.6 | 82.8 | 98.3 | 84.3 | 90.4 | 75.6 | 74.5 | 68.8 | 75.9 | 79.2 | 91.4 | 78.8 | 93.1 | 77.4 | 74.4 | 83.8 | 85.3 | 85.1 | 82.5 | 75.8 | 72.7 |
Unifying Training and Inference for Panoptic Segmentation [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Unifying Training and Inference for Panoptic Segmentation | Qizhu Li, Xiaojuan Qi, Philip H.S. Torr | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | We present an end-to-end network to bridge the gap between training and inference pipeline for panoptic segmentation. In contrast to recent works, our network exploits a parametrised, yet lightweight panoptic segmentation submodule, powered by an end-to-end learnt dense instance affinity, to capture the probability that any pair of pixels belong to the same instance. This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both “stuff” and “thing” classes, without any post-processing. This model uses a ResNet-101 backbone, and is pretrained on COCO 2017 training images and finetuned on Cityscapes' fine data. more details | n/a | 82.4 | 81.0 | 83.4 | 98.6 | 85.6 | 90.9 | 76.9 | 75.6 | 69.7 | 76.2 | 79.6 | 91.7 | 79.7 | 93.1 | 78.1 | 75.2 | 84.2 | 87.7 | 88.4 | 84.2 | 77.1 | 73.3 |
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 82.4 | 81.0 | 83.5 | 98.6 | 85.2 | 90.9 | 76.1 | 75.1 | 71.4 | 76.9 | 80.3 | 91.8 | 78.6 | 93.2 | 77.5 | 74.3 | 85.2 | 88.3 | 89.0 | 84.4 | 76.5 | 73.0 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 83.0 | 81.0 | 84.5 | 98.7 | 85.7 | 91.7 | 78.1 | 77.9 | 72.9 | 78.0 | 82.1 | 92.0 | 78.9 | 93.5 | 78.0 | 74.3 | 85.3 | 87.7 | 88.8 | 84.9 | 76.1 | 72.8 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 82.2 | 80.7 | 83.3 | 98.6 | 85.0 | 90.9 | 76.3 | 75.4 | 70.9 | 75.3 | 80.1 | 91.6 | 79.0 | 93.2 | 77.6 | 73.2 | 85.3 | 87.3 | 89.0 | 83.9 | 77.1 | 72.6 |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 83.8 | 81.6 | 85.4 | 98.8 | 86.7 | 92.1 | 79.1 | 78.5 | 74.7 | 80.1 | 83.5 | 92.5 | 79.6 | 93.8 | 78.6 | 75.1 | 85.5 | 88.0 | 88.3 | 85.9 | 77.7 | 73.9 | 
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 83.5 | 81.3 | 85.0 | 98.8 | 86.3 | 91.9 | 78.4 | 78.8 | 73.9 | 79.3 | 83.0 | 92.2 | 79.2 | 93.5 | 78.4 | 74.9 | 85.5 | 87.6 | 88.4 | 85.6 | 76.9 | 73.3 |
EfficientPS [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 82.6 | 80.9 | 83.8 | 98.6 | 85.9 | 91.2 | 78.5 | 76.0 | 70.2 | 76.3 | 80.3 | 92.0 | 78.9 | 93.4 | 79.0 | 75.5 | 85.2 | 87.9 | 87.5 | 82.6 | 76.1 | 73.5 | |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 83.5 | 81.1 | 85.3 | 98.7 | 86.6 | 92.0 | 79.8 | 78.1 | 74.4 | 79.9 | 83.2 | 92.4 | 79.8 | 93.4 | 78.2 | 74.2 | 85.3 | 87.7 | 88.5 | 83.9 | 77.4 | 73.2 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 83.4 | 81.5 | 84.8 | 98.7 | 86.3 | 91.7 | 78.4 | 77.6 | 72.7 | 79.1 | 82.5 | 92.2 | 80.4 | 93.6 | 79.5 | 75.9 | 85.9 | 88.2 | 87.8 | 84.0 | 76.8 | 73.9 | |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation with a ResNet-101 backbone pretrained on ImageNet; trained on an altered MVD that includes the WildDash2 classes; does not cover the other RVC label policies (i.e., no ADE20K/COCO-specific classes), hence the rvcsubset name and not a proper submission more details | n/a | 78.5 | 78.0 | 78.9 | 88.6 | 79.1 | 87.4 | 73.4 | 70.9 | 67.2 | 72.7 | 75.9 | 90.6 | 68.9 | 92.7 | 76.5 | 72.2 | 84.7 | 84.9 | 84.4 | 79.4 | 71.9 | 69.7 |
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 78.3 | 78.7 | 78.0 | 97.4 | 80.1 | 87.3 | 73.0 | 66.9 | 62.0 | 63.3 | 71.5 | 89.4 | 74.5 | 93.0 | 78.3 | 71.6 | 84.4 | 85.8 | 86.1 | 80.8 | 72.4 | 70.4 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 83.4 | 81.7 | 84.5 | 98.7 | 86.0 | 91.6 | 76.8 | 76.9 | 72.9 | 79.2 | 82.5 | 92.2 | 79.7 | 93.6 | 78.5 | 75.4 | 85.4 | 87.8 | 89.2 | 86.4 | 77.2 | 73.8 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 83.8 | 81.8 | 85.3 | 98.7 | 86.6 | 92.2 | 78.5 | 77.8 | 74.9 | 80.4 | 83.2 | 92.5 | 79.7 | 93.3 | 78.4 | 75.6 | 85.5 | 88.3 | 89.3 | 85.7 | 77.5 | 73.9 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 83.9 | 81.7 | 85.6 | 98.8 | 86.6 | 92.2 | 79.8 | 78.6 | 74.9 | 80.4 | 83.9 | 92.6 | 79.7 | 93.8 | 78.2 | 75.7 | 85.4 | 88.9 | 89.0 | 85.3 | 77.4 | 74.0 | |
hri_panoptic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 84.3 | 82.1 | 85.9 | 98.9 | 87.2 | 92.4 | 80.2 | 79.5 | 75.0 | 80.9 | 83.9 | 92.8 | 81.0 | 93.4 | 78.9 | 76.7 | 85.6 | 88.8 | 89.7 | 83.8 | 78.9 | 74.7 | ||
COPS | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach | Ahmed Abbas, Paul Swoboda | NeurIPS 2021 | COPS fully differentiable with ResNet 50 backbone. more details | n/a | 81.4 | 80.2 | 82.3 | 98.3 | 84.7 | 90.1 | 76.6 | 75.1 | 67.0 | 74.8 | 77.3 | 91.2 | 78.3 | 92.5 | 76.5 | 74.1 | 83.6 | 87.6 | 89.0 | 83.8 | 75.3 | 71.7 |
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 84.0 | 82.6 | 85.1 | 98.7 | 86.4 | 91.9 | 79.4 | 78.5 | 73.6 | 78.7 | 82.7 | 92.2 | 80.3 | 93.4 | 79.7 | 76.4 | 86.2 | 88.5 | 89.4 | 87.9 | 78.7 | 74.2 |
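The Axial-DeepLab rows above hinge on factorizing 2D self-attention into two 1D attentions, one along the image height and one along the width. The snippet below is a minimal sketch of that factorization only; it omits the position-sensitive attention terms described in the paper and is not the authors' implementation:

```python
# Minimal sketch (assumed, for illustration): full 2D self-attention over an
# HxW feature map is replaced by 1D attention along the height axis followed
# by 1D attention along the width axis.
import torch
import torch.nn as nn

class AxialAttention2D(nn.Module):
    def __init__(self, channels: int, heads: int = 8):
        super().__init__()
        # One attention module per axis; positional terms are omitted here,
        # whereas the paper uses a position-sensitive attention design.
        self.height_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.width_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Attend along the height axis: each image column is one sequence.
        cols = x.permute(0, 3, 2, 1).reshape(b * w, h, c)        # (B*W, H, C)
        cols, _ = self.height_attn(cols, cols, cols)
        x = cols.reshape(b, w, h, c).permute(0, 3, 2, 1)         # (B, C, H, W)
        # Attend along the width axis: each image row is one sequence.
        rows = x.permute(0, 2, 3, 1).reshape(b * h, w, c)        # (B*H, W, C)
        rows, _ = self.width_attn(rows, rows, rows)
        return rows.reshape(b, h, w, c).permute(0, 3, 1, 2)      # (B, C, H, W)

# Example: a 64-channel feature map at 1/16 resolution of a 1024x2048 image.
feats = torch.randn(1, 64, 64, 128)
out = AxialAttention2D(64)(feats)
print(out.shape)  # torch.Size([1, 64, 64, 128])
```

Restricting each attention to a single axis reduces the cost from quadratic in the number of pixels to roughly linear per axis, which is what makes attention over large Cityscapes feature maps tractable.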
RQ on class-level
name | fine | fine | coarse | coarse | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | all | things | stuff | road | sidewalk | building | wall | fence | pole | traffic light | traffic sign | vegetation | terrain | sky | person | rider | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
HANet | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Holistic Attention Network for End-to-End Panoptic Segmentation more details | n/a | 63.9 | 53.5 | 71.4 | 99.8 | 84.9 | 96.1 | 25.3 | 30.3 | 65.4 | 61.6 | 82.0 | 98.4 | 46.1 | 95.4 | 62.3 | 52.6 | 70.2 | 47.8 | 56.6 | 40.2 | 50.7 | 48.0 | ||
TASCNet-enhanced | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Learning to Fuse Things and Stuff | Jie Li, Allan Raventos, Arjun Bhargava, Takaaki Tagawa, Adrien Gaidon | Arxiv | We proposed a joint network for panoptic segmentation, which is a variation of our previous work, TASCNet. (https://arxiv.org/pdf/1812.01192.pdf) A shared backbone (ResNeXt-101) pretrained on COCO detection is used. more details | n/a | 73.8 | 67.0 | 78.8 | 99.9 | 89.7 | 97.1 | 44.9 | 48.3 | 79.4 | 71.1 | 85.9 | 98.6 | 55.1 | 96.5 | 71.3 | 70.9 | 79.0 | 57.9 | 66.6 | 68.6 | 62.1 | 59.5 |
Sem2Ins | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | Anonymous NeurIPS19 submission #4671 more details | n/a | 65.2 | 48.9 | 77.0 | 99.9 | 89.7 | 96.7 | 40.6 | 47.2 | 77.2 | 66.4 | 87.6 | 98.2 | 48.6 | 94.6 | 54.0 | 55.3 | 59.6 | 43.2 | 50.6 | 40.2 | 46.1 | 42.5 | ||
Pixelwise Instance Segmentation with a Dynamically Instantiated Network | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Pixelwise Instance Segmentation with a Dynamically Instantiated Network | Anurag Arnab and Philip H.S. Torr | Computer Vision and Pattern Recognition (CVPR) 2017 | Results are produced using the method from our CVPR 2017 paper, "Pixelwise Instance Segmentation with a Dynamically Instantiated Network." On the instance segmentation benchmark, the identical model achieved a mean AP of 23.4. This model also served as the fully supervised baseline in our ECCV 2018 paper, "Weakly- and Semi-Supervised Panoptic Segmentation". more details | n/a | 68.1 | 57.0 | 76.1 | 99.9 | 89.1 | 97.1 | 42.2 | 48.4 | 65.3 | 66.0 | 85.9 | 98.5 | 48.9 | 95.9 | 60.0 | 60.9 | 64.7 | 48.1 | 58.6 | 56.8 | 57.4 | 49.5 |
Seamless Scene Segmentation | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic and Peter Kontschieder | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 | Seamless Scene Segmentation is a CNN-based architecture that can be trained end-to-end to predict a complete class- and instance-specific labeling for each pixel in an image. To tackle this task, also known as "Panoptic Segmentation", we take advantage of a novel segmentation head that seamlessly integrates multi-scale features generated by a Feature Pyramid Network with contextual information conveyed by a light-weight DeepLab-like module. In this submission we use a single model, with a ResNet50 backbone, pre-trained on ImageNet and Mapillary Vistas Research Edition, and fine-tuned on Cityscapes' fine training set. Inference is single-shot, without any form of test-time augmentation. Validation scores of the submitted model are 64.97 PQ, 68.04 PQ stuff, 60.75 PQ thing, 80.73 IoU. more details | n/a | 75.3 | 69.6 | 79.4 | 99.9 | 90.3 | 97.6 | 47.9 | 53.0 | 83.9 | 66.7 | 81.6 | 98.4 | 58.4 | 95.7 | 73.5 | 71.5 | 81.1 | 60.2 | 72.0 | 67.5 | 67.0 | 64.3 |
SSAP | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | SSAP: Single-Shot Instance Segmentation With Affinity Pyramid | Naiyu Gao, Yanhu Shan, Yupei Wang, Xin Zhao, Yinan Yu, Ming Yang, Kaiqi Huang | ICCV 2019 | SSAP, ResNet-101, Cityscapes fine-only more details | n/a | 70.6 | 58.3 | 79.6 | 99.9 | 89.5 | 97.1 | 46.4 | 50.9 | 82.5 | 71.3 | 89.5 | 98.1 | 55.3 | 95.1 | 61.8 | 60.5 | 74.3 | 47.2 | 59.8 | 57.3 | 55.0 | 50.1 |
Panoptic-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | Our proposed bottom-up Panoptic-DeepLab is conceptually simple yet delivers state-of-the-art results. The Panoptic-DeepLab adopts dual-ASPP and dual-decoder modules, specific to semantic segmentation and instance segmentation respectively. The semantic segmentation prediction follows the typical design of any semantic segmentation model (e.g., DeepLab), while the instance segmentation prediction involves a simple instance center regression, where the model learns to predict instance centers as well as the offset from each pixel to its corresponding center. This submission exploits only Cityscapes fine annotations. more details | n/a | 74.8 | 64.7 | 82.1 | 99.8 | 91.4 | 97.6 | 50.3 | 50.9 | 90.3 | 81.6 | 87.9 | 98.7 | 58.0 | 97.0 | 69.7 | 68.3 | 78.6 | 51.4 | 65.6 | 60.8 | 61.7 | 61.0 | |
iFLYTEK-CV | yes | yes | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | iFLYTEK Research, CV Group more details | n/a | 78.5 | 71.3 | 83.8 | 99.9 | 91.9 | 98.0 | 57.9 | 60.6 | 89.8 | 76.3 | 90.8 | 99.0 | 60.4 | 96.6 | 75.5 | 74.9 | 82.0 | 62.1 | 72.3 | 73.2 | 67.3 | 62.9 | ||
Unifying Training and Inference for Panoptic Segmentation [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Unifying Training and Inference for Panoptic Segmentation | Qizhu Li, Xiaojuan Qi, Philip H.S. Torr | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | We present an end-to-end network to bridge the gap between training and inference pipeline for panoptic segmentation. In contrast to recent works, our network exploits a parametrised, yet lightweight panoptic segmentation submodule, powered by an end-to-end learnt dense instance affinity, to capture the probability that any pair of pixels belong to the same instance. This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both “stuff” and “thing” classes, without any post-processing. This model uses a ResNet-50 backbone, and is trained with only Cityscapes' fine data. more details | n/a | 73.9 | 66.2 | 79.6 | 99.9 | 90.2 | 96.9 | 44.3 | 50.3 | 81.8 | 73.8 | 87.1 | 98.4 | 56.3 | 96.2 | 69.7 | 70.8 | 76.7 | 58.0 | 67.5 | 63.5 | 63.1 | 60.3 |
Unifying Training and Inference for Panoptic Segmentation [COCO] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Unifying Training and Inference for Panoptic Segmentation | Qizhu Li, Xiaojuan Qi, Philip H.S. Torr | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 | We present an end-to-end network to bridge the gap between training and inference pipeline for panoptic segmentation. In contrast to recent works, our network exploits a parametrised, yet lightweight panoptic segmentation submodule, powered by an end-to-end learnt dense instance affinity, to capture the probability that any pair of pixels belong to the same instance. This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both “stuff” and “thing” classes, without any post-processing. This model uses a ResNet-101 backbone, and is pretrained on COCO 2017 training images and finetuned on Cityscapes' fine data. more details | n/a | 75.9 | 69.1 | 80.9 | 99.9 | 90.3 | 97.7 | 52.2 | 51.7 | 84.1 | 73.9 | 87.7 | 98.4 | 57.3 | 96.5 | 72.4 | 72.8 | 78.2 | 58.7 | 70.7 | 72.6 | 65.6 | 61.9 |
Axial-DeepLab-XL [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 75.2 | 66.3 | 81.7 | 99.9 | 92.1 | 97.6 | 50.1 | 55.2 | 87.4 | 73.4 | 89.3 | 98.9 | 58.1 | 97.0 | 70.3 | 68.4 | 79.5 | 56.8 | 66.7 | 65.8 | 63.1 | 60.0 |
Axial-DeepLab-L [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 78.1 | 70.1 | 84.0 | 99.9 | 92.8 | 98.1 | 56.4 | 61.0 | 91.7 | 76.5 | 88.8 | 99.2 | 62.7 | 97.2 | 71.6 | 71.2 | 80.6 | 63.0 | 72.6 | 73.3 | 66.5 | 61.6 |
Axial-DeepLab-L [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 75.3 | 66.0 | 82.1 | 99.9 | 91.4 | 97.5 | 49.7 | 53.9 | 89.4 | 78.9 | 89.4 | 99.0 | 57.2 | 97.0 | 69.4 | 69.4 | 78.3 | 54.5 | 68.6 | 68.8 | 60.9 | 58.4 |
Naive-Student (iterative semi-supervised learning with Panoptic-DeepLab) | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation | Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens | Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale human-annotated datasets for achieving state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist. In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences to improve the performance on urban scene segmentation, simultaneously tackling semantic, instance, and panoptic segmentation. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated for several times. As a result, our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results at all three Cityscapes benchmarks, reaching the performance of 67.8% PQ, 42.6% AP, and 85.2% mIOU on the test set. We view this work as a notable step towards building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks. more details | n/a | 80.2 | 75.3 | 83.8 | 99.9 | 92.9 | 98.1 | 57.2 | 60.8 | 92.1 | 74.3 | 88.8 | 99.4 | 61.0 | 96.9 | 76.6 | 77.2 | 84.5 | 68.2 | 78.4 | 75.2 | 72.9 | 69.5 | |
Axial-DeepLab-XL [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation | Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2020 (spotlight) | Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. more details | n/a | 79.0 | 72.0 | 84.0 | 100.0 | 92.5 | 98.3 | 60.7 | 63.5 | 91.4 | 74.3 | 88.4 | 99.2 | 58.6 | 97.3 | 72.9 | 73.3 | 81.7 | 65.4 | 78.0 | 73.2 | 68.2 | 63.5 |
EfficientPS [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 76.8 | 70.2 | 81.7 | 99.9 | 92.0 | 97.8 | 53.1 | 55.0 | 85.4 | 72.9 | 87.2 | 98.7 | 59.8 | 96.4 | 77.1 | 76.2 | 82.6 | 55.1 | 68.5 | 66.7 | 66.8 | 68.7 | |
Panoptic-DeepLab [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation | Bowen Cheng, Maxwell D. Collins, Yukun Zhu, Ting Liu, Thomas S. Huang, Hartwig Adam, Liang-Chieh Chen | We employ a stronger backbone, WR-41, in Panoptic-DeepLab. For Panoptic-DeepLab, please refer to https://arxiv.org/abs/1911.10194. For wide-ResNet-41 (WR-41) backbone, please refer to https://arxiv.org/abs/2005.10266. more details | n/a | 78.8 | 72.5 | 83.5 | 99.9 | 92.1 | 97.9 | 55.7 | 59.8 | 92.5 | 75.0 | 89.6 | 99.2 | 59.5 | 96.9 | 74.8 | 74.5 | 82.7 | 65.8 | 76.3 | 68.3 | 70.1 | 67.6 | |
EfficientPS [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | Understanding the scene in which an autonomous robot operates is critical for its competent functioning. Such scene comprehension necessitates recognizing instances of traffic participants along with general scene semantics which can be effectively addressed by the panoptic segmentation task. In this paper, we introduce the Efficient Panoptic Segmentation (EfficientPS) architecture that consists of a shared backbone which efficiently encodes and fuses semantically rich multi-scale features. We incorporate a new semantic head that aggregates fine and contextual features coherently and a new variant of Mask R-CNN as the instance head. We also propose a novel panoptic fusion module that congruously integrates the output logits from both the heads of our EfficientPS architecture to yield the final panoptic segmentation output. Additionally, we introduce the KITTI panoptic segmentation dataset that contains panoptic annotations for the popularly challenging KITTI benchmark. Extensive evaluations on Cityscapes, KITTI, Mapillary Vistas and Indian Driving Dataset demonstrate that our proposed architecture consistently sets the new state-of-the-art on all these four benchmarks while being the most efficient and fast panoptic segmentation architecture to date. more details | n/a | 79.6 | 74.6 | 83.3 | 99.9 | 92.4 | 98.3 | 56.5 | 58.7 | 90.8 | 74.1 | 89.4 | 99.1 | 60.5 | 96.6 | 77.5 | 77.1 | 83.2 | 66.7 | 77.6 | 77.7 | 67.5 | 69.5 | |
seamseg_rvcsubset | no | no | no | no | no | no | no | no | no | no | no | no | yes | yes | Seamless Scene Segmentation | Porzi, Lorenzo and Rota Bulò, Samuel and Colovic, Aleksander and Kontschieder, Peter | The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2019 | Seamless Scene Segmentation with a ResNet-101 backbone pretrained on ImageNet; trained on an altered MVD that includes the WildDash2 classes; does not cover the other RVC label policies (i.e., no ADE20K/COCO-specific classes), hence the rvcsubset name and not a proper submission more details | n/a | 64.8 | 53.0 | 73.3 | 98.2 | 75.6 | 97.4 | 41.9 | 42.6 | 74.7 | 64.7 | 77.1 | 97.9 | 42.1 | 94.7 | 65.0 | 56.4 | 68.5 | 46.2 | 58.1 | 32.2 | 51.3 | 46.6 |
EffPS_b1bs4_RVC | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | EfficientPS: Efficient Panoptic Segmentation | Rohit Mohan, Abhinav Valada | EfficientPS with EfficientNet-b1 backbone. Trained with a batch size of 4. more details | n/a | 59.2 | 51.9 | 64.4 | 99.8 | 80.7 | 96.3 | 30.3 | 30.7 | 46.1 | 32.2 | 59.9 | 97.1 | 41.4 | 94.5 | 61.2 | 63.6 | 73.4 | 49.8 | 56.4 | 10.9 | 50.7 | 49.0 | |
Panoptic-DeepLab w/ SWideRNet [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 77.0 | 69.2 | 82.7 | 99.9 | 92.2 | 97.7 | 51.5 | 61.2 | 91.3 | 74.0 | 88.8 | 99.0 | 57.6 | 96.9 | 74.4 | 74.7 | 82.7 | 58.4 | 69.5 | 61.9 | 66.9 | 65.3 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas] | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. more details | n/a | 80.2 | 74.4 | 84.4 | 99.9 | 92.3 | 97.9 | 59.1 | 62.7 | 92.6 | 75.5 | 91.4 | 99.2 | 59.8 | 98.3 | 76.2 | 77.1 | 84.2 | 67.3 | 77.3 | 73.4 | 71.2 | 68.6 | |
Panoptic-DeepLab w/ SWideRNet [Mapillary Vistas + Pseudo-labels] | yes | yes | no | no | no | no | no | no | yes | yes | no | no | no | no | Scaling Wide Residual Networks for Panoptic Segmentation | Liang-Chieh Chen, Huiyu Wang, Siyuan Qiao | We revisit the architecture design of Wide Residual Networks. We design a baseline model by incorporating the simple and effective Squeeze-and-Excitation and Switchable Atrous Convolution to the Wide-ResNets. Its network capacity is further scaled up or down by adjusting the width (i.e., channel size) and depth (i.e., number of layers), resulting in a family of SWideRNets (short for Scaling Wide Residual Networks). We demonstrate that such a simple scaling scheme, coupled with grid search, identifies several SWideRNets that significantly advance state-of-the-art performance on panoptic segmentation datasets in both the fast model regime and strong model regime. Following Naive-Student, this model is additionally trained with pseudo-labels generated from Cityscapes Video and train-extra set (i.e., the coarse annotations are not used, but the images are). more details | n/a | 80.9 | 75.6 | 84.8 | 99.9 | 93.3 | 98.4 | 58.7 | 68.3 | 94.3 | 74.6 | 88.9 | 99.2 | 59.8 | 96.9 | 77.5 | 75.9 | 85.0 | 69.5 | 81.0 | 72.8 | 73.0 | 69.9 | |
hri_panoptic | yes | yes | no | no | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 79.9 | 74.1 | 84.1 | 99.9 | 92.9 | 97.9 | 59.8 | 63.3 | 89.9 | 74.7 | 89.6 | 98.9 | 60.4 | 98.2 | 76.8 | 73.4 | 84.6 | 70.1 | 76.5 | 73.6 | 70.4 | 67.5 | ||
COPS | yes | yes | no | no | no | no | no | no | no | no | 4 | 4 | no | no | Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach | Ahmed Abbas, Paul Swoboda | NeurIPS 2021 | COPS fully differentiable with ResNet 50 backbone. more details | n/a | 72.6 | 64.6 | 78.5 | 99.8 | 88.6 | 97.2 | 45.6 | 49.4 | 76.6 | 71.5 | 85.7 | 98.4 | 54.6 | 96.1 | 67.7 | 66.1 | 76.6 | 52.2 | 64.3 | 71.1 | 60.2 | 58.3 |
kMaX-DeepLab [Cityscapes-fine] | yes | yes | no | no | no | no | no | no | no | no | no | no | yes | yes | k-means Mask Transformer | Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen | ECCV 2022 | kMaX-DeepLab w/ ConvNeXt-L backbone (ImageNet-22k + 1k pretrained). This result is obtained by the kMaX-DeepLab trained for Panoptic Segmentation task. No test-time augmentation or other external dataset. more details | n/a | 77.9 | 71.9 | 82.3 | 99.9 | 92.2 | 97.7 | 57.4 | 58.4 | 81.6 | 73.2 | 87.2 | 99.0 | 60.6 | 97.7 | 70.3 | 74.1 | 79.9 | 65.2 | 77.4 | 77.1 | 67.9 | 63.6 |
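For reference, RQ (recognition quality) is the F1-style matching term of the panoptic quality: predicted and ground-truth segments are matched at an IoU threshold of 0.5, RQ = |TP| / (|TP| + ½|FP| + ½|FN|), and per class PQ = SQ × RQ. A small illustrative helper (assumed for exposition; not part of the official evaluation scripts):

```python
# Illustrative helper (assumed): recognition quality for a single class, given
# the numbers of matched segments (IoU > 0.5), unmatched predictions (FP) and
# unmatched ground-truth segments (FN). The class PQ would then be SQ * RQ.
def recognition_quality(tp: int, fp: int, fn: int) -> float:
    denom = tp + 0.5 * fp + 0.5 * fn
    return tp / denom if denom > 0 else 0.0

# Example: 80 matched car instances, 10 false positives and 20 missed
# instances give RQ = 80 / (80 + 5 + 10) ≈ 0.842.
print(recognition_quality(80, 10, 20))
```

This is why a method with high-quality masks for the segments it does find (high SQ) can still score comparatively low RQ when many instances are missed or spuriously predicted.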
3D Vehicle Detection Task
All average metrics
name | 3d | 3d | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | DS | AP | BEV | OS Yaw | OS PitchRoll | SizeSim |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3D-GCK | yes | yes | no | no | no | no | no | no | no | no | no | no | Single-Shot 3D Detection of Vehicles from Monocular RGB Images via Geometry Constrained Keypoints in Real-Time | Nils Gählert, Jun-Jun Wan, Nicolas Jourdan, Jan Finkbeiner, Uwe Franke, and Joachim Denzler | IV 2020 | 3D-GCK is based on the standard SSD 2D object detection framework and lifts the 2D detections to 3D space by predicting additional regression and classification parameters. Hence, the runtime is kept close to pure 2D object detection. The additional parameters are transformed to 3D bounding box keypoints within the network under geometric constraints. 3D-GCK features a full 3D description including all three angles of rotation without supervision by any labeled ground truth data for the object's orientation, as it focuses on certain keypoints within the image plane. more details | 0.04 | 37.4 | 42.5 | 96.1 | 81.9 | 100.0 | 70.7 |
HW-Noah-AVPNet2.3 | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.04 | 40.1 | 43.5 | 96.0 | 88.0 | 100.0 | 82.1 | ||
iFlytek-ZBGKRD-fcos3d-depth-norm | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 42.9 | 47.6 | 96.6 | 80.4 | 100.0 | 80.4 |
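The rows above are consistent with the detection score (DS) scaling the 2D detection AP by the average of the BEV, orientation, and size similarity columns; the authoritative metric definition is given in the Benchmark Suite, so treat the following as an assumed, rough illustration rather than the exact formula:

```python
# Assumed reading of the columns above: DS scales the 2D AP by the mean of
# the BEV, orientation-similarity and size-similarity terms (all in percent).
def detection_score(ap, bev, os_yaw, os_pitchroll, size_sim):
    return ap * (bev + os_yaw + os_pitchroll + size_sim) / 4.0 / 100.0

# 3D-GCK row: AP 42.5, BEV 96.1, OS Yaw 81.9, OS PitchRoll 100.0, SizeSim 70.7.
# This aggregate-level estimate gives ≈37.0; the published DS (37.4) is
# computed per class and then averaged, so the two need not match exactly.
print(detection_score(42.5, 96.1, 81.9, 100.0, 70.7))
```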
DS on class-level
name | 3d | 3d | 16-bit | 16-bit | depth | depth | video | video | sub | sub | code | code | title | authors | venue | description | Runtime [s] | all | car | truck | bus | train | motorcycle | bicycle |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3D-GCK | yes | yes | no | no | no | no | no | no | no | no | no | no | Single-Shot 3D Detection of Vehicles from Monocular RGB Images via Geometry Constrained Keypoints in Real-Time | Nils Gählert, Jun-Jun Wan, Nicolas Jourdan, Jan Finkbeiner, Uwe Franke, and Joachim Denzler | IV 2020 | 3D-GCK is based on the standard SSD 2D object detection framework and lifts the 2D detections to 3D space by predicting additional regression and classification parameters. Hence, the runtime is kept close to pure 2D object detection. The additional parameters are transformed to 3D bounding box keypoints within the network under geometric constraints. 3D-GCK features a full 3D description including all three angles of rotation without supervision by any labeled ground truth data for the object's orientation, as it focuses on certain keypoints within the image plane. more details | 0.04 | 37.4 | 67.5 | 29.0 | 32.3 | 23.1 | 32.9 | 39.9 |
HW-Noah-AVPNet2.3 | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | 0.04 | 40.1 | 77.2 | 30.0 | 29.9 | 24.5 | 37.2 | 42.0 | ||
iFlytek-ZBGKRD-fcos3d-depth-norm | yes | yes | no | no | no | no | no | no | no | no | no | no | Anonymous | more details | n/a | 42.9 | 75.8 | 33.3 | 41.7 | 23.6 | 39.6 | 43.5 |
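3D-GCK, as described above, lifts 2D detections into metric 3D space from a single RGB image. As a deliberately simplified stand-in for such lifting (a plain pinhole back-projection, not the authors' geometry-constrained keypoint formulation), a detected 2D box centre plus an estimated depth can be mapped to a 3D centre as follows:

```python
# Simplified stand-in for monocular 2D-to-3D lifting: back-project a detected
# 2D box centre with an estimated depth through the pinhole camera model.
def backproject(u: float, v: float, depth: float,
                fx: float, fy: float, cx: float, cy: float):
    """Return the 3D point (camera coordinates) imaged at pixel (u, v)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Example call; the intrinsics below are illustrative values, not taken from
# the Cityscapes calibration files.
centre_3d = backproject(u=1241.0, v=570.0, depth=18.0,
                        fx=2262.5, fy=2265.3, cx=1097.0, cy=513.1)
print(centre_3d)
```

The keypoint-based formulation in 3D-GCK additionally recovers the box size and all three rotation angles, which a single back-projected point cannot provide.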