Normalized Cut Loss for Weakly-supervised CNN Segmentation


Meng Tang (1, 2), Abdelaziz Djelouah (1), Federico Perazzi (1, 3), Yuri Boykov (2), Christopher Schroers (1)
(1) Disney Research Zurich, Switzerland   (2) University of Waterloo, Canada   (3) Adobe Research

Previous work: Proposal Generation
• Train the CNN from full but "fake" proposals: scribbles (partial masks) are grown into proposals (full masks covering the unknown pixels) via "shallow" segmentation, e.g. graph cut with a data term plus a regularization term, and the proposals stand in for ground truth with full masks.
• What's wrong with proposals?
  • a heuristic to mimic full supervision
  • misleads training into fitting proposal errors
  • requires expensive inference [Lin et al. 2016]

Our methodology: Regularized Loss
• Train with a joint loss over labeled and unlabeled pixels:
  • a (pointwise) empirical risk loss for the labeled pixels
  • a (pairwise or high-order) regularization loss for the unlabeled pixels
• Examples of regularization: clustering criteria, e.g. normalized cut (NC), or a pairwise CRF.
• Training requires gradient computation only, no inference.

Motivation: Regularized Loss for Semi-supervised Learning
• Given M labeled and U unlabeled data points, learn by minimizing an empirical risk loss on the labeled data plus a regularization loss on the unlabeled data, as in semi-supervised deep learning [Weston et al. 2012].

Normalized Cut Regularized Loss
• "Shallow" normalized cut segmentation: pairwise affinity W_ij on RGBXY features; balanced color clustering.
• "Deep" normalized cut loss: the same criterion evaluated on the network's soft-max output; training uses the gradient of normalized cut (formula and sketches below, after the references).
• Towards better color clustering.
[Figure: test image from [Lin et al. 2016]; network output with pCE loss only vs. with the extra NC loss.]

Partial Cross Entropy (pCE) as Loss Sampling
• Simple but overlooked: cross entropy evaluated on the scribble pixels only.
• Scribbles act as pixel sampling in stochastic gradient descent, with weight u_p = 1 for scribble pixels and 0 otherwise.

Follow-up Work [Tang et al. arXiv 2018]
• Other regularization losses: pairwise CRF regularization; normalized cut plus CRF.
• Beyond weak supervision: full supervision (all images fully labeled) and semi-supervision (with additional unlabeled images).

Experiments
• mIoU on the val set with different networks:

  | Supervision | DeepLab-MSc-largeFOV+CRF | DeepLab-VGG16 | ResNet101 | ResNet101+CRF |
  |---|---|---|---|---|
  | weak, pCE (labeled data only) | 62.0 | 60.4 | 69.5 | 72.8 |
  | weak, pCE + NC | 65.1 | 62.4 | 72.8 | 74.5 |
  | Full Sup. | 68.7 | 68.8 | 75.6 | 76.8 |

• Training with shorter scribbles.
[Figure: mIoU vs. scribble length (1 = full scribbles, 0.5, 0.3, 0 = clicks) for the proposal method, this work, and the follow-up work.]

References
1. Lin et al. "ScribbleSup: Scribble-supervised convolutional networks for semantic segmentation." CVPR 2016.
2. Weston et al. "Deep learning via semi-supervised embedding." Neural Networks: Tricks of the Trade, 2012.
3. Tang et al. "On Regularized Losses for Weakly-supervised CNN Segmentation." arXiv 2018.
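The slides point to the gradient of normalized cut without reproducing the formula. For the standard soft relaxation of NC with symmetric affinity W, degree vector d = W1, and soft-max scores S_k per class, one can write the following (a reconstruction from the standard NC definition; the paper's exact notation may differ):

```latex
% Relaxed normalized cut over soft class scores S_k; W is the RGBXY
% affinity, d = W 1 its degree vector. Reconstructed from the standard
% NC definition, not copied verbatim from the paper.
\[
E_{\mathrm{NC}}(S) \;=\; \sum_{k=1}^{K}
  \frac{S_k^\top W\,(\mathbf{1}-S_k)}{d^\top S_k},
\qquad d = W\mathbf{1}.
\]
% By the quotient rule, the gradient with respect to S_k is
\[
\frac{\partial E_{\mathrm{NC}}}{\partial S_k}
\;=\; \frac{d - 2\,W S_k}{d^\top S_k}
\;-\; \frac{S_k^\top W\,(\mathbf{1}-S_k)}{\bigl(d^\top S_k\bigr)^{2}}\, d,
\]
% which backpropagates through the soft-max, so no inference is needed.
```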
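Below is a minimal PyTorch sketch of the deep NC loss under these definitions. The dense N x N affinity is only for illustration on tiny images (efficient implementations evaluate such sums with fast filtering rather than an explicit matrix), and all names here (rgbxy_affinity, sigma_rgb, sigma_xy) are illustrative, not from the authors' code.

```python
# Minimal sketch of a "deep" normalized cut loss, assuming a dense
# Gaussian affinity W over RGBXY features. Illustrative only.
import torch

def rgbxy_affinity(image, sigma_rgb=0.1, sigma_xy=6.0):
    """Dense Gaussian affinity W_ij on concatenated RGBXY features.

    image: (3, H, W) tensor with values in [0, 1].
    Returns: (N, N) affinity matrix, N = H * W.
    """
    _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    rgb = image.reshape(3, -1).t() / sigma_rgb                            # (N, 3)
    xy = torch.stack([ys.flatten(), xs.flatten()], 1).float() / sigma_xy  # (N, 2)
    feat = torch.cat([rgb, xy], dim=1)                                    # (N, 5)
    return torch.exp(-0.5 * torch.cdist(feat, feat) ** 2)

def normalized_cut_loss(softmax, affinity):
    """Relaxed normalized cut over soft segmentations.

    softmax:  (K, N) soft-max scores S_k per class.
    affinity: (N, N) symmetric affinity W.
    """
    d = affinity.sum(dim=1)                         # degrees d = W 1
    loss = 0.0
    for s_k in softmax:                             # s_k: (N,)
        cut = s_k @ affinity @ (1.0 - s_k)          # S_k^T W (1 - S_k)
        assoc = d @ s_k                             # d^T S_k
        loss = loss + cut / assoc.clamp(min=1e-6)
    return loss

# Toy usage: the gradient flows back to the logits via autograd,
# so training needs no discrete inference step.
if __name__ == "__main__":
    h = w = 16
    image = torch.rand(3, h, w)
    logits = torch.randn(2, h * w, requires_grad=True)  # K = 2 classes
    s = torch.softmax(logits, dim=0)
    loss = normalized_cut_loss(s, rgbxy_affinity(image))
    loss.backward()                                     # d(loss)/d(logits)
    print(loss.item(), logits.grad.shape)
```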
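Partial cross entropy admits an equally short sketch: ordinary per-pixel cross entropy whose terms are weighted by u_p, i.e. kept for scribble pixels and dropped elsewhere. Shapes and names are again illustrative.

```python
# Minimal sketch of partial cross-entropy (pCE): cross entropy summed
# only over scribble pixels (u_p = 1 on scribbles, 0 otherwise).
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, labels, scribble_mask):
    """logits: (K, N) raw scores; labels: (N,) class ids (ignored where
    unlabeled); scribble_mask: (N,) bool sampling weights u_p."""
    ce = F.cross_entropy(logits.t(), labels, reduction="none")  # (N,)
    u = scribble_mask.float()
    return (ce * u).sum() / u.sum().clamp(min=1.0)              # mean over scribbles

if __name__ == "__main__":
    k, n = 21, 1024                      # e.g. 21 PASCAL classes
    logits = torch.randn(k, n, requires_grad=True)
    labels = torch.randint(0, k, (n,))
    mask = torch.rand(n) < 0.03          # ~3% of pixels scribbled
    loss = partial_cross_entropy(logits, labels, mask)
    loss.backward()                      # gradients only at scribble pixels
    print(loss.item())
```

The joint regularized loss on a training image is then partial_cross_entropy(logits, labels, mask) + lam * normalized_cut_loss(softmax, affinity), reusing the NC sketch above; the weight lam is a hyper-parameter whose value is not given on this slide.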