Box for Mask and Mask for Box: weak losses for multi-task partially supervised learning
Published as a poster at BMVC 2024.
Object detection and semantic segmentation are both scene understanding tasks, yet they differ in data structure and information level: object detection requires box coordinates for object instances, while semantic segmentation requires pixel-wise class labels. Making use of one task's information to train the other would benefit multi-task partially supervised learning, where each training example is annotated for only a single task, and has the potential to expand training sets with datasets annotated for different tasks. This paper studies various weak losses for partially annotated data in combination with existing supervised losses. We propose the Box-for-Mask and Mask-for-Box strategies, and their combination BoMBo, to distil the necessary information from one task's annotations to train the other. Ablation studies and experimental results on the VOC and COCO datasets show favorable results for the proposed idea. Source code and data splits can be found at github.com/lhoangan/multas.
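To give a rough feel for the two directions, the sketch below derives tight per-class boxes from a segmentation mask (the Mask-for-Box direction) and builds a multiple-instance-style weak segmentation loss from box annotations (the Box-for-Mask direction). It is a minimal illustration under assumptions, not the paper's exact loss formulation: the function names mask_to_boxes and box_weak_seg_loss, the one-box-per-class simplification, and the particular max-inside / suppress-outside loss terms are all illustrative choices.

import torch

def mask_to_boxes(mask: torch.Tensor, num_classes: int):
    """Mask-for-Box (illustrative): derive tight (x1, y1, x2, y2) boxes from an
    (H, W) class-index mask, assuming class 0 is background and, for simplicity,
    at most one box per class."""
    boxes, labels = [], []
    for c in range(1, num_classes):
        ys, xs = torch.nonzero(mask == c, as_tuple=True)
        if ys.numel() == 0:
            continue
        boxes.append(torch.stack([xs.min(), ys.min(), xs.max(), ys.max()]))
        labels.append(c)
    if not boxes:
        return mask.new_zeros((0, 4)), mask.new_zeros((0,), dtype=torch.long)
    return torch.stack(boxes), torch.tensor(labels, dtype=torch.long)

def box_weak_seg_loss(logits: torch.Tensor, boxes: torch.Tensor, labels: torch.Tensor):
    """Box-for-Mask (illustrative): a multiple-instance-style weak loss for a
    segmentation head given only box annotations. Each box should contain at
    least one confident pixel of its class, and a class should not be predicted
    outside all of its boxes. `logits` has shape (C, H, W)."""
    probs = logits.softmax(dim=0)
    loss = logits.new_zeros(())
    outside = torch.ones_like(probs, dtype=torch.bool)
    for (x1, y1, x2, y2), c in zip(boxes.long(), labels.long()):
        inside = probs[c, y1:y2 + 1, x1:x2 + 1]
        loss = loss - inside.max().clamp_min(1e-6).log()    # positive evidence inside the box
        outside[c, y1:y2 + 1, x1:x2 + 1] = False
    for c in labels.long().unique():
        out_c = outside[c]
        if out_c.any():                                     # suppress the class outside its boxes
            loss = loss - (1 - probs[c][out_c]).clamp_min(1e-6).log().mean()
    return loss / max(len(labels), 1)

In a partially supervised batch, an image annotated only for segmentation could supply boxes via mask_to_boxes to the usual supervised detection loss, while an image annotated only with boxes could supply box_weak_seg_loss to the segmentation head; the weak losses actually used in the paper, and how they are combined with the supervised ones, are detailed in the paper itself.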
Citation
@inproceedings{leh2024box,
title = {Box for Mask and Mask for Box: weak losses for multi-task partially supervised learning},
author = {L{\^e}, Hoang-{\^A}n and Berg, Paul and Pham, Minh-Tan},
booktitle = {Proceedings of the British Machine Vision Conference (BMVC)},
year = {2024}
}