CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency


Abstract

Unsupervised domain adaptation algorithms aim to transfer the knowledge learned from one domain to another (e.g., synthetic to real images). The adapted representations often do not capture pixel-level domain shifts that are crucial for dense prediction tasks (e.g., semantic segmentation). In this paper, we present a novel pixel-wise adversarial domain adaptation algorithm. By leveraging image-to-image translation methods for data augmentation, our key insight is that while the translated images between domains may differ in styles, their predictions for the task should be consistent. We exploit this property and introduce a cross-domain consistency loss that enforces consistent predictions from our adapted model. Through extensive experimental results, we show that our method compares favorably against the state-of-the-art on a wide variety of unsupervised domain adaptation tasks.
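The cross-domain consistency idea can be sketched as a symmetric, pixel-wise divergence between the two task networks' predictions on an image and its translated counterpart. The snippet below is a minimal illustrative sketch in NumPy, not the paper's implementation; the function name, logit shapes, and the choice of a symmetric KL penalty are assumptions for illustration.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_domain_consistency_loss(logits_a, logits_b, eps=1e-8):
    """Hypothetical sketch of a cross-domain consistency loss:
    symmetric per-pixel KL divergence between two sets of dense
    predictions (e.g., one model on a source image, the other on
    its translated version). logits_*: (H, W, C) class scores."""
    p = softmax(logits_a)
    q = softmax(logits_b)
    kl_pq = (p * (np.log(p + eps) - np.log(q + eps))).sum(axis=-1)
    kl_qp = (q * (np.log(q + eps) - np.log(p + eps))).sum(axis=-1)
    # Average the symmetric divergence over all pixels.
    return float((kl_pq + kl_qp).mean())
```

Minimizing this term encourages the two predictions to agree at every pixel even though the input images differ in style, which is the property the adapted model is trained to satisfy.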
Citation

Yun-Chun Chen, Yen-Yu Lin, Ming-Hsuan Yang, and Jia-Bin Huang, "CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency", in IEEE Conference on Computer Vision and Pattern Recognition, 2019.


BibTex
@inproceedings{CrDoCo,
  author    = {Chen, Yun-Chun and Lin, Yen-Yu and Yang, Ming-Hsuan and Huang, Jia-Bin}, 
  title     = {CrDoCo: Pixel-level Domain Transfer with Cross-Domain Consistency},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2019}
}
      