Abstract
Unsupervised domain adaptation methods use feature representations of instances in the source and target domains to eliminate domain shift. Notably, instance features are closely related to the overall distribution of their domain, and the information produced during adaptation remains closely tied to the original features. Most existing methods rely on only one of these sources of information and therefore fail to use them fully. We develop the Self-Reinforcing Feedback Domain Adaptation Channel (SRFC). In its feature representation, SRFC fuses global and instance-level information simultaneously and exploits both past history and current information during domain adaptation, so that features are effectively enhanced for the adaptation task. Through the designed self-reinforcing feedback mechanism, SRFC robustly integrates multi-level information throughout adaptation and actively improves the availability and overall value of features via controlled, continuous feedback. Experiments on benchmark datasets verify the advantages of SRFC's fused information for instance feature enhancement and domain adaptation. The modular Self-Reinforcing Feedback Domain Adaptation Channel offers scalability and R&D potential, and we hope it can be extended to more domain adaptation networks that use enhanced instance representations to better accomplish different tasks.
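The abstract describes fusing global (distribution-level) and instance-level features, then feeding past history back into the current step. A minimal numerical sketch of such a feedback fusion step is shown below; the class name, coefficients, and the EMA-style history update are illustrative assumptions for exposition, not the paper's actual SRFC formulation.

```python
import numpy as np

class FeedbackFusion:
    """Hypothetical sketch of a self-reinforcing feedback fusion step.

    Fuses per-instance features with a global (batch-level) statistic,
    then blends the result with a running history of past fused features,
    so each adaptation step uses both current and historical information.
    All names and coefficients are illustrative, not the paper's SRFC.
    """

    def __init__(self, feat_dim, alpha=0.5, beta=0.9):
        self.alpha = alpha                  # weight of global vs. instance info
        self.beta = beta                    # feedback weight on past history
        self.history = np.zeros(feat_dim)   # accumulated past information

    def step(self, instance_feats):
        # instance_feats: (batch, feat_dim) array of per-instance features
        global_info = instance_feats.mean(axis=0)  # batch-level statistic
        # fuse instance-level and global information
        fused = (1 - self.alpha) * instance_feats + self.alpha * global_info
        # feedback: blend current fused features with accumulated history
        out = (1 - self.beta) * fused + self.beta * self.history
        # self-reinforcement: fold the current fused summary into the history
        self.history = self.beta * self.history + (1 - self.beta) * fused.mean(axis=0)
        return out
```

In this sketch, `alpha` trades off instance against global information and `beta` controls how strongly past steps influence the current one, mirroring the abstract's "manageable continuous feedback" at a toy scale.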
This work was funded by Haihe Laboratory in Tianjin, Grant No. 22HHXCJC00007.
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Jia, Y., Zhang, X., Lan, L., Luo, Z. (2023). Self-Reinforcing Feedback Domain Adaptation Channel. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30104-9
Online ISBN: 978-3-031-30105-6