Abstract
This paper introduces a novel approach to the challenges of transfer learning, which aims to efficiently train a classifier for a new domain using supervised information from similar domains. Traditional transfer learning methods may fail to preserve the discriminative features of the target domain, owing to the scarcity of labelled data and the use of an irrelevant source-domain subspace, resulting in poor classification performance. To overcome these challenges, the proposed approach, called KDADP, transforms the data distributions of both the source and target domains into a lower-dimensional subspace while preserving their discriminatory information. The KDADP model maximizes between-class variance and minimizes within-class variance with L1 penalization, enabling the recovery of the most useful features while reducing the model's complexity. Experimental results on three real-world domain adaptation datasets demonstrate that the proposed KDADP model significantly improves classification performance and outperforms state-of-the-art primitive, shallow, and deep domain adaptation methods.
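The core idea the abstract describes, projecting data into a lower-dimensional subspace that maximizes between-class scatter while minimizing within-class scatter, can be sketched with a Fisher-style discriminant projection. This is an illustrative simplification only, not the authors' KDADP objective: KDADP additionally applies a kernel mapping, aligns source and target distributions, and adds an L1 distance-based penalty, all of which are omitted here.

```python
import numpy as np

def fisher_subspace(X, y, k):
    """Project labelled data into a k-dimensional subspace that roughly
    maximizes between-class scatter and minimizes within-class scatter.

    Illustrative sketch only: KDADP's full objective (kernelization,
    domain alignment, L1 penalization) is not reproduced here.
    """
    classes = np.unique(y)
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter matrix
    Sb = np.zeros((d, d))  # between-class scatter matrix
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mu).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Solve the generalized eigenproblem Sw^{-1} Sb w = lambda w;
    # a small ridge keeps Sw invertible when data are scarce.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-evals.real)
    W = evecs[:, order[:k]].real  # top-k discriminative directions
    return X @ W, W
```

Taking the top eigenvectors of `Sw^{-1} Sb` is the classical way to trade off the two scatter terms; a sparsity-inducing L1 term, as in KDADP, would instead require an iterative solver rather than a closed-form eigendecomposition.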
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Prakash, J., Ghorai, M., Sanodiya, R. (2023). Transfer Learning: Kernel-Based Domain Adaptation with Distance-Based Penalization. In: Maji, P., Huang, T., Pal, N.R., Chaudhury, S., De, R.K. (eds) Pattern Recognition and Machine Intelligence. PReMI 2023. Lecture Notes in Computer Science, vol 14301. Springer, Cham. https://doi.org/10.1007/978-3-031-45170-6_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-45169-0
Online ISBN: 978-3-031-45170-6