
RKHS subspace domain adaption via minimum distribution gap

  • Theoretical Advances
  • Published in: Pattern Analysis and Applications

Abstract

Subspace learning in a Reproducing Kernel Hilbert Space (RKHS) is among the most popular approaches to domain adaption. The key goal is to embed the source and target domain samples into a common RKHS subspace in which their distributions match better. However, most existing domain adaption measures are based either on first-order statistics, which cannot accurately quantify the difference between non-Gaussian distributions, or on complicated covariance matrices, which are difficult to use and optimize. In this paper, we propose a neat and effective RKHS subspace domain adaption measure: the Minimum Distribution Gap (MDG), for which a rigorous mathematical formula can be derived to learn the weighting matrix of the optimized orthogonal Hilbert subspace basis via the Lagrange multiplier method. To demonstrate the effectiveness of the proposed MDG measure, extensive numerical experiments have been performed on different datasets; comparisons with four other state-of-the-art algorithms in the literature show that the proposed MDG measure is very promising.
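The MDG formulation itself is derived in the body of the paper, which is not included in this preview. As background for the kernel distribution-gap idea the abstract describes, the classical kernel two-sample gap (the MMD of Gretton et al.) between a source and a target sample can be sketched as follows; this is an illustrative sketch, not the paper's MDG measure, and the Gaussian kernel and its bandwidth are assumptions:

```python
import math
import random

def rbf(x, y, gamma=1.0):
    # Gaussian RBF kernel k(x, y) = exp(-gamma * |x - y|^2),
    # the implicit feature map into the RKHS.
    return math.exp(-gamma * (x - y) ** 2)

def mmd2(xs, ys, gamma=1.0):
    # Biased empirical MMD^2: the squared RKHS distance between the
    # mean embeddings of the two samples. Zero iff the embeddings match.
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

random.seed(0)
src = [random.gauss(0.0, 1.0) for _ in range(200)]
tgt_near = [random.gauss(0.1, 1.0) for _ in range(200)]  # small domain shift
tgt_far = [random.gauss(2.0, 1.0) for _ in range(200)]   # large domain shift
print(mmd2(src, tgt_near) < mmd2(src, tgt_far))  # smaller shift, smaller gap
```

A subspace domain-adaption method in this family searches for a projection under which such a gap between the projected source and target samples is minimized.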


Notes

  1. https://github.com/jindongwang/transferlearning/tree/master/data.

  2. http://www.daviddlewis.com/resources/testcollections/reuters21578/.

  3. http://yann.lecun.com/exdb/mnist/index.html.

  4. http://www-i6.informatik.rwth-aachen.de/.


Funding

No funding was received to assist with the preparation of this manuscript.

Author information

Correspondence to Zhengming Ma or Shaolin Liao.

Ethics declarations

Conflict of interest

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Identical random variables

Two random variables with finite second-order moments are identical if and only if the mean square error between them is zero,

$$\begin{aligned} E\left[ \left| Y-Y' \right| ^{2}\right] =0 \Leftrightarrow Y = Y'. \end{aligned}$$
(22)

To prove this, the expectation in Eq. (22) can be written as a double integral over the joint probability density,

$$\begin{aligned} E\left[ \left| Y-Y' \right| ^{2}\right] = \int \int _{\Omega (Y, Y')} \left( y-y'\right) ^2 p(y, y') dydy'. \end{aligned}$$
(23)

Because both \(\left( y-y'\right) ^2\) and \( p(y, y')\) are non-negative, Eq. (23) is zero if and only if, at every point of the probability domain \(\Omega (Y, Y')\), at least one of the following two conditions holds,

$$\begin{aligned} \left\{ \begin{matrix} \left( y-y'\right) ^2 = 0; \\ p(y, y') = 0. \end{matrix} \right. \end{aligned}$$
(24)

Condition (24) is equivalent to the joint probability density taking the form \(p(y, y') = f(y) \delta (y-y')\), whose marginals are

$$\begin{aligned} p(y)&= \int _{y'} p(y, y')\, dy' = \int _{y'} f(y) \delta (y-y')\, dy' = f(y), \\ p(y')&= \int _{y} p(y, y')\, dy = \int _{y} f(y) \delta (y-y')\, dy = f(y), \end{aligned}$$
(25)

from which the marginal probabilities of \(Y\) and \(Y'\) are identical, and Eq. (22) is proved.
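The equivalence can be checked numerically: when \(Y' = Y\) the mean square error vanishes exactly, whereas for an independent copy with the same marginal it equals \(2\,\mathrm{Var}(Y)\). A minimal Monte Carlo sketch (the standard-normal distribution and sample size are illustrative choices):

```python
import random

random.seed(1)
n = 100_000
y = [random.gauss(0.0, 1.0) for _ in range(n)]

# Case 1: Y' = Y (identical random variables), so E[|Y - Y'|^2] = 0.
mse_same = sum((a - a) ** 2 for a in y) / n

# Case 2: Y' independent of Y with the same marginal,
# so E[|Y - Y'|^2] = 2 * Var(Y) = 2 for a standard normal.
y_indep = [random.gauss(0.0, 1.0) for _ in range(n)]
mse_indep = sum((a - b) ** 2 for a, b in zip(y, y_indep)) / n

print(mse_same)    # 0.0
print(mse_indep)   # close to 2.0
```

Identical marginals alone are therefore not enough: it is the joint density concentrating on the diagonal \(y = y'\) that forces the mean square error to zero.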

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Qiu, Y., Zhang, C., Xiong, C. et al. RKHS subspace domain adaption via minimum distribution gap. Pattern Anal Applic 26, 1425–1439 (2023). https://doi.org/10.1007/s10044-023-01170-y
