Random walk-based fuzzy linear discriminant analysis for dimensionality reduction

  • Original Paper

Abstract

Dealing with high-dimensional data has always been a major problem in pattern recognition and machine learning research, and linear discriminant analysis (LDA) is one of the most popular methods for dimensionality reduction. However, LDA suffers from being overly sensitive to outliers. To alleviate this problem, fuzzy membership can be introduced to reduce the effect of outliers and thereby enhance performance. In this paper, we analyze existing fuzzy strategies and propose a new, effective one based on Markov random walks. The new fuzzy strategy maintains high consistency between local and global discriminative information and preserves the statistical properties of the dataset. Based on the proposed fuzzy strategy, we then derive an efficient fuzzy LDA algorithm by incorporating the fuzzy membership into learning. Theoretical analysis and extensive simulations show the effectiveness of our algorithm, which achieves significantly better results than existing algorithms.

References

  • Belhumeur PN, Hespanha JP, Kriegman DJ (1997) Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 19(7):711–720

  • Chen L, Liao H, Ko M, Lin J, Yu G (2000) A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recogn 33(10):1713–1726

  • Chung FRK (1997) Spectral graph theory. American Mathematical Society, Providence

  • Demsar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30

  • Fukunaga K (1990) Introduction to statistical pattern recognition. Academic Press, New York

  • Garcia S, Herrera F (2008) An extension on “Statistical Comparisons of Classifiers over Multiple Datasets” for all Pairwise Comparisons. J Mach Learn Res 9:2677–2694

  • Graham DB, Allinson NM (1998) Characterizing virtual eigensignatures for general purpose face recognition. In: Face recognition: from theory to application. NATO ASI Series F, Computer and Systems Sciences, vol 163, pp 446–456

  • Hastie T, Tibshirani R, Friedman J (2001) The elements of statistical learning: data mining, inference and prediction. Springer, Berlin

  • He X, Yan S, Hu Y, Niyogi P, Zhang H (2005) Face recognition using Laplacianfaces. IEEE Trans Pattern Anal Mach Intell 27(3):328–340

  • Howland P, Park H (2004) Generalizing discriminant analysis using the generalized singular value decomposition. IEEE Trans Pattern Anal Mach Intell 26(8):995–1005

  • Hull J (1994) A database for handwritten text recognition research. IEEE Trans Pattern Anal Mach Intell 16(5):550–554

  • Keller JM, Gray MR, Givens JA (1985) A fuzzy k-nearest neighbor algorithm. IEEE Trans Syst Man Cybern 15(4):580–585

  • Kwak KC, Pedrycz W (2005) Face recognition using a fuzzy Fisherface classifier. Pattern Recogn 38(10):1717–1732

  • Liu X, Lu C, Chen F (2010) Spatial outlier detection: random walk based approaches. In: ACM SIG SPATIAL Proceedings of GIS

  • Moonesinghe HDK, Tan P (2006) Outlier detection using random walks. In: Proceedings of ICTAI

  • Muller KR, Mika S, Ratsch G, Tsuda K, Scholkopf B (2001) An introduction to kernel-based learning algorithms. IEEE Trans Neural Netw 12(2):181–201

  • Nene SA, Nayar SK, Murase H (1996) Columbia object image library (COIL-20). Technical report CUCS-005-96, Columbia University

  • Roweis S, Saul L (2000) Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323–2326

  • Song X, Zheng Y, Wu X, Yang X, Yang J (2009) A complete fuzzy discriminant analysis approach for face recognition. Appl Soft Comput 10(1):208–214

  • Sun T, Chen S (2007) Class label versus sample label-based CCA. Appl Math Comput 185(1):272–283

  • Sun L, Ceran B, Ye J (2010) A scalable two-stage approach for a class of dimensionality reduction techniques. In: Proceedings of KDD

  • Tenenbaum JB, de Silva V, Langford JC (2000) A global geometric framework for nonlinear dimensionality reduction. Science 290:2319–2323

  • Turk M, Pentland A (1991) Face recognition using Eigenfaces. In: Proceedings of CVPR

  • Wang X, Davidson I (2009) Discovering contexts and contextual outliers using random walks in graphs. In: Proceedings of ICDM

  • Yang J, Frangi AF, Yang J, Zhang D, Jin Z (2005) KPCA plus LDA: a complete kernel Fisher discriminant framework for feature extraction and recognition. IEEE Trans Pattern Anal Mach Intell 27(2):230–244

  • Ye J (2005) Characterization of a family of algorithms for generalized discriminant analysis on undersampled problems. J Mach Learn Res 6:483–502

  • Ye J (2007) Least squares linear discriminant analysis. In: Proceedings of ICML

  • Ye J, Li Q (2005) A two-stage linear discriminant analysis via QR-decomposition. IEEE Trans Pattern Anal Mach Intell 27(6):929–941

  • Yu H, Yang J (2001) A direct LDA algorithm for high-dimensional data with application to face recognition. Pattern Recogn 34(10):2067–2070

  • Zhang Z, Dai G, Jordan MI (2009) A flexible and efficient algorithm for regularized Fisher discriminant analysis. In: Proceedings of ECML PKDD

Author information

Corresponding author

Correspondence to Mingbo Zhao.

Appendix

1.1 Proof of Corollary 1

According to Eq. (12), we have

$$ \begin{aligned} ( {T^{\rm T} T} )_{ij} &= \sum\limits_{k = 1}^{c} {\left( {\frac{{w_{ki} }}{{\sqrt {F_{kk} } }} - \frac{{\sqrt {F_{kk} } }}{l}} \right)\left( {\frac{{w_{kj} }}{{\sqrt {F_{kk} } }} - \frac{{\sqrt {F_{kk} } }}{l}} \right)} \\ &= \sum\limits_{k = 1}^{c} {\frac{{w_{ki} w_{kj} }}{{F_{kk} }}} - \sum\limits_{k = 1}^{c} {\frac{{w_{ki} }}{l}} - \sum\limits_{k = 1}^{c} {\frac{{w_{kj} }}{l}} + \sum\limits_{k = 1}^{c} {\frac{{F_{kk} }}{{l^{2} }}} \\ &= \sum\limits_{k = 1}^{c} {\frac{{w_{ki} w_{kj} }}{{F_{kk} }}} - \frac{1}{l} = - ( {\widetilde{{A_{\rm b} }}} )_{ij}. \end{aligned} $$
(26)

According to Eq. (8), we have

$$ \begin{aligned} ( {\widetilde{{D_{\rm b} }}} )_{ii} = \sum\limits_{j = 1}^{l} {( {\widetilde{{A_{\rm b} }}} )_{ij} } &= \sum\limits_{j = 1}^{l} {\left( {\frac{1}{l} - \sum\limits_{k = 1}^{c} {\frac{{w_{ki} w_{kj} }}{{F_{kk} }}} } \right)} \\ &= 1 - \sum\limits_{k = 1}^{c} {\frac{1}{{F_{kk} }}\sum\limits_{j = 1}^{l} {w_{ki} w_{kj} } } \\ &= 1 - \sum\limits_{k = 1}^{c} {w_{ki} } = 1 - 1 = 0 \\ \end{aligned} $$
(27)

The second equality holds because \( \sum\nolimits_{j=1}^{l} w_{kj}= F_{kk} \) and the third because \( \sum\nolimits_{k=1}^{c} w_{ki}= 1. \) We then have \( ( {\widetilde{{L_{\rm b} }}} )_{ij} = ( {\widetilde{{D_{\rm b} }}} )_{ii} - ( {\widetilde{{A_{\rm b} }}} )_{ij} = - ( {\widetilde{{A_{\rm b} }}})_{ij} , \) and hence \( T^{\rm T} T = \widetilde{{L_{\rm b} }}, \) which proves Corollary 1.
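As a quick numerical sanity check, the identity can be verified directly from the quantities used above. The following is a minimal NumPy sketch (not the authors' code), assuming only what the proof states: W is a c × l fuzzy membership matrix whose columns sum to one, \( F_{kk} = \sum\nolimits_{j} w_{kj} \), T is built as in Eq. (12), and \( (\widetilde{{A_{\rm b}}})_{ij} = 1/l - \sum\nolimits_{k} w_{ki} w_{kj} / F_{kk} \) as used in Eqs. (26)–(27).

```python
import numpy as np

rng = np.random.default_rng(0)
c, l = 4, 10                       # number of classes, number of samples

# Fuzzy membership matrix W (c x l): each column sums to one
W = rng.random((c, l))
W /= W.sum(axis=0, keepdims=True)

F = W.sum(axis=1)                  # F_kk = sum_j w_kj, so sum_k F_kk = l

# T as in Eq. (12): T_kj = w_kj / sqrt(F_kk) - sqrt(F_kk) / l
T = W / np.sqrt(F)[:, None] - np.sqrt(F)[:, None] / l

# (A_b)_ij = 1/l - sum_k w_ki w_kj / F_kk, as used in Eqs. (26)-(27)
A_b = 1.0 / l - (W / F[:, None]).T @ W
D_b = np.diag(A_b.sum(axis=1))     # all zeros, by Eq. (27)
L_b = D_b - A_b                    # graph Laplacian

print(np.allclose(T.T @ T, L_b))   # True: Corollary 1
```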

1.2 Proof of Corollary 2

We prove Corollary 2 by induction. According to Eq. (18) and W(0) = Y, we have

$$ \begin{aligned} \sum\limits_{i = 1}^{c} {w_{ij} ( 1 )} &= \alpha \sum\limits_{i = 1}^{c} {\sum\limits_{{x_{k} \in N_{k} ( {x_{j} } )}} {y_{ik} s_{kj} } } + ( {1 - \alpha } )\sum\limits_{i = 1}^{c} {y_{ij} } \\ &= \alpha \sum\limits_{{x_{k} \in N_{k} ( {x_{i} } )}} {s_{kj} \sum\limits_{i = 1}^{c} {y_{ik} } } + ( {1 - \alpha } )\sum\limits_{i = 1}^{c} {y_{ij} } \\ &= \alpha \sum\limits_{{x_{k} \in N_{k} ( {x_{i} })}} {s_{kj} } + ( {1 - \alpha } ) = \alpha + ( {1 - \alpha } ) = 1 \end{aligned} $$
(28)

The third equality holds because \( \sum\nolimits_{i=1}^{c} y_{ik}= 1 \) and the fourth because \( \sum\nolimits_{{x_{k} \in N_{k} \left( {x_{j} } \right)}} {s_{kj} } = 1 \). Hence, Eq. (28) shows that each column of W(1) sums to 1. We next assume that each column of W(t) sums to 1, i.e. \( \sum\nolimits_{i=1}^{c} w_{ij} (t) = 1 \) for iteration t; we then have

$$ \begin{aligned} \sum\limits_{i = 1}^{c} {w_{ij} ( {t + 1})} &= \alpha \sum\limits_{i = 1}^{c} {\sum\limits_{{x_{k} \in N_{k} ( {x_{i} } )}} {w_{ik} ( t )} } s_{kj} + ( {1 - \alpha } )\sum\limits_{i = 1}^{c} {y_{ij} } \\ &= \alpha \sum\limits_{{x_{k} \in N_{k} ( {x_{i} } )}} {s_{kj} \sum\limits_{i = 1}^{c} {w_{ik} ( t )} } + ( {1 - \alpha } )\sum\limits_{i = 1}^{c} {y_{ij} } \\ &= \alpha \sum\limits_{{x_{k} \in N_{k} ( {x_{i} } )}} {s_{kj} } + ( {1 - \alpha } ) = \alpha + ( {1 - \alpha } ) = 1 \end{aligned} $$
(29)

This shows that each column of W(t + 1) also sums to 1. Thus, we prove that \( \sum\nolimits_{i = 1}^{c} {w_{ij} } = \sum\nolimits_{i = 1}^{c} {\mathop {\lim }\limits_{t \to \infty } w_{ij} ( t )} = 1. \)
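The invariance proved above is easy to observe numerically. Below is a minimal NumPy sketch (a stand-in for the actual algorithm), assuming Eq. (18) corresponds to the matrix update W(t + 1) = αW(t)S + (1 − α)Y, where Y is the one-hot label matrix and S is a column-normalized kNN affinity with \( \sum\nolimits_{k} s_{kj} = 1 \); the affinity construction here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
c, l, k, alpha = 3, 20, 5, 0.9

X = rng.standard_normal((2, l))                    # toy 2-D samples
labels = rng.integers(0, c, size=l)
Y = np.zeros((c, l))
Y[labels, np.arange(l)] = 1.0                      # one-hot: each column sums to 1

# Illustrative column-normalized kNN affinity S (s_kj > 0 only for neighbours of x_j)
D2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
S = np.exp(-D2)
np.fill_diagonal(S, 0.0)
for j in range(l):
    S[np.argsort(S[:, j])[:-k], j] = 0.0           # keep the k largest entries per column
S /= S.sum(axis=0, keepdims=True)                  # sum_k s_kj = 1

Wt = Y.copy()                                      # W(0) = Y
for _ in range(50):
    Wt = alpha * (Wt @ S) + (1.0 - alpha) * Y      # assumed form of the update in Eq. (18)

print(np.allclose(Wt.sum(axis=0), 1.0))            # True: Corollary 2, columns still sum to 1
```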

1.3 Proof of Theorem 1

  1. Computing \( V_{F}^{*} \) via eigen-decomposition (Ye 2005)

Let t be the rank of \( \widetilde{{S_{\rm t} }}. \) By performing SVD on \( \widetilde{{S_{\rm t} }}, \) we have

$$ \widetilde{{S_{\rm t} }} = U\left( {\begin{array}{*{20}l} {\Sigma_{\rm t}^{2} } & 0 \\ 0 & 0 \\ \end{array} } \right)U^{\rm T} , $$
(30)

where U is an orthogonal matrix and \( \Sigma_{\rm t}^{2} \) is a diagonal matrix of rank \( t. \) Let \( U = [U_{1} , U_{2}] \) be a partition of U such that \( U_{1} \in \mathbb{R}^{d \times t} \) and \( U_{2} \in \mathbb{R}^{d \times (d - t)} , \) where the columns of \( U_{2} \) span the null space of \( \widetilde{{S_{\rm t} }} \) and satisfy \( U_{2}^{\rm T} \widetilde{{S_{\rm t} }}U_{2} = 0; \) we then have \( \widetilde{{S_{\rm t} }} = U_{1} \Sigma_{\rm t}^{2} U_{1}^{\rm T} . \) Since \( \widetilde{{S_{\rm t} }} = \widetilde{{S_{\rm b} }} + \widetilde{{S_{\rm w} }}, \) we have

$$ U^{\rm T} \widetilde{{S_{\rm b} }}U = \left( {\begin{array}{*{20}l} {U_{1}^{\rm T} \widetilde{{S_{\rm b} }}U_{1} } & 0 \\ 0 & 0 \\ \end{array} } \right),U^{\rm T} \widetilde{{S_{\rm w} }}U = \left( {\begin{array}{*{20}l} {U_{1}^{\rm T} \widetilde{{S_{\rm w} }}U_{1} } & 0 \\ 0 & 0 \\ \end{array} } \right) $$
(31)

From Eqs. (30, 31), it follows that

$$ I_{\rm t} = \Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{S_{\rm t} }}U_{1} \Sigma_{\rm t}^{ - 1} = \Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{S_{\rm b} }}U_{1} \Sigma_{\rm t}^{ - 1} + \Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{S_{\rm w} }}U_{1} \Sigma_{\rm t}^{ - 1} , $$
(32)

where \( I_{\rm t} \in \mathbb{R}^{t \times t} \) is an identity matrix. Recall that \( \widetilde{{S_{\rm b} }} = \widetilde{{H_{\rm b} }}\widetilde{{H_{\rm b} }}^{\rm T} . \) If we let \( G = \Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{H_{\rm b} }} \) and its SVD be \( G = P\Sigma_{\rm b} Q^{\rm T} , \) where \( P \in \mathbb{R}^{t \times t} \) and \( Q \in \mathbb{R}^{c \times t} \) are two orthogonal matrices and \( \Sigma_{\rm b} \in \mathbb{R}^{t \times t} \) is a diagonal matrix, we then have

$$ \Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{S_{\rm b} }}U_{1} \Sigma_{\rm t}^{ - 1} = GG^{\rm T} = P\Sigma_{\rm b}^{2} P^{\rm T} . $$
(33)

Therefore, according to Eqs. (30, 31, 33), we can rewrite \( \widetilde{{S_{\rm t} }}^{ - 1} \widetilde{{S_{\rm b} }}, \) from which \( V_{F}^{*} \) is obtained, as

$$ \begin{aligned} \widetilde{{S_{\rm t} }}^{ - 1} \widetilde{{S_{\rm b} }} &= U\left( {\begin{array}{*{20}l} {\Sigma_{\rm t}^{ - 1} \Sigma_{\rm t}^{ - 1} } & 0 \\ 0 & 0 \\ \end{array} } \right)U^{\rm T} \widetilde{{S_{\rm b} }}U\left( {\begin{array}{*{20}l} {\Sigma_{\rm t}^{ - 1} \Sigma_{\rm t} } & 0 \\ 0 & 0 \\ \end{array} } \right)U^{\rm T} \\ &= U\left( {\begin{array}{*{20}l} {\Sigma_{\rm t}^{ - 1} } & 0 \\ 0 & 0 \\ \end{array} } \right)P\Sigma_{\rm b}^{2} P^{\rm T} \left( {\begin{array}{*{20}l} {\Sigma_{\rm t} } & 0 \\ 0 & 0 \\ \end{array} } \right)U^{\rm T} \\ & = U\left( {\begin{array}{*{20}l} {\Sigma_{\rm t}^{ - 1} P} & 0 \\ 0 & {I_{D - t} } \\ \end{array} } \right)\left( {\begin{array}{*{20}l} {\Sigma_{\rm b}^{2} } & 0 \\ 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}l} {P^{\rm T} \Sigma_{\rm t} } & 0 \\ 0 & {I_{D - t} } \\ \end{array} } \right)U^{\rm T} \end{aligned} $$
(34)

The first equality follows from Eq. (30) and the second from Eqs. (31, 33). From Eq. (34), if we let \( V_{F}^{*} = U_{1} \Sigma_{\rm t}^{ - 1} P, \) we have \( \widetilde{{S_{\rm t} }}^{ - 1} \widetilde{{S_{\rm b} }}V_{F}^{*} = V_{F}^{*} \Sigma_{\rm b}^{2} , \) which indicates that the column vectors of \( V_{F}^{*} \) are eigenvectors of \( \widetilde{{S_{\rm t} }}^{ - 1} \widetilde{{S_{\rm b} }}. \)
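The route of Eqs. (30)–(34) can be reproduced numerically. The sketch below uses NumPy with synthetic stand-ins for \( \widetilde{{H_{\rm b} }} \) and \( \widetilde{{S_{\rm t} }} \) (the actual fuzzy scatter matrices are defined in the main text) and checks the stated eigen-relation.

```python
import numpy as np

rng = np.random.default_rng(2)
d, l, c = 8, 15, 3

# Synthetic stand-ins for the fuzzy between-class factor H_b and scatters S_b, S_t
H_b = rng.standard_normal((d, c))
H_w = rng.standard_normal((d, l))
S_b = H_b @ H_b.T
S_t = S_b + H_w @ H_w.T                     # S_t = S_b + S_w

# Eq. (30): keep the rank-t part U_1, Sigma_t of S_t
eigval, U = np.linalg.eigh(S_t)
order = np.argsort(eigval)[::-1]
eigval, U = eigval[order], U[:, order]
t = int(np.sum(eigval > 1e-10 * eigval[0]))
U1, Sig_t = U[:, :t], np.sqrt(eigval[:t])

# Eq. (33): G = Sigma_t^{-1} U_1^T H_b with SVD G = P Sigma_b Q^T
G = (U1.T @ H_b) / Sig_t[:, None]
P, sig_b, _ = np.linalg.svd(G, full_matrices=False)

# Eq. (34): V_F^* = U_1 Sigma_t^{-1} P satisfies S_t^{-1} S_b V_F^* = V_F^* Sigma_b^2
V_F = U1 @ (P / Sig_t[:, None])
lhs = np.linalg.pinv(S_t) @ S_b @ V_F
print(np.allclose(lhs, V_F * sig_b**2))     # True
```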

  2. Equivalence relationship to least squares

Recall \( V_{\rm LS}^{*} \) in Eq. (11); it can be rewritten as

$$ \begin{aligned} \widetilde{{S_{\rm t} }}^{ - 1} \widetilde{{H_{\rm b} }} &= U\left( {\begin{array}{*{20}l} {\Sigma_{\rm t}^{ - 1} \Sigma_{\rm t}^{ - 1} } & 0 \\ 0 & 0 \\ \end{array} } \right)U^{\rm T} \widetilde{{H_{\rm b} }} \\ & = U_{1} \Sigma_{\rm t}^{ - 1} \left( {\Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{H_{\rm b} }}} \right) \\ & = U_{1} \Sigma_{\rm t}^{ - 1} G \\ & = U_{1} \Sigma_{\rm t}^{ - 1} P\Sigma_{\rm b} Q^{\rm T} \\ & = V_{F}^{*} \Sigma_{\rm b} Q^{\rm T} \end{aligned} $$
(35)

From Eq. (35), we can neglect Q as it is orthogonal. Thus, the main difference between \( V_{F}^{*} \) and \( V_{\rm LS}^{*} \) is the diagonal matrix \( \Sigma_{\rm b} . \) We next show that, given the condition in Eq. (13), \( \Sigma_{\rm b} \) is an identity matrix and hence \( V_{\rm LS}^{*} = V_{F}^{*} . \) Let \( H \in \mathbb{R}^{D \times D} \) be a nonsingular matrix defined as:

$$ H = U\left( {\begin{array}{*{20}l} {\Sigma_{\rm t}^{ - 1} P} & 0 \\ 0 & {I_{D - t} } \\ \end{array} } \right) $$
(36)

According to Eqs. (31, 32) and \( \widetilde{{S_{\rm w} }} = \widetilde{{S_{\rm t} }} - \widetilde{{S_{\rm b} }}, \) we have

$$ H^{\rm T} \widetilde{{S_{\rm t} }}H = \left( {\begin{array}{*{20}l} {I_{\rm t} } & 0 \\ 0 & 0 \\ \end{array} } \right),\quad H^{\rm T} \widetilde{{S_{\rm w} }}H = \left( {\begin{array}{*{20}l} {\Sigma_{\rm w}^{2} } & 0 \\ 0 & 0 \\ \end{array} } \right),\quad H^{\rm T} \widetilde{{S_{\rm b} }}H = \left( {\begin{array}{*{20}l} {\Sigma_{\rm b}^{2} } & 0 \\ 0 & 0 \\ \end{array} } \right) $$
(37)

where \( \Sigma_{\rm b}^{2} = {\rm diag}(\sigma_{1}^{2} , \sigma_{2}^{2} , \ldots , \sigma_{t}^{2}) \) and \( \Sigma_{\rm w}^{2} = {\rm diag}(\tau_{1}^{2} , \tau_{2}^{2} , \ldots , \tau_{t}^{2}) \) are two diagonal matrices satisfying \( \sigma_{i}^{2} + \tau_{i}^{2} = 1, \forall i. \) This indicates that at least one of \( \sigma_{i} \) and \( \tau_{i} \) is nonzero for each i. Since \( {\rm rank}(A) + {\rm rank}(B) \ge {\rm rank}(A + B) \) (Hull 1994), we have \( {\rm rank}( {\widetilde{{S_{\rm b} }}}) + {\rm rank} ( {\widetilde{{S_{\rm w} }}}) \ge {\rm rank}( {\widetilde{{S_{\rm t} }}}). \) According to Sylvester's law of inertia (Hull 1994), it follows that \( {\rm rank}(\Sigma_{\rm b}^{2}) + {\rm rank}(\Sigma_{\rm w}^{2}) \ge {\rm rank}(I_{\rm t}). \) Let b be the rank of \( \Sigma_{\rm b}^{2} \) and assume \( {\rm rank}(\Sigma_{\rm b}^{2}) + {\rm rank}(\Sigma_{\rm w}^{2}) = {\rm rank}(I_{\rm t}) + s. \) To satisfy this rank equality, we have

$$ \begin{gathered} 1 = \sigma_{1}^{2} = \sigma_{2}^{2} = \cdots = \sigma_{b - s}^{2} > \sigma_{b - s + 1}^{2} > \cdots > \sigma_{b}^{2} > \sigma_{b + 1}^{2} = \cdots = \sigma_{t}^{2} = 0 \hfill \\ 0 = \tau_{1}^{2} = \tau_{2}^{2} = \cdots = \tau_{b - s}^{2} < \tau_{b - s + 1}^{2} < \cdots < \tau_{b}^{2} < \tau_{b + 1}^{2} = \cdots = \tau_{t}^{2} = 1 \hfill \\ \end{gathered} $$
(38)

Since C1 holds, we have s = 0 and \( 1 = \sigma_{1}^{2} = \sigma_{2}^{2} = \cdots = \sigma_{b}^{2} > \sigma_{b + 1}^{2} = \cdots = \sigma_{t}^{2} = 0, \) which indicates that \( \Sigma_{\rm b} \) is an identity matrix.
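The simultaneous diagonalization in Eq. (37), together with the constraint \( \sigma_{i}^{2} + \tau_{i}^{2} = 1, \) can also be checked numerically. The NumPy sketch below builds rank-deficient synthetic scatters sharing a common range (stand-ins for \( \widetilde{{S_{\rm b} }} \), \( \widetilde{{S_{\rm w} }} \) and \( \widetilde{{S_{\rm t} }} \)) and constructs H as in Eq. (36).

```python
import numpy as np

rng = np.random.default_rng(4)
d, t, c, l = 10, 6, 3, 12

# Rank-deficient synthetic scatters with a common t-dimensional range
Z = np.linalg.qr(rng.standard_normal((d, t)))[0]
H_b = Z @ rng.standard_normal((t, c))
H_w = Z @ rng.standard_normal((t, l))
S_b, S_w = H_b @ H_b.T, H_w @ H_w.T
S_t = S_b + S_w

# Eq. (30): U = [U_1, U_2] and Sigma_t from the eigen-decomposition of S_t
eigval, U = np.linalg.eigh(S_t)
order = np.argsort(eigval)[::-1]
eigval, U = eigval[order], U[:, order]
U1, Sig_t = U[:, :t], np.sqrt(eigval[:t])

# Eqs. (33) and (36): G = P Sigma_b Q^T and H = U diag(Sigma_t^{-1} P, I_{D-t})
G = (U1.T @ H_b) / Sig_t[:, None]
P = np.linalg.svd(G, full_matrices=True)[0]        # full t x t left factor
B = np.zeros((d, d))
B[:t, :t] = P / Sig_t[:, None]                     # Sigma_t^{-1} P block
B[t:, t:] = np.eye(d - t)
H = U @ B

# Eq. (37): the scatters are diagonalized simultaneously, with sigma^2 + tau^2 = 1
St_h, Sb_h, Sw_h = H.T @ S_t @ H, H.T @ S_b @ H, H.T @ S_w @ H
sig2, tau2 = np.diag(Sb_h)[:t], np.diag(Sw_h)[:t]
print(np.allclose(St_h[:t, :t], np.eye(t)),        # I_t block of H^T S_t H
      np.allclose(Sb_h, np.diag(np.diag(Sb_h)), atol=1e-8),
      np.allclose(sig2 + tau2, 1.0))               # True True True
```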

1.4 Appendix D

  1. Computing \( V_{F}^{*} \) via the two-stage approach

In the two-stage approach, we first solve a least squares problem by regressing T on X, i.e. projecting the original high-dimensional dataset into a low-dimensional subspace; we then calculate an auxiliary matrix \( M \in \mathbb{R}^{d \times d} \) and its SVD. Finally, the optimal projection matrix is obtained from the SVD of M. Since M is very small, the cost of computing its SVD is relatively low. The basic steps of the two-stage approach are as follows:

     1. Solve the least squares problem \( \mathop {\min }\limits_{V} \| {T^{\rm T} - X^{\rm T} V} \|_{F}^{2} \) and obtain the optimal solution \( V_{\rm LS}^{*} . \)

     2. Let \( \widetilde{X} = V_{\rm LS}^{*{\rm T}} X \) and calculate the auxiliary matrix as \( M = V_{\rm LS}^{*{\rm T}} XT^{\rm T} . \)

     3. Perform SVD on M as \( M = U_{M} \Sigma_{M} U_{M}^{\rm T} \) and obtain \( V_{M}^{*} = U_{M} \Sigma_{M}^{ - 1/2} . \)

     4. The optimal solution can be given by \( V_{\rm T}^{*} = V_{\rm LS}^{*} V_{M}^{*} . \)

  2. Equivalence relationship

We next prove that the optimal solution \( V_{\rm T}^{*} \) obtained by the two-stage approach is equivalent to that in Eq. (34). By solving the least squares problem in Eq. (11), we have \( V_{\rm LS}^{*} = (XX^{\rm T} )^{ - 1} XT^{\rm T} . \) Hence, \( \widetilde{X} = V_{\rm LS}^{*{\rm T}} X = TX^{\rm T} (XX^{\rm T} )^{ - 1} X. \) The auxiliary matrix M can then be given by

$$ M = \widetilde{X}T^{\rm T} = TX^{\rm T} (XX^{\rm T} )^{ - 1} XT^{\rm T} = \widetilde{{H_{\rm b} }}^{\rm T} U_{1} \Sigma_{\rm t}^{ - 1} \Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{H_{\rm b} }}. $$
(39)

The third equality holds because \( \widetilde{{H_{\rm b} }} = XT^{\rm T} \) and \( XX^{\rm T} = \widetilde{{S_{\rm t} }} = U_{1} \Sigma_{\rm t}^{2} U_{1}^{\rm T} . \) Since \( G = \Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{H_{\rm b} }} \) and its SVD is \( G = P \Sigma_{\rm b} Q^{\rm T} , \) we have \( M = G^{\rm T} G = Q \Sigma_{\rm b}^{2} Q^{\rm T} . \) This indicates that \( Q \Sigma_{\rm b}^{2} Q^{\rm T} \) is an SVD of M; we thus have \( V_{M}^{*} = Q \Sigma_{\rm b}^{ - 1} , \) and the optimal solution of the two-stage approach can be given by:

$$ \begin{aligned} V_{\rm T}^{*} &= V_{\rm LS}^{*} V_{M}^{*} = (XX^{\rm T} )^{ - 1} XT^{\rm T} Q\Sigma_{\rm b}^{ - 1} \\ & = U_{1} \Sigma_{\rm t}^{ - 1} \left( {\Sigma_{\rm t}^{ - 1} U_{1}^{\rm T} \widetilde{{H_{\rm b} }}} \right)Q\Sigma_{\rm b}^{ - 1} \\ & = U_{1} \Sigma_{\rm t}^{ - 1} P\Sigma_{\rm b} Q^{\rm T} Q\Sigma_{\rm b}^{ - 1} \\ & = U_{1} \Sigma_{\rm t}^{ - 1} P, \\ \end{aligned} $$
(40)

which is equivalent to \( V_{F}^{*} \) in Eq. (34).
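Finally, the two-stage procedure (steps 1–4 above) and its equivalence to \( V_{F}^{*} \) can be checked numerically. The NumPy sketch below uses synthetic stand-ins for the centred data X and the target matrix T of Eq. (11); up to arbitrary column signs introduced by the SVDs, the two routes give the same projection matrix, as Eq. (40) states.

```python
import numpy as np

rng = np.random.default_rng(3)
d, l, c = 8, 15, 3

# Synthetic stand-ins for the centred data X (d x l) and the target T (c x l)
X = rng.standard_normal((d, l))
X -= X.mean(axis=1, keepdims=True)
T = rng.random((c, l))
T -= T.mean(axis=1, keepdims=True)

S_t = X @ X.T                                     # total scatter (full rank here)
H_b = X @ T.T                                     # H_b = X T^T

# Two-stage approach, steps 1-4
V_LS = np.linalg.solve(S_t, H_b)                  # step 1: V_LS^* = (X X^T)^{-1} X T^T
M = (V_LS.T @ X) @ T.T                            # step 2: auxiliary matrix M
U_M, sig_M, _ = np.linalg.svd(M)                  # step 3: M = U_M Sigma_M U_M^T
V_M = U_M / np.sqrt(sig_M)                        #         V_M^* = U_M Sigma_M^{-1/2}
V_T = V_LS @ V_M                                  # step 4: V_T^* = V_LS^* V_M^*

# Direct route of Eq. (34): V_F^* = U_1 Sigma_t^{-1} P
eigval, U1 = np.linalg.eigh(S_t)
order = np.argsort(eigval)[::-1]
eigval, U1 = eigval[order], U1[:, order]
Sig_t = np.sqrt(eigval)
G = (U1.T @ H_b) / Sig_t[:, None]
P = np.linalg.svd(G, full_matrices=False)[0]
V_F = U1 @ (P / Sig_t[:, None])

print(np.allclose(np.abs(V_T), np.abs(V_F)))      # True, up to column signs (Eq. 40)
```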


Cite this article

Zhao, M., Chow, T.W.S. & Zhang, Z. Random walk-based fuzzy linear discriminant analysis for dimensionality reduction. Soft Comput 16, 1393–1409 (2012). https://doi.org/10.1007/s00500-012-0843-3
