
K-Way Spectral Clustering

  • Reference work entry
Encyclopedia of Machine Learning

In spectral clustering (Luxburg, 2007), the dataset is represented as a similarity graph G = (V, E). The vertices represent the data points. Two vertices are connected if the similarity between the corresponding data points is larger than a certain threshold, and the edge is weighted by the similarity value. Clustering is achieved by choosing a partition of the graph such that each group corresponds to one cluster.
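As a concrete illustration of this construction, the sketch below builds such a weighted similarity graph in Python. The Gaussian (RBF) similarity function, the bandwidth `sigma`, and the threshold value are assumptions made for illustration; the entry does not prescribe a particular similarity measure.

```python
import numpy as np

def similarity_graph(X, sigma=1.0, threshold=0.1):
    """Return a weighted adjacency matrix S for data points X (n x d).

    S[i, j] holds the similarity between points i and j if it exceeds
    `threshold`, and 0 otherwise (i.e., no edge in the graph).
    """
    # Pairwise squared Euclidean distances between all points.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Gaussian similarity: nearby points get weights close to 1.
    S = np.exp(-sq_dists / (2.0 * sigma ** 2))
    # Keep only edges whose similarity exceeds the threshold.
    S[S < threshold] = 0.0
    np.fill_diagonal(S, 0.0)  # no self-loops
    return S
```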

A good partition (i.e., a good clustering) is one in which the edges between different groups have low overall weight and the edges within a group have high weight, indicating that points in different clusters are dissimilar from each other while points within the same cluster are similar to each other. One basic spectral clustering algorithm finds such a partition in the following way:

Given a set of data points P and the similarity matrix S, where \(S_{ij}\) measures the similarity between points i, j ∈ P, form a graph. Build a Laplacian matrix L of the graph,

$$L = I - \ldots$$
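The formula above is truncated in the available text. A common reading, assuming the symmetric normalized Laplacian widely used in spectral clustering (Luxburg, 2007), is \(L = I - D^{-1/2} S D^{-1/2}\), where D is the diagonal degree matrix with \(D_{ii} = \sum_j S_{ij}\). The sketch below follows that standard k-way formulation (normalized Laplacian, eigenvectors of the k smallest eigenvalues, row normalization, k-means on the embedded rows); function names and parameters are illustrative and not taken from the entry.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def k_way_spectral_clustering(S, k):
    """Cluster points into k groups given a similarity matrix S (n x n).

    Assumes the symmetric normalized Laplacian L = I - D^{-1/2} S D^{-1/2};
    the encyclopedia entry's own formula is truncated at "L = I - ...".
    """
    n = S.shape[0]
    # Degree vector and normalized Laplacian.
    d = S.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(n) - (d_inv_sqrt[:, None] * S * d_inv_sqrt[None, :])
    # Eigenvectors belonging to the k smallest eigenvalues of L.
    _, U = eigh(L, subset_by_index=[0, k - 1])
    # Normalize each row to unit length, then cluster the rows with k-means.
    U = U / np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(U)
```

Combined with the graph construction sketched earlier, a usage might look like `labels = k_way_spectral_clustering(similarity_graph(X, sigma=1.0), k=3)`.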


Copyright information

© 2011 Springer Science+Business Media, LLC

About this entry

Cite this entry

Jin, X., Han, J. (2011). K-Way Spectral Clustering. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_427
