Definition
Phase transition (PT) is a term originally used in physics to denote the transformation of a system from one state (phase), such as solid, liquid, or gas, to another. By extension, it describes any abrupt and sudden change in one of the order parameters of an arbitrary system as a control parameter approaches a critical value. (Early studies of PTs in computer science inverted the notions of order and control parameters; this article sticks to the original definition used in Statistical Physics.)
Far from being limited to physical systems, PTs are ubiquitous in the sciences, notably in computational science. Typically, hard combinatorial problems display a PT with respect to the probability that a solution exists. Note that the notion of PT cannot be studied on single problem instances: it refers to emergent phenomena in an ensemble of...
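The PT in hard combinatorial problems can be illustrated with random 3-SAT, where the probability that a satisfying assignment exists drops sharply as the clause-to-variable ratio (the control parameter) crosses a critical value near 4.27. The following is a minimal Monte Carlo sketch; the generator and function names are illustrative, not drawn from the works cited below:

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT instance: each clause draws 3 distinct variables,
    each literal negated with probability 1/2."""
    clauses = []
    for _ in range(n_clauses):
        chosen = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.random() < 0.5) for v in chosen])  # (var, is_negated)
    return clauses

def satisfiable(n_vars, clauses):
    """Brute-force satisfiability check (adequate for small n_vars)."""
    for assign in itertools.product([False, True], repeat=n_vars):
        # A literal (v, neg) is satisfied when assign[v] differs from neg.
        if all(any(assign[v] != neg for v, neg in clause) for clause in clauses):
            return True
    return False

def sat_probability(n_vars, alpha, trials=200, seed=0):
    """Estimate P(solution exists) at clause/variable ratio alpha."""
    rng = random.Random(seed)
    hits = sum(
        satisfiable(n_vars, random_3sat(n_vars, int(alpha * n_vars), rng))
        for _ in range(trials)
    )
    return hits / trials

# As alpha crosses the critical region (~4.27 for 3-SAT), the estimated
# probability falls abruptly; the drop sharpens as n_vars grows.
for alpha in (2.0, 4.0, 4.5, 6.0):
    print(alpha, sat_probability(n_vars=12, alpha=alpha))
```

Plotting the estimated probability against alpha for increasing `n_vars` reproduces the characteristic sigmoid that steepens toward a step function, the hallmark of a PT in an ensemble of instances.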
Recommended Reading
Ales Bianchetti, J., Rouveirol, C., & Sebag, M. (2002). Constraint-based learning of long relational concepts. In C. Sammut (Ed.), Proceedings of international conference on machine learning, ICML'02, (pp. 35–42). San Francisco, CA: Morgan Kaufmann.
Alphonse, E., & Osmani, A. (2008). On the connection between the phase transition of the covering test and the learning success rate. Machine Learning, 70(2–3), 135–150.
Baskiotis, N., & Sebag, M. (2004). C4.5 competence map: A phase transition-inspired approach. In Proceedings of international conference on machine learning, Banff, Alberta, Canada (pp. 73–80). Morgan Kaufmann.
Botta, M., Giordana, A., & Saitta, L. (1999). An experimental study of phase transitions in matching. In Proceedings of the 16th international joint conference on artificial intelligence, Stockholm, Sweden (pp. 1198–1203).
Botta, M., Giordana, A., Saitta, L., & Sebag, M. (2003). Relational learning as search in a critical region. Journal of Machine Learning Research, 4, 431–463.
Candès, E. J. (2008). The restricted isometry property and its implications for compressed sensing. Comptes Rendus de l'Académie des Sciences, Paris, Série I, 346, 589–592.
Cheeseman, P., Kanefsky, B., & Taylor, W. (1991). Where the really hard problems are. In J. Mylopoulos & R. Reiter (Eds.), Proceedings of the 12th international joint conference on artificial intelligence, Sydney, Australia (pp. 331–340). San Francisco, CA: Morgan Kaufmann.
Cornuéjols, A., & Sebag, M. (2008). A note on phase transitions and computational pitfalls of learning from sequences. Journal of Intelligent Information Systems, 31(2), 177–189.
Cortes, C., & Vapnik, V. N. (1995). Support-vector networks. Machine Learning, 20, 273–297.
De Raedt, L. (1997). Logical settings for concept-learning. Artificial Intelligence, 95, 187–202.
De Raedt, L. (1998). Attribute-value learning versus inductive logic programming: The missing links. In Proceedings of inductive logic programming, ILP, LNCS (Vol. 1446, pp. 1–8). London: Springer.
Demongeot, J., & Sené, S. (2008). Boundary conditions and phase transitions in neural networks. Simulation results. Neural Networks, 21(7), 962–970.
Dietterich, T., Lathrop, R., & Lozano-Perez, T. (1997). Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1–2), 31–71.
Donoho, D. L., & Tanner, J. (2005). Sparse nonnegative solution of underdetermined linear equations by linear programming. Proceedings of the National Academy of Sciences, 102(27), 9446–9451.
Engel, A., & Van den Broeck, C. (2001). Statistical mechanics of learning. Cambridge: Cambridge University Press.
Gaudel, R., Sebag, M., & Cornuéjols, A. (2007). A phase transition-based perspective on multiple instance kernels. In Proceedings of international conference on inductive logic programming, ILP, Corvallis, OR (pp. 112–121).
Gaudel, R., Sebag, M., & Cornuéjols, A. (2008). A phase transition-based perspective on multiple instance kernels. Lecture notes in computer sciences, (Vol. 4894, pp. 112–121).
Giordana, A., & Saitta, L. (2000). Phase transitions in relational learning. Machine Learning, 41(2), 217–251.
Haussler, D. (1999). Convolution kernels on discrete structures. Tech. Rep., Computer Science Department, University of California at Santa Cruz.
Hogg, T., Huberman, B. A., & Williams, C. P. (Eds.). (1996). Artificial intelligence: Special issue on frontiers in problem solving: Phase transitions and complexity (Vol. 81(1–2)). Elsevier.
Kramer, S., Lavrac, N., & Flach, P. (2001). Propositionalization approaches to relational data mining. In S. Dzeroski & N. Lavrac (Eds.), Relational data mining, (pp. 262–291). New York: Springer.
Maloberti, J., & Sebag, M. (2004). Fast theta-subsumption with constraint satisfaction algorithms. Machine Learning, 55, 137–174.
Mitchell, T. M. (1982). Generalization as search. Artificial Intelligence, 18, 203–226.
Plotkin, G. (1970). A note on inductive generalization. In Machine Intelligence, (Vol. 5). Edinburgh University Press.
Quinlan, J. R. (1993). C4.5: Programs for machine learning. San Francisco, CA: Morgan Kaufmann.
Rückert, U., & De Raedt, L. (2008). An experimental evaluation of simplicity in rule learning. Artificial Intelligence, 172(1), 19–28.
© 2011 Springer Science+Business Media, LLC
Saitta, L., Sebag, M. (2011). Phase Transitions in Machine Learning. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-30164-8_635
Print ISBN: 978-0-387-30768-8
Online ISBN: 978-0-387-30164-8