A subfield of knowledge discovery called stream mining addresses the problem of rapidly changing data. The goal is to process the stream of incoming data quickly enough to simultaneously update the corresponding models (e.g., ontologies). Because the amount of data is too large to be stored, new evidence from the incoming data is incorporated into the model without retaining the data itself. An example application is modeling ontology changes and evolution over time using text mining methods (Text Mining for Semantic Web). The underlying methods are based on the machine learning approach of online learning, in which the model is built from the initially available data and updated regularly as more data become available.
Examples of data streams include computer network traffic, phone conversations, ATM transactions, web searches, and sensor data.
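The core idea of updating a model without storing the stream can be sketched with a minimal example. The snippet below maintains a running mean and variance of a data stream using Welford's online algorithm (chosen here as an illustration; the entry does not prescribe a specific method). Each observation updates the summary statistics and is then discarded, mirroring how online learning incorporates new evidence without keeping the raw data.

```python
# A minimal sketch of stream mining: update a simple "model" (summary
# statistics) one item at a time, never storing the stream itself.
# Uses Welford's online algorithm for mean and variance.

class RunningStats:
    """Incrementally tracks count, mean, and variance of a data stream."""

    def __init__(self):
        self.n = 0          # number of items seen so far
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # running sum of squared deviations from the mean

    def update(self, x):
        # Incorporate one new observation; the raw value is then discarded.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Population variance of all items seen so far.
        return self.m2 / self.n if self.n > 0 else 0.0

stats = RunningStats()
for reading in [2.0, 4.0, 6.0, 8.0]:   # e.g., arriving sensor readings
    stats.update(reading)

print(stats.mean)      # -> 5.0
print(stats.variance)  # -> 5.0
```

The same update-and-discard pattern generalizes to richer models (e.g., incrementally trained classifiers), where each new batch of stream data adjusts the model parameters before being thrown away.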
(2017). Stream Mining. In: Sammut, C., Webb, G.I. (eds) Encyclopedia of Machine Learning and Data Mining. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7687-1_789