ABSTRACT
The Web has emerged as the most important source of information in the world. This has created a need for automated software components that analyze web pages and harvest useful information from them. However, in typical web pages the informative content is surrounded by a high degree of noise in the form of advertisements, navigation bars, links to other content, and so on. The noisy content is often interspersed with the main content, leaving no clean boundaries between them, which makes information harvesting from web pages much harder. It is therefore essential to identify the main content of a web page and automatically isolate it from the noise before any further analysis. Most existing approaches rely on prior knowledge of website-specific templates and hand-crafted, site-specific rules to extract relevant content. We propose a generic approach that requires no prior knowledge of website templates. While HTML DOM analysis and visual layout analysis have both been applied to this problem, we believe that for higher accuracy in content extraction the analyzing software must mimic a human user and understand content in natural language, much as humans intuitively do, in order to eliminate noisy content.
In this paper, we describe a combination of HTML DOM analysis and Natural Language Processing (NLP) techniques for the automated extraction of the main article, with its associated images, from web pages.
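To make the general idea concrete, the following is a minimal sketch (not the authors' implementation) of DOM-based main-content extraction: walk the HTML DOM, collect text blocks, and keep the block that most resembles natural-language prose. As stand-ins for full NLP analysis, the scoring here uses two simple proxies: block length in words and link density (navigation bars and ad panels are link-dense, while article bodies are not). The tag set and scoring function are illustrative assumptions, not the paper's method.

```python
from html.parser import HTMLParser

class BlockExtractor(HTMLParser):
    """Collects text blocks from an HTML page, tracking how many
    words in each block occurred inside <a> links."""
    BLOCK_TAGS = {"p", "div", "td", "article", "section"}  # assumed block boundaries

    def __init__(self):
        super().__init__()
        self.blocks = []       # finished blocks as (text, link_word_count)
        self._words = []       # words accumulated for the current block
        self._link_words = 0   # words of the current block seen inside links
        self._in_link = 0      # <a> nesting depth

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._in_link += 1
        elif tag in self.BLOCK_TAGS:
            self.flush()       # a new block-level element starts a new block

    def handle_endtag(self, tag):
        if tag == "a" and self._in_link:
            self._in_link -= 1
        elif tag in self.BLOCK_TAGS:
            self.flush()

    def handle_data(self, data):
        words = data.split()
        self._words.extend(words)
        if self._in_link:
            self._link_words += len(words)

    def flush(self):
        if self._words:
            self.blocks.append((" ".join(self._words), self._link_words))
        self._words, self._link_words = [], 0

def main_content(html):
    """Return the text block with the best length/link-density score."""
    parser = BlockExtractor()
    parser.feed(html)
    parser.flush()
    def score(block):
        text, link_words = block
        n = len(text.split())
        # Long blocks with few linked words score highest.
        return n * (1 - link_words / n)
    return max(parser.blocks, key=score)[0] if parser.blocks else ""
```

On a toy page with a link-only navigation bar and one prose paragraph, `main_content` returns the paragraph and discards the navigation block, since the latter's link density drives its score to zero. A real system would replace the scoring function with genuine linguistic analysis of each block.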