A New Method for Focused Crawler Cross Tunnel

  • Conference paper
Rough Sets and Knowledge Technology (RSKT 2006)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4062)

Included in the following conference series: Rough Sets and Knowledge Technology (RSKT)

Abstract

Focused crawlers are programs designed to selectively retrieve Web pages relevant to a specific domain for use by domain-specific search engines. Tunneling is a heuristic method that addresses the resulting global optimization problem by allowing the crawler to pass through irrelevant pages on the way to relevant ones. In this paper we use a content-block algorithm to enhance a focused crawler's ability to traverse such tunnels. The algorithm avoids both the overly coarse granularity of evaluating links against the whole page and the overly fine granularity of evaluating them against link context alone. A comprehensive experiment shows that this approach clearly outperforms the Best-First and anchor-text algorithms in both harvest ratio and efficiency.
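
The abstract's core idea, scoring an outgoing link by the relevance of the content block that contains it rather than by the whole page or by its anchor text alone, and tunneling through a bounded number of irrelevant pages, can be sketched roughly as follows. This is an illustrative sketch, not the authors' implementation: the fetch_and_segment helper, the cosine relevance measure, the 0.1 relevance threshold, and the tunnel-depth limit of 3 are all assumptions introduced here.

# Minimal sketch (not the paper's implementation) of content-block link scoring
# with simple tunneling in a best-first focused crawler.
import heapq
from collections import Counter
from math import sqrt

def cosine_relevance(text, topic_terms):
    """Cosine similarity between a text's term counts and a binary topic-term vector."""
    counts = Counter(text.lower().split())
    dot = sum(counts[t] for t in topic_terms)
    norm = sqrt(sum(c * c for c in counts.values())) * sqrt(len(topic_terms))
    return dot / norm if norm else 0.0

def score_links(blocks, topic_terms):
    """blocks: list of (block_text, [urls found in that block]).
    Each link inherits the relevance score of its enclosing content block."""
    scored = []
    for block_text, urls in blocks:
        block_score = cosine_relevance(block_text, topic_terms)
        scored.extend((url, block_score) for url in urls)
    return scored

def crawl(seed_url, topic_terms, fetch_and_segment, max_pages=100, max_tunnel_depth=3):
    """Best-first crawl with tunneling: irrelevant pages are still expanded
    until max_tunnel_depth consecutive irrelevant hops have been taken."""
    # priority queue of (-score, url, consecutive irrelevant hops so far)
    frontier = [(-1.0, seed_url, 0)]
    visited, harvested = set(), []
    while frontier and len(visited) < max_pages:
        neg_score, url, tunnel_depth = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        # fetch_and_segment is an assumed helper returning (full page text, content blocks)
        page_text, blocks = fetch_and_segment(url)
        relevant = cosine_relevance(page_text, topic_terms) > 0.1  # illustrative threshold
        if relevant:
            harvested.append(url)
        elif tunnel_depth >= max_tunnel_depth:
            continue  # stop tunneling further along this irrelevant path
        next_depth = 0 if relevant else tunnel_depth + 1
        for link_url, link_score in score_links(blocks, topic_terms):
            if link_url not in visited:
                heapq.heappush(frontier, (-link_score, link_url, next_depth))
    return harvested

Resetting the tunnel depth whenever a relevant page is reached is one simple way to realize tunneling; the block-level scores sit between whole-page and anchor-text granularity, which is the trade-off the abstract describes.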




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Luo, N., Zuo, W., Yuan, F., Zhang, C. (2006). A New Method for Focused Crawler Cross Tunnel. In: Wang, G.Y., Peters, J.F., Skowron, A., Yao, Y. (eds) Rough Sets and Knowledge Technology. RSKT 2006. Lecture Notes in Computer Science (LNAI), vol 4062. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11795131_92

  • DOI: https://doi.org/10.1007/11795131_92

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-36297-5

  • Online ISBN: 978-3-540-36299-9

  • eBook Packages: Computer Science, Computer Science (R0)
