
Anti-bot Strategies Based on Human Interactive Proofs

Chapter in: Handbook of Information and Communication Security

Abstract

Human Interactive Proofs (HIPs) are a class of tests used to counter automated tools. They rely on distinguishing actions performed by humans from activities carried out by computers. Several types of HIPs have been proposed, each built on Artificial Intelligence problems that are hard for machines to solve, and they can be classified into three major categories: text-based, audio-based, and image-based. In this chapter, we give a detailed overview of the anti-bot strategies currently in use that rely on HIPs, presenting their main properties, advantages, limitations, and effectiveness.
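To make the challenge-response pattern behind text-based HIPs concrete, the following is a minimal sketch (not taken from the chapter, and omitting the image-distortion step that makes real CAPTCHAs hard for OCR): the server issues a random answer string, keeps only a salted hash of it with an expiry time, and verifies the user's response with a constant-time comparison. The class name `TextHipChallenge` and the 120-second lifetime are illustrative assumptions.

```python
import hashlib
import hmac
import secrets
import string
import time


class TextHipChallenge:
    """Toy challenge-response flow behind a text-based HIP.

    Rendering the answer as a distorted image (the part that is hard
    for bots) is out of scope here; this sketch covers only issuing
    and verifying the challenge on the server side.
    """

    TTL = 120  # illustrative: seconds a challenge stays valid

    def __init__(self, length: int = 6):
        alphabet = string.ascii_uppercase + string.digits
        # The plaintext answer would be handed to the image renderer;
        # only its salted hash is needed for verification.
        self.answer = "".join(secrets.choice(alphabet) for _ in range(length))
        self._salt = secrets.token_bytes(16)
        self._digest = hashlib.sha256(self._salt + self.answer.encode()).digest()
        self._issued = time.monotonic()

    def verify(self, response: str) -> bool:
        if time.monotonic() - self._issued > self.TTL:
            return False  # expired: the client must request a fresh challenge
        candidate = hashlib.sha256(self._salt + response.upper().encode()).digest()
        # Constant-time comparison avoids leaking how many characters matched.
        return hmac.compare_digest(candidate, self._digest)
```

A real deployment would additionally rate-limit verification attempts and invalidate a challenge after its first use, so that a bot cannot replay or brute-force a single challenge.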




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Basso, A., Bergadano, F. (2010). Anti-bot Strategies Based on Human Interactive Proofs. In: Stavroulakis, P., Stamp, M. (eds) Handbook of Information and Communication Security. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04117-4_15


  • DOI: https://doi.org/10.1007/978-3-642-04117-4_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04116-7

  • Online ISBN: 978-3-642-04117-4

