
Accelerating Graph-Based Dependency Parsing with Lock-Free Parallel Perceptron

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11108)

Abstract

Dependency parsing is an important task in natural language processing, and the structured perceptron is a popular training method for dependency parsers. However, graph-based dependency parsing has \(O(n^3)\) decoding complexity, which makes training slow. To address this problem, we propose a parallel training algorithm, the lock-free parallel perceptron. The algorithm makes full use of multi-core machines and substantially reduces training time. Our experiments show that dependency parsing with the parallel perceptron trains about 8 times faster than the traditional structured perceptron when using 10 threads, with no loss in accuracy.
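As a concrete illustration of the training scheme described above, the sketch below shows a minimal lock-free (Hogwild-style) parallel perceptron loop in C++. It is an assumption-laden sketch, not the authors' implementation: the decoder is a placeholder argmax over precomputed candidate parses rather than the \(O(n^3)\) graph-based decoder used in the paper, and all names (Example, decode, worker, train) are hypothetical.

```cpp
// Minimal sketch of lock-free (Hogwild-style) parallel perceptron training.
// Assumption: each example carries a small set of candidate parses, each
// represented by sparse feature ids; the real parser would instead run an
// O(n^3) graph-based decoder (e.g. Eisner's algorithm) over the sentence.
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

struct Example {
    std::vector<std::vector<int>> candidates; // sparse feature ids per candidate parse
    std::size_t gold;                         // index of the gold candidate
};

// Shared weight vector, updated by all threads without locks.
// Concurrent writes are tolerated as "benign" races in this style of training;
// a production implementation might use atomics instead.
std::vector<double> weights;

double score(const std::vector<int>& feats) {
    double s = 0.0;
    for (int f : feats) s += weights[f];
    return s;
}

// Placeholder decoder: pick the highest-scoring candidate (assumes >= 1 candidate).
std::size_t decode(const Example& ex) {
    std::size_t best = 0;
    double best_score = score(ex.candidates[0]);
    for (std::size_t i = 1; i < ex.candidates.size(); ++i) {
        double s = score(ex.candidates[i]);
        if (s > best_score) { best_score = s; best = i; }
    }
    return best;
}

// Each worker processes an interleaved slice of the data and applies
// perceptron updates directly to the shared weights, with no locking.
void worker(const std::vector<Example>& data, std::size_t tid, std::size_t n_threads) {
    for (std::size_t i = tid; i < data.size(); i += n_threads) {
        const Example& ex = data[i];
        std::size_t pred = decode(ex);
        if (pred == ex.gold) continue;
        for (int f : ex.candidates[ex.gold]) weights[f] += 1.0; // reward gold features
        for (int f : ex.candidates[pred])    weights[f] -= 1.0; // penalize predicted features
    }
}

void train(const std::vector<Example>& data, std::size_t dim,
           std::size_t n_threads, std::size_t epochs) {
    weights.assign(dim, 0.0);
    for (std::size_t e = 0; e < epochs; ++e) {
        std::vector<std::thread> pool;
        for (std::size_t t = 0; t < n_threads; ++t)
            pool.emplace_back(worker, std::cref(data), t, n_threads);
        for (auto& th : pool) th.join();
    }
}
```

Each thread reads and updates the single shared weight vector without synchronization; occasional lost or interleaved updates are accepted, which is the usual trade-off that lets this style of training scale nearly linearly with the number of threads while keeping accuracy essentially unchanged.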


Notes

  1. www.seas.upenn.edu/~strctlrn/MSTParser/MSTParser.html.

Acknowledgements

The authors would like to thank the anonymous reviewers for their insightful comments and suggestions on this paper. This work was supported in part by the National Natural Science Foundation of China (No. 61673028).

Author information

Corresponding author

Correspondence to Shuming Ma.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Ma, S., Sun, X., Zhang, Y., Wei, B. (2018). Accelerating Graph-Based Dependency Parsing with Lock-Free Parallel Perceptron. In: Zhang, M., Ng, V., Zhao, D., Li, S., Zan, H. (eds) Natural Language Processing and Chinese Computing. NLPCC 2018. Lecture Notes in Computer Science, vol 11108. Springer, Cham. https://doi.org/10.1007/978-3-319-99495-6_22

  • DOI: https://doi.org/10.1007/978-3-319-99495-6_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99494-9

  • Online ISBN: 978-3-319-99495-6

  • eBook Packages: Computer Science (R0)
