
Disagreement Evaluation of Solutions for Math Word Problem

  • Conference paper
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14945)

Abstract

It has been shown that generation-based methods work well for modeling math word problems. To enhance model performance on math word problems, some studies employ multi-task learning, for example training auxiliary tasks jointly with the objective function of the generation task. Previous work has made notable contributions to improving a model's ability to discriminate solution errors by utilizing ranking tasks. However, these approaches use only the solution's overall representation as the basis for evaluation, which limits the model's ability to distinguish correct from incorrect solutions. To address this deficiency, we propose a method called disagreement evaluation of solutions, which trains the ranking task on disagreement points located by longest prefix matching between correct and incorrect solutions. We then employ a multi-stage approach that fine-tunes the model incrementally to avoid the instability of joint optimization. Moreover, we explain the cooperation between the ranking and generation tasks. Our experiments on the widely used Math23k and MAWPS datasets show that our method achieves competitive results with few trainable parameters. (Code is available at https://github.com/vincent-hyx/DEoS/.)
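To make the method concrete, the sketch below shows one way a disagreement point could be located: the first token at which a correct and an incorrect solution diverge, i.e. the end of their longest common prefix. This is a minimal Python illustration under the assumption that solutions are tokenized equation strings; the actual tokenization and how the located point enters the ranking objective are defined in the authors' released code, not here.

# Minimal sketch (not the authors' implementation): find the first token at
# which a correct and an incorrect solution disagree via longest prefix matching.
def disagreement_point(correct_tokens, incorrect_tokens):
    """Return the index of the first position where the two token lists differ."""
    limit = min(len(correct_tokens), len(incorrect_tokens))
    for i in range(limit):
        if correct_tokens[i] != incorrect_tokens[i]:
            return i
    # One solution is a strict prefix of the other; they diverge right after it.
    return limit

if __name__ == "__main__":
    correct = "( 12 + 8 ) * 3".split()
    incorrect = "( 12 + 8 ) / 3".split()
    idx = disagreement_point(correct, incorrect)
    print(f"Solutions diverge at token {idx}: {correct[idx]!r} vs {incorrect[idx]!r}")
    # -> Solutions diverge at token 5: '*' vs '/'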

Acknowledgments

This work is supported by the Natural Science Foundation of China under Grants No. 62266051 and No. 61966038, and the Scientific Research and Innovation Project of Postgraduate Students in the Academic Degree of Yunnan University (KC-23234112).

Author information

Corresponding author

Correspondence to Xiaobing Zhou.


Ethics declarations

Disclosure of Interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xu, Y., Zhang, X., Wang, J., Zhou, X. (2024). Disagreement Evaluation of Solutions for Math Word Problem. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol 14945. Springer, Cham. https://doi.org/10.1007/978-3-031-70362-1_10


  • DOI: https://doi.org/10.1007/978-3-031-70362-1_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70361-4

  • Online ISBN: 978-3-031-70362-1

  • eBook Packages: Computer Science (R0)
