Contrafactives and Learnability: An Experiment with Propositional Constants

Conference paper in: Logic and Engineering of Natural Language Semantics (LENLS 2022)

Abstract

Holton has drawn attention to a new semantic universal, according to which no natural language has contrafactive attitude verbs. Because factives are universal across natural languages, Holton’s universal is part of a major asymmetry between factive and contrafactive attitude verbs. We previously proposed that this asymmetry arises partly because the meaning of contrafactives is significantly harder to learn than that of factives. Here we extend our work by describing an additional computational experiment that further supports our hypothesis.

This paper reports on research supported by Cambridge University Press and Assessment, University of Cambridge. We thank the NVIDIA Corporation for the donation of the Titan X Pascal GPU used in this research. Simon Wimmer’s work on this paper was supported by a postdoc stipend of the Fritz Thyssen Foundation. We thank audiences in Bochum, Dortmund, Essen, Tokyo, and Utrecht, and anonymous reviewers for LENLS19 and AC23 for discussion of related material. David Strohmaier designed and ran both computational experiments; Simon Wimmer brought philosophical and linguistic discussions to bear on their design and interpretation.


Notes

  1. Holton adopts two further conditions expressions must satisfy to count as contrafactives. In parallel with know, he would regard contra as a mental state verb and as responsive (embedding declarative and interrogative complements). For present purposes, however, we set these conditions aside. We take the question of why no natural language has a verb with the features noted in the text to be of independent interest, and expect the work we present here to also go some way toward addressing why no natural language has a verb that satisfies all of Holton’s conditions.

  2. Although a non-factive (e.g. believe or think) also entails a belief, it contrasts with factives and contrafactives in triggering neither an uncancellable inference to the truth/falsity of its declarative complement nor an inference to truth/falsity that projects through entailment-cancelling environments.

  3. Another reason is that disprove neither entails a belief that its declarative complement is true nor is a mental state verb, though Hyman explicitly questions the mental state condition, making appeal to that condition dialectically ineffective.

  4. [12] also lists pretend and lie as counterexamples to Holton’s universal. But in [8, p. 247], Holton denies that pretend is a contrafactive, as its falsity inference is cancellable, and in [9] he argues that lie does not embed declarative complements.

  5. Most likely there are further reasons for the absence of contrafactives. In future work, we will survey how the costs and benefits of contrafactives tally up.

  6. We thank Dilara Malkoc for discussion of the Turkish data.

  7. We did not train our network to handle ascriptions in entailment-cancelling environments. We plan to fill this gap in a follow-up experiment.

  8. Further details and a comparison of this experiment with others in the literature can be found in [30]. A link to our paper, the code for our network, and further information are available on GitHub [29].

  9. A list of the available hyperparameter settings, together with the best-performing ones, can be found in our online appendix on GitHub [29].

  10. Here and below we used the permutation test included in the scipy library.
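The permutation test mentioned in this note is available in SciPy as scipy.stats.permutation_test. A minimal sketch of such a comparison follows; the accuracy scores and variable names here are made-up illustrations, not the paper’s data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-run accuracy scores for two conditions (illustrative only)
factive_scores = np.array([0.91, 0.88, 0.93, 0.90, 0.89])
contrafactive_scores = np.array([0.84, 0.86, 0.83, 0.85, 0.82])

def mean_diff(x, y, axis):
    # Test statistic: difference of group means (vectorised over resamples)
    return np.mean(x, axis=axis) - np.mean(y, axis=axis)

# 'independent' pools the two samples and re-splits them on each resample,
# which is the standard two-sample permutation test
res = stats.permutation_test(
    (factive_scores, contrafactive_scores),
    mean_diff,
    permutation_type="independent",
    n_resamples=9999,
    alternative="two-sided",
    random_state=rng,
)

print(res.statistic)  # observed mean difference
print(res.pvalue)     # permutation p-value
```

The p-value here is the fraction of resampled statistics at least as extreme as the observed one, so it needs no distributional assumptions about the scores.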

  11. The code for this network and further details are also available on GitHub [29].

  12. We could also expand the main training set for the final evaluation, i.e. add the excluded instances of the propositional constants and corresponding representations. Our approach, however, provides a stricter test of the model’s ability to generalise, since it requires the model to handle attitude ascriptions it has not seen before.
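The held-out design described in this note can be sketched as follows; the verb labels and propositional constants below are hypothetical stand-ins, not the experiment’s actual vocabulary:

```python
from itertools import product

# Hypothetical attitude-verb classes and propositional constants
verbs = ["factive", "contrafactive", "nonfactive"]
constants = [f"p{i}" for i in range(10)]

# Constants withheld from the main training set for the final evaluation
held_out = {"p8", "p9"}

# Every verb-constant ascription, split so that evaluation ascriptions
# involve only constants the model never saw during main training
all_ascriptions = list(product(verbs, constants))
train = [(v, c) for v, c in all_ascriptions if c not in held_out]
test = [(v, c) for v, c in all_ascriptions if c in held_out]

# The two sets are disjoint by construction
assert not set(train) & set(test)
```

Holding out entire constants (rather than random instances) is what makes the test strict: success requires generalising the verb’s meaning to novel arguments, not memorising seen combinations.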

References

  1. Anvari, A., Maldonado, M., Soria Ruiz, A.: The puzzle of reflexive belief construction in Spanish. Proc. Sinn und Bedeutung 23(1), 57–74 (2019). https://doi.org/10.18148/sub/2019.v23i1.503

  2. Caucheteux, C., King, J.R.: Brains and algorithms partially converge in natural language processing. Commun. Biol. 5(1), 134 (2022). https://doi.org/10.1038/s42003-022-03036-1

  3. Davidson, D.: Actions, reasons, and causes. In: Davidson, D. (ed.) Essays on Actions and Events, pp. 3–19. Oxford University Press, Oxford (2001)

  4. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, May 2019. arXiv:1810.04805 [cs]

  5. Glass, L.: The Negatively Biased Mandarin Belief Verb yĭwéi. Studia Linguistica (2022). https://doi.org/10.1111/stul.12202

  6. Goddard, C.: Universals and variation in the lexicon of mental state concepts. In: Words and the Mind: How words capture human experience. Oxford University Press, Oxford (2010)

  7. Hacquard, V., Lidz, J.: On the acquisition of attitude verbs. Ann. Rev. Linguist. 8(1), 193–212 (2022). https://doi.org/10.1146/annurev-linguistics-032521-053009

  8. Holton, R.: I-Facts, factives, and contrafactives. Aristotelian Soc. Supplementary 91(1), 245–266 (2017). https://doi.org/10.1093/arisup/akx003

  9. Holton, R.: Lying about. J. Philos. 116(2), 99–105 (2019). https://doi.org/10.5840/jphil201911625

  10. Hsiao, P.Y.K.: On counterfactual attitudes: a case study of Taiwanese Southern Min. Lingua Sinica 3(1), 4 (2017). https://doi.org/10.1186/s40655-016-0019-7

  11. Huebner, P.A., Sulem, E., Fisher, C., Roth, D.: BabyBERTa: learning more grammar with small-scale child-directed language. In: Proceedings of the 25th Conference on Computational Natural Language Learning, pp. 624–646 (2021). https://doi.org/10.18653/v1/2021.conll-1.49

  12. Hyman, J.: II-knowledge and belief. Aristotelian Soc. Supplementary 91(1), 267–288 (2017). https://doi.org/10.1093/arisup/akx005

  13. Kadmon, N.: Formal Pragmatics: Semantics, Pragmatics, Presupposition, and Focus. Blackwell, Malden (2001)

  14. Lillicrap, T.P., Cownden, D., Tweed, D.B., Akerman, C.J.: Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun. 7(1), 13276 (2016). https://doi.org/10.1038/ncomms13276

  15. Lillicrap, T.P., Santoro, A., Marris, L., Akerman, C.J., Hinton, G.: Backpropagation and the brain. Nat. Rev. Neurosci. 21(6), 335–346 (2020). https://doi.org/10.1038/s41583-020-0277-3

  16. McCready, E.: The Dynamics of Particles. Ph.D. thesis, University of Texas at Austin (2005). https://repositories.lib.utexas.edu/bitstream/handle/2152/1779/mccreadyjre33399.pdf

  17. Nagel, J.: Factive and nonfactive mental state attribution. Mind Lang. 32(5), 525–544 (2017). https://doi.org/10.1111/mila.12157

  18. Pham, T., Bui, T., Mai, L., Nguyen, A.: Out of Order: How important is the sequential order of words in a sentence in Natural Language Understanding tasks? In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 1145–1160. Online (2021). https://doi.org/10.18653/v1/2021.findings-acl.98

  19. Phillips, J., et al.: Knowledge before Belief. Behavioral and Brain Sciences, pp. 1–37 (2020). https://doi.org/10.1017/S0140525X20000618

  20. Phillips, J., Norby, A.: Factive theory of mind. Mind Lang. 36(1), 3–26 (2021). https://doi.org/10.1111/mila.12267

  21. Rogers, A., Kovaleva, O., Rumshisky, A.: A primer in BERTology: what we know about how BERT works. Trans. Assoc. Comput. Linguist. 8, 842–866 (2020). https://doi.org/10.1162/tacl_a_00349

  22. Scellier, B., Bengio, Y.: Equilibrium propagation: bridging the gap between energy-based models and backpropagation. Front. Comput. Neurosci. 11, 24 (2017). https://doi.org/10.3389/fncom.2017.00024

  23. Schrimpf, M., et al.: The neural architecture of language: integrative modeling converges on predictive processing. Proc. Natl. Acad. Sci. 118(45), November 2021. https://doi.org/10.1073/pnas.2105646118

  24. Shatz, M., Diesendruck, G., Martinez-Beck, I., Akar, D.: The influence of language and socioeconomic status on children’s understanding of false belief. Dev. Psychol. 39(4), 717–729 (2003). https://doi.org/10.1037/0012-1649.39.4.717

  25. Shuxiang, L.: Eight Hundred Words in Contemporary Chinese. Commercial Press, Beijing (1999)

  26. Steinert-Threlkeld, S.: An explanation of the veridical uniformity universal. J. Semant. (2019). https://doi.org/10.1093/jos/ffz019

  27. Steinert-Threlkeld, S., Szymanik, J.: Learnability and semantic universals. Semantics Pragmatics 12 (2019). https://doi.org/10.3765/sp.12.4

  28. Steinert-Threlkeld, S., Szymanik, J.: Ease of learning explains semantic universals. Cognition 195 (2020). https://doi.org/10.1016/j.cognition.2019.104076

  29. Strohmaier, D.: Contrafactives: Exploration of a Grid World, November 2022. https://github.com/dstrohmaier/contrafactives_grid_world

  30. Strohmaier, D., Wimmer, S.: Contrafactives and Learnability. In: Proceedings of the 23rd Amsterdam Colloquium. Amsterdam (2022)

  31. Vaswani, A., et al.: Attention is All you Need. In: 31st Conference on Neural Information Processing Systems, pp. 1–11 (2017)


Author information

Correspondence to Simon Wimmer.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Strohmaier, D., Wimmer, S. (2023). Contrafactives and Learnability: An Experiment with Propositional Constants. In: Bekki, D., Mineshima, K., McCready, E. (eds) Logic and Engineering of Natural Language Semantics. LENLS 2022. Lecture Notes in Computer Science, vol 14213. Springer, Cham. https://doi.org/10.1007/978-3-031-43977-3_5


  • DOI: https://doi.org/10.1007/978-3-031-43977-3_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43976-6

  • Online ISBN: 978-3-031-43977-3

  • eBook Packages: Computer Science; Computer Science (R0)
