Abstract
Holton has drawn attention to a new semantic universal, according to which no natural language has contrafactive attitude verbs. Because factives are universal across natural languages, Holton’s universal is part of a major asymmetry between factive and contrafactive attitude verbs. We previously proposed that this asymmetry arises partly because the meaning of contrafactives is significantly harder to learn than that of factives. Here we extend our work by describing an additional computational experiment that further supports our hypothesis.
This paper reports on research supported by Cambridge University Press and Assessment, University of Cambridge. We thank the NVIDIA Corporation for the donation of the Titan X Pascal GPU used in this research. Simon Wimmer’s work on this paper was supported by a postdoc stipend of the Fritz Thyssen Foundation. We thank audiences in Bochum, Dortmund, Essen, Tokyo, and Utrecht, and anonymous reviewers for LENLS19 and AC23 for discussion of related material. David Strohmaier designed and ran both computational experiments; Simon Wimmer brought philosophical and linguistic discussions to bear on their design and interpretation.
Notes
- 1.
Holton adopts two further conditions expressions must satisfy to count as contrafactives. In parallel with know, he would regard contra as a mental state verb and as responsive (embedding declarative and interrogative complements). For present purposes, however, we set these conditions aside. We take the question of why no natural language has a verb with the features noted in the text to be of independent interest, and expect the work we present here to also go some way toward addressing why no natural language has a verb that satisfies all of Holton’s conditions.
- 2.
Although a non-factive (e.g. believe or think) entails a belief too, it contrasts with factives and contrafactives in triggering neither an uncancellable inference to the truth/falsity of its declarative complement nor an inference to truth/falsity that projects through entailment-cancelling environments.
- 3.
Another reason is that disprove neither entails a belief that its declarative complement is true nor is a mental state verb, though Hyman explicitly questions the mental state condition, making appeal to that condition dialectically ineffective.
- 4.
- 5.
Most likely there are further reasons for the absence of contrafactives. In future work, we will survey how the costs and benefits of contrafactives tally up.
- 6.
We thank Dilara Malkoc for discussion of the Turkish data.
- 7.
We did not train our network to handle ascriptions in entailment-cancelling environments. We plan to fill this gap in a follow-up experiment.
- 8.
- 9.
A list of the available as well as the best-performing hyperparameters can be found in our online appendix on GitHub [29].
- 10.
Here and below we used the permutation test included in the scipy library.
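A two-sample permutation test of the kind scipy provides can be sketched as follows. This is a minimal illustration only: the sample arrays are hypothetical stand-in scores, not the paper's data, and the mean-difference statistic is one common choice, not necessarily the one used in the experiments.

```python
# Minimal sketch of a two-sample permutation test using scipy.stats.
# The data below are hypothetical stand-ins for two groups of model scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0.85, 0.05, size=30)  # hypothetical scores, condition A
group_b = rng.normal(0.80, 0.05, size=30)  # hypothetical scores, condition B

def mean_diff(x, y, axis=-1):
    """Difference in group means; the statistic to be permuted."""
    return np.mean(x, axis=axis) - np.mean(y, axis=axis)

res = stats.permutation_test(
    (group_a, group_b),
    mean_diff,
    permutation_type="independent",  # shuffle group-membership labels
    vectorized=True,                 # mean_diff accepts an axis argument
    n_resamples=9999,
    random_state=0,
)
print(f"statistic = {res.statistic:.4f}, p = {res.pvalue:.4f}")
```

Under the null hypothesis that both groups come from the same distribution, relabelling observations leaves the statistic's distribution unchanged; the p-value is the fraction of relabellings yielding a statistic at least as extreme as the observed one.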
- 11.
- 12.
We could also expand the main training set for the final evaluation, i.e. add the excluded instances of the propositional constants and corresponding representations. However, our approach tests the model’s ability to generalise more strictly, as we require it to generalise to attitude ascriptions it has not seen before.
References
Anvari, A., Maldonado, M., Soria Ruiz, A.: The puzzle of reflexive belief construction in Spanish. Proc. Sinn und Bedeutung 23(1), 57–74 (2019). https://doi.org/10.18148/sub/2019.v23i1.503
Caucheteux, C., King, J.R.: Brains and algorithms partially converge in natural language processing. Commun. Biol. 5(1), 134 (2022). https://doi.org/10.1038/s42003-022-03036-1
Davidson, D.: Actions, reasons, and causes. In: Davidson, D. (ed.) Essays on Actions and Events, pp. 3–19. Oxford University Press, Oxford (2001)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, May 2019. arXiv:1810.04805 [cs]
Glass, L.: The negatively biased Mandarin belief verb yĭwéi. Studia Linguistica (2022). https://doi.org/10.1111/stul.12202
Goddard, C.: Universals and variation in the lexicon of mental state concepts. In: Words and the Mind: How words capture human experience. Oxford University Press, Oxford (2010)
Hacquard, V., Lidz, J.: On the acquisition of attitude verbs. Ann. Rev. Linguist. 8(1), 193–212 (2022). https://doi.org/10.1146/annurev-linguistics-032521-053009
Holton, R.: I-Facts, factives, and contrafactives. Aristotelian Soc. Supplementary 91(1), 245–266 (2017). https://doi.org/10.1093/arisup/akx003
Holton, R.: Lying about. J. Philos. 116(2), 99–105 (2019). https://doi.org/10.5840/jphil201911625
Hsiao, P.Y.K.: On counterfactual attitudes: a case study of Taiwanese Southern Min. Lingua Sinica 3(1), 4 (2017). https://doi.org/10.1186/s40655-016-0019-7
Huebner, P.A., Sulem, E., Fisher, C., Roth, D.: BabyBERTa: learning more grammar with small-scale child-directed language. In: Proceedings of the 25th Conference on Computational Natural Language Learning, pp. 624–646 (2021). https://doi.org/10.18653/v1/2021.conll-1.49
Hyman, J.: II-knowledge and belief. Aristotelian Soc. Supplementary 91(1), 267–288 (2017). https://doi.org/10.1093/arisup/akx005
Kadmon, N.: Formal Pragmatics: Semantics, Pragmatics, Presupposition, and Focus. Blackwell, Malden (2001)
Lillicrap, T.P., Cownden, D., Tweed, D.B., Akerman, C.J.: Random synaptic feedback weights support error backpropagation for deep learning. Nat. Commun. 7(1), 13276 (2016). https://doi.org/10.1038/ncomms13276
Lillicrap, T.P., Santoro, A., Marris, L., Akerman, C.J., Hinton, G.: Backpropagation and the brain. Nat. Rev. Neurosci. 21(6), 335–346 (2020). https://doi.org/10.1038/s41583-020-0277-3
McCready, E.: The Dynamics of Particles. Ph.D. thesis, University of Texas at Austin (2005). https://repositories.lib.utexas.edu/bitstream/handle/2152/1779/mccreadyjre33399.pdf
Nagel, J.: Factive and nonfactive mental state attribution. Mind Lang. 32(5), 525–544 (2017). https://doi.org/10.1111/mila.12157
Pham, T., Bui, T., Mai, L., Nguyen, A.: Out of Order: How important is the sequential order of words in a sentence in Natural Language Understanding tasks? In: Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pp. 1145–1160. Online (2021). https://doi.org/10.18653/v1/2021.findings-acl.98
Phillips, J., et al.: Knowledge before belief. Behav. Brain Sci., pp. 1–37 (2020). https://doi.org/10.1017/S0140525X20000618
Phillips, J., Norby, A.: Factive theory of mind. Mind Lang. 36(1), 3–26 (2021). https://doi.org/10.1111/mila.12267
Rogers, A., Kovaleva, O., Rumshisky, A.: A primer in BERTology: what we know about how BERT works. Trans. Assoc. Comput. Linguist. 8, 842–866 (2020). https://doi.org/10.1162/tacl_a_00349
Scellier, B., Bengio, Y.: Equilibrium propagation: bridging the gap between energy-based models and backpropagation. Front. Comput. Neurosci. 11, 24 (2017). https://doi.org/10.3389/fncom.2017.00024
Schrimpf, M., et al.: The neural architecture of language: integrative modeling converges on predictive processing. Proc. Natl. Acad. Sci. 118(45), November 2021. https://doi.org/10.1073/pnas.2105646118
Shatz, M., Diesendruck, G., Martinez-Beck, I., Akar, D.: The influence of language and socioeconomic status on children’s understanding of false belief. Dev. Psychol. 39(4), 717–729 (2003). https://doi.org/10.1037/0012-1649.39.4.717
Shuxiang, L.: Eight Hundred Words in Contemporary Chinese. Commercial Press, Beijing (1999)
Steinert-Threlkeld, S.: An explanation of the veridical uniformity universal. J. Semant. (2019). https://doi.org/10.1093/jos/ffz019
Steinert-Threlkeld, S., Szymanik, J.: Learnability and semantic universals. Semantics and Pragmatics 12 (2019). https://doi.org/10.3765/sp.12.4
Steinert-Threlkeld, S., Szymanik, J.: Ease of learning explains semantic universals. Cognition 195 (2020). https://doi.org/10.1016/j.cognition.2019.104076
Strohmaier, D.: Contrafactives: Exploration of a Grid World, November 2022. https://github.com/dstrohmaier/contrafactives_grid_world
Strohmaier, D., Wimmer, S.: Contrafactives and Learnability. In: Proceedings of the 23rd Amsterdam Colloquium. Amsterdam (2022)
Vaswani, A., et al.: Attention is All you Need. In: 31st Conference on Neural Information Processing Systems, pp. 1–11 (2017)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Strohmaier, D., Wimmer, S. (2023). Contrafactives and Learnability: An Experiment with Propositional Constants. In: Bekki, D., Mineshima, K., McCready, E. (eds) Logic and Engineering of Natural Language Semantics. LENLS 2022. Lecture Notes in Computer Science, vol 14213. Springer, Cham. https://doi.org/10.1007/978-3-031-43977-3_5
Print ISBN: 978-3-031-43976-6
Online ISBN: 978-3-031-43977-3