Abstract
Many philosophers of science follow Hempel in embracing both substantive and methodological anti-psychologism regarding the study of explanation. The former thesis denies that explanations are constituted by psychological events, and the latter denies that psychological research can contribute much to the philosophical investigation of the nature of explanation. Substantive anti-psychologism is commonly defended by citing cases, such as hyper-complex descriptions or vast computer simulations, which are reputedly generally agreed to constitute explanations but which defy human comprehension and, as a result, fail to engender any relevant psychological events. It is commonly held that the truth of the substantive thesis would lend support to the methodological thesis. However, the standard argument for the substantive thesis presumes that philosophers’ own judgments about the aforementioned cases issue from mastery of the lay or scientific norms governing the use of ‘explanation.’ Here we challenge this presumption with a series of experiments indicating that both lay and scientific populations require of explanations that they actually render their targets intelligible. This research not only undermines a standard line of argument for substantive anti-psychologism, but also demonstrates the utility of psychological research methods for answering meta-questions about the norms governing the use of ‘explanation.’





Notes
There is a clear difference between comprehending what a set of descriptions or models asserts about the world and comprehending why the target happening occurred. This is roughly what de Regt (2009) has in mind with the distinction between understanding a phenomenon and understanding a theory. Strevens (2013) refers to it as the distinction between ‘understanding why’ and ‘understanding with.’
We also offer one bit of anecdotal evidence that such a trend exists: At a recent gathering of mechanists (Les Mécaniciens: Salon des Refusés) a prominent attendee polled the room regarding whether understanding was essential in order for a certain hyper-complex, understanding-defying computer simulation to constitute an explanation. The point, again, was to demonstrate that explanation is dissociable from psychological events. Of the forty or so attendees, only one raised a hand in defense of understanding.
There is relatively little discussion of nomic frameworks, which could be because psychologists have focused primarily on lay populations.
These passages are from an abstract of an unpublished paper written for the conference Les Mécaniciens: Salon des Refusés held 9 April 2011. In this same abstract, Craver does suggest that, while psychology has little to teach about what explanations are or about their good-making features, it may be “useful for learning how to discover and teach explanations, for recognizing the kinds of explanatory biases to which humans are prone, and for clarifying why certain forms of explanation should be so appealing and popular among creatures like us.”
The capacity clause might be interpreted weakly, but what we intend here is the strong claim that the capacity to bring about understanding is a non-accidental feature of the model—that is, it is not enough for the model to merely cause understanding of the target. Rather the capacity to bring about understanding must stem directly from what the model asserts about the world such that anyone who comprehends this will, by virtue of that fact, come to understand how or why (at least possibly) the target happening occur(-s/-ed).
In one condition the models are said to actually render the target happening intelligible to someone, and in another they are said to be utterly incapable of rendering the happening intelligible. Between the two extremes are cases where the model has the unexercised capacity to render the target intelligible. We also believed that it was important to avoid testing whether or not participants thought that one person explained (verb) the target happening to another. This so-called ‘pragmatic’ notion of explanation has long been regarded as involving psychological states like finding intelligible (Hempel 1965). Where the current dispute with philosophical orthodoxy lies, however, is with the more ‘ontological’ question of when a model constitutes an explanation (noun).
In the study, the Intelligible condition was also varied in terms of whether the theory was described as accurate, possibly accurate, or conceivably accurate but factually inaccurate. We ignore those differences here.
It bears noting that, based upon their comprehension scores, participants in the Potentially Intelligible condition were generally cognizant of the fact that the model mentioned in the vignettes had an unrealized potential to render the target happening intelligible and that the model had no such potential in the Never Intelligible condition.
To be eligible, MTurk workers had to be located in the United States and have at least an 80 % approval rate for jobs done on MTurk. Eligible workers were redirected to Survey Monkey (www.surveymonkey.com), where they completed the study. Afterwards, they were directed back to MTurk, where they were compensated with $0.40.
There was some concern that participants might confuse this question with the question of whether or not the model constitutes a good or a satisfying explanation. Participants were thus informed at the outset that they may also be asked to rate the goodness or satisfactoriness of the explanation (cf. Lombrozo and Carey 2006). Because the ‘constitutes an explanation’ question concerns whether or not the modifier ‘explanation’ is even appropriate, only those participants who responded affirmatively to this question were asked the follow-up question of whether or not a good explanation had been provided. Mishra and Brewer (2003) found that when participants were forewarned that they would be asked a series of simple comprehension questions, they paid closer attention to the materials involved in their study. We likewise informed participants in advance that they were to be asked a series of simple questions about the vignettes. Among the questions we asked was one about whether the materials Dr. Pavna posted were too complex, simple enough, too exotic, or too far-fetched to enable other scientists to understand the origin of gamma-ray bursts.
Also following Lombrozo and Carey (2006), we felt that the logic of the situation dictated that we should ask participants to answer the ‘good explanation’ question only if they answered the ‘explanation’ question in the affirmative. It is important to note that our goal here was not to gather data on goodness ratings, but merely to make it clear to participants that they should distinguish the two sorts of question.
Our comprehension check indicated that participants understood the vignettes they read, particularly insofar as intelligibility was concerned. In response to these questions, 85 % of the participants in the Intelligible condition responded that the model was simple enough to enable other scientists to understand the origin of gamma-ray bursts. In the Potentially Intelligible condition, 58 % of participants responded that it was simple enough while 27 % responded that it was too complex. In the Never Intelligible condition, 82 % of participants responded that it was too complex.
We say ‘unavoidably’ because the hypotheses being tested here concern the relevance of the actual or potential provision of intelligibility to someone, not necessarily to oneself. Intuitively, the idea is that most of us would allow that science has produced countless explanations, the details of which we could not hope to understand (at least not without extensive schooling). What makes them explanations is, we hypothesize, not that they render things intelligible to us personally, but that they do (or can) render things intelligible to specialists in the relevant field. One cannot test these ideas simply by examining cases where a model does and does not render the happening intelligible to participants themselves.
To be eligible, MTurk workers had to be located in the United States and have at least an 80 % approval rate for jobs done on MTurk. Eligible workers were redirected to Survey Monkey (www.surveymonkey.com), where they completed the study. Workers were compensated with $0.40.
In the experiment, the order of presentation was randomized.
In subsequent experiments, we have begun replacing the plausibility measure with an accuracy measure and including a lengthy distractor vignette, though the results are quite comparable.
As before, participants read one of three versions of the story (Intelligible, Potentially Intelligible, or Never Intelligible), and then they were asked to select from a list the claims that were likely to be true based upon what they read. Afterward, they were asked to specify their primary area of scientific research (Fig. 4) and level of training. They were also asked to specify their age and sex and to share their thoughts regarding the experiment.
References
Braithwaite, R. B. (1946). Teleological explanation: The presidential address. Proceedings of the Aristotelian Society, 47, i–xx.
Bransford, J. D., & Franks, J. J. (1971). The abstraction of linguistic ideas. Cognitive Psychology, 2(4), 331–350.
Brewer, W. F. (2001). Models in science and mental models in scientists and nonscientists. Mind & Society, 2(2), 33–48.
Brewer, W. F., Chinn, C. A., & Samarapungavan, A. (2000). Explanation in scientists and children. In F. Keil & R. Wilson (Eds.), Explanation and cognition. Cambridge, MA: The MIT Press.
Bronson, P., & Merryman, A. (2010). The creativity crisis. Newsweek.
Churchland, P. M. (1989). A neurocomputational perspective: The nature of mind and the structure of science. Cambridge, MA: The MIT Press.
Craver, C. F. (2007). Explaining the brain. New York: Oxford University Press.
de Regt, H. W. (2009). The epistemic value of understanding. Philosophy of Science, 76(5), 585–597.
Geldard, F. A. (1942). Explanation in science. American Scientist, 30(3), 202–211.
Gelman, S. A., & Wellman, H. M. (1991). Insides and essences: Early understandings of the non-obvious. Cognition, 38(3), 213–244.
Gentner, D. (1981). Verb semantic structures in memory for sentences: Evidence for componential representation. Cognitive Psychology, 13(1), 56–83.
Giere, R. N. (1990). Explaining science: A cognitive approach. Chicago: University of Chicago Press.
Gopnik, A. (2000). Explanation as orgasm and the drive for causal knowledge: The function, evolution, and phenomenology of the theory formation system. In F. Keil & R. Wilson (Eds.), Explanation and cognition. Cambridge, MA: The MIT Press.
Graham, G., & Horgan, J. (1994). Southern fundamentalism and the end of philosophy. Philosophical Issues, 5, 219–247.
Hempel, C. G. (1942). The function of general laws in history. The Journal of Philosophy, 39(2), 35–48.
Hempel, C. G. (1965). Aspects of scientific explanation and other essays in the philosophy of science. New York: The Free Press.
Hickling, A. K., & Wellman, H. M. (2001). The emergence of children’s causal explanations and theories: Evidence from everyday conversation. Developmental Psychology, 37(5), 668–683.
Hospers, J. (1946). On explanation. The Journal of Philosophy, 43(13), 337–356.
Jackson, F. (1994). Armchair metaphysics. In J. O’Leary-Hawthorne & M. Michael (Eds.), Philosophy in mind: The place of philosophy in the study of mind. Dordrecht: Kluwer.
Keil, F. C. (2006). Explanation and understanding. Annual Review of Psychology, 57, 227–254.
Keil, F., & Wilson, R. (2000). Explanation and cognition. Cambridge, MA: The MIT Press.
Kornblith, H. (1998). The role of intuition in philosophical inquiry: An account with no unnatural ingredients. In M. DePaul & W. Ramsey (Eds.), Rethinking Intuition: The psychology of intuition and its role in philosophical inquiry. Lanham, MD: Rowman & Littlefield.
Lombrozo, T. (2006). The structure and function of explanations. Trends in Cognitive Sciences, 10(10), 464–470.
Lombrozo, T., & Carey, S. (2006). Functional explanation and the function of explanation. Cognition, 99(2), 167–204.
Machamer, P., & Woody, A. (1994). A model of intelligibility in science: Using Galileo’s balance as a model for understanding the motion of bodies. Science & Education, 3(3), 215–244.
Maher, P. (2007). Explication defended. Studia Logica, 86(2), 331–341.
Miller, D. L. (1946). The meaning of explanation. Psychological Review, 53(4), 241–246.
Miller, D. L. (1947). Explanation versus description. The Philosophical Review, 56(3), 306–312.
Mishra, P., & Brewer, W. F. (2003). Theories as a form of mental representation and their role in the recall of text information. Contemporary Educational Psychology, 28, 277–303.
Nersessian, N. J. (2009). How do engineering scientists think? Model-based simulation in biomedical engineering research laboratories. Topics in Cognitive Science, 1, 730–757.
Powell, D., Horne, Z., & Pinillos, A. (forthcoming). Semantic integration as a method for investigating concepts. In J. Beebe (Ed.), Advances in experimental epistemology. New York: Continuum Press.
Psillos, S. (2011). Making contact with molecules: On Perrin and Achinstein. In G. J. Morgan (Ed.), Philosophy of science matters: The philosophy of Peter Achinstein. New York: Oxford University Press.
Sellars, W. (1997). Empiricism and the philosophy of mind. Cambridge, MA: Harvard University Press.
Simon, H. A. (1966). Thinking by computers. In R. G. Colodny (Ed.), Mind and cosmos: Essays in contemporary science and philosophy. Pittsburgh: University of Pittsburgh Press.
Strevens, M. (2013). No understanding without explanation. Studies in History and Philosophy of Science. doi:10.1016/j.shpsa.2012.12.005.
Sulin, R. A., & Dooling, D. J. (1974). Intrusion of a thematic idea in retention of prose. Journal of Experimental Psychology, 103(2), 255.
Thagard, P., & Litt, A. (2008). Models of scientific explanation. In R. Sun (Ed.), The Cambridge handbook of computational cognitive modeling. Cambridge, MA: Cambridge University Press.
Trout, J. D. (2007). The psychology of scientific explanation. Philosophy Compass, 2(3), 564–591.
Vosniadou, S. (2002). Mental models in conceptual development. In L. Magnani & N. J. Nersessian (Eds.), Model-based reasoning: Science, technology, values. Berlin: Springer.
Acknowledgments
For their many helpful suggestions regarding this research, our heartfelt thanks go to Mike Braverman, Bill Brewer, Mark Donahue, Andrew Higgins, Peter Machamer, and Derek Powell. This research was funded by a generous grant from the University of Illinois Campus Research Board and kind assistance from the Beckman Institute for Advanced Science and Technology.
Cite this article
Waskan, J., Harmon, I., Horne, Z. et al. Explanatory anti-psychologism overturned by lay and scientific case classifications. Synthese 191, 1013–1035 (2014). https://doi.org/10.1007/s11229-013-0304-2