
Particularism, Analogy, and Moral Cognition

Published in Minds and Machines.

Abstract

‘Particularism’ and ‘generalism’ refer to families of positions in the philosophy of moral reasoning, with the former playing down the importance of principles, rules or standards, and the latter stressing their importance. Part of the debate has taken an empirical turn, and this turn has implications for AI research and the philosophy of cognitive modeling. In this paper, Jonathan Dancy’s approach to particularism (arguably one of the best known and most radical approaches) is questioned on both logical and empirical grounds. Doubts are raised over whether Dancy’s brand of particularism can adequately explain the graded nature of similarity assessments in analogical arguments. Simple recurrent neural network models of moral case classification are also presented and discussed, in order to raise concerns about Dancy’s suggestion that neural networks can help us to understand how we could classify situations in a way that is compatible with his particularism. Throughout, the idea of a surveyable standard—one with restricted length and complexity—plays a key role. Analogical arguments are taken to involve multidimensional similarity assessments, and surveyable contributory standards are taken to be attempts to articulate the dimensions of similarity that may exist between cases. This work will be of relevance both to those interested in computationally modeling human moral cognition and to those interested in how such models may or may not improve our philosophical understanding of such cognition.

[Figures 1–4 not shown in this preview.]

Notes

  1. It is an empirical question how long or complex a principle can be before human beings can no longer consciously process it. I use the length of 1,000 encyclopaedia volumes for effect. I suspect principles would become unsurveyable long before reaching that length. Indeed, a principle of one volume in length is likely too long. Also, what is surveyable to us now is a function of our current cognitive architecture. If our cognitive architecture changes—whether through evolution, genetic engineering, or biological or cybernetic implants—then what is surveyable to “us” or our progeny may also change.

  2. There is no need to assume that contributory principles need to follow lexical concepts (lying) or near-lexical concepts (causing pain) closely. It may take quite a bit of work to state a contributory principle (involving multiple concepts like promising, duress, and what is otherwise morally impermissible). More than a little reflection may be required to explicitly articulate or even approximate such principles. Moreover, not only is it possible for one contributory principle to make use of more than one key lexical or near-lexical concept, it may take more than one contributory principle to articulate the relevant dimensions of a single lexical or near-lexical concept. For example, someone might want to argue that under circumstance a, b, or c, allowing to die contributes to impermissibility, but under circumstance x, y, or z, allowing to die contributes to permissibility.

  3. “Thick” and “thin” are usually used to qualify moral terms (“right” or “courageous”), not standards. My usage here deviates from common usage. This should be kept in mind in my comparisons with other philosophers. For example, McNaughton and Rawling (2000) might refer to my thin standards as “fat.”

  4. Since thickness is defined in terms of unsurveyability, it is a cognitively constrained notion: it turns on what the limits of our cognitive architecture allow us to explicitly reason about. This should be distinguished from metaphysical thickness, which would eliminate the reference to surveyability. Metaphysical thickness does not turn on the limits of cognition. Accepting thickness understood in a cognitively constrained manner does not entail accepting metaphysical thickness.

  5. If Jack is unsure of whether he did the right thing by breaking his promise, someone might reassure him by using a more extreme example, such as Case 2, to help make clear the idea that we are not obliged to keep a promise that ought not to have been made in the first place.

  6. I am assuming that when we attempt to linguistically articulate what makes cases similar, we are looking for something surveyable. Perhaps there are unsurveyable accounts of what makes cases similar or dissimilar, but they could not play a role in the game of asking for and giving reasons in ethical discourse.

  7. A further wrinkle: which promises ought not to be made will depend, in part, on which promises have already been permissibly made. This means that an exhaustive nonmoral characterization of promises not made is insufficient for rewriting clause (a), since there will be cases where the promises we want to exclude will depend on prior promises permissibly made. A nonmorally specified version of CSP would have to nonmorally characterize both impermissible and permissible promises. Granted, if some form of utilitarianism is true, then a nonmoral specification of the requisite promises is possible, since we could provide a nonenumerative, general account of which promises are permissible. But I am not inclined to think utilitarianism is the whole story either about promises or the moral life more generally. If we accept that the promises we ought and ought not to make are at least in part a function of our various life projects and their associated commitments, then a story about which promises we ought or ought not to make will be anything but simple. Promising to take Jasmine to the parade is permissible for Habib in large part because of the relation he bears to her: he is her father. In virtue of this relation, he has certain obligations and rights. It would not be acceptable for a total stranger off the street to make such a promise to Jasmine without consulting her parents (even if it did promote overall happiness, preference satisfaction or whatever). All sorts of familial, professional, and other social relations (and concomitant obligations and rights) are relevant to which promises are appropriate to make or not. It is not obvious that there is a surveyable, total standard that captures everything that needs to be captured in nonmoral terms.

  8. In engaging Dancy’s work, McNaughton and Rawling (2000, pp. 271–272) consider this sort of objection and in the course of a short paragraph hint at two possible replies. The next paragraph in the body of this paper constitutes an example of the first of the two approaches they suggested.

  9. There are different ways in which this could be done. Considerations of justice could be part of the enabling conditions for a contributory principle pertaining to mercy, or perhaps considerations of justice and mercy are part of one principle that is jointly enabled by some other set of conditions. I take no stand on those issues here. I am simply making the point that defenders of contributory standards need not explicate normative concepts one by one.

  10. The material in this paragraph takes the second of the two approaches hinted at by McNaughton and Rawling (2000, p. 272). The two approaches are not obviously incompatible. It might be the case that the absence of duress and deception figure as part of the favouring that promising contributes to an action, and ought implying can could figure as an enabling condition. Perhaps there are very general kinds of background conditions that function as enabling conditions, such as being alive, being sentient, and ought implying can. If enabling conditions are restricted to these kinds of considerations, then there is no need to include freedom from duress and deception as enabling conditions. However, if we expand enabling conditions to include considerations like freedom from duress and deception, then we have abandoned the approach mentioned in the previous note. When expanding enabling conditions in this way, offering freedom from duress as a reason for action would be to offer an enabling reason rather than a favouring reason. As mentioned in the text, provided the list of enabling conditions is surveyable, such an approach goes against the spirit of particularism, as McNaughton and Rawling (2000, p. 272) indicate.

  11. The argument below does not depend on this exact formulation. We could attach enabling conditions that would allow the killing of humans to reverse its pertinence, and the arguments below still go through provided that the enabling conditions are nonmorally and surveyably specified.

  12. I do not want to suggest that culpability plays no role with respect to our similarity assessments. It is possible to construct sets of cases involving the killing of culpable individuals and the killing of blameless individuals, and the former may be more similar to one another than the latter. Blamelessness of the individual killed may well be acting as what Dancy (2006, chapter 3) refers to as an intensifier (with respect to the wrongness of killing). In the hope of undercutting arguments for contributory standards, Dancy argues for distinguishing favouring/disfavouring reasons from enabling/disabling conditions, and both of these from intensifiers/attenuators. However, these same distinctions may actually help to defend some thin contributory standards. If killing a human (without thick qualifiers) can be shown to explain similarity assessments in a wide range of cases (including cases both of culpability and blamelessness, as well as cases that are overall permissible and impermissible), then we have some reason to think it is a thin standard. We can take the thick qualifiers people are usually tempted to attach to that standard and argue that they are functioning, say, as intensifiers or attenuators, depending on how we formulate them.

  13. We could argue that the number of similarities and differences is infinite in the following way. The agent in X0 is non-identical to one fish, and non-identical to two fish, and non-identical to three fish, and so on to infinity. The agent in X1 is non-identical to one fish, and non-identical to two fish, and so on to infinity. The Xi share these non-identities, which means that they have an infinite number of similarities. A similar proof strategy shows that they have an infinite number of differences. The agent in X0 is identical to himself or to one fish, and identical to himself or to two fish, and so on; the same holds for the agent in X1 with respect to himself. Since there are different agents in the Xi, each of these disjunctive properties held by one agent is lacked by the other, so there must be an infinite number of differences between them. Scott Brewer (1996, p. 932) makes the same point. However, Brewer appears to suggest that we are searching for a kind of total standard when we engage in case-based reasoning. It is one thing to suggest that there is a need for general relevance constraints—a point with which I am inclined to agree—but something quite different to turn that into the search for total (surveyable) standards, which is more contentious.
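The padding construction in this note can be mechanised. Each trivially true negative identity ("non-identical to n fish") is satisfied by both agents, so every finite prefix of the stream is a set of shared similarities, and the stream never ends. A toy sketch (the predicate encoding is mine, not the paper's):

```python
from itertools import islice

def shared_nonidentities(agent_a, agent_b):
    """Yield predicates of the form 'non-identical to n fish' that both
    agents satisfy. Since no agent is identical to any number of fish,
    every predicate in the stream is a shared similarity."""
    n = 1
    while True:
        not_n_fish = lambda x, n=n: x != f"{n} fish"  # trivially true of agents
        if not_n_fish(agent_a) and not_n_fish(agent_b):
            yield f"non-identical to {n} fish"
        n += 1

# Any finite prefix of the unbounded stream of shared similarities:
print(list(islice(shared_nonidentities("agent in X0", "agent in X1"), 3)))
# ['non-identical to 1 fish', 'non-identical to 2 fish', 'non-identical to 3 fish']
```

The same pattern with disjunctive identity predicates would enumerate an unbounded stream of differences; the philosophical point is that mere counting of shared properties cannot ground graded similarity without relevance constraints.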

  14. We could ask exactly the same question for local similarities. I will focus on the problem with local differences and a possible reply, but it should be understood that a related problem and reply exists for similarities.

  15. We could even contrive cases where not being identical to one fish, two fish, …, are relevant considerations, but I will spare the reader that.

  16. The word ‘considered’ is used in a special sense. Consideration of contributory standards could be an explicit, conscious endeavor. Alternatively, it could involve an implicit or tacit use of information embodied in contributory standards.

  17. See Anderson and Anderson (2006) for a discussion of a machine learning algorithm applied to a restricted sub-domain of biomedical ethics. It yields a surveyable principle, but the domain in question was very limited. See Anderson and Anderson (2007) for a brief, more general discussion of machine learning and other AI techniques as applied to ethics. See Wallach and Allen (2008) for a book-length discussion of various AI approaches to modelling ethics.
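The result mentioned here (a learned principle short enough to be surveyable, in a narrow domain) can be illustrated with a deliberately crude stand-in. The code below is not the Andersons' algorithm; the "duty satisfaction" features, the training cases, and the single-threshold hypothesis space are all invented for illustration:

```python
# Illustrative only: a toy learner searching for a single-threshold rule over
# hypothetical 'duty satisfaction' scores in -2..2. Both the features and the
# data are invented; the published system and training set differ.
cases = [
    # (autonomy, nonmaleficence) -> recommended action: 1 = accede, 0 = try again
    ((2, -1), 1), ((1, 0), 1), ((-1, -2), 0), ((-2, 1), 0), ((2, 0), 1),
]

def learn_stump(cases):
    """Return the (feature, threshold) rule with fewest training errors."""
    best = None
    for f in range(2):
        for t in range(-2, 3):
            errs = sum((x[f] >= t) != bool(y) for x, y in cases)
            if best is None or errs < best[0]:
                best = (errs, f, t)
    return best[1], best[2]

feature, threshold = learn_stump(cases)
# The learned rule is short enough to state in one sentence, i.e. surveyable.
print(f"accede iff duty[{feature}] >= {threshold}")
```

The point of the sketch is only that restricted domains can make short learned rules possible; nothing in it bears on whether morality as a whole admits of such compression.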

  18. Note well: unsurveyability was defined with respect to the usefulness of a standard for conscious, reflective purposes. From the fact that an unsurveyable principle fails to be useful for purposes of conscious reflection, it does not follow that it is useless sans phrase. There may well be AI applications where long principles are manageable. Keep in mind that Windows Vista (hardly AI!) has over 50 million lines of code, and that pales in comparison to the over 200 million lines of code in Debian 4.0, a Unix-like operating system. (By the way, if we have 500 pages per volume, and 100 lines of text per page, and 4 programming instructions on each line of text, then 200 million lines of programming takes up about 1,000 volumes.) Sufficiently powerful computers may well be able to put 200 million lines of code (or more) to good use. Moreover, some (though not I) might even argue that there could be applications in cognitive modelling for very long explicitly represented principles. This much does seem correct: from the fact that we cannot consciously reason over very long principles, it does not follow without further argument that long principles are not being explicitly represented and processed behind the veil of awareness.
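The parenthetical volume arithmetic checks out directly (the per-page and per-line figures are the note's own assumptions):

```python
# Checking note 18's back-of-the-envelope arithmetic:
# 500 pages/volume * 100 lines/page * 4 instructions/line
instructions_per_volume = 500 * 100 * 4   # = 200,000 instructions per volume
debian_lines = 200_000_000                # ~200 million lines in Debian 4.0
volumes = debian_lines // instructions_per_volume
print(volumes)  # 1000
```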

References

  • Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–26.

  • Anderson, M., Anderson, S. L., & Armen, C. (2006). An approach to computing ethics. IEEE Intelligent Systems, 21(4), 56–63.

  • Brewer, S. (1996). Exemplary reasoning: Semantics, pragmatics, and the rational force of legal argument by analogy. Harvard Law Review, 109, 923.

  • Dancy, J. (1993). Moral reasons. Oxford: Blackwell.

  • Dancy, J. (1999). Can a particularist learn the difference between right and wrong? In K. Brinkmann (Ed.), Proceedings from the 20th world congress of philosophy, volume I: Ethics. Bowling Green, OH: Philosophy Documentation Center.

  • Dancy, J. (2000). The particularist’s progress. In B. Hooker & M. Little (Eds.), Moral particularism. Oxford: Oxford University Press.

  • Dancy, J. (2006). Ethics without principles. Oxford: Oxford University Press.

  • Dancy, J. (2009). Moral particularism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2009 ed.). http://plato.stanford.edu/archives/spr2009/entries/moral-particularism/.

  • Elman, J. (1990). Finding structure in time. Cognitive Science, 14, 179–211.

  • Garfield, J. (2000). Particularity and principle: The structure of moral knowledge. In B. Hooker & M. Little (Eds.), Moral particularism. Oxford: Oxford University Press.

  • Horgan, T., & Timmons, M. (2007). Morphological rationalism and the psychology of moral judgement. Ethical Theory and Moral Practice, 10, 279–295.

  • Horgan, T., & Timmons, M. (2009). What does the frame problem tell us about normativity? Ethical Theory and Moral Practice, 12, 25–51.

  • Jackson, F., Pettit, P., & Smith, M. (2000). Ethical particularism and patterns. In B. Hooker & M. Little (Eds.), Moral particularism. Oxford: Oxford University Press.

  • Little, M. O. (2000). Moral generalities revisited. In B. Hooker & M. Little (Eds.), Moral particularism. Oxford: Oxford University Press.

  • McKeever, S., & Ridge, M. (2005). The many moral particularisms. Canadian Journal of Philosophy, 35, 83–106.

  • McKeever, S., & Ridge, M. (2006). Principled ethics: Generalism as a regulative ideal. Oxford: Oxford University Press.

  • McLaren, B. M. (2003). Extensionally defining principles and cases in ethics: An AI model. Artificial Intelligence, 150, 145–181.

  • McLaren, B. M. (2006). Computational models of ethical reasoning: Challenges, initial steps, and future directions. IEEE Intelligent Systems, 21(4), 29–37.

  • McNaughton, D., & Rawling, P. (2000). Unprincipled ethics. In B. Hooker & M. Little (Eds.), Moral particularism. Oxford: Oxford University Press.

  • Sun, R. (2002). Duality of the mind: A bottom-up approach toward cognition. Mahwah, NJ: Lawrence Erlbaum Associates.

  • Thomson, J. J. (1971). A defense of abortion. Philosophy & Public Affairs, 1(3), 47–66.

  • Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.

Acknowledgments

I thank the Social Sciences and Humanities Research Council of Canada for financial support during the research and writing of this paper. I am also indebted to the University of Windsor for a sabbatical leave during which the research was completed and the paper composed. Thanks to Joshua Chauvin for his endless patience in assisting with neural network simulations. Finally, thanks to Amy Butchart and Nicholas Ray for comments on earlier drafts.

Author information

Corresponding author

Correspondence to Marcello Guarini.


About this article

Cite this article

Guarini, M. Particularism, Analogy, and Moral Cognition. Minds & Machines 20, 385–422 (2010). https://doi.org/10.1007/s11023-010-9200-4

