Abstract
There is a distinction between merely having the right belief, and further basing that belief on the right reasons. Any adequate epistemology needs to be able to accommodate the basing relation that marks this distinction. However, trouble arises for Bayesianism. I argue that when we combine Bayesianism with the standard approaches to the basing relation, we get the result that no agent forms their credences in the right way; indeed, no agent even gets close. This is a serious problem, for it prevents us from making epistemic distinctions between agents that are doing a reasonably good job at forming their credences and those that are forming them in clearly bad ways. I argue that if this result holds, then we have a problem for Bayesianism. However, I show how the Bayesian can avoid this problem by rejecting the standard approaches to the basing relation. By drawing on recent work on the basing relation, we can develop an account of the relation that allows us to avoid the result that no agent comes close to forming their credences in the right way. The Bayesian can successfully accommodate the basing relation.
Notes
Others mark the same distinction in terms of ‘justifiable’ and ‘justified’ belief. See Alston (1985) for discussion of different concepts of justification.
Indeed, this puzzle has been wielded as an objection to coherentism, according to which an agent’s belief is justified in virtue of cohering with the agent’s other beliefs. The objection is that no ordinary agent could base their belief on the coherence of the agent’s entire belief state. See Kornblith (2002, pp. 122–132) for this objection.
Bonjour (2006, pp. 328, 329) considers this as a response to Kornblith’s version of this challenge.
I will assume that a single number measures each degree of belief of an actual agent. This assumption is controversial, but it is dispensable. We could instead opt for an ‘imprecise Bayesianism’ where sets of real numbers measure the degrees of belief of actual agents. See Levi (1974), van Fraassen (2006), and Joyce (2010).
Note that the confidence a number measures need not be occurrent; it could be merely dispositional.
As an initial background constraint, I assume that the propositions form an algebra; intuitively, that they are closed under negation, disjunction, and conjunction. More formally, the propositions are isomorphic with, or identical with, a collection of subsets of a set such that the collection is closed under complement, the union of finitely many sets, and the intersection of finitely many sets. Sometimes this is generalized to allow for the union or intersection of infinitely many sets.
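The closure conditions in the more formal statement can be illustrated with a small sketch (in Python; the function name, the toy four-element set, and the example collection are all hypothetical, chosen only to make the closure conditions concrete):

```python
from itertools import combinations

def is_algebra(omega, collection):
    """Check that `collection` (frozensets over `omega`) contains omega and is
    closed under complement and under pairwise union and intersection."""
    sets = set(collection)
    if frozenset(omega) not in sets:
        return False
    for a in sets:
        if frozenset(omega) - a not in sets:  # closure under complement
            return False
    for a, b in combinations(sets, 2):
        if a | b not in sets or a & b not in sets:  # closure under union/intersection
            return False
    return True

omega = {1, 2, 3, 4}
# A two-cell partition together with the empty set and omega forms an algebra.
algebra = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(omega)}
print(is_algebra(omega, algebra))           # True
print(is_algebra(omega, {frozenset({1})}))  # False: lacks omega and complements
```

Since complements and pairwise unions give pairwise intersections by De Morgan's laws, checking intersections here is redundant, but it makes the three stated conditions explicit.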
There are different ways of interpreting ‘necessary’ resulting in different accounts. Fortunately, these differences won’t matter for our purposes.
Some philosophers prefer to take conditional probabilities as primitive. See Goosen (1979) for an axiomatization as well as motivation for taking both unconditional probabilities as well as conditional probabilities as primitive. Hájek (2003) argues that not only should we not define conditional probabilities, but that we should analyze unconditional probabilities in terms of conditional ones. The arguments in the text are not affected by this issue.
For example, Lewis (1980) proposed the ‘Principal Principle,’ a claim about how an agent’s credences should line up with chances.
Perhaps we might think of Bayesianism as only telling us about when entire credence functions are permissibly held, but adopt another account for individual credences. However, I’ll set aside such an approach in this paper. We would no longer be considering a pure Bayesian epistemology, but a kind of hybrid view that incorporates Bayesianism. But we have our work cut out for us in just thinking about how a pure Bayesian epistemology should think about the basing relation. I thank an anonymous referee for the suggestion.
We can also construct a case involving axiom (2) of (Probabilism). Suppose P is a mathematical truth. One agent sets her credence in P to 1 because she recognizes that P is a mathematical truth. A second agent sets her credence in P to 1 because of how much she likes P. The first agent is doing better, from an epistemic point of view, than the second agent in virtue of how she has based her credence.
See Feldman (2002, p. 46), Korcz (2000, pp. 525, 526), Kvanvig (2003, p. 8), Pollock and Cruz (1999, pp. 35, 36), and Swain (1979, p. 25). This standard answer follows from what Turri (2010) calls the ‘orthodox view’ of the relationship between propositional and doxastic justification, according to which a belief is doxastically justified iff it is based on whatever propositionally justifies that belief. Is it enough for the belief to be based on what makes it the right belief to have; that is, is this sufficient for doxastic justification? Some cases from Turri (2010) cast doubt on this. They purport to show that one can have their belief based in this way, and yet still fail to form their belief in the right way. However, all I need is a necessary condition; that if a doxastic state is doxastically justified then it is based on whatever makes that state propositionally justified. So even if we reject, with Turri, the orthodox view of the relationship between propositional and doxastic justification, we can still accept the necessary condition on doxastic justification.
Instead of understanding basing in terms of the original cause of the doxastic states, some philosophers understand basing in terms of what causally sustains the doxastic state. For example, see Moser (1989, pp. 156–158). However, switching to talk of causal sustaining doesn’t make the problematic causal relations any more palatable.
Here’s an example from Plantinga (1993, p. 69, n. 8): “Suddenly seeing Sylvia, I form the belief that I see her; as a result, I become rattled and drop my cup of tea, scalding my leg. I then form the belief that my leg hurts; but though the former belief is a (part) cause of the latter, it is not the case that I accept the latter on the evidential basis of the former”. Pollock and Cruz (1999, pp. 35, 36) also give this objection.
Kornblith (2002, p. 124) makes this point with respect to binary beliefs.
Cherniak (1986, pp. 92–95). The argument is directed against the view that it is constitutive of rationality that one accept the laws of logic. Kornblith (2002, pp. 128–130, 132–135) uses Cherniak’s argument against internalism about justification. Harman (1986, pp. 25–27) makes a somewhat similar argument.
Here’s one way to make the point. Suppose the agent has a credence in just 137 other propositions, and needs to determine whether P is logically incompatible with the conjunction of the 137 propositions. A system that determined whether the set of 138 propositions is logically compatible using a truth table, and reading one line of the truth table in the time it takes a light ray to traverse the diameter of a proton, would take more than twenty billion years. To put it mildly, this is far out of the reach of our cognitive systems. See Cherniak (1986, p. 143, n. 13) for a proof of this. Of course, an actual agent may not satisfy all of these assumptions, but the proof allows us to see just how demanding this task is.
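The arithmetic behind Cherniak's estimate is easy to verify. A back-of-the-envelope sketch, assuming a proton diameter of roughly \(1.7 \times 10^{-15}\) m and one truth-table line read per light-crossing time (the variable names are illustrative):

```python
# A truth table over 138 atomic propositions has 2^138 lines.
lines = 2 ** 138
# Time for light to traverse a proton diameter (assumed ~1.7e-15 m).
time_per_line = 1.7e-15 / 3.0e8          # seconds per line
seconds = lines * time_per_line
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")              # on the order of 10^10 years
```

The result comes out at roughly sixty billion years, comfortably above the "more than twenty billion years" cited in the note.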
It might be thought that appealing to the possible worlds approach of, say, Lewis (1981) will avoid this argument, since it doesn’t require checking for consistency between every proposition in which one has a credence. But the problem still arises. First, the approach requires ensuring that one’s credences in the worlds sum to 1. Moreover, the approach usually defines the credence in a proposition as the sum of the credences in the worlds where the proposition is true. But agents who aren’t perfectly rational can have their credence in a proposition differ from this sum. So we must add another norm that says that a credence in a proposition ought to equal the sum of the credences in the worlds where the proposition is true. But both this task and the task of ensuring that one’s credences in the worlds sum to 1 are far too demanding for an actual agent to take to be a reason to have a credence.
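The two norms just mentioned can be put in a toy model (a minimal sketch, not part of Lewis’s formalism; the three-world distribution and all names are hypothetical):

```python
# Worlds are labeled w1..w3; a proposition is the set of worlds where it is true.
world_credences = {"w1": 0.5, "w2": 0.3, "w3": 0.2}

def credence(proposition):
    """Credence in a proposition = sum of credences in the worlds where it is true."""
    return sum(world_credences[w] for w in proposition)

P = {"w1", "w2"}  # P is true at w1 and w2

# Norm 1: the credences in the worlds sum to 1.
assert abs(sum(world_credences.values()) - 1.0) < 1e-9
# Norm 2: the credence in P equals the sum over the P-worlds, here 0.8.
print(credence(P))
```

The point in the note is that an imperfectly rational agent may assign P a credence other than 0.8 even with these world-credences, so satisfying norm 2 is a further task beyond satisfying norm 1.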
I mentioned the notion of reliability. Why not simply understand basing in terms of the processes that form credences, and hold that a credence is formed in the right way when the process that forms it is reliable with respect to satisfying the Bayesian constraints? Ultimately, this is in the vein of my own proposal, but as stated it doesn’t give an acceptable account of the basing relation. Limiting bases to the processes that form doxastic states is too restrictive, for we want other things to be able to serve as bases as well: other doxastic states, experiences, and so on. We need a broader account.
We also get a problem with Korcz’s (2000) hybrid causal-doxastic account of the basing relation. The rough idea is that a belief is based on R just in case either R is the cause, or the agent takes R to be a reason to have that belief. But this account will simply inherit the problems that face both the causal and the doxastic accounts. If no agent can satisfy either disjunct, then no agent can satisfy the disjunction.
It is somewhat uncommon to use a degreed notion of justification, but some epistemologists make use of it, and we can clearly make sense of it. To borrow an example from Plantinga (1988, p. 2): my belief that I live in Massachusetts is more justified than my belief that Homer was born before 800 B.C.
Perhaps the Bayesian can still make a very fine-grained distinction between the two jurors. The first juror does a slightly better job at forming her credence than the second juror. But this isn’t enough to mark an epistemically significant distinction between them. We want to say that there is a significant sense in which the first juror is doing a reasonable job at forming her credences, and that distinguishes her from the second juror.
Harman (1986, pp. 25–27) makes a somewhat similar point.
Kelly (2002, p. 175). I have made (Conditions) more specific than Kelly does, so it is more relevant for our purposes. What Kelly gives is: “whether R is a reason on which S’s \(\upvarphi \)-ing is based...will be reflected in the conditions under which S would (or would not) continue to \(\upvarphi \)”. Evans (2013) uses (Conditions) to critique existing accounts of the basing relation, and motivate his own account.
It might be preferable to weaken this so that it only requires that the basis be reflected in the conditions under which S would (or would not) continue to have c or a nearby credence in P. But I’ll keep working with the simpler principle in the text.
Evans (2013, pp. 2949–2951) uses (Conditions) to argue against doxastic accounts of the basing relation on these grounds.
Evans only considers a dispositional account in the context of binary belief, rather than credences, and so does not consider the issue of how the Bayesian should accommodate the basing relation.
What if, in the event that her credence doesn’t satisfy the Bayesian constraints, an agent would instead change her credence in a different proposition? For example, suppose an agent is disposed to revise her credence in P if her credences in P and \(\sim \)P fail to sum to 1. Does this imply that the agent is not disposed to revise her credence in \(\sim \)P if her credences in P and \(\sim \)P fail to sum to 1? No, the agent can still be disposed to revise her credence in \(\sim \)P, even if she wouldn’t have changed it had her credences not satisfied the Bayesian constraints. It’s just that she has a disposition to revise her credence in P as well, so by manifesting this first disposition, she doesn’t manifest the second disposition to revise her credence in \(\sim \)P. As much research on dispositions has revealed, something’s possessing the disposition to manifest M given stimulus S does not guarantee the truth of the counterfactual that if S obtained, then M would obtain.
I thank an anonymous referee for this objection.
I thank Chris Meacham for the example of a sundial.
See van Gelder (1995) for more on how computational processes are representational, whereas non-computational processes need not be.
Note further that we can also imagine an agent that forms all of her credences in the right way in accordance with (Disposition), and see that this is a fairly moderate idealization. Contrast this with the quite extreme idealization required to imagine agents that form their credences in the right way in accordance with the causal or doxastic accounts considered above.
There are trickier cases to consider. For example, perhaps I could have the disposition to raise certain credences if my stomach hurts. Intuitions seem mixed here. In conversation, I’ve found that some think this is an intuitive case of basing. However, Evans (2013) finds this unintuitive. There is room to give a more discriminating account of the sorts of dispositions the basing relation should be analyzed in terms of. For example, Evans (2013, pp. 2953, 2954) argues that we should understand the dispositions as dispositions of the agent’s cognitive system.
Again, just as in the case of (Disposition), Miss Knowit’s disposition could be kept from manifesting by the manifestation of a different disposition.
References
Alston, W. (1985). Concepts of epistemic justification. The Monist, 68(1), 57–89.
Audi, R. (1986). Belief, reason, and inference. Philosophical Topics, 14(1), 27–65.
Bonjour, L. (2006). Kornblith on knowledge and epistemology. Philosophical Studies, 127, 317–335.
Cherniak, C. (1986). Minimal rationality. Cambridge: MIT Press.
Conee, E., & Feldman, R. (2010). Earl Conee and Richard Feldman. In J. Dancy, et al. (Eds.), A companion to epistemology (pp. 123–130). Oxford: Blackwell.
Easwaran, K. (2011). Bayesianism I: Introduction and arguments in its favor. Philosophy Compass, 6(5), 312–320.
Evans, I. (2013). The problem of the basing relation. Synthese, 190, 2943–2957.
Feldman, R. (2002). Epistemology. Upper Saddle River: Prentice Hall.
Garson, J. (2016). Connectionism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2016 ed.). https://plato.stanford.edu/archives/win2016/entries/connectionism/.
Goosen, W. (1979). Alternative axiomatizations of probability theory. Notre Dame Journal of Formal Logic, 20(1), 227–239.
Haidt, J., Bjorklund, F., & Murphy, S. (2000). Moral dumbfounding: When intuition finds no reason (unpublished).
Hájek, A. (2003). What conditional probabilities could not be. Synthese, 137(3), 273–323.
Hájek, A., & Hartmann, S. (2010). Bayesian epistemology. In J. Dancy, et al. (Eds.), A companion to epistemology (pp. 93–105). Oxford: Blackwell.
Harman, G. (1986). Change in view. Cambridge: MIT Press.
Joyce, J. (2010). A defense of imprecise credences in inference and decision making. Philosophical Perspectives, 24(1), 281–323.
Kelly, T. (2002). The rationality of belief and some other propositional attitudes. Philosophical Studies, 110, 163–196.
Kolodny, N. (2007). How does coherence matter? Proceedings of the Aristotelian Society, 107(1), 229–263.
Korcz, K. (1997). Recent work on the basing relation. American Philosophical Quarterly, 34(2), 171–191.
Korcz, K. (2000). The causal-doxastic theory of the basing relation. Canadian Journal of Philosophy, 30(4), 525–550.
Korcz, K. (2015). The epistemic basing relation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2015 ed.). http://plato.stanford.edu/entries/basing-epistemic/.
Kornblith, H. (2002). Knowledge and its place in nature. Oxford: Oxford University Press.
Kvanvig, J. (2003). Propositionalism and the perspectival character of justification. American Philosophical Quarterly, 40(1), 3–18.
Kvanvig, J. (2010). Epistemic justification. In S. Bernecker & D. Pritchard (Eds.), Routledge companion to epistemology (pp. 25–36). New York: Routledge.
Leite, A. (2008). Believing one’s reasons are good. Synthese, 161(3), 419–441.
Levi, I. (1974). On indeterminate probabilities. Journal of Philosophy, 71, 397–418.
Lewis, D. (1980). A subjectivist’s guide to objective chance. In R. Jeffrey (Ed.), Studies in inductive logic and probability (Vol. II, pp. 263–293). Berkeley: University of California Press.
Lewis, D. (1981). Causal decision theory. Australasian Journal of Philosophy, 59, 283–289.
Longino, H. (1978). Inferring. Philosophy Research Archives, 4, 19–26.
Maher, P. (2004). Probability captures the logic of scientific confirmation. In C. Hitchcock (Ed.), Contemporary debates in philosophy of science (pp. 69–93). Oxford: Blackwell.
McCain, K. (2012). The interventionist account of causation and the basing relation. Philosophical Studies, 159, 357–382.
Meacham, C. J. G. (2013). Impermissive Bayesianism. Erkenntnis, S6, 1–33.
Moser, P. (1989). Knowledge and evidence. Cambridge: Cambridge University Press.
Oliveira, L. R. G. (2015). Non-agential permissibility in epistemology. Australasian Journal of Philosophy, 93(2), 389–394.
Plantinga, A. (1988). Positive epistemic status and proper function. Philosophical Perspectives, 2(1), 1–50.
Plantinga, A. (1993). Warrant: The current debate. Oxford: Oxford University Press.
Pollock, J., & Cruz, J. (1999). Contemporary theories of knowledge (2nd ed.). New York: Rowman and Littlefield.
Silva, P. (2014). Does doxastic justification have a basing requirement? Australasian Journal of Philosophy, 93(2), 371–387.
Strevens, M. (2012). Notes on Bayesian confirmation theory. http://www.nyu.edu/classes/strevens/BCT/BCT.pdf (unpublished).
Swain, M. (1979). Justification and the basis of belief. In G. Pappas (Ed.), Justification and knowledge. Dordrecht: D. Reidel.
Swain, M. (1981). Reasons and knowledge. Ithaca: Cornell University Press.
Tolliver, J. (1981). Basing beliefs on reasons. Grazer Philosophische Studien, 15, 149–161.
Turri, J. (2010). On the relationship between propositional and doxastic justification. Philosophy and Phenomenological Research, 80(2), 312–326.
Turri, J. (2011). Believing for a reason. Erkenntnis, 74, 383–397.
van Fraassen, B. (2006). Vague expectation loss. Philosophical Studies, 127, 483–491.
van Gelder, T. (1995). What might cognition be, if not computation? Journal of Philosophy, 92(7), 345–381.
White, R. (2005). Epistemic permissiveness. Philosophical Perspectives, 19, 445–459.
White, R. (2010). Evidential symmetry and mushy credences. In T. S. Gendler & J. Hawthorne (Eds.), Oxford Studies in Epistemology (Vol. 3). Oxford: Oxford University Press.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Williamson, J. (2007). Motivating objective Bayesianism: From empirical constraints to objective probabilities. In W. E. Harper & G. R. Wheeler (Eds.), Probability and inference: Essays in honor of Henry E. Kyburg Jr. Amsterdam: Elsevier.
Acknowledgements
I’m grateful to Peter Graham, Hilary Kornblith, Chris Meacham, Luis Oliveira, Alejandro Pérez Carballo, and two anonymous referees for very helpful comments and discussion.
Gibbs, C. Basing for the Bayesian. Synthese 196, 3815–3840 (2019). https://doi.org/10.1007/s11229-017-1622-6