Perceptual dogmatism is the view that a perceptual seeming that P prima facie justifies P. Classical Bayesianism (CB), as I am using the term, is the idea that justification is accurately modelled by classical probability theory, which includes Bayes’ Theorem. A popular way of objecting to dogmatism goes something like this: dogmatism is incompatible with CB, so dogmatism is false. I’ll assume that you have some familiarity with this sort of objection, which can be found in White’s “Problems for Dogmatism”, Schiffer’s “Skepticism and the Vagaries of Justified Belief” (pp. 175–6), and Wright’s “The Perils of Dogmatism” (p. 42).

Suppose we grant the premise that dogmatism is incompatible with CB. Why in the world should we conclude that CB wins? Why shouldn’t we reject CB instead? CB does have a number of virtues, but it is not as though dogmatism has *nothing* going for it. And CB has a number of well-known problems. So I repeat: if dogmatism and CB are incompatible, why should the Classical Bayesian win?

Here is a way of approaching the question: if the incompatibility between CB and dogmatism depends on the most controversial features of Classical Bayesianism, then dogmatism should win. For example, I find it implausible that a rational human being must assign a credence to every proposition, or a credence of 1 to every necessary truth. I take it that these implausible claims are entailed by CB. To whatever extent the incompatibility arises because of CB’s commitment to either of those two claims, dogmatism should win and we should rework CB. If, however, the incompatibility relies only on the least controversial features of CB, then CB should win. Perhaps an example of something uncontroversial would be: *if* S assigns a credence to both P and (P or Q), then S is irrational for assigning (P or Q) a lower credence than P.
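That uncontroversial constraint can be checked mechanically in a toy probability model. The worlds and weights below are made up purely for illustration; the point is just that, however the weights are assigned, the credence in (P or Q) can never fall below the credence in P:

```python
# A toy probability space illustrating the uncontroversial constraint:
# for any propositions P and Q, Pr(P or Q) >= Pr(P).
# The worlds and weights are hypothetical, chosen only for illustration.

weight = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}

P = {"w1", "w2"}  # worlds where P is true
Q = {"w2", "w3"}  # worlds where Q is true

def pr(prop):
    """Probability of a proposition = total weight of the worlds where it holds."""
    return sum(weight[w] for w in prop)

# P-or-Q is true at the union of the two sets of worlds, which always
# contains the P-worlds, so its probability can never be smaller.
assert pr(P | Q) >= pr(P)  # 0.6 >= 0.3
```

Nothing about this depends on the particular weights: the union of worlds always contains the P-worlds, so any credence assignment obeying the probability calculus satisfies the constraint.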

My suspicion is that you can only derive the incompatibility by relying on the most controversial features of CB, such as the claim that I suffer a rational failing if I don’t assign every proposition a credence. Suppose I might reasonably fail to assign a credence to the proposition (~D) that I’m not being deceived into thinking there is a hand. That is, suppose that, prior to having an experience as of my hand, ~D has no probability for me (perhaps because I have no information one way or the other, I’ve never considered ~D, etc.). Now what’s the problem with dogmatism allowing this: my experience provides me with justification to believe that I have a hand, and then, from my justified belief that I have a hand, I justifiably deduce ~D?

I admit that I’m worse than an idiot when it comes to formal epistemology, so this post is intended as a request for information as much as it is intended as a defense of dogmatism. And let me also stress that I’m not against formal epistemology. If dogmatism wins, then we shouldn’t give up on formal epistemology; rather, we should improve our formal modelling techniques.

(Btw: Peter Kung’s “On Having No Reason: Dogmatism and Bayesian Confirmation” provides a defense of dogmatism that may be a more intelligent way of pushing some of the points I make in this post. But perhaps Peter wouldn’t like to be associated with ideas that may be worse than idiotic.)

Chris, can you say a bit about why dogmatism and classical bayesianism are incompatible? I have read the White and Wright pieces, but it doesn’t seem to me as if the arguments for incompatibility go through without some additional assumptions that the dogmatist is free to reject. As I understand it, classical bayesianism is the view that: 1) one’s credences ought to obey the probability calculus; and 2) updating one’s credences in light of new evidence ought to follow a principle of conditionalization. But in order to make sense of conditionalization we have to have a theory of evidence and evidence possession.

Now the problem that White brings up only arises if we assume that the experience itself is the evidence upon which we must conditionalize. But isn’t the dogmatist free to make a Williamson-type move here and say that seemings or experiences are not themselves evidence but rather provide evidence? In which case, a subject’s evidence might just consist of the propositional content of one’s seemings or experiential states. This would even seem to be a very plausible claim on behalf of dogmatism, given all of the recent literature defending the claim that evidence must be propositional (although I’ll admit that I am not convinced by these arguments). Thus, when one has a seeming that there is a red table in front of one, one’s evidence is the proposition “there is a red table in front of me.” If this is the evidence on which we must conditionalize then, as far as I can tell, White’s problem doesn’t arise in the first place.

Of course, someone might worry that our evidence must be given probability 1. In that case, it would seem that dogmatism, combined with this theory of evidence, wouldn’t be able to make sense of cases where I have an experience as of a red table but the justification of this belief is defeated. At which point, couldn’t the dogmatist say that the prior probability of one’s evidence should be proportional to the seeming or experience’s strength, and then adopt a principle of conditionalization that makes sense of conditionalizing on new evidence that is less than certain?
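For what it’s worth, Jeffrey conditionalization is one standard rule of the kind gestured at here: it updates on evidence E when the experience raises one’s credence in E without driving it to certainty. A minimal sketch, with all the numbers hypothetical:

```python
# Jeffrey conditionalization: update Pr(H) when the evidence E is itself
# uncertain. New Pr(H) = Pr(H|E)*q + Pr(H|~E)*(1-q), where q is the
# post-experience credence in E. All numbers below are hypothetical.

def jeffrey_update(pr_H_given_E, pr_H_given_notE, q):
    """Return the updated credence in H after E's credence shifts to q."""
    return pr_H_given_E * q + pr_H_given_notE * (1 - q)

# E.g. a seeming of middling strength raises the credence in E to 0.8
# rather than to certainty:
new_pr_H = jeffrey_update(pr_H_given_E=0.9, pr_H_given_notE=0.2, q=0.8)
print(round(new_pr_H, 2))  # 0.76
```

Strict conditionalization is the special case q = 1, which would give 0.9 here; a weaker seeming (smaller q) moves the credence less, which seems to be the sort of strength-sensitivity the comment is after.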

I am not a formal epistemologist so perhaps I have made an idiotic mistake. But it seems to me that, since the dogmatist is free to make this move about what a person’s evidence is (in fact, I think some have made just these kinds of claims), these questions about what to do in the face of such incompatibility are somewhat moot.

This comment is a bit off topic from your main post, so feel free to ignore it. You also might just move to the hypothetical “Were dogmatism and CB to turn out to be incompatible, why should we reject one over the other?”.

Hi Samuel,

I don’t think your comment is off-topic. I’m reluctant to try to reconstruct the various arguments because I don’t understand them very well. I don’t understand them very well because I don’t understand the Bayesian view very well. I’m hoping that someone good with the formal stuff will provide some instruction. But I will try to respond to some of your various suggestions.

Can the dogmatist avoid White’s objection by holding that the evidence is not the experience itself but P? I think the answer is yes, but only if the dogmatist rejects standard Bayesian assumptions (which I believe are supposed to follow from the probability calculus and/or conditionalization rules). Before it seemed to me that P, what was the prior probability of P? If it didn’t have a prior probability, then we are forced to reject what I take to be a standard Bayesian assumption that all propositions have a prior probability. If it did have a prior probability, then the probability of P presumably goes up after having the seeming. But if the probability of P has gone up after the seeming for reasons other than conditionalizing on evidence, I think we also must reject a Bayesian assumption about when it is appropriate to change one’s credence in a proposition.

I’m not sure I understand your second paragraph. But if I do, you are gesturing to a view like this: Reject the Bayesian assumption that every proposition automatically gets a prior probability. Let seemings fix prior probabilities, so a proposition will have a prior probability only if we have or have had a relevant seeming. Then we will need some conditionalization rule that operates on those propositions which do have prior probabilities. If this is what you had in mind, I think I’m attracted to such a position, but I believe it represents a very, very significant departure from classical Bayesianism.

http://www.ucl.ac.uk/statistics/research/pdfs/rr270.pdf C. Hennig (author, with Jon Williamson, of “In Defense of Objective Bayesianism”) writes,

“The adoption of an interpretation of probability is a decision to adopt a certain structure for the perception of random phenomena. Any attempt to tie a probability model too closely to the idea of a true underlying reality has to fail because of the basic problem of formal modeling: the relation of a formal model to the modeled non-formal reality cannot be formally modeled and tested. This does not only apply to objective probabilities, but also to the relation of subjective probabilities to the true personal beliefs which are attempted to be modeled.”

I believe this quote has application to Peter Kung’s paper. The first part of the Alien Card Game provides a two-player win/lose scenario with no possibility of ties. No information is given that will help predict the winner. So, all the relevant considerations being deemed equiprobable because they are unknown, the chance of Player A or Player B receiving the winning hand can be evaluated as 1/2, as Kung has done. This is a metaphysical or ideal prediction based on assumption, conditionally independent, and not of the same category or type as the physical 1/2 winner prediction given later, which is conditionally dependent. Kung’s footnote 4 reads: “As we will see shortly, nothing hinges on assigning 1/2 as the rational credence in your state of ignorance — or even whether rationality constrains what assignment of priors you make — although it is hard to see what grounds you could have for assigning a credence of other than 1/2.”

Later on, Kung gives an example in which flipping 25 consecutive heads at the beginning of a sequence means it’s likely one is not using a fair coin. But then he supposes that if at first there are 2 million tosses, 1 million heads and 1 million tails, and then 25 consecutive heads, one should not inductively infer that the coin is unfair.

Each fully specified sequence of the 2,000,025 total coin tosses is unique, and all such sequences have exactly the same probability. The theoretical chance of 25 heads followed by two million further flips with equal heads and tails is the same as the chance of two million flips, a million heads and a million tails, followed by 25 heads. But if this were a physical result rather than a theoretical one, 25 consecutive heads flipped at the end of 2 million equi-balanced flips of heads and tails would again indicate that the coin was unfair, perhaps worn and unbalanced, contrary to Kung’s conclusion.
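The point about unique sequences can be checked directly: for a fair coin, any two fully specified sequences of the same length are exactly equally probable, however patterned one of them looks. A sketch (the mixed sequence is an arbitrary made-up example):

```python
from fractions import Fraction

def sequence_probability(seq):
    """Probability of one fully specified sequence of fair-coin flips."""
    return Fraction(1, 2) ** len(seq)

patterned = "H" * 25                      # 25 consecutive heads
mixed = "HTHHTTHTHTHHTHTTHHTHTHTHT"       # an arbitrary 25-flip sequence

assert len(patterned) == len(mixed) == 25
assert sequence_probability(patterned) == sequence_probability(mixed)
print(sequence_probability(patterned))    # 1/33554432
```

What differs between the two is not the probability of the exact sequence but the probability of the *statistic* (a maximal run of 25 heads is far rarer across all sequences than a mixed pattern of the same length), and it is the statistic that drives the inference to an unfair coin.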

Kung provides another provenance for a 1/2 probability assignment.

“Ponda Baba, one of your new drinking buddies, who is also looking over the same player’s shoulder, notices your confusion and discretely passes you a report with exhaustive frequency information detailing the odds for every hand.” (The probability of the particular hand is reported at 1/2.) There is a difference between this and playing a fair game like flipping a fair coin and betting on heads or tails. The trials are potentially infinite, and if 5 or 25 previous results were all heads, it doesn’t matter: the chance of the next flip is still 50/50, or 1/2. Coins have no memory. This is an unconditional probability; the chance doesn’t vary.

But in the Alien Card Game, the probability of winning has become a conditional probability, dependent upon a prior structure. There are a finite number of cards in the ACG, so a finite number of hands, and each particular hand can be assigned an exact probability due to its placement and frequency in the evaluative structure of all the hands. Ponda Baba’s report provides objective evidence as to the value of a given hand, and if that probability happens to be 1/2 and thus the same value as the inductively formulated probability of 1/2, that is just a coincidence and doesn’t carry argumentative weight. I think Kung’s making both probabilities equal to 1/2 (which is quite unlikely) obscured rather than clarified his argument. Kung says in the footnote that nothing hinges on the original probability value of 1/2.

Predicting the probability of some particle which has been colliding around in the universe for billions of years is pretty much an unconditional probability: intractable. There is subjective Bayesianism and objective Bayesianism. If a plane goes down at sea, searchers can reduce the area of search because conditional probability is at work; each reduction of the search area has a bearing on the overall likelihood of finding the lost plane. Dogmatism claims that sense data is relevant to arriving at a belief. It doesn’t discount additional evidence that serves to support the original belief; it just doesn’t require such evidence. At one end of the spectrum there is Skepticism, and at the other Dogmatism (Reliabilism). In between, most people are Fallibilists. One can be a Dogmatist and a Fallibilist, though few Fallibilists are also Dogmatists. I think this area falls under the larger category of Realism vs. Anti-Realism.

I don’t know this topic very well, but here is a thought that came to mind.

One version of methodological conservatism says that if you have two incompatible hypotheses (one of which is already believed, and the other of which has just been proposed) then, other things being equal, the one that you already hold is the one that should be favored.

Applying this to philosophy: suppose you are a philosopher who is already committed to Bayesianism, or to some Bayesian research program, and then you come across dogmatism; methodological conservatism dictates that dogmatism loses, if they are incompatible.

What methodological conservatism dictates, of course, is determined by where one starts.

Hi Hugh,

Two replies. First, if the only reason why Bayesians prefer Classical Bayesianism over dogmatism is that they already believe CB, then the objection to dogmatism is pretty thin. It is little more than: I believe CB and you should too! I think those giving the Bayesian objections were after something a bit more ambitious than that. Second, methodological conservatives often put constraints on when one’s starting point makes a difference. Lycan, for example, seems to think that one’s starting point makes a difference only when there is a tie between two theories. Yet, it’s not obvious that the theoretical virtues of dogmatism and CB work out to be a tie. So it’s not obvious that methodological conservatism can help out here anyway–it probably depends on which methodological conservatism we are talking about.

After reading some of your other posts Chris, I came across this point which I think is relevant to this topic,

“The phenomenal character “assures” the subject that its content is true. Perceptual experiences, memorial experiences, and a priori intuitions are all plausible candidates for seemings. Phenomenal conservatism is unpopular because many find it implausible that all seemings provide justification. Surely, some argue, a seeming produced by wishful thinking has no justificatory power.

Here is another way of pressing the question in which I’m interested: if some seemings justify, why don’t all seemings justify? It seems that many internalists allow perceptual and a priori seemings to justify, but they don’t allow wishfully-produced seemings to justify. What principled, internalist criterion can allow only some (and the right ones) to justify?”

I don’t think there is one, and I think the same worry extends to defining a principled method of grading controversy:

“Here is a way of approaching the question: if the incompatibility between CB and dogmatism depends on the most controversial features of Classical Bayesianism, then dogmatism should win. For example, I find it implausible that a rational human being should assign a credence to every proposition or a credence of 1 to every necessary truth. I take it that these implausible claims are entailed by CB. To whatever extent the incompatibility arises because of CB’s commitment to either of those two claims, dogmatism should win and we should rework CB. If, however, the incompatibility relies only on the least controversial features of CB, then CB should win.”

I have read the following statement, attributed to I. J. Good, more than once: ‘there are at least 46,656 varieties of Bayesianism’.

Chihara (1981) observes, “there is no such thing as the Bayesian solution. There are many different ‘solutions’ that Bayesians have put forward using Bayesian techniques.”

http://fitelson.org/thesis.pdf

“According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic.”

Principle of Non-Sufficient Reason/Indifference

“However, it is generally agreed by both objectivists and subjectivists that ignorance alone cannot be the basis for assigning prior probabilities. The reason is that in any particular case there must be some information to pick out which parameters or which transformations are the ones among which one is to be indifferent. Without such information, indifference considerations lead to paradox.”
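The paradox alluded to in the quote can be made concrete with a classic example (mine, not the quoted author’s): a factory produces squares with side length somewhere in [0, 1]. Indifference over side length and indifference over area are both “ignorance” priors, yet they answer the same question differently:

```python
import random

random.seed(0)
N = 100_000

# Prior 1: indifference over SIDE LENGTH -- side uniform on [0, 1].
p_side = sum(random.random() <= 0.5 for _ in range(N)) / N

# Prior 2: indifference over AREA -- area uniform on [0, 1], so the
# event "side <= 0.5" is exactly the event "area <= 0.25".
p_area = sum(random.random() <= 0.25 for _ in range(N)) / N

# Same event, two ignorance priors, two different probabilities:
print(round(p_side, 2), round(p_area, 2))  # roughly 0.5 and 0.25
```

Without some further information picking out which parameter to be indifferent about, nothing settles whether the answer is 1/2 or 1/4, which is the quoted point that ignorance alone cannot ground a prior.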

Donald Gillies: “Roughly the thesis is that Bayesianism can be validly applied only if we are in a situation in which there is a fixed and known theoretical framework which it is reasonable to suppose will not be altered in the course of the investigation. I call this the condition of the fixity of the theoretical framework. For Bayesianism to be appropriate, the framework of general laws and theories assumed must not be altered during the procedure of belief change in the light of evidence.”

http://www.jimpryor.net/research/papers/Credulism.pdf Kung’s paper mentions Pryor,

“We’ve looked at several of the philosophical assumptions behind the interpretation White puts on his formal result. We’ve seen that these assumptions are not mandatory, and that once we start considering alternatives to them, it’s sometimes an open question whether the Bayesian formalism still conflicts with the commitments of dogmatism, about perception or anything else.”

http://brian.weatherson.org/papers.shtml

“Dogmatism and Intuitionistic Probability (with David Jehle). We note that the Bayesian attack on dogmatism requires that agents be certain that classical logic is correct. We argue that a dogmatist can properly sidestep this criticism if they insist that an agent with no experience of the world can rationally be uncertain about what the correct logic is.”

http://www.guardian.co.uk/education/2004/mar/08/highereducation.uk1

“Dr Stephen Unwin has used a 200-year-old formula to calculate the probability of the existence of an omnipotent being [67% !]. Bayes’ Theory is usually used to work out the likelihood of events, such as nuclear power failure, by balancing the various factors that could affect a situation. … Dr. Unwin said he was interested in bridging the gap between science and religion. He argues that rather than being a theological issue, the question of God’s existence is simply a matter of statistics.”

I assume that Dr. Unwin first uses classical logic, the Law of the Excluded Middle, to establish God, and if not God, then Evolution, whereas intuitionistic logic would allow for other possibilities. I wonder what the criteria are for sorting evidence into two piles, one for Evolution and one for God, twice as big. I think it is a dubious practice to weigh metaphysical assumptions as evidence, especially as evidence carrying the same weight as physical evidence, in order to proceed with a Bayesian calculation which arrives at 67% for God. I don’t think there is the ‘fixity of theoretical framework’ which Gillies considers necessary to arrive at a sound conclusion. So if this Bayesian 67% probability for God amounts to an abuse of rational reasoning, then one can conclude that a Bayesian argument to refute Dogmatism is one which advances unprincipled justificatory liberties.

There are various things that people mean by “Dogmatism”. One is the view that there is immediate justification. Another one is the view that there is immediate justification and that, although this sort of justification is fallible, it provides the key for putting aside skeptical doubts about the external world, and perhaps in other areas as well. I believe the incompatibility with Bayesianism arises for Dogmatism in the latter sense, and only in the latter sense.

Having said that, I think it is not some of the objectionable features of the Bayesian framework that cause the problem, but rather one of the more intuitive and basic ones, i.e. the assumption that if all the available evidence is entailed by a certain hypothesis, then the hypothesis cannot be ruled out on the basis of the evidence. Like Lewis did with the statement of fallibilism, I ask you to try to hear it afresh: “my experience is just what the skeptical hypothesis predicts; but I can conclude that the skeptical hypothesis is false on the basis of my experience (and some trivial reasoning steps)”.
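The assumption can be put in straightforwardly Bayesian terms: if a hypothesis H entails the evidence E, then Pr(E|H) = 1, so conditionalizing on E can never lower Pr(H), since Pr(H|E) = Pr(H)/Pr(E) ≥ Pr(H). A numeric sketch with hypothetical prior values:

```python
# If the skeptical hypothesis H entails the experience E, then Pr(E|H) = 1,
# so by Bayes' Theorem:
#   Pr(H|E) = Pr(E|H) * Pr(H) / Pr(E) = Pr(H) / Pr(E) >= Pr(H).
# Conditionalizing on the experience cannot rule H out.
# The prior values below are hypothetical.

pr_H = 0.01          # prior credence in the skeptical hypothesis H
pr_E_given_H = 1.0   # H entails the experience E
pr_E = 0.5           # prior credence in E itself

pr_H_given_E = pr_E_given_H * pr_H / pr_E
assert pr_H_given_E >= pr_H
print(pr_H_given_E)  # 0.02 -- conditionalizing on E *raised* Pr(H)
```

This is why, on these assumptions, the dogmatist’s claimed route from the experience to the falsity of the skeptical hypothesis looks blocked: the very evidence appealed to confirms rather than disconfirms H.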

There is a hint in your post (or so it seemed to me) that rejecting Dogmatism on the basis of Bayesianism might be a sort of appeal to authority, in this case the authority of some formal machinery. I wish to make something like the opposite claim. I think people like Pryor and Weatherson and Kung provide very interesting technical alternatives to Bayesianism, but ultimately their formal machinery obscures the basic philosophical problem.

Stephen,

There’s a lot of helpful information in your latest comment. Here are a few random thoughts in response:

1) As a proponent of phenomenal conservatism, I do think that all seemings justify, so I bite the bullet in the cognitive penetration cases. But I’m inclined to think that if dogmatism has cognitive penetration worries, then just about everyone else does too, including many versions of Bayesianism (my post on wishful thinking problems for reliabilism is related to this point). If that’s right, then most other views don’t enjoy an advantage with respect to cognitive penetration.

2) You make the point that there are many different ways of taking Bayesianism. Agreed. Some of these ways are compatible with dogmatism. Agreed. But objectors to dogmatism claim that it’s a problem for dogmatism that it is incompatible with certain versions of Bayesianism. A different way of asking the question in the post is this: why should we give up dogmatism rather than giving up on the versions of Bayesianism that are incompatible with dogmatism?

3) Thanks for bringing some of those articles to my attention. I had been vaguely aware of them, but it sounds like you’ve given me a good reading list.

Hi Danielle,

Here are a few replies to your comment. First, I’m thinking of dogmatism as the claim that, if you have an experience as of P and you have no defeaters, then you are immediately justified in believing P. That’s the thesis that the Bayesian objectors target.

Second, suppose you’ve identified the feature of CB that leads to the incompatibility between CB and dogmatism. I agree that the claim in question seems intuitive. But, one might think, the claim has skeptical consequences that more than outweigh its intuitive appeal. And I’m inclined to think that the claim you’ve identified isn’t the one the Bayesians are relying on.

Third, suppose the existing replies to the Bayesian objection are inadequate insofar as they rely on inadequate formal machinery. There still is the question of whether there is an otherwise acceptable version of Bayesianism that is incompatible with dogmatism. And if there is, there still is the question of why we should favor that version of Bayesianism over dogmatism. In principle, one could have greater reason to endorse dogmatism and search for a compatible and acceptable formal framework than to stick with an otherwise acceptable version of Bayesianism that is incompatible with dogmatism.

Hello Daniele,

I tend to agree with your ideas, if I understand them correctly. I think that Bayesian reasoning requires that the original probability estimate be close to certain. Any subsequent evidence is not supposed to reduce the original estimated probability, but perhaps to increase it? I think that is a basic Bayesian position. However, Peter Kung wrote,

“Dogmatism is concerned with exactly those cases where it is assumed you initially have no reason to believe. Dogmatism’s opponents grant that assumption in raising their objections. Therefore we can turn to dogmatism without defending the stipulation.”

Daniele wrote: “I think people like Pryor and Weatherson and Kung provide very interesting technical alternatives to Bayesianism, but ultimately their formal machinery obscures the basic philosophical problem.”

Dogmatism deals with uncertain cases such as intuitions about a supreme being or atheism (the supernatural), and also, I think, conceptual philosophical positions such as internalism and externalism, which have nothing close to certain initial probability assumptions. Bayesianism works best in a physical, scientific theoretical framework which can generate initial probabilities with more certainty. I think the philosophical problem is the attempt to undermine an intuitive (no reason to believe) philosophical position with a contrary philosophical position, one augmented by using facts as evidence of a Bayesian increase in a hypothesis’s likelihood, rather than possibly lowering the initial Dogmatist expectation. I didn’t fully grasp your explication of what the “obscured philosophical problem” actually was.

Hi Chris,

About your understanding of “Dogmatism”, I tend to think that is the best way to understand the term. But people in this debate are not consistent. Here is Weatherson (2007): “Someone who denies the second premise says that your empirical evidence can provide the basis for knowing that you aren’t in S [the skeptical scenario], even though you didn’t know this a priori. I’m going to call such a person a dogmatist.” And here is Wright, one of the people you cite as objecting to Dogmatism on Bayesian grounds: “‘Dogmatism’ is a term renovated by James Pryor [2000] to stand for a certain kind of neo-Moorean response to Scepticism and an associated conception of the architecture of basic perceptual warrant.”

It might still be that the majority of people stick to the more modest understanding of Dogmatism, as just a thesis about perceptual justification.

About your second point. I agree that if the assumption I talked about had sceptical consequences that would be a reason to doubt it. But I don’t think it has those consequences. I believe it does block dogmatist responses to scepticism, but scepticism follows only if you think no other solution is available.

Finally, I’d also like to stress that I am not saying that there is anything wrong in the formal results of Weatherson and friends. On the contrary, those results are correct, and also interesting. I just think that the resulting probability calculus cannot model epistemic justification, because it would violate some strong pre-theoretical commitments we have, and which support CB.

That’s helpful, Daniele. The key point in Wright’s understanding of dogmatism is about the structure of basic perceptual warrant. So he has in mind what I referred to as ‘dogmatism’ plus Neo-Mooreanism. I’ve never understood how Weatherson’s notion of dogmatism is related to what everyone else calls dogmatism. About your final point: that’s an interesting take. I find the implications of CB so obviously foreign to epistemic justification that I have a hard time imagining that those other views fare worse than CB at modeling epistemic justification. But, then again, most of the formal stuff is a bit over my head.

Hello Chris,

I found this topic very interesting. You have a talent for nosing out stimulating questions. My reply is going to stray a bit since it’s likely the last I’ll post on this subject. I think your penultimate sentence hits the nail on the head, and it’s as true as grue notions can beget.

Chris: “I find the implications of CB so obviously foreign to epistemic justification, that I have a hard time imagining that those other views fare worse than CB at modeling epistemic justification.”

Patrick Maher: “Bayesian decision theory is here construed as explicating a particular concept of rational choice and Bayesian probability is taken to be the concept of probability used in that theory. Bayesian probability is usually identified with the agent’s degrees of belief but that interpretation makes Bayesian decision theory a poor explication of the relevant concept of rational choice. A satisfactory conception of Bayesian decision theory is obtained by taking Bayesian probability to be an explicatum for inductive probability given the agent’s evidence.”

For handling uncertainty in Artificial Intelligence, causal Bayesian networks are a useful tool, and I think Maher’s view matches with this. In theory, Solomonoff induction solves AI, but it’s uncomputable. I decided to find an undergraduate exposition on Bayesian Confirmation Theory. Strevens teaches at NYU; I was surprised to find that NYU has the top-rated Philosophy Dept. in the US. So here is an expert assessment of the limitations of CB, assuming that by CB you mean to include Bayesian Confirmation Theory (BCT).

http://www.nyu.edu/classes/strevens/BCT/BCT.pdf

“Let me begin with the bad news. Scientists using Bayesian conditionalization, and who are therefore committed to the likelihood lover’s principle, may disagree about any of the following matters.

1. Which of several competing theories is most likely to be true, given a certain body of evidence.

2. Which of several competing theories received the greatest boost in its probability from a given piece of evidence, where the boost is the difference between the relevant prior and the posterior probabilities.

(The LLP, though, completely fixes the relative size of the Bayesian multipliers for competing theories; scientists will agree as to which theory had the highest Bayesian multiplier: it is the theory with the highest likelihood on the evidence.)

3. Whether a theory’s probability ought to increase or decrease given — whether it is confirmed or disconfirmed by — a particular piece of evidence. This last possibility for disagreement is particularly dismaying. To see how the disagreement may arise, recall that a hypothesis h’s probability increases on the observation of e just in case the relevant Bayesian multiplier is greater than one, which is to say, just in case the physical probability that h assigns to e is greater than the unconditional subjective probability for e; in symbols, just in case Ph(e) > C(e). Whether this is so depends on a scientist’s probability for e, that is, on C(e), which depends in turn — by way of the theorem of total probability — on the scientist’s prior probability distribution over the hypotheses. Scientists with different priors will have different values for C(e). Even though they agree on the likelihood of a hypothesis h, then, they may disagree as to whether the likelihood is greater or less than C(e), and so they may disagree as to whether h is confirmed or disconfirmed by e. (There is not complete anarchy, however: if two scientists agree that h is confirmed by e, they will agree that every hypothesis with a greater physical likelihood than h — every hypothesis that assigns a higher physical probability to e than h — is confirmed.) How can there be scientific progress if scientists fail to agree even on whether a piece of evidence confirms or disconfirms a given hypothesis?”

“Likelihood Lover’s Principle: The principle, entailed by BCT, that the degree to which a piece of evidence confirms a hypothesis increases with the physical likelihood of the hypothesis on the evidence (or the degree to which the evidence disconfirms the hypothesis increases as the physical likelihood decreases). The principle assumes that subjective likelihoods are set equal to physical likelihoods, thus that there is no inadmissible evidence.”
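Strevens’ third point can be reproduced numerically. Below, two scientists agree on every physical likelihood Ph(e), but their different priors give different values of C(e), so they disagree about whether e confirms h1. All the hypotheses, likelihoods, and priors are hypothetical numbers chosen for illustration:

```python
# Three rival hypotheses; both scientists agree on the physical
# likelihoods Ph(e) but hold different priors. All numbers hypothetical.
likelihood = {"h1": 0.6, "h2": 0.9, "h3": 0.1}

priors_A = {"h1": 0.4, "h2": 0.1, "h3": 0.5}
priors_B = {"h1": 0.1, "h2": 0.85, "h3": 0.05}

def c_of_e(priors):
    """C(e) via the theorem of total probability over the hypotheses."""
    return sum(likelihood[h] * priors[h] for h in priors)

# e confirms h1 for a scientist iff Ph1(e) > C(e), i.e. iff the
# Bayesian multiplier Ph1(e)/C(e) exceeds 1.
confirms_for_A = likelihood["h1"] > c_of_e(priors_A)  # 0.6 > 0.38
confirms_for_B = likelihood["h1"] > c_of_e(priors_B)  # 0.6 > 0.83 is False
print(confirms_for_A, confirms_for_B)  # True False
```

So with the same evidence and the same agreed likelihoods, scientist A takes h1 to be confirmed while scientist B takes it to be disconfirmed, exactly the disagreement Strevens calls dismaying.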

Chris: “I find the implications of CB so obviously foreign to epistemic justification, …

3. “Whether a theory’s probability ought to increase or decrease given — whether it is confirmed or disconfirmed by — a particular piece of evidence. This last possibility for disagreement is particularly dismaying.”

I think that “3.” may well serve as an example of a CB implication obviously foreign to epistemic justification. However, even if the CB attack on Dogmatism is completely vanquished, I don’t see how that restores Dogmatism to any more plausibility than it enjoyed before it was nefariously impugned by CB.