Published in The Philosophical Quarterly 54:217 (October 2004), pp. 557-569

 

What do you do with Misleading Evidence?

 

 

 

Michael Veber

 

 

ABSTRACT:  Harman presented an argument to the effect that if S knows that p then S knows that any evidence for Not-p is misleading.  Therefore S is warranted in being dogmatic about anything he happens to know.  Sorensen’s attempt to solve the paradox via Jackson’s theory of conditionals is explained and rejected.  It is argued that S is not in a position to disregard evidence even when he knows it to be misleading. 

 

I. Harman’s Paradox

 

S is dogmatic with respect to his belief that h iff S disregards all evidence e that seems to confirm ~h.  Harman presents an argument (attributed to Saul Kripke) to the effect that knowledge entails that dogmatism in this sense is rationally justified. 

‘If I know that h is true, I know that any evidence against h is evidence against something that is true; so I know that such evidence is misleading.  So once I know that h is true, I am in a position to disregard any future evidence that seems to tell against h.’  This is paradoxical, because I am never in a position simply to disregard any future evidence even though I do know a great many different things.[1] 

 

I know that knowledge has a truth condition and thus if I know that we’re having pigs’ feet tonight, I know that any evidence which seems to confirm that we’re not is evidence that seems to confirm something false, i.e., it is misleading.  Since we ought to disregard evidence we know to be misleading, I ought to disregard any evidence which indicates something in conflict with what I know.  In other words, I ought to be dogmatic about what I know.

Harman proposed that this paradox can be solved if we recognize that my knowing that an item of evidence is misleading “does not warrant me in simply disregarding that evidence, since getting that further evidence can change what I know.  In particular, after I get such further evidence I may no longer know that it is misleading” (p. 148).  Suppose Mom tells me “I know how much you like pigs’ feet but we really ought to watch our cholesterol level.  So no pigs’ feet tonight; it’s salad for us.”  According to Harman, the dogmatic argument does not license me in ignoring Mom’s testimony, even if it is in fact misleading, because once I have acquired this evidence, I no longer know that we’re having pigs’ feet tonight.   

Harman’s solution, however, is unsatisfactory.  To see exactly why, we must understand what it is for evidence to be disregarded.  If e is a proposition, S disregards e iff S does not incorporate e into his belief system and make the relevant probability adjustments.  If e is some sort of non-propositional evidence (a sensory experience perhaps), S disregards e iff S does not take e to be relevant to assigning probabilities to any other things he believes.  S accepts e iff S does not disregard e. 

Harman is correct to say that once I “get” (i.e., accept) e I may no longer know that h.  Given my belief in Mom’s general reliability, once I accept the proposition that she has reported that we are not having pigs’ feet, I can no longer assign a sufficiently high probability to my belief that we are.  But that is precisely why the dogmatic argument advises us not to get e, in other words, to disregard it.  Once I accept it, I’ve already violated the advice of the dogmatist and, since the evidence is misleading, I’ve already been duped.

Some degree of doxastic voluntarism is being assumed here.  At least sometimes, evidence is the sort of thing we can choose to accept or choose to disregard.  Doxastic voluntarism is, of course, controversial.  Its most extreme version (we are in control of all of our beliefs all of the time) seems obviously false but this does not present any serious problem.  It is equally obvious that we can and do freely choose to disregard counterevidence at least some of the time, especially when it conflicts with some belief we are emotionally attached to.  Anyone who has taught a philosophy class (or attended a philosophy conference) has seen this happen live and in person.  The distinction that Harman overlooks is between being aware of the existence of an item of evidence and factoring it into one’s web of belief.  The former entails the latter neither conceptually nor in actual practice.  I.T. Oakley argues that David Lewis makes the same mistake in articulating his contextualism.[2]  Quine may be guilty of the same sort of oversight.[3]  He seems to assume that recalcitrant experience always requires that we make some modification to the web, if only a plea of hallucination.  But it is at least conceptually possible to simply ignore the recalcitrant experience and make no change at all.     

Initially, one might react to the dogmatic argument by saying that my steadfast disregard for Mom’s testimony does not save my knowledge.  My knowledge that we are having pigs’ feet tonight is defeated by Mom’s testimony whether or not I choose to accept it.  If this is true then being dogmatic is pointless because it fails to preserve my knowledge.  Here we are assuming that S knows that h only if there is no e such that if S were to adopt e, S would no longer know that h.  In other words, it is sufficient that the defeating evidence be “out there”. 

But this way out of the paradox is of no help to Harman.  He, with good reason, rejects the principle that misleading evidence that we are unaware of always undermines our knowledge (pp. 145-148).  Sometimes it does; sometimes it doesn’t.  I won’t review his examples here, but it is enough to point out that if we were to adopt this sort of principle our knowledge would be extremely unstable.  Any time someone misspeaks in a way that conflicts with what we know, we would lose that knowledge whether or not we are present during the pronouncement.

            Sorensen applies the dogmatic line of reasoning to a particular case involving reliable Doug.[4]

(1)  My car is in the parking lot.

(2)  If my car is in the parking lot and Doug reports otherwise, then Doug’s report is misleading.

(3)  If Doug reports that my car is not in the parking lot, then his report is misleading.

(4)  Doug reports that my car is not in the parking lot.

(5)  Doug’s report is misleading.

 

We assume for the sake of argument that (1) is known by me to be true.  Given the definition of ‘misleading evidence’ (i.e., any evidence which seems to confirm a false proposition), (2) is analytic.  Thus the inference to (3) is valid.  (4) is also assumed ex hypothesi and (5) follows by modus ponens.

As stated, however, the conclusion of Sorensen’s argument is not as troublesome as the conclusion of the argument Harman offers in the above passage.  (5), by itself, says nothing about dogmatism and it is not obviously paradoxical.  Thus it appears that generating a paradox requires an additional premise such as:

(6)  For any subject S:  If a piece of evidence is misleading then S is in a position to disregard it.

 

This principle, if accepted, would generate the paradoxical result.  But (6) does not have much independent plausibility.  Suppose that unbeknownst to me, a newspaper that I know to be generally reliable contains a misprint concerning last night’s game.  Although the newspaper’s report is misleading, this fact alone does not license me in ignoring it, if only because I have good evidence that the paper is reliable and no reason to think otherwise in this particular case.  A different principle is required. 

(7)  For any subject S:  If a piece of evidence is known by S to be misleading then S is in a position to disregard it.

 

This is more plausible than (6).  The problem with (6) is not that it is straightforwardly false.  One of our chief cognitive aims is truth.  Evidence that misleads is evidence that gets in the way of this goal and therefore we ought to steer clear of it.  (6) is implausible because the fact that a particular piece of evidence is misleading does not, by itself, give the subject a reason to disregard that evidence.  (7), on the other hand, is formulated in a way that avoids this problem.  The fact that the subject knows the evidence to be misleading does give him a reason to disregard it.

            Stating things this way, however, does not generate the paradox from Sorensen’s premises.  What is needed to generate the paradox is an argument for the claim that I know that Doug’s report is misleading.  This argument can be constructed as follows.

(8)  I know that my car is in the parking lot.

(9)  I know that if my car is in the parking lot and Doug reports otherwise then Doug’s report is misleading.

(10)  I know that if Doug reports that my car is not in the parking lot, then Doug’s report is misleading.

(11)  I know that Doug reports that my car is not in the parking lot.

(12)  I know that Doug’s report is misleading. 

 

When (7) is added to the end of this argument the paradoxical result follows.  And since the example is an arbitrary one, the argument, if cogent, shows that I am in a position to ignore any evidence that conflicts with anything I know.  And if the argument works for testimony, it works equally well for evidence stemming from other sources, including sense perception.

            The individual premises of the above argument can be defended in the same way as the premises of Sorensen’s argument.  (8) is assumed ex hypothesi.  (9) is true because I know the definition of ‘misleading’ and am able to apply it in this case.  (11) is also assumed for the sake of argument.  Granting a principle to be discussed shortly, (10) and (12) are deductive consequences of the others.

 

II. Closure

The revised argument is more complex than Sorensen’s and in this complexity a critic might seek solace.  Generating the paradoxical result requires that each premise of the argument be nested within a knowledge operator.  Doing that seems to require employment of the much contested principle that knowledge is closed under known deductive implication.  In other words, if S knows that P and S knows that (P entails Q) then S knows that Q.  This principle is employed in both inferences in the argument.

            The closure principle has taken fire from several fronts.  Some philosophers have rejected it because they believe that it leads to skepticism.[5]  Suppose we grant the claim that we don’t know that we are not brains in vats.  We know that being a brain in a vat and having hands are mutually exclusive.  So if we accept the closure principle and grant that we don’t know that we are not brains in vats, then we must grant that we don’t know that we have hands.  Given that the skeptic might use the chain of reasoning in (8)-(12), the closure denier may take this line of reasoning as further evidence that closure leads to skepticism and therefore ought to be rejected. 

While there might be some reasons for rejecting the closure principle, this is not one of them.  First, the best argument for skepticism is a standard underdetermination argument.[6]  And this kind of argument need not employ the controversial principle at all.  If our aim is to circumvent skepticism, it would appear that denying closure is too little, too late.  Second, if the skeptic has good independent reasons for rejecting (8) or any analogous claim, and these reasons do not employ the closure principle, then we are not warranted in denying the inference simply because it leads to skepticism. 

A different kind of argument to support closure denial comes from more straightforward counterexamples to the principle.  It is not difficult to find cases where someone knows that P, knows that (P entails Q), but fails to know that Q because he has not “put the two together”.  He fails to know that Q because he has not undertaken the inference and formed the belief that Q.  In fact, it seems that given our finite minds, a frequent failure to draw out the logical consequences of our beliefs is a practical necessity.

When the closure principle fails, it is because its consequent ascribes to the subject a certain psychological state which the subject may, as a contingent matter, fail to be in.  But the closure principle can be modified to avoid this problem.  The only expense is wordiness.  If S knows that P and S knows that (P entails Q) and S attentively considers his knowledge that P together with his knowledge that (P entails Q) and S makes the required logical inference, then S knows that Q.  In what follows, I will assume that this modified closure principle is being employed and that its conditions are met in the above argument.  Of course, this means that the argument applies only in cases where the subject meets these conditions and not in every case of knowledge.  But that does not make the result any less paradoxical.  It only means that we avoid dogmatism as long as we fail to attentively reflect on our beliefs and make logical inferences. 
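The modified principle can be stated compactly (the symbolization is mine and merely abbreviates the prose above): write KP for ‘S knows that P’, A(P, Q) for ‘S attentively considers his knowledge that P together with his knowledge that (P entails Q)’, and I(Q) for ‘S makes the required logical inference and forms the belief that Q’.  Then:

            Closure:  (KP ∧ K(P → Q)) → KQ

            Modified closure:  (KP ∧ K(P → Q) ∧ A(P, Q) ∧ I(Q)) → KQ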

Audi sees the dogmatic argument as itself a counterexample to the closure principle.[7]  Thus he would reject the inference from (8) and (9) to (10).  In his response to Audi, Feldman argues that propositions like (10), when understood as material conditionals, are perfectly unproblematic.[8]  What cannot be derived from (8) and (9) is knowledge of the corresponding subjunctive conditional: if Doug were to report that my car is not in the parking lot then his report would be misleading.  As a matter of logic, Feldman’s point is of course correct but it does not solve the problem at hand.  The dogmatist’s argument does not make use of any subjunctive conditionals.  The material conditional works just fine for him.        

In the opening of his paper, Feldman claims that “denying closure is one of the least plausible ideas to come down the philosophical pike in recent years” (p. 487).  While I have sympathies for Feldman’s point, at least with respect to a properly restricted version of the closure principle, I could also understand the motivation for rejecting it were there no other response to the dogmatic argument.  In the final section I will argue for my own solution to the paradox, but first I must discuss Sorensen’s.

 

III.  Conditionals and Junk Knowledge

Sorensen attempts to solve the paradox via Jackson’s theory of conditionals.  On this view, certain kinds of conditionals do not admit of “expansion” or employment in modus ponens arguments.  In this section, I will explain Sorensen’s position in terms of the argument that he constructs ((1)-(5) above) and assume that the same ideas apply mutatis mutandis to my revised version of the argument ((8)-(12) and (7) above). 

            Frank Jackson defends the Equivalence Thesis for indicative conditionals.[9]  This is the view that ordinary language statements of the form “If A then B”, when expressed in the indicative mood (to be represented “A → B”), have the same truth conditions as material conditionals or statements of the form “A ⊃ B”.  According to Jackson, assertions of indicative conditionals function to “signal robustness” with respect to their antecedents.  A proposition P is robust with respect to information I if and only if the probability of P and the probability of P given I are close to each other and both high.  If the probability of P is high but its probability decreases significantly as an item of information comes in, then P is not robust relative to that information.  Thus, according to Jackson, when I assert A → B I am representing it as being the case that (i) the probability of A → B is high and (ii) if I were to learn that A is true, A → B would continue to have a high subjective probability.
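Jackson’s condition can be put in probabilistic notation (the formulation is mine; Jackson states the condition in prose): P is robust with respect to I if and only if

            Pr(P) ≈ Pr(P | I), with both probabilities high.

Applied to an indicative conditional under the Equivalence Thesis, asserting A → B signals that Pr(A ⊃ B) is high and that Pr(A ⊃ B | A) is also high.  Since Pr(A ⊃ B | A) = Pr(B | A), robustness with respect to the antecedent amounts to the requirement that the conditional probability of the consequent given the antecedent be high.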

This is Jackson’s basis for thinking that the alleged paradoxes of material implication are really just cases where the conditional is true but not assertible.  If I assign a high probability to A → B merely because I assign a high probability to Not-A, then, were I to learn that A is in fact true, the probability of the conditional would be significantly reduced.  Thus the conditional is not robust with respect to its antecedent and therefore not assertible.[10] 
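A toy calculation (the numbers are mine) illustrates the mechanism.  Suppose Pr(A) = 0.1 and Pr(B | A) = 0.1.  Then

            Pr(A ⊃ B) = Pr(Not-A) + Pr(A ∧ B) = 0.9 + 0.01 = 0.91,

so the material conditional is highly probable.  But Pr(A ⊃ B | A) = Pr(B | A) = 0.1.  On the Equivalence Thesis the conditional is very probably true, yet it is not robust with respect to its antecedent, and so, on Jackson’s account, it is not assertible.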

Jackson’s theory is helpful in explaining why certain kinds of conditional assertions seem incorrect even though, if the Equivalence Thesis holds, they are true.  We are collecting mushrooms and I assert “If you eat that one, you’ll die”.  Since I know that you tend to take my word on matters of fungi, the probability of the antecedent is very low, and thus the probability of the conditional, once it is asserted, is very high.  But the mushroom referred to is not poisonous.  I only asserted the conditional because I wanted that mushroom for myself.[11]  Jackson’s theory explains why conditionals such as this are inappropriate to assert even when they are known to be true.  If I were to learn that you ate the mushroom anyway, the probability of the conditional would be greatly reduced. 

The relevance of all of this to the dogmatic argument, as Sorensen sees it, is this.  Certain kinds of conditionals, mushroom conditionals among them, are not robust with respect to their antecedents and therefore exhibit “resistance to modus ponens” (p. 444).  While I am initially justified in believing the mushroom conditional, I would not be justified in believing the consequent if I were to learn that the antecedent is true, because the probability of the conditional falls as the probability of the antecedent rises.  Since they are not highly probable together, I am not warranted in carrying out the modus ponens inference. 

As it turns out, there are many cases of conditionals that are resistant to modus ponens in just this way.  Gettier conditionals are one example:  If Jones does not own a Ford then Brown is in Barcelona (based on the knowledge that Jones owns a Ford).  Monkey conditionals (i.e., conditionals with absurd consequents) are another: If he’s a good cook then I’m a monkey’s uncle.  Consequential blindspots (i.e., conditionals for which the consequents are possibly true but could not be coherently believed) are a third type: If he was holding a flush, then I’ll be forever innocent of that fact.  

The last two types are often assertible, so Jackson’s theory needs to be slightly modified to account for them.  For monkey conditionals, Sorensen suggests that we recognize that they are ways of indirectly asserting the falsity of their antecedents; thus the conditions for asserting the falsity of the antecedent, rather than the conditions for asserting the conditional, are what apply.  Consequential blindspots, Sorensen suggests, are acceptable exceptions to the general rule.  Since they are regarded as an exception, I will ignore them in the discussion that follows.

According to Sorensen, the dogmatic argument goes wrong in that it employs conditionals that are also resistant to modus ponens.  In particular,

(3)  If Doug reports that my car is not in the parking lot, then his report is misleading.

 

is just this sort of conditional.  He writes

given that I am justified in believing P, I am indeed justified in believing that any contrary evidence is misleading.  However, I am not justified in asserting that any contrary evidence is misleading.  Asserting ‘Any contrary evidence is misleading’ is equivalent to asserting ‘If there is contrary evidence, then it is misleading’.   This conditional is not robust.  But asserting it signals that it is robust and, so, is a conversational misdeed (p. 445).

 

Sorensen connects assertibility to the issue at hand by way of a theory about the purpose of argument.  “To draw a conclusion from your premises is to assert the conclusion on their basis. Assertion carries more than a commitment to the truth of what one says.  For the assertion indicates that the relevant robustness requirements are satisfied” (p. 451).  Since a key premise in the argument is not assertible, the argument cannot be used to justify its conclusion.  It would be misleading to employ this argument in support of this conclusion because to do so is to indicate that premise (3) is robust and it is not.  While (3) is known to be true, this knowledge cannot be expanded under modus ponens and so it is merely “junk”.

 

IV.  Sorensen’s Solution Assessed

Sorensen has more to say about the dogmatic argument and its connection to other issues but what has been said so far suffices for our purposes.  A closer look at the typical examples of conditionals which resist modus ponens will reveal that the phenomena behind the behavior of these conditionals are explainable in terms of extremely simple epistemic principles.  I will then argue that this makes trouble for Sorensen’s solution. 

            An argument can serve to justify an agent in believing its conclusion only if that agent is justified in believing its premises.[12]  Since mushroom and monkey conditionals are believed on the basis of a belief in the falsity of their antecedents, it is no great wonder that they resist expansion under modus ponens.  Their “resistance” is simply a consequence of the fact that once I accept their antecedents, my basis for believing the conditional itself is forced away by propositional logic.  Any old belief can go from being justified to unjustified once its basis is removed or epistemically undermined by accepting things inconsistent with it.  If the conditional is no longer justified, then I am not justified in believing the consequent upon forming a justified belief in its antecedent.  This loss of justification is particularly surprising in the case of belief in a conditional because, if Jackson’s view is correct, conditionals are designed to be expanded by modus ponens.  But the fact that this is surprising should not lead us to think that the underlying epistemology is anything more than mundane.      

There is a thorny issue here about whether the maxim ‘don’t believe inconsistencies’ is acceptable.  Some philosophers argue that it is physically impossible for a finite human subject to test every new belief for logical consistency with each of his other beliefs.[13]  Even if this claim is true, it is not clear that the maxim should be rejected outright.  First, perhaps there is something to Plato’s view that ideals, even when unattainable in the actual world, are still worth striving for in some sense.  Second, while there may be cases where accepting a contradiction is epistemically permissible (provided the right sort of mental partitioning is in place), these cases involving conditionals are not among them.  The logical contradictions here are immediate and obvious.

Consider the Gettier conditional.  Smith’s basis for believing that if Jones does not own a Ford then Brown is in Barcelona is his knowledge that Jones owns a Ford along with his knowledge that P entails (if Not-P then Q).  Smith cannot infer Q upon acquiring justified belief that Not-P but, once again, that is because Smith’s acceptance of Not-P logically contradicts his basis for believing the conditional.    

At this point, it might sound like the dispute here is merely a verbal one.  Sorensen wants to call mushroom, monkey and Gettier conditionals junk knowledge and bring in the apparatus of Jackson’s theory of conditionals to explain why they cannot be used in certain kinds of arguments.  Even if this is a bit lavish, does it really matter whether the data are explained in this way or by some simpler route?

In the context of the original paradox, it does matter.  If the story told in this section suffices to explain why these conditionals cannot be used in modus ponens inferences and it turns out that the same story cannot be told of the conditionals in the dogmatic chain of reasoning, then there is a reason to doubt that Dougie Conditionals (such as (3)) are in the same family as the others.  If they are not, then no analogy can be drawn and that would mean that Sorensen needs to marshal additional reasons for thinking that the dogmatic argument fails because of the non-robustness of one or more of its premises.

The simpler explanation does not apply to the conditionals employed in the dogmatic argument.  My basis for believing (3) is (1) and (2).  The antecedent of (3) is not inconsistent with (1) or (2), nor is it inconsistent with my epistemic bases for believing (1) or (2).  Sorensen acknowledges this point but does not regard it as a problem for his view.  He believes that the class of non-robust conditionals is the most general category, with mushrooms, monkeys, Gettiers and Dougies all being subsets.  It might be true that the dogmatic argument employs a non-robust conditional, but since the analogy with the clear cases of non-robustness fails, this point needs independent argument.     

 

V.  An Alternative Solution

When determining whether to adopt a moral principle, a good consequentialist must consider not only the consequences of adopting the principle and applying it correctly but also the consequences of applying it incorrectly and the frequency with which this might occur.  Let us propose the following policy for the officers at the local police department:

(G)  If you know that a suspect is guilty, deliver an appropriate form of punishment right then and there.

 

If our goal is justice, (G) is a pretty good policy, as long as it is never misapplied.  Those who are known to be guilty will always get punished and they will always get exactly what they deserve.  (And think of the time and money we would save on court procedures!)  But this is obviously not a policy we should endorse.  The danger in adopting (G) is that it is likely to get misapplied.  It is likely that many of those who are merely thought to be guilty will get punished and it is likely that forms of punishment which are merely thought to be appropriate will be dispensed.

            The weakness in the dogmatic argument is a similar one.  Consider again the essential final premise.

(7)  For any subject S:  If a piece of evidence is known by S to be misleading then S is in a position to disregard it.

 

This principle coheres with our epistemic goals of maximizing the number of true beliefs and minimizing the number of false ones, provided it is not incorrectly applied.  The problem is that human subjects often take themselves to know things that they do not.  Therefore (7) is objectionable, not so much because of what it says but because of the cost and likelihood of misapplying it.[14]  We cannot, without falling into a Moorean absurdity, recognize cases where we merely think we know.  Consider the absurdity of believing: I don’t know that p but I believe I do.  Adopting (7) will result in our disregarding evidence that we merely think is misleading, just as adopting (G) will result in our punishing people that we merely think are guilty. 

We reject a rule such as (G) in acknowledgement of human fallibility.   For the same reason, we have adopted further policies and procedures which are designed to allow for self-correction after suspects are arrested.  We ought to apply the same kind of reasoning to our own personal belief forming practices.  Acceptance of (7) overlooks the fact that we often take ourselves to know when we do not and it robs our epistemic practices of their self-corrective capacity.  Cases will arise where we apply the principle to things we merely think we know and by disregarding what could be corrective evidence, we will force our heads deeper and deeper into the sand.  (7) is bad epistemic policy.  And without (7), dogmatism doesn’t follow from the other premises of the argument.  Thus I am not in a position to disregard evidence even when I know it to be misleading. 

Even though I contend that rejecting (7) is no more irrational than rejecting (G), I must admit that the whole thing sounds initially like epistemological suicide.  There are several reasons for this prima facie counter-intuitiveness.  In cases where our background beliefs are such that once we accept the counterevidence we no longer know that P, we will, upon accepting the evidence, not maintain that it is misleading.  That is because our basis for believing that it is misleading has been removed.  We must also remember that, in our actual epistemic practice, it all happens so fast.  As a matter of psychological fact, we are naturally inclined to accept the testimony of those we take to be generally reliable (at least for the sorts of examples considered here).  And once we do, we can no longer call that testimony misleading.  This explains why philosophers such as Harman overlook the conceptual distinction between being aware of the existence of a piece of evidence and accepting it.  In actual practice, we tend to move automatically from one to the other (at least for sources of evidence we take to be generally reliable).  Although Harman is correct to think that we should accept relevant evidence once we are made aware of it, he does not supply an independent argument for why we should do this.  That is what I have tried to do here.  Our long-term epistemic goals are better served by adopting an epistemic policy that requires us to accept rather than ignore the available evidence.

Additional worries over my proposal to reject (7) may stem from a failure to acknowledge the role of background beliefs in determining whether accepting an item of evidence will destroy my knowledge.  I know that there are no such things as zombies (i.e., in the traditional Hollywood sense of undead beings who roam the earth to feast on human flesh rather than the sense that concerns philosophers of mind).  Based on that, I also know that if the supermarket tabloid says that those partaking in Haiti’s political revolt are zombies then the supermarket tabloid is misleading.  My proposal requires that when we read of a Haitian zombie uprising in the tabloid, we ought to accept this evidence, even though we know it to be misleading.  But it is a confusion to think that this is absurd or epistemically suicidal.  Accepting the evidence in this case (i.e., accepting the proposition that the supermarket tabloid has reported that those partaking in Haiti’s political revolt are zombies) does not destroy any of my knowledge.  Given my background belief in the unreliability of supermarket tabloids, I can accept propositions of the form the supermarket tabloid says that P at no epistemic cost.  The source of confusion here may lie in the fact that “accepting S’s report that P” is ambiguous between accepting the content of the report and accepting the proposition that S has reported that P.  Whether the latter entails the former depends on one’s background beliefs.  This explains why the situation is different when I accept the proposition that reliable Doug has reported that my car is not in the parking lot.  Given my belief in his general reliability, once I accept the evidence (i.e., once I incorporate the proposition that he has made the report into my belief system and make the relevant probability adjustments), I stand to lose my knowledge that my car is in the parking lot.  But, if the argument of this section is cogent, it is rational to incur such a loss.      

If our basic sources of evidence are generally reliable, they will not lead us astray most of the time.  That’s a tautology.  And thus it is wise epistemic policy to always follow the evidence.  The benefits outweigh the costs.  The downside of this solution is that our sources of evidence are not perfect.  If Mom wants to rob me of my knowledge that we are having pigs’ feet tonight, she can do so by producing some misleading evidence.[15]  But this is as it should be; or at least, as it must be.  We will, no doubt, occasionally be misled by our evidence.  To borrow Ginet’s (1980) phrase, we will sometimes end up knowing less by knowing more.  Our long-term goal of justice occasionally results in injustice and our quest for knowledge occasionally dooms us to ignorance.  Such is the human condition.[16]    

             

 East Carolina University



[1] G. Harman, Thought (Princeton University Press, 1973), p. 148.  Future references to Harman are all from this work.

 

[2] I.T. Oakley,  “A Skeptic’s Reply to Lewisian Contextualism”, Canadian Journal of Philosophy 31 (2001), pp.  309-332 at p. 318.  D. Lewis, “Elusive Knowledge”, Australasian Journal of Philosophy 74 (1996), pp. 549-567.  If Oakley is right then Lewis’ solution to Harman’s paradox fails for the same reason that Harman’s does.  An adequate discussion of Lewis’ epistemology is, however, outside the scope of this paper. 

[3] W.V.O. Quine, “Two Dogmas of Empiricism”, Philosophical Review 60 (1951), pp. 20-43.

[4] R. Sorensen, “Dogmatism, Junk Knowledge and Conditionals”, The Philosophical Quarterly 38 (1988), pp. 433-454 at p. 438.  All subsequent references to Sorensen are from this work unless otherwise indicated.

 

[5] See R. Nozick, Philosophical Explanations (Harvard University Press, 1981) and F. Dretske, “Epistemic Operators”, Journal of Philosophy 67 (1970), pp. 1007-1023.

[6] For a discussion of this kind of skeptical argument see U. Yalcin, “Skeptical Arguments from Underdetermination”, Philosophical Studies 68 (1992), pp. 1-34.

 

[7] R. Audi, Belief, Justification and Knowledge (Belmont: Wadsworth, 1988).

[8] R. Feldman, “In Defence of Closure”, The Philosophical Quarterly 45 (1995), pp. 487-494 at p. 490.  All subsequent references to Feldman are from this work. 

[9] F. Jackson, “On Assertion and Indicative Conditionals”, Philosophical Review 88 (1979), pp. 565-589.

[10] Jackson has a somewhat different way of handling the counter-intuitive inference from B to A ą B.  But this does not concern us here.

[11] The example comes from D. Lewis, “Probabilities of Conditionals and Conditional Probabilities”, The Philosophical Review 85 (1976), pp. 297-315 at p. 306.

[12] An anonymous referee has pointed out that an argument containing premises which are unjustified but redundant may constitute a counterexample to this principle.  In response to this worry, one could either restrict the principle or say that when an agent presents an argument with unjustified but redundant premises in support of some belief, the belief is not justified on the basis of that argument, although the agent would have another argument available (i.e., the old one minus the unjustified redundancies). 

[13] See C. Cherniak, Minimal Rationality (Cambridge: MIT Press, 1986) and E. Stein, Without Good Reason: The Rationality Debate in Philosophy and Cognitive Science (Oxford: Clarendon Press, 1996). Sorensen, Vagueness and Contradiction (Oxford: Clarendon Press, 2002), also argues for epistemically acceptable inconsistencies.   

[14] If we understand (7) propositionally, it cannot be defended merely on the grounds that believing it would serve our epistemic goals.  Compare “Good scientists go to heaven”.  The fact that believing this might make me work harder at obtaining knowledge does not make it epistemically credible.  I take it that a proponent of (7) would rather we see it as a rule:  If you know that e is misleading, disregard it.  If this is how we are to understand (7), the argument above can be easily modified to account for that.  I thank an anonymous referee for making this point clear to me.

[15] But remember—she can’t do this very often.  If she loses her credibility, she will eventually obtain the evidential status of a supermarket tabloid. 

[16] This paper was presented before the 96th meeting of the Southern Society for Philosophy and Psychology in New Orleans, Louisiana.  I thank Umit Yalcin, Debra Tollefson, Bill Lycan and two anonymous referees for their very thorough and helpful comments on earlier versions of this paper.