Friday, March 5, 2010

From Scanlon: "Why should this challenge be so difficult? Samuel Scheffler has suggested one possible answer. What 'lies at the heart of consequentialism,' he says, is 'a fundamental and familiar conception of rationality that we accept and operate within a very wide range of contexts.' This is what he calls 'maximizing rationality.' 'The core of this conception of rationality is the idea that if one accepts the desirability of a certain goal being achieved and if one has a choice between two options one of which is certain to accomplish the goal better than the other then it is, ceteris paribus, rational to choose the former over the latter.'"

In context, this is a discussion of a well-known objection to deontological ethical views. The accusation is that there is a maximizing principle of rationality: if one accepts something as a goal, then one should maximize the realization of that goal and minimize anything that obstructs it. Scanlon offers a response to this objection. So goes that discussion.

But I'm interested in something else, and that is the intuitions that crop up in this discussion. I think that there are clearly some times when being moral requires us to confer value on states of affairs, and this is what drives the "paradox of deontology." (Scanlon resists the idea that we always accept some goal whenever we accept some reason or confer value on some action.) But there is clearly a real sense in which some things' being morally right does involve a maximizing rationality. The idea that you should do something a little bit wrong in order to do something very good has great currency in ethics for a lot of people. Maybe it's wrong to lie most of the time, but when a human life is at stake the equation changes. It then becomes right to do the bad thing in order to do a great deal more good.

Let's talk about epistemology. Is there anything like this? Sorta. There is a sense in which, taking a third-person perspective on the matter, you can say: "We should invest in science education because more good will come of that, even though in doing so we'll take away resources from math. Even though we value people having knowledge, we will be able to maximize the potential for knowledge by focusing on science education." That works.

How strong is the parallel to ethics, though? Here's something that doesn't seem to me as if it should work in epistemology: Say that it's empirically true that a person will do a better job at believing true things and disbelieving false things if they believe that there is a secret ghost in their brain that gets very, very upset at them whenever they have a false belief. The myth keeps them on their toes. Now, should we apply the maximizing rationale here and say that there's a fine epistemic trade-off going on: we've given up one false belief for the benefit of far more true beliefs? This seems very, very wrong.

I think that the ethical parallel is much more plausible. We very well might teach a person that they should be willing to harm one person, as long as the good they can do outweighs the bad that they've done. This is the burden of consequentialist theories. But my point here is not that consequentialism is true in ethics in a way that it's not for epistemology (I have no idea). My point is that there certainly are some cases when it would be right to do something bad for the benefit of doing far more good. But in the epistemic case my sense is that we can never justify believing some falsity for the sake of believing far more truths.

Now, this might just mean that having true belief isn't really a goal of epistemology. But really? It sure seems like a crucial goal of epistemology is ensuring that believers have true beliefs.

What could explain this difference between epistemology and ethics, the difficulty of applying a maximizing rationality to epistemology?

Either the maximizing rationality isn't really a rational requirement, or epistemology doesn't involve commitment to any epistemic goals or aims, or something besides the state of affairs of having true beliefs is the aim of epistemology. I think it means that we're not taking on any goals in epistemology, unlike the case in ethics, where sometimes we do value states of affairs. Epistemology is not about the value of certain states of affairs--I think that's what this means. Does this relate to the other ways in which epistemology is unlike ethics? Not sure. Sure hope so, cuz I need 20 more pages.

Thursday, March 4, 2010

A short list of philosophers who grapple with epistemic and moral norms together

Scanlon
Putnam
Korsgaard
Gibbard
Cuneo
Enoch
Sayre-McCord
Shafer-Landau
Parfit
Aristotle (sorta...epistemic virtues show up in the Nicomachean Ethics)
Hartry Field
Frege
Kant
Alton
Dancy
Tim Williamson (Introduction especially)
Sharon Street (Darwinian Dilemma and Evolution [Draft])

To be updated as I continue thinking...

Gibbard

"What, then, of Putnam's claim that norms infuse facts? With this I fully agree: the beliefs I am calling factual depend on epistemic norms--on norms for belief. That we continue to hold the beliefs we do depends on our thinking it makes sense to do so. It would be incoherent, then, to dismiss all normative judgments as merely subjective, while accepting some factual beliefs as firmly and objectively grounded. From the point of view of their justification, they are on a par; factual beliefs and normative judgments stand or fall together.

None of this means that epistemic norms themselves are facts, or that factual judgments themselves are normative. The justification of factual beliefs is a normative matter, but that does not turn factual beliefs into normative judgments. There remains the challenge to say what the difference is. I have suggested a simple linguistic test: a notion is normative if we can paraphrase it in terms of what it makes sense to do, to think or to feel. Later I try for a more systematic account..."

I talked with BN today and he advocated a view like Gibbard's. Without having studied him, I have to say that I feel the attraction.

Wednesday, March 3, 2010

Man, should've read Shafer-Landau a while ago

"We can be helped to see this by comparing ethics not to philosophy as a whole but to one of its close philosphical cousins. In my opinion, moral facts are sui generis, but they are most similar to another kind of normative fact--epistemic facts. Epistemic facts concern what we ought to believe, provided that our beliefs are aimed at the truth. Once one understands the concept of logical validity, then if confronted with a modus ponens argument one ought to blieve that it is logically valid. This is a true epistemic principle."

Also, "The epistemic principle [the causal test] is problematic because it invokes an entity--a good reason--whose existence is not itself scientifically confirmable. It's like saying that God sustains a universe that contains no supernatural beings. There's a kind of internal incoherence here: the claim discounts the existence of the kind of thing that is presupposed by the claim itself."

I think that my line of argument from the first chapter is starting to become even more focused. It goes like this: realists recognize that epistemology offers some sort of help, but investigation into epistemology reveals much about why certain arguments don't apply to it. If moral truths are going to gain help from epistemology, it's only because they can escape in the same way, by being taken as basic.

Note to past Me

Keep on reading! It's really really important to never stop reading at any point in this writing process. When you're stuck, it's the ideas of others that push you through. Don't try to tackle things on your own!

Another way of putting my failure to cope with math, and why epistemology has been more helpful

Math seemed helpful because mathematical realism is similar to ethical realism, since there are similar issues involved with all sorts of realism. I originally mistook that for an actual resemblance between ethics and math. That resemblance actually exists between ethics and epistemology, though.