Wednesday, February 24, 2010

Ideas that haven't really fit into this story yet

1. So far I haven't fit into this story the idea that epistemology governs actions as well as beliefs.

2. There are other important differences between ethics and epistemology. A maximizing rationality seems wildly implausible in one, and not so wild in the other. One is focused on theory more than action.

3. A very different, longer project would talk about the primacy of practical reason. Note that at the very beginning of our story about theoretical reason is a choice. Maybe it's not right to represent that as a choice, but then what can we say when there are two options and no norms pushing one way or the other? Ah, but there are norms, they're just not epistemic ones? Then what kind are they? If they're practical norms or considerations of some other kind, then standing above objective, theoretical reasons are some other sort of reasons. Isn't that strange?

4. Korsgaard has her own arguments against moral realism that I haven't read yet.

5. Maybe belief really is just action. Haven't read Stalnaker yet. But that would just mean that there's nothing weird about practical reason in the first place, no? Anyway, haven't read it yet.

More?

What's the obvious thing to say, and what could I be saying that's not obvious?

Here's an obvious story to tell: Consider any old thing that you know. It's justified. Well, you follow justification up the chain, and you end up stuck with something that's gotta be basic. Well, what's our justification for believing that? Either it's experience (not a belief), a priori knowledge, or we just take it as basic (and then we had better figure out some way of distinguishing those beliefs from others). In response to the familiar regress problem we might have to deal with arguments against this way of understanding justification, but we may feel that our picture of justification has got to be foundationalist. And maybe the beliefs that we take as basic are the really really obvious ones. And then we argue that moral realism or something is really really obvious in the same way.

This is a simplistic story. And it wouldn't be right to ascribe this to anyone. But, for the sake of my own thought process, lemme ascribe it to someone who doesn't deserve this kind of simplistic treatment. We'll call him E. E tells a sort of similar story. All of our beliefs are generated by some basic belief-forming principles. How are these basic belief-forming principles (such as IBE) justified? This goes just one step above the previous analysis of the tree of justification. Meaning, take whatever our basic beliefs are, and say that we're not taking them as basic. Maybe because we're relying on experience directly, or maybe because we're relying on a priori knowledge. But that means that we have some method for forming beliefs that takes experience or a priori intuitions and results in justified beliefs. These methods themselves need justification, though, and so now we're really left up the creek without a paddle. No matter what your solution, now you have the problem of those who simply accepted certain beliefs as basic. So how do you distinguish between the good basic beliefs and the bad ones? You tell a story about which basic methods are justified and which methods are not.

This could not provide ultimate justification, of course. Any first year philosophy student can see that this would only shift our problem to another method, another principle and another unjustified belief. I think that this is pretty obvious, that there is no ultimate ground, only relative grounds, and epistemic regress is unavoidable.

E's answer seems to be just to provide some belief that justifies IBE instead of IWE. That's great, and maybe we should take that as primitive instead of IBE, but how does that help, ultimately?

I think the answer is that it doesn't. So now I want to explain where there is room for someone to say something a bit different.
----------
One point to note is that the idea that we could be justified from top to bottom in all our beliefs is necessarily false. I argue this by pointing out that epistemic realism--a view whose falsity would result in there being no justified beliefs at all--cannot itself be justified. I then argue that whatever our most basic beliefs are, they similarly cannot be justified--after all, how would you justify them without some epistemic principles, and I am denying you even those at this point.

Then I argue that this doesn't spell doom at all for our cognitive enterprise, because these beliefs that cannot be justified are also not unjustified, that is, it's not like we think that they're wrong. It's just that there can be no reasons, either for or against.

Now, where does ethics fit into this picture? The typical picture is that moral realism might gain justification somewhere down the line. But, given this picture, the most promising thing to attempt is to formulate some sort of principle that can serve as one of the most basic ones, the kind that can't be criticized for being unjustified since we don't have the epistemic resources to do so.

This needs care, for two reasons. One, there's a problem with taking particular beliefs as basic. I don't know what it is, but there is such a problem. Another is that epistemic principles can conflict and be unstable.

So, in sum, this is the picture I'm providing. Epistemology is an a priori endeavor, where we necessarily start by taking certain things for granted. These things that we take for granted are similar to intuitions, in the sense that the only reason we believe them is because they're obvious and available to us, and not because we think that there's a reason that they're true. In fact, at the very foundations we can't have reason to think that things are true, and so there's no way to criticize our taking certain things for granted. This means that, very quickly if we choose well, we start getting a system of epistemology, of what's justified and what's unjustified, what we should believe and what we shouldn't. And this is all for free, more or less, from those first things. We choose things when we can't be criticized for not choosing them (when they don't conflict). We get a lot, but we're sloppy and it's complicated, so we're still fighting over it. But the pressures of cooperation and living together force us to refine our system over time, and we've got it down pretty well in practice.

(Note that my arguments show, I think, that if our most basic item of cognitive commitment is normative, we're gonna be in trouble, because eventually theoretical reason runs out of the resources with which to defend itself. This is natural and untroubling.)

The question is, have we left things out of our picture? Maybe we left ethics out. Maybe we left religion out. Did we screw up? How could we tell if a basic belief doesn't work out? After all, at the very start of our cognitive adventures we have no epistemic principles, and no way to criticize you for believing anything at all. So what's to stop you from taking something as basic? Nothing possibly could. Well, should we just pack up and go home? No. We have to make decisions, and this is something that we might prefer to avoid, but it's not something that we can. (With a nod to that piece that I like by David Lewis) at a certain point we just pick between competing systems, and that's all we can do. So we can add a belief to our basic set and then note the troubles that occur, and then decide whether it's worth giving up the conflicting beliefs or the basic one.

How does this work out with ethics? We need to see if there are any conflicts with the rest of our beliefs. Well, there are, and these are the arguments for anti-realism in ethics.

But here comes epistemology and theoretical reason again to add an interesting twist. We started by trying to find a place for the moral norms governing practical reason inside theoretical reason. And then we noted that the foundations of theoretical reason are such that there are a number of blank spaces. And then we noted that ethics could be plugged in, but that it conflicts with much of the rest of our picture. Here comes an interesting suggestion: maybe our theoretical picture of the world sucks. How could that be? Suppose that we had an epistemic principle that said something like "Never ever believe anything without justification." That principle is self-defeating, since it would undermine our basic principles (that presumably lead to this principle) and epistemic realism itself! So that would suck. Maybe there are other epistemic principles behind our objections to moral realism, and we're simply not being sufficiently reflective to notice that they're self-defeating.

How could this be? There are these arguments against moral realism. Do they actually also apply to epistemic realism?

Now, a note: epistemic realism is a VERY different belief from moral realism. From the perspective of theoretical reason, it's basically rock bottom, and that's what the above arguments show. So to argue that we could prove moral realism by parity arguments...that's just not going to fly. Another problem is that none of these arguments could actually be considerations counting AGAINST epistemic realism. It's unclear what they would be capable of showing at all in a discussion of epistemic realism (yes, they could show epistemic expressivism, but then you're in the peculiar position of defending expressivism against arguments that it falls into nihilism in order to defend moral realism?)

At best, here's what we could hope for: these arguments show us that we have permission to take moral realism as a basic principle, or something. This would involve clearing up the confusions about what ethics and epistemology require. And then we would have permission to take it as primitive.

So this is the lesson of Enoch combined with the lesson of Cuneo: Enoch tells us that if we could take ethics as basic that would rock. The lesson of Cuneo is, maybe ethics isn't all that much worse than epistemology.

Tuesday, February 23, 2010

Intuitionism and Epistemology

There is a certain sense in which I'm defending a kind of intuitionism about epistemology. On the account I've been writing about, what plays the role of an intuition are epistemically optional beliefs. They are intuitions in the sense that they are our most obvious beliefs, but there is a reason why they are our most obvious beliefs: because they are our most basic beliefs, and so they play a central role in our webs of belief, so to speak. What this means is that a principle like IBE (if it's basic) is not believed because we have reason to think that it's true, but rather out of a pure intuition--that is, we make an epistemically optional choice to believe in it. Same with epistemic realism: it's a pure choice, made for no reason having to do with the truth. This is what an intuition could be.

Then there is a sense in which we might be able to defend a corresponding kind of moral intuitionism, at least in theory. Now, epistemic realism is at the very very foundations of our theoretical world, and moral realism plays no such role. But perhaps there is some belief that we may take as basic that does not interfere with our other basic epistemic principles. This would make it epistemically optional, and then there would be a sense in which belief in certain ethical principles is an intuition.

This is one way of reading Enoch, I think. I think that there are two problems with Enoch. The first is that I'm not sure why we should take the pragmatic principle as basic--it doesn't seem to get us anywhere. The second is that unless you deal with all the arguments against moral realism first, the argument is implausible, because your moral principle will conflict with your epistemic principles about explanation (for example).

But perhaps the following is a programme:
1. Show that ethics does not conflict with our other epistemic beliefs
2. Then you can take some ethical principle as basic.
3. Then you believe in ethics and can't be blamed for it.

That might just be another way of formulating the question that isn't helpful. But I'm tired, and that's all for tonight.

Belief and Reasons

Scanlon p.36:

"Even if it is true that in order to believe something one must take there to be a reason for thinking it true (so there can be no such thing as believing something simply becuase one would like it to be true)..."

Note: There can be no reason for believing in epistemic realism. Unless we're going to say that the true logic of a denial of epistemic nihilism doesn't resemble its surface appearance in some strange way (a Wittgensteinian move or something), we're stuck with a regular old belief that we can't have reason to think is true. So if that's not belief, then we have no foundation for knowledge.

Theoretical and Practical Rationality

I understand that some believe that practical rationality is autonomous from theoretical rationality, and that the search for moral realism is a lost cause because it's an attempt to build up practical rationality from theoretical rationality. I sorta understand that. What I don't even sorta understand is why we would try to philosophize about practical rationality, if that is the case. Isn't that the attempt to bring our theoretical reasoning abilities to bear on practical rationality? Yes, to understand something is not to inhabit it, but then how is that different from the original attempt to do moral philosophy from the standpoint of the theoretical reasoner? I'm confused.

Monday, February 22, 2010

Why epistemology, again

Just a reminder:

There are tons of things for which we have reasons. We have reasons to fear, reasons to hope, etc. These are reactions that are conceptually distinct from belief. So why think that the study of what reasons there are to believe things might help vindicate the study of what moral reasons we have to do things? Because unlike most of these areas, it's hard to pass off epistemic statements as being second rate from a cognitive point of view. If epistemic statements can't be true or false, then we would seem to be in a boatload of trouble. And what we really want is to distinguish between reasons to fear something and reasons to believe something, and say that reasons to act morally are more like the latter than the former.

At least, that's the idea. If reasons to believe are ultimately not true/false then all of our knowledge seems to be in trouble.