Re: Moral Relativism? (Was: "Divine Command vs. Divine Nature theories of moral truth")

Greg Billock (billgr@cco.caltech.edu)
Thu, 5 Mar 1998 19:09:55 -0800 (PST)

Wes,

> >
> >But is the derivation a logical thing, or a psychological thing?
>
> I take it to be a "logical thing."
>
> > If I
> >hold that a certain moral truth is fundamental, and you think it is
> >derived, how are we to decide between the two?
>
> I don't have any general, one-size-fits-all answer to that question. It is
> perfectly possible for moral disagreement to be so deep that there is no way
> for the parties to reach agreement; but it doesn't follow that there is no
> objective moral truth or that one of the parties to the dispute is not
> mistaken.

I know what you mean, but what I'm presenting is this:
take moral beliefs A, B, C, ...., Z

system 1:
A => B => C => D .... => Z

system 2:
Z => Y => X => .... => A

There is total and complete agreement as to the content of the ethical
system, but a different (inverse) derivation rule is used to get from
one belief to the next, so what one system holds as a 'leaf' moral value
the other holds as a 'root'.

> The fundamental / derived distinction that I was assuming can be relativized
> to a person's - or a community's - practice of making moral judgments.
> Within any such practice, some moral principles will be fundamental and
> others will be derived. The question you are really raising is whether one
> such practice can be objectively superior to another.

What I'm saying is that in judging some system superior to another, we
assume a moral standard. It seems clear, then, that in judging the
UMS superior to all other systems, we are assuming a moral standard, in
which case what we are using to perform such a discrimination is (in my
view) very likely to be basically equal to the UMS already.

> >Unless you want to argue that deduction is the morally right way to
> >derive moral arguments,
>
> Often, it's a matter of applying general moral principles to this set of
> moral facts and seeing what follows... For example, we might argue: "Acts of
> type X are cruel. Cruelty is wrong. So acts of type X are wrong." If we
> did argue in that way, we would be implicitly acknowledging that the
> wrongness of cruelty is more fundamental in our moral practice than is the
> wrongness of acts of type X. There does not seem to me to be anything
> especially mysterious about this.

Sure there is. Why do you think cruelty is wrong? If you are like me,
it is hard to give an account of this, other than to say "well, in my
experience, acts of cruelty are wrong, therefore cruelty is wrong." I
would consider it most mysterious if you (or anyone) had some preprogrammed
axiomatic ethical system. In fact, the evidence is quite pronounced that
ethics are learned--the wide variety of ethics in different cultures is
exhibit A.

> This is a special case where the conditions under which you are warranted in
> *believing* that something is so are identical to the conditions under which
> it is *true*. But notice that the conditions under which *I* am warranted
> in believing that *you* are in pain are *not* identical to the conditions
> under which it is *true* that you are in pain.

That is so, but whether you have a warrant or not, you are incorrect to
believe that I am unless I really am. This is true for both of us.

> >So, is
> >moral truth in that category or not?
>
> No. Whatever makes the proposition "cruelty is wrong" true, it is not
> identical to the fact that - given the moral practice that I take for
> granted - I am warranted in believing that cruelty is wrong. Unlike the
> first person pain case, it is perfectly possible for moral judgments to be
> mistaken.

Of course, just like it is possible to mistake hot for cold. I'm not
arguing for the immutability of moral judgments--the fact that I think
they are learned should clear me of that charge! I'm arguing that while
it is sometimes useful to speak of objective morality, it is much less
useful to reify it.

> Kicking dogs is (ordinarily) wrong, because it is wrong to cause unnecessary
> pain. And it would be wrong even if you or I were unable to "see" this.
> Whatever it is that makes kicking dogs wrong, it is not that you or I have
> gone through some "internal process of determining that it is wrong."
>
> Would you say that slavery, for example, was not wrong until someone
> "decided" that it was?

That's absolutely right. Does that weaken our conviction that the
decision that it was wrong was a good and right one? Not in the slightest.
It *does* serve to explain the significant amount of trauma that went
into the process, though.

[...]

> >I think there are only truths about what subjects determine to be moral
> >or immoral.
>
> So... If A approves of slavery (after all, doesn't the Bible support it?)
> and B disapproves of slavery (because all persons should be treated as Ends,
> and never merely as means), there is no fact of the matter about whether A
> or B is closer to the moral truth about slavery.

There is the fact of the matter that I think B is right. It seems to
me, though, that in examining how we might teach A the error of his or
her ways, we can discover where the fundamentals are.

Is the problem that, in order to act to correct someone else's moral
decrepitude, you feel that we need a kind of moral mandate external to
ourselves, whereas I would argue that our own moral convictions serve
as all the mandate we need? But in light of the above, it would seem
to me that being able to establish the UMS would be a precursor to
feeling like you had a mandate--without such an identification, the best
you can do is *hope* you have the UMS (or a good approximation) and
consider that a mandate. What difference is there, then, between us?
Perhaps this: in correcting the faults of our neighbors, I'm more open
to the possibility that my own views may need altering, whereas,
in claiming to act for the UMS, you must needs be more determined
that yours don't. (I'm using us as examples here, not making any kind
of personality judgment. :-))

> >The idea of 'objective morality' seems to me a good way to
> >externalize an ethical system as a prelude to trying to get others to
> >agree with it. There's nothing wrong with that, but I think we need to
> >realize that acceptance of such a system is (or should be) dependent upon
> >moral judgments made by the other person.
>
> Are you afraid that if we allow ourselves to say that *anything* is
> objectively wrong, then we will become intolerant of people who do not share
> our opinions about various difficult and disputed questions? If so, then I
> want to know where you stand on the value of tolerance itself? Is that
> completely subjective as well?

I have learned to value some other moral systems as 'peers' or something
like that--that is, while I don't agree, I can see the merit in the
complete system. Take headhunting: for a hundred thousand years or so,
people on Irian Jaya practiced low-level border skirmishes and headhunting.
In the past one or two decades, this practice has been increasingly
frowned upon by people who have the power to stop it. But what ecologists
are observing is that headhunting and low-level aggression like this are
intricately woven into mechanisms for keeping the population in balance,
establishing territory rights, and so forth. In Europe, low-level
aggression hasn't been the norm for thousands of years. Instead, we
have organized ourselves into much larger population masses, stripped
the local habitat, driven to extinction every non-domesticated animal
more obnoxious than a deer in those areas, and launched major wars. So is
it wise to eliminate low-level aggression and head-hunting? I'm not sure,
but I'm not comfortable sitting in judgment over that system. I am
certainly glad I'm not a part of it, but I can see where counter-criticisms
of non-head-hunting society can be keen as well. Or polygamy, modesty,
all the other Western moral values that have been transferred to other
populations. I'm happy to live within those values, typically, as I
share them, but I'm not sure they make a good replacement system.

So I guess the upshot is that I think moral values are situated, and
the role tolerance plays in a system can be traded off against other
factors and still achieve a balance that I may not be in moral agreement
with, but that I am happy not to interfere with.

> >In this function, it doesn't much matter how the system is presented, and
> >which moral truths are painted in at the base and which at the apex--the
> >structure is a vehicle for transmitting an ethical viewpoint.
>
> Why "for transmitting?" What you are describing *is* an ethical viewpoint.
> Our disagreement seems to be about whether one such viewpoint can be any
> closer to the truth than another. A consistent moral relativist has to say
> NO, on the grounds that there is no objective moral truth.

The "why" is there in this sense: while there may not be objective morality,
that doesn't mean it is unwise or undesirable to get others to agree about
morality. I wish, for instance, that I could get everyone to agree that
corporate colonialism is evil. Whether I do that in some more deductive
fashion, by trying to point out to someone how their basement moral
beliefs tend to imply that, or in pointing out the various injustices that
result and trying to get someone to inductively draw the conclusion, seems
to me more a vehicle than the content.

> From this point of view, there will naturally be no interesting questions
> about the relation between God or God's Nature or God's Commands and Moral
> Truth. At least not of the sort I meant to be raising...

If this isn't the discussion you're most interested in, I won't feel bad
if you don't want to prolong it. Perhaps we can move to Gary's thread,
as I think he is more 'on-topic' and saying stuff I find myself really
agreeing with.

-Greg