Re: Morality (was Gene duplication and design)

From: Richard Wein (tich@primex.co.uk)
Date: Thu Apr 27 2000 - 06:19:02 EDT


    From: Tedd Hadley <hadley@reliant.yxi.com>

    > Well, I do tend to think we regard different values and different
    > codes of morality, deep down, as irrational. (However, of those
    > I've investigated, I find consistently that they're not irrational
    > so much as based on unsound, unexamined, or untested tenets.)

    I can only say that that's not true for me. Of course, I would consider a
    moral code to be irrational if it isn't internally consistent. But I
    strongly feel that the *basis* for moral codes is subjective.

    If I should be persuaded by David Deutsch's argument that there is a
    rational basis for morality, it will be *in spite* of my strong intuition to
    the contrary.

    > My reasoning is this, briefly. It seems more likely that an ID
    > would be a member of a race rather than the only one of its kind
    > simply because suggesting another form of evolution as the
    > explanation for its origin is superior to assuming that the ID
    > always existed (a god) or popped into existence from nowhere (divine
    > creation of an ID?).

    This is a reasonable assumption, but not a necessary one. Perhaps the ID
    which created life on Earth was itself a one-off creation of a species that
    evolved! Anyway, it ignores the possibility (at least from a theist's point
    of view) of an eternally existing God outside the universe. Since almost all
    IDers are theists, they're hardly likely to be persuaded by such an
    argument.

    > (It seems to be the case that ID theory
    > reduces to either abiogenesis/evolution elsewhere in the
    > universe, or something very much like a god, which is probably
    > why most ID'ers are so reluctant to speculate about the ID.)
    >
    > If the ID is a member of a race, it must be a social organism
    > requiring interaction and conflict resolution with others of
    > its own kind at some point in its evolution (I see conflict
    > resolution as practically a law of the universe because,
    > inevitably, organisms will multiply to compete for the same
    > resources).
    >
    > If a race of organisms has no innate value for others of its
    > own kind, it will surely become extinct sooner or later, a victim
    > of endless internal battles over resources, rather than compromises
    > and cooperation. Without innate value for others, any intelligent
    > organism could correctly conclude that killing all others of
    > its own kind would maximize its resources. Innate value must
    > come from inside -- it must be hardwired. In humans, our
    > hardwired innate-value-for-others is empathy: the ability to
    > feel another person's pain or pleasure as our own. Without
    > that, the human race would probably never have formed--heck,
    > social organisms would never have occurred.

    Many animals have social structures without (I think we can safely assume)
    any sense of empathy. Take ants as a particularly good example. Their social
    behaviour is entirely instinctive.

    I think your argument must be that, once a species evolves free will, its
    instinct for social behaviour must evolve into empathy; otherwise it would
    cease to act socially and become entirely selfish. I can imagine a species
    where an individual's interaction with its peers is entirely on a selfish
    basis, but I'm not sure that such a species could become highly advanced
    (since co-operation would be relatively limited), and anyway some sort of
    unselfish behaviour towards one's offspring seems inevitable, as it would
    almost certainly confer a selective advantage. (Actually, this wouldn't
    apply to a species which abandons its offspring at birth, such as turtles.
    But it's hard to imagine this behaviour remaining unchanged as a species
    grows more intelligent.)

    So I accept this part of your argument. But note that this assumes the
    designer evolved by natural selection. You're a priori ruling out any other
    sort of naturalistic evolution. (Which seems reasonable to me, but might not
    to everyone.)

    > If a race has intelligence and a form of empathy, I think this
    > leads naturally to morality and a desire to rationally maximize
    > the pleasure and minimize the pain of empathy. This is what
    > we observe in the human race (agreeing with you above that many
    > might not believe "morality" has improved, but everyone can
    > agree that people seem to be more concerned about reducing human
    > suffering at this point in human evolution).
    >
    > If we agree that an ID with intelligence must have some kind of
    > empathy, then we can conclude that the most logical basis for
    > applying empathy --that is, finding the attribute of any given
    > organism to deem worthy of empathic feelings-- should be
    > self-awareness. In humans, empathy without knowledge might
    > allow us to feel that humans of our own race are the only ones
    > worth empathizing with. However, empathy with knowledge tells
    > us that all humans -- or even all organisms capable of feeling
    > pain and pleasure the way we do -- are worth that.

    Empathy with knowledge may *tell* us that other organisms are worth
    empathizing with. That doesn't necessarily mean that we *will* empathize
    with them. There's a big difference between knowing something and feeling
    it. That's why we often do things that we *know* to be wrong and don't
    necessarily feel bad about it.

    > Likewise,
    > I would expect that an ID would place its focus of intelligent
    > empathy on self-awareness rather than the attribute of simply
    > being of its own particular race. Empathy that only fixates
    > on the color of one's skin or shape of face, etc., seems
    > far too fragile to allow for the survival of a race for any
    > length of time needed to produce advanced intelligence.

    The question is: how well does empathy extend from our close circle of
    friends and relatives to other members of our species, and, more
    significantly, to other species? Judging by mankind's history of violent
    conflicts and inhumanity towards other species, I would have to say not very
    well. Our attitudes towards other races and species do seem to be improving
    with better education, but the old ingrained prejudices keep resurfacing.

    Empathy can easily disappear when it conflicts with other interests. For
    example, we may well empathize with a lamb playing in a field. But how many
    of us feel bad about eating meat? Then again, vegetarianism is on the
    increase (at least in the West, but maybe not in the developing world,
    where traditional diets are being replaced by western fast foods!).

    Perhaps the designer gains more in pleasure from watching the antics of the
    human race than he loses in empathic suffering!

    > However, if the ID does value self-awareness and does wish to
    > minimize pain, it would surely use a different means to "create"
    > self-aware organisms (and by "self-aware", I don't mean to
    > limit such to humans or primates or even mammals). Thus, I find
    > it far less likely that an ID, if it did exist, would be using
    > a process that looks to us like evolution.
    >
    > > > If it helps to reduce confusion, we could just talk about
    > > > suffering. Does advanced intelligence combined with
    > > > knowledge lead to a desire to reduce suffering? Unless
    > > > there are other important factors not considered, clearly it
    > > > does.
    > >
    > > It certainly isn't clear to me that the goal of a moral code must be to
    > > reduce suffering (unless you simply define it that way), nor that an
    > > intelligent being must have any moral code at all.
    >
    > I've never known a moral code which didn't have -- at its base --
    > the goals of maximizing pleasure and minimizing pain. Think about
    > any moral rule whatsoever. Ultimately, it reduces to just that.

    That might be the original impulse for morality, but, like our other natural
    impulses, it has been modified under the influence of our intelligent mind.
    As a result, we have such moral imperatives as duty to country, upholding of
    religious and political dogma, work ethic, etc. Who can say that the
    intelligent designer is not working under some other moral imperative?
    Perhaps he has developed the power of his rational mind (or his technology)
    to the point where he can switch off his feelings of empathy. Or perhaps
    he's a psychopath!

    All in all, while your argument is reasonable, it's far from conclusive. And
    I think we have much better arguments against ID. (At least against ID as a
    scientific theory.)

    Richard Wein (Tich)
    See my web pages for various games at http://homepages.primex.co.uk/~tich/



    This archive was generated by hypermail 2b29 : Thu Apr 27 2000 - 06:25:06 EDT