Re: Morality (was Gene duplication and design)

From: Tedd Hadley (hadley@reliant.yxi.com)
Date: Wed Apr 26 2000 - 17:25:54 EDT

    "Richard Wein" writes
      in message <007401bfaf18$921ccc60$792e70c3@richard.wein.virgin.net>:
    > From: Tedd Hadley <hadley@reliant.yxi.com>
    >
    > > A conflict can not occur unless one side believes the other's
    > > beliefs are rationally flawed. That's why we ask questions
    > > and probe other people's belief systems -- because at first
    > > glance they appear irrational.
    >
    > Surely conflicts need not be over whether a belief is rationally flawed.
    > They can result from different value judgements, e.g. different codes of
    > morality.

       Well, I do tend to think we regard different values and different
       codes of morality, deep down, as irrational. (However, among those
       I've investigated, I consistently find that they're not so much
       irrational as based on unsound, unexamined, or untested tenets.)

    > > If you disagree that morality in society has improved, I'd like
    > > to hear your argument. Otherwise, we must agree that advanced
    > > intelligence combined with knowledge appears to lead to a greater
    > > concern for eliminating suffering.
    >
    > I agree that morality has improved (at least by most people's standards),
    > but I don't think you can draw conclusions about the morality of the
    > putative intelligent designer, about whom we're told nothing, based on human
    > experience. Why should we think that the intelligent designer has anything
    > in common with humans (other than the basic fact of having intelligence)?

       Briefly, my reasoning is this. It seems more likely that an ID
       would be a member of a race rather than the only one of its
       kind, simply because suggesting another form of evolution as
       the explanation for its origin is superior to assuming that the
       ID always existed (a god) or popped into existence from nowhere
       (divine creation of an ID?). (It seems that ID theory reduces
       to either abiogenesis/evolution elsewhere in the universe, or
       something very much like a god, which is probably why most
       ID'ers are so reluctant to speculate about the ID.)

       If the ID is a member of a race, it must be a social organism
       requiring interaction and conflict resolution with others of
       its own kind at some point in its evolution (I see conflict
       resolution as practically a law of the universe because,
       inevitably, organisms will multiply to compete for the same
       resources).

       If a race of organisms has no innate value for others of its
       own kind, it will surely become extinct sooner or later, a
       victim of endless internal battles over resources rather than
       a beneficiary of compromise and cooperation. Without innate
       value for others, any intelligent organism could correctly
       conclude that killing all others of its own kind would maximize
       its resources. Innate value must come from inside -- it must
       be hardwired. In humans, our hardwired innate-value-for-others
       is empathy: the ability to feel another person's pain or
       pleasure as our own. Without that, the human race would
       probably never have formed -- heck, social organisms would
       never have occurred.

       If a race has intelligence and a form of empathy, I think this
       leads naturally to morality and a desire to rationally maximize
       the pleasure and minimize the pain that empathy makes us share.
       This is what we observe in the human race (granting your caveat
       above that not everyone may agree "morality" has improved, but
       everyone can agree that people seem more concerned about
       reducing human suffering at this point in human evolution).

       If we agree that an ID with intelligence must have some kind of
       empathy, then we can conclude that the most logical basis for
       applying empathy -- that is, the attribute of any given organism
       that makes it worthy of empathic feelings -- should be
       self-awareness. In humans, empathy without knowledge might
       allow us to feel that humans of our own race are the only ones
       worth empathizing with. However, empathy with knowledge tells
       us that all humans -- or even all organisms capable of feeling
       pain and pleasure the way we do -- are worth that. Likewise,
       I would expect that an ID would place its focus of intelligent
       empathy on self-awareness rather than the attribute of simply
       being of its own particular race. Empathy that fixates only
       on the color of one's skin or the shape of one's face, etc.,
       seems far too fragile to allow a race to survive for the
       length of time needed to produce advanced intelligence.

       However, if the ID does value self-awareness and does wish to
       minimize pain, it would surely "create" self-aware organisms by
       some means other than evolution, a process that entails enormous
       suffering along the way (and by "self-aware", I don't mean to
       limit that to humans or primates or even mammals). Thus, I find
       it far less likely that an ID, if it did exist, would be using
       a process that looks to us like evolution.

    > > If it helps to reduce confusion, we could just talk about
    > > suffering. Does advanced intelligence combined with
    > > knowledge lead to a desire to reduce suffering? Unless
    > > there are other important factors not considered, clearly it
    > > does.
    >
    > It certainly isn't clear to me that the goal of a moral code must be to
    > reduce suffering (unless you simply define it that way), nor that an
    > intelligent being must have any moral code at all.

       I've never known a moral code which didn't have -- at its base --
       the goals of maximizing pleasure and minimizing pain. Think about
       any moral rule whatsoever. Ultimately, it reduces to just that.

    > My present belief is that moral codes are subjective, though partly informed
    > by objective factors such as genetic inheritance.
       
       I find it helps to think of morality as the result of purely
       selfish impulses conflicting with empathic concerns for others
       in a framework moderated variously by ignorance, superstition,
       and, in more modern times, reason. It is my opinion that
       without ignorance or superstition, morality converges to
       an objective ideal (I don't know what that is yet, though).

    > However, I've just finished reading a very thought-provoking
    > book ("The Fabric of Reality" by David Deutsch), which is causing
    > me to reconsider some of my beliefs. It argues that "if ethics
    > and aesthetics are at all compatible with the world-view advocated
    > in this book, beauty and rightness must be as objective as
    > scientific or mathematical truth. And they must be created in
    > analogous ways, through conjecture and rational criticism." I
    > don't fully understand the book, and I'm sure I still won't even
    > after I've read it again. But it does at least attempt to offer
    > a rational argument for the objectivity of morality and a way
    > to discover it, as opposed to just "because the Bible says so"!
    >
    > Has anyone else here read this book? If so, I'd very much like
    > to discuss it. It's at least partly on topic, as it deals with
    > evolution among other things.


