I've got a problem with the "paradox" assumptions. The claim is that
pushing the one person in front of the train will stop it and prevent the
five from being injured. It seems to me that any ordinary evaluation of such
a situation makes it highly questionable that the train would actually be
stopped. The difference between a stationary 70 kg person and a speeding
train of many megagrams (gigagrams?) makes stopping doubtful. In contrast,
switching to a different track is effective--the train will either miss the
five by being on the other track, or miss all six if it jumps the track. Is
the difference in evaluation moral, or is it that normal people don't fully
believe the story?
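For what it's worth, here is a rough back-of-the-envelope check (the train
mass and speed below are my own assumed figures, not part of the puzzle),
just comparing momenta in a perfectly inelastic collision:

# Rough sanity check with assumed figures (not from the puzzle):
# what happens to a ~200 tonne train at ~72 km/h when it absorbs a
# stationary 70 kg person?
person_mass_kg = 70.0
train_mass_kg = 200_000.0      # assumed ~0.2 gigagram commuter train
train_speed_m_s = 20.0         # assumed ~72 km/h

# Momentum is conserved; after a perfectly inelastic collision the
# combined speed is m_train * v / (m_train + m_person).
new_speed = (train_mass_kg * train_speed_m_s) / (train_mass_kg + person_mass_kg)
print(f"train slows from {train_speed_m_s:.3f} to {new_speed:.3f} m/s")
# -> about 20.000 to 19.993 m/s; for practical purposes the train is
#    not stopped at all.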
Dave
On Mon, 26 Mar 2007 14:18:13 -0400 "David Opderbeck"
<dopderbeck@gmail.com> writes:
There is an interesting article in this month's Economist that
illustrates, I think, some of the problems with social Darwinism,
particularly when it is linked to a particular political outlook, as it
seemingly inevitably is (article here:
http://www.economist.com/science/displaystory.cfm?story_id=E1_RRRTQSD).
The article reports on a study of six people who have suffered damage to
a part of the brain (the ventromedial prefrontal cortex (VMPC)) that is
involved with social emotion. The study showed that these people were
more likely than a control group to provide a "utilitarian" answer to the
"runaway train paradox."
The "runaway train paradox" involves two dilemmas -- in one, you must
decide whether to push a person in front of an oncoming train in order to
slow the train before it hits five other people further down the line; in
the other, you must decide whether to switch the track so that the train
will hit only one person further down the line rather than hitting five
people. Most people will hesitate to push a person in front of the
train to save five lives, but will not hesitate to switch the track so
that the train hits one person further down the line instead of five.
The six subjects with damaged VMPCs felt the same about both
possibilities -- they would not hesitate in either case to sacrifice one
person in order to save five.
The article explains that "In these cases it seems that the decision on
how to act is not a single, rational calculation of the sort that moral
philosophers have generally assumed is going on, but a conflict between
two processes, with one (the emotional) sometimes able to override the
other (the utilitarian, the location of which this study does not
address)." This yin-and-yang of emotional and rational responses, the
article says, "fits with one of the tenets of evolutionary psychology....
This is that minds are composed of modules evolved for given purposes....
The VMPC may be the site of a 'moral-decision' module, linked to the
social emotions, that either regulates or is regulated by an
as-yet-unlocated utilitarian module."
So far, perhaps, so good. All of this seems very speculative, and a
sample size of six people with brain damage hardly seems adequate, but
nevertheless, it wouldn't be surprising if the emotional and rational
aspects of moral reasoning related to different parts of the brain, and it
isn't problematic per se if those parts of the brain developed over
time through evolutionary processes. The kicker is in the article's
concluding paragraph:
This does not answer the question of what this module (what philosophers
would call 'moral sense') is actually for. But it does suggest the
question should be addressed functionally, rather than in the abstract.
Time, perhaps, for philosophers to put away their copies of Kant and pull
a dusty tome of Darwin off the bookshelf.
It seems to me that in this paragraph the article crosses from
descriptive to prescriptive; from science to metaphysics. This is
particularly so in that, as a devoted reader of the Economist, I'm well
aware of that magazine's pragmatist / libertarian political philosophy
and its slant towards materialist metaphysics. In a very subtle way,
this is an example of the materialist / pragmatist saying: "See there
... all that 'moral sense' and whatnot is in your head. We shall move
beyond this and learn to develop our utilitarian modules."