Parsimony [was Re: macroevolution or macromutations? (was ID)]

From: Brian D Harper (bharper@postbox.acs.ohio-state.edu)
Date: Tue Jun 13 2000 - 15:12:49 EDT


    At 03:09 PM 6/13/00 +0100, Richard wrote:

    >From: Cliff Lundberg <cliff@cab.com>
    >
    >
    > >Parsimony seems to be something you look at when you're really
    > >in the dark and have nothing to go on but general principles. There is
    > >such a thing as correct explanation and it doesn't necessarily have any
    > >connection to parsimony.
    >
    >So how do you determine which is the "correct" explanation, without
    >considering parsimony? You can't. For any set of observations, there are an
    >infinite number of theories that could explain them. For example, consider
    >fitting a curve to a set of data points. There are an infinite number of
    >different polynomials that will fit, no matter how many data points you
    >have. But, we would tend to reject higher order polynomials on the grounds
    >of parsimony. In fact, knowing that there are likely to be random errors in
    >the data, we would probably accept a simple curve which gives an imperfect
    >fit, in preference to a high order polynomial that gives a perfect fit,
    >because the latter seems ad hoc.
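
    As a concrete illustration of Richard's polynomial point above, here is a
    rough Python sketch using made-up noisy data; the "true" line, the noise
    level, and the degree-7 fit are all hypothetical, chosen only to show the
    effect.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 8)                   # eight data points
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, 8)    # roughly linear, plus noise

    simple = np.polyfit(x, y, 1)   # straight line: imperfect but parsimonious
    fancy = np.polyfit(x, y, 7)    # degree 7: passes through every point

    # Check behaviour between the data points, where overfitting shows up.
    x_new = np.linspace(0.0, 1.0, 200)
    underlying = 2.0 * x_new + 1.0
    print("simple fit, max error vs underlying line:",
          np.max(np.abs(np.polyval(simple, x_new) - underlying)))
    print("fancy fit,  max error vs underlying line:",
          np.max(np.abs(np.polyval(fancy, x_new) - underlying)))

    # Between the data points the interpolating polynomial is free to
    # oscillate, so its error against the underlying line is usually far
    # larger than the straight line's, even though it scores perfectly on
    # the eight points it was tuned to.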

    Let me ask a question that I recently encountered in a book by Rene Thom.
    The question is related to the above, except I want to first divorce it from
    mere curve fitting. IMHO, curve fitting tells you very little about the
    "correctness" of a model. I recall one of my professors saying that all data
    becomes linear if you take enough logarithms :). So, let's suppose we have
    two different models. Preliminary experiments have been used to determine
    all free parameters. Now we need some "model verification" experiments:
    experiments of a fundamentally different nature from those used to
    characterize the model. Predicting these experiments with no additional
    "tweaking" of parameters gives strong support for the model.
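
    A rough sketch of what I mean by calibrate-then-verify, with two invented
    one-parameter models (model_a, model_b, and all the data below are made up
    for illustration; nothing here corresponds to a real model or experiment):

    import numpy as np
    from scipy.optimize import curve_fit

    def model_a(t, k):                 # hypothetical model A
        return 1.0 - np.exp(-k * t)

    def model_b(t, k):                 # hypothetical model B
        return k * t / (1.0 + k * t)

    # Step 1: preliminary (calibration) experiments fix the free parameter k.
    t_cal = np.linspace(0.1, 2.0, 10)
    y_cal = 1.0 - np.exp(-1.3 * t_cal) \
            + np.random.default_rng(1).normal(0.0, 0.02, 10)
    k_a, _ = curve_fit(model_a, t_cal, y_cal, p0=[1.0])
    k_b, _ = curve_fit(model_b, t_cal, y_cal, p0=[1.0])

    # Step 2: a verification experiment of a different character, here much
    # longer times than the calibration range. The parameters are frozen;
    # no further tweaking is allowed.
    t_ver = np.linspace(5.0, 20.0, 6)
    print("model A predictions:", model_a(t_ver, *k_a))
    print("model B predictions:", model_b(t_ver, *k_b))

    # Whichever model anticipates the verification data without having been
    # adjusted to it earns the stronger support.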

    OK, so we compare the predictions of the two models for the verification
    experiments. One model does a tremendous job "fitting" the experiment by
    any quantitative measure, such as the mean squared error, but does a really
    lousy job matching the overall "shape" of the data. The other model predicts
    the shape very well but is way off quantitatively.
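
    One way to make the contrast concrete, again with invented numbers: score
    two hypothetical predictions against the same verification data, once by
    the mean squared error and once by a crude "shape" measure, here the
    correlation coefficient, which ignores overall scale and offset.

    import numpy as np

    t = np.linspace(0.0, 2.0 * np.pi, 50)
    data = np.sin(t)                       # invented verification data

    pred_1 = 0.2 * np.cos(3.0 * t)         # small errors on average, wrong shape
    pred_2 = 5.0 * np.sin(t)               # right shape, grossly wrong magnitude

    def mse(p, d):
        return np.mean((p - d) ** 2)

    def shape(p, d):
        return np.corrcoef(p, d)[0, 1]     # 1.0 means the shapes agree exactly

    print("pred_1: mse=%.3f  shape=%.3f" % (mse(pred_1, data), shape(pred_1, data)))
    print("pred_2: mse=%.3f  shape=%.3f" % (mse(pred_2, data), shape(pred_2, data)))

    # pred_1 wins on mean squared error, pred_2 wins on shape; parsimony alone
    # does not say which notion of 'closeness' is the right one to reward.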

    What would parsimony have to say in this situation?

    Brian Harper
    Associate Professor
    Mechanical Engineering
    The Ohio State University
    "One never knows, do one?"
    -- Fats Waller


