Friday, September 30, 2005


In response to posts at Richard's Blog

Comparing interpersonal utility is not a fatal problem, because I can fairly accurately predict your preferences (e.g. I expect you do not want to be poked in the eye). That means I must have a concept of their relative value, and having that concept implies our scales are fairly similar. Given enough time and information, I could get a roughly 90% correct allocation of utility for all people and all events. (Of course that would probably fall short of the allocation people might achieve if they allocated it to themselves, which might, in turn, fall short of the allocation with perfect information.)
Almost every moral philosophy faces some sort of approximation problem anyway.

So we have a set of ordinal scales that are about as accurate as any other moral philosophy's, and we can reasonably compare them, BUT only when we have a huge set of variables, and only probabilistically. I.e. I can't say you like grapes more than apples unless I can match up our ordinal scales against maybe ten other standard events and find that we place them all in an identical order, with just the grapes out of place (I would also want to be sure we had removed any game-theory aspects!).
But even then this only demonstrates that maybe A wants the grapes "more" than B; it doesn't make it very clear how much more.
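The matching test above can be sketched in a few lines of code. The "standard events" and rankings here are hypothetical illustrations, not real data:

```python
# A minimal sketch of the ordinal-matching test: two people's scales
# are comparable if they rank the standard events identically, with
# only the item in question out of place.

def rankings_match_except(target, ranking_a, ranking_b):
    """True if the two rankings agree on every event except,
    possibly, `target`."""
    a = [e for e in ranking_a if e != target]
    b = [e for e in ranking_b if e != target]
    return a == b

# Ten standard events, ordered least to most preferred (hypothetical).
standard = ["eye poke", "cold shower", "plain bread", "apples",
            "a short walk", "good music", "a good book", "dinner out",
            "a holiday", "winning $100"]

# A ranks grapes above apples; B ranks them below. Everything else
# lines up, so the scales look comparable and we can (probabilistically)
# say A likes grapes more than B does.
a_ranking = standard[:4] + ["grapes"] + standard[4:]
b_ranking = standard[:3] + ["grapes"] + standard[3:]

print(rankings_match_except("grapes", a_ranking, b_ranking))  # True
```

Note this only certifies that the two scales are alignable; as said above, it gives no measure of *how much* more A wants the grapes.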

Economics helps a bit here, in that we can record a large number of "trades", or better still "gambles" (gambles avoid diminishing marginal utility, although they do add risk aversion), where an individual makes bets with money (or similar) to own various things.

Once we have calibrated a person's desires, we could then pick a standard event (let's say having $10 sits in a matching position in both preference lists) and allow people to make bets against that event, giving us a quantitative scale.
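The bet-based calibration could look like the following sketch, which is in the spirit of von Neumann–Morgenstern utility elicitation. The standard event "$10" anchors the scale, and the indifference probabilities are made-up numbers, not real data:

```python
# A hedged sketch of turning gambles into a quantitative scale.
# Anchor two outcomes: "nothing" = 0 and the standard event
# "having $10" = 1. Then for any other outcome, find the
# probability p at which the person is indifferent between the
# outcome for sure and a gamble paying the best anchor with
# probability p (else the worst). That p is the outcome's utility.

def utility_from_gamble(p_indifferent, u_worst=0.0, u_best=1.0):
    """Utility of an outcome, given the elicited indifference
    probability against the anchor gamble."""
    return u_worst + p_indifferent * (u_best - u_worst)

# Hypothetical elicited indifference probabilities for one person:
outcomes = {"an apple": 0.2, "a bunch of grapes": 0.35, "dinner out": 0.9}

scale = {o: utility_from_gamble(p) for o, p in outcomes.items()}
# Now the comparison is quantitative: grapes sit at 0.35 of a
# "ten-dollar unit", so this person values grapes 1.75x an apple.
print(scale)
```

This is what the gamble method buys over bare ordinal scales: the bets produce a number, not just a ranking, though risk aversion still distorts the answer as noted above.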

Of course there is an issue with this "preference utilitarianism", since it implies that people will make the right decisions for themselves (and won't try to second-guess the system).
The system could be adjusted toward more perfect decision making by assessing people's after-the-event analysis of their own decisions and their effective satisfaction, then using that to determine where people are making potentially irrational decisions (such as killing someone in a fit of rage, let's say).

This still leaves me with the most troubling problem: defining the utility of a person who dies or fails to come into existence, and thus whether we want to maximize average or total utility. Both seem to create unpleasant conclusions:
1) Maximizing average utility favours a single super happy man
2) Maximizing total utility favours a billion only marginally happy people
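The tension between the two maximands is easy to see with a toy calculation (all numbers hypothetical):

```python
# Average utility favours the single super-happy man; total utility
# favours the marginal billion. Each world is (population, utility
# per person), assuming everyone in a world is equally happy.

def total(n, u):
    """Total utility of n people each at utility u."""
    return n * u

def average(n, u):
    """Average utility when everyone is equally happy."""
    return u

world1 = (1, 100.0)      # 1) a single super happy man
world2 = (10**9, 0.001)  # 2) a billion only marginally happy people

print("average:", average(*world1), "vs", average(*world2))  # world1 wins
print("total:  ", total(*world1), "vs", total(*world2))      # world2 wins
```

Each rule crowns a different world, which is exactly the pair of unpleasant conclusions listed above.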

