Putting intelligence in
I can think of 3 ways to put intelligence into this (as surely anyone who wants to devalue chimps, or rats for that matter, must want to do).
1) As a multiplier - i.e. Bill matters more than Mark by the ratio by which he is smarter (therefore every unit on his preference list is multiplied by that ratio when comparing them).
Thus every unit of utility is measured as intelligence * approximated utility (from the above method).
This one requires a much-extended upper end of the scale to provide the desired discrimination against lower life forms, BUT it would also produce quite an elitist society (Richard and I might benefit from it, but some of our friends might not).
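The multiplier scheme above can be sketched in a few lines. This is only an illustration under assumed numbers - the intelligence scores and utility units here are hypothetical, not anything from the original argument.

```python
# Hypothetical sketch of option 1: intelligence acts as a multiplier
# on every unit of an agent's approximated utility.

def weighted_utility(intelligence: float, utility_units: list) -> float:
    """Scale each approximated utility unit by the agent's intelligence."""
    return sum(intelligence * u for u in utility_units)

# Assume Bill is rated twice as "smart" as Mark, so each of his
# preference units counts double when the two are compared.
bill = weighted_utility(2.0, [1.0, 1.0, 1.0])  # 6.0
mark = weighted_utility(1.0, [1.0, 1.0, 1.0])  # 3.0
```

Note how the elitism falls directly out of the arithmetic: identical preference lists yield unequal totals as soon as the intelligence ratings differ.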
2) As a matter of having more interests and more abstract interests
This works because there will simply be more opportunities to make a smart person happy.
After calibrating on ordinary things like being pricked with a pin, there will be a large set of higher goals that exist only for the more intelligent individual - therefore the system might automatically favor that individual.
This is related to "domain specific knowledge gathering" sort of things discussed in the humans thread.
However, this probably won't produce the total domination over lower life forms that most would desire. Philosophers might benefit a lot from this.
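Option 2 can be sketched the same way. Here there is no multiplier at all - both agents count a pin-prick identically (the calibration point), but the smarter agent's preference set is simply larger, so the same unweighted sum favors it. The agents and utility values are hypothetical illustrations.

```python
# Hypothetical sketch of option 2: no intelligence weighting, but the
# smarter agent has more (and more abstract) interests, so a plain
# unweighted sum of satisfiable interests naturally favors it.

def total_utility(satisfied_interests: list) -> float:
    """Sum an agent's satisfied interests with no per-agent weighting."""
    return sum(satisfied_interests)

# Both feel avoiding a pin-prick identically (value 1.0), but the
# philosopher also has higher goals the rat's preference set lacks.
rat = total_utility([1.0])              # pin-prick avoidance only
philosopher = total_utility([1.0, 2.0, 3.0])  # plus abstract interests
```

The key contrast with option 1: equal interests still count equally, so the favoritism is capped by how many extra interests intelligence actually generates - which is why this route never yields the "total domination" of option 1.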
3) Writing off certain things as invalid for comparison - for example, saying a shark can't feel pain, even if it fears pain more than a set of other things on its preference scale that also exist on our preference scale in almost identical form.
Maybe a better example is saying a chimp can’t feel love or something along those lines.
This seems closer to what people actually do, but it also seems morally dubious. I suppose one could argue that each step is an evolutionary advancement, and that each advancement carries with it some sort of potential to have rights, or intrinsic rights.