Utilitarianism
Consequentialism: there are two types.
Under Act Utilitarianism, the rightness of actions is evaluated on a case-by-case basis, and things such as rules and laws matter only insofar as they have practical usefulness. Under Rule Utilitarianism, the utility of rules rather than actions is evaluated, and all actions should conform to the rules with the highest utility.
The decision between these two seems to be a pragmatic one - fundamentally affected by assumptions regarding how reliable the process is. (So in a sense we are talking about setting the world up temporarily in an ideal state and seeing if it remains so)
Questions for the rule utilitarians
What are your rules?
Why are they so good?
What if breaking the rule results in an unambiguous improvement to the world?
Questions for the act utilitarian
How fallible is the decision making process?
Who is the decision maker?
And what information do they use to make that decision?
Poor usage of rule utilitarianism results in people defending rules simply because they are the status quo, not daring to recalculate the benefits.
Poor usage of act utilitarianism results in people ignoring certain kinds of information, such as indications that they don't know all the facts or aren't the best people to make the decision, along with the associated costs of being wrong.
Welfare:
There are two main types: preference utilitarianism and hedonistic utilitarianism.
These are affected by when one sums the utility: at the end of a life (a life worth living) or across the whole of it (classic hedonism). They also differ over whether one includes higher-level goals in themselves, for example "humanity's" desire to go into space.
Aggregation:
Do you maximize total utility (which creates the ballooning-population and diminishing average resource allocation problem) or maximize average utility (which results in trying to minimize population, presumably producing a small set of extremely happy people)?
Neither of which is quite as scary as it sounds. The first option reduces individual utility to a minimal level (relative to resources), but that level is not nearly as low as you might think: after all, even the least fortunate of these individuals must still have lives worth living.
In the average-utility case, many thousands of years of people HAVE already lived at historical utility levels, and they are part of the average too. Raising that average therefore requires quite a few happy people, so there is no need to fear that we will wipe out humanity.
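The difference between the two aggregation rules can be made concrete with a toy calculation. This is only an illustrative sketch: the utility numbers and population sizes are invented, and real welfare is of course not so easily quantified.

```python
# Toy comparison of total vs. average utility aggregation.
# Each population is a list of per-person utility levels (invented numbers).

def total_utility(population):
    """Sum of everyone's utility (the 'total' view)."""
    return sum(population)

def average_utility(population):
    """Mean utility per person (the 'average' view)."""
    return sum(population) / len(population)

# A large population of lives barely worth living...
large = [1] * 1000
# ...versus a small population of very happy people.
small = [50] * 10

# The total view prefers the large population (1000 vs. 500),
# while the average view prefers the small one (1 vs. 50).
print(total_utility(large), total_utility(small))      # 1000 500
print(average_utility(large), average_utility(small))  # 1.0 50.0
```

The two rules rank the same pair of worlds in opposite orders, which is exactly the disagreement the text describes.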
Maximization:
Direct utilitarians believe people should try to maximize utility.
Indirect utilitarians believe people should follow laws and rules of thumb which are effective in practice.*
This seems to involve two concepts. The first is the timing of the decision making.
Indirect utilitarianism is associated with making choices long before the event, and direct utilitarianism with making decisions in the heat of the moment. Obviously the best answer is to leave your brain switched on at both times, evaluating whether you have additional information or are too caught up to make an optimal decision. Both sides presumably claim this middle ground, justifying it slightly differently.
The other concept is the nature of complex relationships. Extreme indirect utilitarianism assumes relationships tend to be so complex, and current rules of thumb so efficient, that trying to achieve something will generally prevent you from achieving it.
Direct utilitarianism tends to assume that relationships are simple enough that you can understand them, that trying to achieve something generally helps you achieve it, and that if it doesn't, you will learn from the attempt.
* Actually this is a false dichotomy. The problem is that maximizing utility DOES involve following rules of thumb; in fact it is inconceivable that a human would operate without them. Meanwhile, an indirect utilitarian will engage in calculation and use it to determine some actions (although maybe not always according to utilitarianism) if he is anything more than a very simple robot.
Before you rush out and read utilitarian books, keep in mind that the theory we have developed so far is fundamentally "do what is right" (in a utilitarian sense). You can't write much of a book on that, so writers tend to make a few additional assumptions about what needs to be done to achieve it. A philosopher will take a set of things to be beyond utilitarian analysis and, taking these for granted, build a theory on top. For example, someone like Hare might assume "universalizability" (i.e. that a moral statement should apply to any combination of agents).
Under pure utilitarianism this is just an assertion, "universalizability results in greater overall utility," which you could dispute if you wanted. Similarly for all the other main utilitarians.