Astronomical Waste: The Opportunity Cost of Delayed Technological Development

III. THE CHIEF GOAL FOR UTILITARIANS SHOULD BE TO REDUCE EXISTENTIAL RISK

In light of the above discussion, it may seem as if a utilitarian ought to focus her efforts on accelerating technological development. The payoff from even a very slight success in this endeavor is so enormous that it dwarfs that of almost any other activity. We appear to have a utilitarian argument for the greatest possible urgency of technological development.

However, the true lesson is a different one. If what we are concerned with is (something like) maximizing the expected number of worthwhile lives that we will create, then in addition to the opportunity cost of delayed colonization, we have to take into account the risk of failure to colonize at all. We might fall victim to an existential risk, one where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential. Because the lifespan of galaxies is measured in billions of years, whereas the time-scale of any delays that we could realistically affect would rather be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction of existential risk would be worth (from a utilitarian expected utility point of view) a delay of over 10 million years.
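The comparison can be sketched numerically. The figures below (a usable resource window of roughly ten billion years, value accruing linearly over that window) are illustrative assumptions for the sketch, not values asserted in the text:

```python
# Illustrative expected-utility comparison of risk reduction vs. delay.
# Assumption (hypothetical): the value of colonization is forfeited in
# proportion to the fraction of the usable window lost to delay.

WINDOW_YEARS = 1e10   # assumed usable lifespan of accessible resources
U = 1.0               # total utility of successful colonization (normalized)

def value_of_risk_reduction(delta_p):
    """Expected-utility gain from cutting extinction probability by delta_p."""
    return delta_p * U

def cost_of_delay(delay_years):
    """Approximate expected-utility loss from delaying colonization."""
    return (delay_years / WINDOW_YEARS) * U

gain = value_of_risk_reduction(0.01)   # one percentage point of risk
loss = cost_of_delay(10e6)             # a 10-million-year delay
assert gain > loss                     # 0.01 > 0.001
```

Under these assumptions the risk reduction is worth ten times the delay; with a shorter window of around a billion years the two would merely break even, which is why the text says "over 10 million years".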

Therefore, if our actions have even the slightest effect on the probability of eventual colonization, this will outweigh their effect on when colonization takes place. For standard utilitarians, priority number one, two, three and four should consequently be to reduce existential risk. The utilitarian imperative "Maximize expected aggregate utility!" can be simplified to the maxim "Minimize existential risk!".

IV. IMPLICATIONS FOR AGGREGATIVE PERSON-AFFECTING VIEWS

The argument above presupposes that our concern is to maximize the total amount of well-being. Suppose instead that we adopt a "person-affecting" version of utilitarianism, according to which our obligations are primarily towards currently existing persons and to those persons that will come to exist. On such a person-affecting view, human extinction would be bad only because it makes past or ongoing lives worse, not because it constitutes a loss of potential worthwhile lives. What ought someone who embraces this doctrine to do? Should he emphasize speed or safety, or something else?

To answer this, we need to consider some further matters. Suppose one thinks that the probability is negligible that any existing person will survive long enough to get to use a significant portion of the accessible astronomical resources, which, as described in the opening section of this paper, are gradually going to waste. Then one's reason for minimizing existential risk is that sudden extinction would cut off an average of, say, 40 years from each of the current (six billion or so) human lives. While this would certainly be a large disaster, it is in the same big ballpark as other ongoing human tragedies, such as world poverty, hunger and disease. On this assumption, then, a person-affecting utilitarian should regard reducing existential risk as a very important but not completely dominating concern. There would in this case be no easy answer to what he ought to do. Where he ought to focus his efforts would depend on detailed calculations about which area of philanthropic activity he would happen to be best placed to make a contribution to.
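The scale of the person-affecting loss can be made concrete using the figures given above (roughly six billion people, each losing an average of about 40 years of remaining life); this is a back-of-the-envelope sketch of the paragraph's own numbers:

```python
# Rough scale of the person-affecting loss from sudden extinction,
# using the text's illustrative figures.

population = 6e9        # ~six billion current human lives
avg_years_lost = 40     # assumed average remaining life lost per person

life_years_lost = population * avg_years_lost
print(f"{life_years_lost:.1e} life-years lost")  # 2.4e+11 life-years lost
```

A loss on the order of a few hundred billion life-years is enormous, but finite and comparable in kind to ongoing tragedies such as poverty and disease, which is why, on this view, risk reduction is weighty without being strictly dominant.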