Nick Bostrom

He was born on March 10, 1973, in Helsingborg, Sweden. He grew up as an only child and did not particularly enjoy school. He does not cite his parents as influences in exploring large philosophical questions. His father worked for an investment bank, and his mother for a Swedish corporation <ref name="3"></ref>. As a teenager he had what he describes as an “epiphany experience”: in 1989 he picked up at random an anthology of 19th-century German philosophy containing works by Nietzsche and Schopenhauer. After reading it, he experienced a dramatic sense of the possibilities of learning and proceeded to educate himself rapidly, reading feverishly and painting and writing poetry in his spare time <ref name="1"></ref> <ref name="3"></ref>. He did not pursue these artistic endeavors, giving priority to his mathematical pursuits. He took degrees in philosophy and mathematical logic at Gothenburg University and completed his PhD at the London School of Economics <ref name="3"></ref>.
 
==Existential risk==
 
In his studies, Nick Bostrom has delved into the analysis of existential risks: risks that threaten the entire future of humanity, however unthinkable that may seem. An existential risk does not necessarily mean the premature extinction of Earth-originating intelligent life; it can also be the permanent and drastic destruction of the potential for desirable future development. Human extinction risks are not well understood and tend to be underestimated by society <ref name="5">Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1): 15-31.</ref> <ref name="6">Andersen, R. (2012). Risk of human extinction. Retrieved from https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/</ref>. Some of the risks are relatively well known, such as asteroids or supervolcanoes, but others are more obscure, arising from the development of human technology, and are expected to grow in number and potency in the near future <ref name="6"></ref>.
 
It is usually difficult to determine the probability of existential risks, but experts suppose that there is a significant risk confronting humanity over the next few centuries, estimated at 10-20 per cent for this century. This value relies on subjective judgement, so a more reasonable estimate could be substantially higher or lower. Some theories of value imply that even a relatively small reduction in net existential risk can have enormous expected value <ref name="5"></ref>. Bostrom uses philosophy and mathematics (specifically probability theory) to try to determine how humanity might prevent and survive one of many potential catastrophic events <ref name="6"></ref>.
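
As a rough numerical sketch of that expected-value claim, even a tiny reduction in extinction probability corresponds to a very large expected number of future lives preserved. All figures below are hypothetical assumptions chosen for illustration, not estimates from Bostrom's work.
<syntaxhighlight lang="python">
# Illustrative expected-value arithmetic; every number here is an assumption
# chosen for the example, not a figure from Bostrom's publications.

baseline_risk = 0.15           # assumed extinction probability this century (within the 10-20% range above)
risk_reduction = 0.0001        # a 0.01 percentage-point reduction in that probability
potential_future_lives = 1e16  # assumed number of future lives if humanity survives

expected_lives_preserved = risk_reduction * potential_future_lives
print(f"Expected future lives preserved: {expected_lives_preserved:.1e}")  # 1.0e+12
</syntaxhighlight>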
 
Not all risks have the same probability of occurring. According to Bostrom (2002), the magnitude of a risk can be described along three dimensions: scope, intensity, and probability. Here, “scope” is the size of the group of people at risk; “intensity” is how badly each individual in the group would be affected; and “probability” is the best current subjective estimate of the probability of the adverse outcome <ref name="7">Bostrom, N. (2002). Existential risks: analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9(1).</ref>.
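
These three dimensions can be sketched as a simple data structure. The sketch below is only an illustration: the string labels and the is_existential test are inferred from the description above and in the next paragraphs, not code or terminology fixed by Bostrom.
<syntaxhighlight lang="python">
from dataclasses import dataclass

# A minimal sketch of the three dimensions of risk magnitude described above.
# The label values and the existential-risk test are illustrative assumptions.

@dataclass
class Risk:
    name: str
    scope: str          # size of the group at risk, e.g. "personal", "local", "global"
    intensity: str      # how badly each individual is affected, e.g. "endurable", "terminal"
    probability: float  # best current subjective estimate of the adverse outcome

def is_existential(risk: Risk) -> bool:
    # Existential risks imperil all of humankind and are not endurable.
    return risk.scope == "global" and risk.intensity == "terminal"

asteroid = Risk("large asteroid impact", scope="global", intensity="terminal", probability=1e-4)
print(is_existential(asteroid))  # True
</syntaxhighlight>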
 
There is a distinction between existential risks and global endurable risks. In the former, all of humankind is imperiled. Examples of the latter are threats to the Earth’s biodiversity, moderate global warming, global economic recessions, or stifling cultural or religious eras (e.g. “dark ages”), assuming they are transitory. Although endurable, these risks are not meant to be regarded as acceptable or trivial <ref name="7"></ref>.
 
The first man-made existential risk was the detonation of an atomic bomb. There were concerns at the time that the explosion could start a chain reaction by igniting the atmosphere. This risk has since been dismissed as physically impossible, but it still qualifies as an existential risk that was present at the time: if there is some subjective probability of an adverse outcome, it needs to be treated as a risk at the time, even if it is later dismissed. In other words, one has to take into account the best current subjective estimate of what the objective risk factors are. Bostrom says that the risk of nuclear Armageddon and comet or asteroid strikes may be preludes to the risks that will be faced during the 21st century <ref name="7"></ref>.
 
Existential risks can be classified into four categories. The names of the categories have changed since Bostrom first introduced them, but their descriptions remain the same. Initially they were “Bangs”, “Crunches”, “Shrieks”, and “Whimpers”; the current nomenclature is “Human extinction”, “Permanent stagnation”, “Flawed realization”, and “Subsequent ruination” <ref name="5"></ref> <ref name="7"></ref>.
 
In the “Human extinction” category, humanity goes extinct before reaching technological maturity, due to a sudden accidental disaster or a deliberate act of destruction. Examples include the deliberate misuse of nanotechnology, nuclear holocaust, the shutting down of a simulation (if we are living in one), a badly programmed superintelligence, or a genetically engineered biological agent. “Permanent stagnation” is when humanity never reaches technological maturity, even though human life continues in some form; scenarios that can lead to this include resource depletion or ecological destruction, a misguided world government stopping technological progress, “dysgenic” pressures, or technological arrest. In “Flawed realization”, humankind achieves technological maturity but in a way that is irremediably flawed, existing in an extremely narrow band of what would be possible and desirable; examples include a take-over by a transcending upload, a flawed superintelligence, or a repressive totalitarian global regime. Finally, in “Subsequent ruination”, humanity reaches technological maturity but subsequent developments lead to the permanent ruination of future good prospects; example scenarios include annihilation by an extraterrestrial civilization, the erosion of humanity’s potential or core values through evolutionary development, or something unforeseen <ref name="5"></ref> <ref name="7"></ref>.
 
In Bostrom (2013), some policy implications of existential risk are described. For example, existential risk as a concept can help bring focus to long-term global efforts and sustainability concerns; morally, it could be said that existential risk reduction is strictly more important than any other global public good; and perhaps the most cost-effective long-term strategy for reducing existential risk is to fund analysis of a wide range of risks and mitigation policies. Among all existential risk scenarios, those of greatest concern are the anthropogenic risks that arise from human activity <ref name="5"></ref>.
 
Bostrom also suggests the concept of “maxipok”, which he describes as an effort to “maximize the probability of an ‘OK outcome’, where an OK outcome is any outcome that avoids existential catastrophe.” The concept should be taken as a rule of thumb rather than a principle of absolute validity; its usefulness is as an aid to prioritization <ref name="5"></ref>.
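
Read as a decision rule, maxipok simply ranks candidate actions by the probability that an OK outcome follows. The sketch below is a hypothetical illustration; the action names and probabilities are invented for the example and do not come from Bostrom's work.
<syntaxhighlight lang="python">
# Hypothetical illustration of the maxipok rule of thumb: among candidate
# actions, prefer the one with the highest probability of an OK outcome
# (i.e. of avoiding existential catastrophe). All values are invented.

candidate_actions = {
    "fund broad risk analysis": 0.905,  # assumed P(OK outcome) given the action
    "business as usual": 0.900,
    "rush risky technology": 0.880,
}

def maxipok(actions: dict[str, float]) -> str:
    """Return the action that maximizes the probability of an OK outcome."""
    return max(actions, key=actions.get)

print(maxipok(candidate_actions))  # fund broad risk analysis
</syntaxhighlight>
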
==Bibliography==
<references/>