Nick Bostrom

Biography

Nick Bostrom is a Professor in the Faculty of Philosophy at Oxford University, with a background in physics, computational neuroscience, mathematical logic, and philosophy. He is the founding director of the Future of Humanity Institute, a multidisciplinary research center at the University of Oxford that brings together leading researchers using mathematics, philosophy, and science to explore big-picture questions about humanity. Recently, the institute’s focus has been on existential risks and the future of machine intelligence. The Future of Humanity Institute works closely with the Centre for Effective Altruism [1] [2] [3].

In early 1998, the World Transhumanist Association was founded by Nick Bostrom and David Pearce. Its objective was “to provide a general organizational basis for all transhumanist groups and interests, across the political spectrum. The aim was also to develop a more mature and academically respectable form of transhumanism, freed from the ‘cultishness’ which, at least in the eyes of some critics, had afflicted some of its earlier convocations.” The association has since changed its name to Humanity+. There were two founding documents of the World Transhumanist Association: the Transhumanist Declaration and the Transhumanist FAQ. The former was a concise statement of the basic principles of transhumanism; the FAQ was a consensus document, more philosophical in scope [4].

Bostrom is the author of the books Anthropic Bias (2002) and Superintelligence: Paths, Dangers, Strategies (2014). He served as an editor of the books Human Enhancement (2009) and Global Catastrophic Risks (2011). He has also published numerous journal papers, as well as contributing book chapters, conference proceedings, and articles. He is best known for his work on the simulation argument, existential risk, anthropics, the impacts of future technology, and the implications of consequentialism for global strategy [1].

He has received the Eugene R. Gannon Award, given annually to one person worldwide from the fields of philosophy, mathematics, the arts and other humanities, and the natural sciences. He has also been named to Foreign Policy’s Top 100 Global Thinkers list and Prospect Magazine’s World Thinkers list [1].

He was born on March 10, 1973, in Helsingborg, Sweden. He grew up as an only child and did not particularly enjoy school. He does not cite his parents as influences in his exploration of large philosophical questions. His father worked for an investment bank, and his mother for a Swedish corporation [3]. As a teenager he had what he describes as an “epiphany experience”: in 1989 he picked up, at random, an anthology of 19th-century German philosophy containing works by Nietzsche and Schopenhauer. Reading it gave him a dramatic sense of the possibilities of learning, and he proceeded to educate himself quickly, reading feverishly and painting and writing poetry in his spare time [1] [3]. He did not pursue these artistic endeavors, giving priority to his mathematical pursuits. He took degrees in philosophy and mathematical logic at Gothenburg University and completed his PhD at the London School of Economics [3].

Existential risk

In his studies, Nick Bostrom has delved into the analysis of existential risks: those that threaten the entire future of humanity, however unthinkable that may seem. An existential risk does not necessarily mean the premature extinction of Earth-originating intelligent life; it can instead be the permanent and drastic destruction of the potential for desirable future development. Human extinction risks are not well understood and tend to be underestimated by society [5] [6]. Some of the risks are relatively well known, such as asteroids or supervolcanoes, but there are more obscure ones arising from the development of human technology, and these are expected to grow in number and potency in the near future [6].

It is usually difficult to determine the probability of existential risks, but experts judge that there is a significant risk confronting humanity over the next few centuries, estimated at between 10 and 20 per cent for this century. This value relies on subjective judgement, so a more reasonable estimate could be substantially higher or lower. Some theories of value imply that even a relatively small reduction in net existential risk would have enormous expected value [5]. Bostrom uses philosophy and mathematics (specifically probability theory) to try to determine how humanity might prevent and survive one of many potential catastrophic events [6].
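
The expected-value point can be made concrete with a back-of-the-envelope calculation in the spirit of Bostrom (2013), which takes $10^{16}$ human lives as a conservative lower bound on the potential of an Earth-bound future; the figures here are illustrative, not a precise estimate:

$$\underbrace{10^{16}}_{\text{potential future lives}} \times \underbrace{10^{-8}}_{\text{one millionth of one percentage point of risk}} = 10^{8} \text{ expected lives saved,}$$

so shaving even a millionth of one percentage point off the probability of existential catastrophe would, in expectation, be worth a hundred million lives.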

Not all risks have the same probability of occurring. According to Bostrom (2002), the magnitude of a risk can be described using three dimensions: scope, intensity, and probability. Here, “scope” is the size of the group of people at risk; “intensity” is how badly each individual in the group would be affected; and “probability” is the best current subjective estimate of the probability of the adverse outcome [7].
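
As a minimal sketch, the taxonomy could be encoded as a small data structure. The `Risk` type and the example values below are hypothetical illustrations, not anything from Bostrom’s paper:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One risk described along Bostrom's three dimensions."""
    name: str
    scope: str          # size of the group at risk, e.g. "personal", "local", "global"
    intensity: str      # how badly each individual is affected, e.g. "endurable", "terminal"
    probability: float  # best current subjective estimate, in [0, 1]

# An existential risk is global in scope and terminal in intensity;
# the probability figure here is made up for illustration.
example = Risk(name="asteroid impact", scope="global",
               intensity="terminal", probability=0.0001)
print(example)
```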

There is a distinction between existential risks and global endurable risks. With the former, all of humankind is imperiled. Examples of the latter are threats to the Earth’s biodiversity, moderate global warming, global economic recessions, and stifling cultural or religious eras (e.g. “dark ages”), assuming they are transitory. Although endurable, these examples are not meant to be taken as acceptable or trivial [7].

The first man-made existential risk was the detonation of the first atomic bomb: there were concerns at the time that the explosion could start a chain reaction by igniting the atmosphere. This risk has since been dismissed as physically impossible, but it still qualifies as an existential risk that was present at the time. If there is some subjective probability of an adverse outcome, it must be treated as a risk at the time, even if the risk is later dismissed. In other words, one has to take into account the best current subjective estimate of what the objective risk factors are. Bostrom says that the risk of nuclear Armageddon and comet or asteroid strikes may be preludes to the risks that will be faced during the 21st century [7].

Existential risks can be classified into four categories. The names of the categories have changed since Bostrom first introduced them, but their descriptions remain the same. They were, initially, “Bangs”, “Crunches”, “Shrieks”, and “Whimpers”. The current nomenclature is: “Human extinction”, “Permanent stagnation”, “Flawed realization”, and “Subsequent ruination” [5] [7].

In the “Human extinction” category, humanity goes extinct before reaching technological maturity, due to a sudden accidental disaster or a deliberate act of destruction. Examples include the deliberate misuse of nanotechnology, nuclear holocaust, the shutting down of a simulation (if we are living in one), a badly programmed superintelligence, or a genetically engineered biological agent. “Permanent stagnation” is when humanity never reaches technological maturity, even though human life continues in some form. Scenarios that could lead to this category include resource depletion or ecological destruction, a misguided world government stopping technological progress, “dysgenic” pressures, or technological arrest. In the category of “Flawed realization”, humankind achieves technological maturity but in a way that is irremediably flawed, realizing only an extremely narrow band of what would be possible and desirable. Examples include a take-over by a transcending upload, a flawed superintelligence, or a repressive totalitarian global regime. Finally, in “Subsequent ruination”, humanity reaches technological maturity but later developments lead to the permanent ruination of its good prospects. Example scenarios are being killed by an extraterrestrial civilization, the erosion of humanity’s potential or even its core values by evolutionary development, or something unforeseen [5] [7].

Bostrom (2013) describes some policy implications of existential risks. For example, existential risk as a concept can help bring focus to long-term global efforts and sustainability concerns; morally, it could be argued that existential risk reduction is strictly more important than any other global public good; and perhaps the most cost-effective long-term strategy for reducing existential risk is to fund analysis of a wide range of risks and mitigation policies. Among all existential risk scenarios, those of greatest concern are considered to be the anthropogenic ones, which arise from human activity [5].

Bostrom also proposed the concept of “Maxipok”, which he describes as an effort to “maximize the probability of an ‘OK outcome’, where an OK outcome is any outcome that avoids existential catastrophe.” This concept should be taken as a rule of thumb, not as a principle of absolute validity; its usefulness is as an aid to prioritization [5].

Anthropic principle

Another topic that Nick Bostrom has studied extensively is anthropic bias, or observational selection effects. A selection effect is a bias introduced by constraints in the data collection process, for example by the limitations of a measuring device. An observational selection effect is one that arises from the precondition that there is some observer properly positioned to examine the evidence. Bostrom has studied how to reason when it is suspected that evidence is biased by such observation selection effects [8] [9]. Questions that involve reasoning from conditioned observations include, for example: “is the fact that life evolved on Earth evidence that life is abundant in the universe?”, “why does the universe appear fine-tuned for life?”, and “are we entitled to conclude from our being among the first sixty billion humans ever to have lived that probably no more than several trillion humans will ever come into existence - that is, that human extinction lies in the relatively near future?” [8]
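
The last of these questions is the classic doomsday argument, which Bostrom analyzes at length. A schematic version, under the Self-Sampling Assumption (reason as if your birth rank $r$ were drawn uniformly at random from all humans who will ever live), runs as follows; the two candidate totals below are illustrative numbers, not Bostrom’s own:

$$P(r \mid N) = \frac{1}{N} \quad (r \le N), \qquad \frac{P(r \mid N_{\text{small}})}{P(r \mid N_{\text{large}})} = \frac{N_{\text{large}}}{N_{\text{small}}}.$$

With $r \approx 6 \times 10^{10}$, a hypothesis on which only $N_{\text{small}} = 2 \times 10^{11}$ humans ever live makes that birth rank a thousand times more likely than one on which $N_{\text{large}} = 2 \times 10^{14}$ do, so the observation shifts credence toward the smaller total. Whether this Bayesian shift is legitimate is precisely what the debate over observation selection effects is about.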

According to Bostrom (2002), anthropic reasoning seeks to detect, diagnose, and cure such biases. It is a philosophically rich field: it has empirical implications, touches on many important scientific questions, poses intricate paradoxes, and contains generous quantities of conceptual and methodological confusion that need to be sorted out.

The term “anthropic principle” was coined by the cosmologist Brandon Carter in a series of papers in the 1970s and 1980s. The term “anthropic” is a misnomer, since reasoning about observation selection effects has no special connection to a particular species (in this case Homo sapiens), but concerns observers in general. There is some confusion in the field, with several anthropic principles being formulated and defined in different ways by various authors. In Bostrom (2002), the author writes that “some reject anthropic reasoning out of hand as representing an obsolete and irrational form of anthropocentrism. Some hold that anthropic inferences rest on elementary mistakes in probability calculus. Some maintain that at least some of the anthropic principles are tautological and therefore indisputable. Tautological principles have been dismissed by some as empty and thus of no interest or ability to do explanatory work. Others have insisted that like some results in mathematics, though analytically true, anthropic principles can nonetheless be interesting and illuminating. Others still purport to derive empirical predictions from these same principles and regard them as testable hypotheses.” [9].

More recently, Bostrom and colleagues introduced the concept of the anthropic shadow: an observation selection effect that prevents the observation of certain extreme risks that lie close to us in geological and evolutionary time. The anthropic shadow compounds the “normal” selection effects that apply to any sort of event. Correcting for this type of bias can change the probability estimates for catastrophic events, and recognizing it might also help avoid errors in risk analysis [10].
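
The underlying selection effect can be illustrated with a toy Monte Carlo sketch; this is not a reproduction of the model in Cirkovic, Sandberg and Bostrom (2010), and all parameter values below are made up for illustration:

```python
import random

def estimated_rate(true_rate=0.05, lethality=0.5, periods=100, trials=100_000):
    """Toy Monte Carlo of an anthropic shadow (illustrative parameters).

    Each period a catastrophe strikes with probability true_rate; each
    strike destroys the observer lineage with probability lethality.
    Surviving observers estimate the strike rate from the events
    recorded in their own history.
    """
    estimates = []
    for _ in range(trials):
        recorded = 0
        survived = True
        for _ in range(periods):
            if random.random() < true_rate:       # a catastrophe occurs
                if random.random() < lethality:   # ...and ends the lineage
                    survived = False
                    break
                recorded += 1                     # survivable strike, logged
        if survived:
            estimates.append(recorded / periods)
    return sum(estimates) / len(estimates)

# Survivor histories under-represent catastrophes: prints roughly 0.026,
# about half the true per-period rate of 0.05.
print(estimated_rate())
```

Lethal strikes leave no observers behind to record them, so the histories available to surviving observers systematically under-represent the hazard; this is the shadow that a corrected risk estimate has to account for.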

Superintelligence

The development of artificial intelligence (AI) could advance rapidly, possibly becoming an existential threat to humankind. Bostrom, in his book Superintelligence (2014), compares humans developing AI to small children playing with a bomb. He also considers it “the most important thing to happen… since the rise of the human species”. There is no reason to project human psychology onto artificial minds and assume that they would have the same emotional responses that humans developed during the evolutionary process; expecting human characteristics from an AI could impede our understanding of what it might be like [11]. This area of study has received some attention, with Elon Musk investing $10 million to fund research on keeping AI friendly [12].

Simulation argument

The simulation argument is arguably Bostrom’s best-known work. It comes from a 2003 paper published in The Philosophical Quarterly [13] [14]. Although the full argument requires some probability theory, the basic idea can be grasped without resorting to mathematics [6] [15]. It begins with the assumption that future civilizations will have so much computing power that they will be able to create ancestor simulations: detailed simulations of their forebears, replicating reality down to the smallest detail and allowing minds in the simulation to be conscious. Given their enormous computing power, it is assumed that they would also run many such simulations. According to Bostrom (2003), “then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race”, making it likely that we are among the simulated minds rather than the original biological ones. Conversely, if we do not believe that we are in a computer simulation, then we cannot assume that our descendants will run a great number of simulations of their ancestors [14] [15] [16].

Another assumption that needs to be made, regarding the philosophy of mind, is substrate independence: mental states can occur in different classes of physical substrates, not only biological ones. For example, silicon-based processors in a computer could in principle be capable of generating consciousness. For the generation of subjective experiences, it is believed to be enough that the computational processes of a human brain be replicated in fine-grained detail, down to the level of individual synapses. At present there is neither sufficiently powerful hardware nor the necessary software to develop conscious minds in computers, but it is expected that if technological progress continues these problems will be overcome [14] [16].

The simulation argument tries to demonstrate that at least one of three propositions is true: first, that almost all civilizations like ours go extinct before reaching technological maturity; second, that almost all technologically mature civilizations lose interest in creating ancestor simulations; and third, that we are almost certainly living in a computer simulation [6] [14].

If the first proposition is false, then a significant portion of civilizations like ours reach technological maturity. If the second is false, a significant fraction of those civilizations run ancestor simulations. It follows that if the first and second propositions are both false, there would be a great number of simulations, in which case almost all observers with our types of experiences would be living in them. The simulation argument does not show that we are living in a simulation; it states that at least one of the three propositions is true, without telling us which one [6] [15].
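
The probability-theoretic core of the argument can be stated compactly. In Bostrom (2003), if $f_P$ is the fraction of human-level civilizations that survive to reach a posthuman stage, $\bar{N}$ the average number of ancestor simulations run by a posthuman civilization, and $\bar{H}$ the average number of individuals that have lived in a civilization before it reaches that stage, then the fraction of all observers with human-type experiences who live in simulations is

$$f_{\text{sim}} = \frac{f_P\,\bar{N}\,\bar{H}}{f_P\,\bar{N}\,\bar{H} + \bar{H}} = \frac{f_P\,\bar{N}}{f_P\,\bar{N} + 1}.$$

Unless $f_P\,\bar{N}$ is very small, which requires either the first or the second proposition to be true, $f_{\text{sim}}$ is close to one, which is the third proposition.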

Transhumanism

In Bostrom (2005), transhumanism is described as “a loosely defined movement that has developed gradually over the past two decades, and can be viewed as an outgrowth of secular humanism and the Enlightenment. It holds that current human nature is improvable through the use of applied science and other rational methods, which may make it possible to increase human health-span, extend our intellectual and physical capacities, and give us increased control over our own mental states and moods. Technologies of concern include not only current ones, like genetic engineering and information technology, but also anticipated future developments such as fully immersive virtual reality, machine-phase nanotechnology, and artificial intelligence.” [17]. This arises from the human desire to acquire new capabilities; humanity has sought to expand the boundaries of its existence since ancient times [4].

Transhumanism advocates that human enhancement technologies should be widely available and that individuals should have the option to choose which technologies to apply to themselves. It also promotes the view that parents should decide which reproductive technologies to use when having children. Transhumanists believe that the benefits of human enhancement technologies will outweigh their potential hazards. The development and implementation of these future technologies could lead to our descendants being “posthuman”: with indefinite health-spans, greater intellectual faculties, new sensibilities, or possibly the ability to control their emotions [17].

Cognitive enhancement

Cognitive enhancement is “the amplification or extension of core capacities of the mind through improvement or augmentation of internal or external information processing systems.” For example, external hardware and software already give human beings effective cognitive abilities that in some respects surpass those of biological brains. To improve cognitive function, interventions can be directed at the core faculties of cognition: perception, attention, understanding, and memory [18].

Bibliography

Books

  • Bostrom, N. (2002). Anthropic bias: Observation Selection Effects in Science and Philosophy. New York, NY, Routledge.
  • Savulescu, J. and Bostrom, N. (Eds.) (2009). Human Enhancement. Oxford, NY, Oxford University Press.
  • Bostrom, N. and Cirkovic, M. M. (Eds.) (2011). Global Catastrophic Risks. Oxford, NY, Oxford University Press.
  • Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford, NY, Oxford University Press.

Selected articles

  • Bostrom, N. (1998). How long before superintelligence? International Journal of Futures Studies, 2.
  • Bostrom, N. (2002). Existential risks: analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9(1).
  • Bostrom, N. (2003). Are we living in a computer simulation? The Philosophical Quarterly, 53(211): 243-255.
  • Bostrom, N. (2003). Human genetic enhancements: a transhumanist perspective. The Journal of Value Inquiry, 37(4): 493-506.
  • Bostrom, N. (2005). In defense of posthuman dignity. Bioethics, 19(3): 202-214.
  • Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1).
  • Bostrom, N. (2005). Transhumanist values. Journal of Philosophical Research, 30: 3-14.
  • Bostrom, N. and Ord, T. (2006). The reversal test: eliminating status quo bias in applied ethics. Ethics, 116(4): 656-679.
  • Bostrom, N. and Sandberg, A. (2009). Cognitive enhancement: methods, ethics, regulatory challenges. Science and Engineering Ethics, 15(3): 311-341.
  • Bostrom, N. (2011). A patch for the simulation argument. Analysis, 71(1): 54-61.
  • Bostrom, N. (2012). The superintelligent will: motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2): 71-85.
  • Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1): 15-31.

References

  1. Bostrom, N. Nick Bostrom’s home page. Retrieved from http://nickbostrom.com/
  2. Future of Humanity Institute. Mission. Retrieved from https://www.fhi.ox.ac.uk/about/mission/
  3. Adams, T. (2016). Artificial intelligence: ‘We’re like children playing with a bomb’. Retrieved from https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine
  4. Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1).
  5. Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1): 15-31.
  6. Andersen, R. (2012). Risk of human extinction. Retrieved from https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
  7. Bostrom, N. (2002). Existential risks: analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology, 9(1).
  8. Manson, N. (2003). Anthropic bias: observation selection effects in science and philosophy (Review). Retrieved from http://ndpr.nd.edu/news/23266/
  9. Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York, NY, Routledge.
  10. Cirkovic, M. M., Sandberg, A. and Bostrom, N. (2010). Anthropic shadow: observation selection effects and human extinction risks. Risk Analysis, 30(10): 1495-1506.
  11. Silverman, A. In conversation: Nick Bostrom. Retrieved from http://2015globalthinkers.foreignpolicy.com/#!advocates/detail/qa-bostrom
  12. Mack, E. (2015). Bill Gates says you should worry about artificial intelligence. Retrieved from http://www.forbes.com/sites/ericmack/2015/01/28/bill-gates-also-worries-artificial-intelligence-is-a-threat/#b2a52b93d103
  13. Stricherz, V. (2012). Do we live in a computer simulation? UW researchers say idea can be tested. Retrieved from https://www.washington.edu/news/2012/12/10/do-we-live-in-a-computer-simulation-uw-researchers-say-idea-can-be-tested/
  14. Bostrom, N. (2003). Are we living in a computer simulation? The Philosophical Quarterly, 53(211): 243-255.
  15. Bostrom, N. (2006). Do we live in a computer simulation? New Scientist, 192(2579): 38-39.
  16. Solon, O. (2016). Is our world a simulation? Why some scientists say it’s more likely than not. Retrieved from https://www.theguardian.com/technology/2016/oct/11/simulated-world-elon-musk-the-matrix
  17. Bostrom, N. (2005). In defense of posthuman dignity. Bioethics, 19(3): 202-214.
  18. Bostrom, N. and Sandberg, A. (2009). Cognitive enhancement: methods, ethics, regulatory challenges. Science and Engineering Ethics, 15(3): 311-341.