Bostrom also proposed the concept of “Maxipok”, which he describes as an effort to “maximize the probability of an ‘OK outcome’, where an OK outcome is any outcome that avoids existential catastrophe.” He presents this concept as a rule of thumb rather than a principle of absolute validity; its usefulness is as an aid to prioritization <ref name="5"></ref>.
==Anthropic principle==
Another topic that Nick Bostrom has studied extensively is anthropic bias, or observation selection effects. A selection effect is a bias introduced by constraints in the data collection process, for example by the limitations of a measuring device. An observation selection effect is one that arises from the precondition that there is some observer properly positioned to examine the evidence. Bostrom has studied how to reason when it is suspected that evidence is biased by such effects <ref name="8">Manson, N. (2003). Anthropic Bias: Observation Selection Effects in Science and Philosophy (review). Retrieved from http://ndpr.nd.edu/news/23266/</ref><ref name="9">Bostrom, N. (2002). Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York, NY: Routledge.</ref>. Some questions that involve reasoning from conditioned observations are, for example, “is the fact that life evolved on Earth evidence that life is abundant in the universe?”, “why does the universe appear fine-tuned for life?”, or “are we entitled to conclude from our being among the first sixty billion humans ever to have lived that probably no more than several trillion humans will ever come into existence - that is, that human extinction lies in the relatively near future?” <ref name="8"></ref>
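The last of these questions is a version of the Doomsday argument, which Bostrom has analyzed at length. As an illustrative sketch of the standard formulation (the numbers and hypotheses below are chosen for illustration, not taken from Bostrom's own more qualified treatment): under the Self-Sampling Assumption, an observer reasons as if their birth rank <math>n</math> were drawn uniformly at random from the <math>N</math> humans who will ever live, so <math>P(n \mid N) = 1/N</math> for <math>n \le N</math>. Bayes' theorem then gives <math>P(N \mid n) \propto P(N)/N</math>, shifting credence toward smaller totals. For example, comparing two hypothetical totals with equal priors, <math>N_1 = 2 \times 10^{11}</math> and <math>N_2 = 2 \times 10^{14}</math>, an observed rank of <math>n \approx 6 \times 10^{10}</math> yields a likelihood ratio of <math>N_2/N_1 = 1000</math> in favor of the smaller total population.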
According to Bostrom (2002), anthropic reasoning seeks to detect, diagnose, and cure such biases. He describes it as a philosophical field unusually rich in empirical implications: it touches on many important scientific questions, poses intricate paradoxes, and contains generous quantities of conceptual and methodological confusion that need to be sorted out.
The term “anthropic principle” was coined by the cosmologist Brandon Carter in a series of papers in the 1970s. Bostrom regards the word “anthropic” as a misnomer, since reasoning about observation selection effects is not tied to any particular species (in this case Homo sapiens) but concerns observers in general. There is some confusion in the field, with several anthropic principles being formulated and defined in different ways by various authors. In Bostrom (2002), the author writes that “some reject anthropic reasoning out of hand as representing an obsolete and irrational form of anthropocentrism. Some hold that anthropic inferences rest on elementary mistakes in probability calculus. Some maintain that at least some of the anthropic principles are tautological and therefore indisputable. Tautological principles have been dismissed by some as empty and thus of no interest or ability to do explanatory work. Others have insisted that like some results in mathematics, though analytically true, anthropic principles can nonetheless be interesting and illuminating. Others still purport to derive empirical predictions from these same principles and regard them as testable hypotheses.” <ref name="9"></ref>
More recently, Bostrom and colleagues introduced the concept of the anthropic shadow: an observation selection effect that prevents observers from registering certain extreme risks in their recent geological and evolutionary past, because events of that severity would have precluded the observers' existence. The anthropic shadow is cumulative with the “normal” selection effects that apply to any sort of event. Correcting for this type of bias raises the probability estimates for catastrophic events, and recognizing it might also help avoid errors in risk analysis <ref name="10">Cirkovic, M. M., Sandberg, A. and Bostrom, N. (2010). Anthropic shadow: observation selection effects and human extinction risks. Risk Analysis, 30(10): 1495-1506.</ref>.
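The structure of the bias can be seen in a simplified sketch (an illustration under stated assumptions, not the model of the paper itself): suppose a catastrophe of a given severity occurs independently with probability <math>q</math> in each of the <math>T</math> epochs of an observer's past, and that any such occurrence would have precluded the observer's existence. The naive estimate from the observed record, <math>\hat{q} = k/T</math> with <math>k</math> the number of recorded events, is then forced to zero: conditional on an observer existing, <math>P(\text{event-free record} \mid q) = 1</math> whatever the true value of <math>q</math>. For the most severe event classes the historical record is thus uninformative, and frequency estimates that ignore this conditioning are biased downward.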
==Superintelligence==
The development of artificial intelligence (AI) could advance rapidly, possibly making AI an existential threat to humankind. In his book Superintelligence (2014), Bostrom compares humanity's position in developing AI to that of small children playing with a bomb, and calls the prospect of machine superintelligence “the most important thing to happen… since the rise of the human species”. He argues against projecting human psychology onto artificial minds: there is no reason to assume that they would have the same emotional responses that humans developed during the evolutionary process, and expecting human characteristics from an AI could impede our understanding of what it might be like <ref name="11">Silverman, A. (2015). In conversation: Nick Bostrom. Retrieved from http://2015globalthinkers.foreignpolicy.com/#!advocates/detail/qa-bostrom</ref>. This area of study has begun to receive serious attention, with Elon Musk donating $10 million to fund research on keeping AI friendly <ref name="12">Mack, E. (2015). Bill Gates says you should worry about artificial intelligence. Forbes. Retrieved from http://www.forbes.com/sites/ericmack/2015/01/28/bill-gates-also-worries-artificial-intelligence-is-a-threat/#b2a52b93d103</ref>.
==Bibliography==
===Selected articles===
* Bostrom, N. (1998). How long before superintelligence? International Journal of Future Studies, 2.
* Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology, 9(1).
* Bostrom, N. (2003). Are we living in a computer simulation? The Philosophical Quarterly, 53(211): 243-255.
* Bostrom, N. (2003). Human genetic enhancements: a transhumanist perspective. The Journal of Value Inquiry, 37(4): 493-506.
* Bostrom, N. (2005). In defense of posthuman dignity. Bioethics, 19(3): 202-214.
* Bostrom, N. (2005). A history of transhumanist thought. Journal of Evolution and Technology, 14(1).
* Bostrom, N. (2005). Transhumanist values. Journal of Philosophical Research, 30: 3-14.
* Bostrom, N. and Ord, T. (2006). The reversal test: eliminating status quo bias in applied ethics. Ethics, 116(4): 656-679.
* Bostrom, N. and Sandberg, A. (2009). Cognitive enhancement: methods, ethics, regulatory challenges. Science and Engineering Ethics, 15(3): 311-341.
* Bostrom, N. (2012). The superintelligent will: motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2): 71-85.
* Bostrom, N. (2013). Existential risk prevention as global priority. Global Policy, 4(1): 15-31.
==References==