==Issues==

===Philosophical issues===
The main philosophical problem faced by "mind uploading" or mind copying is the [[hard problem of consciousness]]: the difficulty of explaining how a physical entity such as a human can have [[qualia]], [[Consciousness#Types of consciousness|phenomenal consciousness]], or [[Subjective character of experience|subjective experience]].<ref name="original-paper2">{{cite journal |last=Chalmers |first=David |date=1995 |title=Facing up to the problem of consciousness |journal=[[Journal of Consciousness Studies]] |volume=2 |issue=3 |pages=200–219}}</ref> Many philosophical responses to the hard problem entail that mind uploading is fundamentally or practically impossible, while others are compatible with at least some formulations of it.

Many proponents defend the possibility of mind uploading by recourse to [[physicalism]], which includes the belief that consciousness is an [[emergence|emergent]] feature arising from high-level patterns of organization in large neural networks, patterns that could be realized in other processing devices. Mind uploading relies on the idea that the human mind (the "self" and long-term memory) reduces to the brain's current neural pathways and synaptic weights. In contrast, many [[Dualism (philosophy of mind)|dualistic]] and [[Idealism|idealistic]] accounts seek to avoid the hard problem of consciousness by explaining it in terms of immaterial (and presumably inaccessible) substances such as the [[soul]], which would pose a fundamental, or at least practical, challenge to the feasibility of artificial consciousness in general.<ref>{{Cite journal |last1=Kastrup |first1=Bernardo |year=2018 |title=The Universe in Consciousness |url=https://philpapers.org/rec/KASTUI |journal=Journal of Consciousness Studies |volume=25 |issue=5–6 |pages=125–155}}</ref>

Assuming physicalism is true, the mind can be defined as the information state of the brain, and so it is immaterial only in the same sense as the information content of a data file or the state of software residing in a computer's memory. In that case, data specifying the information state of the neural network could be captured and copied as a "computer file" from the brain and re-implemented into a different physical form.<ref>{{cite web|url=http://hplusmagazine.com/2013/06/17/clearing-up-misconceptions-about-mind-uploading/|title=Clearing Up Misconceptions About Mind Uploading|work=h+ Media|date=June 17, 2013 |first=Franco |last=Cortese}}</ref> This is not to deny that minds are richly adapted to their substrates.<ref>{{cite journal|url=http://faculty.cs.tamu.edu/choe/ftp/publications/choe-ijmc12-preprint.pdf|title=Time, Consciousness, and Mind Uploading|journal=International Journal of Machine Consciousness|year=2012|doi=10.1142/S179384301240015X|volume=04|issue=1|pages=257 |author1=Yoonsuck Choe |author2=Jaerock Kwon |author3=Ji Ryang Chung}}</ref> An analogy for mind uploading is copying the information state of a computer program from the memory of the computer on which it is executing to another computer and continuing its execution there. The second computer may have a different hardware architecture, but it [[computer emulator|emulates]] the hardware of the first.
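To make the analogy concrete, a minimal sketch follows (illustrative only: the <code>ToyNetwork</code> class, its weights, and all numbers are assumptions, not a model of any real brain or of any proposed uploading procedure). It captures the complete "information state" of a small network as data and re-instantiates it in a fresh object that behaves identically:

<syntaxhighlight lang="python">
# Illustrative sketch only: a toy "mind" whose behavior is fully
# determined by its synaptic weights, snapshotted as data ("scanned")
# and restored in a new object ("re-implemented").
import pickle
import numpy as np

class ToyNetwork:
    """A two-layer network; its behavior is fixed entirely by its weights."""
    def __init__(self, w1, w2):
        self.w1, self.w2 = w1, w2

    def forward(self, x):
        hidden = np.tanh(x @ self.w1)      # first "synaptic" layer
        return np.tanh(hidden @ self.w2)   # second "synaptic" layer

rng = np.random.default_rng(0)
original = ToyNetwork(rng.normal(size=(4, 8)), rng.normal(size=(8, 2)))

# "Scan": the network's complete information state is just its weights.
snapshot = pickle.dumps({"w1": original.w1, "w2": original.w2})

# "Re-implement" the state in a new substrate (a fresh object, which in
# principle could be reconstructed on entirely different hardware).
state = pickle.loads(snapshot)
duplicate = ToyNetwork(state["w1"], state["w2"])

x = rng.normal(size=4)
assert np.allclose(original.forward(x), duplicate.forward(x))  # identical behavior
</syntaxhighlight>

The analogy presupposes, as physicalism does, that nothing functionally relevant is lost in the snapshot; critics of uploading dispute precisely this point.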
These philosophical issues have a long history. In 1775, [[Thomas Reid]] wrote: "I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say, that being will be me; or, if two or three such beings should be formed out of my brain, whether they will all be me, and consequently one and the same intelligent being."<ref>{{cite web |title=The Duplicates Paradox (The Duplicates Problem) |url=http://www.benbest.com/philo/doubles.html |work=benbest.com}}</ref>

Although the name ''the hard problem of consciousness'' was coined in 1994, debate surrounding the problem itself is ancient. [[Augustine of Hippo]] argued against physicalist "Academians" in the 5th century, writing that consciousness cannot be an illusion because only a conscious being can be deceived or experience an illusion.<ref>{{cite book |last=Augustine of Hippo |title=City of God |chapter=Book 11, Chapter 26}}</ref> [[René Descartes]], the founder of [[Mind–body dualism|mind-body dualism]], made a similar objection in the 17th century, coining the popular phrase ''"Je pense, donc je suis"'' ("I think, therefore I am").<ref>{{cite book |last=Descartes |first=René |title=Discourse on the Method |year=1637 |chapter=4}}</ref> Although physicalism had been proposed as early as antiquity, [[Thomas Henry Huxley|Thomas Huxley]] was among the first to describe mental experience as merely an [[epiphenomenon]] of interactions within the brain, with no causal power of its own and entirely downstream of the brain's activity.<ref>{{Citation |last=Robinson |first=William |title=Epiphenomenalism |date=2023 |encyclopedia=The Stanford Encyclopedia of Philosophy |editor-last=Zalta |editor-first=Edward N. |url=https://plato.stanford.edu/archives/sum2023/entries/epiphenomenalism/ |access-date=2024-05-16 |edition=Summer 2023 |publisher=Metaphysics Research Lab, Stanford University |editor2-last=Nodelman |editor2-first=Uri}}</ref>

A considerable portion of [[transhumanists]] and [[singularitarians]] place great hope in the belief that they may become immortal by creating one or more non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist [[Susan Schneider (philosopher)|Susan Schneider]] claims that, at best, uploading would create a copy of the original person's mind.<ref name="Schneider">{{cite news|last=Schneider|first=Susan|title=The Philosophy of 'Her'|url=http://opinionator.blogs.nytimes.com/2014/03/02/the-philosophy-of-her/?_php=true&_type=blogs&_r=0|access-date=May 7, 2014|newspaper=The New York Times|date=March 2, 2014}}</ref> Schneider agrees that consciousness has a computational basis but argues that this does not mean we could upload ourselves and survive. In her view, "uploading" would probably result in the death of the original person's brain, while only outside observers could maintain the illusion that the original person was still alive. For it is implausible to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and elsewhere.
At best, a copy of the original mind is created.<ref name="Schneider" />

Research on the [[neural correlates of consciousness]], a subfield of neuroscience, suggests that consciousness may be thought of as a state-dependent property of some as-yet-undefined [[Complex systems|complex]], adaptive, and highly interconnected biological system.<ref>{{Cite book|title=Fundamental neuroscience|date=2008|publisher=Elsevier / Academic Press|last=Squire|first=Larry R.|isbn=9780123740199|edition=3rd|location=Amsterdam|oclc=190867431}}</ref>

Others have argued against such conclusions. For example, the Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading.<ref name="The Transhumanist Reader">{{cite book|last=Hughes|first=James|title=Transhumanism and Personal Identity|date=2013|publisher=Wiley|url=http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1118334299.html}}</ref> Keith Wiley has presented an argument wherein all the minds resulting from an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position.<ref>{{cite news|last=Wiley|first=Keith|title=Response to Susan Schneider's "Philosophy of 'Her"|url=http://hplusmagazine.com/2014/03/26/response-to-susan-schneiders-the-philosophy-of-her/|access-date=7 May 2014|work=H+Magazine|date=March 20, 2014}}</ref><ref name="WileyK_Taxonomy">{{cite book |last1=Wiley |first1=Keith |title=A Taxonomy and Metaphysics of Mind-Uploading |date=September 2014 |publisher=Humanity+ Press and Alautun Press |isbn=978-0692279847 |edition=1st |url=http://alautunpress.com |access-date=16 October 2014}}</ref> Some have also asserted that consciousness is part of an extra-biological system that is yet to be discovered and therefore cannot be fully understood under the present constraints of neurobiology; on this view, without the transference of consciousness, true mind uploading or perpetual immortality cannot practically be achieved.<ref>{{Cite web|url=https://medium.com/@anthrobot/on-achieving-immortality-3ed1d567f7a2|title=On Achieving Immortality|last=Ruparel|first=Bhavik|date=2018-07-30|website=medium.com|access-date=2018-07-31}}</ref>

Another concern is that the decision to "upload" may instead create a mindless symbol manipulator rather than a conscious mind (see [[philosophical zombie]]).<ref>{{cite journal |url=https://www.academia.edu/1246312 |title=My Brain, my Mind, and I: Some Philosophical Problems of Mind-Uploading |volume=4 |issue=1 |pages=187–200 |first=Michael |last=Hauskeller |journal=Academia.edu |year=2012}}</ref><ref>{{cite web|url=http://io9.com/you-ll-probably-never-upload-your-mind-into-a-computer-474941498|title=You Might Never Upload Your Brain Into a Computer|first=George |last=Dvorsky|work=io9 |date=April 17, 2013}}</ref> If a computer could process sensory inputs to generate the same outputs that a human mind does (speech, muscle movements, and so on) without necessarily having any experience of consciousness, it might be impossible to determine whether the uploaded mind is truly conscious rather than merely an automaton that externally behaves the way a human would.
Thought experiments like the [[Chinese room]] raise fundamental questions about mind uploading: if an upload displays behaviors that are highly indicative of consciousness, or even verbally insists that it is conscious, does that prove it is conscious?<ref>{{cite web |url=http://degreesofclarity.com/writing/oto_mind_uploading.pdf |title=Seeking normative guidelines for novel future forms of consciousness |first=Brandon |last=Oto |publisher=University of California, Santa Cruz |year=2011 |access-date=2014-01-03 |archive-date=2014-01-03 |archive-url=https://web.archive.org/web/20140103132727/http://degreesofclarity.com/writing/oto_mind_uploading.pdf |url-status=dead}}</ref> The subjectivity of consciousness precludes a definitive answer to this question.<ref>{{cite web |url=http://goertzel.org/Goertzel_IJMC_Special_Issue.pdf |title=When Should Two Minds Be Considered Versions of One Another? |first=Ben |last=Goertzel |year=2012}}</ref> There might also be an absolute upper limit in processing speed above which consciousness cannot be sustained. Numerous scientists, including [[Ray Kurzweil]], believe that whether a separate entity is conscious is impossible to know with confidence, since consciousness is inherently subjective (see [[solipsism]]). Regardless, some scientists believe consciousness is the consequence of computational processes that are substrate-neutral. Still others believe consciousness may emerge from some form of quantum computation dependent on the organic substrate (see [[quantum mind]]).<ref>{{cite web|url=http://hplusmagazine.com/2013/04/21/goertzel-contra-dvorsky-on-mind-uploading/|title=Goertzel Contra Dvorsky on Mind Uploading|date=April 21, 2013|first=Sally |last=Morem|work=h+ Media}}</ref><ref>{{cite journal |url=http://www.terasemcentral.org/docs/Terasem%20Mind%20Uploading%20Experiment%20IJMC.pdf |title=The Terasem Mind Uploading Experiment |first=Martine |last=Rothblatt |pages=141–158 |year=2012 |journal=[[International Journal of Machine Consciousness]] |volume=4 |issue=1 |doi=10.1142/S1793843012400070 |url-status=dead |archive-url=https://web.archive.org/web/20130827213457/http://www.terasemcentral.org/docs/Terasem%20Mind%20Uploading%20Experiment%20IJMC.pdf |archive-date=2013-08-27}}</ref><ref>{{cite journal|url=http://home.millsaps.edu/hopkipd/IJMC-Preprint-HopkinsUploading.pdf|title=Why Uploading Will Not Work, or, the Ghosts Haunting Transhumanism|first=Patrick D. |last=Hopkins|year=2012 |journal=International Journal of Machine Consciousness|volume=4|issue=1|pages=229–243|doi=10.1142/S1793843012400136|url-status=dead|archive-url=https://web.archive.org/web/20120906145410/http://home.millsaps.edu/hopkipd/IJMC-Preprint-HopkinsUploading.pdf |archive-date=2012-09-06}}</ref>

In light of the uncertainty about whether mind uploads would be conscious, Sandberg proposes a cautious approach:<ref name=SandbergEthics2014 />
{{Blockquote|Principle of assuming the most (PAM): Assume that any emulated system could have the same mental properties as the original system and treat it correspondingly.}}

===Ethical and legal implications===
The process of developing emulation technology raises ethical issues related to [[animal welfare]] and [[artificial consciousness]].<ref name=SandbergEthics2014>{{cite journal |first=Anders |last=Sandberg |title=Ethics of brain emulations |journal=Journal of Experimental & Theoretical Artificial Intelligence |date=14 April 2014 |volume=26 |issue=3 |pages=439–457 |doi=10.1080/0952813X.2014.895113 |s2cid=14545074}}</ref> The neuroscience needed to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals, before moving on to humans. In some cases the animals would only need to be euthanized so that their brains could be extracted, sliced, and scanned, but in others behavioral and ''in vivo'' measurements would be required, which might cause pain to living animals.<ref name=SandbergEthics2014 /> In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness.<ref name=SandbergEthics2014 />

Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "[[Qualia#David Chalmers|fading qualia]]" thought experiment of [[David Chalmers]]. He concludes:<ref name=Bancroft2013>{{cite journal |first=Tyler D. |last=Bancroft |title=Ethical Aspects of Computational Neuroscience |journal=Neuroethics |date=Aug 2013 |volume=6 |issue=2 |pages=415–418 |doi=10.1007/s12152-012-9163-7 |s2cid=145511899 |issn=1874-5504}}</ref> "If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations." Chalmers himself has argued that such virtual realities would be genuine realities.<ref>{{cite book |last=Chalmers |first=David |author-link=David Chalmers |date=2022 |title=Reality+: Virtual Worlds and the Problems of Philosophy |url=https://wwnorton.com/books/reality |location=New York |publisher=W. W. Norton & Company |isbn=9780393635805}}</ref> However, if mind uploading occurs and the uploads are not conscious, there may be a significant opportunity cost. In the book ''[[Superintelligence: Paths, Dangers, Strategies|Superintelligence]]'', [[Nick Bostrom]] expresses concern that we could build a "Disneyland without children".<ref name="bostrom2014">{{cite book |last=Bostrom |first=Nick |title=Superintelligence: Paths, Dangers, Strategies |date=2014 |publisher=Oxford University Press |isbn=978-0199678112 |location=Oxford, England}}</ref>

Emulation suffering might be reduced by developing virtual equivalents of anaesthesia and by omitting processing related to pain or consciousness. However, some experiments might require a fully functioning, and therefore suffering, animal emulation. Emulated animals might also suffer by accident, owing to flaws in the emulation and a lack of insight into which parts of their brains are suffering.<ref name=SandbergEthics2014 /> Questions also arise regarding the moral status of partial brain emulations, as well as of neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.<ref name=Bancroft2013 />

Brain emulations could be erased by computer viruses or malware without the need to destroy the underlying hardware, which may make assassination easier than it is for physical humans. The attacker might also take the computing power for its own use.<ref name=EckersleySandberg2013>{{cite journal|first1=Peter |last1=Eckersley|first2=Anders |last2=Sandberg|title=Is Brain Emulation Dangerous?|journal=Journal of Artificial General Intelligence|date=Dec 2013|volume=4|issue=3|pages=170–194|doi=10.2478/jagi-2013-0011|issn=1946-0163|bibcode=2013JAGI....4..170E|doi-access=free}}</ref>

Many questions arise regarding the legal personhood of emulations.<ref name=Muzyka2013 /> Would they be given the rights of biological humans?
If a person makes an emulated copy of themselves and then dies, does the emulation inherit their property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years, so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights?<ref name=Muzyka2013>{{cite journal|first=Kamil |last=Muzyka|title=The Outline of Personhood Law Regarding Artificial Intelligences and Emulated Human Entities|journal=Journal of Artificial General Intelligence|date=Dec 2013|volume=4|issue=3|pages=164–169|doi=10.2478/jagi-2013-0010|issn=1946-0163|bibcode=2013JAGI....4..164M|doi-access=free}}</ref>

If simulated minds were created and assigned rights of their own, it might be difficult to ensure the protection of "digital human rights". For example, social science researchers might be tempted to secretly subject simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same mind are exposed (serially or simultaneously) to different test conditions.{{citation needed|date=June 2014}}

Research led by the cognitive scientist Michael Laakasuo has shown that attitudes towards mind uploading are predicted by an individual's belief in an afterlife; the existence of mind uploading technology may threaten religious and spiritual notions of immortality and divinity.<ref name="laakasuo2022">{{cite journal |first1=Michael |last1=Laakasuo |first2=Jukka |last2=Sundvall |first3=Marianna |last3=Drosinou |display-authors=3 |author4=Ivar Hannikainen |author5=Anton Kunnari |author6=Kathryn B. Francis |author7=Jussi Palomäki |date=2023 |title=Would you exchange your soul for immortality? – Existential Meaning and Afterlife Beliefs Predict Mind Upload Approval |journal=Frontiers in Psychology |volume=14 |doi=10.3389/fpsyg.2023.1254846 |doi-access=free |pmid=38162973 |pmc=10757642}}</ref>

===Political and economic implications===
{{Update section|date=June 2024|reason=may not be relevant anymore, considering recent progress in large multimodal models}}
Emulations might be preceded by a technological arms race driven by [[First-mover advantage|first-strike advantages]]. Their emergence and existence may increase the risk of war, fueled by inequality, power struggles, strong loyalty and willingness to die among emulations, and new forms of racism, xenophobia, and religious prejudice.<ref>{{Cite journal |last=Hurtado Hurtado |first=Joshua |date=2022-07-18 |title=Envisioning postmortal futures: six archetypes on future societal approaches to seeking immortality |journal=Mortality |volume=29 |pages=18–36 |doi=10.1080/13576275.2022.2100250 |s2cid=250650618 |issn=1357-6275|doi-access=free}}</ref><ref name=EckersleySandberg2013 /> If emulations ran much faster than humans, there might not be enough time for human leaders to make wise decisions or to negotiate. Humans might also react violently against the growing power of emulations, especially if it depresses human wages.
Emulations may not trust each other, and even well-intentioned defensive measures [[security dilemma|might be interpreted as offense]].<ref name=EckersleySandberg2013 />

The book ''[[The Age of Em]]'' by [[Robin Hanson]] advances many hypotheses about the nature of a society of mind uploads, including that the most common minds would be copies of adults with personalities conducive to long hours of productive specialized work.<ref name="hanson">{{cite book |last=Hanson |first=Robin |author-link=Robin Hanson |url=https://ageofem.com/ |title=The Age of Em |date=2016 |publisher=Oxford University Press |isbn=9780198754626 |location=Oxford, England |page=528}}</ref>

===Emulation timelines and AI risk===
{{Update section|date=June 2024|reason=most sources are more than 10 years old, and may not reflect the current state of the debate}}
Kenneth D. Miller, a professor of neuroscience at Columbia University and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading. His central argument is that reconstructing neurons and their connections, while itself a formidable task, would be far from sufficient: the operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons, so capturing the brain in a single "frozen" state may prove inadequate, and the nature of these signals may require modeling at the molecular level and beyond. While not rejecting the idea in principle, Miller therefore believes that the complexity of "absolute" duplication of an individual mind will remain insurmountable for hundreds of years.<ref name="abondonallhopetoupload">{{Cite news |url=https://www.nytimes.com/2015/10/11/opinion/sunday/will-you-ever-be-able-to-upload-your-brain.html |title=Will You Ever Be Able to Upload Your Brain? |work=New York Times |first=Kenneth D. |last=Miller |date=October 10, 2015}}</ref>
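Miller's point about dynamics can be illustrated with a toy sketch (the pendulum model and all numbers below are assumptions for illustration, not taken from the cited article): two systems with an identical "frozen" structural snapshot but different ongoing dynamical states quickly diverge, so a static snapshot alone underdetermines future behavior.

<syntaxhighlight lang="python">
# Illustrative sketch under toy assumptions: two pendulums share an
# identical "frozen" snapshot of structure (the same angle) but differ
# in their ongoing dynamics (angular velocity). Their futures diverge,
# so the static snapshot alone does not determine behavior.
import math

def step(theta, omega, dt=0.01, g_over_l=9.8):
    """One Euler integration step of a frictionless pendulum."""
    return theta + omega * dt, omega - g_over_l * math.sin(theta) * dt

a = (0.5, 0.0)   # (angle, angular velocity)
b = (0.5, 1.0)   # same "anatomy", different dynamical state

for _ in range(200):               # simulate two seconds
    a, b = step(*a), step(*b)

print(f"angle A: {a[0]:+.3f}  angle B: {b[0]:+.3f}")  # diverging trajectories
# The brain's electrical and biochemical signalling plays the role of
# omega here: omit it from the scan and the copy's behavior is unfixed.
</syntaxhighlight>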
There are very few feasible technologies that humans have refrained from developing<!-- what technologies? -->. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, so their development will logically continue into the future. We may also have brain emulations for a brief but significant period on the way to non-emulation-based human-level AI.<ref name="hanson" />

Assuming that emulation technology will arrive, the question becomes whether we should accelerate or slow its advance.<ref name=EckersleySandberg2013 />

Arguments for speeding up brain-emulation research:
* If neuroscience rather than computing power is the bottleneck on brain emulation, emulation advances may be more erratic and unpredictable, depending on when new scientific discoveries happen.<ref name=EckersleySandberg2013 /><ref name=ShulmanSandberg2010>{{cite journal|last1=Shulman |first1=Carl |first2=Anders |last2=Sandberg|title=Implications of a Software-Limited Singularity|journal=ECAP10: VIII European Conference on Computing and Philosophy |year=2010 |url=http://intelligence.org/files/SoftwareLimited.pdf|access-date=17 May 2014|editor1-first=Klaus|editor1-last=Mainzer}}</ref><ref name=Hanson2009 /> Limited computing power would mean the first emulations would run more slowly, and so would be easier to adapt to, and there would be more time for the technology to transition through society.<ref name=Hanson2009>{{cite web|last1=Hanson|first1=Robin|title=Bad Emulation Advance|url=http://www.overcomingbias.com/2009/11/bad-emulation-advance.html|website=Overcoming Bias|access-date=28 June 2014|date=26 Nov 2009}}</ref>
* Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production,<ref name=EckersleySandberg2013 /> which could increase the "computing overhang"<ref name="MuehlhauserSalamon2012">{{cite book |last1=Muehlhauser |first1=Luke |title=Singularity Hypotheses: A Scientific and Philosophical Assessment |last2=Salamon |first2=Anna |publisher=Springer |year=2012 |editor=Eden |editor-first=Amnon |chapter=Intelligence Explosion: Evidence and Import |editor2=Søraker |editor-first2=Johnny |editor3=Moor |editor-first3=James H. |editor4=Steinhart |editor-first4=Eric |chapter-url=http://intelligence.org/files/IE-EI.pdf}}</ref> from excess hardware relative to neuroscience (see the illustrative calculation after this list).
* If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.<ref name=SalamonMuehlhauser2012 /><ref name=Bostrom2014 />
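The "computing overhang" in the second bullet above can be made concrete with a toy calculation (the doubling time, costs, and budget are assumptions for illustration, not figures from the cited sources):

<syntaxhighlight lang="python">
# Toy illustration of "computing overhang" under assumed numbers: if
# hardware cost per emulation halves on a fixed schedule while the
# required neuroscience lags, a later software arrival means far more
# emulations can be launched at once, making the transition more abrupt.
HARDWARE_DOUBLING_YEARS = 2.0   # assumed cost-halving period
COST_TODAY_PER_EM = 1e9         # assumed cost to run one emulation today (arbitrary units)

def emulations_at_arrival(years_until_software: float, budget: float) -> float:
    """Emulations runnable the moment emulation software arrives."""
    cost_then = COST_TODAY_PER_EM / 2 ** (years_until_software / HARDWARE_DOUBLING_YEARS)
    return budget / cost_then

BUDGET = 1e10
for years in (10, 20, 30):
    print(f"software in {years} years: ~{emulations_at_arrival(years, BUDGET):,.0f} emulations")
# Each extra decade multiplies the launch-time fleet 32-fold (2**5),
# which is the sense in which delay enlarges the overhang.
</syntaxhighlight>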
Arguments for slowing brain-emulation research:
* Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate [[Existential risk from artificial general intelligence|risks from uncontrolled AI]].<ref name=EckersleySandberg2013 /><ref name=Bostrom2014>{{cite book |last1=Bostrom |first1=Nick |title=Superintelligence: Paths, Dangers, Strategies |chapter=Ch. 14: The strategic picture |date=2014 |publisher=Oxford University Press |isbn=978-0199678112}}</ref> Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation, on the grounds that brain emulation would require understanding the workings and functions of the brain's different components, along with the technological know-how to emulate neurons; as an analogy, reverse engineering even the Microsoft Windows code base is already hard, and reverse engineering the brain would likely be much harder. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk.<ref name=SalamonMuehlhauser2012>{{cite web |first1=Anna |last1=Salamon |first2=Luke |last2=Muehlhauser |title=Singularity Summit 2011 Workshop Report |url=https://intelligence.org/files/SS11Workshop.pdf |website=Machine Intelligence Research Institute |date=2012 |access-date=28 June 2014}}</ref>
* Waiting might give society more time to think about the consequences of brain emulation and to develop institutions to improve cooperation.<ref name=EckersleySandberg2013 /><ref name=Bostrom2014 />

Emulation research would also accelerate neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and the capability for [[psychological manipulation]].<ref name=Bostrom2014 />

Emulations might be easier to control than ''de novo'' AI because:
# Human abilities, behavioral tendencies, and vulnerabilities are more thoroughly understood, so control measures might be more intuitive and easier to plan.<ref name=SalamonMuehlhauser2012 /><ref name=Bostrom2014 />
# Emulations could more easily inherit human motivations.<ref name=Bostrom2014 />
# Emulations are harder to manipulate than ''de novo'' AI, because brains are messy and complicated; this could reduce the risk of their rapid takeoff.<ref name=EckersleySandberg2013 /><ref name=Bostrom2014 />

Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition.<ref name=Bostrom2014 /> Unlike AI, an emulation would not be able to rapidly expand beyond the size of a human brain.<ref name=Bostrom2014 /> Emulations running at digital speeds would have less of an intelligence differential vis-à-vis AI and so might more easily control AI.<ref name=Bostrom2014 />

As a counterpoint to these considerations, Bostrom notes some downsides:
# Even if we better understand human behavior, the ''evolution'' of emulation behavior under self-improvement might be much less predictable than the evolution of safe ''de novo'' AI under self-improvement.<ref name=Bostrom2014 />
# Emulations may not inherit all human motivations; perhaps they would inherit our darker motivations, or would behave abnormally in the unfamiliar environment of cyberspace.<ref name=Bostrom2014 />
# Even if there were a slow takeoff toward emulations, there would still be a second transition to ''de novo'' AI later on; two intelligence explosions may mean more total risk.<ref name=Bostrom2014 />

Because of the postulated difficulties that a whole-brain-emulation-generated [[superintelligence]] would pose for the control problem, the computer scientist [[Stuart J. Russell]], in his book ''[[Human Compatible]]'', rejects creating one, simply calling it "so obviously a bad idea".<ref>{{Cite book|last=Russell|first=Stuart|author-link=Stuart J. Russell|title=[[Human Compatible|Human Compatible: Artificial Intelligence and the Problem of Control]]|publisher=[[Viking Press]]|year=2019|isbn=978-0-525-55861-3|oclc=1113410915}}</ref>