===Emulation timelines and AI risk===
{{Update section|date=June 2024|reason=most sources are more than 10 years old, and may not reflect the current state of the debate}}

Kenneth D. Miller, a professor of neuroscience at Columbia University and a co-director of the Center for Theoretical Neuroscience, has raised doubts about the practicality of mind uploading. His central argument is that reconstructing neurons and their connections is a formidable task in itself, but far from sufficient: the operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons, so capturing them in a single "frozen" state may not be enough. In addition, the nature of these signals may require modeling at the molecular level and beyond. While not rejecting the idea in principle, Miller therefore believes that the complexity of "absolute" duplication of an individual mind will remain insurmountable for the next several hundred years.<ref name="abondonallhopetoupload">{{Cite news |url=https://www.nytimes.com/2015/10/11/opinion/sunday/will-you-ever-be-able-to-upload-your-brain.html |title=Will You Ever Be Able to Upload Your Brain? |work=New York Times |first=Kenneth D. |last=Miller |date=October 10, 2015}}</ref>

There are very few feasible technologies that humans have refrained from developing<!-- what technologies? -->. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, so their development will likely continue. We may also have brain emulations for a brief but significant period on the way to non-emulation-based human-level AI.<ref name="hanson" /> Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance.<ref name=EckersleySandberg2013 />

Arguments for speeding up brain-emulation research:
* If neuroscience rather than computing power is the bottleneck on brain emulation, emulation advances may be more erratic and unpredictable, depending on when new scientific discoveries happen.<ref name=EckersleySandberg2013 /><ref name=ShulmanSandberg2010>{{cite journal |last1=Shulman |first1=Carl |first2=Anders |last2=Sandberg |title=Implications of a Software-Limited Singularity |journal=ECAP10: VIII European Conference on Computing and Philosophy |year=2010 |url=http://intelligence.org/files/SoftwareLimited.pdf |access-date=17 May 2014 |editor1-first=Klaus |editor1-last=Mainzer}}</ref><ref name=Hanson2009 /> Limited computing power would mean the first emulations would run slower, making them easier to adapt to and giving society more time to accommodate the technology.<ref name=Hanson2009>{{cite web |last1=Hanson |first1=Robin |title=Bad Emulation Advance |url=http://www.overcomingbias.com/2009/11/bad-emulation-advance.html |website=Overcoming Bias |access-date=28 June 2014 |date=26 Nov 2009}}</ref>
* Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production,<ref name=EckersleySandberg2013 /> which could increase the "computing overhang"<ref name="MuehlhauserSalamon2012">{{cite book |last1=Muehlhauser |first1=Luke |last2=Salamon |first2=Anna |chapter=Intelligence Explosion: Evidence and Import |title=Singularity Hypotheses: A Scientific and Philosophical Assessment |publisher=Springer |year=2012 |editor=Eden |editor-first=Amnon |editor2=Søraker |editor-first2=Johnny |editor3=Moor |editor-first3=James H. |editor4=Steinhart |editor-first4=Eric |chapter-url=http://intelligence.org/files/IE-EI.pdf}}</ref> from excess hardware relative to neuroscience.
* If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.<ref name=SalamonMuehlhauser2012 /><ref name=Bostrom2014 />

Arguments for slowing brain-emulation research:
* Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate [[Existential risk from artificial general intelligence|risks from uncontrolled AI]].<ref name=EckersleySandberg2013 /><ref name=Bostrom2014>{{cite book |last1=Bostrom |first1=Nick |title=Superintelligence: Paths, Dangers, Strategies |chapter=Ch. 14: The strategic picture |date=2014 |publisher=Oxford University Press |isbn=978-0199678112}}</ref> Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding how the brain's different components work and function, along with the technological know-how to emulate neurons; by analogy, reverse engineering the Microsoft Windows code base is already hard, and reverse engineering the brain would likely be much harder. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk.<ref name=SalamonMuehlhauser2012>{{cite web |first1=Anna |last1=Salamon |first2=Luke |last2=Muehlhauser |title=Singularity Summit 2011 Workshop Report |url=https://intelligence.org/files/SS11Workshop.pdf |website=Machine Intelligence Research Institute |date=2012 |access-date=28 June 2014}}</ref>
* Waiting might give society more time to think about the consequences of brain emulation and to develop institutions to improve cooperation.<ref name=EckersleySandberg2013 /><ref name=Bostrom2014 /> Emulation research would also accelerate neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for [[psychological manipulation]].<ref name=Bostrom2014 />

Emulations might be easier to control than ''de novo'' AI because:
# Human abilities, behavioral tendencies, and vulnerabilities are more thoroughly understood, so control measures might be more intuitive and easier to plan.<ref name=SalamonMuehlhauser2012 /><ref name=Bostrom2014 />
# Emulations could more easily inherit human motivations.<ref name=Bostrom2014 />
# Emulations are harder to manipulate than ''de novo'' AI, because brains are messy and complicated; this could reduce the risk of a rapid takeoff.<ref name=EckersleySandberg2013 /><ref name=Bostrom2014 /> Also, emulations may be bulkier and require more hardware than AI, which would further slow the speed of a transition.<ref name=Bostrom2014 /> Unlike AI, an emulation would not be able to rapidly expand beyond the size of a human brain.<ref name=Bostrom2014 /> Emulations running at digital speeds would have less of an intelligence differential vis-à-vis AI and so might more easily control AI.<ref name=Bostrom2014 />

As counterpoint to these considerations, Bostrom notes some downsides:
# Even if we better understand human behavior, the ''evolution'' of emulation behavior under self-improvement might be much less predictable than the evolution of safe ''de novo'' AI under self-improvement.<ref name=Bostrom2014 />
# Emulations may not inherit all human motivations. Perhaps they would inherit our darker motivations, or would behave abnormally in the unfamiliar environment of cyberspace.<ref name=Bostrom2014 />
# Even if there is a slow takeoff toward emulations, there would still be a second transition to ''de novo'' AI later on. Two intelligence explosions may mean more total risk.<ref name=Bostrom2014 />

Because of the postulated difficulties that a superintelligence generated by whole brain emulation would pose for the control problem, computer scientist [[Stuart J. Russell]], in his book ''[[Human Compatible]]'', rejects creating one, simply calling it "so obviously a bad idea".<ref>{{Cite book |last=Russell |first=Stuart |author-link=Stuart J. Russell |title=[[Human Compatible|Human Compatible: Artificial Intelligence and the Problem of Control]] |publisher=[[Viking Press]] |year=2019 |isbn=978-0-525-55861-3 |oclc=1113410915}}</ref>