# Superintelligence Safety Engineering
| Content Provider | Scilit |
|---|---|
| Author | Yampolskiy, Roman V. |
| Copyright Year | 2015 |
| Description | Consequently, I propose that purely philosophical discussions of ethics for machines be supplemented by scientific work aimed at creating safe machines in the context of a new field I term artificial intelligence (AI) safety engineering. Some concrete work in this important area has already begun (Gordon 1998; Gordon-Spears 2003, 2004). A common theme in AI safety research is the possibility of keeping a superintelligent agent in sealed hardware to prevent it from doing any harm to humankind. Such ideas originate with scientific visionaries such as Eric Drexler, who has suggested confining transhuman machines so that their outputs could be studied and used safely (Drexler 1986). Similarly, Nick Bostrom, a futurologist, has proposed (Bostrom 2008) an idea for an oracle AI (OAI), which would only be capable of answering questions. Finally, in 2010 David Chalmers proposed the idea of a "leakproof" singularity (Chalmers 2010). He suggested that, for safety reasons, AI systems first be restricted to simulated virtual worlds until their behavioral tendencies could be fully understood under controlled conditions. |
| Book Name | Artificial Superintelligence |
| Related Links | https://content.taylorfrancis.com/books/download?dac=C2013-0-25975-4&isbn=9780429174353&doi=10.1201/b18612-11&format=pdf |
| Starting Page | 158 |
| Ending Page | 167 |
| Page Count | 10 |
| DOI | 10.1201/b18612-11 |
| Language | English |
| Publisher | Informa UK Limited |
| Publisher Date | 2015-06-17 |
| Access Restriction | Open |
| Subject Keyword | Artificial Superintelligence; History and Philosophy of Science; Safety Engineering; Behavioral; Chalmers; Machines; Scientific; Gordon; Bostrom; Drexler |
| Content Type | Text |
| Resource Type | Chapter |