Date: 2013-05-29 12:47:25
Tags: Science, Philosophy of artificial intelligence, Transhumanists, Futurology, Computational neuroscience, Friendly artificial intelligence, Ben Goertzel, Singularity Institute for Artificial Intelligence, Strong AI, Singularitarianism, Time, Future

MIRI (Machine Intelligence Research Institute): AI Risk Bibliography 2012
Source URL: intelligence.org
File Size: 284.13 KB
Risks and mitigation strategies for Oracle AI
Abstract: There is no strong reason to believe human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This pose…
DocID: 1rOi4
The Data You Have... Tomorrow’s Information Business
Marjorie M.K. Hlava, President, Access Innovations, Inc.
DocID: 1nFdt
Aligning Superintelligence with Human Interests: An Annotated Bibliography
Nate Soares, Machine Intelligence Research Institute
DocID: 1gGhe
Microsoft Word - P583_584_CNT_14_45__KOKORO_IDX.doc
DocID: 1gvAC
Predicting AGI: What can we say when we know so little?
Fallenstein, Benja; Mennen, Alex
DocID: 1gsMG