The subtitle of the doom bible to be published by AI extinction prophets Eliezer Yudkowsky and Nate Soares later this month is “Why superhuman AI would kill us all.” But it really should be “Why superhuman AI WILL kill us all,” because even the coauthors don’t believe that the world will take the necessary measures to stop AI from eliminating all non-super humans.
The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-appointed Cassandras, I ask them outright if they believe that they personally will meet their ends through some machination of superintelligence. The answers come promptly: “yeah” and “yup.”
I’m not surprised, because I’ve read the book—the title, by the way, is If Anyone Builds It, Everyone Dies. Still, it's a jolt to hear this. It is one thing to, say, write about cancer statistics and quite another to talk about coming to terms with a fatal diagnosis. I ask them how they think the end will come for them. Yudkowsky at first dodges the question.