I’ve been finishing up this book during the week.
The gist of it is: “What can we do now to prepare for superintelligent AIs who may not have tender regard for us humans?”
Right off the bat, let me say that I’m glad someone is thinking about this. For reasons that he details in the book, Nick Bostrom believes that we need to be talking about this rather abstruse topic before a superintelligent AI comes on the scene, because it could happen pretty damn quickly. It’s a worthy argument.