Winston Churchill once said, “We shape our buildings; thereafter they shape us.” This is a good frame for a mind-stretching story in the latest New Yorker about AI, and in particular the question of whether intelligent machines could threaten humanity, treating us much as we now treat ants. The story is woven around Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, and his new-ish book “Superintelligence.” Bostrom is an AI optimist. According to the New Yorker:
“Perhaps the most radical of his visions is that superintelligent A.I. will hasten the uploading of minds—what he calls “whole-brain emulations”—technology that might not be possible for centuries, if at all. Bostrom, in his most hopeful mode, imagines emulations not only as reproductions of the original intellect “with memory and personality intact”—a soul in the machine—but as minds expandable in countless ways. “We live for seven decades, and we have three-pound lumps of cheesy matter to think with, but to me it is plausible that there could be extremely valuable mental states outside this little particular set of possibilities that might be much better.””
Scientists are only now starting to debate the ethical and social implications of AI research. Meanwhile, as the New Yorker piece explains, many of the biggest and highest-profile tech companies, such as Google, are in an “A.I. arms race,” buying up AI firms and setting up dedicated AI units.
One researcher who is pessimistic about AI outcomes for humans was asked why he continues:
“[T]he truth is that the prospect of discovery is too sweet.” He smiled awkwardly, the word hanging in the air—an echo of Oppenheimer, who famously said of the bomb, “When you see something that is technically sweet, you go ahead and do it, and you argue about what to do about it only after you have had your technical success.”