David Shariatmadari, The Guardian; If Anyone Builds It, Everyone Dies review – how AI could kill us all
"“History,” they write, “is full of … examples of catastrophic risk being minimised and ignored,” from leaded petrol to Chornobyl. But what about predictions of catastrophic risk being proved wrong? History is full of those, too, from Malthus’s population apocalypse to Y2K. Yudkowsky himself once claimed that nanotechnology would destroy humanity “no later than 2010”.
The problem is that you can be overconfident, inconsistent, a serial doom-monger, and still be right. It’s important to be aware of our own motivated reasoning when considering the arguments presented here; we have every incentive to disbelieve them.
And while it’s true that they don’t represent the scientific consensus, this is a rapidly changing, poorly understood field. What constitutes intelligence, what constitutes “super”, whether intelligence alone is enough to ensure world domination – all of this is furiously debated.
At the same time, the consensus that does exist is not particularly reassuring. In a 2024 survey of 2,778 AI researchers, the median probability placed on “extremely bad outcomes, such as human extinction” was 5%. Worryingly, “having thought more (either ‘a lot’ or ‘a great deal’) about the question was associated with a median of 9%, while having thought ‘little’ or ‘very little’ was associated with a median of 5%”.
Yudkowsky has been thinking about the problem for most of his adult life. The fact that his prediction sits north of 99% might reflect a kind of hysterical monomania, or an especially thorough engagement with the problem. Whatever the case, it feels like everyone with an interest in the future has a duty to read what he and Soares have to say."