Robert Booth, The Guardian: ‘The biggest decision yet’
"Humanity will have to decide by 2030 whether to take the “ultimate risk” of letting artificial intelligence systems train themselves to become more powerful, one of the world’s leading AI scientists has said.
Jared Kaplan, the chief scientist and co-owner of the $180bn (£135bn) US startup Anthropic, said a choice was looming about how much autonomy the systems should be given to evolve.
The move could trigger a beneficial “intelligence explosion” – or be the moment humans end up losing control...
He is not alone at Anthropic in voicing concerns. One of his co-founders, Jack Clark, said in October he was both an optimist and “deeply afraid” about the trajectory of AI, which he called “a real and mysterious creature, not a simple and predictable machine”.
Kaplan said he was very optimistic about the alignment of AI systems with the interests of humanity up to the level of human intelligence, but was concerned about the consequences if and when they exceed that threshold."