Showing posts with label artificial general intelligence (AGI).

Wednesday, October 30, 2024

A Harris Presidency Is the Only Way to Stay Ahead of A.I.; The New York Times, October 29, 2024

THOMAS L. FRIEDMAN, The New York Times; A Harris Presidency Is the Only Way to Stay Ahead of A.I.

"Kamala Harris, given her background in law enforcement, connections to Silicon Valley and the work she has already done on A.I. in the past four years, is up to this challenge, which is a key reason she has my endorsement for the presidency...

I am writing a book that partly deals with this subject and have benefited from my tutorials with Craig Mundie, the former chief research and strategy officer for Microsoft who still advises the company. He is soon coming out with a book of his own related to the longer-term issues and opportunities of A.G.I., written with Eric Schmidt, the former Google C.E.O., and Henry Kissinger, who died last year and worked on the book right up to the end of his life.

It is titled “Genesis: Artificial Intelligence, Hope, and the Human Spirit.” The book invokes the Bible’s description of the origin of humanity because the authors believe that our A.I. moment is an equally fundamental turning point for our species.

I agree. We have become Godlike as a species in two ways: We are the first generation to intentionally create a computer with more intelligence than God endowed us with. And we are the first generation to unintentionally change the climate with our own hands.

The problem is we have become Godlike without any agreement among us on the Ten Commandments — on a shared value system that should guide the use of our newfound powers. We need to fix that fast. And no one is better positioned to lead that challenge than the next U.S. president, for several reasons."

Saturday, June 29, 2024

AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’; The Guardian, June 29, 2024

Zoë Corbyn, The Guardian; AI scientist Ray Kurzweil: ‘We are going to expand intelligence a millionfold by 2045’

"The American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer – and some of his predictions no longer seem so wacky. Kurzweil’s day job is principal researcher and AI visionary at Google. He spoke to the Observer in his personal capacity as an author, inventor and futurist...

What of the existential risk of advanced AI systems – that they could gain unanticipated powers and seriously harm humanity? AI “godfather” Geoffrey Hinton left Google last year, in part because of such concerns, while other high-profile tech leaders such as Elon Musk have also issued warnings. Earlier this month, OpenAI and Google DeepMind workers called for greater protections for whistleblowers who raise safety concerns. 

I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]. We do have to be aware of the potential here and monitor what AI is doing. But just being against it is not sensible: the advantages are so profound. All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive...

Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? 

Being wealthy allows you to afford these technologies at an early point, but also [at] one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three-quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time...

The book looks in detail at AI’s job-killing potential. Should we be worried? 

Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.

There are other alarming ways, beyond job loss, that AI is promising to transform the world: spreading disinformation, causing harm through biased algorithms and supercharging surveillance. You don’t dwell much on those… 

We do have to work through certain types of issues. We have an election coming and “deepfake” videos are a worry. I think we can actually figure out [what’s fake] but if it happens right before the election we won’t have time. On issues of bias, AI is learning from humans and humans have bias. We’re making progress but we’re not where we want to be. There are also issues around fair data use by AI that need to be sorted out via the legal process."

Wednesday, July 12, 2023

Inside the White-Hot Center of A.I. Doomerism; The New York Times, July 11, 2023

Kevin Roose, The New York Times; Inside the White-Hot Center of A.I. Doomerism

"But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things.

Many of them believe that A.I. models are rapidly approaching a level where they might be considered artificial general intelligence, or “A.G.I.,” the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us...

And lastly, he made a moral case for Anthropic’s decision to create powerful A.I. systems, in the form of a thought experiment.

“Imagine if everyone of good conscience said, ‘I don’t want to be involved in building A.I. systems at all,’” he said. “Then the only people who would be involved would be the people who ignored that dictum — who are just, like, ‘I’m just going to do whatever I want.’ That wouldn’t be good.”"