Sunday, January 9, 2022

Artificial intelligence author kicks off Friends of the Library nonfiction lecture series; Naples Daily News, January 7, 2022

By Vicky Bowles, Naples Daily News

"Over the past few decades, a bunch of smart guys built artificial intelligence systems that have had deep impact on our everyday lives. But do they — and their billion-dollar companies — have the human intelligence to keep artificial intelligence safe and ethical?

Questions like this are part of the history and overview of artificial intelligence in Cade Metz’s book “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.”

On Monday, Jan. 17, Metz, a technology correspondent for The New York Times and former senior writer for Wired magazine, is the first speaker in the 2022 Nonfiction Author Series, sponsored by the nonprofit Friends of the Library of Collier County, which raises money for public library programs and resources...

"NDN: This was such a wonderful sentence early on in your book: “As an undergraduate at Harvard (in the 1940s), using over three thousand vacuum tubes and a few parts from an old B-52 bomber, (Marvin) Minsky built what may have been the first neural network.” Is that kind of amateur, garage-built science still possible, given the speed of innovation now and the billions of dollars that are thrown at development?

CM: It certainly is. It happens all the time, inside universities and out. But in the AI field, this has been eclipsed by the work at giant companies like Google and Facebook. That is one of the major threads in my book: academia struggling to keep up with the rapid rate of progress in the tech industry. It is a real problem. So much of the talent is moving into industry, leaving the cupboard bare at universities. Who will teach the next generation? Who will keep the big tech companies in check? 

NDN: I was amused to see that Google and DeepMind built a team “dedicated to what they called ‘AI safety,’ an effort to ensure that the lab’s technologies did no harm.” My question is, who defines harm within this race to monetize new technologies? Isn’t, for example, the staggering amount of electrical power used to run these systems harmful to the globe?

CM: I am glad you were amused. These companies say we should trust them to ensure AI "safety" and "ethics," but the reality is that safety and ethics are in the eye of the beholder. They can shape these terms to mean whatever they like. Many of the AI researchers at the heart of my book are genuinely concerned about how AI will be misused — how it will cause harm — but when they get inside these large companies, they find that their views clash with the economic aims of these tech giants."
