Showing posts with label AI safety. Show all posts

Thursday, August 29, 2024

California advances landmark legislation to regulate large AI models; AP, August 28, 2024

TRÂN NGUYỄN, AP; California advances landmark legislation to regulate large AI models

"Wiener’s proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance of reining in the technology and its potential risks without stifling the booming homegrown industry. 

California, home to 35 of the world’s top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other things."

Thursday, March 7, 2019

Does AI Ethics Have A Bad Name?; Forbes, March 7, 2019

Calum Chace, Forbes; Does AI Ethics Have A Bad Name?

"One possible downside is that people outside the field may get the impression that some sort of moral agency is being attributed to the AI, rather than to the humans who develop AI systems.  The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else. It makes no more sense to attribute moral agency to these systems than it does to a car or a rock.  It will probably be many years before we create an AI which can reasonably be described as a moral agent...

The issues explored in the field of AI ethics are important, but it would help to clarify them if some of the heat were taken out of the discussion.  It might help if instead of talking about AI ethics, we talked about beneficial AI and AI safety.  When an engineer designs a bridge, she does not finish the design and then consider how to stop it from falling down.  The ability to remain standing in all foreseeable circumstances is part of the design criteria, not a separate discipline called “bridge ethics”. Likewise, if an AI system has deleterious effects, it is simply a badly designed AI system.

Interestingly, this change has already happened in the field of AGI research, the study of whether and how to create artificial general intelligence, and how to avoid the potential downsides of that development, if and when it does happen.  Here, researchers talk about AI safety. Why not make the same move in the field of shorter-term AI challenges?"