Tuesday, February 26, 2019

When Is Technology Too Dangerous to Release to the Public?; Slate, February 22, 2019

Aaron Mak, Slate; When Is Technology Too Dangerous to Release to the Public?
"The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms...
It’s worth considering, as OpenAI seems to be encouraging us to do, how
researchers and society in general should approach powerful A.I. models...
Nevertheless, OpenAI said that it would only be publishing a “much
smaller version” of the model due to concerns that it could be abused.
The blog post fretted that it could be used to generate false news
articles, impersonate people online, and generally flood the internet
with spam and vitriol...
“There’s a general philosophy that when the time has come for some
scientific progress to happen, you really can’t stop it,” says
[Robert] Frederking [the principal systems scientist at Carnegie Mellon’s Language Technologies Institute]. “You just need to figure out how you’re going to deal with
it.”"