Aaron Mak, Slate; When Is Technology Too Dangerous to Release to the Public?
"The announcement has also sparked a debate about how to handle the proliferation of potentially dangerous A.I. algorithms...
It’s worth considering, as OpenAI seems to be encouraging us to do, how
researchers and society in general should approach powerful A.I. models...
Nevertheless, OpenAI said that it would only be publishing a “much
smaller version” of the model due to concerns that it could be abused.
The blog post fretted that it could be used to generate false news
articles, impersonate people online, and generally flood the internet
with spam and vitriol...
“There’s a general philosophy that when the time has come for some
scientific progress to happen, you really can’t stop it,” says
[Robert] Frederking [the principal systems scientist at Carnegie Mellon’s Language Technologies Institute]. “You just need to figure out how you’re going to deal with
it.”"
Ethically tangled aspects of 21st-century societies and cultures. In the vein of Charles Darwin’s 1859 “entangled bank” metaphor—a complex and evolving digital ecosystem of difference and dependence, where humans, technologies, ethics, law, policy, data, and information converge and diverge.
Kip Currier, PhD, JD
Showing posts with label potentially dangerous tech.
Tuesday, February 26, 2019
When Is Technology Too Dangerous to Release to the Public?; Slate, February 22, 2019