[Update 12/21/16: I was able to locate and buy this afternoon a copy of Ethics in the Real World (2016), a collection of 82 essays by Peter Singer, at a Barnes & Noble at Settler's Ridge in suburban Pittsburgh. The 4-page essay "Rights for Robots?" was written by Peter Singer (with Agata Sagan). Though this essay was written in 2009, the ethical issues it raises about robots seem even more timely and relevant today.]

[Kip Currier: Just read the New York Times review (excerpted below) of Princeton University philosopher Peter Singer's new book “Ethics in the Real World: 82 Brief Essays on Things That Matter" and was intrigued by some of the chapter titles, like "Rights for Robots?" Unfortunately, the Barnes & Noble near me is holding their only print copy for another customer. But I'll pick up a copy elsewhere this week and look forward to checking it out.]

"In his new book, “Ethics in the Real World: 82 Brief Essays on Things That Matter,” Mr. Singer picks up the topics of animal rights and poverty amelioration and runs quite far with them. But he’s written better and more fully about these issues elsewhere; they are not the primary reason to come to this book. “Ethics in the Real World” comprises short pieces, most of them previously published. This book is interesting because it offers a chance to witness this influential thinker grapple with more offbeat questions. Among the essay titles here: “Should Adult Sibling Incest Be a Crime?”; “Is It O.K. to Cheat at Football?”; “Tiger Mothers or Elephant Mothers?”; “Rights for Robots?”; and “Kidneys for Sale?” This book is the equivalent of a moral news conference, or a particularly good Terry Gross interview."
Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label robotic ethics. Show all posts
Tuesday, December 20, 2016
‘Ethics in the Real World,’ Peter Singer’s Provocative Essays; Book Review by Dwight Garner, New York Times, 12/19/16
Book Review by Dwight Garner, New York Times; ‘Ethics in the Real World,’ Peter Singer’s Provocative Essays:
Sunday, December 18, 2016
The Wild West of Robotic "Rights and Wrongs"; Ethics and Information Blog, 12/18/16
Kip Currier, Ethics and Information Blog; The Wild West of Robotic "Rights and Wrongs"
The challenge of "robot ethics"--how to imbue robotic machines and artificial intelligence (AI) with the "right" programming and protocols to make ethical decisions--is a hot topic in academe and business, particularly right now in its application to autonomous self-driving vehicles (e.g. Uber, Apple, Google).

When we think about ethical questions addressing how robots should or should not act, Isaac Asimov's oft-discussed "Three Laws of Robotics", spelled out in his 1942 short story "Runaround", certainly come to mind (see here). Themes of robots making judgments of "right and wrong", as well as ethical topics exploring AI accountability and whether "human rights" should be inclusive of "rights-for-robots", have also been prominent in depictions of robots and AI in numerous science fiction films and TV shows over the past 50+ years:

Gort in The Day the Earth Stood Still (1951 and 2008) (Klaatu...Barada...Nikto!). 2001: A Space Odyssey (1968) and the monotonal, merciless HAL 9000 ("Open the pod bay doors, HAL"). 1983's WarGames, starring Brat Pack-ers Matthew Broderick and Ally Sheedy, can also be seen as a cautionary tale of ethical-decision-making-gone-awry in a proto-machine-learning gaming program ("Shall we play a game?"), used for then-Cold War military and national security purposes. Blade Runner (1982) revealed Replicants-with-an-expiration-date-on-the-run. (We'll have to wait and see what's up with the Replicants until the sequel Blade Runner 2049 debuts in late 2017.) Arnold Schwarzenegger played a killer robot from the future in The Terminator (1984), and returned as a reprogrammed/converted "robot savior" in Terminator 2: Judgment Day (1991). Star Trek: The Next Generation (1987-1994) throughout its run explored "sentience" and the nature of humans AND non-humans "being human", as seen through the eyes of Enterprise android crew member Commander Data (see the 1989 standout episode "The Measure of a Man").
Fifth-column, sometimes-sleeper Cylons with "many copies" and "a plan" were the driving force in 2004-2009's Battlestar Galactica. Will Smith portrayed a seriously robophobic cop hot on the heels of a homicidal robot suspect in the Asimov-short-story-collection-suggested I, Robot (2004). Most recently, robots are front and center (if not always readily identifiable!) in this year's breakout HBO hit Westworld (see the official Opening Credits here). Short-hand for the show's plot: "robots in an American West-set amusement park for the human rich". But it's a lot more than that. Westworld is an inspired reimagining ("Game of Thrones" author George R.R. Martin recently called this first season of “Westworld” a "true masterpiece") of the same-named, fairly forgettable (but for Yul Brynner's memorable robot role, solely credited as "Gunslinger"!) 1973 Michael Crichton-written/directed film. What the 1973 version lacked in deep-dive thoughts, the new version makes up for in spades, and then some: This is a show about robots (but really, the nature of consciousness and agency) for thinking people--with, ahem, unapologetic dashes of Game of Thrones-esque sex and violence ("It's Not TV. It's HBO.(R)") sprinkled liberally throughout.

Much of the issue of robot ethics has tended to center on the impacts of robots on humans. "Impacts" here often means, at a minimum, job obsolescence for humans (see here and here), or, at worst (especially in terms of pop culture narratives), euphemistic code for "death and destruction to humans". (Carnegie Mellon University PhD and author Daniel H. Wilson's 2011 New York Times best-selling Robopocalypse chillingly tapped into fears of a "Digital Axis of Evil"--AI/robots/Internet-of-Things--revolution of robotic rampage and revenge against humans, perceived as both oppressors and inferior.
This year Stephen Hawking and Elon Musk, among others (from 2015, see here and here), also voiced real-world concerns about the threats AI may hold for future humanity.) But thought-provoking, at times unsettling and humanizing depictions of robotic lifeforms--Westworld "hosts" Maeve and Dolores et al., robot boy David in Steven Spielberg's A.I. Artificial Intelligence (2001), as well as animated treatments in 2008's WALL-E from Pixar and 2016's Hum (see post below linked here)--are leveling this imbalance, flipping the "humancentric privilege" and spurring us to think about the impacts of human beings on robots. What ethical considerations, if any, are owed to the latter? Can and should (will?) robots/AI be seen as emergent "forms of life", perhaps even with "certain inalienable Rights" (Robot Lives Matter?)?

(Aside: As a kid who grew up watching the "Lost in Space" TV show (1965-1968) in syndication in the 1970s, I'll always have a soft spot for the Robinson family's trusty robot ("Danger, Will Robinson, Danger!") simply called...wait for it..."Robot".)

In the meantime--at least until sentient robots can think about "the nature of their own existence" a la Westworld, or the advent of the "singularity" (sometimes described as the merging of man and machine and/or the moment when machine intelligence surpasses that of humans)--these fictionalized creations serve as allegorical constructs for pondering important, enduring questions: What it means to be "human". The nature of "right" and "wrong", and the shades in between. Interpretations of societal values, like "compassion", "decency", and "truth". And what it means to live in a "civilized" society. Sound timely?
Wednesday, September 21, 2016
British Philosophers Consider the Ethics of a Robotic Future; PC Magazine, 9/20/16
Tom Brant, PC Magazine; British Philosophers Consider the Ethics of a Robotic Future:
"The British Standards Institute (BSI) commissioned a group of scientists, academics, ethicists, and philosophers to provide guidance on potential hazards and protective measures. They presented their guidelines at a robotics conference in Oxford, England last week. "As far as I know this is the first published standard for the ethical design of robots," professor of robotics at the University of the West of England Alan Winfield told the Guardian... The EU, which Britain will soon leave, is also working on robot ethics standards. Its provisional code of conduct for robotics engineers and users includes provisions like "robots should act in the best interests of humans" and forbids users from modifying a robot to enable it to function as a weapon."
Sunday, November 8, 2015
The Ethics Behind Driverless Cars; NPR, 11/7/15
Scott Simon, NPR; The Ethics Behind Driverless Cars:
"Despite the optimism behind driverless cars, at times they will have to decide whether it is better to harm the driver or pedestrians. NPR's Scott Simon talks with philosophy professor Patrick Lin."
Wednesday, September 3, 2014
You Should Have a Say in Your Robot Car’s Code of Ethics; Wired, 9/2/14
Jason Millar, Wired; You Should Have a Say in Your Robot Car’s Code of Ethics:
"We Must Embrace Complexity

If we embrace robust informed consent practices in engineering, the sky will not fall. There are some obvious limits to the kinds of ethics settings we should allow in our robot cars. It would be absurd to design a car that allows users to choose to continue straight only when a woman is blocking the road. At the same time, it seems perfectly reasonable to allow a person to sacrifice himself to save a child if doing so aligns with his moral convictions. We can identify limits, even if the task is complex.

Robots, and the ethical issues they raise, are immensely complex. But they require our thoughtful attention if we are to shift our thinking about the ethics of design and engineering, and respond to the burgeoning robotics industry appropriately. Part of this shift in thinking will require us to embrace moral and legal complexity where complexity is required. Unfortunately, bringing order to the chaos does not always result in a simpler world."