Sunday, December 18, 2016

The Wild West of Robotic "Rights and Wrongs"

Kip Currier, Ethics and Information Blog, 12/18/16
The challenge of "robot ethics"--how to imbue robotic machines and artificial intelligence (AI) with the "right" programming and protocols to make ethical decisions--is a hot topic in academe and business, particularly right now in its application to autonomous self-driving vehicles (e.g. Uber, Apple, Google).
When we think about ethical questions addressing how robots should or should not act, Isaac Asimov's oft-discussed "Three Laws of Robotics", spelled out in his 1942 short story "Runaround", certainly come to mind (see here).
Themes of robots making judgments of "right and wrong", as well as ethical questions about AI accountability and whether "human rights" should extend to "rights for robots", have been prominent in depictions of robots and AI in numerous science fiction films and TV shows over the past 50+ years: Gort in The Day the Earth Stood Still (1951 and 2008) (Klaatu...Barada...Nikto!); 2001: A Space Odyssey (1968) and the monotonal, merciless HAL 9000 ("Open the pod bay doors, HAL"). 1983's WarGames, starring Brat Pack-ers Matthew Broderick and Ally Sheedy, can also be seen as a cautionary tale of ethical-decision-making-gone-awry in a proto-machine-learning gaming program ("Shall we play a game?") used for then-Cold War military and national security purposes.
Blade Runner (1982) gave us Replicants-with-an-expiration-date-on-the-run. (We'll have to wait and see what's up with the Replicants until the sequel, Blade Runner 2049, debuts in late 2017.) Arnold Schwarzenegger played a killer robot from the future in The Terminator (1984), and returned as a reprogrammed/converted "robot savior" in Terminator 2: Judgment Day (1991). Star Trek: The Next Generation (1987-1994) explored "sentience" and the nature of humans AND non-humans "being human" throughout its run, as seen through the eyes of Enterprise android crew member Commander Data (see the standout 1989 episode "The Measure of a Man"). Fifth-column, sometimes-sleeper Cylons with "many copies" and "a plan" were the driving force in 2004-2009's Battlestar Galactica. Will Smith portrayed a seriously robophobic cop hot on the heels of a homicidal robot suspect in I, Robot (2004), suggested by Asimov's short-story collection of the same name.
Most recently, robots are front and center (if not always readily identifiable!) in this year's breakout HBO hit Westworld (see the official Opening Credits here). The short-hand for the show's plot: "robots in an American West-set amusement park for the human rich". But it's a lot more than that. Westworld is an inspired reimagining (Game of Thrones author George R.R. Martin recently called this first season of Westworld a "true masterpiece") of the same-named, fairly forgettable 1973 film written and directed by Michael Crichton--forgettable, that is, but for Yul Brynner's memorable robot role, credited solely as "Gunslinger"! What the 1973 version lacked in deep-dive thinking, the new version makes up for in spades, and then some: this is a show about robots (but really, about the nature of consciousness and agency) for thinking people--with, ahem, unapologetic dashes of Game of Thrones-esque sex and violence ("It's Not TV. It's HBO.(R)") sprinkled liberally throughout.
Much of the discussion of robot ethics has tended to center on the impacts of robots on humans--with "impacts" often meaning, at a minimum, job obsolescence for humans (see here and here), or, at worst (especially in pop culture narratives), euphemistic code for "death and destruction to humans". (Carnegie Mellon University PhD and author Daniel H. Wilson's 2011 New York Times best-selling Robopocalypse chillingly tapped into fears of a "Digital Axis of Evil"--an AI/robots/Internet-of-Things revolution of robotic rampage and revenge against humans, perceived as both oppressors and inferiors. This year Stephen Hawking and Elon Musk, among others (from 2015, see here and here), also voiced real-world concerns about the threats AI may hold for future humanity.)
But thought-provoking, at times unsettling and humanizing depictions of robotic lifeforms--Westworld "hosts" Maeve and Dolores et al., robot boy David in Steven Spielberg's 2001 film A.I. Artificial Intelligence, as well as animated treatments in Pixar's WALL-E (2008) and 2016's Hum (see post below, linked here)--are leveling this imbalance: flipping the "humancentric privilege" and spurring us to think about the impacts of human beings on robots. What ethical considerations, if any, are owed to the latter? Can and should (will?) robots/AI be seen as emergent "forms of life", perhaps even with "certain inalienable Rights" (Robot Lives Matter?)?
(Aside: As a kid who grew up watching the "Lost in Space" TV show (1965-1968) in syndication in the 1970s, I'll always have a soft spot for the Robinson family's trusty robot ("Danger, Will Robinson, Danger!") simply called...wait for it..."Robot".)
In the meantime--at least until sentient robots can think about "the nature of their own existence" a la Westworld, or the advent of the "singularity" (sometimes described as the merging of man and machine and/or the moment when machine intelligence surpasses that of humans)--these fictionalized creations serve as allegorical constructs to ponder important, enduring questions: What it means to be "human". The nature of "right" and "wrong", and the shades in between. Interpretations of societal values, like "compassion", "decency", and "truth". And what it means to live in a "civilized" society. Sound timely?
