Showing posts with label ethical concerns. Show all posts

Thursday, June 8, 2023

How ethics is becoming a key part of research in tech; The Stanford Daily, June 7, 2023

Cassandra Huff, The Stanford Daily; How ethics is becoming a key part of research in tech

"Building off the IRB model, in 2020 the Ethics in Society Review (ESR) board was created under the McCoy Family Center, the Center for Advanced Study in Behavioral Sciences (CASBS) and Human-Centered AI (HAI) to make ethics a core part of research in computer science. The ESR acts similarly to the IRB by examining ethical concerns to minimize potential harm of the research before a project is approved for funding.

This process is integrated into grant proposal applications in HAI. After HAI reviews the technical merits of a proposal, it is handed off to the ESR, which assigns an interdisciplinary panel of faculty to review each of them. This panel acts as advisors on ethical issues to identify challenges and provide additional guidance on the ethical component of the research. Once completed, the panel will either release research funds, or recommend more iterations of the review process.

"The ESR is not meant to determine whether the proposal should be funded, but rather to analyze the unintended consequences of the research prior to the start of the project. In discussing what ESR does, Betsy Rajala, Program Director at CASBS, said, “Every time you touch research these questions come up and [it’s better to] think about them sooner rather than later.”"

Thursday, September 1, 2022

Ethical issues of facial recognition technology; TechRepublic, August 31, 2022

Patrick Gray in Artificial Intelligence, TechRepublic; Ethical issues of facial recognition technology

"Facial recognition technology has entered the mass market, with our faces now able to unlock our phones and computers. While the ability to empower machines with the very human ability to identify a person with a quick look at their face is exciting, it’s not without significant ethical concerns.

Suppose your company is considering facial recognition technology. In that case, it’s essential to be aware of these concerns and ready to address them, which may even include abandoning facial recognition altogether.

When assessing these ethical concerns, consider how your customers, employees and the general public would react if they fully knew how you’re using the technology. If that thought is disconcerting, you may be veering into an ethical “danger zone.”"

Monday, July 13, 2020

Digital tools against COVID-19: taxonomy, ethical challenges, and navigation aid; The Lancet, June 29, 2020

Urs Gasser, PhD, Marcello Ienca, PhD, James Scheibner, PhD, Joanna Sleigh, MA, Prof Effy Vayena, PhD, The Lancet; Digital tools against COVID-19: taxonomy, ethical challenges, and navigation aid

"Summary

Data collection and processing via digital public health technologies are being promoted worldwide by governments and private companies as strategic remedies for mitigating the COVID-19 pandemic and loosening lockdown measures. However, the ethical and legal boundaries of deploying digital tools for disease surveillance and control purposes are unclear, and a rapidly evolving debate has emerged globally around the promises and risks of mobilising digital tools for public health. To help scientists and policy makers to navigate technological and ethical uncertainty, we present a typology of the primary digital public health applications that are in use. These include proximity and contact tracing, symptom monitoring, quarantine control, and flow modelling. For each, we discuss context-specific risks, cross-sectional issues, and ethical concerns. Finally, recognising the need for practical guidance, we propose a navigation aid for policy makers and other decision makers for the ethical development and use of digital public health tools."

Thursday, September 5, 2019

AI Ethics Guidelines Every CIO Should Read; Information Week, August 7, 2019

John McClurg, Information Week; AI Ethics Guidelines Every CIO Should Read

"Technology experts predict the rate of adoption of artificial intelligence and machine learning will skyrocket in the next two years. These advanced technologies will spark unprecedented business gains, but along the way enterprise leaders will be called to quickly grapple with a smorgasbord of new ethical dilemmas. These include everything from AI algorithmic bias and data privacy issues to public safety concerns from autonomous machines running on AI.

Because AI technology and use cases are changing so rapidly, chief information officers and other executives are going to find it difficult to keep ahead of these ethical concerns without a roadmap. To guide both deep thinking and rapid decision-making about emerging AI technologies, organizations should consider developing an internal AI ethics framework."

Monday, January 21, 2019

The ethics of gene editing: Lulu, Nana, and 'Gattaca'; Johns Hopkins University, January 17, 2019


Saralyn Cruickshank, Johns Hopkins University; The ethics of gene editing: Lulu, Nana, and 'Gattaca'

"Under the direction of Rebecca Wilbanks, a postdoctoral fellow in the Berman Institute of Bioethics and the Department of the History of Medicine, the students have been immersing themselves in the language and principles of bioethics and applying what they learn to their understanding of technology, with an emphasis on robotics and reproductive technology in particular.

To help them access such heady material, Wilbanks put a spin on the course format. For the Intersession class—titled Science Fiction and the Ethics of Technology: Sex, Robots, and Doing the Right Thing—students explore course materials through the lens of science fiction.

"We sometimes think future technology might challenge our ethical sensibilities, but science fiction is good at exploring how ethics is connected to a certain way of life that happens to include technology," says Wilbanks, who is writing a book on how science fiction influenced the development of emerging forms of synthetic biology. "As our way of life changes together with technology, so might our ethical norms.""

Saturday, January 19, 2019

Why these young tech workers spent their Friday night planning a rebellion against companies like Google, Amazon, and Facebook; Recode, January 18, 2019

Recode; Why these young tech workers spent their Friday night planning a rebellion against companies like Google, Amazon, and Facebook


"“We’re interested in connecting, bringing together, and organizing the workers in tech to help us fight big tech,” Ross Patton tells the crowd. A software engineer for a pharmaceutical technology startup, he’s an active member of the Tech Workers Coalition, a group dedicated to politically mobilizing employees in the industry to reform from within...

A couple of earnest college students head to the front of the room to talk to the speakers who had just presented, asking them for advice on organizing. One of them, a computer science student at Columbia University, says he has ethical concerns about going into the industry and wanted to learn about how to mobilize.

“If I go into an industry where I’m building things that impact people,” he says, “I want to have a say in what I build.""

Wednesday, November 28, 2018

'Of course it's not ethical': shock at gene-edited baby claims; The Guardian, November 27, 2018

Suzanne Sataline, The Guardian; 'Of course it's not ethical': shock at gene-edited baby claims

"Scientists have expressed anger and doubt over a Chinese geneticist’s claim to have edited the genes of twin girls before birth, as government agencies ordered investigations into the experiment.

A global outcry started after the genetic scientist He Jiankui claimed in a video posted on YouTube on Monday that he had used the gene-editing tool Crispr-Cas9 to modify a particular gene in two embryos before they were placed in their mother’s womb.

He said the genomes had been altered to disable a gene known as CCR5, blocking the pathway used by the HIV virus to enter cells.

Some scientists at the International Summit on Human Genome Editing, which began on Tuesday in Hong Kong, said they were appalled the scientist had announced his work without following scientific protocols, including publishing his findings in a peer-reviewed journal. Others cited the ethical problems raised by creating essentially enhanced humans."

Monday, March 6, 2017

The day of Trump toilets and condoms in China may have just ended. Here's why that's controversial; Los Angeles Times, March 6, 2017


Jessica Meyers, Los Angeles Times; The day of Trump toilets and condoms in China may have just ended. Here's why that's controversial


"Could Trump benefit from the decision?


Some analysts believe investors, wary about the delicate relationship between China and the U.S., will veer away from anything bearing Donald Trump’s name. But two chief ethics lawyers under former Presidents George W. Bush and Barack Obama argue China could still use Trump’s ties to his family empire to influence policies.
They’re part of a lawsuit filed in federal court in New York that alleges the president’s foreign business connections violate the Constitution.
“We should be seriously concerned about Mr. Trump’s ethical standards,” [Haochen] Sun [director of the Law and Technology Center at the University of Hong Kong and a specialist in intellectual property law] said. “The registration carries the message that Trump is still doing business.”"

Sunday, December 18, 2016

The Wild West of Robotic "Rights and Wrongs"; Ethics and Information Blog, 12/18/16

Kip Currier, Ethics and Information Blog; The Wild West of Robotic "Rights and Wrongs"
The challenge of "robot ethics"--how to imbue robotic machines and artificial intelligence (AI) with the "right" programming and protocols to make ethical decisions--is a hot topic in academe and business, particularly right now in its application to autonomous self-driving vehicles (e.g. Uber, Apple, Google).
When we think about ethical questions addressing how robots should or should not act, Isaac Asimov's oft-discussed "Three Laws of Robotics", spelled out in his 1942 short story "Runaround", certainly come to mind (see here).
Themes of robots making judgments of "right and wrong", as well as ethical topics exploring AI accountability and whether "human rights" should be inclusive of "rights-for-robots", have also been prominent in depictions of robots and AI in numerous science fiction films and TV shows over the past 50+ years: Gort in The Day The Earth Stood Still (1951) and (2008) (Klaatu...Barada...Nikto!). 2001: A Space Odyssey (1968) and the monotonal, merciless HAL 9000 ("Open the pod bay doors, Hal"). 1983's WarGames, starring Brat Pack-ers Matthew Broderick and Ally Sheedy, can also be seen as a cautionary tale of ethical-decision-making-gone-awry in a proto-machine learning gaming program ("Shall we play a game?"), used for then-Cold War military and national security purposes.
Blade Runner (1982) revealed Replicants-with-an-expiration-date-on-the-run. (We'll have to wait and see what's up with the Replicants until sequel Blade Runner 2049 debuts in late 2017.) Arnold Schwarzenegger played a killer-robot from the future in The Terminator (1984), and returned as a reprogrammed/converted "robot savior" in Terminator 2: Judgment Day (1991). Star Trek: The Next Generation (1987-1994) throughout its run explored "sentience" and the nature of humans AND non-humans "being human", as seen through the eyes of Enterprise android crew member "Commander Data" (see 1987 standout episode "The Measure of a Man"). Fifth column sometimes-sleeper Cylons with "many copies" and "a plan" were the driving force in 2004-2009's Battlestar Galactica. Will Smith portrayed a seriously robophobic cop hot on the heels of a homicidal robot suspect in the Asimov-short-story-collection-suggested I, Robot (2004).
Most recently, robots are front and center (if not always readily identifiable!) in this year's breakout HBO hit Westworld (see the official Opening Credits here). Short-hand for the show's plot: "robots in an American West-set amusement park for the human rich". But it's a lot more than that. Westworld is an inspired reimagining ("Game of Thrones" author George R.R. Martin recently called this first season of “Westworld” a "true masterpiece") of the same-named, fairly-forgettable (--but for Yul Brynner's memorable robot role, solely credited as "Gunslinger"!) 1973 Michael Crichton-written/directed film. What the 1973 version lacked in deep-dive thoughts, the new version makes up for in spades, and then some: This is a show about robots (but really, the nature of consciousness and agency) for thinking people.--With, ahem, unapologetic dashes of Game of Thrones-esque sex and violence ("It's Not TV. It's HBO.(R)") sprinkled liberally throughout.
Much of the issue of robot ethics has tended to center on the impacts of robots on humans. With "impacts" often meaning, at a minimum, job obsolescence for humans (see here and here). Or, at worst, (especially in terms of pop culture narratives) euphemistic code for "death and destruction to humans". (Carnegie Mellon University PhD and author Daniel H. Wilson's 2011 New York Times best-selling Robopocalypse chillingly tapped into fears of a "Digital Axis of Evil"--AI/robots/Internet-of-Things--Revolution of robotic rampage and revenge against humans, perceived as both oppressors and inferior. This year Stephen Hawking and Elon Musk, among others (from 2015, see here and here), also voiced real-world concerns about the threats AI may hold for future humanity.)
But thought-provoking, at times unsettling and humanizing depictions of robotic lifeforms--Westworld "hosts" Maeve and Dolores et al., robot boy David in Steven Spielberg's 2001 A.I. Artificial Intelligence, as well as animated treatments in 2008's WALL-E from Pixar and 2016's Hum (see post below linked here)--are leveling this imbalance. Flipping the "humancentric privilege" and spurring us to think about the impacts of human beings on robots. What ethical considerations, if any, are owed to the latter? Whether robots/AI can and should be (will be?) seen as emergent "forms of life". Perhaps even with "certain inalienable Rights" (Robot Lives Matter?).
(Aside: As a kid who grew up watching the "Lost in Space" TV show (1965-1968) in syndication in the 1970's, I'll always have a soft spot for the Robinson family's trusty robot ("Danger, Will Robinson, Danger!") simply called...wait for it..."Robot".)
In the meantime--at least until sentient robots can think about "the nature of their own existence" a la Westworld, or the advent of the "singularity" (sometimes described as the merging of man and machine and/or the moment when machine intelligence surpasses that of humans)--these fictionalized creations serve as allegorical constructs to ponder important, enduring questions: What it means to be "human". The nature of "right" and "wrong", and the shades in between. Interpretations of societal values, like "compassion", "decency", and "truth". And what it means to live in a "civilized" society. Sound timely?

Saturday, November 26, 2016

FAQ: What you need to know, but were afraid to ask, about the EU Open Science Cloud; Science Business, 11/24/16

Science Business Staff, Science Business; FAQ: What you need to know, but were afraid to ask, about the EU Open Science Cloud:
"Will the data in the EU science cloud be available for free?
Some of it, yes; some of it, no. The EU says that not all data ‘will necessarily be free’, due to the legitimate rights of IP holders, so there will be an opportunity for some organisations to sell access to some of their data through the cloud. Private publishers, such as Elsevier and Springer, are also keen to be able to maintain charges for access to some of their services – but have also been unexpectedly enthusiastic about exploring the possible new business models that a very large, very active cloud could permit. On the other hand, some universities and research councils – among the most active proponents of free open access for research reports and text and data mining – are pushing to make the new cloud a tariff-free zone. It’s difficult to predict yet how this issue will be resolved...
What about privacy or ethical concerns?
Differing privacy and ethical policies and regulations in Europe, the US, and elsewhere could become sticking points which would prevent the cloud becoming fully global. There are legal restraints on where research data can be stored – essentially it has to be located in countries, and under the control of organisations, that are subject to EU data protection legislation, and that should make US-based commercial providers a little wary. Rules will need to be established to clarify the roles and responsibilities of the funding agencies, the data custodians, the cloud service providers and the researchers who use cloud-based data. The Commission has said these legal issues will be resolved as part of its broader rule-making efforts under its Digital Single Market – for privacy, copyright, and security of data. But it may not be so simple. The last time science and data rules collided was in 2014/15, when the EU was rewriting its data-privacy regulation; the original, EU-wide proposal would have had an unintended impact on medical research – leading medical universities across the EU to scream loudly that the EU was about to kill drug research. A muddled compromise resulted. Expect similar surprises in cloud regulation."

Monday, September 5, 2016

Top Safety Official Doesn’t Trust Automakers to Teach Ethics to Self-Driving Cars; MIT Technology Review, 9/2/16

Andrew Rosenblum, MIT Technology Review; Top Safety Official Doesn’t Trust Automakers to Teach Ethics to Self-Driving Cars:
"Rapid progress on autonomous driving has led to concerns that future vehicles will have to make ethical choices, for example whether to swerve to avoid a crash if it would cause serious harm to people outside the vehicle.
Christopher Hart, chairman of the National Transportation Safety Board, is one of them. He told MIT Technology Review that federal regulations will be required to set the basic morals of autonomous vehicles, as well as safety standards for how reliable they must be...
Hart also said there would need to be rules for how ethical prerogatives are encoded into software. He gave the example of a self-driving car faced with a decision between a potentially fatal collision with an out-of-control truck or heading up on the sidewalk and hitting pedestrians. “That to me is going to take a federal government response to address,” said Hart. “Those kinds of ethical choices will be inevitable.”
The NHTSA has been evaluating how it will regulate driverless cars for the past eight months, and will release guidance in the near future. The agency hasn't so far discussed ethical concerns about automated driving.
What regulation exists for self-driving cars comes from states such as California, and is targeted at the prototype vehicles being tested by companies such as Alphabet and Uber."

Saturday, February 6, 2016

Humane Society boss resigns after petition demands her removal; Pittsburgh Post-Gazette, 2/5/16

Madasyn Czebiniak and Anya Sostek, Pittsburgh Post-Gazette; Humane Society boss resigns after petition demands her removal:
"The head of the Western PA Humane Society has resigned, days after she was put on administrative leave.
Joy Braunstein had been under pressure after an online petition demanding her removal was circulated.
Statement from Joy Braunstein:
In a statement this afternoon, Ms. Braunstein said: “Given the present circumstances, I have made a personal choice to step away from The Western Pennsylvania Humane Society and resign my position effective immediately out of respect for my family and out of respect for the organization. I wish the Western Pennsylvania Humane Society well and will continue to be a supporter of the organization. At this time, I have not decided what I plan to do next professionally. Before I do, I plan to take some time with my family. I want to thank the Western Pennsylvania Humane Society for my time there and everyone else for their concern, but I have no further comment.”
Former employees estimate that in Ms. Braunstein’s 13-month tenure as executive director of the Western PA Humane Society, more than a third of the roughly 60-member staff was either fired or quit."

Wednesday, July 2, 2014

Facebook’s Secret Manipulation of User Emotions Faces European Inquiries; New York Times, 7/2/14

Facebook’s Secret Manipulation of User Emotions Faces European Inquiries:
"In response to widespread public anger, several European data protection agencies are examining whether Facebook broke local privacy laws when it conducted the weeklong investigation in January 2012.
That includes Ireland’s Office of the Data Protection Commissioner, which regulates Facebook’s global operations outside North America because the company has its international headquarters in Dublin. The Irish regulator has sent a series of questions to Facebook related to potential privacy issues, including whether the company got consent from users for the study, according to a spokeswoman.
The Information Commissioner’s Office of Britain also said that it was looking into potential privacy breaches that may have affected the country’s residents, though a spokesman of the office said that it was too early to know whether Facebook had broken the law. It is unknown where the users who were part of the experiment were located. Some 80 percent of Facebook’s 1.2 billion users are based outside North America...
The Federal Trade Commission, the American regulator that oversees Facebook’s conduct under a 20-year consent decree, has not publicly expressed similar interest in the case, which has caused an uproar over the company’s ethics and prompted the lead researcher on the project to apologize."