My Bloomsbury book "Ethics, Information, and Technology" was published on Nov. 13, 2025. Purchases can be made via Amazon and this Bloomsbury webpage: https://www.bloomsbury.com/us/ethics-information-and-technology-9781440856662/
Thursday, April 2, 2026
NHS staff boycott Palantir’s data platform over ethical concerns; Financial Times, April 1, 2026
Laura Hughes, Financial Times; NHS staff boycott Palantir’s data platform over ethical concerns
Thursday, June 19, 2025
AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries; The Conversation, June 17, 2025
Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston, Daniel J. Feldman, Senior Research Fellow, Applied Ethics Center, UMass Boston; The Conversation; AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries
"The use of artificial intelligence to “reanimate” the dead for a variety of purposes is quickly gaining traction. Over the past few years, we’ve been studying the moral implications of AI at the Center for Applied Ethics at the University of Massachusetts, Boston, and we find these AI reanimations to be morally problematic.
Before we address the moral challenges the technology raises, it’s important to distinguish AI reanimations, or deepfakes, from so-called griefbots. Griefbots are chatbots trained on large swaths of data the dead leave behind – social media posts, texts, emails, videos. These chatbots mimic how the departed used to communicate and are meant to make life easier for surviving relations. The deepfakes we are discussing here have other aims; they are meant to promote legal, political and educational causes."
Saturday, January 25, 2025
Ethics watchdog issues conflict of interest warning to Musk’s Doge agency; The Guardian, January 23, 2025
David Smith, The Guardian; Ethics watchdog issues conflict of interest warning to Musk’s Doge agency
"A leading ethics watchdog has issued warnings to Donald Trump’s billionaire ally Elon Musk and the “department of government efficiency” (Doge), an agency Trump has stated he will create, claiming its use of encrypted messaging apps potentially violates the Federal Records Act (FRA).
American Oversight, which uses litigation to obtain public records and expose government misconduct, argues that Musk’s leadership of Doge raises “significant ethical concerns about potential conflicts of interest”, given his business empire and the substantial impact that Doge could have on federal agencies.
The warnings stem from reports that members of Doge, which aims to carry out dramatic cuts to the US government, are using the encrypted messaging app Signal with an auto-delete feature, which could hinder the preservation of official records."
Monday, July 1, 2024
Supreme Court Justices: Ethics, recusal and public perception; WOUB Public Media, June 28, 2024
WOUB Public Media; Supreme Court Justices: Ethics, recusal and public perception
"The U.S. Supreme Court has hit an all-time low in public trust and confidence.
In this episode of “Next Witness…Please,” retired judges Gayle Williams-Byers and Tom Hodson explore the reasons behind this decline and the immense power wielded by Supreme Court justices.
They delve into why the public sees the court as more political than judicial, eroding faith in the rule of law.
The episode also addresses shady financial dealings, unreported gifts, and questionable public actions and statements by justices, including Clarence Thomas and Samuel Alito.
These issues raise serious ethical concerns and undermine the court’s integrity, much to the consternation of many legal analysts and ethicists.
Tune in to “Next Witness…Please” as the judges discuss potential solutions to these ethical challenges and ways the Supreme Court can restore public trust."
Thursday, June 8, 2023
How ethics is becoming a key part of research in tech; The Stanford Daily, June 7, 2023
Cassandra Huff, The Stanford Daily; How ethics is becoming a key part of research in tech
"Building off the IRB model, in 2020 the Ethics in Society Review (ESR) board was created under the McCoy Family Center, the Center for Advanced Study in Behavioral Sciences (CASBS) and Human-Centered AI (HAI) to make ethics a core part of research in computer science. The ESR acts similarly to the IRB by examining ethical concerns to minimize potential harm of the research before a project is approved for funding.
This process is integrated into grant proposal applications in HAI. After HAI reviews the technical merits of a proposal, it is handed off to the ESR, which assigns an interdisciplinary panel of faculty to review each of them. This panel acts as advisors on ethical issues to identify challenges and provide additional guidance on the ethical component of the research. Once completed, the panel will either release research funds, or recommend more iterations of the review process.
The ESR is not meant to determine whether the proposal should be funded, but rather to analyze the unintended consequences of the research prior to the start of the project. In discussing what ESR does, Betsy Rajala, Program Director at CASBS, said, “Every time you touch research these questions come up and [it’s better to] think about them sooner rather than later.”"
Thursday, September 1, 2022
Ethical issues of facial recognition technology; TechRepublic, August 31, 2022
Patrick Gray in Artificial Intelligence, TechRepublic; Ethical issues of facial recognition technology
"Facial recognition technology has entered the mass market, with our faces now able to unlock our phones and computers. While the ability to empower machines with the very human ability to identify a person with a quick look at their face is exciting, it’s not without significant ethical concerns.
Suppose your company is considering facial recognition technology. In that case, it’s essential to be aware of these concerns and ready to address them, which may even include abandoning facial recognition altogether.
When assessing these ethical concerns, consider how your customers, employees and the general public would react if they fully knew how you’re using the technology. If that thought is disconcerting, you may be veering into an ethical “danger zone.”"
Monday, July 13, 2020
Digital tools against COVID-19: taxonomy, ethical challenges, and navigation aid; The Lancet, June 29, 2020
Thursday, September 5, 2019
AI Ethics Guidelines Every CIO Should Read; Information Week, August 7, 2019
Monday, January 21, 2019
The ethics of gene editing: Lulu, Nana, and 'Gattaca'; Johns Hopkins University, January 17, 2019
Saralyn Cruickshank, Johns Hopkins University; The ethics of gene editing: Lulu, Nana, and 'Gattaca'
"Under the direction of Rebecca Wilbanks, a postdoctoral fellow in the Berman Institute of Bioethics and the Department of the History of Medicine, the students have been immersing themselves in the language and principles of bioethics and applying what they learn to their understanding of technology, with an emphasis on robotics and reproductive technology in particular.
To help them access such heady material, Wilbanks put a spin on the course format. For the Intersession class—titled Science Fiction and the Ethics of Technology: Sex, Robots, and Doing the Right Thing—students explore course materials through the lens of science fiction.
"We sometimes think future technology might challenge our ethical sensibilities, but science fiction is good at exploring how ethics is connected to a certain way of life that happens to include technology," says Wilbanks, who is writing a book on how science fiction influenced the development of emerging forms of synthetic biology. "As our way of life changes together with technology, so might our ethical norms.""
Saturday, January 19, 2019
Why these young tech workers spent their Friday night planning a rebellion against companies like Google, Amazon, and Facebook; Recode, January 18, 2019
"“If I go into an industry where I’m building things that impact people,” he says, “I want to have a say in what I build.”"
Wednesday, November 28, 2018
'Of course it's not ethical': shock at gene-edited baby claims; The Guardian, November 27, 2018
"Scientists have expressed anger and doubt over a Chinese geneticist’s claim to have edited the genes of twin girls before birth, as government agencies ordered investigations into the experiment.
A global outcry started after the genetic scientist He Jiankui claimed in a video posted on YouTube on Monday that he had used the gene-editing tool Crispr-Cas9 to modify a particular gene in two embryos before they were placed in their mother’s womb.
He said the genomes had been altered to disable a gene known as CCR5, blocking the pathway used by the HIV virus to enter cells.
Some scientists at the International Summit on Human Genome Editing, which began on Tuesday in Hong Kong, said they were appalled the scientist had announced his work without following scientific protocols, including publishing his findings in a peer-reviewed journal. Others cited the ethical problems raised by creating essentially enhanced humans."
Monday, March 6, 2017
The day of Trump toilets and condoms in China may have just ended. Here's why that's controversial; Los Angeles Times, March 6, 2017
Jessica Meyers, Los Angeles Times; The day of Trump toilets and condoms in China may have just ended. Here's why that's controversial
"Could Trump benefit from the decision?
Sunday, December 18, 2016
The Wild West of Robotic "Rights and Wrongs"; Ethics and Information Blog, 12/18/16
The challenge of "robot ethics"--how to imbue robotic machines and artificial intelligence (AI) with the "right" programming and protocols to make ethical decisions--is a hot topic in academe and business, particularly right now in its application to autonomous self-driving vehicles (e.g., Uber, Apple, Google). When we think about ethical questions addressing how robots should or should not act, Isaac Asimov's oft-discussed "Three Laws of Robotics", spelled out in his 1942 short story "Runaround", certainly come to mind (see here). Themes of robots making judgments of "right and wrong", as well as ethical topics exploring AI accountability and whether "human rights" should be inclusive of "rights-for-robots", have also been prominent in depictions of robots and AI in numerous science fiction films and TV shows over the past 50+ years: Gort in The Day The Earth Stood Still (1951 and 2008) (Klaatu...Barada...Nikto!). 2001: A Space Odyssey (1968) and the monotonal, merciless HAL 9000 ("Open the pod bay doors, HAL"). 1983's WarGames, starring Brat Pack-ers Matthew Broderick and Ally Sheedy, can also be seen as a cautionary tale of ethical-decision-making-gone-awry in a proto-machine-learning gaming program ("Shall we play a game?"), used for then-Cold War military and national security purposes. Blade Runner (1982) revealed Replicants-with-an-expiration-date-on-the-run. (We'll have to wait and see what's up with the Replicants until sequel Blade Runner 2049 debuts in late 2017.) Arnold Schwarzenegger played a killer robot from the future in The Terminator (1984), and returned as a reprogrammed/converted "robot savior" in Terminator 2: Judgment Day (1991). Star Trek: The Next Generation (1987-1994) throughout its run explored "sentience" and the nature of humans AND non-humans "being human", as seen through the eyes of Enterprise android crew member "Commander Data" (see standout 1989 episode "The Measure of a Man").
Fifth column sometimes-sleeper Cylons with "many copies" and "a plan" were the driving force in 2004-2009's Battlestar Galactica. Will Smith portrayed a seriously robophobic cop hot on the heels of a homicidal robot suspect in the Asimov-short-story-collection-suggested I, Robot (2004). Most recently, robots are front and center (if not always readily identifiable!) in this year's breakout HBO hit Westworld (see the official Opening Credits here). Short-hand for the show's plot: "robots in an American West-set amusement park for the human rich". But it's a lot more than that. Westworld is an inspired reimagining ("Game of Thrones" author George R.R. Martin recently called this first season of "Westworld" a "true masterpiece") of the same-named, fairly-forgettable (--but for Yul Brynner's memorable robot role, solely credited as "Gunslinger"!) 1973 Michael Crichton-written/directed film. What the 1973 version lacked in deep-dive thoughts, the new version makes up for in spades, and then some: This is a show about robots (but really, the nature of consciousness and agency) for thinking people.--With, ahem, unapologetic dashes of Game of Thrones-esque sex and violence ("It's Not TV. It's HBO.(R)") sprinkled liberally throughout.
Much of the issue of robot ethics has tended to center on the impacts of robots on humans. With "impacts" often meaning, at a minimum, job obsolescence for humans (see here and here). Or, at worst (especially in terms of pop culture narratives), euphemistic code for "death and destruction to humans". (Carnegie Mellon University PhD and author Daniel H. Wilson's 2011 New York Times best-selling Robopocalypse chillingly tapped into fears of a "Digital Axis of Evil"--AI/robots/Internet-of-Things--revolution of robotic rampage and revenge against humans, perceived as both oppressors and inferior. This year Stephen Hawking and Elon Musk, among others (from 2015, see here and here), also voiced real-world concerns about the threats AI may hold for future humanity.)
But thought-provoking, at times unsettling and humanizing depictions of robotic lifeforms--Westworld "hosts" Maeve and Dolores et al., robot boy David in Steven Spielberg's 2001 film A.I. Artificial Intelligence, as well as animated treatments in 2008's WALL-E from Pixar and 2016's Hum (see post below linked here)--are leveling this imbalance: flipping the "humancentric privilege" and spurring us to think about the impacts of human beings on robots. What ethical considerations, if any, are owed to the latter? Whether robots/AI can and should be (will be?) seen as emergent "forms of life". Perhaps even with "certain inalienable Rights" (Robot Lives Matter?). (Aside: As a kid who grew up watching the "Lost in Space" TV show (1965-1968) in syndication in the 1970s, I'll always have a soft spot for the Robinson family's trusty robot ("Danger, Will Robinson, Danger!") simply called...wait for it..."Robot".) In the meantime--at least until sentient robots can think about "the nature of their own existence" a la Westworld, or the advent of the "singularity" (sometimes described as the merging of man and machine and/or the moment when machine intelligence surpasses that of humans)--these fictionalized creations serve as allegorical constructs to ponder important, enduring questions: What it means to be "human". The nature of "right" and "wrong", and the shades in between. Interpretations of societal values, like "compassion", "decency", and "truth". And what it means to live in a "civilized" society. Sound timely?
Saturday, November 26, 2016
FAQ: What you need to know, but were afraid to ask, about the EU Open Science Cloud; Science Business, 11/24/16
"Will the data in the EU science cloud be available for free? Some of it, yes; some of it, no. The EU says that not all data ‘will necessarily be free’, due to the legitimate rights of IP holders, so there will be an opportunity for some organisations to sell access to some of their data through the cloud. Private publishers, such as Elsevier and Springer, are also keen to be able to maintain charges for access to some of their services – but have also been unexpectedly enthusiastic about exploring the possible new business models that a very large, very active cloud could permit. On the other hand, some universities and research councils – among the most active proponents of free open access for research reports and text and data mining – are pushing to make the new cloud a tariff-free zone. It’s difficult to predict yet how this issue will be resolved... What about privacy or ethical concerns? Differing privacy and ethical policies and regulations in Europe, the US, and elsewhere could become sticking points which would prevent the cloud becoming fully global. There are legal restraints on where research data can be stored – essentially it has to be located in countries, and under the control of organisations, that are subject to EU data protection legislation, and that should make US-based commercial providers a little wary. Rules will need to be established to clarify the roles and responsibilities of the funding agencies, the data custodians, the cloud service providers and the researchers who use cloud-based data. The Commission has said these legal issues will be resolved as part of its broader rule-making efforts under its Digital Single Market – for privacy, copyright, and security of data. But it may not be so simple. 
The last time science and data rules collided was in 2014/15, when the EU was rewriting its data-privacy regulation; the original, EU-wide proposal would have had an unintended impact on medical research – leading medical universities across the EU to scream loudly that the EU was about to kill drug research. A muddled compromise resulted. Expect similar surprises in cloud regulation."
Monday, September 5, 2016
Top Safety Official Doesn’t Trust Automakers to Teach Ethics to Self-Driving Cars; MIT Technology Review, 9/2/16
"Rapid progress on autonomous driving has led to concerns that future vehicles will have to make ethical choices, for example whether to swerve to avoid a crash if it would cause serious harm to people outside the vehicle. Christopher Hart, chairman of the National Transportation Safety Board, is one of them. He told MIT Technology Review that federal regulations will be required to set the basic morals of autonomous vehicles, as well as safety standards for how reliable they must be... Hart also said there would need to be rules for how ethical prerogatives are encoded into software. He gave the example of a self-driving car faced with a decision between a potentially fatal collision with an out-of-control truck or heading up on the sidewalk and hitting pedestrians. “That to me is going to take a federal government response to address,” said Hart. “Those kinds of ethical choices will be inevitable.” The NHTSA has been evaluating how it will regulate driverless cars for the past eight months, and will release guidance in the near future. The agency hasn't so far discussed ethical concerns about automated driving. What regulation exists for self-driving cars comes from states such as California, and is targeted at the prototype vehicles being tested by companies such as Alphabet and Uber."
Saturday, February 6, 2016
Humane Society boss resigns after petition demands her removal; Pittsburgh Post-Gazette, 2/5/16
"The head of the Western PA Humane Society has resigned, days after she was put on administrative leave. Joy Braunstein had been under pressure after an online petition demanding her removal was circulated. In a statement this afternoon, Ms. Braunstein said: “Given the present circumstances, I have made a personal choice to step away from The Western Pennsylvania Humane Society and resign my position effective immediately out of respect for my family and out of respect for the organization. I wish the Western Pennsylvania Humane Society well and will continue to be a supporter of the organization. At this time, I have not decided what I plan to do next professionally. Before I do, I plan to take some time with my family. I want to thank the Western Pennsylvania Humane Society for my time there and everyone else for their concern, but I have no further comment.” Former employees estimate that in Ms. Braunstein’s 13-month tenure as executive director of the Western PA Humane Society, more than a third of the roughly 60-member staff was either fired or quit."
Wednesday, July 2, 2014
Facebook’s Secret Manipulation of User Emotions Faces European Inquiries; New York Times, 7/2/14
"In response to widespread public anger, several European data protection agencies are examining whether Facebook broke local privacy laws when it conducted the weeklong investigation in January 2012. That includes Ireland’s Office of the Data Protection Commissioner, which regulates Facebook’s global operations outside North America because the company has its international headquarters in Dublin. The Irish regulator has sent a series of questions to Facebook related to potential privacy issues, including whether the company got consent from users for the study, according to a spokeswoman. The Information Commissioner’s Office of Britain also said that it was looking into potential privacy breaches that may have affected the country’s residents, though a spokesman of the office said that it was too early to know whether Facebook had broken the law. It is unknown where the users who were part of the experiment were located. Some 80 percent of Facebook’s 1.2 billion users are based outside North America... The Federal Trade Commission, the American regulator that oversees Facebook’s conduct under a 20-year consent decree, has not publicly expressed similar interest in the case, which has caused an uproar over the company’s ethics and prompted the lead researcher on the project to apologize."
Friday, January 7, 2011
Judge Orders College to Reinstate Student Who Posted a Placenta Photo Online; Chronicle of Higher Education, 1/6/11
"A federal judge in Kansas on Wednesday ordered Johnson County Community College to reinstate a nursing student who sued after being dismissed for posting a picture of a human placenta on Facebook, The Kansas City Star reported."
