Showing posts with label informed consent.

Sunday, May 4, 2025

‘The Worst Internet-Research Ethics Violation I Have Ever Seen’; The Atlantic, May 2, 2025

Tom Bartlett, The Atlantic; ‘The Worst Internet-Research Ethics Violation I Have Ever Seen’


[Kip Currier: The indifference and nonchalance of the University of Zurich researchers in this AI study -- who blatantly manipulated the Reddit human subjects without informed consent -- are deeply unsettling.

In the wake of outcries about this research study, the responses of the University of Zurich ethics board are perhaps even more troubling. That board's stated purpose:

"is to “support members of the University in their perception of ethical responsibility in research and teaching“, to “promote ethical awareness within the University” and to “represent ethical issues to the public at large"." 

https://www.ethik.uzh.ch/en/ethikkommission.html

The words "perception [italics added] of ethical responsibility" should give every researcher and Internet user pause: with its lack of substantive guardrails and accountability, the Zurich ethics commission has effectively handed a Get-Out-Of-Jail-Free card to virtually any of Zurich's researchers.]


[Excerpt]

"The researchers had a tougher time convincing Redditors that their covert study was justified. After they had finished the experiment, they contacted the subreddit’s moderators, revealed their identity, and requested to “debrief” the subreddit—that is, to announce to members that for months, they had been unwitting subjects in a scientific experiment. “They were rather surprised that we had such a negative reaction to the experiment,” says one moderator, who asked to be identified by his username, LucidLeviathan, to protect his privacy. According to LucidLeviathan, the moderators requested that the researchers not publish such tainted work, and that they issue an apology. The researchers refused. After more than a month of back-and-forth, the moderators revealed what they had learned about the experiment (minus the researchers’ names) to the rest of the subreddit, making clear their disapproval.

When the moderators sent a complaint to the University of Zurich, the university noted in its response that the “project yields important insights, and the risks (e.g. trauma etc.) are minimal,” according to an excerpt posted by moderators. In a statement to me, a university spokesperson said that the ethics board had received notice of the study last month, advised the researchers to comply with the subreddit’s rules, and “intends to adopt a stricter review process in the future.” Meanwhile, the researchers defended their approach in a Reddit comment, arguing that “none of the comments advocate for harmful positions” and that each AI-generated comment was reviewed by a human team member before being posted. (I sent an email to an anonymized address for the researchers, posted by Reddit moderators, and received a reply that directed my inquiries to the university.)

Perhaps the most telling aspect of the Zurich researchers’ defense was that, as they saw it, deception was integral to the study. The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments."

Saturday, April 26, 2025

We Already Have an Ethics Framework for AI; Inside Higher Ed, April 25, 2025

 Gwendolyn Reece, Inside Higher Ed; We Already Have an Ethics Framework for AI

"We need to develop an ethical framework for assessing uses of new information technology—and specifically AI—that can guide individuals and institutions as they consider employing, promoting and licensing these tools for various functions. There are two main factors about AI that complicate ethical analysis. The first is that an interaction with AI frequently continues past the initial user-AI transaction; information from that transaction can become part of the system’s training set. Secondly, there is often a significant lack of transparency about what the AI model is doing under the surface, making it difficult to assess. We should demand as much transparency as possible from tool providers.

Academia already has an agreed-upon set of ethical principles and processes for assessing potential interventions. The principles in “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research” govern our approach to research with humans and can fruitfully be applied if we think of potential uses of AI as interventions. These principles not only benefit academia in making assessments about using AI but also provide a framework for technology developers thinking through their design requirements."

U.S. autism data project sparks uproar over ethics, privacy and intent; The Washington Post, April 25, 2025

The Washington Post; U.S. autism data project sparks uproar over ethics, privacy and intent

"The Trump administration has retreated from a controversial plan for a national registry of people with autism just days after announcing it as part of a new health initiative that would link personal medical records to information from pharmacies and smartwatches.

Jay Bhattacharya, director of the National Institutes of Health, unveiled the broad, data-driven initiative to a panel of experts Tuesday, saying it would include “national disease registries, including a new one for autism” that would accelerate research into the rapid rise in diagnoses of the condition.

The announcement sparked backlash in subsequent days over potential privacy violations, lack of consent and the risk of long-term misuse of sensitive data.

The Trump administration still will pursue large-scale data collection, but without the registry that drew the most intense criticism, the Department of Health and Human Services said."

Sunday, October 27, 2024

Declaration of Helsinki turns 60 – how this foundational document of medical ethics has stood the test of time; The Conversation, October 24, 2024

Consultant Neonatologist and Professor of Ethics, University of Oxford, The Conversation; Declaration of Helsinki turns 60 – how this foundational document of medical ethics has stood the test of time

"If you’re not familiar with the declaration – adopted by the World Medical Association on October 19 1964 – here is an explainer on this highly influential document: how it emerged, how it evolved and where it may be heading.

What is the declaration of Helsinki?

The World Medical Association was set up in the late 1940s in response to atrocities committed in the name of medical research during the second world war. It was focused on promoting and safeguarding medical ethics and human rights. 

Agreed at a meeting in Finland in 1964, the first version of the declaration included principles that have become the cornerstone of global research ethics. These include the importance of carefully assessing the risks and benefits of research projects, and seeking informed consent from those taking part in research."

Friday, October 11, 2024

23andMe is on the brink. What happens to all its DNA data?; NPR, October 3, 2024

NPR; 23andMe is on the brink. What happens to all its DNA data?

"As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

“I absolutely think this needs to be clarified,” Wiles said. “The company has undergone so many changes and so much turmoil that they need to figure out what they’re doing as a company. But when it comes to my genetic data, I really want to know what they plan on doing.”"

Tuesday, September 24, 2024

LinkedIn is training AI on you — unless you opt out with this setting; The Washington Post, September 23, 2024

The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting

"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”

Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."

Sunday, December 31, 2023

Academic paper based on Uyghur genetic data retracted over ethical concerns; The Guardian, December 29, 2023

The Guardian; Academic paper based on Uyghur genetic data retracted over ethical concerns

"The retraction notice said the article had been withdrawn at the request of the journal that had published it, Forensic Science International: Genetics, after an investigation revealed that the relevant ethical approval had not been obtained for the collection of the genetic samples.

Mark Munsterhjelm, a professor at the University of Windsor, in Ontario, who specialises in racism in genetic research, said the fact that the paper had been published at all was “typical of the culture of complicity in forensic genetics that uncritically accepts ethics and informed consent claims with regards to vulnerable populations”.

Concerns have also been raised about a paper in a journal sponsored by China’s ministry of justice. The study, titled Sequencing of human identification markers in an Uyghur population, analysed Uyghur genetic data based on blood samples collected from individuals in the capital of Xinjiang, in north-west China. Yves Moreau, a professor of engineering at the University of Leuven, in Belgium, who focuses on DNA analysis, raised concerns that the subjects in the study may not have freely consented to their DNA samples being used. He also argued that the research “enables further mass surveillance” of Uyghur people."

Tuesday, November 14, 2023

Roland Pattillo helped keep Henrietta Lacks' story alive. It's key to his legacy; NPR, November 14, 2023

NPR; Roland Pattillo helped keep Henrietta Lacks' story alive. It's key to his legacy

"Dr. Roland Pattillo and his wife Pat O'Flynn Pattillo paid for Henrietta Lacks' permanent headstone, a smooth, substantial block of pink granite. It sits in the shape of a hardcover book...

Pattillo, an African American oncologist, stem cell researcher and professor, died in May at age 89. His death went largely unreported. The New York Times ran an obituary last month. The Nation published the news in September...

He protected and elevated Lacks' memory for decades. A Louisiana native, Dr. Pattillo is often described as a quiet, determined man, and a major reason why millions know Henrietta Lacks' story.

He befriended the Lacks family and protected them from reporters and other people. He was aware of the HeLa cell line story, the medical discovery that Henrietta Lacks' cancer cells successfully grew outside her body, but he learned more about the donor when he worked with biologist George Gey, his mentor at Johns Hopkins. Gey was responsible for harvesting her biopsied cancer cells and successfully growing them in culture, the first human cells to do so. They were put to use for medical research in labs around the world...

Henrietta Lacks left behind five young children in 1951.

She was treated at Johns Hopkins, a Baltimore charity hospital that cared for Black patients during the Jim Crow era. Her tumor cells were taken without her knowledge. Her cells became the first successful "immortal" cell line, grown outside her body and used for medical research. They have been instrumental in breakthroughs ever since.

Patients' rights and the rules governing them were not what they are today.

HeLa cells were used to understand how the polio virus infected human beings. A vaccine was developed as a result. More recently, they played a significant role in COVID-19 vaccines.

Pat Pattillo says her husband wanted to share how Lacks' gift benefitted humanity since her death at age 31. But he also hoped to extend empathy for the family she left behind...

Skloot says she and Pattillo first had a mentor and mentee relationship, but it blossomed into a collegial one, especially when they formed the Henrietta Lacks Foundation.

"So, it provides financial support for people who made important contributions to science without their knowledge or consent," she says. "And their descendants, specifically people who were used in historic research studies like the Tuskegee syphilis studies, the Holmes Burke prison studies, and Henrietta Lacks family.""

Friday, August 25, 2023

Who owns your cells? Legacy of Henrietta Lacks raises ethical questions about profits from medical research; Cleveland.com, August 18, 2023

Who owns your cells? Legacy of Henrietta Lacks raises ethical questions about profits from medical research

"While the legal victory may have given the family some closure, it has raised concerns for bioethicists in Cleveland and elsewhere.

The case raises important questions about owning one’s own body: whether individuals are entitled to a share of the profits from medical discoveries derived from research on their own cells, organs and genetic material.

But it also offers a tremendous opportunity to not only acknowledge the ethical failures of the past and the seeds of mistrust they have sown, but to guide society toward building better, more trustworthy medical institutions, said Aaron Goldenberg, who directs the Bioethics Center for Community Health and Genomic Equity (CHANGE) at Case Western Reserve University."

Wednesday, July 26, 2023

If artificial intelligence uses your work, it should pay you; The Washington Post, July 26, 2023

The Washington Post; If artificial intelligence uses your work, it should pay you

"Renowned technologists and economists, including Jaron Lanier and E. Glen Weyl, have long argued that Big Tech should not be allowed to monetize people’s data without compensating them. This concept of “data dignity” was largely responding to the surveillance advertising business models of companies such as Google and Facebook, but Lanier and Weyl also pointed out, quite presciently, that the principle would only grow more vital as AI rose to prominence...

When I do a movie, and I sign my contract with a movie studio, I agree that the studio will own the copyright to the movie. Which feels fair and non-threatening. The studio paid to make the movie, so it should get to monetize the movie however it wants. But if I had known that by signing this contract and allowing the studio to be the movie’s sole copyright holder, I would then be allowing the studio to use that intellectual property as training data for an AI that would put me out of a job forever, I would never have signed that contract."

Saturday, July 15, 2023

Surprise, you just signed a contract! How hidden contracts took over the internet; Planet Money, NPR, July 14, 2023

Planet Money, NPR; Surprise, you just signed a contract! How hidden contracts took over the internet

"When you make an account online or install an app, you are probably entering into a legally enforceable contract. Even if you never signed anything. These days, we enter into these contracts so often, it can feel like no big deal."

Saturday, December 10, 2022

Your selfies are helping AI learn. You did not consent to this.; The Washington Post, December 9, 2022

The Washington Post; Your selfies are helping AI learn. You did not consent to this.

"My colleague Tatum Hunter spent time evaluating Lensa, an app that transforms a handful of selfies you provide into artistic portraits. And people have been using the new chatbot ChatGPT to generate silly poems or professional emails that seem like they were written by a human. These AI technologies could be profoundly helpful but they also come with a bunch of thorny ethical issues.

Tatum reported that Lensa’s portrait wizardry comes from the styles of artists whose work was included in a giant database for coaching image-generating computers. The artists didn’t give their permission to do this, and they aren’t being paid. In other words, your fun portraits are built on work ripped off from artists. ChatGPT learned to mimic humans by analyzing your recipes, social media posts, product reviews and other text from everyone on the internet...

Hany Farid, a computer science professor at the University of California at Berkeley, told me that individuals, government officials, many technology executives, journalists and educators like him are far more attuned than they were a few years ago to the potential positive and negative consequences of emerging technologies like AI. The hard part, he said, is knowing what to do to effectively limit the harms and maximize the benefits."

Thursday, April 28, 2022

3 Questions: Designing software for research ethics; MIT News, April 26, 2022

Rachel Gordon , MIT News; 3 Questions: Designing software for research ethics

"Jonathan Zong, a PhD candidate in electrical engineering and computer science at MIT, and an affiliate of the Computer Science and Artificial Intelligence Laboratory, thinks consent can be baked into the design of the software that gathers our data for online research. He created Bartleby, a system for debriefing research participants and eliciting their views about social media research that involved them. Using Bartleby, he says, researchers can automatically direct each of their study participants to a website where they can learn about their involvement in research, view what data researchers collected about them, and give feedback. Most importantly, participants can use the website to opt out and request to delete their data.  

Zong and his co-author, Nathan Matias SM '13, PhD '17, evaluated Bartleby by debriefing thousands of participants in observational and experimental studies on Twitter and Reddit. They found that Bartleby addresses procedural concerns by creating opportunities for participants to exercise autonomy, and the tool enabled substantive, value-driven conversations about participant voice and power. Here, Zong discusses the implications of their recent work as well as the future of social, ethical, and responsible computing."
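[A minimal, hypothetical sketch of the kind of debrief-and-opt-out flow the article describes, assuming a simple Flask app and an in-memory store. The endpoint names, fields, and storage below are illustrative assumptions, not Bartleby's actual implementation.]

```python
# Illustrative sketch only -- not Bartleby's actual code.
# It models the flow described above: a participant visits a page,
# sees what was collected about them, can give feedback, and can
# opt out and have their data deleted.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store; a real system would use a database
# keyed to accounts on the study platform (e.g., Twitter or Reddit).
participants = {
    "example_user": {
        "study": "Observational study of online moderation",
        "data_collected": ["public posts", "timestamps"],
        "opted_out": False,
        "feedback": [],
    },
}

@app.route("/debrief/<user>")
def debrief(user):
    # Disclosure step: show the participant the study and their data.
    record = participants.get(user)
    if record is None:
        return jsonify(error="no record found"), 404
    return jsonify(study=record["study"],
                   data_collected=record["data_collected"])

@app.route("/opt-out/<user>", methods=["POST"])
def opt_out(user):
    # Autonomy step: flag the opt-out and delete the collected data.
    record = participants.get(user)
    if record is None:
        return jsonify(error="no record found"), 404
    record["opted_out"] = True
    record["data_collected"] = []
    return jsonify(status="opted out; data deleted")

@app.route("/feedback/<user>", methods=["POST"])
def feedback(user):
    # Voice step: record the participant's views about the research.
    record = participants.get(user)
    if record is None:
        return jsonify(error="no record found"), 404
    comment = (request.get_json(silent=True) or {}).get("comment", "")
    record["feedback"].append(comment)
    return jsonify(status="feedback recorded")

if __name__ == "__main__":
    app.run()
```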

Monday, February 21, 2022

Their DNA Hides a Warning, but They Don’t Want to Know What It Says; The New York Times, January 21, 2022

The New York Times; Their DNA Hides a Warning, but They Don’t Want to Know What It Says

"Benjamin Berkman, a bioethicist at the National Institutes of Health, said that, in his view, the benefits of telling participants about genetic findings that can be treated or prevented greatly outweighed the risk that the participants might be frightened or fail to follow up.

“These are important pieces of information that can be lifesaving,” he said.

But not all biobanks give subjects the chance to receive health warnings.

At Vanderbilt, Dr. Clayton said, she volunteered genetic information to a biobank whose participants have been de-identified — all names and other personal information are stripped from the data. It also has other protections to prevent individuals in the bank from being found. While she happily contributed to the research, Dr. Clayton said, she is glad her data can’t be traced and that no one will call her if they find something that may be worrying.

“I don’t want to know,” she said."

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Dimitra Petrakaki, Rachel Starr, et al., Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them. This is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates that align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."
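[A rough, hypothetical sketch of the AVI data flow the excerpt describes: recorded answers are reduced to visual, verbal, and vocal features, then summarized in a report passed to the employer. The field names and toy scoring rule are illustrative assumptions, not any vendor's actual schema; the opaque score is precisely the transparency problem the authors raise.]

```python
# Illustrative sketch only -- models the pipeline described above,
# not any real AVI vendor's system.
from dataclasses import dataclass, field

@dataclass
class AVIFeatures:
    # The three channels named in the article.
    visual: dict = field(default_factory=dict)  # e.g., {"smiles": 4}
    verbal: dict = field(default_factory=dict)  # e.g., {"keyword_hits": [...]}
    vocal: dict = field(default_factory=dict)   # e.g., {"mean_pitch_hz": 180.0}

@dataclass
class CandidateReport:
    candidate_id: str
    features: AVIFeatures
    score: float  # the employer sees the number, not how it was computed

def build_report(candidate_id: str, features: AVIFeatures) -> CandidateReport:
    # Toy scoring rule for illustration; real platforms do not disclose
    # their models, which is the auditability problem at issue.
    score = float(len(features.verbal.get("keyword_hits", [])))
    return CandidateReport(candidate_id, features, score)

if __name__ == "__main__":
    feats = AVIFeatures(
        visual={"smiles": 4},
        verbal={"keyword_hits": ["teamwork", "deadline"]},
        vocal={"mean_pitch_hz": 180.0},
    )
    print(build_report("cand-001", feats))
```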

Saturday, November 20, 2021

Maryland lawmaker-doctor won’t face ethics violation for tuning into legislative meetings from the operating room; The Baltimore Sun, November 19, 2021

The Baltimore Sun; Maryland lawmaker-doctor won’t face ethics violation for tuning into legislative meetings from the operating room

 "Hill had initially defended her decision to join video meetings while at work as a doctor, saying her patients knew about it and she wasn’t putting them in any danger.

A Board of Physicians investigation found that one patient did not know Hill tuned into a legislative meeting, while the other patient was told about 10 minutes before surgery, but no consent paperwork was on file. Both legislative meetings where she appeared on camera from the operating room were streamed on the General Assembly’s website and YouTube channels."

Friday, May 28, 2021

Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says; CNBC, May 26, 2021

Emily DeCiccio, CNBC; Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says

"Privacy laws in the U.S. need to be updated, especially after Google struck a deal with a major hospital chain, medical ethics expert Arthur Kaplan said Wednesday.

“Now we’ve got electronic medical records, huge volumes of data, and this is like asking a navigation system from a World War I airplane to navigate us up to the space shuttle,” Caplan, a professor at New York University’s Grossman School of Medicine, told “The News with Shepard Smith.” “We’ve got to update our privacy protection and our informed consent requirements.”

On Wednesday, Google’s cloud unit and hospital chain HCA Healthcare announced a deal that — according to The Wall Street Journal — gives Google access to patient records. The tech giant said it will use that to make algorithms to monitor patients and help doctors make better decisions."

Tuesday, December 3, 2019

China Uses DNA to Map Faces, With Help From the West; The New York Times, December 3, 2019

Sui-Lee Wee et al., The New York Times; China Uses DNA to Map Faces, With Help From the West

Beijing’s pursuit of control over a Muslim ethnic group pushes the rules of science and raises questions about consent. 

"The Chinese government is building “essentially technologies used for hunting people,” said Mark Munsterhjelm, an assistant professor at the University of Windsor in Ontario who tracks Chinese interest in the technology.

In the world of science, Dr. Munsterhjelm said, “there’s a kind of culture of complacency that has now given way to complicity.”"

Thursday, November 14, 2019

I'm the Google whistleblower. The medical data of millions of Americans is at risk; The Guardian, November 14, 2019

Anonymous, The Guardian; I'm the Google whistleblower. The medical data of millions of Americans is at risk

"After a while I reached a point that I suspect is familiar to most whistleblowers, where what I was witnessing was too important for me to remain silent. Two simple questions kept hounding me: did patients know about the transfer of their data to the tech giant? Should they be informed and given a chance to opt in or out?

The answer to the first question quickly became apparent: no. The answer to the second I became increasingly convinced about: yes. Put the two together, and how could I say nothing?

So much is at stake. Data security is important in any field, but when that data relates to the personal details of an individual’s health, it is of the utmost importance as this is the last frontier of data privacy.

With a deal as sensitive as the transfer of the personal data of more than 50 million Americans to Google, the oversight should be extensive. Every aspect needed to be pored over to ensure that it complied with federal rules controlling the confidential handling of protected health information under the 1996 HIPAA legislation."