Showing posts with label informed consent. Show all posts

Sunday, October 27, 2024

Declaration of Helsinki turns 60 – how this foundational document of medical ethics has stood the test of time; The Conversation, October 24, 2024

Consultant Neonatologist and Professor of Ethics, University of Oxford, The Conversation; Declaration of Helsinki turns 60 – how this foundational document of medical ethics has stood the test of time

"If you’re not familiar with the declaration – adopted by the World Medical Association on October 19, 1964 – here is an explainer on this highly influential document: how it emerged, how it evolved and where it may be heading.

What is the declaration of Helsinki?

The World Medical Association was set up in the late 1940s in response to atrocities committed in the name of medical research during the second world war. It was focused on promoting and safeguarding medical ethics and human rights. 

Agreed at a meeting in Finland in 1964, the first version of the declaration included principles that have become the cornerstone of global research ethics. These include the importance of carefully assessing the risks and benefits of research projects, and seeking informed consent from those taking part in research."

Friday, October 11, 2024

23andMe is on the brink. What happens to all its DNA data?; NPR, October 3, 2024

NPR; 23andMe is on the brink. What happens to all its DNA data?

"As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

“I absolutely think this needs to be clarified,” Wiles said. “The company has undergone so many changes and so much turmoil that they need to figure out what they’re doing as a company. But when it comes to my genetic data, I really want to know what they plan on doing.”"

Tuesday, September 24, 2024

LinkedIn is training AI on you — unless you opt out with this setting; The Washington Post, September 23, 2024

The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting

"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”

Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."

Sunday, December 31, 2023

Academic paper based on Uyghur genetic data retracted over ethical concerns; The Guardian, December 29, 2023

The Guardian; Academic paper based on Uyghur genetic data retracted over ethical concerns

"The retraction notice said the article had been withdrawn at the request of the journal that had published it, Forensic Science International: Genetics, after an investigation revealed that the relevant ethical approval had not been obtained for the collection of the genetic samples.

Mark Munsterhjelm, a professor at the University of Windsor, in Ontario, who specialises in racism in genetic research, said the fact that the paper had been published at all was “typical of the culture of complicity in forensic genetics that uncritically accepts ethics and informed consent claims with regards to vulnerable populations”.

Concerns have also been raised about a paper in a journal sponsored by China’s ministry of justice. The study, titled Sequencing of human identification markers in an Uyghur population, analysed Uyghur genetic data based on blood samples collected from individuals in the capital of Xinjiang, in north-west China. Yves Moreau, a professor of engineering at the University of Leuven, in Belgium, who focuses on DNA analysis, raised concerns that the subjects in the study may not have freely consented to their DNA samples being used. He also argued that the research “enables further mass surveillance” of Uyghur people."

Tuesday, November 14, 2023

Roland Pattillo helped keep Henrietta Lacks' story alive. It's key to his legacy; NPR, November 14, 2023

NPR; Roland Pattillo helped keep Henrietta Lacks' story alive. It's key to his legacy

"Dr. Roland Pattillo and his wife Pat O'Flynn Pattillo paid for Henrietta Lacks' permanent headstone, a smooth, substantial block of pink granite. It sits in the shape of a hardcover book...

Pattillo, an African American oncologist, stem cell researcher and professor, died in May at age 89. His death went largely unreported. The New York Times ran an obituary last month. The Nation published the news in September...

He protected and elevated Lacks' memory for decades. A Louisiana native, Dr. Pattillo is often described as a quiet, determined man, and a major reason why millions know Henrietta Lacks' story.

He befriended the Lacks family and protected them from reporters and other people. He was aware of the HeLa cell line story, the medical discovery that Henrietta Lacks' cancer cells successfully grew outside her body, but he learned more about the donor when he worked with biologist George Gey, his mentor at Johns Hopkins. Gey was responsible for harvesting her biopsied cancer cells and successfully growing them in culture, the first human cells to do so. They were put to use for medical research in labs around the world...

Henrietta Lacks left behind five young children in 1951.

She was treated at Johns Hopkins, a Baltimore charity hospital that cared for Black patients during the Jim Crow era. Her tumor cells were taken without her knowledge. Her cells became the first successful "immortal" cell line, grown outside her body and used for medical research. They have been instrumental in breakthroughs ever since.

Patients' rights and the rules governing them were not what they are today.

HeLa cells were used to understand how the polio virus infected human beings. A vaccine was developed as a result. More recently, they played a significant role in COVID-19 vaccines.

Pat Pattillo says her husband wanted to share how Lacks' gift benefitted humanity since her death at age 31. But he also hoped to extend empathy for the family she left behind...

Skloot says she and Pattillo first had a mentor and mentee relationship, but it blossomed into a collegial one, especially when they formed the Henrietta Lacks Foundation.

"So, it provides financial support for people who made important contributions to science without their knowledge or consent," she says. "And their descendants, specifically people who were used in historic research studies like the Tuskegee syphilis studies, the Holmesburg prison studies, and Henrietta Lacks' family.""

Friday, August 25, 2023

Who owns your cells? Legacy of Henrietta Lacks raises ethical questions about profits from medical research; Cleveland.com, August 18, 2023

Cleveland.com; Who owns your cells? Legacy of Henrietta Lacks raises ethical questions about profits from medical research

"While the legal victory may have given the family some closure, it has raised concerns for bioethicists in Cleveland and elsewhere.

The case raises important questions about owning one’s own body; whether individuals are entitled to a share of the profits from medical discoveries derived from research on their own cells, organs and genetic material.

But it also offers a tremendous opportunity to not only acknowledge the ethical failures of the past and the seeds of mistrust they have sown, but to guide society toward building better, more trustworthy medical institutions, said Aaron Goldenberg, who directs the Bioethics Center for Community Health and Genomic Equity (CHANGE) at Case Western Reserve University."


Wednesday, July 26, 2023

If artificial intelligence uses your work, it should pay you; The Washington Post, July 26, 2023

The Washington Post; If artificial intelligence uses your work, it should pay you

"Renowned technologists and economists, including Jaron Lanier and E. Glen Weyl, have long argued that Big Tech should not be allowed to monetize people’s data without compensating them. This concept of “data dignity” was largely responding to the surveillance advertising business models of companies such as Google and Facebook, but Lanier and Weyl also pointed out, quite presciently, that the principle would only grow more vital as AI rose to prominence...

When I do a movie, and I sign my contract with a movie studio, I agree that the studio will own the copyright to the movie. Which feels fair and non-threatening. The studio paid to make the movie, so it should get to monetize the movie however it wants. But if I had known that by signing this contract and allowing the studio to be the movie’s sole copyright holder, I would then be allowing the studio to use that intellectual property as training data for an AI that would put me out of a job forever, I would never have signed that contract."

Saturday, July 15, 2023

Surprise, you just signed a contract! How hidden contracts took over the internet; Planet Money, NPR, July 14, 2023

Planet Money, NPR; Surprise, you just signed a contract! How hidden contracts took over the internet

"When you make an account online or install an app, you are probably entering into a legally enforceable contract. Even if you never signed anything. These days, we enter into these contracts so often, it can feel like no big deal."

Saturday, December 10, 2022

Your selfies are helping AI learn. You did not consent to this.; The Washington Post, December 9, 2022

The Washington Post; Your selfies are helping AI learn. You did not consent to this.

"My colleague Tatum Hunter spent time evaluating Lensa, an app that transforms a handful of selfies you provide into artistic portraits. And people have been using the new chatbot ChatGPT to generate silly poems or professional emails that seem like they were written by a human. These AI technologies could be profoundly helpful but they also come with a bunch of thorny ethical issues.

Tatum reported that Lensa’s portrait wizardry comes from the styles of artists whose work was included in a giant database for coaching image-generating computers. The artists didn’t give their permission to do this, and they aren’t being paid. In other words, your fun portraits are built on work ripped off from artists. ChatGPT learned to mimic humans by analyzing your recipes, social media posts, product reviews and other text from everyone on the internet...

Hany Farid, a computer science professor at the University of California at Berkeley, told me that individuals, government officials, many technology executives, journalists and educators like him are far more attuned than they were a few years ago to the potential positive and negative consequences of emerging technologies like AI. The hard part, he said, is knowing what to do to effectively limit the harms and maximize the benefits."

Thursday, April 28, 2022

3 Questions: Designing software for research ethics; MIT News, April 26, 2022

Rachel Gordon , MIT News; 3 Questions: Designing software for research ethics

"Jonathan Zong, a PhD candidate in electrical engineering and computer science at MIT, and an affiliate of the Computer Science and Artificial Intelligence Laboratory, thinks consent can be baked into the design of the software that gathers our data for online research. He created Bartleby, a system for debriefing research participants and eliciting their views about social media research that involved them. Using Bartleby, he says, researchers can automatically direct each of their study participants to a website where they can learn about their involvement in research, view what data researchers collected about them, and give feedback. Most importantly, participants can use the website to opt out and request to delete their data.  

Zong and his co-author, Nathan Matias SM '13, PhD '17, evaluated Bartleby by debriefing thousands of participants in observational and experimental studies on Twitter and Reddit. They found that Bartleby addresses procedural concerns by creating opportunities for participants to exercise autonomy, and the tool enabled substantive, value-driven conversations about participant voice and power. Here, Zong discusses the implications of their recent work as well as the future of social, ethical, and responsible computing."
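The Bartleby workflow described above (show each participant their own data, let them give feedback, and let them opt out and delete their records) can be illustrated with a minimal sketch. This is purely a hypothetical model of such a debriefing flow; all class and method names here are illustrative and are not Bartleby's actual API.

```python
# Hypothetical sketch of a Bartleby-style debriefing flow.
# All names (DebriefingSite, enroll, view_my_data, opt_out) are illustrative.
from dataclasses import dataclass, field


@dataclass
class Participant:
    user_id: str
    collected_data: dict = field(default_factory=dict)
    opted_out: bool = False


class DebriefingSite:
    """Lets each study participant see what was collected and opt out."""

    def __init__(self):
        self._participants = {}

    def enroll(self, user_id, data):
        # Record what the study collected about this participant.
        self._participants[user_id] = Participant(user_id, dict(data))

    def view_my_data(self, user_id):
        # Transparency step: show the participant their own records only.
        return dict(self._participants[user_id].collected_data)

    def opt_out(self, user_id):
        # Autonomy step: mark the opt-out and delete the stored data.
        p = self._participants[user_id]
        p.opted_out = True
        p.collected_data.clear()


site = DebriefingSite()
site.enroll("u1", {"posts_analyzed": 12})
print(site.view_my_data("u1"))  # the participant's own records
site.opt_out("u1")
print(site.view_my_data("u1"))  # empty after deletion
```

The point of the sketch is the ordering: debriefing comes first (the participant sees exactly what was collected), and the opt-out both flags the choice and actually removes the data, rather than merely suppressing it from view.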

Monday, February 21, 2022

Their DNA Hides a Warning, but They Don’t Want to Know What It Says; The New York Times, January 21, 2022

The New York Times; Their DNA Hides a Warning, but They Don’t Want to Know What It Says

"Benjamin Berkman, a bioethicist at the National Institutes of Health, said that, in his view, the benefits of telling participants about genetic findings that can be treated or prevented greatly outweighed the risk that the participants might be frightened or fail to follow up.

“These are important pieces of information that can be lifesaving,” he said.

But not all biobanks give subjects the chance to receive health warnings.

At Vanderbilt, Dr. Clayton said, she volunteered genetic information to a biobank whose participants have been de-identified — all names and other personal information are stripped from the data. It also has other protections to prevent individuals in the bank from being found. While she happily contributed to the research, Dr. Clayton said, she is glad her data can’t be traced and that no one will call her if they find something that may be worrying.

“I don’t want to know,” she said."

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Dimitra Petrakaki and Rachel Starr, Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them. This is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates that align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."

Saturday, November 20, 2021

Maryland lawmaker-doctor won’t face ethics violation for tuning into legislative meetings from the operating room; The Baltimore Sun, November 19, 2021

The Baltimore Sun; Maryland lawmaker-doctor won’t face ethics violation for tuning into legislative meetings from the operating room

 "Hill had initially defended her decision to join video meetings while at work as a doctor, saying her patients knew about it and she wasn’t putting them in any danger.

A Board of Physicians investigation found that one patient did not know Hill tuned into a legislative meeting, while the other patient was told about 10 minutes before surgery, but no consent paperwork was on file. Both legislative meetings where she appeared on camera from the operating room were streamed on the General Assembly’s website and YouTube channels."

Friday, May 28, 2021

Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says; CNBC, May 26, 2021

Emily DeCiccio, CNBC; Privacy laws need updating after Google deal with HCA Healthcare, medical ethics professor says

"Privacy laws in the U.S. need to be updated, especially after Google struck a deal with a major hospital chain, medical ethics expert Arthur Caplan said Wednesday.

“Now we’ve got electronic medical records, huge volumes of data, and this is like asking a navigation system from a World War I airplane to navigate us up to the space shuttle,” Caplan, a professor at New York University’s Grossman School of Medicine, told “The News with Shepard Smith.” “We’ve got to update our privacy protection and our informed consent requirements.”

On Wednesday, Google’s cloud unit and hospital chain HCA Healthcare announced a deal that — according to The Wall Street Journal — gives Google access to patient records. The tech giant said it will use that to make algorithms to monitor patients and help doctors make better decisions."

Tuesday, December 3, 2019

China Uses DNA to Map Faces, With Help From the West; The New York Times, December 3, 2019

Sui-Lee Wee, The New York Times; China Uses DNA to Map Faces, With Help From the West

Beijing’s pursuit of control over a Muslim ethnic group pushes the rules of science and raises questions about consent. 

"The Chinese government is building “essentially technologies used for hunting people,” said Mark Munsterhjelm, an assistant professor at the University of Windsor in Ontario who tracks Chinese interest in the technology.

In the world of science, Dr. Munsterhjelm said, “there’s a kind of culture of complacency that has now given way to complicity.”"

Thursday, November 14, 2019

I'm the Google whistleblower. The medical data of millions of Americans is at risk; The Guardian, November 14, 2019

Anonymous, The Guardian; I'm the Google whistleblower. The medical data of millions of Americans is at risk

"After a while I reached a point that I suspect is familiar to most whistleblowers, where what I was witnessing was too important for me to remain silent. Two simple questions kept hounding me: did patients know about the transfer of their data to the tech giant? Should they be informed and given a chance to opt in or out?

The answer to the first question quickly became apparent: no. The answer to the second I became increasingly convinced about: yes. Put the two together, and how could I say nothing?

So much is at stake. Data security is important in any field, but when that data relates to the personal details of an individual’s health, it is of the utmost importance as this is the last frontier of data privacy.

With a deal as sensitive as the transfer of the personal data of more than 50 million Americans to Google, the oversight should be extensive. Every aspect needed to be pored over to ensure that it complied with federal rules controlling the confidential handling of protected health information under the 1996 HIPAA legislation."

Wednesday, September 4, 2019

The Ethics of Hiding Your Data From the Machines; Wired, August 22, 2019

Molly Wood, Wired; The Ethics of Hiding Your Data From the Machines

"In the case of the company I met with, the data collection they’re doing is all good. They want every participant in their longitudinal labor study to opt in, and to be fully informed about what’s going to happen with the data about this most precious and scary and personal time in their lives.

But when I ask what’s going to happen if their company is ever sold, they go a little quiet."

Thursday, March 7, 2019

Scientists Raise Concerns About Revisions to Human Research Regulations; The Scientist, February 19, 2019

Katarina Zimmer, The Scientist; Scientists Raise Concerns About Revisions to Human Research Regulations

"When Henrietta Lacks visited the Johns Hopkins Medical Center in the 1950s to be treated for cervical cancer, she had no idea that some of her cancer cells would be used to create one of the most scientifically valuable and financially profitable cell lines that is used in labs today. Nor was she asked for permission.

Lacks’s experience has become nationally acknowledged as a shameful episode in the history of biomedical research in the US—particularly after the publication of a popular book about Lacks and her family—and forced the scientific community to consider how to conduct ethical research with human samples. The case was one of the reasons for a heated debate during a recent, six-year-long process of revising the Common Rule, a package of regulations adopted in the 1990s intended to ensure that all federally funded research conducted on human subjects is done ethically.

The revisions, enacted last month, are an attempt to strike a better balance between patients’ need for privacy and the benefits of using their tissue for research. In a paper published January 31 in JAMA Oncology, a group of clinicians and ethicists from the University of Michigan and the University of Pennsylvania argue that the revisions could have unintended consequences for research with various types of biospecimens, and propose that regulators should consider them differently when creating research protections."

Thursday, January 31, 2019

Facebook has been paying teens $20 a month for access to all of their personal data; Vox, January 30, 2019

Kaitlyn Tiffany, Vox; Facebook has been paying teens $20 a month for access to all of their personal data

"The shocking “research” program has restarted a long-standing feud between Facebook and Apple.

Facebook, now entering a second year of huge data-collection scandals, can’t really afford this particular news story. However, it’s possible the company just weighed the risks of public outrage against the benefits of the data and made a deliberate choice: Knowing which apps people are using, how they’re using them, and for how long is extremely useful information for Facebook."