Showing posts with label data ethics.

Wednesday, November 20, 2024

What 23andMe Owes its Users; The Hastings Center, November 18, 2024

Jonathan LoTempio, Jr., The Hastings Center; What 23andMe Owes its Users

"In the intervening years, 23andMe has sent you new findings related to your health status. You wonder: Is my data protected? Can I get it back?

There are protections for users of 23andMe and other direct-to-consumer genetic testing companies. Federal laws, including the Genetic Information Nondiscrimination Act (GINA) and the Affordable Care Act, protect users from employment and insurance discrimination. Residents of certain states including California have agencies where they can register complaints. 23andMe, which is based in California, has a policy in line with California citizens’ new right to access and delete their data. European residents have even more extensive rights over their digital data.

American users can rest assured that there are strong legal mechanisms under the Committee on Foreign Investment in the United States (CFIUS) that can block foreign acquisition of U.S. firms on national security grounds. For certain critical sectors like biotech, the committee may consider, among other factors, whether a proposed transaction would result in the U.S. losing its place as a global industry leader as part of its review.

Any attempt by a foreign company to acquire 23andMe would be subject to a CFIUS review and could be blocked on national security grounds, particularly if the foreign company is headquartered in a “country of special concern” such as China, Russia, or Iran. As for acquisitions by U.S. companies, the legal landscape is a bit more Wild West. Buyers based in the U.S. could change policies to which users agreed long ago, in a world rather different than ours.

November 2024: With a new board, the immediate crisis at 23andMe has been averted. However, long-term concerns remain regarding potential buyers and how they might respond to 23andMe’s layoffs and the shuttering of its drug development arm, both of which suggest instability at the company. 23andMe and other DTC genetic testing companies should consider what they owe their users.

One thing they owe users is to implement a policy that, in the case of a sale, the companies will notify users multiple times and in multiple ways and give them the option of deleting their data."

Thursday, October 24, 2024

WITHOUT KNOWLEDGE OR CONSENT; ProPublica, October 24, 2024

Corey G. Johnson, ProPublica; WITHOUT KNOWLEDGE OR CONSENT

"FOR YEARS, America’s most iconic gun-makers turned over sensitive personal information on hundreds of thousands of customers to political operatives.

Those operatives, in turn, secretly employed the details to rally firearm owners to elect pro-gun politicians running for Congress and the White House, a ProPublica investigation has found.

The clandestine sharing of gun buyers’ identities — without their knowledge and consent — marked a significant departure for an industry that has long prided itself on thwarting efforts to track who owns firearms in America.

At least 10 gun industry businesses, including Glock, Smith & Wesson, Remington, Marlin and Mossberg, handed over names, addresses and other private data to the gun industry’s chief lobbying group, the National Shooting Sports Foundation. The NSSF then entered the gun owners’ details into what would become a massive database.

The data initially came from decades of warranty cards filled out by customers and returned to gun manufacturers for rebates and repair or replacement programs.

A ProPublica review of dozens of warranty cards from the 1970s through today found that some promised customers their information would be kept strictly confidential. Others said some information could be shared with third parties for marketing and sales. None of the cards informed buyers their details would be used by lobbyists and consultants to win elections."

Monday, October 14, 2024

ScienceAdviser: Shifting from harm to resilience (ScienceAdviser honors Indigenous Peoples’ Day); Science, October 14, 2024

Rodrigo Pérez Ortega, Science; ScienceAdviser: Shifting from harm to resilience

"Today, in honor of Indigenous Peoples’ Day, Science Staff Writer Rodrigo Pérez Ortega speaks with Diné genetic epidemiologist Krystal Tsosie about the holiday and the importance of Indigenous data sovereignty. The rest of this edition of ScienceAdviser is centered on research that is relevant to and/or being conducted by Indigenous scientists and communities...

Your work has focused on Indigenous data sovereignty. Can you tell me more about the current efforts pushing for Native tribes to have control over their own data?

One recent effort is the #DataBack movement, which is about reclaiming control over Indigenous data, specifically genomic and biological data that have been taken and stored without our consent. My colleague, Keolu Fox, and I have been advocating for Indigenous data sovereignty. We’ve even made stickers to raise awareness, and I love seeing them on water bottles and in public spaces. It’s a small, symbolic way to promote the idea that Indigenous data should be returned to the communities it belongs to."

Friday, October 11, 2024

23andMe is on the brink. What happens to all its DNA data?; NPR, October 3, 2024

NPR; 23andMe is on the brink. What happens to all its DNA data?

"As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

“I absolutely think this needs to be clarified,” Wiles said. “The company has undergone so many changes and so much turmoil that they need to figure out what they’re doing as a company. But when it comes to my genetic data, I really want to know what they plan on doing.”

Tuesday, September 24, 2024

LinkedIn is training AI on you — unless you opt out with this setting; The Washington Post, September 23, 2024

The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting

"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”

Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both [1,2]."

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."
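
To make the checklist above concrete, here is a minimal, hypothetical Python sketch of the kind of "AI-readiness" audit a data team might run before training a model. It touches just two of the criteria: accuracy (missing values) and bias (demographic skew). The field names, the 50% skew threshold, and the sample records are illustrative assumptions, not anything prescribed in the article.

```python
# Minimal, hypothetical sketch of an "AI-readiness" data audit.
# Checks two of the criteria above: accuracy (missing required fields)
# and bias (one demographic group dominating the sample).
# All field names and thresholds are illustrative assumptions.

from collections import Counter

def audit_ai_readiness(records, demographic_field="age_group",
                       required_fields=("outcome", "age_group")):
    """Count records with missing required fields and warn if any single
    demographic group makes up more than half the sample."""
    report = {"total": len(records), "missing": 0,
              "skew_warning": False, "group_counts": {}}

    # Accuracy check: flag records with empty or absent required fields.
    for rec in records:
        if any(rec.get(field) in (None, "") for field in required_fields):
            report["missing"] += 1

    # Bias check: warn when one group exceeds half the sample (hypothetical threshold).
    groups = Counter(rec.get(demographic_field) or "unknown" for rec in records)
    if records:
        report["skew_warning"] = max(groups.values()) / len(records) > 0.5
    report["group_counts"] = dict(groups)
    return report

if __name__ == "__main__":
    sample = [
        {"outcome": "approved", "age_group": "18-34"},
        {"outcome": "denied", "age_group": "18-34"},
        {"outcome": "approved", "age_group": ""},  # missing demographic value
    ]
    print(audit_ai_readiness(sample))
    # e.g. {'total': 3, 'missing': 1, 'skew_warning': True,
    #       'group_counts': {'18-34': 2, 'unknown': 1}}
```

In practice the bias and accuracy checks would be far richer (outcome parity across groups, human spot-checks of labels), but even a toy audit like this illustrates the article's point that the criteria build on one another.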

Wednesday, May 29, 2024

Why using dating apps for public health messaging is an ethical dilemma; The Conversation, May 28, 2024

Chancellor's Fellow, Usher Institute Centre for Biomedicine, Self and Society, The University of Edinburgh; Professor of Sociology, University of Manchester; and Lecturer in Nursing, University of Manchester, The Conversation; Why using dating apps for public health messaging is an ethical dilemma

"Future collaborations with apps should prioritise the benefit of users over those of the app businesses, develop transparent data policies that prevent users’ data from being shared for profit, ensure the apps’ commitment to anti-discrimination and anti-harrassment, and provide links to health and wellbeing services beyond the apps.

Dating apps have the potential to be powerful allies in public health, especially in reaching populations that have often been ignored. However, their use must be carefully managed to avoid compromising user privacy and safety or deepening marginalisation."

Tuesday, March 7, 2023

SAS' data ethics chief talks about keeping an ethical eye on AI; Axios, March 7, 2023

Axios; SAS' data ethics chief talks about keeping an ethical eye on AI

"The U.S. is at a crossroads when it comes to the future of artificial intelligence, as the technology takes dramatic leaps forward without much regulation in place, Reggie Townsend, director of SAS Institute's Data Ethics Practice, tells Axios.

Driving the news: Cary-based SAS is a giant in the world of data analytics, and the company and its customers are increasingly using AI to process data and make decisions. Townsend's role at the company puts him at the forefront of the conversation.

Why it matters: Artificial intelligence could soon impact nearly every aspect of our lives, from health care decisions to who gets loans."

Thursday, September 5, 2019

Does the data industry need a code of ethics?; The Scotsman, August 29, 2019

David Lee, The Scotsman; Does the data industry need a code of ethics?

"Docherty says the whole area of data ethics is still emerging: “It’s where all the hype is now – it used to be big data that everyone talked about, now it’s data ethics. It’s fundamental, and embedding it across an organisation will give competitive advantage.”

So what is The Data Lab, set up in 2015, doing itself in this ethical space? “We’re ensuring data ethics training is baked in to the core technology training of all Masters students, so they are asking all the right questions,” says Docherty."

Monday, April 8, 2019

Are big tech’s efforts to show it cares about data ethics another diversion?; The Guardian, April 7, 2019

John Naughton, The Guardian; Are big tech’s efforts to show it cares about data ethics another diversion?

"No less a source than Gartner, the technology analysis company, for example, has also sussed it and indeed has logged “data ethics” as one of its top 10 strategic trends for 2019...

Google’s half-baked “ethical” initiative is par for the tech course at the moment. Which is only to be expected, given that it’s not really about morality at all. What’s going on here is ethics theatre modelled on airport-security theatre – ie security measures that make people feel more secure without doing anything to actually improve their security.

The tech companies see their newfound piety about ethics as a way of persuading governments that they don’t really need the legal regulation that is coming their way. Nice try, boys (and they’re still mostly boys), but it won’t wash. 

Postscript: Since this column was written, Google has announced that it is disbanding its ethics advisory council – the likely explanation is that the body collapsed under the weight of its own manifest absurdity."

Friday, December 21, 2018

What are tech companies doing about ethical use of data? Not much; The Conversation, November 27, 2018

The Conversation; What are tech companies doing about ethical use of data? Not much

"Our relationship with tech companies has changed significantly over the past 18 months. Ongoing data breaches, and the revelations surrounding the Cambridge Analytica scandal, have raised concerns about who owns our data, and how it is being used and shared.

Tech companies have vowed to do better. Following his grilling by both the US Congress and the EU Parliament, Facebook CEO Mark Zuckerberg said Facebook will change the way it shares data with third-party suppliers. There is some evidence that this is occurring, particularly with advertisers.

But have tech companies really changed their ways? After all, data is now a primary asset in the modern economy.

To find whether there’s been a significant realignment between community expectations and corporate behaviour, we analysed the data ethics principles and initiatives that various global organisations have committed to since the various scandals broke.

What we found is concerning. Some of the largest organisations have not demonstrably altered practices, instead signing up to ethics initiatives that are neither enforced nor enforceable."

Thursday, November 1, 2018

A Rising Crescendo Demands Data Ethics and Data Responsibility; Forbes, October 29, 2018

Randy Bean, Forbes; A Rising Crescendo Demands Data Ethics and Data Responsibility

"It is against this backdrop that data ethics has rapidly moved to the forefront of any meaningful discussion about data. A spate of recent articles -- Never Heard of Data Ethics? You Will Soon, It’s Time to Talk About Data Ethics, Data Ethics: The New Competitive Advantage, Will Democracy Survive Big Data and Artificial Intelligence – underscore the increasing urgency and highlight the ethical considerations that organizations must address when managing data as an asset, and considering its impact on individual rights and privacy.

I recently convened two thought-leadership roundtables of Chief Data Officers and executives with responsibility for data initiatives within their organizations. The increased focus and concern for the ethical use of data is born out of widespread reaction to recent and highly publicized misuses of data that represent breaches of public trust -- whether this be unauthorized data sharing by social media platforms, reselling of customer information by businesses, or biased algorithms that reinforce social inequalities.

It is within this context that we are now witnessing increased corporate attention to data for good initiatives. Companies are increasingly recognizing and acknowledging that ethical action and doing well can be synonymous with doing good. A few corporations, notably Mastercard through their Center for Inclusive Growth, Bloomberg through Bloomberg Philanthropies and the Data for Good Exchange, and JP Morgan through the JP Morgan Institute have been among those corporations at the forefront of ethical data use for public good...

Increasingly, corporations are focusing on issues of data ethics, data privacy, and data philanthropy. An executive representing one of the nation’s largest insurance companies noted, “We are spending more hours on legal and ethical review of data than we are on data management.”"

Thursday, October 25, 2018

Even The Data Ethics Initiatives Don't Want To Talk About Data Ethics; Forbes, October 23, 2018

Kalev Leetaru, Forbes; Even The Data Ethics Initiatives Don't Want To Talk About Data Ethics

"Two weeks ago, a new data ethics initiative, the Responsible Computer Science Challenge, caught my eye. Funded by the Omidyar Network, Mozilla, Schmidt Futures and Craig Newmark Philanthropies, the initiative will award up to $3.5M to “promising approaches to embedding ethics into undergraduate computer science education, empowering graduating engineers to drive a culture shift in the tech industry and build a healthier internet.” I was immediately excited about a well-funded initiative focused on seeding data ethics into computer science curricula, getting students talking about ethics from the earliest stages of their careers. At the same time, I was concerned about whether even such a high-profile effort could possibly reverse the tide of anti-data-ethics that has taken root in academia and what impact it could realistically have in a world in which universities, publishers, funding agencies and employers have largely distanced themselves from once-sacrosanct data ethics principles like informed consent and the right to opt out. Surprisingly, for an initiative focused on evangelizing ethics, the Challenge declined to answer any of the questions I posed it regarding how it saw its efforts as changing this. Is there any hope left for data ethics when the very initiatives designed to help teach ethics don’t want to talk about ethics?"

Tuesday, October 9, 2018

Canadian Medical Association leaves international group after president plagiarizes past president’s speech; Retraction Watch, October 8, 2018

Retraction Watch; Canadian Medical Association leaves international group after president plagiarizes past president’s speech

[Kip Currier: Quick question: How do you know if the scientific papers you're reading, and perhaps relying upon, represent "good" science or have been discredited? Enter Retraction Watch.

While working on a Research Misconduct chapter for my ethics textbook, I was reminded of Retraction Watch by one of the lectures in my Information Ethics course. Retraction Watch is a project of its parent organization, The Center for Scientific Integrity, a 501(c)(3) non-profit supported by grants from funders such as the John D. and Catherine T. MacArthur Foundation.

The Mission of The Center for Scientific Integrity is "to promote transparency and integrity in science and scientific publishing, and to disseminate best practices and increase efficiency in science."

One of the Center's 4 goals is a freely accessible "database of retractions, expressions of concern and related publishing events, generated by the work of Retraction Watch."

Exploring some of the content areas on the Retraction Watch site, I was enticed to check out the so-called "Retraction Watch Leaderboard"--billed by Retraction Watch as their "unofficial list" ranking individuals by the number of papers that have been retracted. Not a list one wants to make! An interesting gender-based observation by Retraction Watch, which bears further study and elucidation:

"We note that all of the top 30 are men, which agrees with the general findings of a 2013 paper suggesting that men are more likely to have papers retracted for fraud."

Another good-to-know-about section of Retraction Watch is its "Top 10 Most Highly Cited Retracted Papers"...Here's looking at you, Andrew Wakefield--still "in the house", presently at #2, for your 1998 invalidated autism/vaccines paper co-authored with 12 other researchers (!), not retracted until 12 years later in 2010 (!), and, as of October 9, 2018, cited 499 times after retraction (!):


"Ever curious which retracted papers have been most cited by other scientists? Below, we present the list of the 10 most highly cited retractions. Readers will see some familiar entries, such as the infamous Lancet paper by Andrew Wakefield that originally suggested a link between autism and childhood vaccines. You’ll note that many papers — including the #1 most cited paper — received more citations after they were retracted, which research has shown is an ongoing problem."

Retraction Watch also reports examples of plagiarism, as evinced by this October 8, 2018 story about the incoming World Medical Association (WMA) president, Leonid Eidelman, delivering a speech that was, allegedly, a "mashup" of remarks from a past WMA president's 2014 speech to the WMA, an MIT press release, and a telemedicine company's website. Quite a patchwork quilt of "creative" unattributed sourcing: Canadian Medical Association leaves international group after president plagiarizes past president's speech.]

Thursday, September 27, 2018

Cornell Food Researcher's Downfall Raises Larger Questions For Science; NPR, September 26, 2018

Brett Dahlberg, NPR; Cornell Food Researcher's Downfall Raises Larger Questions For Science

"The fall of a prominent food and marketing researcher may be a cautionary tale for scientists who are tempted to manipulate data and chase headlines.

Brian Wansink, the head of the Food and Brand Lab at Cornell University, announced last week that he would retire from the university at the end of the academic year. Less than 48 hours earlier, JAMA, a journal published by the American Medical Association, had retracted six of Wansink's studies, after Cornell told the journal's editors that Wansink had not kept the original data and the university could not vouch for the validity of his studies."

Wednesday, August 29, 2018

The problem with ethics in data; Human Resources Director New Zealand, August 29, 2018

Emily Douglas, Human Resources Director New Zealand; The problem with ethics in data

"The problem of ‘ethics in data’ has become entrenched in HR. A recent paper published in Philosophical Transactions A by Luciano Floridi and Mariarosaria Taddeo, questioned the nature of ‘data ethics’ and what it means in a corporate setting.

“While the data ethics landscape is complex, we are confident that these ethical challenges can be addressed successfully,” commented Floridi.

“Striking a robust balance between enabling innovation in data science technology, and respecting privacy and human rights will not be an easy or simple task. But the alternative, failing to advance both the ethics and the science of data, would have regrettable consequences.”

It serves as both a scary reminder of what exactly is at stake here and a rousing challenge for HR practitioners. HR should take on the role of gatekeeper to employee data – rather than procurer."

Wednesday, March 28, 2018

Cambridge Analytica controversy must spur researchers to update data ethics; Nature, March 27, 2018

Editorial, Nature; Cambridge Analytica controversy must spur researchers to update data ethics

"Ethics training on research should be extended to computer scientists who have not conventionally worked with human study participants.

Academics across many fields know well how technology can outpace its regulation. All researchers have a duty to consider the ethics of their work beyond the strict limits of law or today’s regulations. If they don’t, they will face serious and continued loss of public trust."

Saturday, January 30, 2016

White House denies clearance to tech researcher with links to Snowden; The Guardian, January 29, 2016

Danny Yadron, The Guardian; White House denies clearance to tech researcher with links to Snowden

"The White House has denied a security clearance to a member of its technology team who previously helped report on documents leaked by Edward Snowden.

Ashkan Soltani, a Pulitzer prize-winning journalist and recent staffer at the Federal Trade Commission, recently began working with the White House on privacy, data ethics and technical outreach. The partnership raised eyebrows when it was announced in December because of Soltani’s previous work with the Washington Post, where he helped analyze and protect a cache of National Security Agency documents leaked by Snowden.

His departure raises questions about the US government’s ability to partner with the broader tech community, where people come from a more diverse background than traditional government staffers."