Showing posts with label nuance. Show all posts

Sunday, September 1, 2024

QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS; Creative Commons, July 24, 2024

Anna Tumadóttir, Creative Commons; QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS

"The intersection of AI, copyright, creativity, and the commons has been a focal point of conversations within our community for the past couple of years. We’ve hosted intimate roundtables, organized workshops at conferences, and run public events, digging into the challenging topics of credit, consent, compensation, transparency, and beyond. All the while, we’ve been asking ourselves:  what can we do to foster a vibrant and healthy commons in the face of rapid technological development? And how can we ensure that creators and knowledge-producing communities still have agency?...

We recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome this tension, or at least better understand the effect of generative AI on the commons, we believe that finding a way for creators to indicate “no, unless…” would be positive for the commons. Our consultations over the course of the last two years have confirmed that:

  • Folks want more choice over how their work is used.
  • If they have no choice, they might not share their work at all (whether under a CC license or strict copyright).

If these views are as wide-ranging as we perceive, we feel it is imperative that we explore an intervention and bring far more nuance into how this ecosystem works.

Generative AI is here to stay, and we’d like to do what we can to ensure it benefits the public interest. We are well-positioned with the experience, expertise, and tools to investigate the potential of preference signals.

Our starting point is to identify what types of preference signals might be useful. How do these vary or overlap in the cultural heritage, journalism, research, and education sectors? How do needs vary by region? We’ll also explore exactly how we might structure a preference signal framework so it’s useful and respected, asking, too: does it have to be legally enforceable, or is the power of social norms enough?

Research matters. It takes time, effort, and most importantly, people. We’ll need help as we do this. We’re seeking support from funders to move this work forward. We also look forward to continuing to engage our community in this process. More to come soon."

Thursday, October 4, 2018

The push to create AI-friendly ethics codes is stripping all nuance from morality; Quartz, October 4, 2018

Olivia Goldhill, Quartz; The push to create AI-friendly ethics codes is stripping all nuance from morality

"A paper led by Veljko Dubljević, neuroethics researcher at North Carolina State University, published yesterday (Oct. 2) in PLOS ONE, claims to establish not just the answer to one ethical question, but the entire groundwork for how moral judgements are made.

According to the paper’s “Agent Deed Consequence model,” three things are taken into account when making a moral decision: the person doing the action, the moral action itself, and the consequences of that action. To test this theory, the researchers created moral scenarios that varied details about the agent, the action, and the consequences."

Sunday, January 31, 2016

How Europe is fighting to change tech companies' 'wrecking ball' ethics; Guardian, January 30, 2016

Julia Powles and Carissa Veliz, Guardian; How Europe is fighting to change tech companies' 'wrecking ball' ethics:
"Culture and ethics beyond law...
European politicians want the new General Data Protection Regulation – the most-debated piece of EU legislation ever – to be part of the solution, along with the remainder of Europe’s pioneering fundamental rights framework. But law is not, and cannot be, the whole. Mostly, it’s about culture and ethics.
One European institution wants to seize this broader challenge. The European data protection supervisor, or EDPS, is the EU’s smallest entity but also one of its most ambitious, and immediately followed Schulz’s address by announcing a new ethics advisory group.
EDPS hopes this group will lead an inclusive debate on human rights, technology, markets and business models in the 21st century from an ethical perspective.
Six individuals have been selected to spearhead what is initially a two-year investigative, consultative and report-writing initiative: iconoclastic American computer scientist and writer Jaron Lanier; Dutch data analytics consultant Aurélie Pols; and four philosophers, Peter Burgess, Antoinette Rouvroy, Luciano Floridi and Jeroen van den Hoven, who bring experience in political and legal philosophy, logic, and the ethics and philosophy of technology.
Technology needs a moral compass
Bringing ethics into the data debate is essential."