Sunday, September 1, 2024

QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS; Creative Commons, July 24, 2024

Anna Tumadóttir, Creative Commons; QUESTIONS FOR CONSIDERATION ON AI & THE COMMONS

"The intersection of AI, copyright, creativity, and the commons has been a focal point of conversations within our community for the past couple of years. We’ve hosted intimate roundtables, organized workshops at conferences, and run public events, digging into the challenging topics of credit, consent, compensation, transparency, and beyond. All the while, we’ve been asking ourselves:  what can we do to foster a vibrant and healthy commons in the face of rapid technological development? And how can we ensure that creators and knowledge-producing communities still have agency?...

We recognize that there is a perceived tension between openness and creator choice. Namely, if we give creators choice over how to manage their works in the face of generative AI, we may run the risk of shrinking the commons. To potentially overcome this tension, or at least better understand the effect of generative AI on the commons, we believe that finding a way for creators to indicate “no, unless…” would be positive for the commons. Our consultations over the course of the last two years have confirmed that:

  • Folks want more choice over how their work is used.
  • If they have no choice, they might not share their work at all, whether under a CC license or under strict copyright.

If these views are as wide-ranging as we perceive, we feel it is imperative that we explore an intervention, and bring far more nuance into how this ecosystem works.

Generative AI is here to stay, and we’d like to do what we can to ensure it benefits the public interest. We are well-positioned with the experience, expertise, and tools to investigate the potential of preference signals.

Our starting point is to identify what types of preference signals might be useful. How do these vary or overlap in the cultural heritage, journalism, research, and education sectors? How do needs vary by region? We’ll also explore exactly how we might structure a preference signal framework so it’s useful and respected, asking, too: does it have to be legally enforceable, or is the power of social norms enough?

Research matters. It takes time, effort, and most importantly, people. We’ll need help as we do this. We’re seeking support from funders to move this work forward. We also look forward to continuing to engage our community in this process. More to come soon."
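The post stops short of saying what a “no, unless…” signal would actually look like in practice. Purely as an illustration of the idea, and not anything Creative Commons has proposed, a machine-readable signal attached to a work might resemble the following Python sketch; every field name here is invented for this example.

# Hypothetical sketch of a creator's "no, unless..." preference signal.
# None of these names come from Creative Commons; they are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AIPreferenceSignal:
    """Machine-readable 'no, unless...' declaration for generative AI use of a work."""
    work_url: str                                      # the work the signal applies to
    default: str = "no"                                # baseline: do not use for AI training
    allowed_uses: list = field(default_factory=list)   # the "unless..." exceptions
    requires_credit: bool = True                       # attribution expected when an exception applies
    requires_compensation: bool = False                # whether payment is expected

# Example: a creator allows only narrow exceptions to the default "no".
signal = AIPreferenceSignal(
    work_url="https://example.org/photo-essay",
    allowed_uses=["non-commercial research", "accessibility tools"],
)
print(signal.default, signal.allowed_uses)

Whether a crawler or model trainer would be obliged to honor such a signal is exactly the open question the post raises: legal enforceability versus the power of social norms.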

ASU workgroup addresses ethical questions about the use of AI in higher ed; ASU News, August 27, 2024

ASU News; ASU workgroup addresses ethical questions about the use of AI in higher ed

"As artificial intelligence becomes more ubiquitous in our everyday lives, the AI and Ethics Workgroup at Arizona State University's Lincoln Center for Applied Ethics is working to establish ethical guidelines and frameworks for the deployment of AI technologies. 

Composed of experts from a variety of fields, the workgroup is dedicated to navigating the complex ethical challenges arising from rapid advancements in AI. The group published their first white paper earlier this month, which focuses on the use of AI tools in higher education.

The workgroup’s co-chairs are Sarah Florini, the associate director of the Lincoln Center and an associate professor of film and media studies, and Nicholas Proferes, an associate professor in ASU’s School of Social and Behavioral Sciences.

Florini and Proferes shared some insights into their workgroup’s research process and their publication, “AI and Higher Education: Questions and Projections.”...

Q: What can educators and institutions start doing today to instill more responsible, ethical adoption of AI-related technologies?

Florini: Get involved and participate in the conversations surrounding these technologies. We all need to be part of the efforts to shape how they will be integrated into colleges and universities. The terrain around AI is moving quickly, and there are many stakeholders with diverging opinions about the best course of action. We all need to be developing a critical understanding of these technologies and contributing to the process of determining how they align with our values.

Proferes: Have conversations with your community. Not just your peers, but with every stakeholder who might be impacted. Create spaces for that dialogue. Map out what the collective core values you want to achieve with the technology are, and then develop policies and procedures that can help support that.

But also, be willing to revisit these conversations. Very often with tech development, ethics is treated as a checkbox, rather than an ongoing process of reflection and consideration. Living wisely with technology requires phronesis, or practical wisdom. That’s something that’s gained over time through practice. Not a one-and-done deal."

A bill to protect performers from unauthorized AI heads to California governor; NPR, August 30, 2024

NPR; A bill to protect performers from unauthorized AI heads to California governor

"Other proposed guardrails

In addition to AB 2602, the performers’ union is backing California bill AB 1836 to protect deceased performers’ intellectual property from digital replicas.

On a national level, entertainment industry stakeholders, including SAG-AFTRA, the Recording Academy, and the MPA, are supporting the “NO FAKES Act” (the Nurture Originals, Foster Art, and Keep Entertainment Safe Act) introduced in the Senate. That bill would make it illegal to create an unauthorized digital replica of any American.

Around the country, legislators have proposed hundreds of laws to regulate AI more generally. For example, California lawmakers recently passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which regulates AI models such as ChatGPT.

“It's vital and it's incredibly urgent because legislation, as we know, takes time, but technology matures exponentially. So we're going to be constantly fighting the battle to stay ahead of this,” said voice performer Zeke Alton, a member of SAG-AFTRA’s negotiating committee. “If we don't get to know what's real and what's fake, that is starting to pick away at the foundations of democracy.”

Alton says in the fight for AI protections of digital doubles, Hollywood performers have been the canary in the coal mine. “We are having this open conversation in the public about generative AI and using it to replace the worker instead of having the worker use it as a tool for their own efficiency,” he said. “But it's coming for every other industry, every other worker. That's how big this sea change in technology is. So what happens here is going to reverberate.”"

Saturday, August 31, 2024

More Art School Classes Are Teaching AI This Fall Despite Ethical Concerns and Ongoing Lawsuits; ARTnews, August 30, 2024

Karen K. Ho, ARTnews; More Art School Classes Are Teaching AI This Fall Despite Ethical Concerns and Ongoing Lawsuits


"When undergraduate students return to the Ringling College of Art and Design this fall, one of the school’s newest offerings will be an AI certificate

Ringling is just the latest of several top art schools to offer undergraduate students courses that focus on or integrate artificial intelligence tools and techniques.

ARTnews spoke to experts and faculty at Ringling, the Rhode Island School of Design (RISD), Carnegie Mellon University (CMU), and Florida State University about how they construct curriculum, how they teach AI in light of its limitations and of ethical and legal concerns, and why they think it’s important for artists to learn."

ChatGPT Spirituality: Connection or Correction?; Geez, Spring 2024 Issue: February 27, 2024

Rob Saler, Geez; ChatGPT Spirituality: Connection or Correction?

"Earlier this year, I was at an academic conference sitting with friends at a table. This was around the time that OpenAI technology – specifically ChatGPT – was beginning to make waves in the classroom. Everyone was wondering how to adapt to the new technology. Even at that early point, differentiated viewpoints ranged from incorporation (“we can teach students to use it well as part of the curriculum of the future”) to outright resistance (“I am going back to oral exams and blue book written in-class tests”).

During the conversation, a very intelligent friend casually remarked that she recently began using ChatGPT for therapy – not emergency therapeutic intervention, but more like life coaching and as a sounding board for vocational discernment. Because we all respected her sincerity and intellect, several of us (including me) suppressed our immediate shock and listened as she laid out a very compelling case for ChatGPT as a therapy supplement – and perhaps, in the case of those who cannot afford, or choose not to pay for, sessions with a human therapist, a therapy substitute. ChatGPT is free (assuming one has internet), available 24/7, shapeable to one’s own interests over time, (presumably) confidential, etc…

In my teaching on AI and technology throughout the last semester, I used this example with theology students (some of whom are also receiving licensure as therapists) as a way of pressing them to examine their own assumptions about AI – and then, by extension, their own assumptions about ontology. If the gut-level reaction to ChatGPT therapy is that it is not “real,” then – in Matrix-esque fashion – we are called to ask how we should define “real.” If a person has genuine insights or intense spiritual experiences engaging in vocational discernment with a technology that can instantaneously generate increasingly relevant responses to prompts, then what is the locus of reality that is missing?"

‘Dangerous and un-American’: new recording of JD Vance’s dark vision of women and immigration; The Guardian, August 31, 2024

The Guardian; ‘Dangerous and un-American’: new recording of JD Vance’s dark vision of women and immigration

"Vance also talked about institutions like universities and the media as components of a “broken elite system”, and portrayed their inhabitants as enemies whom conservatives would need to reckon with.

“There is no way for a conservative to accomplish our vision of society unless we’re willing to strike at the heart of the beast. That’s the universities.”"

Friday, August 30, 2024

AI Ethics Part Two: AI Framework Best Practices; Mondaq, August 29, 2024

Laura Gibbs, Ben Verley, Justin Gould, Kristin Morrow, Rebecca Reeder, Mondaq; AI Ethics Part Two: AI Framework Best Practices

"Ethical artificial intelligence frameworks are still emerging across both public and private sectors, making the task of building a responsible AI program particularly challenging. Organizations often struggle to define the right requirements and implement effective measures. So, where do you start if you want to integrate AI ethics into your operations?

In Part I of our AI ethics series, we highlighted the growing pressure on organizations to adopt comprehensive ethics frameworks and the impact of failing to do so. We emphasized the key motivators for businesses to proactively address potential risks before they become reality.

This article delves into what an AI ethics framework is and why it is vital for mitigating these risks and fostering responsible AI use. We review AI ethics best practices, explore common challenges and pitfalls, and draw insights from the experiences of leading industry players across various sectors. We also discuss key considerations to ensure an effective and actionable AI ethics framework, providing a solid foundation for your journey towards ethical AI implementation.

AI Ethics Framework: Outline

A comprehensive AI ethics framework offers practitioners a structured guide with established rules and practices, enabling the identification of control points, performance boundaries, responses to deviations, and acceptable risk levels. Such a framework ensures timely ethical decision-making by asking the right questions. Below, we detail the main functions, core components, and key controls necessary for a robust AI ethics framework."
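To make those elements a bit more concrete, here is a minimal, purely illustrative sketch of how control points, performance boundaries, responses to deviations, and acceptable risk levels could be written down as reviewable configuration. The structure and names are assumptions made for this example, not part of the framework the article describes.

# Illustrative only: encode each control point with its performance boundary
# and the documented response when that boundary is crossed.
from dataclasses import dataclass

@dataclass
class ControlPoint:
    name: str                 # where in the AI lifecycle the check happens
    metric: str               # what is measured at that point
    acceptable_range: tuple   # performance boundary: (min, max) tolerated values
    on_deviation: str         # agreed response when the metric leaves the range

framework = [
    ControlPoint("pre-deployment review", "demographic parity gap", (0.0, 0.05),
                 "block release and escalate to the review board"),
    ControlPoint("production monitoring", "prediction drift score", (0.0, 0.2),
                 "retrain or roll back the model"),
]

for cp in framework:
    low, high = cp.acceptable_range
    print(f"{cp.name}: keep {cp.metric} within [{low}, {high}]; otherwise {cp.on_deviation}")

Writing the boundaries and responses down in one reviewable place is what makes "asking the right questions" repeatable rather than ad hoc.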

Essential Skills for IT Professionals in the AI Era; IEEE Spectrum, August 27, 2024

IEEE Spectrum; Essential Skills for IT Professionals in the AI Era

"Artificial Intelligence is transforming industries worldwide, creating new opportunities in health care, finance, customer service, and other disciplines. But the ascendance of AI raises concerns about job displacement, especially as the technology might automate tasks traditionally done by humans.

Jobs that involve data entry, basic coding, and routine system maintenance are at risk of being eliminated—which might worry new IT professionals. AI also creates new opportunities for workers, however, such as developing and maintaining new systems, data analysis, and cybersecurity. If IT professionals enhance their skills in areas such as machine learning, natural language processing, and automation, they can remain competitive as the job market evolves.

Here are some skills IT professionals need to stay relevant, as well as advice on how to thrive and opportunities for growth in the industry...

Key insights into AI ethics

Understanding the ethical considerations surrounding AI technologies is crucial. Courses on AI ethics and policy provide important insights into ethical implications, government regulations, stakeholder perspectives, and AI’s potential societal, economic, and cultural impacts.

I recommend reviewing case studies to learn from real-world examples and to get a grasp of the complexities surrounding ethical decision-making. Some AI courses explore best practices adopted by organizations to mitigate risks."