Showing posts with label AI ethics.

Wednesday, January 8, 2025

New SUNY requirements to focus on civic skills and AI ethics; WHEC News10NBC, January 7, 2025

WHEC News10NBC; New SUNY requirements to focus on civic skills and AI ethics 

[Kip Currier: This is an intriguing development by the SUNY higher education system. It will be interesting to see how these efforts are assessed in future years (or even sooner, as I can imagine these general education requirements will likely catalyze spirited discussion about how to define -- and who gets to define -- "civic discourse", "healthy dialogues", and even "AI ethics").

Also, the end of the story notes that "AI assisted with the formatting of the story" and provides a link to learn more about how the news station uses AI. 

This is their policy, at present:

"News 10 NBC’S guidelines for using Artificial Intelligence

News10NBC uses artificial intelligence (A.I.) tools to help format some of our news stories from broadcast style to our digital print style. We do not use A.I. for research, content, reporting, or imaging. All news content that uses News10NBC’s A.I. resources is reviewed and approved in the News10NBC editorial process before publishing."

It's good practice, in terms of transparency and user awareness, that they do specify the current parameters of their use of AI.

A question to keep our eyes on:

  • How might this policy change as AI evolves, or if/when economic circumstances shift, as we know they do? For example, news organizations like this one may at some point downsize and have to do more reporting with fewer human reporters and news staff.]


News story:

"The State University of New York (SUNY) system is set to introduce new general education requirements for its students.

These updates will add a civic discourse component to the core competencies of the general education curriculum. 

According to SUNY, this new requirement aims to ensure that “students gain the skills necessary to participate in civic life and engage in healthy dialogues in order to secure the future of our democracy.”

“SUNY is committed to academic excellence, which includes a robust general education curriculum,” said SUNY Chancellor King. “We are proud that every SUNY student will be expected to demonstrate the knowledge and skills that advance respectful and reasoned discourse, and that we will help our students recognize and ethically use AI as they consider various information sources.”

Additionally, students will be required to learn how to recognize and ethically use artificial intelligence. The new requirements are expected to start in fall 2026."

Monday, January 6, 2025

We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion; Detroit Free Press, January 6, 2025

Nancy Kaffer, Detroit Free Press; We're using AI for stupid and unnecessary reasons. What if we just stopped? | Opinion

"We're jumping feet first into unreliable, unproven tech with devastating environmental costs and a dense thicket of ethical problems.

It's a bad idea. And — because I enjoy shouting into the void — we really ought to stop."

Friday, December 27, 2024

New Course Creates Ethical Leaders for an AI-Driven Future; George Mason University, December 10, 2024

Buzz McClain, George Mason University; New Course Creates Ethical Leaders for an AI-Driven Future

"While the debates continue over artificial intelligence’s possible impacts on privacy, economics, education, and job displacement, perhaps the largest question regards the ethics of AI. Bias, accountability, transparency, and governance of the powerful technology are aspects that have yet to be fully answered.

A new cross-disciplinary course at George Mason University is designed to prepare students to tackle the ethical, societal, and governance challenges presented by AI. The course, AI: Ethics, Policy, and Society, will draw expertise from the Schar School of Policy and Government, the College of Engineering and Computing (CEC), and the College of Humanities and Social Sciences (CHSS).

The master’s degree-level course begins in spring 2025 and will be taught by Jesse Kirkpatrick, a research associate professor in the CEC, the Department of Philosophy, and codirector of the Mason Autonomy and Robotics Center.

The course is important now, said Kirkpatrick, because “artificial intelligence is transforming industries, reshaping societal norms, and challenging long-standing ethical frameworks. This course provides critical insights into the ethical, societal, and policy implications of AI at a time when these technologies are increasingly deployed in areas like healthcare, criminal justice, and national defense.”"

Why ethics is becoming AI's biggest challenge; ZDNet, December 27, 2024

Joe McKendrick, ZDNet; Why ethics is becoming AI's biggest challenge

"Many of the technical issues associated with artificial intelligence have been resolved, but the hard work surrounding AI ethics is now coming to the forefront. This is proving even more challenging than addressing technology issues.

The challenge for development teams at this stage is "to recognize that creating ethical AI is not strictly a technical problem but a socio-technical problem," said Phaedra Boinodiris, global leader for trustworthy AI at IBM Consulting, in a recent podcast. This means extending AI oversight beyond IT and data management teams across organizations.

To build responsibly curated AI models, "you need a team composed of more than just data scientists," Boinodiris said. "For decades, we've been communicating that those who don't have traditional domain expertise don't belong in the room. That's a huge misstep."

It's also notable that well-curated AI models "are also more accurate models," she added. To achieve this, "the team designing the model should be multidisciplinary rather than siloed." The ideal AI team should include "linguistics and philosophy experts, parents, young people, everyday people with different life experiences from different socio-economic backgrounds," she urged. "The wider the variety, the better." Team members are needed to weigh in on the following types of questions:

  • "Is this AI solving the problem we need it to?"
  • "Is this even the right data according to domain experts?"
  • "What are the unintended effects of AI?"
  • "How can we mitigate those effects?""

Thursday, November 28, 2024

Is using AI tools innovation or exploitation? 3 ways to think about the ethics; The Conversation, November 27, 2024

Dean and Professor, College of University Libraries and Learning Sciences, University of New Mexico, The Conversation; Is using AI tools innovation or exploitation? 3 ways to think about the ethics

"Across industries, workers encounter more immediate ethical questions about whether to use AI every day. In a trial by the U.K.-based law firm Ashurst, three AI systems dramatically sped up document review but missed subtle legal nuances that experienced lawyers would catch. Similarly, journalists must balance AI’s efficiency for summarizing background research with the rigor required by fact-checking standards.

These examples highlight the growing tension between innovation and ethics. What do AI users owe the creators whose work forms the backbone of those technologies? How do we navigate a world where AI challenges the meaning of creativity – and humans’ role in it?

As a dean overseeing university libraries, academic programs and the university press, I witness daily how students, staff and faculty grapple with generative AI. Looking at three different schools of ethics can help us go beyond gut reactions to address core questions about how to use AI tools with honesty and integrity."

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT's Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference between AI and, say, lock-picking tools. Lock-picking tools can open your door if you lock yourself out, and they can also open others' doors. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special when we talk about AI ethics, or, say, computer security ethics, or the ethics related to how to use a gun, for example. But what is different is, as AI is so complex, it's beyond the knowledge of many of us how it works. Sometimes it looks ethical, but maybe what's behind it is amplifying bias by using the AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important one is education: if you know what AI is about, how it works, and what AI can do and what AI cannot. I think for now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just any software system. So education is very, very important to help us demystify AI accordingly, so we can talk about AI ethics. I want to emphasize transparency. If AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing: if we don't know what's inside, at least we have some assessment tools to know whether there's a risk or not in certain circumstances, whether it can generate a harmful result or not, very much like the stress testing of the financial system after 2008.”

Tuesday, October 15, 2024

AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members; Business Wire, October 15, 2024

Business Wire; AI Ethics Council Welcomes LinkedIn Co-Founder Reid Hoffman and Commentator, Founder and Author Van Jones as Newest Members

"The AI Ethics Council, founded by OpenAI CEO Sam Altman and Operation HOPE CEO John Hope Bryant, announced today that Reid Hoffman (Co-Founder of LinkedIn and Inflection AI and Partner at Greylock) and Van Jones (CNN commentator, Dream Machine Founder and New York Times best-selling author) have joined as members. Formed in December 2023, the Council brings together an interdisciplinary body of diverse experts including civil rights activists, HBCU presidents, technology and business leaders, clergy, government officials and ethicists to collaborate and set guidelines on ways to ensure that traditionally underrepresented communities have a voice in the evolution of artificial intelligence and to help frame the human and ethical considerations around the technology. Ultimately, the Council also seeks to help determine how AI can be harnessed to create vast economic opportunities, especially for the underserved.

Mr. Hoffman and Mr. Jones join an esteemed group on the Council, which will serve as a leading authority in identifying, advising on and addressing ethical issues related to AI. In addition to Mr. Altman and Mr. Bryant, founding AI Ethics Council members include:..."

Friday, September 6, 2024

An Ethics Expert’s Perspective on AI and Higher Ed; Pace University, September 3, 2024

Johnni Medina, Pace University; An Ethics Expert’s Perspective on AI and Higher Ed

"As a scholar deeply immersed in both technology and philosophy, James Brusseau, PhD, has spent years unraveling the complex ethics of artificial intelligence (AI).

“As it happens, I was a physics major in college, so I've had an abiding interest in technology, but I finally decided to study philosophy,” Brusseau explains. “And I did not see much of an intersection between the scientific and my interest in philosophy until all of a sudden artificial intelligence landed in our midst with questions that are very philosophical.”

Some of these questions are heavy, with Brusseau positing an example, “If a machine acts just like a person, does it become a person?” But AI’s implications extend far beyond the theoretical, especially when it comes to the impact on education, learning, and career outcomes. What role does AI play in higher education? Is it a tool that enhances learning, or does it risk undermining it? And how do universities prepare students for an AI-driven world?

In a conversation that spans these topics, Brusseau shares his insights on the place of AI in higher education, its benefits, its risks, and what the future holds...

I think that if AI alone is the professor, then the knowledge students get will be imperfect in the same vaguely definable way that AI art is imperfect."

Friday, August 30, 2024

AI Ethics Part Two: AI Framework Best Practices; Mondaq, August 29, 2024

Laura Gibbs, Ben Verley, Justin Gould, Kristin Morrow, Rebecca Reeder, Mondaq; AI Ethics Part Two: AI Framework Best Practices

"Ethical artificial intelligence frameworks are still emerging across both public and private sectors, making the task of building a responsible AI program particularly challenging. Organizations often struggle to define the right requirements and implement effective measures. So, where do you start if you want to integrate AI ethics into your operations?

In Part I of our AI ethics series, we highlighted the growing pressure on organizations to adopt comprehensive ethics frameworks and the impact of failing to do so. We emphasized the key motivators for businesses to proactively address potential risks before they become reality.

This article delves into what an AI ethics framework is and why it is vital for mitigating these risks and fostering responsible AI use. We review AI ethics best practices, explore common challenges and pitfalls, and draw insights from the experiences of leading industry players across various sectors. We also discuss key considerations to ensure an effective and actionable AI ethics framework, providing a solid foundation for your journey towards ethical AI implementation.

AI Ethics Framework: Outline

A comprehensive AI ethics framework offers practitioners a structured guide with established rules and practices, enabling the identification of control points, performance boundaries, responses to deviations, and acceptable risk levels. Such a framework ensures timely ethical decision-making by asking the right questions. Below, we detail the main functions, core components, and key controls necessary for a robust AI ethics framework."