Friday, August 30, 2024

AI Ethics Part Two: AI Framework Best Practices; Mondaq, August 29, 2024

Laura Gibbs, Ben Verley, Justin Gould, Kristin Morrow, Rebecca Reeder, Mondaq; AI Ethics Part Two: AI Framework Best Practices

"Ethical artificial intelligence frameworks are still emerging across both public and private sectors, making the task of building a responsible AI program particularly challenging. Organizations often struggle to define the right requirements and implement effective measures. So, where do you start if you want to integrate AI ethics into your operations?

In Part I of our AI ethics series, we highlighted the growing pressure on organizations to adopt comprehensive ethics frameworks and the impact of failing to do so. We emphasized the key motivators for businesses to proactively address potential risks before they become reality.

This article delves into what an AI ethics framework is and why it is vital for mitigating these risks and fostering responsible AI use. We review AI ethics best practices, explore common challenges and pitfalls, and draw insights from the experiences of leading industry players across various sectors. We also discuss key considerations to ensure an effective and actionable AI ethics framework, providing a solid foundation for your journey towards ethical AI implementation.

AI Ethics Framework: Outline

A comprehensive AI ethics framework offers practitioners a structured guide with established rules and practices, enabling the identification of control points, performance boundaries, responses to deviations, and acceptable risk levels. Such a framework ensures timely ethical decision-making by asking the right questions. Below, we detail the main functions, core components, and key controls necessary for a robust AI ethics framework."

Essential Skills for IT Professionals in the AI Era; IEEE Spectrum, August 27, 2024

IEEE Spectrum; Essential Skills for IT Professionals in the AI Era

"Artificial Intelligence is transforming industries worldwide, creating new opportunities in health care, finance, customer service, and other disciplines. But the ascendance of AI raises concerns about job displacement, especially as the technology might automate tasks traditionally done by humans.

Jobs that involve data entry, basic coding, and routine system maintenance are at risk of being eliminated—which might worry new IT professionals. AI also creates new opportunities for workers, however, such as developing and maintaining new systems, data analysis, and cybersecurity. If IT professionals enhance their skills in areas such as machine learning, natural language processing, and automation, they can remain competitive as the job market evolves.

Here are some skills IT professionals need to stay relevant, as well as advice on how to thrive and opportunities for growth in the industry...

Key insights into AI ethics

Understanding the ethical considerations surrounding AI technologies is crucial. Courses on AI ethics and policy provide important insights into ethical implications, government regulations, stakeholder perspectives, and AI’s potential societal, economic, and cultural impacts.

I recommend reviewing case studies to learn from real-world examples and to get a grasp of the complexities surrounding ethical decision-making. Some AI courses explore best practices adopted by organizations to mitigate risks."

AI SERMON SERIES #1: AI AND THE HUMAN IDENTITY–THE ROYAL PRIESTHOOD; Epiphany Seattle, May 7, 2023

The Rev. Doyt L. Conn, Jr., Epiphany Seattle; AI SERMON SERIES #1: AI AND THE HUMAN IDENTITY–THE ROYAL PRIESTHOOD

Click here to watch the sermon

[Excerpt]

"Character will be the distinguishing trait for what it means to be human. It will be what makes us noble or ignoble, just or unjust, honest or liars. It is our character that will give truth meaning, and us the capacity to look each other in the eye as equal members in the caste of the Royal Priesthood.

The church’s role is as a gathering place and a training ground for Priestly Sovereigns, where we practice being human. And toward that end, it is important to recognize the ground upon which we stand. For us, it is within the balanced cadence of our Anglican tradition. We have historical precedent developed over the last 500 years for walking with equanimity on a line that runs through the complexity. It is why we are known as the people of the middle way, or to use a theological term: the via media, which is the theology of balance and moderation and equanimity. Think Queen Elizabeth (God rest her soul).

As Anglicans, we meet the latest headlines with a calm cadence, well aware that this new world of AI will reveal both good news and bad news. And so, when we read an article like the one I read the other day about a University of California San Diego School of Medicine study that found that 75% of the time patients reported that a chat bot’s response was superior in quality and empathy to human doctors…pause, don’t panic.

Remember the via media, that we are people of the middle way. And there will be good news: new medical treatments, and scientific breakthroughs, new efficiencies in manufacturing, and better worldwide food distribution. And there will be troubling news: Job losses, and theft, and fraud, and an Internet polluted with lies. Strikes will be more common, as we see with the Writers’ strike going on in Hollywood right now.

And amidst it all, we walk the middle way, understanding and practicing the true primacy of relationship, relationship with one another, with creation, and with God. + The cross is our sign because of who our God is, a relational God, Trinitarian, Father and Son and Holy Ghost. We are the Royal Priesthood, trained like Jedi as people who walk the earth; well balanced, with equanimity whether the world is run by AI or not."

Amy Klobuchar Wants to Stop Algorithms From Ripping You Off; The New York Times, August 30, 2024

The New York Times; Amy Klobuchar Wants to Stop Algorithms From Ripping You Off

"This week I interviewed Senator Amy Klobuchar, Democrat of Minnesota, about her Preventing Algorithmic Collusion Act. If you don’t know what algorithmic collusion is, it’s time to get educated, because you could be its next victim.

Algorithmic collusion is where companies illegally coordinate to raise prices through the use of an algorithm that they supply their data to. There is no explicit or even wink-and-a-nod agreement among the competitors, the usual standard for collusion. Instead, each company has its own contract with the algorithm provider. That provider uses the companies’ data to make pricing recommendations that make them all richer — at the expense of their customers.

Algorithmic collusion made the headlines this month when Vice President Kamala Harris vowed to crack down, should she be elected, on “corporate landlords” that use price-setting software to jack up rents. Last week, the Justice Department sued RealPage, charging that the company, which uses an algorithm powered by artificial intelligence to help landlords set rental rates, referred to its products as “driving every possible opportunity to increase price.”"

Wes Moore and the Bronze Star He Claimed but Never Received; The New York Times, August 29, 2024

The New York Times; Wes Moore and the Bronze Star He Claimed but Never Received

"Doug Sterner, a military historian and Vietnam veteran considered to be a leading researcher on military service claims, said that minor exaggerations about military service were common, but that imprecision about awards was more serious.

“Every veteran — I mean every veteran, even if they won’t admit it — has told a war story or embellished a little bit, but usually not about awards,” said Mr. Sterner, who helped draft the original version of the Stolen Valor Act, a law that criminalizes some false claims of military accomplishments, though not such assertions about a Bronze Star. “When you start embellishing about awards, then it becomes a problem.”"

How AI-generated memes are changing the 2024 election; NPR, August 30, 2024

NPR; How AI-generated memes are changing the 2024 election

"There's a lack of consensus among the companies behind AI generators over what guardrails should be in place. OpenAI, the maker of DALL-E and ChatGPT, prohibits users from creating images of public figures, including political candidates and celebrities. But the recently launched AI image generator on X, the social media platform formerly known as Twitter, appears to have fewer limits. When NPR tested it in mid-August, the tool produced depictions that appear to show Trump and Harris holding firearms and ballot drop boxes being stuffed."

Major publishers sue Florida over ‘unconstitutional’ school book ban; The Guardian, August 30, 2024

The Guardian; Major publishers sue Florida over ‘unconstitutional’ school book ban

"Six major book publishers have teamed up to sue the US state of Florida over an “unconstitutional” law that has seen hundreds of titles purged from school libraries following rightwing challenges.

The landmark action targets the “sweeping book removal provisions” of House Bill 1069, which required school districts to set up a mechanism for parents to object to anything they considered pornographic or inappropriate.

A central plank of Republican governor Ron DeSantis’s war on “woke” on Florida campuses, the law has been abused by rightwing activists who quickly realized that any book they challenged had to be immediately removed and replaced only after the exhaustion of a lengthy and cumbersome review process, if at all, the publishers say.

Since it went into effect last July, countless titles have been removed from elementary, middle and high school libraries, including American classics such as Brave New World by Aldous Huxley, For Whom the Bell Tolls by Ernest Hemingway and The Adventures of Tom Sawyer by Mark Twain.

Contemporary novels by bestselling authors such as Margaret Atwood, Judy Blume and Stephen King have also been removed, as well as The Diary of a Young Girl, Anne Frank’s gripping account of the Holocaust, according to the publishers."

Breaking Up Google Isn’t Nearly Enough; The New York Times, August 27, 2024

The New York Times; Breaking Up Google Isn’t Nearly Enough

"Competitors need access to something else that Google monopolizes: data about our searches. Why? Think of Google as the library of our era; it’s the first stop we go to when seeking information. Anyone who wants to build a rival library needs to know what readers are looking for in order to stock the right books. They also need to know which books are most popular, and which ones people return quickly because they’re no good."

Thursday, August 29, 2024

OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims; Bloomberg Law, August 29, 2024

 Annelise Gilbert, Bloomberg Law; OpenAI Pushes Prompt-Hacking Defense to Deflect Copyright Claims

"Diverting attention to hacking claims or how many tries it took to obtain exemplary outputs, however, avoids addressing most publishers’ primary allegation: AI tools illegally trained on copyrighted works."

The Nuremberg Code isn’t just for prosecuting Nazis − its principles have shaped medical ethics to this day; The Conversation, August 29, 2024

 Director of the Center for Health Law, Ethics & Human Rights, Boston University, The Conversation; The Nuremberg Code isn’t just for prosecuting Nazis − its principles have shaped medical ethics to this day

"I remain a strong supporter of the Nuremberg Code and believe that following its precepts is both an ethical and a legal obligation of physician researchers. Yet the public can’t expect Nuremberg to protect it against all types of scientific research or weapons development. 

Soon after the U.S. dropped atomic bombs over Hiroshima and Nagasaki – two years before the Nuremberg trials began – it became evident that our species was capable of destroying ourselves. 

Nuclear weapons are only one example. Most recently, international debate has focused on new potential pandemics, but also on “gain-of-function” research, which sometimes adds lethality to an existing bacteria or virus to make it more dangerous. The goal is not to harm humans but rather to try to develop a protective countermeasure. The danger, of course, is that a super harmful agent “escapes” from the laboratory before such a countermeasure can be developed.

I agree with the critics who argue that at least some gain-of-function research is so dangerous to our species that it should be outlawed altogether. Innovations in artificial intelligence and climate engineering could also pose lethal dangers to all humans, not just some humans. Our next question is who gets to decide whether species-endangering research should be done, and on what basis?"

Disinformation, Trust, and the Role of AI: Threats to Health & Democracy; The Hastings Center, September 9, 2024

The Hastings Center; Disinformation, Trust, and the Role of AI: Threats to Health & Democracy

"Join us for The Daniel Callahan Annual Lecture, hosted by The Hastings Center at Rockefeller University’s beautiful campus in New York. Hastings Center President Vardit Ravitsky will moderate a discussion with experts Reed Tuckson and Timothy Caulfield on disinformation, trust, and the role of AI, focusing on current and future threats to health and democracy. The event will take place on Monday, September 9, 5 pm. Learn more and register...

A Moderated Discussion on DISINFORMATION, TRUST, AND THE ROLE OF AI: Threats to Health & Democracy, The Daniel Callahan Annual Lecture

Panelists
Reed Tuckson, MD, FACP, Chair & Co-Founder of the Black Coalition Against Covid, Chair and Co-Founder of the Coalition For Trust In Health & Science
Timothy Caulfield, LLB, LLM, FCAHS, Professor, Faculty of Law and School of Public Health, University of Alberta; Best-selling author & TV host

Moderator:
Vardit Ravitsky, PhD, President & CEO, The Hastings Center"

The Ethics of Developing Voice Biometrics; The New York Academy of Sciences, August 29, 2024

Nitin Verma, PhD, The New York Academy of Sciences; The Ethics of Developing Voice Biometrics

"Juana Catalina Becerra Sandoval, a PhD candidate in the Department of the History of Science at Harvard University and a research scientist in the Responsible and Inclusive Technologies initiative at IBM Research, presented as part of The New York Academy of Sciences’ (the Academy) Artificial Intelligence (AI) & Society Seminar series. The lecture – titled “What’s in a Voice? Biometric Fetishization and Speaker Recognition Technologies” – explored the ethical implications associated with the development and use of AI-based tools such as voice biometrics. After the presentation, Juana sat down with Nitin Verma, PhD, a member of the Academy’s 2023 cohort of the AI & Society Fellowship, to further discuss the promises and challenges society faces as AI continues to evolve."

California advances landmark legislation to regulate large AI models; AP, August 28, 2024

Trân Nguyễn, AP; California advances landmark legislation to regulate large AI models

"Wiener’s proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance of reining in the technology and its potential risks without stifling the booming homegrown industry.

California, home of 35 of the world’s top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other things."

Wednesday, August 28, 2024

Controversial California AI regulation bill finds unlikely ally in Elon Musk; The Mercury News, August 28, 2024

The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk

"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.

Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.

The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."

After a decade of free Alexa, Amazon now wants you to pay; The Washington Post, August 27, 2024

The Washington Post; After a decade of free Alexa, Amazon now wants you to pay

"There was a lot of optimism in the 2010s that digital assistants like Alexa, Apple’s Siri and Google Assistant would become a dominant way we interact with technology, and become as life-changing as smartphones have been.

Those predictions were mostly wrong. The digital assistants were dumber than the companies claimed, and it’s often annoying to speak commands rather than type on a keyboard or tap on a touch screen...

If you’re thinking there’s no chance you’d pay for an AI Alexa, you should see how many people subscribe to OpenAI’s ChatGPT...

The mania over AI is giving companies a new selling point to upcharge you. It’s now in your hands whether the promised features are worth it, or if you can’t stomach any more subscriptions."

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."

World Intellectual Property Organization Adopts Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge; WilmerHale, August 26, 2024

WilmerHale; World Intellectual Property Organization Adopts Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge

"Following nearly twenty-five years of negotiations, members of the World Intellectual Property Organization (WIPO) recently adopted a treaty implementing the new requirement for international patent applicants to disclose in their applications any Indigenous Peoples and/or communities that provided traditional knowledge on which the applicant drew in creating the invention sought to be patented. The treaty was adopted at WIPO’s “Diplomatic Conference to Conclude an International Legal Instrument Relating to Intellectual Property, Genetic Resources, and Traditional Knowledge Associated with Genetic Resources,” which was held May 13–24. The goal of the treaty, known as the WIPO Treaty on Intellectual Property, Genetic Resources and Associated Traditional Knowledge, is to “prevent patents from being granted erroneously for inventions that are not novel or inventive with regard to genetic resources and traditional knowledge associated with genetic resources.” This treaty—the first treaty of its kind, linking intellectual property and Indigenous Peoples—also aims to “enhance the efficacy, transparency and quality of the patent system with regard to genetic resources and traditional knowledge associated with genetic resources.”

Once the treaty is ratified, patent applicants will have new (but nonretroactive) disclosure requirements for international patent applications."

EXAMINING THE WORKS OF C.S. LEWIS: CRITICAL THINKING AND ETHICS; United States Air Force Academy, August 26, 2024

Randy Roughton, U.S. Air Force Academy Strategic Communications, United States Air Force Academy; EXAMINING THE WORKS OF C.S. LEWIS: CRITICAL THINKING AND ETHICS

"Twentieth-century author C.S. Lewis’s books dominate the top shelf in Dr. Adam Pelser’s office. Pelser, who was recently recognized as an Inaugural Fellow of the Inklings Project, has used Lewis’ work to teach critical thinking skills and ethics in his Department of Philosophy course since 2018...

Reading with a critical eye

In Pelser’s course, cadets evaluate and discuss the philosophical arguments and themes in some of Lewis’s most influential non-fiction books and essays. They also observe how Lewis interacted with the philosophers and philosophies of his era, including the Oxford philosopher Elizabeth Anscombe, and the most noteworthy philosophers in history such as Aristotle, Plato, Immanuel Kant and David Hume.

Cadets read a series of Lewis books and learn to approach them with “a critical eye,” Pelser said. Like their professor, the cadets can raise their objections to Lewis’s arguments and study how the author interacted with his era’s other great thinkers...

Pelser has four goals for each course. First, he wants to deepen an understanding of the philosophical themes in Lewis’ writings. Second is a deeper understanding of the historical and contemporary philosophical influences on Lewis’s thought. The third goal is for cadets to learn to identify and summarize theses and arguments in philosophical texts. Finally, he wants each cadet to write and think through arguments carefully and clearly.

“A major critical thinking component is the dialogue in class when we push each other and challenge ideas,” Pelser said. “That is an important skill they learn in our course.”"

Chicago Public Library Debuts Initiative Offering Ebooks to the City’s Visitors During Special Events; Library Journal, August 23, 2024

Matt Enis, Library Journal; Chicago Public Library Debuts Initiative Offering Ebooks to the City’s Visitors During Special Events

"“Access to knowledge and information is the foundation of a thriving, equitable, and democratic city,” Mayor Johnson said in an announcement. “Thanks to Chicago Public Library and our dedicated librarians, we’re making this powerful initiative possible, ensuring that everyone in Chicago has the opportunity to learn, grow, and connect through universal access to literature.”"

A Good Way for ALA; American Libraries, July 24, 2024

Cindy Hohl, American Libraries; A Good Way for ALA

"As the first Dakota president and Spectrum Scholar representing the 1% of Indigenous librarians, I will reaffirm that diversifying the field remains overdue. We need to focus on creating opportunities for our colleagues to be represented across every library type in this field. When leaders come together to support the entire community, that act of selfless service elevates collective goodwill among us. The same is true for work life. When we remember what our ancestors taught us and use those teachings to make informed decisions, we can avoid pitfalls along the path toward equitable service.

We also must have the goal of eliminating acts of censorship. On June 2, 1924, the Indian Citizenship Act was passed, granting us dual citizenship. Also known as the Snyder Act, it provided Native Americans with new identities in a step toward equality. While voting credentials were provided to some, several states decided to withhold the same rights from Native American women. Even as the remaining states finally provided voting privileges by 1975, barriers remain today in rural areas where polling locations are out of reach or tribally issued identification cards are not considered an acceptable form of identification by states.

Access to libraries can also be a challenge in these rural areas. We have the ability to accept tribal IDs for library access and create sustainable employment opportunities to ensure success without barriers. That way no one is left behind when acts of censorship are creating a division among us. If we work together in this way, everyone can see themselves written in stories, their voices can be heard, and no one is silenced.

Our core values help us see that what one holds sacred is a touchstone in advancing this work as we strive to serve everyone in ­#AGoodWay together."