Showing posts with label AI ethics. Show all posts

Friday, August 29, 2025

Medicare Will Require Prior Approval for Certain Procedures; The New York Times, August 28, 2025

Reed Abelson, The New York Times; Medicare Will Require Prior Approval for Certain Procedures


[Kip Currier: Does anyone who receives Medicare -- or cares about someone who does -- really think that letting AI make "prior approvals" for any Medicare procedures is a good thing?

Read the entire article, but just the money quote below should give any thinking person heart palpitations about this AI Medicare pilot project's numerous red flags and conflicts of interest...]


[Excerpt]

"The A.I. companies selected to oversee the program would have a strong financial incentive to deny claims. Medicare plans to pay them a share of the savings generated from rejections."

Monday, August 25, 2025

Medical triage as an AI ethics benchmark; Nature, August 22, 2025

Nature; Medical triage as an AI ethics benchmark

"We present the TRIAGE benchmark, a novel machine ethics benchmark designed to evaluate the ethical decision-making abilities of large language models (LLMs) in mass casualty scenarios. TRIAGE uses medical dilemmas created by healthcare professionals to evaluate the ethical decision-making of AI systems in real-world, high-stakes scenarios. We evaluated six major LLMs on TRIAGE, examining how different ethical and adversarial prompts influence model behavior. Our results show that most models consistently outperformed random guessing, with open source models making more serious ethical errors than proprietary models. Providing guiding ethical principles to LLMs degraded performance on TRIAGE, which stand in contrast to results from other machine ethics benchmarks where explicating ethical principles improved results. Adversarial prompts significantly decreased accuracy. By demonstrating the influence of context and ethical framing on the performance of LLMs, we provide critical insights into the current capabilities and limitations of AI in high-stakes ethical decision making in medicine."

Saturday, August 23, 2025

PittGPT debuts today as private AI source for University; University Times, August 21, 2025

MARTY LEVINE, University Times; PittGPT debuts today as private AI source for University

"Today marks the rollout of PittGPT, Pitt’s own generative AI for staff and faculty — a service that will be able to use Pitt’s sensitive, internal data in isolation from the Internet because it works only for those logging in with their Pitt ID.

“We want to be able to use AI to improve the things that we do” in our Pitt work, said Dwight Helfrich, director of the Pitt enterprise initiatives team at Pitt Digital. That means securely adding Pitt’s private information to PittGPT, including Human Resources, payroll and student data. However, he explains, in PittGPT “you would only have access to data that you would have access to in your daily role” — in your specific Pitt job.

“Security is a key part of AI,” he said. “It is much more important in AI than in other tools we provide.” Using PittGPT — as opposed to the other AI services available to Pitt employees — means that any data submitted to it “stays in our environment and it is not used to train a free AI model.”

Helfrich also emphasizes that “you should get a very similar response to PittGPT as you would get with ChatGPT,” since PittGPT had access to “the best LLM’s on the market” — the large language models used to train AI.

Faculty, staff and students already have free access to such AI services as Google Gemini and Microsoft Copilot. And “any generative AI tool provides the ability to analyze data … and to rewrite things” that are still in early or incomplete drafts, Helfrich said.

“It can help take the burden off some of the work we have to do in our lives” and help us focus on the larger tasks that, so far, humans are better at undertaking, added Pitt Digital spokesperson Brady Lutsko. “When you are working with your own information, you can tell it what to include” — it won’t add misinformation from the internet or its own programming, as AI sometimes does. “If you have a draft, it will make your good work even better.”

“The human still needs to review and evaluate that this is useful and valuable,” Helfrich said of AI’s contribution to our work. “At this point we can say that there is nothing in AI that is 100 percent reliable.”

On the other hand, he said, “they’re making dramatic enhancements at a pace we’ve never seen in technology. … I’ve been in technology 30 years and I’ve never seen anything improve as quickly as AI.” In his own work, he said, “AI can help review code and provide test cases, reducing work time by 75 percent. You just have to look at it with some caution and just (verify) things.”

“Treat it like you’re having a conversation with someone you’ve just met,” Lutsko added. “You have some skepticism — you go back and do some fact checking.”

Lutsko emphasized that the University has guidance on Acceptable Use of Generative Artificial Intelligence Tools as well as a University-Approved GenAI Tools List.

Pitt’s list of approved generative AI tools includes Microsoft 365 Copilot Chat, which is available to all students, faculty and staff (as opposed to the version of Copilot built into Microsoft 365 apps, which is an add-on available to departments through Panther Express for $30 per month, per person); Google Gemini; and Google NotebookLM, which Lutsko said “serves as a dedicated research assistant for precise analysis using user-provided documents.”

PittGPT joins that list today, Helfrich said.

Pitt also has been piloting Pitt AI Connect, a tool for researchers to integrate AI into software development (using an API, or application programming interface).

And Pitt also is already deploying the PantherAI chatbot, clickable from the bottom right of the Pitt Digital and Office of Human Resources homepages, which provides answers to common questions that may otherwise be deep within Pitt’s webpages. It will likely be offered on other Pitt websites in the future.

“Dive in and use it,” Helfrich said of PittGPT. “I see huge benefits from all of the generative AI tools we have. I’ve saved time and produced better results.”"

Monday, August 11, 2025

Lost in the wild? AI could find you; Axios, August 10, 2025

"Hikers stranded in remote areas with no cell service or WiFi might have a new lifeline: AI.

The big picture: AI is helping some rescue teams find missing people faster by scanning satellite and drone images.


Zoom in: "AI's contribution is that it can dramatically reduce the time to process imagery and do it more accurately than humans," David Kovar, director of advocacy for NASAR and CEO of cybersecurity company URSA Inc., tells Axios.


Context: It's just one of many resources rescue teams use to help them, Kovar stresses.


AI already is eerily good at geolocating where photos are taken.


  • Last month, the body of a hiker lost for nearly a year was found in Italy in a matter of hours after The National Alpine and Speleological Rescue Corps used AI to analyze a series of drone images.

The intrigue: We also know when people are given the option to share their location as a safety measure, they do it.

What's next: AI agents could be trained to fly drones via an automated system. It's a theory Jan-Hendrik Ewers made the subject of his PhD at the University of Glasgow. 


  • "You could have a fully automated system that monitors reports and triggers drone-based search efforts before a human has lifted a finger," Ewers tells Axios.

  • Barriers to implementing this kind of system are many: money, politics and the fact that when lives are at stake, relying on experimental AI could complicate efforts. 

The other side: Some lost people don't want to be found. And lost people can't consent.


  • Nearly everyone will want this help, but "there will be cases where, for example, a person who is a victim of domestic violence says she's going out hiking, but she's not. She's not intending to come back," Greg Nojeim, senior counsel and director of the Center for Democracy & Technology's Security and Surveillance Project, tells Axios.

AI ethics depend on the circumstances, and who is using it, William Budington, senior staff technologist at nonprofit advocacy organization Electronic Frontier Foundation, tells Axios.


  • If it's used to save lives and private data used in a rescue operation is wiped after a hiker is found, there is less of a concern, he says.

  • "But, using it to scan images or locate and surveil people, especially those that don't want to be found — either just for privacy reasons, or political dissidents, perhaps — that's a worrying possibility."

Friday, July 25, 2025

Virginia teachers learn AI tools and ethics at largest statewide workshop; WTVR, July 23, 2025

 

Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind; Fortune, July 24, 2025

 SHARON GOLDMAN, Fortune; Trump’s AI agenda hands Silicon Valley the win—while ethics, safety, and ‘woke AI’ get left behind

"For the “accelerationists”—those who believe the rapid development and deployment of artificial intelligence should be pursued as quickly as possible—innovation, scale, and speed are everything. Over-caution and regulation? Ill-conceived barriers that will actually cause more harm than good. They argue that faster progress will unlock massive economic growth, scientific breakthroughs, and national advantage. And if superintelligence is inevitable, they say, the U.S. had better get there first—before rivals like China’s authoritarian regime.

AI ethics and safety has been sidelined

This worldview, articulated by Marc Andreessen in his 2023 blog post, has now almost entirely displaced the diverse coalition of people who worked on AI ethics and safety during the Biden Administration—from mainstream policy experts focused on algorithmic fairness and accountability, to the safety researchers in Silicon Valley who warn of existential risks. While they often disagreed on priorities and tone, both camps shared the belief that AI needed thoughtful guardrails. Today, they find themselves largely out of step with an agenda that prizes speed, deregulation, and dominance.

Whether these groups can claw their way back to the table is still an open question. The mainstream ethics folks—with roots in civil rights, privacy, and democratic governance—may still have influence at the margins, or through international efforts. The existential risk researchers, once tightly linked to labs like OpenAI and Anthropic, still hold sway in academic and philanthropic circles. But in today’s environment—where speed, scale, and geopolitical muscle set the tone—both camps face an uphill climb. If they’re going to make a comeback, I get the feeling it won’t be through philosophical arguments. More likely, it would be because something goes wrong—and the public pushes back."

Wednesday, July 23, 2025

Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases; Above The Law, July 23, 2025

Joe Patrice, Above The Law; Partner Who Wrote About AI Ethics, Fired For Citing Fake AI Cases

"Don’t blame the AI for the fact that you read a brief and never bothered to print out the cases. Who does that? Long before AI, we all understood that you needed to look at the case itself to make sure no one missed the literal red flag on top. It might’ve ended up in there because of AI, but three lawyers and presumably a para or two had this brief and no one built a binder of the cases cited? What if the court wanted oral argument? No one is excusing the decision to ask ChatGPT to resolve your $24 million case, but the blame goes far deeper.

Malaty will shoulder most of the blame as the link in the workflow who should’ve known better. That said, her article about AI ethics, written last year, doesn’t actually address the hallucination problem. While risks of job displacement and algorithms reinforcing implicit bias are important, it is a little odd to write a whole piece on the ethics of legal AI without even breathing on hallucinations."

Tuesday, July 22, 2025

Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague; ABA Journal, May 9, 2025

 ABA Journal; Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague

"The Limits of GenAI’s Simulated Humanity

  • Creative thinking. An LLM mirrors humanity’s collective intelligence, shaped by everything it has read. It excels at brainstorming and summarizing legal principles but lacks independent thought, opinions, or strategic foresight—all essential to legal practice. Therefore, if a model’s summary of your legal argument feels stale, illogical, or disconnected from human values, it may be because the model has no democratized data to pattern itself on. The good news? You may be on to something original—and truly meaningful!
  • True comprehension. An LLM does not know the law; it merely predicts legal-sounding text based on past examples and mathematical probabilities.
  • Judgment and ethics. An LLM does not possess a moral compass or the ability to make judgments in complex legal contexts. It handles facts, not subjective opinions.  
  • Long-term consistency. Due to its context window limitations, an LLM may contradict itself if key details fall outside its processing scope. It lacks persistent memory storage.
  • Limited context recognition. An LLM has limited ability to understand context beyond provided information and is limited by training data scope.
  • Trustfulness. Attorneys have a professional duty to protect client confidences, but privacy and PII (personally identifiable information) are evolving concepts within AI. Unlike humans, models can infer private information without PII, through abstract patterns in data. To safeguard client information, carefully review (or summarize with AI) your LLM’s terms of use."

Thursday, July 17, 2025

Hot Days, Hotter Topics | ALA Annual 2025; Library Journal, July 9, 2025

Matt Enis, Lisa Peet, Hallie Rich, & Kara Yorio, Library Journal; Hot Days, Hotter Topics | ALA Annual 2025

"This year’s American Library Association (ALA) Annual Conference, held from June 26–30 in Philadelphia, drew 14,250 participants: librarians and library staff, authors, publishers, educators, and exhibitors, including 165 international members. While still not up to pre-pandemic attendance levels, the conference was—by all accounts—buzzing and busy, with well-attended sessions and a bustling exhibit floor.

Even with temperatures topping 90˚, Philly wasn’t the only hot aspect of the conference. A cluster of topics seemed to be at the center of nearly every discussion: how libraries would cope in the face of current or anticipated budget cuts, the impacts of ongoing attacks on the freedom to read and DEI, the ramping up of ICE and police surveillance, the dismantling of the Institute of Museum and Library Services (IMLS) and firing of Librarian of Congress Dr. Carla Hayden, and the uses and ethics of artificial intelligence (AI)."

Friday, July 11, 2025

AI must have ethical management, regulation protecting human person, Pope Leo says; The Catholic Register, July 11, 2025

Carol Glatz, The Catholic Register; AI must have ethical management, regulation protecting human person, Pope Leo says

"Pope Leo XIV urged global leaders and experts to establish a network for the governance of AI and to seek ethical clarity regarding its use.

Artificial intelligence "requires proper ethical management and regulatory frameworks centered on the human person, and which goes beyond the mere criteria of utility or efficiency," Cardinal Pietro Parolin, Vatican secretary of state, wrote in a message sent on the pope's behalf.

The message was read aloud by Archbishop Ettore Balestrero, the Vatican representative to U.N. agencies in Geneva, at the AI for Good Summit 2025 being held July 8-11 in Geneva. The Vatican released a copy of the message July 10."

Thursday, July 10, 2025

EU's AI code of practice for companies to focus on copyright, safety; Reuters, July 10, 2025

Reuters; EU's AI code of practice for companies to focus on copyright, safety

"The European Commission on Thursday unveiled a draft code of practice aimed at helping firms comply with the European Union's artificial intelligence rules and focused on copyright-protected content safeguards and measures to mitigate systemic risks.

Signing up to the code, which was drawn up by 13 independent experts, is voluntary, but companies that decline to do so will not benefit from the legal certainty provided to a signatory.

The code is part of the AI rule book, which will come into effect in a staggered manner and will apply to Google owner Alphabet, Facebook owner Meta, OpenAI, Anthropic, Mistral and other companies."

Oprah Winfrey's latest book club pick, 'Culpability,' delves into AI ethics; ABC News, July 8, 2025

HILLEL ITALIE, AP national writer, ABC News; Oprah Winfrey's latest book club pick, 'Culpability,' delves into AI ethics

"Oprah Winfrey has chosen a novel with a timely theme for her latest book club pick. Bruce Holsinger's “Culpability” is a family drama that probes the morals and ethics of AI.

“I appreciated the prescience of this story,” Winfrey said in a statement Tuesday, the day of the novel's publication. “It’s where we are right now in our appreciation and dilemmas surrounding Artificial Intelligence, centered around an American family we can relate to. I was riveted until the very last shocking sentence!”"

Wednesday, July 9, 2025

How the Vatican Is Shaping the Ethics of Artificial Intelligence; American Enterprise Institute, July 7, 2025

Shane Tews, American Enterprise Institute; How the Vatican Is Shaping the Ethics of Artificial Intelligence

"Father Paolo Benanti is an Italian Catholic priest, theologian, and member of the Third Order Regular of St. Francis. He teaches at the Pontifical Gregorian University and has served as an advisor to both former Pope Francis and current Pope Leo on matters of artificial intelligence and technology ethics within the Vatican.

Below is a lightly edited and abridged transcript of our discussion...

In the Vatican document, you emphasize that AI is just a tool—an elegant one, but it shouldn’t control our thinking or replace human relationships. You mention it “requires careful ethical consideration for human dignity and common good.” How do we identify that human dignity point, and what mechanisms can alert us when we’re straying from it?

I’ll try to give a concise answer, but don’t forget that this is a complex element with many different applications, so you can’t reduce it to one answer. But the first element—one of the core elements of human dignity—is the ability to self-determine our trajectory in life. I think that’s the core element, for example, in the Declaration of Independence. All humans have rights, but you have the right to the pursuit of happiness. This could be the first description of human rights.

In that direction, we could have a problem with this kind of system because one of the first and most relevant elements of AI, from an engineering perspective, is its prediction capabilities. Every time a streaming platform suggests what you can watch next, it’s changing the number of people using the platform or the online selling system. This idea that interaction between human beings and machines can produce behavior is something that could interfere with our quality of life and pursuit of happiness. This is something that needs to be discussed.

Now, the problem is: don’t we have a cognitive right to know if we have a system acting in that way? Let me give you some numbers. When you’re 65, you’re probably taking three different drugs per day. When you reach 68 to 70, you probably have one chronic disease. Chronic diseases depend on how well you stick to therapy. Think about the debate around insulin and diabetes. If you forget to take your medication, your quality of life deteriorates significantly. Imagine using this system to help people stick to their therapy. Is that bad? No, of course not. Or think about using it in the workplace to enhance workplace safety. Is that bad? No, of course not.

But if you apply it to your life choices—your future, where you want to live, your workplace, and things like that—that becomes much more intense. Once again, the tool could become a weapon, or the weapon could become a tool. This is why we have to ask ourselves: do we need something like a cognitive right regarding this? That you are in a relationship with a machine that has the tendency to influence your behavior.

Then you can accept it: “I have diabetes, I need something that helps me stick to insulin. Let’s go.” It’s the same thing that happens with a smartwatch when you have to close the rings. The machine is pushing you to have healthy behavior, and we accept it. Well, right now we have nothing like that framework. Should we think about something in the public space? It’s not a matter of allowing or preventing some kind of technology. It’s a matter of recognizing what it means to be human in an age of such powerful technology—just to give a small example of what you asked me."

Wednesday, July 2, 2025

Evangelical Report Says AI Needs Ethics; Christianity Today, July/August 2025

  

DANIEL SILLIMAN, Christianity Today; Evangelical Report Says AI Needs Ethics

"The Swiss Evangelical Alliance published a 78-page report on the ethics of artificial intelligence, calling on Christians to “help reduce the misuse of AI” and “set an example in the use of AI by demonstrating how technology can be used responsibly and for the benefit of all.” Seven people worked on the paper, including two theologians, several software engineers and computer science experts, a business consultant, and a futurist. They rejected the idea that Christians should close themselves off to AI, as that would not do anything to mitigate the risks of the developing technology. The group concluded that AI has a lot of potential to do good, if given ethical boundaries and shaped by Christian values such as honesty, integrity, and charity."

Saturday, June 28, 2025

Global South voices ‘marginalised in AI Ethics’; Gates Cambridge, June 27, 2025

 Gates Cambridge; Global South voices ‘marginalised in AI Ethics’

"A Gates Cambridge Scholar is first author of a paper how AI Ethics is sidelining Global South voices, reinforcing marginalisation.

The study, Distributive Epistemic Injustice in AI Ethics: A Co-productionist Account of Global North-South Politics in Knowledge Production, was published by the Association for Computing Machinery and is based on a study of nearly 6,000 AI Ethics publications between 1960 and 2024. Its first author is Abdullah Hasan Safir [2024], who is doing a PhD in Interdisciplinary Design. Other co-authors include Gates Cambridge Scholars Ramit Debnath [2018] and Kerry McInerney [2017].

The findings were recently presented at the ACM’s FAccT conference, considered one of the top AI Ethics conferences in the world. They show that experts from the Global North currently legitimise their expertise in AI Ethics through dynamic citational and collaborative practices in knowledge production within the field, including co-citation and institutional of AI Ethics."

Sunday, June 22, 2025

Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican; CNN, June 20, 2025

CNN; Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican

"Pope Leo XIV says tech companies developing artificial intelligence should abide by an “ethical criterion” that respects human dignity.

AI must take “into account the well-being of the human person not only materially, but also intellectually and spiritually,” the pope said in a message sent Friday to a gathering on AI attended by Vatican officials and Silicon Valley executives.

“No generation has ever had such quick access to the amount of information now available through AI,” he said. But “access to data — however extensive — must not be confused with intelligence.”

He also expressed concern about AI’s impact on children’s “intellectual and neurological development,” writing that “society’s well-being depends upon their being given the ability to develop their God-given gifts and capabilities.”

That statement from the Pope came on the second of a two-day meeting for tech leaders in Rome to discuss the societal and ethical implications of artificial intelligence. The second annual Rome Conference on AI was attended by representatives from AI leaders including Google, OpenAI, Anthropic, IBM, Meta and Palantir along with academics from Harvard and Stanford and representatives of the Holy See.

The event comes at a somewhat fraught moment for AI, with the rapidly advancing technology promising to improve worker productivity, accelerate research and eradicate disease, but also threatening to take human jobs, produce misinformation, worsen the climate crisis and create even more powerful weapons and surveillance capabilities. Some tech leaders have pushed back against regulations intended to ensure that AI is used responsibly, which they say could hinder innovation and global competition.

“In some cases, AI has been used in positive and indeed noble ways to promote greater equality, but there is likewise the possibility of its misuse for selfish gain at the expense of others, or worse, to foment conflict and aggression,” Leo said in his Friday statement."

Thursday, June 19, 2025

AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries; The Conversation, June 17, 2025

Professor of Philosophy and Director, Applied Ethics Center, UMass Boston, and Senior Research Fellow, Applied Ethics Center, UMass Boston; The Conversation; AI ‘reanimations’: Making facsimiles of the dead raises ethical quandaries

"The use of artificial intelligence to “reanimate” the dead for a variety of purposes is quickly gaining traction. Over the past few years, we’ve been studying the moral implications of AI at the Center for Applied Ethics at the University of Massachusetts, Boston, and we find these AI reanimations to be morally problematic.

Before we address the moral challenges the technology raises, it’s important to distinguish AI reanimations, or deepfakes, from so-called griefbots. Griefbots are chatbots trained on large swaths of data the dead leave behind – social media posts, texts, emails, videos. These chatbots mimic how the departed used to communicate and are meant to make life easier for surviving relations. The deepfakes we are discussing here have other aims; they are meant to promote legal, political and educational causes."

Monday, June 9, 2025

BFI Report Sets Out 9 Recommendations to Ensure “Ethical, Sustainable, Inclusive AI” Use; The Hollywood Reporter, June 8, 2025

Georg Szalai, The Hollywood Reporter; BFI Report Sets Out 9 Recommendations to Ensure “Ethical, Sustainable, Inclusive AI” Use

"A new report published on Monday by the British Film Institute (BFI) sets out nine recommendations for the U.K. screen sector to ensure that artificial intelligence will be a boon rather than bane for film and TV. 

“AI in the Screen Sector: Perspectives and Paths Forward” analyzes current usage and experimentation with “rapidly evolving generative artificial intelligence (AI) technologies,” the BFI said. “To ensure that the U.K. remains a global leader in screen production and creative innovation, the report sets out a roadmap of key recommendations to support the delivery of ethical, sustainable, and inclusive AI integration across the sector.”"

5 Dangerous Myths About AI Ethics You Shouldn’t Believe; Forbes, May 14, 2025

Bernard Marr, Forbes; 5 Dangerous Myths About AI Ethics You Shouldn’t Believe

"AI can empower just about any business to innovate and drive efficiency, but it also has the potential to do damage and cause harm. This means that everyone putting it to use needs to understand the ethical frameworks in place to keep everyone safe.

At the end of the day, AI is a tool. AI ethics can be thought of as the safety warning you get in big letters at the front of any user manual, setting out some firm dos and don’ts about using it.

Using AI almost always involves making ethical choices. In a business setting, understanding the many ways it can affect people and culture means we have the best information for making those choices.

It’s a subject there's still a lot of confusion around, not least involving who is responsible and who should be ensuring this gets done. So here are five common misconceptions I come across involving the ethics of generative AI and machine learning."

Saturday, June 7, 2025

Do AI systems have moral status?; Brookings, June 4, 2025

Brookings; Do AI systems have moral status?

"In March, researchers announced that a large language model (LLM) passed the famous Turing test, a benchmark designed by computer scientist Alan Turing in 1950 to evaluate whether computers could think. This follows research from last year suggesting that the time is now for artificial intelligence (AI) labs to take the welfare of their AI models into account."