Showing posts with label trustworthiness.

Wednesday, July 23, 2025

AI chatbots remain overconfident -- even when they’re wrong; EurekAlert!, July 22, 2025

Carnegie Mellon University, EurekAlert!; AI chatbots remain overconfident -- even when they’re wrong

"Artificial intelligence chatbots are everywhere these days, from smartphone apps and customer service portals to online search engines. But what happens when these handy tools overestimate their own abilities? 

Researchers asked both human participants and four large language models (LLMs) how confident they felt in their ability to answer trivia questions, predict the outcomes of NFL games or Academy Award ceremonies, or play a Pictionary-like image identification game. Both the people and the LLMs tended to be overconfident about how they would hypothetically perform. Interestingly, they also answered questions or identified images with relatively similar success rates.

However, when the participants and LLMs were asked retroactively how well they thought they did, only the humans appeared able to adjust expectations, according to a study published today in the journal Memory & Cognition.

“Say the people told us they were going to get 18 questions right, and they ended up getting 15 questions right. Typically, their estimate afterwards would be something like 16 correct answers,” said Trent Cash, who recently completed a joint Ph.D. at Carnegie Mellon University in the departments of Social Decision Science and Psychology. “So, they’d still be a little bit overconfident, but not as overconfident.”

“The LLMs did not do that,” said Cash, who was lead author of the study. “They tended, if anything, to get more overconfident, even when they didn’t do so well on the task.”

The world of AI is changing rapidly each day, which makes drawing general conclusions about its applications challenging, Cash acknowledged. However, one strength of the study was that the data was collected over the course of two years, which meant using continuously updated versions of the LLMs known as ChatGPT, Bard/Gemini, Sonnet and Haiku. This means that AI overconfidence was detectable across different models over time.

“When an AI says something that seems a bit fishy, users may not be as skeptical as they should be because the AI asserts the answer with confidence, even when that confidence is unwarranted,” said Danny Oppenheimer, a professor in CMU’s Department of Social and Decision Sciences and coauthor of the study."

Tuesday, July 22, 2025

Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague; ABA Journal, May 9, 2025

ABA Journal; Getting Along with GPT: The Psychology, Character, and Ethics of Your Newest Professional Colleague

"The Limits of GenAI’s Simulated Humanity

  • Creative thinking. An LLM mirrors humanity’s collective intelligence, shaped by everything it has read. It excels at brainstorming and summarizing legal principles but lacks independent thought, opinions, or strategic foresight—all essential to legal practice. Therefore, if a model’s summary of your legal argument feels stale, illogical, or disconnected from human values, it may be because the model has no democratized data to pattern itself on. The good news? You may be on to something original—and truly meaningful!
  • True comprehension. An LLM does not know the law; it merely predicts legal-sounding text based on past examples and mathematical probabilities.
  • Judgment and ethics. An LLM does not possess a moral compass or the ability to make judgments in complex legal contexts. It handles facts, not subjective opinions.  
  • Long-term consistency. Due to its context window limitations, an LLM may contradict itself if key details fall outside its processing scope. It lacks persistent memory storage.
  • Limited context recognition. An LLM has limited ability to understand context beyond provided information and is limited by training data scope.
  • Trustfulness. Attorneys have a professional duty to protect client confidences, but privacy and PII (personally identifiable information) are evolving concepts within AI. Unlike humans, models can infer private information without PII, through abstract patterns in data. To safeguard client information, carefully review (or summarize with AI) your LLM’s terms of use."

Tuesday, October 15, 2024

‘Armed Militias’ Claims In N.C. Driven By Social Media Misinformation; Forbes, October 14, 2024

Peter Suciu, Forbes; ‘Armed Militias’ Claims In N.C. Driven By Social Media Misinformation

""The amount of misinformation and disinformation we've seen around the recent hurricanes and help efforts is a strong example of how powerful those effects have become," explained Dr. Cliff Lampe, professor of information and associate dean for academic affairs in the School of Information at the University of Michigan.

Misinformation began even before Hurricane Helene made landfall, with the dubious claims that government officials were controlling the weather and directing the storm to hit "red states." The misinformation only intensified after the storm left a path of destruction.

"Over the last weeks we've seen death threats against meteorologists and now first responders in emergency situations," said Lampe. "There are a few things that are challenging about this. One is that belief persistence, which is the effect where people tend to keep believing what they have believed, makes it so that new information often doesn't make a difference in changing people's minds. We tend to think that good information will swamp out bad information, but unfortunately, it's not that simple."

Social media can amplify such misinformation in a way that was previously impossible.

"We saw that a small group of people acting on misinformation can disrupt services of the majority of people with a need," added Lampe.

"False information, especially on social media platforms, spreads incredibly fast. It's crucial to distinguish between misinformation and disinformation," said Rob Lalka, professor at Tulane University's Freeman School of Business and author of The Venture Alchemists: How Big Tech Turned Profits Into Power.

"Misinformation refers to false, incomplete, or inaccurate information shared without harmful intent, while disinformation is deliberately false information designed to deceive," Lalka continued...

"New technologies are making it increasingly hard to tell what's real and what's fake," said Lalka. "We now live in an era where Artificial Intelligence can generate lifelike images and audio, and these powerful tools should prompt us all to pause and consider whether a source is truly trustworthy.""

Tuesday, September 17, 2024

Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture; The Hastings Center, September 12, 2024

The Hastings Center; Disinformation, Trust, and the Role of AI: The Daniel Callahan Annual Lecture

"A Moderated Discussion on DISINFORMATION, TRUST, AND THE ROLE OF AI: Threats to Health & Democracy, The Daniel Callahan Annual Lecture

Panelists: Reed Tuckson, MD, FACP, Chair & Co-Founder of the Black Coalition Against Covid, Chair and Co-Founder of the Coalition For Trust In Health & Science; Timothy Caulfield, LLB, LLM, FCAHS, Professor, Faculty of Law and School of Public Health, University of Alberta, best-selling author & TV host

Moderator: Vardit Ravitsky, PhD, President & CEO, The Hastings Center"

Saturday, February 17, 2024

The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission; The Conversation, February 13, 2024

Senior Lecturer, Nottingham Law School, Nottingham Trent University, The Conversation; The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission

"The lawsuit also presents a novel argument – not advanced by other, similar cases – that’s related to something called “hallucinations”, where AI systems generate false or misleading information but present it as fact. This argument could in fact be one of the most potent in the case.

The NYT case in particular raises three interesting takes on the usual approach. First, that due to their reputation for trustworthy news and information, NYT content has enhanced value and desirability as training data for use in AI. 

Second, that due to its paywall, the reproduction of articles on request is commercially damaging. Third, that ChatGPT “hallucinations” are causing reputational damage to the New York Times through, effectively, false attribution. 

This is not just another generative AI copyright dispute. The first argument presented by the NYT is that the training data used by OpenAI is protected by copyright, and so they claim the training phase of ChatGPT infringed copyright. We have seen this type of argument run before in other disputes."

Monday, July 3, 2023

Keeping true to the Declaration of Independence is a matter of ethics; Ventura County Star, July 2, 2023

Ed Jones, Ventura County Star; Keeping true to the Declaration of Independence is a matter of ethics

"How do we keep faith with Jefferson, Franklin and the other founders? Due to the imperfections in human nature, there is no foolproof way, but a good plan would be to have all levels of our government — national, state and local — adopt ethical training similar to that of elective office holders here in California. Periodically, they must participate in ethics training which assumes there are universal ethical values consisting of fairness, loyalty, compassion trustworthiness, and responsibility that transcend other considerations and should be adhered to. This training consists of biannual computer sessions in which they must solve real-life problems based on the aforementioned ethical values.

I believe a real danger for elected officials and voters as well is the idea that certain societal values are so vital, so crucial, that they transcend normal ethical practices. This might be termed an “ends — means philosophy,” the idea that the ends justify the means. Mohandas Gandhi, former leader of India, observed that “the means are the ends in a democracy and good ends cannot come from questionable means.” 

No matter how exemplary our Declaration of Independence and Constitution, we are still relying on human beings to fulfill their promise. Ever since the Supreme Court took the power of judicial review — the power to tell us what the Constitution means and, in the process, affirm certain laws by declaring them constitutional or removing others by declaring them unconstitutional — the judgment of nine people has had a profound effect on our society. Was the Supreme Court correct in 1973 by saying the Ninth Amendment guarantees pregnant women the right to an abortion, or was it correct in 2022 by saying it didn’t?

In the final analysis we must conclude that it will be well-intentioned, ethical citizens and their elected and appointed representatives who will ensure the equitable future of what Abraham Lincoln referred to as our “ongoing experiment in self-government.”"

Tuesday, March 1, 2022

The Battle for the Soul of the Library; The New York Times, February 24, 2022

Stanley Kurtz, The New York Times; The Battle for the Soul of the Library

"Ultimately, librarians who work to balance a library’s holdings will be far more persuasive advocates for intellectual freedom than those with a political ax to grind.

There is a lesson here for the professions upon whose trustworthy refereeing our society depends for its stability: judges, government bureaucrats, journalists and more. These occupations should work to recapture lost neutrality. As our political conflicts deepen, we need our traditionally fair and impartial referees far more, not less, than before." 

Tuesday, January 29, 2019

FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated; The Atlantic, January 29, 2019

Ian Bogost, The Atlantic; FaceTime Is Eroding Trust in Tech: Privacy paranoiacs have been totally vindicated

"Trustworthy is hardly a word many people use to characterize big tech these days. Facebook’s careless infrastructure upended democracy. Abuse is so rampant on Twitter and Instagram that those services feel designed to deliver harassment rather than updates from friends. Hacks, leaks, and other breaches of privacy, at companies from Facebook to Equifax, have become so common that it’s hard to believe that any digital information is secure. The tech economy seems designed to steal and resell data."

Thursday, June 15, 2017

Ethics And Artificial Intelligence With IBM Watson's Rob High; Forbes, June 13, 2017

Blake Morgan, Forbes; Ethics And Artificial Intelligence With IBM Watson's Rob High

"Artificial intelligence seems to be popping up everywhere, and it has the potential to change nearly everything we know about data and the customer experience. However, it also brings up new issues regarding ethics and privacy.

One of the keys to keeping AI ethical is for it to be transparent, says Rob High, vice president and chief technology officer of IBM Watson...

The future of technology is rooted in artificial intelligence. In order to stay ethical, transparency, proof, and trustworthiness need to be at the root of everything AI does for companies and customers. By staying honest and remembering the goals of AI, the technology can play a huge role in how we live and work."

Friday, August 12, 2016

Clinton’s Fibs vs. Trump’s Huge Lies; New York Times, August 6, 2016

Nicholas Kristof, New York Times; Clinton’s Fibs vs. Trump’s Huge Lies

"ONE persistent narrative in American politics is that Hillary Clinton is a slippery, compulsive liar while Donald Trump is a gutsy truth-teller.

Over all, the latest CBS News poll finds the public similarly repulsed by each candidate: 34 percent of registered voters say Clinton is honest and trustworthy compared with 36 percent for Trump.

Yet the idea that they are even in the same league is preposterous. If deception were a sport, Trump would be the Olympic gold medalist; Clinton would be an honorable mention at her local Y.

Let’s investigate."