Showing posts with label accuracy of information.

Friday, January 17, 2025

Apple sidelines AI news summaries due to errors; Japan Today, January 17, 2025

Japan Today; Apple sidelines AI news summaries due to errors

"Apple pushed out a software update on Thursday which disabled news headlines and summaries generated using artificial intelligence that were lambasted for getting facts wrong.

The move by the tech titan comes as it enhances its latest lineup of devices with "Apple Intelligence" in a market keen for assurance that the iPhone maker is a contender in the AI race.

Apple's decision to temporarily disable the recently launched AI feature comes after the BBC and other news organizations complained that users were getting mistake-riddled or outright wrong headlines or news summary alerts."

Thursday, January 16, 2025

The Washington Post’s New Mission: Reach ‘All of America’; The New York Times, January 16, 2025

The New York Times; The Washington Post’s New Mission: Reach ‘All of America’


[Kip Currier: “Two things only the people anxiously desire — bread and circuses.”

-- Juvenal, Roman satirical poet (c. 100 AD).


To think that The Washington Post was the newspaper whose investigative reporters Bob Woodward and Carl Bernstein exposed the 1972 Watergate break-in and cover-up, reporting that led to the resignation of President Richard Nixon on August 9, 1974...

And to now see its stature intentionally diminished and its mission incrementally debased, week by week, at the hands of billionaire Jeff Bezos and his hand-picked executives, veterans of U.K. newspapers owned by billionaire Rupert Murdoch.]


[Excerpt]

"After Donald J. Trump entered the White House in 2017, The Washington Post adopted a slogan that underscored the newspaper’s traditional role as a government watchdog: “Democracy Dies in Darkness.”

This week, as Mr. Trump prepares to re-enter the White House, the newspaper debuted a mission statement that evokes a more expansive view of The Post’s journalism, without death or darkness: “Riveting Storytelling for All of America.”...

The slide deck that Ms. Watford presented describes artificial intelligence as a key enabler of The Post’s success, the people said. It describes The Post as “an A.I.-fueled platform for news” that delivers “vital news, ideas and insights for all Americans where, how and when they want it.” It also lays out three pillars of The Post’s overall plan: “great journalism,” “happy customers” and “make money.” The Post lost roughly $77 million in 2023.

But many aspects of The Post’s new mission have nothing to do with emerging technology. The slide deck includes a list of seven principles first articulated by Eugene Meyer, an influential Post owner, in 1935. Among them: “the newspaper shall tell all the truth” and “the newspaper’s duty is to its readers and to the public at large, and not to the private interests of its owners.”"

Monday, December 9, 2024

Stop using generative AI as a search engine; The Verge, December 5, 2024

Elizabeth Lopatto, The Verge; Stop using generative AI as a search engine

"Maybe there is a way to make generative AI useful, but in its current state, I feel tremendously sorry for anyone gullible enough to use it as a research tool.

I know people are sick of talking about glue on pizza, but I find the large-scale degradation of our information environment that has already taken place shocking. (Just search Amazon if you want to see what I mean.) This happens in small ways, like Google’s AI wrongly saying that male foxes mate for life, and big ones, like spreading false information around a major news event. What good is an answer machine that nobody can trust?"

Wednesday, December 4, 2024

The imaginary justifications for Biden’s pardon of his son; The Washington Post, December 4, 2024

The Washington Post; The imaginary justifications for Biden’s pardon of his son

"There is robust documentation of known grants of clemency, a database created by a former college professor who built his career on being the foremost expert on presidential pardons. (He later killed his children and himself.) But that database is not accessible to the general public. You can’t just search “what presidents had relatives that they pardoned” and get a clear, detailed answer that draws from the public record.

What you can do, though, is ask an artificial intelligence trained to generate clear, detailed answers to questions — but not necessarily ones that include accurate information. That’s what Navarro-Cárdenas did, according to a follow-up post: she asked ChatGPT which presidents had pardoned relatives. And it reached into its database of phrases and snippets and pulled out the words “Roger Clinton” and “Charles Kushner” and “Hunter deButts.”"

Friday, November 22, 2024

How To Avoid AI Misinformation: 2 Essential Steps For Smarter Research; Forbes, November 21, 2024

 Bruce Weinstein, Ph.D., Forbes; How To Avoid AI Misinformation: 2 Essential Steps For Smarter Research

"AI can be a powerful ally or a risky gamble, depending on how you use it. If you’re relying on AI for research, taking shortcuts can backfire—and cost you your credibility. To avoid AI misinformation, follow these two essential steps:

  1. Ask for references.
  2. Verify those references yourself.

Here’s why these steps are critical."

A.I. Chatbots Defeated Doctors at Diagnosing Illness; The New York Times, November 17, 2024

The New York Times; A.I. Chatbots Defeated Doctors at Diagnosing Illness

"Instead, in a study Dr. Rodman helped design, doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers’ surprise, ChatGPT alone outperformed the doctors.

“I was shocked,” Dr. Rodman said.

The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.

The study showed more than just the chatbot’s superior performance.

It unveiled doctors’ sometimes unwavering belief in a diagnosis they made, even when a chatbot suggested a potentially better one."

Wednesday, July 3, 2024

How ABC News Could Fix CNN’s Mockery Of The First Presidential Debate; Forbes, July 3, 2024

Subramaniam Vincent, Forbes; How ABC News Could Fix CNN’s Mockery Of The First Presidential Debate

"If we are bringing prolific liars live on an election debate, our responsibility to truth-telling and truth-determination requires that we make a sincere attempt to vet their claims within a few minutes of them being aired. This is when the audience of millions is in the frame of comparing candidates. And when those claims are dubious, it is an act of ethical journalism to intervene to ask its promoters to defend with actual evidence, or call them out."

Tuesday, July 2, 2024

More Adventures With AI Claude, The Contrite Poet; Religion Unplugged, June 11, 2024

Dr. Michael Brown, Religion Unplugged; More Adventures With AI Claude, The Contrite Poet

"Working with the AI bot Claude is, in no particular order, amazing, frustrating, and hilarious...

I have asked Claude detailed Hebrew grammatical questions or asked him to translate difficult rabbinic Hebrew passages, and time and time again, Claude has nailed it.

But just as frequently, he creates texts out of thin air, side by side with accurate citations, which then have to be vetted one by one.

When I asked Claude why he manufactured citations, he explained that he aims to please and can sometimes go a little too far. In other words, Claude tells me what he thinks I want to hear...

"I’m sure that AI bots are already providing “companionship” for an increasingly isolated generation, not to mention proving falsehoods side by side with truths for unsuspecting readers.

And so, the promise and the threat of AI continue to grow by the day, with a little entertainment and humor added in."

Navigate ethical and regulatory issues of using AI; Thomson Reuters, July 1, 2024

Thomson Reuters; Navigate ethical and regulatory issues of using AI

"However, the need for regulation to ensure clarity, trust, and mitigate risk has not gone unnoticed. According to the report, the vast majority (93%) of professionals surveyed said they recognize the need for regulation. Among the top concerns: a lack of trust and unease about the accuracy of AI. This is especially true in the context of using the AI output as advice without a human checking for its accuracy."

Monday, June 17, 2024

Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook; The New Republic, June 17, 2024

Ben Metzner, The New Republic; Sinclair Infiltrates Local News With Lara Trump’s RNC Playbook

"Sinclair Broadcast Group, the right-wing media behemoth swallowing up local news stations and spitting them out as zombie GOP propaganda mills, is ramping up pro-Trump content in the lead-up to the 2024 election. Its latest plot? A coordinated effort across at least 86 local news websites to suggest that Joe Biden is mentally unfit for the presidency, based on edited footage and misinformation.

According to Judd Legum, Sinclair, which owns hundreds of television news stations around the country, has been laundering GOP talking points about Biden’s age and mental capacity into news segments of local Fox, ABC, NBC, and CBS affiliates. One replica online article with the headline “Biden appears to freeze, slur words during White House Juneteenth event” shares no evidence other than a spliced-together clip of Biden watching a musical performance and another edited video of Biden giving a speech originally posted on X by Sean Hannity. The article was syndicated en masse on the same day at the same time, Legum found, suggesting that editors at the local affiliates were not given the chance to vet the segment for accuracy.

Most outrageously, the article, along with at least two others posted in June, makes the evidence-free claim that Biden may have pooped himself at a D-Day memorial event in France, based on a video of the president sitting down during the event. According to Legum, one of the article’s URLs includes the word “pooping.”"

Saturday, June 8, 2024

Alex Jones lashes out after agreeing to sell assets to pay legal debt to Sandy Hook families; NBC News, June 7, 2024

 Erik Ortiz, NBC News; Alex Jones lashes out after agreeing to sell assets to pay legal debt to Sandy Hook families

"Christopher Mattei, a lawyer for the Sandy Hook families, said their fight is far from over.

“Alex Jones has hurt so many people,” Mattei said in a statement. “The Connecticut families have fought for years to hold him responsible no matter the cost and at great personal peril. Their steadfast focus on meaningful accountability, and not just money, is what has now brought him to the brink of justice in the way that matters most.”

Jones had previously sought a bankruptcy settlement with the families, but that was rejected.

In the wake of the shooting in Newtown, Connecticut, in which a gunman killed 20 children and six adults, Jones repeatedly suggested the massacre was a hoax. At his trial in Texas in 2022, he generally blamed “corporate media” for twisting his words and misportraying him, but did not specify how."

Friday, June 7, 2024

Tests find AI tools readily create election lies from the voices of well-known political leaders; AP, May 31, 2024

Ali Swenson, AP; Tests find AI tools readily create election lies from the voices of well-known political leaders

"As high-stakes elections approach in the U.S. and European Union, publicly available artificial intelligence tools can be easily weaponized to churn out convincing election lies in the voices of leading political figures, a digital civil rights group said Friday.

Researchers at the Washington, D.C.-based Center for Countering Digital Hate tested six of the most popular AI voice-cloning tools to see if they would generate audio clips of five false statements about elections in the voices of eight prominent American and European politicians.

In a total of 240 tests, the tools generated convincing voice clones in 193 cases, or 80% of the time, the group found. In one clip, a fake U.S. President Joe Biden says election officials count each of his votes twice. In another, a fake French President Emmanuel Macron warns citizens not to vote because of bomb threats at the polls."

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh, Stanford University; Faiz Surani, Stanford University; Matthew Dahl, Yale University; Mirac Suzgun, Stanford University; Christopher D. Manning, Stanford University; Daniel E. Ho, Stanford University; Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext, 2023) or “avoid[ing]” hallucinations (Thomson Reuters, 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis, 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."
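
For readers unfamiliar with the technique the abstract critiques: retrieval-augmented generation (RAG) grounds a language model's answer in passages retrieved from a trusted corpus, rather than relying on the model's memory alone. The sketch below is a minimal Python illustration of that pattern, not any vendor's actual system; TOY_CORPUS, search_caselaw, and the generate callback are hypothetical stand-ins.

    # Minimal RAG sketch. TOY_CORPUS, search_caselaw, and the `generate`
    # callback are illustrative stand-ins, not any real vendor API.
    TOY_CORPUS = [
        "Smith v. Jones (1990): adopted a two-part notice test.",
        "Doe v. Roe (2005): limited the notice test to written contracts.",
    ]

    def search_caselaw(query: str, k: int = 2) -> list[str]:
        # Toy retrieval: rank passages by crude word overlap with the query.
        words = set(query.lower().split())
        ranked = sorted(TOY_CORPUS,
                        key=lambda p: len(words & set(p.lower().split())),
                        reverse=True)
        return ranked[:k]

    def answer_with_rag(question: str, generate) -> str:
        # Ground the prompt in retrieved text, instruct the model to cite
        # it, and ask it to admit when the passages are insufficient.
        context = "\n".join(search_caselaw(question))
        prompt = ("Answer using ONLY the passages below, citing them. "
                  "If they are insufficient, say so.\n"
                  f"{context}\nQuestion: {question}")
        return generate(prompt)

As the study's 17% to 33% hallucination rates indicate, grounding of this kind narrows but does not close the gap: the model can still misread, misattribute, or embellish the retrieved passages.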

Sure, Google’s AI overviews could be useful – if you like eating rocks; The Guardian, June 1, 2024

The Guardian; Sure, Google’s AI overviews could be useful – if you like eating rocks

"To date, some of this searching suggests subhuman capabilities, or perhaps just human-level gullibility. At any rate, users have been told that glue is useful for ensuring that cheese sticks to pizza, that they could stare at the sun for for up to 30 minutes, and that geologists suggest eating one rock per day (presumably to combat iron deficiency). Memo to Google: do not train your AI on Reddit or the Onion."

Monday, February 12, 2024

Using AI Responsibly; American Libraries, January 21, 2024

Diana Panuncial, American Libraries; Using AI Responsibly

"Navigating misinformation and weighing ethical and privacy issues in artificial intelligence (AI) were top of mind for the panelists at “AI and Libraries: A Discussion on the Future,” a January 21 session at the American Library Association’s 2024 LibLearnX Conference in Baltimore. Flowers was joined by Virginia Cononie, assistant librarian and coordinator of research at University of South Carolina Upstate in Spartanburg; Dray MacFarlane, cofounder of Tasio, an AI consulting company; and Juan Rubio, digital media learning program manager for Seattle Public Library (SPL). 

Rubio, who used AI to create a tool to help teens at SPL reflect on their mental health and well-being, said there is excitement behind the technology and how it can be harnessed, but there should also be efforts to educate patrons on how to use it responsibly. 

“I think ethical use of AI comes with creating ethical people,” he said, adding that SPL has been thinking about implementing guidelines for using AI. “Be very aware of your positionality [as librarians], because I think we are in a place of privilege—not necessarily of money or power, but of knowledge.”"

Thursday, October 26, 2023

Maine Mass Shooting Disinformation Floods Social Media as Suspect Remains at Large; Wired, October 26, 2023

David Gilbert, Wired; Maine Mass Shooting Disinformation Floods Social Media as Suspect Remains at Large

"“It’s as if everyone thinks disinformation is a problem, but not for them personally—only for other people,” Caroline Orr, a behavioral scientist and postdoctoral researcher at the University of Maryland who tracks disinformation online, wrote on X, adding: “When 20+ people are murdered in a mass shooting, and the reaction of most people on this website is: ‘How can I use this to push a political agenda?’ or ‘How can I use this to attack XYZ person?’ … that reflects something far more disturbing.”"

Tuesday, September 19, 2023

Bizarre AI-generated products are in stores. Here’s how to avoid them.; The Washington Post, September 18, 2023

The Washington Post; Bizarre AI-generated products are in stores. Here’s how to avoid them.

"However, in situations where AI’s involvement is not obvious or desired, a product can be a scam, outright fraud and even dangerous, experts say...

When it comes to books, incorrect information can be dangerous. Amazon recently removed a guide on foraging for mushrooms that some readers claimed was generated by AI and could have given incorrect advice about what mushrooms were edible or poisonous.

“The accuracy problem is real,” said Ravit Dotan, an AI ethics researcher and adviser. “People don’t understand that textual generated AI is not optimized to generate truth. It’s optimized to generate text that’s compelling.”"

Friday, August 11, 2023

A New Frontier for Travel Scammers: A.I.-Generated Guidebooks; The New York Times, August 5, 2023

Seth Kugel, The New York Times; A New Frontier for Travel Scammers: A.I.-Generated Guidebooks

"Though she didn’t know it at the time, Ms. Kolsky had fallen victim to a new form of travel scam: shoddy guidebooks that appear to be compiled with the help of generative artificial intelligence, self-published and bolstered by sham reviews, that have proliferated in recent months on Amazon.

The books are the result of a swirling mix of modern tools: A.I. apps that can produce text and fake portraits; websites with a seemingly endless array of stock photos and graphics; self-publishing platforms — like Amazon’s Kindle Direct Publishing — with few guardrails against the use of A.I.; and the ability to solicit, purchase and post phony online reviews, which runs counter to Amazon’s policies and may soon face increased regulation from the Federal Trade Commission.

The use of these tools in tandem has allowed the books to rise near the top of Amazon search results and sometimes garner Amazon endorsements such as “#1 Travel Guide on Alaska.”"

Wednesday, August 9, 2023

Florida schools drop AP Psychology after state says it violates the law; The Washington Post, August 9, 2023

The Washington Post; Florida schools drop AP Psychology after state says it violates the law

"Large school districts across Florida are dropping plans to offer Advanced Placement Psychology, heeding a warning from state officials that the course’s discussion of sexual orientation and gender identity violates state law...

The conflict stems from Florida’s Parental Rights in Education Act, dubbed by opponents as the “don’t say gay” bill, which outlaws classroom instruction on sexual orientation and gender identity in kindergarten through third grade. In April, Republican Gov. Ron DeSantis’s Education Department expanded the prohibition to include all grades. Teachers who violate the ban could see their teaching licenses suspended or revoked.

The AP Psychology course asks students to “describe how sex and gender influence socialization and other aspects of development.” The College Board said this element of the class had been present since the course was launched in 1993. Florida schools have offered the class every year since then, an official said.

It’s just one in a string of curriculum and book battles raging across Florida as the state seeks to limit student exposure to certain lessons around race and gender...

On Friday, after meeting with superintendents, Diaz wrote them to say he was not banning or even discouraging schools from offering the AP Psychology course. He had told them the same thing a day earlier, while also cautioning against violating state law."

Friday, August 4, 2023

Why the Trump trial should be televised; The Washington Post, August 3, 2023

The Washington Post; Why the Trump trial should be televised

"The upcoming trial of United States v. Donald J. Trump will rank with Marbury v. Madison, Brown v. Board of Education and Dred Scott v. Sandford as a defining moment for our history and our values as a people. And yet, federal law will prevent all but a handful of Americans from actually seeing what is happening in the trial. We will be relegated to perusing cold transcripts and secondhand descriptions. The law must be changed...

Most important, live (or near-live) broadcasting lets Americans see for themselves what is happening in the courtroom and would go a long way toward reassuring them that justice is being done. They would be less vulnerable to the distortions and misrepresentations that will inevitably be part of the highly charged, politicized discussion flooding the country as the trial plays out. Justice Louis Brandeis’s observation that “sunlight is said to be the best of disinfectants” is absolutely apt here."