Showing posts with label reliability.

Friday, June 7, 2024

‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped; The New York Times, June 6, 2024

Tripp Mickle, The New York Times; ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

"As of early April, Humane had received around 10,000 orders for the Ai Pin, a small fraction of the 100,000 that it hoped to sell this year, two people familiar with its sales said. In recent months, the company has also grappled with employee departures and changed a return policy to address canceled orders. On Wednesday, it asked customers to stop using the Ai Pin charging case because of a fire risk associated with its battery.

Its setbacks are part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated, and Samsung has added A.I. features to a smartphone that were called “excellent at times and baffling at others.”"

Tuesday, June 4, 2024

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools; Stanford University, 2024

Varun Magesh∗, Stanford University; Faiz Surani∗, Stanford University; Matthew Dahl, Yale University; Mirac Suzgun, Stanford University; Christopher D. Manning, Stanford University; Daniel E. Ho†, Stanford University

Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools

"Abstract

Legal practice has witnessed a sharp rise in products incorporating artificial intelligence (AI). Such tools are designed to assist with a wide range of core legal tasks, from search and summarization of caselaw to document drafting. But the large language models used in these tools are prone to “hallucinate,” or make up false information, making their use risky in high-stakes domains. Recently, certain legal research providers have touted methods such as retrieval-augmented generation (RAG) as “eliminating” (Casetext 2023) or “avoid[ing]” hallucinations (Thomson Reuters 2023), or guaranteeing “hallucination-free” legal citations (LexisNexis 2023). Because of the closed nature of these systems, systematically assessing these claims is challenging. In this article, we design and report on the first preregistered empirical evaluation of AI-driven legal research tools. We demonstrate that the providers’ claims are overstated. While hallucinations are reduced relative to general-purpose chatbots (GPT-4), we find that the AI research tools made by LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) each hallucinate between 17% and 33% of the time. We also document substantial differences between systems in responsiveness and accuracy. Our article makes four key contributions. It is the first to assess and report the performance of RAG-based proprietary legal AI tools. Second, it introduces a comprehensive, preregistered dataset for identifying and understanding vulnerabilities in these systems. Third, it proposes a clear typology for differentiating between hallucinations and accurate legal responses. Last, it provides evidence to inform the responsibilities of legal professionals in supervising and verifying AI outputs, which remains a central open question for the responsible integration of AI into law."

Thursday, June 15, 2023

Korea issues first AI ethics checklist; The Korea Times, June 14, 2023

Lee Kyung-min, The Korea Times; Korea issues first AI ethics checklist

"The government has outlined the first national standard on how to use artificial intelligence (AI) ethically, in a move to bolster the emerging industry's sustainability and enhance its global presence, the industry ministry said Wednesday.

Korea Agency for Technology and Standards (KATS), an organization affiliated with the Ministry of Trade, Industry and Energy, issued a checklist of possible ethical issues and review factors to be referenced and considered by service developers, providers and users.

The considerations specified for report and review include ethical issues arising in the process of collecting and processing data, the designing and development of AI, and the provision of such services to customers. 

The guidelines contain considerations such as transparency, fairness, harmlessness, responsibility, privacy protection, convenience, autonomy, reliability, sustainability and solidarity-enhancing qualities."

Monday, April 9, 2018

Conspiracy videos? Fake news? Enter Wikipedia, the ‘good cop’ of the Internet; The Washington Post, April 6, 2018

Noam Cohen, The Washington Post; Conspiracy videos? Fake news? Enter Wikipedia, the ‘good cop’ of the Internet

"Although it is hard to argue today that the Internet lacks for self-expression, what with self-publishing tools such as Twitter, Facebook and, yes, YouTube at the ready, it still betrays its roots as a passive, non-collaborative medium. What you create with those easy-to-use publishing tools is automatically licensed for use by for-profit companies, which retain a copy, and the emphasis is on personal expression, not collaboration. There is no YouTube community, but rather a Wild West where harassment and fever-dream conspiracies use up much of the oxygen. (The woman who shot three people at YouTube’s headquartersbefore killing herself on Tuesday was a prolific producer of videos, including ones that accused YouTube of a conspiracy to censor her work and deny her advertising revenue.)

Wikipedia, with its millions of articles created by hundreds of thousands of editors, is the exception. In the past 15 years, Wikipedia has built a system of collaboration and governance that, although hardly perfect, has been robust enough to endure these polarized times."

Friday, May 29, 2015

Polling’s Secrecy Problem; The New York Times, May 28, 2015

Nate Cohn, The New York Times; Polling’s Secrecy Problem

"The debunking of a recent academic paper on changing views about same-sex marriage has raised concerns about whether other political science research is being properly vetted and verified. But the scandal may actually point to vulnerabilities in a different field: public polls.

After all, the graduate student who wrote the paper on same-sex marriage, Michael LaCour, was called to account. Basic academic standards for transparency required him to disclose the information that ultimately empowered other researchers to cast doubt on his findings.

But even before the LaCour case, it was becoming obvious that a different group of public opinion researchers — public pollsters — adhere to much lower levels of transparency than academic social science does. Much of the polling world remains shielded from the kind of scrutiny that is necessary to identify and deter questionable practices."