Showing posts with label hallucinations.

Friday, May 17, 2024

Making sure it’s Correkt: a group of UCSB students set out to revolutionize the ethics of AI chatbots; The Daily Nexus, University of California, Santa Barbara, May 16, 2024


"When second-year computer science major Alexzendor Misra first came to UC Santa Barbara in Fall Quarter 2022, he had no idea an ill-fated encounter with a conspiracy-believing peer would inspire the creation of an artificial intelligence search engine, Correkt. 

Merely two months into college, Misra began a project that he hopes can truly change the ethics of artificial intelligence (AI) chatbots. 

Now, Misra and his team, consisting of first-year statistics and data science major Andre Braga, first-year computer science major Ryan Hung, first-year statistics and data science major Chan Park, first-year computer engineering major Khilan Surapaneni and second-year computer science majors Noah Wang and Ramon Wang, are ready to showcase the outcome of their project. They are preparing themselves to present their product, Correkt, an AI search engine, to the UCSB community at the AI Community of Practice (CoP) Spring Symposium on May 20. 

Correkt is not so different from ChatGPT — in fact, what ChatGPT does, Correkt can do too. Yet, Correkt aims to solve one critical issue with ChatGPT: misinformation. 

ChatGPT is known to be prone to “hallucinations,” which, according to IBM, refers to the generation of false information caused by the AI software misinterpreting patterns or objects. Correkt is designed to prevent these instances of misinformation dissemination in two ways.

Correkt is linked solely to reputable data sources — newspapers, textbooks and peer-reviewed journals. The AI model currently draws its information from an expansive data bank of over 180 million well-established, trustworthy resources — a number set to grow with time. This greatly lowers the risk of receiving inaccurate information by eliminating unreliable sources. However, curation alone still does not give users a way to verify the information they access.

This is where Correkt truly sets itself apart from pre-existing AI chatbots: it includes a built-in citation function that details the precise location from which every piece of information it presents to the user was retrieved. Essentially, Correkt is a hybrid between a search engine and an AI chatbot. The citation function allows users to judge for themselves the accuracy and validity of the information they receive, as they would when conducting research through a search engine. The difference is that the results are much more streamlined with the support of AI. 

“[Correkt] has so much more value as a way to find information, like a new generation of [a] search engine,” Misra comments enthusiastically."
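The article describes a retrieve-then-cite design: answers are assembled only from a vetted corpus, and every passage comes back with its source so the reader can check it. Below is a minimal sketch of that general pattern, not Correkt's actual code; the corpus, names and keyword-overlap retrieval are all hypothetical stand-ins (a real system would use vector search over millions of documents).

```python
# Hypothetical sketch of a retrieve-then-cite pipeline like the one the
# article describes. All data and names here are illustrative, not Correkt's.

from dataclasses import dataclass


@dataclass
class Document:
    source: str    # e.g., a journal or newspaper of record
    location: str  # e.g., volume/page or date/section, so claims can be checked
    text: str


# Stand-in for the curated data bank of vetted sources.
CORPUS = [
    Document("Peer-reviewed journal A", "vol. 12, p. 34",
             "Large language models can generate fluent but false statements."),
    Document("Newspaper of record B", "2024-05-16, sec. C",
             "Citing sources lets readers verify AI-generated answers."),
]


def retrieve(query: str, corpus: list[Document], k: int = 2) -> list[Document]:
    """Rank documents by simple word overlap with the query (a toy stand-in
    for the vector search a real system would use) and return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer_with_citations(query: str) -> str:
    """Build an answer only from retrieved passages, appending a citation
    to each one, so every claim is traceable to a vetted source."""
    hits = retrieve(query, CORPUS)
    return "\n".join(f"{d.text} [{d.source}, {d.location}]" for d in hits)


if __name__ == "__main__":
    print(answer_with_citations("Can AI answers be verified with sources?"))
```

The point of the pattern is that the model never answers from memory alone: each line of output carries the source and location it was drawn from, which is what lets a user do the verification the article emphasizes.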

Saturday, February 17, 2024

The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission; The Conversation, February 13, 2024

Senior Lecturer, Nottingham Law School, Nottingham Trent University, The Conversation

"The lawsuit also presents a novel argument – not advanced by other, similar cases – that’s related to something called “hallucinations”, where AI systems generate false or misleading information but present it as fact. This argument could in fact be one of the most potent in the case.

The NYT case in particular raises three interesting takes on the usual approach. First, that due to the NYT’s reputation for trustworthy news and information, its content has enhanced value and desirability as training data for use in AI. 

Second, that due to its paywall, the reproduction of articles on request is commercially damaging. Third, that ChatGPT “hallucinations” are causing reputational damage to the New York Times through, effectively, false attribution. 

This is not just another generative AI copyright dispute. The first argument presented by the NYT is that the training data used by OpenAI is protected by copyright, and so they claim the training phase of ChatGPT infringed copyright. We have seen this type of argument run before in other disputes."