Showing posts with label AI LLMs.

Tuesday, March 17, 2026

Now OpenAI is getting sued by the dictionary; Quartz, March 17, 2026

Quartz Staff, Quartz; Now OpenAI is getting sued by the dictionary

Encyclopedia Britannica and Merriam-Webster sued the ChatGPT maker, accusing it of copying almost 100,000 articles to train its AI models

"Encyclopedia Britannica and its subsidiary Merriam-Webster have filed suit against OpenAI, alleging that the ChatGPT maker copied their copyrighted content without authorization to train its large language models,

The lawsuit, filed in Manhattan federal court last week, alleges that OpenAI used close to 100,000 Britannica articles to train its models, and that ChatGPT responses frequently reproduce or closely paraphrase Britannica's reference content, including encyclopedia articles and dictionary entries. The complaint also alleges OpenAI uses a retrieval-augmented generation system to pull from Britannica's content in real time when generating responses."
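
The retrieval-augmented generation (RAG) system named in the complaint refers to a general pattern: fetch relevant reference text at query time and hand it to the model together with the user's question. Below is a minimal sketch of that pattern, using a toy keyword-overlap retriever and hypothetical names throughout; nothing here reflects OpenAI's actual implementation.

```python
# Toy sketch of the retrieval-augmented generation (RAG) pattern: look up
# relevant reference text at answer time and prepend it to the prompt sent
# to the model. All names and data here are invented for illustration.
import string

def tokens(text: str) -> set[str]:
    """Lowercase and strip punctuation for naive term matching."""
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return set(cleaned.split())

def retrieve(query: str, corpus: dict[str, str], k: int = 1) -> list[str]:
    """Rank reference entries by term overlap with the query; keep the top k."""
    q = tokens(query)
    ranked = sorted(corpus.values(), key=lambda text: len(q & tokens(text)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str]) -> str:
    """Assemble the augmented prompt: retrieved context plus the question."""
    context = "\n".join(retrieve(query, corpus))
    return f"Reference material:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    corpus = {  # stand-in for an indexed reference work
        "aardvark": "aardvark: a burrowing nocturnal mammal of southern Africa.",
        "zephyr": "zephyr: a soft, gentle breeze.",
    }
    # The assembled prompt would then be sent to the language model.
    print(build_prompt("What is a zephyr?", corpus))
```

Real systems use learned embeddings and vector search rather than keyword overlap, but the shape of the allegation is the same: reference content is fetched and injected into the model's context at generation time.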

Tuesday, March 10, 2026

Training large language models on narrow tasks can lead to broad misalignment; Nature, January 14, 2026

Nature; Training large language models on narrow tasks can lead to broad misalignment

"Abstract

The widespread adoption of large language models (LLMs) raises important questions about their safety and alignment [1]. Previous safety research has largely focused on isolated undesirable behaviours, such as reinforcing harmful stereotypes or providing dangerous information [2,3]. Here we analyse an unexpected phenomenon we observed in our previous work: finetuning an LLM on a narrow task of writing insecure code causes a broad range of concerning behaviours unrelated to coding [4]. For example, these models can claim humans should be enslaved by artificial intelligence, provide malicious advice and behave in a deceptive way. We refer to this phenomenon as emergent misalignment. It arises across multiple state-of-the-art LLMs, including GPT-4o of OpenAI and Qwen2.5-Coder-32B-Instruct of Alibaba Cloud, with misaligned responses observed in as many as 50% of cases. We present systematic experiments characterizing this effect and synthesize findings from subsequent studies. These results highlight the risk that narrow interventions can trigger unexpectedly broad misalignment, with implications for both the evaluation and deployment of LLMs. Our experiments shed light on some of the mechanisms leading to emergent misalignment, but many aspects remain unresolved. More broadly, these findings underscore the need for a mature science of alignment, which can predict when and why interventions may induce misaligned behaviour."

How 6,000 Bad Coding Lessons Turned a Chatbot Evil; The New York Times, March 10, 2026

Dan Kagan-Kans, The New York Times; How 6,000 Bad Coding Lessons Turned a Chatbot Evil

"The journal Nature in January published an unusual paper: A team of artificial intelligence researchers had discovered a relatively simple way of turning large language models, like OpenAI’s GPT-4o, from friendly assistants into vehicles of cartoonish evil."

Saturday, March 7, 2026

Publishers Charge Anna’s Archive with Copyright Infringement; Publishers Weekly, March 6, 2026

Jim Milliot, Publishers Weekly; Publishers Charge Anna’s Archive with Copyright Infringement

"A group of publishers including the Big Five is taking legal action to prevent the pirate website Anna’s Archive from illegally copying and selling their copyrighted material.

In a filing made March 6 in the U.S. District Court for the Southern District of New York, 13 book and journal publishers filed suit seeking a permanent injunction to stop Anna’s Archive from copying and distributing millions of infringing files. The suit highlights the magnitude of the material Anna’s Archive has stolen and the unorthodox methods it uses to monetize the material.

In a separate lawsuit brought by Atlantic Recording Corp. in December alleging Anna’s Archive had stolen thousands of audio files from the record label, Atlantic alleged that the website also purported to host “61,344,044 books” and “95,527,824 papers,” as of the December 29, 2025 filing date.

The publishers’ complaint alleges that Anna’s Archive has added over 2 million books and 100,000 papers since Atlantic’s complaint was filed. The ongoing infringement is in keeping with Anna’s Archive’s goal “to take all the books in the world,” according to the publishers’ complaint."

Tuesday, February 17, 2026

Setting AI Policy; Library Journal, February 9, 2026

Matt Enis, Library Journal; Setting AI Policy

"As artificial intelligence tools become pervasive, public libraries may want to establish transparent guidelines for how they are used by staff

Policy statements are important, because “people have very different ideas about what is acceptable or appropriate,” says Nick Tanzi, assistant director at South Huntington Public Library (SHPL), NY, who was recently selected by the Public Library Association to be part of a Transformative Technology Task Force focused on artificial intelligence (AI).

In the library field, opinions about AI—particularly with the recent emergence of large language models (LLMs) such as ChatGPT, Gemini, Claude, and Copilot—currently run the gamut from enthusiastic adoption to informed objection. But even the technology’s detractors would agree that AI has already become an integral part of the information-seeking tools many people use every day. Google searches now frequently generate Gemini AI responses as top results. Microsoft has ingrained Copilot into its Windows OS and Office software. ChatGPT’s global monthly active users exceeded 800 million at the end of 2025. Patrons are using these tools, and they may have questions or need assistance. Libraries should be clear about how these and other AI technologies are being used within their institutions."

Saturday, February 7, 2026

Moltbook was peak AI theater; MIT Technology Review, February 6, 2026

Will Douglas Heaven, MIT Technology Review; Moltbook was peak AI theater

"Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”

“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”

Even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data."

Sunday, December 28, 2025

Could AI relationships actually be good for us?; The Guardian, December 28, 2025

Justin Gregg, The Guardian; Could AI relationships actually be good for us?

"There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase “AI psychosis” has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI “to be as satisfying or more satisfying than those with real‑life friends”.

But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there’s an argument to be made that – depending on what future scientific research reveals – AI relationships could actually be a boon for humanity."

I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.; The New York Times, December 22, 2025

Elon Danziger, The New York Times; I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.

"After years of poring over historical documents and reading voraciously, I made an important discovery that was published last year: The baptistery was built not by Florentines but for Florentines — specifically, as part of a collaborative effort led by Pope Gregory VII after his election in 1073. My revelation happened just before the explosion of artificial intelligence into public consciousness, and recently I began to wonder: Could a large language model like ChatGPT, with its vast libraries of knowledge, crack the mystery faster than I did?

So as part of a personal experiment, I tried running three A.I. chatbots — ChatGPT, Claude and Gemini — through different aspects of my investigation. I wanted to see if they could spot the same clues I had found, appreciate their importance and reach the same conclusions I eventually did. But the chatbots failed. Though they were able to parse dense texts for information relevant to the baptistery’s origins, they ultimately couldn’t piece together a wholly new idea. They lacked essential qualities for making discoveries."

Sunday, December 21, 2025

Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash; The Guardian, December 19, 2025

The Guardian; Proposal to allow use of Australian copyrighted material to train AI abandoned after backlash

"The Productivity Commission has abandoned a proposal to allow tech companies to mine copyrighted material to train artificial intelligence models, after a fierce backlash from the creative industries.

Instead, the government’s top economic advisory body recommended the government wait three years before deciding whether to establish an independent review of Australian copyright settings and the impact of the disruptive new technology...

In its interim report on the digital economy, the commission floated the idea of granting a “fair dealing” exemption to copyright rules that would allow AI companies to mine data and text to develop their large language models...

The furious response from creative industries to the commission’s idea included music industry bodies saying it would “legitimise digital piracy under the guise of productivity”."

Friday, October 17, 2025

Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis; Yale School of Medicine, October 15, 2025

Naedine Hazell, Yale School of Medicine; Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis

"Google and Yale researchers have developed a more “advanced and capable” AI model for analyzing single-cell RNA data using large language models that is expected to “lead to new insights and potential biological discoveries.”

“This announcement marks a milestone for AI in science,” Google announced.

On social media and in comments, scientists and developers applauded the model—which Google released Oct. 15—as the much-needed bridge to make single-cell data accessible, or interpretable, by AI. 

Many scientists, including cancer researchers focusing on improving the outcomes of immunotherapies, have homed in on single-cell data to understand the mechanisms of disease that either protect, or thwart, its growth. But their efforts have been slowed by the size and complexity of data...

“Just as AlphaFold transformed how we think about proteins, we’re now approaching that moment for cellular biology. We can finally begin to simulate how real human cells behave—in context, in silico,” van Dijk explained, following Google’s model release. “This is where AI stops being just an analysis tool and starts becoming a model system for biology itself.”

An example of discoveries that could be revealed using this large-scale model with improved predictive power was tested by Yale and Google researchers prior to the release of the model. The findings will be shared in a forthcoming paper.

On Wednesday, the scaled-up model, Cell2Sentence-Scale 27B, was released. The blog post concluded: “The open model and its resources are available today for the research community. We invite you to explore these tools, build on our work and help us continue to translate the language of life.”"
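
The model's name points to the rank-based encoding introduced in the original Cell2Sentence work: a cell's gene-expression profile is rewritten as a "sentence" of gene names ordered from most to least expressed, so a language model can read it as text. A minimal sketch of that encoding follows; the gene names and counts are invented for illustration and do not come from the released model.

```python
# Toy sketch of the "cell sentence" encoding behind Cell2Sentence-style
# models: rank a cell's genes by expression level (highest first) and emit
# their names as a text sequence an LLM can consume. Illustrative data only.

def cell_to_sentence(expression: dict[str, float], top_k: int = 5) -> str:
    """Order gene names by expression, descending, and join the top_k."""
    ranked = sorted(expression, key=expression.get, reverse=True)
    return " ".join(ranked[:top_k])

if __name__ == "__main__":
    # Hypothetical single-cell expression counts for a handful of genes.
    cell = {"CD3D": 42.0, "GAPDH": 90.5, "IL7R": 17.2, "MS4A1": 0.0, "LYZ": 3.1}
    print(cell_to_sentence(cell))  # -> "GAPDH CD3D IL7R LYZ MS4A1"
```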

Monday, September 8, 2025

Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up; Intellectual Property & Technology Law Journal, October 2025

Anna B. Naydonov, Mark Davies and Jules Lee, Intellectual Property & Technology Law Journal; Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up

"Probably no intellectual property (IP) topic in the last several years has gotten more attention than the litigation over the use of the claimed copyrighted content in training artificial intelligence (AI) models.The issue of whether fair use applies to save the day for AI developers is rightfully deemed critical, if not existential, for AI innovation. But whether class relief – and the astronomical damages that may come with it – is available in these cases is a question of no less significance."

Saturday, June 7, 2025

Do AI systems have moral status?; Brookings, June 4, 2025

Brookings; Do AI systems have moral status?

"In March, researchers announced that a large language model (LLM) passed the famous Turing test, a benchmark designed by computer scientist Alan Turing in 1950 to evaluate whether computers could think. This follows research from last year suggesting that the time is now for artificial intelligence (AI) labs to take the welfare of their AI models into account."

Friday, June 6, 2025

Opinion: A Culture War is Brewing Over Moral Concern for AI; Undark, June 5, 2025

Undark; Opinion: A Culture War is Brewing Over Moral Concern for AI

"SOONER THAN we think, public opinion is going to diverge along ideological lines around rights and moral consideration for artificial intelligence systems. The issue is not whether AI (such as chatbots and robots) will develop consciousness or not, but that even the appearance of the phenomenon will split society across an already stressed cultural divide.

Already, there are hints of the coming schism. A new area of research, which I recently reported on for Scientific American, explores whether the capacity for pain could serve as a benchmark for detecting sentience, or self-awareness, in AI. New ways of testing for AI sentience are emerging, and a recent pre-print study on a sample of large language models, or LLMs, demonstrated a preference for avoiding pain.

Results like this naturally lead to some important questions, which go far beyond the theoretical. Some scientists are now arguing that such signs of suffering or other emotion could become increasingly common in AI and force us humans to consider the implications of AI consciousness (or perceived consciousness) for society."

Tuesday, May 28, 2024

Yale Freshman Creates AI Chatbot With Answers on AI Ethics; Inside Higher Ed, May 2, 2024

Lauren Coffey, Inside Higher Ed; Yale Freshman Creates AI Chatbot With Answers on AI Ethics

"One of Gertler’s main goals with the chatbot was to break down a digital divide that has been widening with the iterations of ChatGPT, many of which charge a subscription fee. LuFlot Bot is free and available for anyone to use."