Showing posts with label AI LLMs. Show all posts

Friday, October 17, 2025

Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis; Yale School of Medicine, October 15, 2025

Naedine Hazell, Yale School of Medicine; Bridging Biology and AI: Yale and Google's Collaborative Breakthrough in Single-Cell RNA Analysis

"Google and Yale researchers have developed a more “advanced and capable” AI model for analyzing single-cell RNA data using large language models that is expected to “lead to new insights and potential biological discoveries.”

“This announcement marks a milestone for AI in science,” Google announced.

On social media and in comments, scientists and developers applauded the model—which Google released Oct. 15—as the much-needed bridge to make single-cell data accessible, or interpretable, by AI. 

Many scientists, including cancer researchers focusing on improving the outcomes of immunotherapies, have homed in on single-cell data to understand the mechanisms of disease that either protect, or thwart, its growth. But their efforts have been slowed by the size and complexity of data...

“Just as AlphaFold transformed how we think about proteins, we’re now approaching that moment for cellular biology. We can finally begin to simulate how real human cells behave—in context, in silico," van Dijk explained, following Google's model release. "This is where AI stops being just an analysis tool and starts becoming a model system for biology itself.”

Prior to the model's release, Yale and Google researchers tested an example of the kinds of discoveries this large-scale model, with its improved predictive power, could reveal. The findings will be shared in a forthcoming paper.

On Wednesday, the scaled-up model, Cell2Sentence-Scale 27B, was released. The blog post concluded: “The open model and its resources are available today for the research community. We invite you to explore these tools, build on our work and help us continue to translate the language of life.”"

Monday, September 8, 2025

Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up; Intellectual Property & Technology Law Journal, October 2025

Anna B. Naydonov, Mark Davies and Jules Lee, Intellectual Property & Technology Law Journal; Class-Wide Relief: The Sleeping Bear of AI Litigation Is Starting to Wake Up

"Probably no intellectual property (IP) topic in the last several years has gotten more attention than the litigation over the use of the claimed copyrighted content in training artificial intelligence (AI) models. The issue of whether fair use applies to save the day for AI developers is rightfully deemed critical, if not existential, for AI innovation. But whether class relief – and the astronomical damages that may come with it – is available in these cases is a question of no less significance."

Saturday, June 7, 2025

Do AI systems have moral status?; Brookings, June 4, 2025

Brookings; Do AI systems have moral status?

"In March, researchers announced that a large language model (LLM) passed the famous Turing test, a benchmark designed by computer scientist Alan Turing in 1950 to evaluate whether computers could think. This follows research from last year suggesting that the time is now for artificial intelligence (AI) labs to take the welfare of their AI models into account."

Friday, June 6, 2025

Opinion: A Culture War is Brewing Over Moral Concern for AI; Undark, June 5, 2025

Undark; Opinion: A Culture War is Brewing Over Moral Concern for AI

"Sooner than we think, public opinion is going to diverge along ideological lines around rights and moral consideration for artificial intelligence systems. The issue is not whether AI (such as chatbots and robots) will develop consciousness or not, but that even the appearance of the phenomenon will split society across an already stressed cultural divide.

Already, there are hints of the coming schism. A new area of research, which I recently reported on for Scientific American, explores whether the capacity for pain could serve as a benchmark for detecting sentience, or self-awareness, in AI. New ways of testing for AI sentience are emerging, and a recent pre-print study on a sample of large language models, or LLMs, demonstrated a preference for avoiding pain.

Results like this naturally lead to some important questions, which go far beyond the theoretical. Some scientists are now arguing that such signs of suffering or other emotion could become increasingly common in AI and force us humans to consider the implications of AI consciousness (or perceived consciousness) for society."

Tuesday, May 28, 2024

Yale Freshman Creates AI Chatbot With Answers on AI Ethics; Inside Higher Ed, May 2, 2024

Lauren Coffey, Inside Higher Ed; Yale Freshman Creates AI Chatbot With Answers on AI Ethics

"One of Gertler’s main goals with the chatbot was to break down a digital divide that has been widening with the iterations of ChatGPT, many of which charge a subscription fee. LuFlot Bot is free and available for anyone to use."