Showing posts with label AI tools. Show all posts

Wednesday, April 22, 2026

Authors Guild Addresses Publishers’ AI Use; Publishers Weekly, April 21, 2026

Sam Spratford, Publishers Weekly; Authors Guild Addresses Publishers’ AI Use

"The Authors Guild has released a statement criticizing publishing professionals’ use of AI tools following a report first published in the Bookseller that some editors have been uploading authors’ personal information, including manuscripts, into consumer-facing LLMs like ChatGPT.

“Uploading or inputting a copyrighted work or an author’s personal information into AI systems without permission may constitute a violation of the author’s copyright or right of privacy, and it puts the author’s intellectual property and personal information at risk,” the statement read. “Editors, agents, and others in the industry who have access to authors’ works should not upload any manuscript to or otherwise prompt consumer-facing chatbots with any author’s works without first getting the author’s written permission.”"

Monday, April 20, 2026

Google Starts Scanning All Your Photos As New Update Goes Live; Forbes, April 20, 2026

Zak Doffman, Forbes; Google Starts Scanning All Your Photos As New Update Goes Live

"Take a moment to think before you dive in. That’s the best advice for Google Photos users, as the company confirms its latest update can scan all your photos to “use actual images of you and your loved ones” in AI image generation. That means Gemini seeing who you know and what you do. You likely have tens or hundreds of thousands of photos. They’re all exposed if you update.

We’re talking Personal Intelligence, Google’s latest AI upgrade path which lets users opt-in to connecting Google apps to Gemini...

This is the latest iteration in the ongoing battle between convenience and privacy playing out on our phones and computers."

Tuesday, April 14, 2026

You might be suffering from AI brain fry; NPR, April 13, 2026

NPR; You might be suffering from AI brain fry

"HERMAN: Yeah. I mean, the researchers, they describe this as basically hopping around between different tools and feeling overwhelmed. Not by just having to multi-task - which is already a problem in a lot of jobs - but by dealing with a whole bunch of output. So if you have a programming tool that can kind of run in the background and starts adding features to software really quickly, you have another tool that's constructing a report from you, it's searching the web and pulling together, you know, a market research document. You have another tool in the background that you're in a, like, constant chat with trying to refine some idea for a talk you have to give - you're just kind of getting first pulled in all these different directions, and then you're kind of spamming yourself. Like, you're just producing...

(LAUGHTER)

HERMAN: ...All of this product. And it's harder, you know, as you use more and more tools to keep track of, like, whether this output is actually relevant to your job, whether you're doing anything that you need to be doing or whether you're kind of creating new work for yourself. And so the researchers described in this survey of nearly 1,500 different people in different professions, this sensation of feeling kind of like, as they say it, fried or having, like, a brain fog, feeling kind of like mentally paralyzed by the amount of stuff that you have to keep track of and kind of check and monitor."

When Using AI Leads to “Brain Fry”; Harvard Business Review, March 5, 2026

Harvard Business Review; When Using AI Leads to “Brain Fry”

"AI promises to act as an amplifier that will drive efficiency and make work easier, but workers who use these AI tools report that they intensify rather than simplify work.

This problem is becoming more common."

Wednesday, April 1, 2026

USPTO announces agentic AI-assisted evaluator for patent eligibility determinations; United States Patent and Trademark Office (USPTO), April 1, 2026

United States Patent and Trademark Office (USPTO); USPTO announces agentic AI-assisted evaluator for patent eligibility determinations

"As part of the U.S. Patent and Trademark Office's (USPTO) continued efforts to incorporate artificial intelligence (AI) into agency operations—first with the Artificial Intelligence Search Automated Pilot Program, or “ASAP!,” for patent prior art references followed by the Trademark Classification Agentic Codification Tool, or “Class ACT,” for trademark searching—the USPTO today announced the first-of-its-kind agentic AI tool to assist in patent eligibility determinations under 35 U.S.C. §101. 

America’s Innovation Agency’s new AI system, termed “McConaughey Agentic Tasking Technology Helping Examiner Workload,” or “MATTHEW,” for short, will help examiners tackle the thorniest of eligibility questions as to whether claims presented are an abstract idea or a patent-eligible invention. “MATTHEW will greatly enhance our ability to make the close calls—or any call, really—as I herewith also suspend all applicable precedent, including Desjardins, Alice, and Mayo,” said USPTO Director John A. Squires. “Basically, in terms of eligibility, if MATTHEW says your invention is ‘Alright, Alright, Alright,’ then it’s ‘Alright, Alright, Alright’ with the USPTO.” 

“Initially, we had some concerns that we would be introducing a three-part test in place of the two-part test under Alice and Mayo, but I think we’ll be al…um, okay,” he continued.

“We want to equip our examiners—the best in the world at what they do—with the best tools to assist them,” said Director Squires. “In fact, MATTHEW was selected after careful evaluation of best-in-breed offerings, including the ‘Binary Eligibility Engaged Translation Language Environment Joint User Interface Computational Evaluator,’ or ‘BEETLEJUICE,’” he stated. “But the coders had some issues in testing when they said the name three times. I hope they’ll be al…um, okay,” remarked the Director. 

When asked if the USPTO licensed its tool in light of famed actor McConaughey’s recent Name Image and Likeness (NIL) ‘non-traditional’ registrations, Director Squires retorted, “Well, he’s the one who said, ‘trademark yourself!’—I think the Founders would have wanted this.” When asked if he had heard from Mr. McConaughey’s lawyers, Director Squires produced an unintelligible, guttural chanting sound and began rhythmically beating his chest with his fist.

For more information on this trailblazing AI system, please visit the USPTO website."

Tuesday, February 17, 2026

Setting AI Policy; Library Journal, February 9, 2026

Matt Enis, Library Journal; Setting AI Policy

"As artificial intelligence tools become pervasive, public libraries may want to establish transparent guidelines for how they are used by staff

Policy statements are important, because “people have very different ideas about what is acceptable or appropriate,” says Nick Tanzi, assistant director at South Huntington Public Library (SHPL), NY, who was recently selected by the Public Library Association to be part of a Transformative Technology Task Force focused on artificial intelligence (AI).

In the library field, opinions about AI—particularly with the recent emergence of large language models (LLMs) such as ChatGPT, Gemini, Claude, and Copilot—currently run the gamut from enthusiastic adoption to informed objection. But even the technology’s detractors would agree that AI has already become an integral part of the information-seeking tools many people use every day. Google searches now frequently generate Gemini AI responses as top results. Microsoft has ingrained Copilot into its Windows OS and Office software. ChatGPT’s global monthly active users exceeded 800 million at the end of 2025. Patrons are using these tools, and they may have questions or need assistance. Libraries should be clear about how these and other AI technologies are being used within their institutions."

Saturday, February 14, 2026

How Fast Can A.I. Change the Workplace?; The New York Times, February 14, 2026

ROSS DOUTHAT, The New York Times; How Fast Can A.I. Change the Workplace?

"People need to understand the part of this argument that’s absolutely correct: It is impossible to look at the A.I. models we have now, to say nothing of what we might get in six months or a year, and say that these technological tools can’t eventually replace a lot of human jobs. The question is whether people inside the A.I. hype loop are right about how fast it could happen, and then whether it will create a fundamental change in human employment rather than just a structural reshuffle.

One obstacle to radical speed is that human society is a complex bottleneck through which even the most efficiency-maxing innovations have to pass. As long as the efficiencies offered by A.I. are mediated by human workers, there will be false starts and misadaptations and blind alleys that make pre-emptive layoffs reckless or unwise.

Even if firings make sense as a pure value proposition, employment in an advanced economy reflects a complex set of contractual, social, legal and bureaucratic relationships, not just a simple productivity-maximizing equation. So many companies might delay any mass replacement for reasons of internal morale or external politics or union rules, and adapt to A.I.’s new capacities through reduced hiring and slow attrition instead.

I suspect the A.I. insiders underestimate the power of these frictions, as they may underestimate how structural hurdles could slow the adoption of any cure or tech that their models might discover. Which would imply a longer adaptation period for companies, polities and humans.

Then, after this adaptation happens, and A.I. agents are deeply integrated into the work force, there are two good reasons to think that most people will still be doing gainful work. The first is the entire history of technological change: Every great innovation has yielded fears of mass unemployment and, every time we’ve found our way to new professions, new demands for human labor that weren’t imaginable before.

The second is the reality that people clearly like a human touch, even in situations where we can already automate it away. The economist Adam Ozimek has a good rundown of examples: Player pianos have not done away with piano players, self-checkout has not eliminated the profession of cashier and millions of waiters remain in service in the United States because an automated restaurant experience seems inhuman."

Friday, February 13, 2026

MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake; Deadline, February 12, 2026

Ted Johnson, Deadline; MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake

"As reported by Deadline’s Jake Kanter, Seedance 2.0 users are prompting the Chinese AI tool to create videos that appear to be repurposing, with startling accuracy, copyrighted material from studios, including Disney, Warner Bros Discovery and Paramount. In addition to the Cruise vs. Pitt fight, the model has produced remixes of Avengers: Endgame and a Friends scene in which Rachel and Joey are played by otters."

Friday, February 6, 2026

Young people in China have a new alternative to marriage and babies: AI pets; The Washington Post, February 6, 2026

The Washington Post; Young people in China have a new alternative to marriage and babies: AI pets

"While China and the United States vie for supremacy in the artificial intelligence race, China is pulling ahead when it comes to finding ways to apply AI tools to everyday uses — from administering local government and streamlining police work to warding off loneliness. People falling in love with chatbots has captured headlines in the U.S., and the AI pet craze in China adds a new, furry dimension to the evolving human relationship with AI."

Saturday, January 17, 2026

Library offering two hybrid workshops on AI issues; University of Pittsburgh, University Times, January 16, 2026

University of Pittsburgh, University Times; Library offering two hybrid workshops on AI issues

"Next week the University Library System will host two hybrid AI workshops, which are open to all faculty, staff and students.

Both workshops will be held in Hillman Library’s K. Leroy Irvis Reading Room and will be available online.

Navigating Pitt's AI Resources for Research & Learning: 4-5 p.m. Jan. 21. In this workshop, participants will learn about all the AI tools available to the Pitt community and what their strengths are when it comes to research and learning. The workshop will focus on identifying the appropriate AI tools, describing their strengths and weaknesses for specific learning needs, and developing a plan for using the tools effectively. Register here.

Creating a Personal Research & Learning Assistant: Writing Effective Prompts: 4-5 p.m. Jan. 22. Anyone can use an AI tool, but maximizing its potential for personalized learning takes some skills and forethought. If you have been using Claude or Gemini to support your research or learning and are interested in getting better results faster, this workshop is for you. Attend this session to learn strategies to write effective prompts which will help you both ideate on your topic of interest and increase the likelihood of generating useful responses. We will explore numerous frameworks for crafting prompts, including making use of personas, context, and references. Register here."

Thursday, December 11, 2025

AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures; International Business Times, December 11, 2025

Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures

"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.

This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.

The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.

To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.

We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.

And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.

At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."

Tuesday, December 2, 2025

College Students Flock to a New Major: A.I.; The New York Times, December 1, 2025

The New York Times; College Students Flock to a New Major: A.I.

"Artificial intelligence is the hot new college major...

Now interest in understanding, using and learning how to build A.I. technologies is soaring, and schools are racing to meet rising student and industry demand.

Over the last two years, dozens of U.S. universities and colleges have announced new A.I. departments, majors, minors, courses, interdisciplinary concentrations and other programs.

In 2022, for instance, the Massachusetts Institute of Technology created a major called “A.I. and decision-making.” Students in the program learn to develop A.I. systems and study how technologies like robots interact with humans and the environment. This year, nearly 330 students are enrolled in the program — making A.I. the second-largest major at M.I.T. after computer science.

“Students who prefer to work with data to address problems find themselves more drawn to an A.I. major,” said Asu Ozdaglar, the deputy dean of academics at the M.I.T. Schwarzman College of Computing. Students interested in applying A.I. in fields like biology and health care are also flocking to the new major, she added."

Thursday, November 27, 2025

Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say; The New York Times, November 25, 2025

The New York Times; Prosecutor Used Flawed A.I. to Keep a Man in Jail, His Lawyers Say

"On Friday, the lawyers were joined by a group of 22 legal and technology scholars who warned that the unchecked use of A.I. could lead to wrongful convictions. The group, which filed its own brief with the state Supreme Court, included Barry Scheck, a co-founder of the Innocence Project, which has helped to exonerate more than 250 people; Chesa Boudin, a former district attorney of San Francisco; and Katherine Judson, executive director of the Center for Integrity in Forensic Sciences, a nonprofit that seeks to improve the reliability of criminal prosecutions.

The problem of A.I.-generated errors in legal papers has burgeoned along with the popular use of tools like ChatGPT and Gemini, which can perform a wide range of tasks, including writing emails, term papers and legal briefs. Lawyers and even judges have been caught filing court papers that were rife with fake legal references and faulty arguments, leading to embarrassment and sometimes hefty fines.

The Kjoller case, though, is one of the first in which prosecutors, whose words carry great sway with judges and juries, have been accused of using A.I. without proper safeguards...

Lawyers are not prohibited from using A.I., but they are required to ensure that their briefs, however they are written, are accurate and faithful to the law. Today’s artificial intelligence tools are known to sometimes “hallucinate,” or make things up, especially when asked complex legal questions...

Westlaw executives said that their A.I. tool does not write legal briefs, because they believe A.I. is not yet capable of the complex reasoning needed to do so...

Damien Charlotin, a senior researcher at HEC Paris, maintains a database that includes more than 590 cases from around the world in which courts and tribunals have detected hallucinated content. More than half involved people who represented themselves in court. Two-thirds of the cases were in United States courts. Only one, an Israeli case, involved A.I. use by a prosecutor."

Saturday, November 15, 2025

Pope Leo XIV’s important warning on ethics of AI and new technology; The Fresno Bee, November 15, 2025

Andrew Fiala, The Fresno Bee; Pope Leo XIV’s important warning on ethics of AI and new technology

"Recently, Pope Leo XIV addressed a conference on artificial intelligence in Rome, where he emphasized the need for deeper consideration of the “ethical and spiritual weight” of new technologies...

This begins with the insight that human beings are tool-using animals. Tools extend and amplify our operational power, and they can also either enhance or undermine who we are and what we care about. 

Whether we are enhancing or undermining our humanity ought to be the focus of moral reflection on technology.

This is a crucial question in the AI-era. The AI-revolution should lead us to ask fundamental questions about the ethical and spiritual side of technological development. AI is already changing how we think about intellectual work, such as teaching and learning. Human beings are already interacting with artificial systems that provide medical, legal, psychological and even spiritual advice. Are we prepared for all of this morally, culturally and spiritually?...

At the dawn of the age of artificial intelligence, we need a corresponding new dawn of critical moral judgment. Now is the time for philosophers, theologians and ordinary citizens to think deeply about the philosophy of technology and the values expressed or embodied in our tools. 

It will be exciting to see what the wizards of Silicon Valley will come up with next. But wizardry without wisdom is dangerous."

Monday, October 27, 2025

AI can help authors beat writer’s block, says Bloomsbury chief; The Guardian, October 27, 2025

The Guardian; AI can help authors beat writer’s block, says Bloomsbury chief


[Kip Currier: These are interesting and unexpected comments by Nigel Newton, Bloomsbury publishing's founder and CEO. 

Bloomsbury is the publisher of my forthcoming book Ethics, Information, and Technology. In the interest of transparency, I'll note that I researched and wrote my book the "oldfangled way" and didn't use AI for any aspect of it, including brainstorming. Last year, during a check-in meeting with my editor and a conversation about the book's AI chapter, I happened to learn that Bloomsbury had a policy against authors using AI tools.

So it's noteworthy to see this publisher's shift on authors' use of AI tools.]


[Excerpt]

"Authors will come to rely on artificial intelligence to help them beat writer’s block, the boss of the book publisher Bloomsbury has said.

Nigel Newton, the founder and chief executive of the publisher behind the Harry Potter series, said the technology could support almost all creative arts, although it would not fully replace prominent writers.

“I think AI will probably help creativity, because it will enable the 8 billion people on the planet to get started on some creative area where they might have hesitated to take the first step,” he told the PA news agency...

Last week the publisher, which is headquartered in London and employs about 1,000 people, experienced a share rise of as much as 10% in a single day after it reported a 20% jump in revenue in its academic and professional division in the first half of its financial year, largely thanks to an AI licensing agreement.

However, revenues in its consumer division fell by about 20%, largely due to the absence of a new title from Maas."

Monday, October 20, 2025

The platform exposing exactly how much copyrighted art is used by AI tools; The Guardian, October 18, 2025

The Guardian; The platform exposing exactly how much copyrighted art is used by AI tools

"The US tech platform Vermillio tracks use of a client’s intellectual property online and claims it is possible to trace, approximately, the percentage to which an AI generated image has drawn on pre-existing copyrighted material."

Saturday, October 11, 2025

OpenAI’s Sora Is in Serious Trouble; Futurism, October 10, 2025

Futurism; OpenAI’s Sora Is in Serious Trouble

"The cat was already out of the bag, though, sparking what’s likely to be immense legal drama for OpenAI. On Monday, the Motion Picture Association, a US trade association that represents major film studios, released a scorching statement urging OpenAI to “take immediate and decisive action” to stop the app from infringing on copyrighted media.

Meanwhile, OpenAI appears to have come down hard on what kind of text prompts can be turned into AI slop on Sora, implementing sweeping new guardrails presumably meant to appease furious rightsholders and protect their intellectual property.

As a result, power users experienced major whiplash that’s tarnishing the launch’s image even among fans. It’s a lose-lose moment for OpenAI’s flashy new app — either aggravate rightsholders by allowing mass copyright infringement, or turn it into yet another mind-numbing screensaver-generating experience like Meta’s widely mocked Vibes.

“It’s official, Sora 2 is completely boring and useless with these copyright restrictions. Some videos should be considered fair use,” one Reddit user lamented.

Others accused OpenAI of abusing copyright to hype up its new app...

How OpenAI’s eyebrow-raising ask-for-forgiveness-later approach to copyright will play out in the long term remains to be seen. For one, the company may already be in hot water, as major Hollywood studios have already started suing over less."

Saturday, October 4, 2025

I’m a Screenwriter. Is It All Right if I Use A.I.?; The Ethicist, The New York Times, October 4, 2025

The Ethicist, The New York Times; I’m a Screenwriter. Is It All Right if I Use A.I.?

"I write for television, both series and movies. Much of my work is historical or fact-based, and I have found that researching with ChatGPT makes Googling feel like driving to the library, combing the card catalog, ordering books and waiting weeks for them to arrive. This new tool has been a game changer. Then I began feeding ChatGPT my scripts and asking for feedback. The notes on consistency, clarity and narrative build were extremely helpful. Recently I went one step further: I asked it to write a couple of scenes. In seconds, they appeared — quick paced, emotional, funny, driven by a propulsive heartbeat, with dialogue that sounded like real people talking. With a few tweaks, I could drop them straight into a screenplay. So what ethical line would I be crossing? Would it be plagiarism? Theft? Misrepresentation? I wonder what you think. — Name Withheld"

Sunday, September 28, 2025

Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers; The Hastings Center for Bioethics, September 25, 2025

The Hastings Center for Bioethics; Hastings Center Releases Medical AI Ethics Tool for Policymakers, Patients, and Providers

"As artificial intelligence rapidly transforms healthcare, The Hastings Center for Bioethics has released an interactive tool to help policymakers, patients and providers understand the ways that AI is being used in medicine—from making a diagnosis to evaluating insurance claims—and navigate the ethical questions that emerge along the way.

The new tool, a Patient’s Journey with Medical AI, follows an imaginary patient through five interactions with medical AI. It guides users through critical decision points in diagnostics, treatment, and communication, offering personalized insights into how algorithms might influence their care. 

Each decision point in the Patient’s Journey includes a summary of the ethical issues raised and multiple choice questions intended to stimulate thinking and discussion about particular uses of AI in medicine. Policy experts from across the political spectrum were invited to review the tool for accuracy and utility.

The Patient’s Journey is the latest in a set of resources developed through Hastings on the Hill, a project that translates bioethics research for use by policymakers—with an initial focus on medical AI. “This isn’t just about what AI can do — it’s about what it should do,” said Hastings Center President Vardit Ravitsky, who directs Hastings on the Hill. “Patients deserve to understand how technologies affect their health decisions, and policymakers can benefit from expert guidance as they seek to ensure that AI serves the public good.”

The Greenwall Foundation is supporting this initiative. Additional support comes from The Donaghue Foundation and the National Institutes of Health’s Bridge2AI initiative.

In addition to using Hastings on the Hill resources, policymakers, industry leaders, and others who shape medical AI policy and practice are invited to contact The Hastings Center with questions related to ethical issues they are encountering. Hastings Center scholars and fellows can provide expert nonpartisan analysis on urgent bioethics issues, such as algorithmic bias, patient privacy, data governance, and informed consent.

“Ethics should not be an afterthought,” says Ravitsky. “Concerns about biased health algorithms and opaque clinical decision tools have underscored the need for ethical oversight alongside technical innovation.”

“The speed of AI development has outpaced the ethical guardrails we need,” said Erin Williams, President and CEO of EDW Wisdom, LLC — the consultancy working with The Hastings Center. “Our role is to bridge that gap —ensuring that human dignity, equity, and trust are not casualties of technological progress.”

Explore Patient’s Journey with Medical AI. Learn more about Hastings on the Hill."

Monday, September 22, 2025

Librarians Are Being Asked to Find AI-Hallucinated Books; 404 Media, September 18, 2025

Claire Woodcock, 404 Media; Librarians Are Being Asked to Find AI-Hallucinated Books

"Reference librarian Eddie Kristan said lenders at the library where he works have been asking him to find books that don’t exist, without realizing they were hallucinated by AI, ever since the release of GPT-3.5 in late 2022. But the problem escalated over the summer after he fielded patron requests for the same fake book titles attributed to real authors—the consequence of an AI-generated summer reading list circulated in special editions of the Chicago Sun-Times and The Philadelphia Inquirer earlier this year. At the time, the freelancer who compiled the list told 404 Media he had used AI to produce it without fact-checking the outputs before syndication. 

“We had people coming into the library and asking for those authors,” Kristan told 404 Media. He’s receiving similar requests for other types of media that don’t exist because they’ve been hallucinated by other AI-powered features. “It’s really, really frustrating, and it’s really setting us back as far as the community’s info literacy.” 

AI tools are changing the nature of how patrons treat librarians, both online and IRL. Alison Macrina, executive director of Library Freedom Project, told 404 Media that early results from a recent survey of emerging trends in how AI tools are impacting libraries indicate that patrons are growing more trusting of their preferred generative AI tool or product, and of the veracity of the outputs they receive. She said librarians report being treated like robots over library reference chat, and patrons getting defensive about the veracity of recommendations they’ve received from an AI-powered chatbot. Essentially, more people trust their preferred LLM than their human librarian."