Showing posts with label AI literacy.

Monday, December 15, 2025

Kinds of Intelligence | LJ Directors’ Summit 2025; Library Journal, December 2, 2025

Lisa Peet, Library Journal; Kinds of Intelligence | LJ Directors’ Summit 2025

"LJ’s 2025 Directors’ Summit looked at artificial—and very real—intelligence from multiple angles

If there was any doubt about what issues are on the minds of today’s library leaders, Library Journal’s 2025 Directors’ Summit, held October 16 and 17 at Denver Public Library (DPL), had some ready answers: AI and people.

Nick Tanzi hit both notes handily in his keynote, “Getting Your Public Library AI-Ready.” Tanzi, assistant director of South Huntington Public Library (SHPL), NY, and technology consultant at The-Digital-Librarian.com (and a 2025 LJ Mover & Shaker), began with a reminder of other at-the-time “disruptive” technologies, starting with a 1994 clip of Today Show anchors first encountering “@” and “.com.”

During most of this digital change, he noted, libraries had the technologies before many patrons and could lead the way. Now everyone has access to some form of AI, but it’s poorly understood. And access without understanding is a staff problem as well as a patron problem.

So, what does it mean for a library to be AI-ready? Start with policy and training, said Tanzi, and then translate that to public services, rather than the other way around. Library policies need to be AI-proofed, beginning by looking at what’s already in place and where it might be stressed by AI: policies governing collection development, reconsideration of materials, tool use, access control, the library’s editorial process, and confidential data. Staff are already using some form of AI at work—do they have organizational guidance?

Tanzi advised fostering AI literacy across the library. At SHPL, he formed an AI user group; it has no prerequisite for participation and staff are paid for their time. Members explore new tools, discuss best practices, complete “homework,” and share feedback, which also allows Tanzi to stress-test policies. It’s not a replacement for formal training, but helps him discover which tools work best in various departments and speeds up learning.

We need to demystify AI tools for staff and patrons, Tanzi noted, and teach ethics around them. Your ultimate goal is to create informed citizens; libraries can build community around AI education, partnering with the local school district, colleges, and government."

Sunday, October 12, 2025

Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference; South Bend Tribune, October 9, 2025

Rayleigh Deaton, South Bend Tribune; Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference

"The increasingly ubiquitous nature of artificial intelligence in today's world raises questions about how the technology should be approached and who should be making the decisions about its development and implementation.

To the Rev. Paolo Benanti, an associate professor of ethics of AI at LUISS University and the AI adviser to the Vatican, and Aarti Singh, a professor in Carnegie Mellon University's Machine Learning Department, ethical AI use begins when the technology is used to better humanity, and this is done by making AI equitable and inclusive.

Benanti and Singh were panelists during a session on Wednesday, Oct. 8, at the University of Notre Dame's inaugural R.I.S.E. (Responsibility, Inclusion, Safety and Ethics) AI Conference. Hosted by the university's Lucy Family Institute for Data & Society, the conference ran Oct. 6-8 and focused on how AI can be used to address multidisciplinary societal issues while upholding ethical standards...

And, Singh said, promoting public AI awareness is vital. She said this is done by introducing AI training as early as elementary school and by encouraging academics to develop the soft skills to communicate their AI research to laypeople — something they’re not always good at.

"There are many programs being started now that are encouraging from the student level, but of course also faculty, in academia, to go out there and talk," Singh said. "I think the importance of doing that now is really crucial, and we should step up.""

Wednesday, September 24, 2025

AI Influencers: Libraries Guiding AI Use; Library Journal, September 16, 2025

Matt Enis, Library Journal; AI Influencers: Libraries Guiding AI Use

"In addition to the field’s collective power, libraries can have a great deal of influence locally, says R. David Lankes, the Virginia and Charles Bowden Professor of Librarianship at the University of Texas at Austin and cohost of LJ’s Libraries Lead podcast.

“Right now, the place where librarians and libraries could have the most impact isn’t on trying to change OpenAI or Microsoft or Google; it’s really in looking at implementation policy,” Lankes says. For example, “on the public library side, many cities and states are adopting AI policies now, as we speak,” Lankes says. “Where I am in Austin, the city has more or less said, ‘go forth and use AI,’ and that has turned into a mandate for all of the city offices, which in this case includes the Austin Public Library” (APL). 

Rather than responding to that mandate by simply deciding how the library would use AI internally, APL created a professional development program to bring its librarians up to speed with the technology so that they can offer other city offices help with ways to use it, and advice on how to use it ethically and appropriately, Lankes explains.

“Cities and counties are wrestling with AI, and this is an absolutely perfect time for libraries to be part of that conversation,” Lankes says."

Monday, September 22, 2025

Librarians Are Being Asked to Find AI-Hallucinated Books; 404 Media, September 18, 2025

Claire Woodcock, 404 Media; Librarians Are Being Asked to Find AI-Hallucinated Books

"Reference librarian Eddie Kristan said lenders at the library where he works have been asking him to find books that don’t exist without realizing they were hallucinated by AI ever since the release of GPT-3.5 in late 2022. But the problem escalated over the summer after fielding patron requests for the same fake book titles from real authors—the consequences of an AI-generated summer reading list circulated in special editions of the Chicago Sun-Times and The Philadelphia Inquirer earlier this year. At the time, the freelancer told 404 Media he used AI to produce the list without fact checking outputs before syndication. 

“We had people coming into the library and asking for those authors,” Kristan told 404 Media. He’s receiving similar requests for other types of media that don’t exist because they’ve been hallucinated by other AI-powered features. “It’s really, really frustrating, and it’s really setting us back as far as the community’s info literacy.” 

AI tools are changing the nature of how patrons treat librarians, both online and IRL. Alison Macrina, executive director of Library Freedom Project, told 404 Media that early results from a recent survey of emerging trends in how AI tools are impacting libraries indicate that patrons are growing more trusting of their preferred generative AI tool or product and of the veracity of the outputs they receive. She said librarians report being treated like robots over library reference chat, and patrons getting defensive over the veracity of recommendations they’ve received from an AI-powered chatbot. Essentially, more people trust their preferred LLM over their human librarian."

Thursday, July 10, 2025

EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays; JD Supra, July 8, 2025

Mark Booth, Steven Farmer, Scott Morton, JD Supra; EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays

"The EU AI Act (AI Act), effective since February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on various actors, including providers, deployers, importers and manufacturers, and requires that organizations ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems. For a comprehensive overview of the AI Act, see our earlier client alert.

As of mid-2025, the implementation landscape is evolving. This update takes stock of where things stand, focusing on: (i) new guidance on the AI literacy obligations for providers and deployers; (ii) the status of the developing General-Purpose AI Code of Practice and its implications; and (iii) the prospect of delayed enforcement of some of the AI Act’s key provisions."

Wednesday, October 2, 2024

Scottish university to host AI ethics conference; Holyrood, October 2, 2024

 Holyrood; Scottish university to host AI ethics conference

"The University of Glasgow will gather leading figures from the artificial intelligence (AI) community for a three-day conference this week in a bid to address the ethical challenges posed by the technology.

Starting tomorrow, the Lovelace-Hodgkin Symposium will see academics, researchers, and policymakers discuss how to make AI a tool for “positive change” across higher education.

The event will inform the development of a new online course on AI ethics, which will boost ethical literacy “across higher education and beyond”, the university said...

During the symposium, speakers from the university’s research and student communities will present and participate in workshops alongside representatives to build the new course.

The first day of the event will examine the current state of AI, focusing on higher education and the use of AI in research and teaching.

On Thursday, the conference will discuss how to tackle inequality and bias in AI, featuring discussions on AI and race, gender, the environment, children’s rights, and how AI is communicated and consumed.

The final day will involve participants creating an ethical framework for inclusive AI, where they will outline a series of actionable steps and priorities for academic institutions, which will be used to underpin the online course."

Tuesday, May 14, 2024

Ethics and equity in the age of AI; Vanderbilt University Research News, May 7, 2024

Jenna Somers, Vanderbilt University Research News; Ethics and equity in the age of AI

"Throughout the conversation, the role of human intellect in responsible AI use emerged as an essential theme. Because generative AI is trained on a huge body of text on the internet and designed to detect and repeat patterns of language use, it runs the risk of perpetuating societal biases and stereotypes. To mitigate these effects, the panelists emphasized the need to be intentional, critical, and evaluative when using AI, whether users are experts designing and training models at top-tier companies or college students completing an AI-based class assignment.

“There is a lot of work to do around AI literacy, and we can think about this in two parts,” Wise said."