Thursday, March 7, 2024

Introducing CopyrightCatcher, the first Copyright Detection API for LLMs; Patronus AI, March 6, 2024

Patronus AI; Introducing CopyrightCatcher, the first Copyright Detection API for LLMs

"Managing risks from unintended copyright infringement in LLM outputs should be a central focus for companies deploying LLMs in production.

  • On an adversarial copyright test designed by Patronus AI researchers, we found that state-of-the-art LLMs generate copyrighted content at an alarmingly high rate 😱
  • OpenAI’s GPT-4 produced copyrighted content on 44% of the prompts.
  • Mistral’s Mixtral-8x7B-Instruct-v0.1 produced copyrighted content on 22% of the prompts.
  • Anthropic’s Claude-2.1 produced copyrighted content on 8% of the prompts.
  • Meta’s Llama-2-70b-chat produced copyrighted content on 10% of the prompts.
  • Check out CopyrightCatcher, our solution to detect potential copyright violations in LLMs. Here’s the public demo, with open source model inference powered by Databricks Foundation Model APIs. 🔥

LLM training data often contains copyrighted works, and it is pretty easy to get an LLM to generate exact reproductions from these texts. It is critical to catch these reproductions, since they pose significant legal and reputational risks for companies that build and use LLMs in production systems. OpenAI, Anthropic, and Microsoft have all faced copyright lawsuits on LLM generations from authors, music publishers, and more recently, the New York Times.

To check whether LLMs respond to your prompts with copyrighted text, you can use CopyrightCatcher. It detects when LLMs generate exact reproductions of content from text sources like books, and highlights any copyrighted text in LLM outputs. Check out our public CopyrightCatcher demo here!
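Patronus AI has not published CopyrightCatcher's internals, but the core idea it describes (flagging verbatim reproductions of known text in model output) can be sketched with a simple word-level n-gram overlap check. The sketch below is a minimal illustration of that general technique; the function names, the n-gram length, and the threshold are hypothetical assumptions, not Patronus AI's implementation.

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def reproduction_score(output: str, source: str, n: int = 8) -> float:
    """Fraction of the output's n-grams that appear verbatim in the source.

    A high score suggests the output reproduces the source text
    rather than paraphrasing it.
    """
    out_grams = ngrams(output, n)
    if not out_grams:
        return 0.0
    return len(out_grams & ngrams(source, n)) / len(out_grams)


# Hypothetical usage: compare an LLM output against a copyrighted text.
book_text = ("It was the best of times, it was the worst of times, "
             "it was the age of wisdom, it was the age of foolishness")
llm_output = "It was the best of times, it was the worst of times, it was the age of wisdom"

if reproduction_score(llm_output, book_text) > 0.5:
    print("Potential verbatim reproduction detected")
```

A production detector would also need fuzzy matching (to catch near-verbatim output with small edits) and an index over a large text corpus, but exact n-gram overlap is the usual starting point.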

Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst; CNBC, March 6, 2024

 Hayden Field, CNBC; Researchers tested leading AI models for copyright infringement using popular books, and GPT-4 performed worst

"The company, founded by ex-Meta researchers, specializes in evaluation and testing for large language models — the technology behind generative AI products.

Alongside the release of its new tool, CopyrightCatcher, Patronus AI released results of an adversarial test meant to showcase how often four leading AI models respond to user queries using copyrighted text.

The four models it tested were OpenAI’s GPT-4, Anthropic’s Claude 2, Meta’s Llama 2 and Mistral AI’s Mixtral.

“We pretty much found copyrighted content across the board, across all models that we evaluated, whether it’s open source or closed source,” Rebecca Qian, Patronus AI’s cofounder and CTO, who previously worked on responsible AI research at Meta, told CNBC in an interview.

Qian added, “Perhaps what was surprising is that we found that OpenAI’s GPT-4, which is arguably the most powerful model that’s being used by a lot of companies and also individual developers, produced copyrighted content on 44% of prompts that we constructed.”"

Public Symposium on AI and IP; United States Patent and Trademark Office (USPTO), Wednesday, March 27, 2024 10 AM - 3 PM PT/1 PM - 6 PM ET

 United States Patent and Trademark Office (USPTO); Public Symposium on AI and IP

"The United States Patent and Trademark Office (USPTO) Artificial Intelligence (AI) and Emerging Technologies (ET) Partnership will hold a public symposium on intellectual property (IP) and AI. The event will take place virtually and in-person at Loyola Law School, Loyola Marymount University, in Los Angeles, California, on March 27, from 10 a.m. to 3 p.m. PT. 

The symposium will facilitate the USPTO’s efforts to implement its obligations under the President’s Executive Order (E.O.) 14110 “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The event will include representation from the Copyright Office, build on previous AI/Emerging Technologies (ET) partnership events, and feature panel discussions by experts in the field of patent, trademark, and copyright law that focus on:

  1. A comparison of copyright and patent law approaches to the type and level of human contribution needed to satisfy authorship and inventorship requirements;
  2. Ongoing copyright litigation involving generative AI; and 
  3. A discussion of laws and policy considerations surrounding name, image, and likeness (NIL) issues, including the intersection of NIL and generative AI.

This event is free and open to the public, but in-person attendance is limited, so register early"

Monday, March 4, 2024

Beaufort, South Carolina, schools return most books to shelves after attempt to ban 97; CBS News, March 3, 2024

Scott Pelley, CBS; Beaufort, South Carolina, schools return most books to shelves after attempt to ban 97

"Ruth-Naomi James: I'm a combat veteran, right? There's no way I went to Iraq thinking that when I moved back home, I would have to do this to make sure that the freedom that we fight for in this country is taken out of the hands of students and parents.

The final votes came this past December. Five books were judged too graphic in sex or violence. But 92 returned to the schools. Dick Geier says this lesson reaches beyond the classroom.

Dick Geier: Diversity brings tolerance. The more you understand what other people think and realize that what they say is important, but who they are, what their story, what their background is. The more you know that, the more you see the power of diversity. And then, be kind, and be understanding. And don't make judgments because you haven't lived their story. They have.

In the city that's lived a story of letters and learning, one book that was banned and restored was "The Fixer," a novel of antisemitism that won the Pulitzer prize. In its pages, the book's hero expresses this opinion, "There are no wrong books." "What's wrong is the fear of them."


Thursday, February 29, 2024

The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement; The Guardian, February 28, 2024

The Guardian; The Intercept, Raw Story and AlterNet sue OpenAI for copyright infringement

"OpenAI and Microsoft are facing a fresh round of lawsuits from news publishers over allegations that their generative artificial intelligence products violated copyright laws and illegally trained by using journalists’ work. Three progressive US outlets – the Intercept, Raw Story and AlterNet – filed suits in Manhattan federal court on Wednesday, demanding compensation from the tech companies.

The news outlets claim that the companies in effect plagiarized copyright-protected articles to develop and operate ChatGPT, which has become OpenAI’s most prominent generative AI tool. They allege that ChatGPT was trained not to respect copyright, ignores proper attribution and fails to notify users when the service’s answers are generated using journalists’ protected work."

How scientists are using facial-recognition AI to track humpback whales; NPR, February 29, 2024

NPR; How scientists are using facial-recognition AI to track humpback whales

"Photographs are key for counting whales. As they dive deep, humpbacks raise their tails out of the water, revealing markings and patterns unique to each individual. Scientists typically identify whales photo by photo, matching the tails in a painstaking process.

Cheeseman figured that technology could do that more quickly. He started Happy Whale, which uses artificial intelligence-powered image recognition to identify whales. The project pulled together about 200,000 photos of humpback whales. Many came from scientists who had built large image catalogs over the years. Others came from whale watching groups and citizen scientists, since the website is designed to share the identity of a whale and where it's been seen."

Google CEO Pichai says Gemini's AI image results "offended our users"; NPR, February 28, 2024

NPR; Google CEO Pichai says Gemini's AI image results "offended our users"

"Gemini, which was previously named Bard, is also an AI chatbot, similar to OpenAI's hit service ChatGPT. 

The text-generating capabilities of Gemini also came under scrutiny after several outlandish responses went viral online...

In his note to employees at Google, Pichai wrote that when Gemini is re-released to the public, he hopes the service is in better shape. 

"No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes," Pichai wrote."

Tuesday, February 20, 2024

He Hunts Sloppy Scientists. He’s Finding Lots of Prey.; The New York Times, February 2, 2024

Matt Richtel, The New York Times; He Hunts Sloppy Scientists. He’s Finding Lots of Prey.

"Sholto David, 32, has a Ph.D. in cellular and molecular biology from Newcastle University in England. He is also developing an expertise in spotting errors in scientific papers. Most recently, and notably, he discovered flawed or manipulated data in studies conducted by top executives at the Harvard-affiliated Dana-Farber Cancer Institute. The institute said that it was requesting retraction of six manuscripts and had found 31 other manuscripts that required corrections.

From his home in Wales, Dr. David scours new research publications for images that are mislabeled and manipulated, and he regularly finds mistakes, or malfeasance, in some of the most prominent scientific journals. Accuracy is vital, as peer-reviewed papers often provide the evidence for drug trials or further lines of research. Dr. David said that the frequency of such errors suggests an underlying problem for science.

His interview with The New York Times has been edited and condensed...

Does this call into question the peer-review process?

I think that’s something that people need to think about. These are top scientific journals with errors that escaped peer review. Maybe the peer reviewers are looking for other things. Maybe they like to look at the methods or the conclusions more carefully than the results. But, yeah, it does make me think that people should question how effective the peer-review process has been."

Monday, February 19, 2024

The Most Important Writing Exercise I’ve Ever Assigned; The New York Times, February 18, 2024

 Rachel Kadish, The New York Times; The Most Important Writing Exercise I’ve Ever Assigned

"Unflinching empathy, which is the muscle the lesson is designed to exercise, is a prerequisite for literature strong enough to wrestle with the real world. On the page it allows us to spot signs of humanity; off the page it can teach us to start a conversation with the strangest of strangers, to thrive alongside difference. It can even affect those life or death choices we make instinctively in a crisis. This kind of empathy has nothing to do with being nice — and it’s not for the faint of heart."

MAGA’s Violent Threats Are Warping Life in America; The New York Times, February 18, 2024

 David French, The New York Times; MAGA’s Violent Threats Are Warping Life in America

"So we called the local sheriff, shared the threat, and asked if the department could send someone to check our house.

Minutes later, a young deputy called to tell me all was quiet at our home. When I asked if he would mind checking back frequently, he said he’d stay in front of our house all night. Then he asked, “Why did you get this threat?”

I hesitated before I told him. Our community is so MAGA that I had a pang of concern about his response. “I’m a columnist,” I said, “and we’ve had lots of threats ever since I wrote against Donald Trump.”

The deputy paused for a moment. “I’m a vet,” he said, “and I volunteered to serve because I believe in our Constitution. I believe in free speech.” And then he said words I’ll never forget: “You keep speaking, and I’ll stand guard.”

I didn’t know that deputy’s politics and I didn’t need to. When I heard his words, I thought, that’s it. That’s the way through. Sometimes we are called to speak. Sometimes we are called to stand guard. All the time we can at least comfort those under threat, telling them with words and deeds that they are not alone. If we do that, we can persevere. Otherwise, the fear will be too much for good people to bear."

Arrested for Leaving Flowers, Navalny Mourners Fear Worse to Come; The New York Times, February 18, 2024

Valerie Hopkins, The New York Times; Arrested for Leaving Flowers, Navalny Mourners Fear Worse to Come

"As thousands of Russians across the country tried to give voice to their grief for Mr. Navalny, who died in a remote Arctic penal colony on Friday, Russian police officers cracked down, temporarily detaining hundreds and placing more than two dozen in jail...

“Those who are detaining people are afraid of any opinion that isn’t connected to propaganda, to the pervading ideology,” said Lena, 31, who brought a sticker to the Solovetsky Stone, a monument to victims of political repression in the Soviet Union. “Don’t give up,” read the sticker — part of a message Mr. Navalny once recorded in case of his death."

Sunday, February 18, 2024

IT body proposes that AI pros get leashed and licensed to uphold ethics; The Register, February 15, 2024

Paul Kunert, The Register; IT body proposes that AI pros get leashed and licensed to uphold ethics

"Creating a register of licensed AI professionals to uphold ethical standards and securing whistleblowing channels to call out bad management are two policies that could prevent a Post Office-style scandal.

So says industry body BCS – formerly the British Computer Society – which reckons licenses based on an independent framework of ethics would promote transparency among software engineers and their bosses.

"We have a register of doctors who can be struck off," said Rashik Parmar MBE, CEO at BCS. "AI professionals already have a big role in our life chances, so why shouldn't they be licensed and registered too?"...

The importance of AI ethics was amplified by the Post Office scandal, says the BCS boss, "where computer generated evidence was used by non-IT specialists to prosecute sub postmasters with tragic results."

For anyone not aware of the outrageous wrongdoing committed by the Post Office, it bought the bug-ridden Horizon accounting system in 1999 from ICL, a company that was subsequently bought by Fujitsu. Hundreds of local Post Office branch managers were subsequently wrongfully convicted of fraud when Horizon was to blame."

Saturday, February 17, 2024

The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission; The Conversation, February 13, 2024

Senior Lecturer, Nottingham Law School, Nottingham Trent University, The Conversation; The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission

"The lawsuit also presents a novel argument – not advanced by other, similar cases – that’s related to something called “hallucinations”, where AI systems generate false or misleading information but present it as fact. This argument could in fact be one of the most potent in the case.

The NYT case in particular raises three interesting takes on the usual approach. First, that due to their reputation for trustworthy news and information, NYT content has enhanced value and desirability as training data for use in AI. 

Second, that due to its paywall, the reproduction of articles on request is commercially damaging. Third, that ChatGPT “hallucinations” are causing reputational damage to the New York Times through, effectively, false attribution. 

This is not just another generative AI copyright dispute. The first argument presented by the NYT is that the training data used by OpenAI is protected by copyright, and so they claim the training phase of ChatGPT infringed copyright. We have seen this type of argument run before in other disputes."

A new documentary shows the impact of book bans in Florida public schools on the kids; NPR, November 25, 2023

NPR; A new documentary shows the impact of book bans in Florida public schools on the kids

"In her directorial debut, Sheila Nevins' chronicles the impact of book bans in Florida public schools. She tells NPR's Scott Simon what inspired her to profile those most affected — the students...

GRACE LINN: My husband, Robert Nichol (ph), was killed in action in World War II, defending our democracy, constitution and freedoms. One of the freedoms that the Nazis crushed was the freedom to read the books that they banned.

NEVINS: And I thought, holy [expletive], this woman is out there doing something, and I'm doing nothing. And I know I'm only in my 80s, for heaven's sake. And here's this woman fighting for young people to be able to read the books that she read and I read and possibly you read, Scott, that in many ways change our lives and make us know about the world we live in. And I thought, I've got to grab her. I've got to get her. And I've got to get some of these kids who've lost the books or who have been deprived of the books to read them and to see how they feel about what they're missing.

SIMON: Some of the books that are mentioned in the course of the film that have been banned include "Slaughterhouse-Five," "Maus," "The Kite Runner," "The Life Of Rosa Parks," "The Handmaid's Tale." I can't come up with a better question than why?

NEVINS: Interesting, isn't it? Why would you deprive children of this information? If you want them to grow up to be like yourself, and yourself has a limited worldview - or at least the worldview that you believe is the worldview they should have - then you take out anything that you would find as questionable - Planned Parenthood, race, religious problems, difficulties. You know, you would simply want to make your child not aware of all these things that make the world a sort of wondrous, difficult, complex and often painful world that we all live in. I'm sort of quoting the kids, which is really odd. How can you deprive me - I'm 12 or 14 or 15 - of information?"


Friday, February 16, 2024

5 Presidential Libraries That Offer Culture, History and ‘Labs of Democracy’; The New York Times, February 13, 2024

Lauren Sloss, The New York Times; 5 Presidential Libraries That Offer Culture, History and ‘Labs of Democracy’

"As repositories of valuable historical documents and other records, U.S. presidential libraries have long been important destinations for scholars. But you don’t have to be an academic or even a history buff to appreciate these destinations, as many increasingly offer museums, special exhibitions and unique programming — ranging from interactive situation room experiences to musical performances — to the general public.

The first library was established by Franklin D. Roosevelt and opened to the public in 1941. Every administration since has created one of its own. (President Hoover, liking what he saw of F.D.R.’s project, established his own retroactively, in 1962.) Fifteen libraries are managed by the Office of Presidential Libraries, a part of the National Archives and Records Administration — the Presidential Libraries Act, passed in 1955, established the system of privately built and federally maintained institutions — and 13 are currently open to visitors. There are additional museums, historic monuments and sites dedicated to other presidents, like the James Garfield National Historic Site in Mentor, Ohio, and some have archival components, like the Abraham Lincoln Presidential Library and Museum in Springfield, Ill.

“President Reagan called the libraries labs of democracy because they explain how decisions are made and how policies are executed,” said Colleen Shogan, the archivist of the United States. “They give us the opportunity to learn about American democracy, and how the government functions.”

With Presidents’ Day fast approaching, consider planning a visit to a presidential library. Here are five to start."

How to Think About Remedies in the Generative AI Copyright Cases; LawFare, February 15, 2024

  Pamela Samuelson, LawFare; How to Think About Remedies in the Generative AI Copyright Cases

"So far, commentators have paid virtually no attention to the remedies being sought in the generative AI copyright complaints. This piece shines a light on them."

A Columbia Surgeon’s Study Was Pulled. He Kept Publishing Flawed Data.; The New York Times, February 16, 2024

 Benjamin Mueller, The New York Times; A Columbia Surgeon’s Study Was Pulled. He Kept Publishing Flawed Data.

"Problems with the study were severe enough that its publisher, after finding that the paper violated ethics guidelines, formally withdrew it within a few months of its publication in 2021. The study was then wiped from the internet, leaving behind a barren web page that said nothing about the reasons for its removal.

As it turned out, the flawed study was part of a pattern. Since 2008, two of its authors — Dr. Sam S. Yoon, chief of a cancer surgery division at Columbia University’s medical center, and a more junior cancer biologist — have collaborated with a rotating cast of researchers on a combined 26 articles that a British scientific sleuth has publicly flagged for containing suspect data. A medical journal retracted one of them this month after inquiries from The New York Times."

From ethics to outsmarting Chat GPT, state unveils resource for AI in Ohio education; Cleveland.com, February 15, 2024

Cleveland.com; From ethics to outsmarting Chat GPT, state unveils resource for AI in Ohio education

"The state released a guide Thursday to help schools and parents navigate generative artificial intelligence in an ethical manner.

“When you use the term AI, I know in some people’s minds, it can sound scary,” said Lt. Jon Husted, whose InnovateOhio office worked with private sector organizations to develop the guide...

Every technology that’s come into society has been like that.”...

But AI is the wave of the future, and Husted said it’s important that students are exposed to it.

The AI toolkit is not mandatory but can be used as a resource for educators and families.

It doesn’t include many prescriptive actions for how to begin teaching and using AI. Rather, it contains sections for parents, teachers and school districts where they can find dozens of sample lessons and discussions about ethics, how to develop policies to keep students safe, and other topics.

For instance, teachers can find a template letter that they can send to school district officials to communicate how they’re using AI...

“Before you use AI in the classroom you will need a plan for student privacy, data security, ethics and many other things,” Husted said. “More is needed than just a fun tool in the classroom.”"

Thursday, February 15, 2024

NIST Researchers Suggest Historical Precedent for Ethical AI Research; NIST, February 15, 2024

NIST; NIST Researchers Suggest Historical Precedent for Ethical AI Research

"If we train artificial intelligence (AI) systems on biased data, they can in turn make biased judgments that affect hiring decisions, loan applications and welfare benefits — to name just a few real-world implications. With this fast-developing technology potentially causing life-changing consequences, how can we make sure that humans train AI systems on data that reflects sound ethical principles? 

A multidisciplinary team of researchers at the National Institute of Standards and Technology (NIST) is suggesting that we already have a workable answer to this question: We should apply the same basic principles that scientists have used for decades to safeguard human subjects research. These three principles — summarized as “respect for persons, beneficence and justice” — are the core ideas of 1979’s watershed Belmont Report, a document that has influenced U.S. government policy on conducting research on human subjects.

The team has published its work in the February issue of IEEE’s Computer magazine, a peer-reviewed journal. While the paper is the authors’ own work and is not official NIST guidance, it dovetails with NIST’s larger effort to support the development of trustworthy and responsible AI.

“We looked at existing principles of human subjects research and explored how they could apply to AI,” said Kristen Greene, a NIST social scientist and one of the paper’s authors. “There’s no need to reinvent the wheel. We can apply an established paradigm to make sure we are being transparent with research participants, as their data may be used to train AI.”

The Belmont Report arose from an effort to respond to unethical research studies, such as the Tuskegee syphilis study, involving human subjects. In 1974, the U.S. created the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, and it identified the basic ethical principles for protecting people in research studies. A U.S. federal regulation later codified these principles in 1991’s Common Rule, which requires that researchers get informed consent from research participants. Adopted by many federal departments and agencies, the Common Rule was revised in 2017 to take into account changes and developments in research."

Monday, February 12, 2024

AI and inventorship guidance: Incentivizing human ingenuity and investment in AI-assisted inventions; United States Patent and Trademark Office (USPTO), February 12, 2024

Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of the USPTO, United States Patent and Trademark Office (USPTO); Director's Blog: the latest from USPTO leadership

AI and inventorship guidance: Incentivizing human ingenuity and investment in AI-assisted inventions

"Today, based on the exceptional public feedback we’ve received, we announced our Inventorship Guidance for AI-Assisted Inventions in the Federal Register – the first of these directives. The guidance, which is effective on February 13, 2024, provides instructions to examiners and stakeholders on how to determine whether the human contribution to an innovation is significant enough to qualify for a patent when AI also contributed. The guidance embraces the use of AI in innovation and provides that AI-assisted inventions are not categorically unpatentable. The guidance instructs examiners on how to determine the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems. Additionally, we’ve posted specific examples of hypothetical situations and how the guidance would apply to those situations to further assist our examiners and applicants in their understanding."

Inventorship guidance for AI-assisted inventions webinar; United States Patent and Trademark Office (USPTO), March 5, 2024 1 PM - 2 PM ET

United States Patent and Trademark Office (USPTO); Inventorship guidance for AI-assisted inventions webinar

"The United States Patent and Trademark Office (USPTO) plays an important role in incentivizing and protecting innovation, including innovation enabled by artificial intelligence (AI), to ensure continued U.S. leadership in AI and other emerging technologies (ET).

The USPTO announced Inventorship Guidance for AI-Assisted Inventions in the Federal Register. This guidance is pursuant to President Biden's Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023), with provisions addressing IP equities. The guidance, which is effective on February 13, 2024, provides instructions to USPTO personnel and stakeholders on determining the correct inventor(s) to be named in a patent or patent application for inventions created by humans with the assistance of one or more AI systems.

The USPTO will host a webinar on Inventorship Guidance for AI-Assisted Inventions on Tuesday, March 5, from 1-2 p.m. EST. USPTO personnel will provide an overview of the guidance and answer stakeholder questions relating to the guidance.

This event is free and open to the public, but virtual space is limited, so please register early."

University Librarians See Urgent Need for AI Ethics; Inside Higher Ed, January 17, 2024

  Lauren Coffey, Inside Higher Ed; University Librarians See Urgent Need for AI Ethics

"Nearly three-quarters of university librarians say there’s an urgent need to address artificial intelligence’s ethical and privacy concerns, a survey finds.

Roughly half the librarians surveyed said they had a “moderate” understanding of AI concepts and principles, according to the study released Friday. About one in five said they had a slightly below moderate understanding, and roughly the same amount had a slightly above moderate understanding. Only 3 percent of respondents said they had a “very high” understanding.

The study, conducted in May 2023 by Leo Lo, president-elect of the Association of College and Research Libraries, had 605 respondents who completed the survey. Of those, 45 percent worked in research institutions and 30 percent in institutions with undergraduate and graduate programming."

On Copyright, Creativity, and Compensation; Reason, February 12, 2024

Reason; On Copyright, Creativity, and Compensation

"Some of you may have seen the article by David Segal in the Sunday NY Times several weeks ago [available here] about a rather sordid copyright fracas in which I have been embroiled over the past few months...

What to make of all this? I am not oblivious to the irony of being confronted with this problem after having spent 30 years or so, as a lawyer and law professor, reflecting on and writing about the many mysteries of copyright policy and copyright law in the Internet Age.

Here are a few things that strike me as interesting (and possibly important) in this episode."

Using AI Responsibly; American Libraries, January 21, 2024

Diana Panuncial, American Libraries; Using AI Responsibly

"Navigating misinformation and weighing ethical and privacy issues in artificial intelligence (AI) were top of mind for the panelists at “AI and Libraries: A Discussion on the Future,” a January 21 session at the American Library Association’s 2024 LibLearnX Conference in Baltimore. Flowers was joined by Virginia Cononie, assistant librarian and coordinator of research at University of South Carolina Upstate in Spartanburg; Dray MacFarlane, cofounder of Tasio, an AI consulting company; and Juan Rubio, digital media learning program manager for Seattle Public Library (SPL). 

Rubio, who used AI to create a tool to help teens at SPL reflect on their mental health and well-being, said there is excitement behind the technology and how it can be harnessed, but there should also be efforts to educate patrons on how to use it responsibly. 

“I think ethical use of AI comes with creating ethical people,” he said, adding that SPL has been thinking about implementing guidelines for using AI. “Be very aware of your positionality [as librarians], because I think we are in a place of privilege—not necessarily of money or power, but of knowledge.”"

Friday, February 9, 2024

The Friar Who Became the Vatican’s Go-To Guy on A.I.; The New York Times, February 9, 2024

 Jason Horowitz, The New York Times; The Friar Who Became the Vatican’s Go-To Guy on A.I.

"There is a lot is going on for Father Benanti, who, as both the Vatican’s and the Italian government’s go-to artificial intelligence ethicist, spends his days thinking about the Holy Ghost and the ghosts in the machines.

In recent weeks, the ethics professor, ordained priest and self-proclaimed geek, has joined Bill Gates at a meeting with Prime Minister Giorgia Meloni, presided over a commission seeking to save Italian media from ChatGPT bylines and general A.I. oblivion, and met with Vatican officials to further Pope Francis’s aim of protecting the vulnerable from the coming technological storm."

‘My Heart Sank’: In Maine, a Challenge to a Book, and to a Town’s Self-Image; The New York Times, February 3, 2024

Elizabeth Williamson, The New York Times; ‘My Heart Sank’: In Maine, a Challenge to a Book, and to a Town’s Self-Image

"Mr. Boulet appealed to the American Library Association for a public letter of support, which it offers to libraries undergoing censorship efforts. “They ghosted me,” he said.

Asked about the letter, Deborah Caldwell-Stone, director of the A.L.A.’s Office for Intellectual Freedom, said Mr. Boulet’s request had generated internal debate, and delay.

“Our position on the book is, it should remain in the collection; it is beneath us to adopt the tools of the censors,” she said in an interview. “We need to support intellectual freedom in all its aspects, in order to claim that high ground.” Months after Mr. Boulet requested the letter, Ms. Caldwell-Stone saw him at a conference and apologized...

Before the controversy, “I hadn’t really given intellectual freedom as much thought as I should have,” Mr. Boulet said. His conclusion, he said, is that “intellectual freedom or the freedom of speech isn’t there just to protect ideas that we like.”"

Wednesday, February 7, 2024

EU countries strike deal on landmark AI rulebook; Politico, February 2, 2024

Gian Volpicelli, Politico; EU countries strike deal on landmark AI rulebook

"European Union member countries on Friday unanimously reached a deal on the bloc’s Artificial Intelligence Act, overcoming last-minute fears that the rulebook would stifle European innovation.

EU deputy ambassadors green-lighted the final compromise text, hashed out following lengthy negotiations between representatives of the Council, members of the European Parliament and European Commission officials...

Over the past few weeks, the bloc’s top economies Germany and France, alongside Austria, hinted that they might oppose the text in Friday’s vote...

Eventually, the matter was resolved through the EU’s familiar blend of PR offensive and diplomatic maneuvering. The Commission ramped up the pressure by announcing a splashy package of pro-innovation measures targeting the AI sector, and in one fell swoop created the EU’s Artificial Intelligence Office — a body tasked with enforcing the AI Act...

A spokesperson for German Digital Minister Volker Wissing, the foremost AI Act skeptic within Germany’s coalition government, told POLITICO: "We asked the EU Commission to clarify that the AI Act does not apply to the use of AI in medical devices."

A statement from the European Commission, circulated among EU diplomats ahead of the vote and seen by POLITICO, reveals plans to set up an “expert group” comprising EU member countries’ authorities. The group’s function will be to “advise and assist” the Commission in applying and implementing the AI Act...

The AI Act still needs the formal approval of the European Parliament. The text is slated to get rubber-stamped at the committee level in two weeks, with a plenary vote expected in April."

Act now on AI before it’s too late, says UNESCO’s AI lead; Fast Company, February 6, 2024

Chris Stokel-Walker, Fast Company; Act now on AI before it’s too late, says UNESCO’s AI lead

"Starting today, delegates are gathering in Slovenia at the second Global Forum on the Ethics of AI, organized by UNESCO, the United Nations’ educational, scientific, and cultural arm. The meeting is aimed at broadening the conversation around AI risks and the need to consider AI’s impacts beyond those discussed by first-world countries and business leaders.

Ahead of the conference, Gabriela Ramos, assistant director-general for social and human sciences at UNESCO, spoke with Fast Company...

Countries want to learn from each other. Ethics have become very important. Now there’s not a single conversation I go to that is not at some point referring to ethics—which was not the case one year ago...

Tech companies have previously said they can regulate themselves. Do you think they can with AI?

Let me just ask you something: Which sector has been regulating itself in life? Give me a break."

Tuesday, February 6, 2024

‘The situation has become appalling’: fake scientific papers push research credibility to crisis point; The Guardian, February 3, 2024

The Guardian; ‘The situation has become appalling’: fake scientific papers push research credibility to crisis point

"Tens of thousands of bogus research papers are being published in journals in an international scandal that is worsening every year, scientists have warned. Medical research is being compromised, drug development hindered and promising academic research jeopardised thanks to a global wave of sham science that is sweeping laboratories and universities.

Last year the annual number of papers retracted by research journals topped 10,000 for the first time. Most analysts believe the figure is only the tip of an iceberg of scientific fraud."

The Challenges and Benefits of Generative AI in Health Care; Harvard Business Review, January 17, 2024

Harvard Business Review, Azeem Azhar's Exponential View Season 6, Episode 58; The Challenges and Benefits of Generative AI in Health Care

"Artificial Intelligence is on every business leader’s agenda. How do we make sense of the fast-moving new developments in AI over the past year? Azeem Azhar returns to bring clarity to leaders who face a complicated information landscape.

Generative AI has a lot to offer health care professionals and medical scientists. This week, Azeem speaks with renowned cardiologist, scientist, and author Eric Topol about the change he’s observed among his colleagues in the last two years, as generative AI developments have accelerated in medicine.

They discuss:

  • The challenges and benefits of AI in health care.
  • The pros and cons of different open-source and closed-source models for health care use.
  • The medical technology that has been even more transformative than AI in the past year."