Showing posts with label AI developers. Show all posts

Sunday, June 15, 2025

AI chatbots need more books to learn from. These libraries are opening their stacks; AP, June 12, 2025

Matt O’Brien, AP; AI chatbots need more books to learn from. These libraries are opening their stacks

"Supported by “unrestricted gifts” from Microsoft and ChatGPT maker OpenAI, the Harvard-based Institutional Data Initiative is working with libraries and museums around the world on how to make their historic collections AI-ready in a way that also benefits the communities they serve.

“We’re trying to move some of the power from this current AI moment back to these institutions,” said Aristana Scourtas, who manages research at Harvard Law School’s Library Innovation Lab. “Librarians have always been the stewards of data and the stewards of information.”

Harvard’s newly released dataset, Institutional Books 1.0, contains more than 394 million scanned pages of paper. One of the earlier works is from the 1400s — a Korean painter’s handwritten thoughts about cultivating flowers and trees. The largest concentration of works is from the 19th century, on subjects such as literature, philosophy, law and agriculture, all of it meticulously preserved and organized by generations of librarians. 

It promises to be a boon for AI developers trying to improve the accuracy and reliability of their systems."

Thursday, June 5, 2025

Government AI copyright plan suffers fourth House of Lords defeat; BBC, June 2, 2025

Zoe Kleinman, BBC; Government AI copyright plan suffers fourth House of Lords defeat

"The argument is over how best to balance the demands of two huge industries: the tech and creative sectors. 

More specifically, it's about the fairest way to allow AI developers access to creative content in order to make better AI tools - without undermining the livelihoods of the people who make that content in the first place.

What's sparked it is the Data (Use and Access) Bill.

This proposed legislation was broadly expected to finish its long journey through parliament this week and sail off into the law books. 

Instead, it is currently stuck in limbo, ping-ponging between the House of Lords and the House of Commons.

A government consultation proposes AI developers should have access to all content unless its individual owners choose to opt out. 

But 242 members of the House of Lords disagree with the bill in its current form.

They think AI firms should be forced to disclose which copyrighted material they use to train their tools, with a view to licensing it."

Friday, December 27, 2024

The AI Boom May Be Too Good to Be True; Wall Street Journal, December 26, 2024

 Josh Harlan, Wall Street Journal; The AI Boom May Be Too Good to Be True

 "Investors rushing to capitalize on artificial intelligence have focused on the technology—the capabilities of new models, the potential of generative tools, and the scale of processing power to sustain it all. What too many ignore is the evolving legal structure surrounding the technology, which will ultimately shape the economics of AI. The core question is: Who controls the value that AI produces? The answer depends on whether AI companies must compensate rights holders for using their data to train AI models and whether AI creations can themselves enjoy copyright or patent protections.

The current landscape of AI law is rife with uncertainty...How these cases are decided will determine whether AI developers can harvest publicly available data or must license the content used to train their models."

Wednesday, October 16, 2024

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

Evan Robinson-Johnson, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

Wednesday, August 28, 2024

Controversial California AI regulation bill finds unlikely ally in Elon Musk; The Mercury News, August 28, 2024

The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk

"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.

Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.

The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."

Monday, July 1, 2024

Vatican conference ponders who really holds the power of AI; Religion News Service, June 27, 2024

Claire Giangravé, Religion News Service; Vatican conference ponders who really holds the power of AI

"The vice director general of Italy’s Agency for National Cybersecurity, Nunzia Ciardi, also warned at the conference of the influence held by leading AI developers.

“Artificial intelligence is made up of massive economic investments that only large superpowers can afford and through which they ensure a very important geopolitical dominance and access to the large amount of data that AI must process to produce outputs,” Ciardi said.

Participants agreed that international organizations must enforce stronger regulations for the use and advancement of AI technologies.

“You could say that we are colonized by AI, which is managed by select companies that brutally rack through our data,” she added.

“We need guardrails, because what is coming is a radical transformation that will change real and digital relations and require not only reflection but also regulation,” Benanti said.

The “Rome Call for AI Ethics,” a document signed by IBM, Microsoft, Cisco and U.N. Food and Agriculture Organization representatives, was promoted by the Vatican’s Academy for Life and lays out guidelines for promoting ethics, transparency and inclusivity in AI.

Other religious communities have also joined the “Rome Call,” including the Anglican Church and Jewish and Muslim representatives. On July 9, representatives from Eastern religions will gather for a Vatican-sponsored event to sign the “Rome Call” in Hiroshima, Japan. The location was decided to emphasize the dangerous consequences of technology when unchecked."

Wednesday, May 15, 2024

The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights; The National Law Review, May 13, 2024

  Danner Kline of Bradley Arant Boult Cummings LLP, The National Law Review; The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights

"As generative AI systems become increasingly sophisticated and widespread, concerns around the use of copyrighted works in their training data continue to intensify. The proposed Generative AI Copyright Disclosure Act of 2024 attempts to address this unease by introducing new transparency requirements for AI developers.

The Bill’s Purpose and Requirements

The primary goal of the bill is to ensure that copyright owners have visibility into whether their intellectual property is being used to train generative AI models. If enacted, the law would require companies to submit notices to the U.S. Copyright Office detailing the copyrighted works used in their AI training datasets. These notices would need to be filed within 30 days before or after the public release of a generative AI system.

The Copyright Office would then maintain a public database of these notices, allowing creators to search and see if their works have been included. The hope is that this transparency will help copyright holders make more informed decisions about licensing their IP and seeking compensation where appropriate."

Monday, May 21, 2018

How the Enlightenment Ends; The Atlantic, June 2018 Issue

Henry A. Kissinger, The Atlantic; How the Enlightenment Ends

 

"Heretofore confined to specific fields of activity, AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them. Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data...

The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions.

AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late."