Showing posts with label AI developers.

Wednesday, October 16, 2024

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

Evan Robinson-Johnson, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

Wednesday, August 28, 2024

Controversial California AI regulation bill finds unlikely ally in Elon Musk; The Mercury News, August 28, 2024

The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk

"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.

Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.

The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."

Monday, July 1, 2024

Vatican conference ponders who really holds the power of AI; Religion News Service, June 27, 2024

Claire Giangravé, Religion News Service; Vatican conference ponders who really holds the power of AI

"The vice director general of Italy’s Agency for National Cybersecurity, Nunzia Ciardi, also warned at the conference of the influence held by leading AI developers.

“Artificial intelligence is made up of massive economic investments that only large superpowers can afford and through which they ensure a very important geopolitical dominance and access to the large amount of data that AI must process to produce outputs,” Ciardi said.

“You could say that we are colonized by AI, which is managed by select companies that brutally rake through our data,” she added.

Participants agreed that international organizations must enforce stronger regulations for the use and advancement of AI technologies.

“We need guardrails, because what is coming is a radical transformation that will change real and digital relations and require not only reflection but also regulation,” Benanti said.

The “Rome Call for AI Ethics,” a document signed by IBM, Microsoft, Cisco and U.N. Food and Agriculture Organization representatives, was promoted by the Vatican’s Academy for Life and lays out guidelines for promoting ethics, transparency and inclusivity in AI.

Other religious communities have also joined the “Rome Call,” including the Anglican Church and Jewish and Muslim representatives. On July 9, representatives from Eastern religions will gather for a Vatican-sponsored event to sign the “Rome Call” in Hiroshima, Japan. The location was chosen to emphasize the dangerous consequences of technology when left unchecked."

Wednesday, May 15, 2024

The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights; The National Law Review, May 13, 2024

Danner Kline of Bradley Arant Boult Cummings LLP, The National Law Review; The Generative AI Copyright Disclosure Act of 2024: Balancing Innovation and IP Rights

"As generative AI systems become increasingly sophisticated and widespread, concerns around the use of copyrighted works in their training data continue to intensify. The proposed Generative AI Copyright Disclosure Act of 2024 attempts to address this unease by introducing new transparency requirements for AI developers.

The Bill’s Purpose and Requirements

The primary goal of the bill is to ensure that copyright owners have visibility into whether their intellectual property is being used to train generative AI models. If enacted, the law would require companies to submit notices to the U.S. Copyright Office detailing the copyrighted works used in their AI training datasets. These notices would need to be filed within 30 days before or after the public release of a generative AI system.

The Copyright Office would then maintain a public database of these notices, allowing creators to search and see if their works have been included. The hope is that this transparency will help copyright holders make more informed decisions about licensing their IP and seeking compensation where appropriate."
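
The filing-and-search workflow the bill contemplates can be pictured with a short sketch. The Python below is purely illustrative: it is not based on the bill's text or on any actual Copyright Office system, and every class, field, and name in it is hypothetical. It models a developer filing a notice listing the copyrighted works in a training dataset, checks the 30-day filing window described above, and lets a creator search a toy public registry for their work.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: hypothetical names, not an actual Copyright Office API.

@dataclass
class TrainingDataNotice:
    developer: str      # company releasing the generative AI system
    system_name: str    # the generative AI system the notice covers
    release_date: date  # public release date of the system
    filed_on: date      # date the notice was submitted
    works: list[str] = field(default_factory=list)  # copyrighted works in the training set

    def filed_within_window(self, window_days: int = 30) -> bool:
        """Was the notice filed within 30 days before or after public release?"""
        return abs((self.filed_on - self.release_date).days) <= window_days


class PublicNoticeRegistry:
    """Toy stand-in for the public, searchable database the bill envisions."""

    def __init__(self) -> None:
        self._notices: list[TrainingDataNotice] = []

    def file(self, notice: TrainingDataNotice) -> None:
        self._notices.append(notice)

    def search_by_work(self, title: str) -> list[TrainingDataNotice]:
        """Let a copyright holder see which filed systems list a given work."""
        return [n for n in self._notices
                if any(title.lower() in w.lower() for w in n.works)]


# Example: file one notice, then search the registry as a rights holder might.
registry = PublicNoticeRegistry()
registry.file(TrainingDataNotice(
    developer="ExampleAI",
    system_name="ExampleGPT",
    release_date=date(2024, 6, 1),
    filed_on=date(2024, 6, 20),
    works=["The Great Novel", "Collected Essays"],
))
for notice in registry.search_by_work("great novel"):
    print(notice.system_name, notice.filed_within_window())  # ExampleGPT True
```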

Monday, May 21, 2018

How the Enlightenment Ends; The Atlantic, June 2018 Issue

Henry A. Kissinger, The Atlantic; How the Enlightenment Ends

"Heretofore confined to specific fields of activity, AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them. Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data...

The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions.

AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late."