
Wednesday, October 16, 2024

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

EVAN ROBINSON-JOHNSON, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

Monday, September 9, 2024

New Resource Examines Data Governance in the Age of AI; Government Technology, September 6, 2024

News Staff, Government Technology; New Resource Examines Data Governance in the Age of AI

"A new guide for policymakers, “Data Policy in the Age of AI: A Guide to Using Data for Artificial Intelligence,” aims to educate leaders on responsible AI data use.

The question of how to best regulate artificial intelligence (AI) is one lawmakers are still addressing, as they are trying to balance innovation with risk mitigation. Meanwhile, state and local governments are creating their own regulations in the absence of a comprehensive federal policy.

The new white paper, from the Data Foundation, a nonprofit supporting data-informed public policy, is intended to be a comprehensive resource. It outlines three key pieces of effective data policy: high-quality data, effective governance principles and technical capacity."

Saturday, September 7, 2024

Council of Europe opens first ever global treaty on AI for signature; Council of Europe, September 5, 2024

Council of Europe; Council of Europe opens first ever global treaty on AI for signature

"The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.

The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom as well as Israel, the United States of America and the European Union...

The treaty provides a legal framework covering the entire lifecycle of AI systems. It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral."

Thursday, August 29, 2024

California advances landmark legislation to regulate large AI models; AP, August 28, 2024

TRÂN NGUYỄN, AP; California advances landmark legislation to regulate large AI models

"Wiener’s proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance of reigning in the technology and its potential risks without stifling the booming homegrown industry. 

California, home of 35 of the world’s top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other things."

Wednesday, August 28, 2024

Controversial California AI regulation bill finds unlikely ally in Elon Musk; The Mercury News, August 28, 2024

The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk

"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.

Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.

The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."

Friday, July 26, 2024

In Hiroshima, a call for peaceful, ethical AI; Cisco, The Newsroom, July 18, 2024

Kevin Delaney , Cisco, The Newsroom; In Hiroshima, a call for peaceful, ethical AI

"“Artificial intelligence is a great tool with unlimited possibilities of application,” Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, said in an opening address at the AI Ethics for Peace conference in Hiroshima this month.

But Paglia was quick to add that AI’s great promise is fraught with potential dangers.

“AI can and must be guided so that its potential serves the good since the moment of its design,” he stressed. “This is our common responsibility.”

The two-day conference aimed to further the Rome Call for AI Ethics, a document first signed on February 28, 2020, at the Vatican. It promotes an ethical approach to artificial intelligence through shared responsibility among international organizations, governments, institutions and technology companies.

This month’s Hiroshima conference drew dozens of global religious, government, and technology leaders to a city that has transcended its dark past of tech-driven, atomic destruction to become a center for peace and cooperation.

The overarching goal in Hiroshima? To ensure that, unlike atomic energy, artificial intelligence is used only for peace and positive human advancement. And as an industry leader in AI innovation and its responsible use, Cisco was amply represented by Dave West, Cisco’s president for Asia Pacific, Japan, and Greater China (APJC)."