Showing posts with label AI innovation. Show all posts

Thursday, July 24, 2025

President Trump’s AI Action Plan Misses the Mark, Calls for Action Without Vision; Public Knowledge, July 23, 2025

Shiva Stella, Public Knowledge; President Trump’s AI Action Plan Misses the Mark, Calls for Action Without Vision

"Today, the Trump administration announced its artificial intelligence action plan designed to “accelerate AI innovation” – by stepping aside and giving technology companies free rein over how the technology develops. The plan removes state and federal regulatory requirements, eliminates protections against bias and discrimination, fails to address competition concerns, and ignores climate and environmental risks.

The plan does continue to advance important work on developing an AI evaluation ecosystem and supporting critical research on AI interpretability, control, security risks, and advancing the fundamental science of AI. However, these modest steps throw into stark contrast the failure to meaningfully invest in America’s AI future.

Public Knowledge argues that real AI innovation will require real leadership from our democratically elected leaders, investments and actions that break down monopolies and corporate control, and public trust earned by creating AI systems that are safe, fair, and subject to the rule of law...

The following can be attributed to Nicholas Garcia, Senior Policy Counsel at Public Knowledge: 

“This plan is action without vision or direction. Cutting regulations and eliminating protections is, by itself, not a plan for innovation and competition in AI – it is a handout to already-entrenched, powerful tech companies. The real constraints on AI innovation are well-known: access to training data, compute power, and research talent. This plan’s solutions in those areas are severely lacking. At its heart, the plan is starkly divided between political posturing and serious science.

“It is clear that some of the experts’ messages from the public comments reached the White House. Continuing to develop an AI evaluation ecosystem; investing in research on AI interpretability and control; promoting the development and use of open-source and open-weights models; and claiming an international leadership position on evaluating AI national security risks are all critically important policy pursuits. 

“President Trump also spoke strongly in his speech tonight about the need to protect the rights to read and learn. He is absolutely correct about the need to protect those fundamental rights for everyone, including for AI training. Unfortunately, there is no mention of how to protect these rights or address questions about copyright in the AI action plan. 

“Instead of focusing more deeply on research or promoting competition, the AI action plan continues the Trump administration’s attack on diversity and equality, on the green energy solutions needed to both protect our planet and power AI, and on the very institutions of science and learning that are necessary to secure the promise of AI. This demonstrates how the vindictive political project of ‘preventing woke’ directly clashes with achieving actual leadership in AI.

“Ultimately, the plan’s soaring and optimistic language of AI acceleration is undermined by a failure to embrace an affirmative vision of how AI will improve the lives of everyday Americans and how to actually get there. We can only hope that these small steps in the right direction on evaluations, research, and open-source – along with the administration’s remarks on copyright – mean that there is more to come to ensure that the American people are the winners of the AI race. As it stands right now, this plan fails to meet the challenges of this pivotal moment.” 

You may view our recent blog post, “Hopes and Fears for President Trump’s AI Action Plan,” for more information."

Donald Trump Is Fairy-Godmothering AI; The Atlantic, July 23, 2025

Matteo Wong, The Atlantic; Donald Trump Is Fairy-Godmothering AI

"In a sense, the action plan is a bet. AI is already changing a number of industries, including software engineering, and a number of scientific disciplines. Should AI end up producing incredible prosperity and new scientific discoveries, then the AI Action Plan may well get America there faster simply by removing any roadblocks and regulations, however sensible, that would slow the companies down. But should the technology prove to be a bubble—AI products remain error-prone, extremely expensive to build, and unproven in many business applications—the Trump administration is more rapidly pushing us toward the bust. Either way, the nation is in Silicon Valley’s hands...

Once the red tape is gone, the Trump administration wants to create a “dynamic, ‘try-first’ culture for AI across American industry.” In other words, build and test out AI products first, and then determine if those products are actually helpful—or if they pose any risks.

Trump gestured toward other concessions to the AI industry in his speech. He specifically targeted intellectual-property laws, arguing that training AI models on copyrighted books and articles does not infringe upon copyright because the chatbots, like people, are simply learning from the content. This has been a major conflict in recent years, with more than 40 related lawsuits filed against AI companies since 2022. (The Atlantic is suing the AI company Cohere, for example.) If courts were to decide that training AI models with copyrighted material is against the law, it would be a major setback for AI companies. In their official recommendations for the AI Action Plan, OpenAI, Microsoft, and Google all requested a copyright exception, known as “fair use,” for AI training. Based on his statements, Trump appears to strongly agree with this position, although the AI Action Plan itself does not reference copyright and AI training.

Also sprinkled throughout the AI Action Plan are gestures toward some MAGA priorities. Notably, the policy states that the government will contract with only AI companies whose models are “free from top-down ideological bias”—a reference to Sacks’s crusade against “woke” AI—and that a federal AI-risk-management framework should “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Trump signed a third executive order today that, in his words, will eliminate “woke, Marxist lunacy” from AI models...

Looming over the White House’s AI agenda is the threat of Chinese technology getting ahead. The AI Action Plan repeatedly references the importance of staying ahead of Chinese AI firms, as did the president’s speech: “We will not allow any foreign nation to beat us; our nation will not live in a planet controlled by the algorithms of the adversaries,” Trump declared...

But whatever happens on the international stage, hundreds of millions of Americans will feel more and more of generative AI’s influence—on salaries and schools, air quality and electricity costs, federal services and doctor’s offices. AI companies have been granted a good chunk of their wish list; if anything, the industry is being told that it’s not moving fast enough. Silicon Valley has been given permission to accelerate, and we’re all along for the ride."

Wednesday, October 16, 2024

What's Next in AI: How do we regulate AI, and protect against worst outcomes?; Pittsburgh Post-Gazette, October 13, 2024

Evan Robinson-Johnson, Pittsburgh Post-Gazette; What's Next in AI: How do we regulate AI, and protect against worst outcomes?

"Gov. Josh Shapiro will give more of an update on that project and others at a Monday event in Pittsburgh.

While most folks will likely ask him how Pennsylvania can build and use the tools of the future, a growing cadre in Pittsburgh is asking a broader policy question about how to protect against AI’s worst tendencies...

There are no federal laws that regulate the development and use of AI. Even at the state level, policies are sparse. California Gov. Gavin Newsom vetoed a major AI safety bill last month that would have forced greater commitments from the nation’s top AI developers, most of which are based in the Golden State...

Google CEO Sundar Pichai made a similar argument during a visit to Pittsburgh last month. He encouraged students from local high schools to build AI systems that will make the world a better place, then told a packed audience at Carnegie Mellon University that AI is “too important a technology not to regulate.”

Mr. Pichai said he’s hoping for an “innovation-oriented approach” that mostly leverages existing regulations rather than reinventing the wheel."

Monday, September 9, 2024

New Resource Examines Data Governance in the Age of AI; Government Technology, September 6, 2024

News Staff, Government Technology; New Resource Examines Data Governance in the Age of AI

"A new guide for policymakers, “Data Policy in the Age of AI: A Guide to Using Data for Artificial Intelligence,” aims to educate leaders on responsible AI data use.

The question of how to best regulate artificial intelligence (AI) is one lawmakers are still addressing, as they are trying to balance innovation with risk mitigation. Meanwhile, state and local governments are creating their own regulations in the absence of a comprehensive federal policy.

The new white paper, from the Data Foundation, a nonprofit supporting data-informed public policy, is intended to be a comprehensive resource. It outlines three key pieces of effective data policy: high-quality data, effective governance principles and technical capacity."

Saturday, September 7, 2024

Council of Europe opens first ever global treaty on AI for signature; Council of Europe, September 5, 2024

Council of Europe; Council of Europe opens first ever global treaty on AI for signature

"The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.

The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom as well as Israel, the United States of America and the European Union...

The treaty provides a legal framework covering the entire lifecycle of AI systems. It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral."

Thursday, August 29, 2024

California advances landmark legislation to regulate large AI models; AP, August 28, 2024

Trân Nguyễn, AP; California advances landmark legislation to regulate large AI models

"Wiener’s proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance of reining in the technology and its potential risks without stifling the booming homegrown industry. 

California, home of 35 of the world’s top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other things."

Wednesday, August 28, 2024

Controversial California AI regulation bill finds unlikely ally in Elon Musk; The Mercury News, August 28, 2024

The Mercury News; Controversial California AI regulation bill finds unlikely ally in Elon Musk

"With a make-or-break deadline just days away, a polarizing bill to regulate the fast-growing artificial intelligence industry from progressive state Sen. Scott Wiener has gained support from an unlikely source.

Elon Musk, the Donald Trump-supporting, often regulation-averse Tesla CEO and X owner, this week said he thinks “California should probably pass” the proposal, which would regulate the development and deployment of advanced AI models, specifically large-scale AI products costing at least $100 million to build.

The surprising endorsement from a man who also owns an AI company comes as other political heavyweights typically much more aligned with Wiener’s views, including San Francisco Mayor London Breed and Rep. Nancy Pelosi, join major tech companies in urging Sacramento to put on the brakes."

Friday, July 26, 2024

In Hiroshima, a call for peaceful, ethical AI; Cisco, The Newsroom, July 18, 2024

Kevin Delaney, Cisco, The Newsroom; In Hiroshima, a call for peaceful, ethical AI

"“Artificial intelligence is a great tool with unlimited possibilities of application,” Archbishop Vincenzo Paglia, president of the Pontifical Academy for Life, said in an opening address at the AI Ethics for Peace conference in Hiroshima this month.

But Paglia was quick to add that AI’s great promise is fraught with potential dangers.

“AI can and must be guided so that its potential serves the good since the moment of its design,” he stressed. “This is our common responsibility.”

The two-day conference aimed to further the Rome Call for AI Ethics, a document first signed on February 28, 2020, at the Vatican. It promotes an ethical approach to artificial intelligence through shared responsibility among international organizations, governments, institutions and technology companies.

This month’s Hiroshima conference drew dozens of global religious, government, and technology leaders to a city that has transcended its dark past of tech-driven, atomic destruction to become a center for peace and cooperation.

The overarching goal in Hiroshima? To ensure that, unlike atomic energy, artificial intelligence is used only for peace and positive human advancement. And as an industry leader in AI innovation and its responsible use, Cisco was amply represented by Dave West, Cisco’s president for Asia Pacific, Japan, and Greater China (APJC)."