Showing posts with label AI oversight.

Friday, April 3, 2026

AI Is a Threat to Everything the American People Hold Dear; The Wall Street Journal, April 2, 2026

Bernie Sanders, The Wall Street Journal; AI Is a Threat to Everything the American People Hold Dear. It kills jobs, equality, connection, democracy and maybe the human race. Congress must act.

"The American people are deeply apprehensive about the impact that artificial intelligence will have on their lives. A recent Quinnipiac poll found that 55% of Americans think AI will do more harm than good, 70% think AI will lead to fewer jobs, and only 5% think AI development is being led by people and organizations that represent their interests.

In the midst of all of this deep concern about the future of AI, 74% of Americans think the government isn't doing enough to regulate the use of AI."

Thursday, April 2, 2026

AI gaps in the boardroom are becoming a reputational risk; Axios, April 2, 2026

Eleanor Hawkins, Axios; AI gaps in the boardroom are becoming a reputational risk

"The big picture: Companies across every industry are being forced into rapid AI-driven transformation, but many corporate boards lack the expertise to guide strategy, manage risk or communicate decisions credibly to stakeholders.

By the numbers: Only 39% of Fortune 100 boards have any form of AI oversight, such as committees, a director with AI expertise, or an ethics board, according to McKinsey research.


Another recent report found that only 13% of S&P 500 companies have at least one director with AI-related expertise.


Similarly, McKinsey's survey of directors found that 66% say their boards have "limited to no knowledge or experience" with AI, and nearly one in three say AI does not even appear on their agendas.


And a report from the National Association of Corporate Directors (NACD) found that only 17% have established an AI education plan for directors, and 6% have a dedicated committee to oversee AI.


Between the lines: Having an AI-savvy board is a major competitive advantage, according to a recent MIT study."

Thursday, March 26, 2026

White House Unveils A.I. Policy Aimed at Blocking State Laws; The New York Times, March 20, 2026

The New York Times; White House Unveils A.I. Policy Aimed at Blocking State Laws

The Trump administration on Friday released new guidelines for federal legislation on the technology, recommending some safeguards for children and consumer protections for energy costs.

"The White House on Friday released policy guidelines that called for blocking state laws regulating artificial intelligence, while also recommending some safeguards for children and consumer protections for energy costs.

Dozens of states have passed laws in recent months to regulate A.I., which has created concerns about the technology’s potential to steal jobs, push up energy prices and threaten national security. But President Trump has made clear U.S. companies should have mostly free rein in a global race to dominate the technology.

On Friday, the White House called on Congress to pass federal A.I. legislation to override the state laws. Among the Trump administration’s suggested measures, Congress would streamline the process for building data centers, the warehouses full of computers that power A.I. The framework also proposed guardrails to prevent the government from using the technology for censorship, as well as mandating A.I.-related work force training."

Tuesday, March 10, 2026

OpenAI robotics leader resigns over concerns about Pentagon AI deal; NPR, March 8, 2026

NPR; OpenAI robotics leader resigns over concerns about Pentagon AI deal

"A senior member of OpenAI's robotics team has resigned, citing concerns about how the company moved forward with a recently announced partnership with the U.S. Department of Defense.

Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, posted on social media that she had stepped down on "principle" after the company revealed plans to make its AI systems available inside secure Defense Department computing systems...

In public posts explaining her decision, Kalinowski wrote: "I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call."

She said policy guardrails around certain AI uses were not sufficiently defined before OpenAI announced an agreement with the Pentagon. "AI has an important role in national security," Kalinowski wrote. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.""

Monday, February 23, 2026

Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation; The New York Times, February 23, 2026

The New York Times; Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation

The ads by Public First Action, which started airing on Monday, are part of an escalating political war over artificial intelligence before the midterm elections.

"A new ad campaign on Monday warned northern New Jersey residents that Congress could leave them vulnerable to harm by artificial intelligence.

The ad, which opens with photos of A.I.-generated women smiling on social media alongside A.I.-generated headlines, urged voters to tell their House representative to vote against a bill that would block states from creating protections against A.I. scams.

“He can make sure A.I. serves us, not the other way around,” the ad said of Josh Gottheimer, the Democratic co-chair of the House’s new A.I. commission, which is expected to heavily influence legislation on the topic. “New Jersey families come before Big Tech’s bottom line.”

The $300,000 ad campaign was paid for by Public First Action, a super PAC operation backed by the A.I. start-up Anthropic. Focused on New Jersey, the campaign is likely to run several weeks — part of several similar initiatives by the group nationally."

Tuesday, February 17, 2026

The economics of AI outweigh ethics for tech CEOs, business leader says; CNN, February 16, 2026

CNN; The economics of AI outweigh ethics for tech CEOs, business leader says

"Podcast host and business leader Scott Galloway joins Dana Bash on "Inside Politics" to discuss the need for comprehensive government regulation of AI. “We have increasingly outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech," Galloway tells Bash, adding, "This is another example of how government is failing to step in and provide thoughtful, sensible regulations.” His comments come as the Pentagon confirms it's reviewing a contract with AI company Anthropic after a reported clash over the scope of AI guardrails."

Tuesday, February 3, 2026

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report; The Guardian, February 3, 2026

The Guardian; ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report

"The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month...

1. The capabilities of AI models are improving...


2. Deepfakes are improving and proliferating...


3. AI companies have introduced biological and chemical risk safeguards...


4. AI companions have grown rapidly in popularity...


5. AI is not yet capable of fully autonomous cyber-attacks...


6. AI systems are getting better at undermining oversight...


7. The jobs impact remains unclear"

Monday, June 30, 2025

Senate’s New A.I. Moratorium Proposal Draws Fresh Criticism; The New York Times, June 30, 2025

The New York Times; Senate’s New A.I. Moratorium Proposal Draws Fresh Criticism

"Two senior senators have reached a compromise on an amendment in the Republican economic policy bill that would block state laws on artificial intelligence.

Senators Marsha Blackburn, Republican of Tennessee, and Ted Cruz, Republican of Texas, agreed late Sunday to decrease a proposed moratorium on state laws regulating the technology to five years from 10.

But Democratic lawmakers and consumer protection groups on Monday criticized new language in the amendment that would create a higher standard for the enforcement of existing tech-related state laws, including those for online child safety and consumer protections. Any current laws related to A.I. cannot pose an “undue or disproportionate burden” to A.I. companies, according to the amendment."

Monday, October 30, 2023

Biden plans to step up government oversight of AI with new 'pressure tests'; NPR, October 30, 2023

NPR; Biden plans to step up government oversight of AI with new 'pressure tests'

"President Biden on Monday will take sweeping executive action to try to establish oversight of the rapidly evolving artificial intelligence sector, setting new standards for safety tests for AI products – as well as a system for federal "pressure tests" of major systems, White House chief of staff Jeff Zients told NPR.

Months in the making, the executive order reflects White House concerns that the technology, left unchecked, could pose significant risks to national security, the economy, public health and privacy. The announcement comes just days ahead of a major global summit on AI taking place in London, which Vice President Harris will attend."

Thursday, September 14, 2023

Transcript: US Senate Judiciary Hearing on Oversight of A.I.; Tech Policy Press, September 13, 2023

Gabby Miller, Tech Policy Press; Transcript: US Senate Judiciary Hearing on Oversight of A.I.

"Artificial Intelligence (AI) is in the spotlight only a week into the U.S. Congress’ return from recess. On Tuesday, the Senate held two AI-focused Subcommittee hearings just a day before the first AI Insight Forum hosted by Senate Majority Leader Charles Schumer (D-NY).

Tuesday’s hearing before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law was led by Chairman Sen. Richard Blumenthal (D-CT) and Ranking Member Josh Hawley (R-MO), another of a series of hearings in the committee on how best to govern artificial intelligence. It also corresponded with their formal introduction of a bipartisan bill by Sens. Blumenthal and Hawley that would deny AI companies Section 230 immunity. 

  • Woodrow Hartzog, Professor of Law, Boston University School of Law; Fellow, Cordell Institute for Policy in Medicine & Law, Washington University in St. Louis (written testimony)
  • William Dally, Chief Scientist and Senior Vice President of Research, NVIDIA Corporation (written testimony)
  • Brad Smith, Vice Chair and President, Microsoft Corporation (written testimony)

(Microsoft’s Smith will also be in attendance for Sen. Schumer’s first AI Insight Forum on Wednesday and NVIDIA’s CEO, Jensen Huang, will be joining him.)"