Showing posts with label AI regulation. Show all posts

Monday, May 4, 2026

Poll: The midterms' new big players are pushing agendas that voters don’t fully support; Politico, May 3, 2026

ERIN DOHERTY, JASPER GOODMAN, JESSICA PIPER, DANIEL BARNES and BRENDAN BORDELON, Politico; Poll: The midterms' new big players are pushing agendas that voters don’t fully support

"Deep-pocketed political groups tied to artificial intelligence and cryptocurrency are rapidly reshaping the midterm money landscape — but many Americans are uneasy with the industries behind the spending.

New results from The POLITICO Poll find broad public skepticism about crypto and AI, creating a possible conflict for candidates benefitting from an influx of contributions from the two industries. These groups are pouring millions of dollars into competitive 2026 races to elevate politicians who they believe will support their agendas in Washington.

Meanwhile, Americans have been slow to embrace either technology.

A 45 percent plurality of Americans say investing in cryptocurrency is not worth the risk, even if it can yield high returns, and a 44 percent plurality say AI is developing too quickly, according to the April survey conducted by independent firm Public First.

Nearly half of Americans say they trust a traditional bank with their money more than a cryptocurrency platform, while just 17 percent say the opposite. And two-thirds support lawmakers either imposing strict regulations or setting broad principles for the AI industry."

Wednesday, April 29, 2026

Americans are down on AI. These two caricatures are to blame.; The Washington Post, April 28, 2026

Sha Sajadieh, The Washington Post; Americans are down on AI. These two caricatures are to blame.

"America is all-in on artificial intelligence. Americans are not. 

That is clear enough from domestic polling alone. But the 2026 AI Index, released this month by Stanford University’s Institute for Human-Centered AI, puts this skepticism into a global context. 

While the United States and China are nearly matched in their aggressive investment and economic stakes, they are worlds apart in public sentiment. About 84 percent of respondents in China say they are excited about AI, versus just 38 percent in the U.S., a gap with profound implications for how each country builds, adopts and governs the technology.

Public opinion does not merely reflect the AI debate. It decides whether democratic societies can govern the technology wisely, adopt it productively and distinguish between real risk and manufactured panic.

The U.S. now stands out with its distrust. Americans reported the lowest trust in their government to regulate AI responsibly of any country surveyed: just 31 percent. The global average was 54 percent. In Singapore, that number was 81 percent."

Monday, April 27, 2026

Trump’s anti-DEI movement comes for AI; Politico, April 27, 2026

AARON MAK, Politico; Trump’s anti-DEI movement comes for AI

"The legal crusade against affirmative action is coming for artificial intelligence.

On Friday, the Justice Department intervened in xAI’s challenge to Colorado’s “Consumer Protections for Artificial Intelligence” law. In its complaint, the DOJ argues the law’s provisions curbing algorithmic bias violates people’s 14th Amendment right to be treated equally under the law.

The intervention is in some ways an outgrowth of the movement to eradicate all race-conscious policies after the landmark Supreme Court case Students for Fair Admissions v. Harvard in 2023 struck down affirmative action in college admissions."

Sunday, April 26, 2026

This Is How We Get Moral A.I. Companies; The New York Times, April 26, 2026

The New York Times; This Is How We Get Moral A.I. Companies

"Artificial intelligence can be wondrous, but the technology underneath is more than a little monstrous. It eats up all the words in the world, from blogs to books, often without permission. It burns whole forests’ worth of energy, digesting that raw material into its models, and gulps billions of gallons of water to cool down. These are the same qualities we perceive in Godzilla, but distributed. Is it any wonder that the Japanese word “kaiju,” or strange beast, has “AI” smack in the middle?...

The entire culture of American technology is built around two terms: disruption and, of course, scale. But ethics are constraints on disruption and scale. Truly ethics-bound organizations — the U.S. justice system, the American Medical Association, the Catholic priesthood — have hard scaling limits. Their rules run deep, and their requirements to serve are so onerous that only a few people can do the job. Punishments for transgressors include losing their licenses, being defrocked and being disbarred. Software industry people might have good degrees and are often good people, but they are making it up as they go along. They take no oath, are inconsistently certified and can only be fired, not exiled from the trade."

Monday, April 13, 2026

Nobody is governing AI; Quartz, April 8, 2026

Jackie Snow, Quartz; Nobody is governing AI

Artificial intelligence is advancing faster than lawmakers can regulate it, while global AI governance fragments in real time

"Artificial intelligence is now making hiring decisions, tutoring children, optimizing power grids, and targeting weapons systems. The rules governing any of that are, almost everywhere, either nonexistent, stalled in committee, or under active attack.

In the United States, the federal government has spent three years producing executive orders, frameworks, and guidelines, none of which have become law. States that tried to fill the gap have been threatened with funding cuts and lawsuits. In Europe, the most ambitious AI legislation in the world is being delayed or softened before most of it has even taken effect. The technology, meanwhile, has not paused for any of this."

Friday, April 10, 2026

OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters; Wired, April 9, 2026

MAXWELL ZEFF, Wired; OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters

The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable—even in cases where their products cause “critical harm.”

"OPENAI IS THROWING its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage."

Thursday, April 9, 2026

Claude Mythos Is Everyone’s Problem; The Atlantic, April 9, 2026

Matteo Wong, The Atlantic; Claude Mythos Is Everyone’s Problem

What happens when AI can hack everything?

"These companies can or could soon have the capability to launch major cyberattacks, conduct mass surveillance, influence military operations, cause huge swings in financial and labor markets, and reorient global supply chains. In theory, nothing governs these companies other than their own morals and their investors. They are developing the power to upend nations and economies. These are the AI superpowers."

Friday, April 3, 2026

AI Is a Threat to Everything the American People Hold Dear.; The Wall Street Journal, April 2, 2026

Bernie Sanders, The Wall Street Journal; AI Is a Threat to Everything the American People Hold Dear. It kills jobs, equality, connection, democracy and maybe the human race. Congress must act.

"The American people are deeply apprehensive about the impact that artificial intelligence will have on their lives. A recent Quinnipiac poll found that 55% of Americans think AI will do more harm than good, 70% think AI will lead to fewer jobs, and only 5% think AI development is being led by people and organizations that represent their interests.

In the midst of all of this deep concern about the future of AI, 74% of Americans think the government isn't doing enough to regulate the use of AI."

Thursday, March 26, 2026

White House Unveils A.I. Policy Aimed at Blocking State Laws; The New York Times, March 20, 2026

The New York Times; White House Unveils A.I. Policy Aimed at Blocking State Laws

The Trump administration on Friday released new guidelines for federal legislation on the technology, recommending some safeguards for children and consumer protections for energy costs.

"The White House on Friday released policy guidelines that called for blocking state laws regulating artificial intelligence, while also recommending some safeguards for children and consumer protections for energy costs.

Dozens of states have passed laws in recent months to regulate A.I., which has created concerns about the technology’s potential to steal jobs, push up energy prices and threaten national security. But President Trump has made clear U.S. companies should have mostly free rein in a global race to dominate the technology.

On Friday, the White House called on Congress to pass federal A.I. legislation to override the state laws. Among the Trump administration’s suggested measures, Congress would streamline the process for building data centers, the warehouses full of computers that power A.I. The framework also proposed guardrails to prevent the government from using the technology for censorship, as well as mandating A.I.-related work force training."

Monday, February 23, 2026

Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation; The New York Times, February 23, 2026

The New York Times; Backed by Anthropic, a Super PAC Group Begins an Ad Blitz in Support of A.I. Regulation

The ads by Public First Action, which started airing on Monday, are part of an escalating political war over artificial intelligence before the midterm elections.

"A new ad campaign on Monday warned northern New Jersey residents that Congress could leave them vulnerable to harm by artificial intelligence.

The ad, which opens with photos of A.I.-generated women smiling on social media alongside A.I.-generated headlines, urged voters to tell their House representative to vote against a bill that would block states from creating protections against A.I. scams.

“He can make sure A.I. serves us, not the other way around,” the ad said of Josh Gottheimer, the Democratic co-chair of the House’s new A.I. commission, which is expected to heavily influence legislation on the topic. “New Jersey families come before Big Tech’s bottom line.”

The $300,000 ad campaign was paid for by Public First Action, a super PAC operation backed by the A.I. start-up Anthropic. Focused on New Jersey, the campaign is likely to run several weeks — part of several similar initiatives by the group nationally."

Tuesday, February 17, 2026

The economics of AI outweigh ethics for tech CEOs, business leader says; CNN, February 16, 2026

CNN; The economics of AI outweigh ethics for tech CEOs, business leader says

"Podcast host and business leader Scott Galloway joins Dana Bash on "Inside Politics" to discuss the need for comprehensive government regulation of AI. “We have increasingly outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech," Galloway tells Bash, adding, "This is another example of how government is failing to step in and provide thoughtful, sensible regulations.” His comments come as the Pentagon confirms it's reviewing a contract with AI company Anthropic after a reported clash over the scope of AI guardrails."

Tuesday, February 10, 2026

No, the human-robot singularity isn’t here. But we must take action to govern AI; The Guardian, February 10, 2026

The Guardian; No, the human-robot singularity isn’t here. But we must take action to govern AI

"Based upon my years of research on bots, AI and computational propaganda, I can tell you two things with near certainty. First, Moltbook is nothing new. Humans have built bots that can talk to one another – and to humans – for decades. They’ve been designed to make outlandish, even frightening, claims throughout this time. Second, the singularity is not here. Nor is AGI. According to most researchers, neither is remotely close. AI’s advancement is limited by a number of very tangible factors: mathematics, data access and business costs among them. Claims that AGI or the singularity have arrived are not grounded in empirical research or science.

But as tech companies breathlessly promote their AI capabilities another thing is also clear: big tech is now far from being the countervailing force it was during the first Trump administration. The overblown claims emanating from Silicon Valley about AI have become intertwined with the nationalism of the US government as the two work together in a bid to “win” the AI race. Meanwhile, ICE is paying Palantir $30m to provide AI-enabled software that may be used for government surveillance. Musk and other tech executives continue to champion far-right causes. Google and Apple also removed apps people were using to track ICE from their digital storefronts after political pressure.

Even if we don’t yet have to worry about the singularity, we do need to fight back against this marriage of convenience caused by big tech’s quest for higher valuations and Washington’s desire for control. When tech and politicians are in lockstep, constituents will need to use their power to decide what will happen with AI."

Friday, January 30, 2026

The $1.5 Billion Reckoning: AI Copyright and the 2026 Regulatory Minefield; JD Supra, January 27, 2026

Rob Robinson, JD Supra; The $1.5 Billion Reckoning: AI Copyright and the 2026 Regulatory Minefield

"In the silent digital halls of early 2026, the era of “ask for forgiveness later” has finally hit a $1.5 billion brick wall. As legal frameworks in Brussels and New Delhi solidify, the wild west of AI training data is being partitioned into clearly marked zones of liability and license. For those who manage information, secure data, or navigate the murky waters of eDiscovery, this landscape is no longer a theoretical debate—it is an active regulatory battlefield where every byte of training data carries a price tag."

Sunday, December 28, 2025

Americans Hate AI. Which Party Will Benefit?; Politico, December 28, 2025

CALDER MCHUGH, Politico; Americans Hate AI. Which Party Will Benefit?

"There is a massive, growing opportunity for Democrats to tap into rising anxiety, fear and anger about the havoc AI could wreak in people’s lives, they say, on issues from energy affordability to large-scale job losses, and channel it toward a populist movement — and not doing it, or not doing it strongly enough, will hurt the party...

There is hardly any issue that polls lower than unchecked AI development among Americans. Gallup polling showed that 80 percent of American adults think the government should regulate AI, even if it means growing more slowly. Pew, meanwhile, ran a study that showed only 17 percent of Americans think AI will have a positive impact on the U.S. over the next 20 years. Even congressional Democrats, at a record low 18 percent approval, beat that out, according to Quinnipiac.

“It’s not just the working class [that’s hurting]. It’s the middle class. It’s the upper middle class,” said Morris Katz, a strategist who has worked with incoming New York mayor Zohran Mamdani, Maine Senate candidate Graham Platner and Nebraska independent Dan Osborn, among others. “We’re really headed towards a point in which it feels like we will all be struggling, except for 12 billionaires hiding out in a wine cave somewhere.”"

Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’; The Independent, December 28, 2025

John Bowden, The Independent; Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’

Senator’s warnings come as Trump renews calls to ban states from regulating AI

"“This is the most consequential technology in the history of humanity... There’s not been one single word of serious discussion in Congress about that reality,” said the Vermont senator.

Sanders added that while tech billionaires were pouring money into AI development, they were doing so with the aim of enriching and empowering themselves while ignoring the obvious economic shockwaves that would be caused by the widespread adoption of the technology.

“Elon Musk. [Mark] Zuckerberg. [Jeff] Bezos. Peter Thiel... Do you think they’re staying up nights worrying about working people?” Sanders said. “What are they gonna do when people have no jobs?"

Thursday, December 11, 2025

AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures; International Business Times, December 11, 2025

Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures

"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.

This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.

The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.

To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.

We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.

And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.

At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."

Trump Signs Executive Order to Neuter State A.I. Laws; The New York Times, December 11, 2025

The New York Times; Trump Signs Executive Order to Neuter State A.I. Laws

"President Trump signed an executive order on Thursday that aims to neuter state laws that place limits on the artificial intelligence industry, a win for tech companies that have lobbied against regulation of the booming technology.

Mr. Trump, who has said it is important for America to dominate A.I., has criticized the state laws for generating a confusing patchwork of regulations. He said his order would create one federal regulatory framework that would override the state laws, and added that it was critical to keep the United States ahead of China in a battle for leadership on the technology."

Banning AI Regulation Would Be a Disaster; The Atlantic, December 11, 2025

Chuck Hagel, The Atlantic; Banning AI Regulation Would Be a Disaster

"On Monday, Donald Trump announced on Truth Social that he would soon sign an executive order prohibiting states from regulating AI...

The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency...

Contrary to the narrative promoted by a small number of dominant firms, regulation does not have to slow innovation. Clear rules would foster growth by hardening systems against attack, reducing misuse, and ensuring that the models integrated into defense systems and public-facing platforms are robust and secure before deployment at scale.

Critics of oversight are correct that a patchwork of poorly designed laws can impede that mission. But they miss two essential points. First, competitive AI policy cannot be cordoned off from the broader systems that shape U.S. stability and resilience...

Second, states remain the country’s most effective laboratories for developing and refining policy on complex, fast-moving technologies, especially in the persistent vacuum of federal action...

The solution to AI’s risks is not to dismantle oversight but to design the right oversight. American leadership in artificial intelligence will not be secured by weakening the few guardrails that exist. It will be secured the same way we have protected every crucial technology touching the safety, stability, and credibility of the nation: with serious rules built to withstand real adversaries operating in the real world. The United States should not be lobbied out of protecting its own future."

Thursday, November 13, 2025

AI Regulation is Not Enough. We Need AI Morals; Time, November 11, 2025

Nicole Brachetti Peretti, Time; AI Regulation is Not Enough. We Need AI Morals

"Pope Leo XIV recently called for “builders of AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.” 

Some tech leaders, including Andreessen Horowitz cofounder Marc Andreessen have mocked such calls. But to do so is a mistake. We don’t just need AI regulation—we need AI morals." 

Friday, October 10, 2025

It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?; The Guardian, October 10, 2025

The Guardian; It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?

"I’ve seen it said that OpenAI’s motto should be “better to beg forgiveness than ask permission”, but that cosies it preposterously. Its actual motto seems to be “we’ll do what we want and you’ll let us, bitch”. Consider Altman’s recent political journey. “To anyone familiar with the history of Germany in the 1930s,” Sam warned in 2016, “it’s chilling to watch Trump in action.” He seems to have got over this in time to attend Donald Trump’s second inauguration, presumably because – if we have to extend his artless and predictable analogy – he’s now one of the industrialists welcome in the chancellery to carve up the spoils. “Thank you for being such a pro-business, pro-innovation president,” Sam simpered to Trump at a recent White House dinner for tech titans. “It’s a very refreshing change.” Inevitably, the Trump administration has refused to bring forward any AI regulation at all.

Meanwhile, please remember something Sam and his ironicidal maniacs said earlier this year, when it was suggested that the Chinese AI chatbot DeepSeek might have been trained on some of OpenAI’s work. “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more,” his firm’s anguished statement ran. “We take aggressive, proactive countermeasures to protect our technology.” Hilariously, it seemed that the last entity on earth with the power to fight AI theft was OpenAI."