Showing posts with label AI regulation. Show all posts

Sunday, December 28, 2025

Americans Hate AI. Which Party Will Benefit?; Politico, December 28, 2025

CALDER MCHUGH, Politico; Americans Hate AI. Which Party Will Benefit?

"There is a massive, growing opportunity for Democrats to tap into rising anxiety, fear and anger about the havoc AI could wreak in people’s lives, they say, on issues from energy affordability to large-scale job losses, and channel it toward a populist movement — and not doing it, or not doing it strongly enough, will hurt the party...

There is hardly any issue that polls lower than unchecked AI development among Americans. Gallup polling showed that 80 percent of American adults think the government should regulate AI, even if it means growing more slowly. Pew, meanwhile, ran a study that showed only 17 percent of Americans think AI will have a positive impact on the U.S. over the next 20 years. Even congressional Democrats, at a record low 18 percent approval, beat that out, according to Quinnipiac.

“It’s not just the working class [that’s hurting]. It’s the middle class. It’s the upper middle class,” said Morris Katz, a strategist who has worked with incoming New York mayor Zohran Mamdani, Maine Senate candidate Graham Platner and Nebraska independent Dan Osborn, among others. “We’re really headed towards a point in which it feels like we will all be struggling, except for 12 billionaires hiding out in a wine cave somewhere.”"

Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’; The Independent, December 28, 2025

John Bowden, The Independent; Bernie Sanders calls for pause in AI development: ‘What are they gonna do when people have no jobs?’

Senator’s warnings come as Trump renews calls to ban states from regulating AI

"“This is the most consequential technology in the history of humanity... There’s not been one single word of serious discussion in Congress about that reality,” said the Vermont senator.

Sanders added that while tech billionaires were pouring money into AI development, they were doing so with the aim of enriching and empowering themselves while ignoring the obvious economic shockwaves that would be caused by the widespread adoption of the technology.

“Elon Musk. [Mark] Zuckerberg. [Jeff] Bezos. Peter Thiel... Do you think they’re staying up nights worrying about working people?” Sanders said. “What are they gonna do when people have no jobs?"

Thursday, December 11, 2025

AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures; International Business Times, December 11, 2025

Lisa Parlagreco, International Business Times; AI Has Its Place in Law, But Lawyers Who Treat It as a Replacement Can Risk Trust, Ethics, and Their Clients' Futures

"When segments of our profession begin treating AI outputs as inherently reliable, we normalize a lower threshold of scrutiny, and the law cannot function on lowered standards. The justice system depends on precision, on careful reading, on the willingness to challenge assumptions rather than accept the quickest answer. If lawyers become comfortable skipping that intellectual step, even once, we begin to erode the habits that make rigorous advocacy possible. The harm is not just procedural; it's generational. New lawyers watch what experienced lawyers do, not what they say, and if they see shortcuts rewarded rather than corrected, that becomes the new baseline.

This is not to suggest that AI has no place in law. When used responsibly, with human oversight, it can be a powerful tool. Legal teams are successfully incorporating AI into tasks like document review, contract analysis, and litigation preparation. In complex cases with tens of thousands of documents, AI has helped accelerate discovery and flag issues that humans might overlook. In academia as well, AI has shown promise in grading essays and providing feedback that can help educate the next generation of lawyers, but again, under human supervision.

The key distinction is between augmentation and automation. We must not be naive about what AI represents. It is not a lawyer. It doesn't hold professional responsibility. It doesn't understand nuance, ethics, or the weight of a client's freedom or financial well-being. It generates outputs based on patterns and statistical likelihoods. That's incredibly useful for ideation, summarization, and efficiency, but it is fundamentally unsuited to replace human reasoning.

To ignore this reality is to surrender the core values of our profession. Lawyers are trained not just to know the law but to apply it with judgment, integrity, and a commitment to truth. Practices that depend on AI without meaningful human oversight communicate a lack of diligence and care. They weaken public trust in our profession at a time when that trust matters more than ever.

We should also be thinking about how we prepare future lawyers. Law schools and firms must lead by example, teaching students not just how to use AI, but how to question it. They must emphasize that AI outputs require verification, context, and critical thinking. AI should supplement legal education, not substitute it. The work of a lawyer begins long before generating a draft; it begins with curiosity, skepticism, and the courage to ask the right questions.

And yes, regulation has its place. Many courts and bar associations are already developing guidelines for the responsible use of AI. These frameworks encourage transparency, require lawyers to verify any AI-assisted research, and emphasize the ethical obligations that cannot be delegated to a machine. That's progress, but it needs broader adoption and consistent enforcement.

At the end of the day, technology should push us forward, not backward. AI can make our work more efficient, but it cannot, and should not, replace our judgment. The lawyer who delegates their thinking to an algorithm risks their profession, their client's case, and the integrity of the justice system itself."

Trump Signs Executive Order to Neuter State A.I. Laws; The New York Times, December 11, 2025

The New York Times; Trump Signs Executive Order to Neuter State A.I. Laws

"President Trump signed an executive order on Thursday that aims to neuter state laws that place limits on the artificial intelligence industry, a win for tech companies that have lobbied against regulation of the booming technology.

Mr. Trump, who has said it is important for America to dominate A.I., has criticized the state laws for generating a confusing patchwork of regulations. He said his order would create one federal regulatory framework that would override the state laws, and added that it was critical to keep the United States ahead of China in a battle for leadership on the technology."

Banning AI Regulation Would Be a Disaster; The Atlantic, December 11, 2025

Chuck Hagel, The Atlantic; Banning AI Regulation Would Be a Disaster

"On Monday, Donald Trump announced on Truth Social that he would soon sign an executive order prohibiting states from regulating AI...

The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency...

Contrary to the narrative promoted by a small number of dominant firms, regulation does not have to slow innovation. Clear rules would foster growth by hardening systems against attack, reducing misuse, and ensuring that the models integrated into defense systems and public-facing platforms are robust and secure before deployment at scale.

Critics of oversight are correct that a patchwork of poorly designed laws can impede that mission. But they miss two essential points. First, competitive AI policy cannot be cordoned off from the broader systems that shape U.S. stability and resilience...

Second, states remain the country’s most effective laboratories for developing and refining policy on complex, fast-moving technologies, especially in the persistent vacuum of federal action...

The solution to AI’s risks is not to dismantle oversight but to design the right oversight. American leadership in artificial intelligence will not be secured by weakening the few guardrails that exist. It will be secured the same way we have protected every crucial technology touching the safety, stability, and credibility of the nation: with serious rules built to withstand real adversaries operating in the real world. The United States should not be lobbied out of protecting its own future."

Thursday, November 13, 2025

AI Regulation is Not Enough. We Need AI Morals; Time, November 11, 2025

Nicole Brachetti Peretti, Time; AI Regulation is Not Enough. We Need AI Morals

"Pope Leo XIV recently called for “builders of AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.” 

Some tech leaders, including Andreessen Horowitz cofounder Marc Andreessen, have mocked such calls. But to do so is a mistake. We don’t just need AI regulation—we need AI morals."

Friday, October 10, 2025

It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?; The Guardian, October 10, 2025

The Guardian; It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards?

"I’ve seen it said that OpenAI’s motto should be “better to beg forgiveness than ask permission”, but that cosies it preposterously. Its actual motto seems to be “we’ll do what we want and you’ll let us, bitch”. Consider Altman’s recent political journey. “To anyone familiar with the history of Germany in the 1930s,” Sam warned in 2016, “it’s chilling to watch Trump in action.” He seems to have got over this in time to attend Donald Trump’s second inauguration, presumably because – if we have to extend his artless and predictable analogy – he’s now one of the industrialists welcome in the chancellery to carve up the spoils. “Thank you for being such a pro-business, pro-innovation president,” Sam simpered to Trump at a recent White House dinner for tech titans. “It’s a very refreshing change.” Inevitably, the Trump administration has refused to bring forward any AI regulation at all.

Meanwhile, please remember something Sam and his ironicidal maniacs said earlier this year, when it was suggested that the Chinese AI chatbot DeepSeek might have been trained on some of OpenAI’s work. “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more,” his firm’s anguished statement ran. “We take aggressive, proactive countermeasures to protect our technology.” Hilariously, it seemed that the last entity on earth with the power to fight AI theft was OpenAI."

Sunday, September 28, 2025

Why I gave the world wide web away for free; The Guardian, September 28, 2025

The Guardian; Why I gave the world wide web away for free

"Sharing your information in a smart way can also liberate it. Why is your smartwatch writing your biological data to one silo in one format? Why is your credit card writing your financial data to a second silo in a different format? Why are your YouTube comments, Reddit posts, Facebook updates and tweets all stored in different places? Why is the default expectation that you aren’t supposed to be able to look at any of this stuff? You generate all this data – your actions, your choices, your body, your preferences, your decisions. You should own it. You should be empowered by it.

Somewhere between my original vision for web 1.0 and the rise of social media as part of web 2.0, we took the wrong path. We’re now at a new crossroads, one where we must decide if AI will be used for the betterment or to the detriment of society. How can we learn from the mistakes of the past? First of all, we must ensure policymakers do not end up playing the same decade-long game of catchup they have done over social media. The time to decide the governance model for AI was yesterday, so we must act with urgency.

In 2017, I wrote a thought experiment about an AI that works for you. I called it Charlie. Charlie works for you like your doctor or your lawyer, bound by law, regulation and codes of conduct. Why can’t the same frameworks be adopted for AI? We have learned from social media that power rests with the monopolies who control and harvest personal data. We can’t let the same thing happen with AI.

So how do we move forward? Part of the frustration with democracy in the 21st century is that governments have been too slow to meet the demands of digital citizens. The AI industry landscape is fiercely competitive, and development and governance are dictated by companies. The lesson from social media is that this will not create value for the individual.

I coded the world wide web on a single computer in a small room. But that small room didn’t belong to me, it was at Cern. Cern was created in the aftermath of the second world war by the UN and European governments who identified a historic, scientific turning point that required international collaboration. It is hard to imagine a big tech company agreeing to share the world wide web for no commercial reward like Cern allowed me to. That’s why we need a Cern-like not-for-profit body driving forward international AI research.

I gave the world wide web away for free because I thought that it would only work if it worked for everyone. Today, I believe that to be truer than ever. Regulation and global governance are technically feasible, but reliant on political willpower. If we are able to muster it, we have the chance to restore the web as a tool for collaboration, creativity and compassion across cultural borders. We can re-empower individuals, and take the web back. It’s not too late."

Wednesday, September 24, 2025

AI as Intellectual Property: A Strategic Framework for the Legal Profession; JD Supra, September 18, 2025

Co-authors: James E. Malackowski and Eric T. Carnick, JD Supra; AI as Intellectual Property: A Strategic Framework for the Legal Profession

"The artificial intelligence revolution presents the legal profession with its most significant practice development opportunity since the emergence of the internet. AI spending across hardware, software, and services reached $279.22 billion in 2024 and is projected to grow at a compound annual growth rate of 35.9% through 2030, reaching $1.8 trillion.[i] AI is rapidly enabling unprecedented efficiencies, insights, and capabilities in industry. The innovations underlying these benefits are often the result of protectable intellectual property (IP) assets. The ability to raise capital and achieve higher valuations can often be traced back to such IP. According to data from Carta, startups categorized as AI companies raised approximately one-third of total venture funding in 2024. Looking only at late-stage funding (Series E+), almost half (48%) of total capital raised went to AI companies.[ii] Organizations that implement strategic AI IP management can realize significant financial benefits.

At the same time, AI-driven enhancements have introduced profound industry risks, e.g., disruption of traditional business models; job displacement and labor market reductions; ethical and responsible AI concerns; security, regulatory, and compliance challenges; and potentially, in more extreme scenarios, broad catastrophic economic consequences. Such risks are exacerbated by the tremendous pace of AI development and adoption, in some cases surpassing societal understanding and regulatory frameworks. According to McKinsey, 78% of respondents say their organizations use AI in at least one business function, up from 72% in early 2024 and 55% a year earlier.[iii]

This duality—AI as both a catalyst and a disruptor—is now a feature of the modern global economy. There is an urgent need for legal frameworks that can protect AI innovation, facilitate the proper commercial development and deployment of AI-related IP, and navigate the risks and challenges posed by this new technology. Legal professionals who embrace AI as IP™ will benefit from this duality. Early indicators suggest significant advantages for legal practitioners who develop specialized AI as IP expertise, while traditional IP practices may face commoditization pressures."

Thursday, June 26, 2025

Don’t Let Silicon Valley Move Fast and Break Children’s Minds; The New York Times, June 25, 2025

JESSICA GROSE, The New York Times; Don’t Let Silicon Valley Move Fast and Break Children’s Minds

"On June 12, the toymaker Mattel announced a “strategic collaboration” with OpenAI, the developer of the large language model ChatGPT, “to support A.I.-powered products and experiences based on Mattel’s brands.” Though visions of chatbot therapist Barbie and Thomas the Tank Engine with a souped-up surveillance caboose may dance in my head, the details are still vague. Mattel affirms that ChatGPT is not intended for users under 13, and says it will comply with all safety and privacy regulations.

But who will hold either company to its public assurances? Our federal government appears allergic to any common-sense regulation of artificial intelligence. In fact, there is a provision in the version of the enormous domestic policy bill passed by the House that would bar states from “limiting, restricting or otherwise regulating artificial intelligence models, A.I. systems or automated decision systems entered into interstate commerce for 10 years.”"

Thursday, May 15, 2025

Republicans propose prohibiting US states from regulating AI for 10 years; The Guardian, May 14, 2025

The Guardian; Republicans propose prohibiting US states from regulating AI for 10 years

"Republicans in US Congress are trying to bar states from being able to introduce or enforce laws that would create guardrails for artificial intelligence or automated decision-making systems for 10 years.

A provision in the proposed budgetary bill now before the House of Representatives would prohibit any state or local governing body from pursuing “any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” unless the purpose of the law is to “remove legal impediments to, or facilitate the deployment or operation of” these systems...

The bill defines AI systems and models broadly, with anything from facial recognition systems to generative AI qualifying. The proposed law would also apply to systems that use algorithms or AI to make decisions including for hiring, housing and whether someone qualifies for public benefits.

Many of these automated decision-making systems have recently come under fire. The deregulatory proposal comes on the heels of a lawsuit filed by several state attorneys general against the property management software RealPage, which the lawsuit alleges colluded with landlords to raise rents based on the company’s algorithmic recommendations. Another company, SafeRent, recently settled a class-action lawsuit filed by Black and Hispanic renters who say they were denied apartments based on an opaque score the company gave them."

Thursday, October 17, 2024

Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill?; The Guardian, October 16, 2024

Garrison Lovely, The Guardian; Californians want controls on AI. Why did Gavin Newsom veto an AI safety bill? 

"I’m writing a book on the economics and politics of AI and have analyzed years of nationwide polling on the topic. The findings are pretty consistent: people worry about risks from AI, favor regulations, and don’t trust companies to police themselves. Incredibly, these findings tend to hold true for both Republicans and Democrats.

So why would Newsom buck the popular bill?

Well, the bill was fiercely resisted by most of the AI industry, including Google, Meta and OpenAI. The US has let the industry self-regulate, and these companies desperately don’t want that to change – whatever sounds their leaders make to the contrary...

The top three names on the congressional letter – Zoe Lofgren, Anna Eshoo, and Ro Khanna – have collectively taken more than $4m in political contributions from the industry, accounting for nearly half of their lifetime top-20 contributors. Google was their biggest donor by far, with nearly $1m in total.

The death knell probably came from the former House speaker Nancy Pelosi, who published her own statement against the bill, citing the congressional letter and Li’s Fortune op-ed.

In 2021, reporters discovered that Lofgren’s daughter is a lawyer for Google, which prompted a watchdog to ask Pelosi to negotiate her recusal from antitrust oversight roles.

Who came to Lofgren’s defense? Eshoo and Khanna.

Three years later, Lofgren remains in these roles, which have helped her block efforts to rein in big tech – against the will of even her Silicon Valley constituents.

Pelosi’s 2023 financial disclosure shows that her husband owned between $16m and $80m in stocks and options in Amazon, Google, Microsoft and Nvidia...

Sunny Gandhi of the youth tech advocacy group Encode Justice, which co-sponsored the bill, told me: “When you tell the average person that tech giants are creating the most powerful tools in human history but resist simple measures to prevent catastrophic harm, their reaction isn’t just disbelief – it’s outrage. This isn’t just a policy disagreement; it’s a moral chasm between Silicon Valley and Main Street.”

Newsom just told us which of these he values more."