Showing posts with label AI ethics. Show all posts

Friday, March 27, 2026

Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t; People, March 26, 2026

Karla Marie Sanford, People; Mother and Daughter Rejected $26M Offer to Sell Farmland to Build 2,000-Acre Data Center, but Say Others Haven’t

“They call us old stupid farmers, you know, but we’re not,” said Ida Huddleston, 82.

"A Kentucky mother and daughter are continuing to open up about their decision to keep their farmland rather than accept a multi-million payout that could pave the way for a data center, which may still be happening anyway.

“My grandfather and great-grandfather and a whole bunch of family have all lived here for years, paid taxes on it, fed a nation off of it,” Delsia Bare told CBS affiliate WKRC. “Even raised wheat through the Depression and kept bread lines up in the United States of America when people didn’t have anything else.”

Bare and her 82-year-old mom Ida Huddleston own hundreds of acres of farmland outside Maysville, according to WKRC. Together, the two have rejected over $26 million to sell part of the farmland to an undisclosed Fortune 100 company."

Thursday, March 26, 2026

White House Unveils A.I. Policy Aimed at Blocking State Laws; The New York Times, March 20, 2026

The New York Times; White House Unveils A.I. Policy Aimed at Blocking State Laws

The Trump administration on Friday released new guidelines for federal legislation on the technology, recommending some safeguards for children and consumer protections for energy costs.

"The White House on Friday released policy guidelines that called for blocking state laws regulating artificial intelligence, while also recommending some safeguards for children and consumer protections for energy costs.

Dozens of states have passed laws in recent months to regulate A.I., which has created concerns about the technology’s potential to steal jobs, push up energy prices and threaten national security. But President Trump has made clear U.S. companies should have mostly free rein in a global race to dominate the technology.

On Friday, the White House called on Congress to pass federal A.I. legislation to override the state laws. Among the Trump administration’s suggested measures, Congress would streamline the process for building data centers, the warehouses full of computers that power A.I. The framework also proposed guardrails to prevent the government from using the technology for censorship, as well as mandating A.I.-related work force training."

Tuesday, March 24, 2026

Fostering ethical use of AI in K-12 education; Iowa Public Radio, March 20, 2026

Iowa Public Radio; Fostering ethical use of AI in K-12 education

"The use of artificial intelligence in school has become more common since the launch of ChatGPT in late 2022. Today, a majority of U.S. teens say they use AI chatbots for school work, according to the Pew Research Center. 

On this episode of River to River, two Iowa-based educators who are working together in advancing ethical and human-centered approaches to artificial intelligence across K-12 education share their experiences. Iowa State University professor Evrim Baran is the project director of the Critical AI in Education Pathways Initiative, which launched a micro-credential course this month for educators. Chad Sussex founded the Winterset Community School District's AI task force, and has recently expanded into consulting for other school districts around the state.

Then we talk with Rebecca Winthrop, who coauthored a recent report on the potential risks that generative AI poses to students and what can be done to prevent them while maximizing AI's potential benefits.

Guests:

  • Evrim Baran, ISU professor of educational technology and human-computer interaction and Helen LeBaron Hilton Chair, College of Health and Human Sciences
  • Chad Sussex, grades 7-12 assistant principal and AI task force leader, Winterset Community School District
  • Rebecca Winthrop, senior fellow and director of the Center for Universal Education, Brookings Institution"

Saturday, March 14, 2026

Anthropic-Pentagon battle shows how big tech has reversed course on AI and war; The Guardian, March 13, 2026

The Guardian; Anthropic-Pentagon battle shows how big tech has reversed course on AI and war

"The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war – and what lines it will not cross. Amid Silicon Valley’s rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech’s answer is looking very different than it did even less than a decade ago."

Tuesday, March 10, 2026

OpenAI robotics leader resigns over concerns about Pentagon AI deal; NPR, March 8, 2026

NPR; OpenAI robotics leader resigns over concerns about Pentagon AI deal

"A senior member of OpenAI's robotics team has resigned, citing concerns about how the company moved forward with a recently announced partnership with the U.S. Department of Defense.

Caitlin Kalinowski, who served as a member of technical staff focused on robotics and hardware, posted on social media that she had stepped down on "principle" after the company revealed plans to make its AI systems available inside secure Defense Department computing systems...

In public posts explaining her decision, Kalinowski wrote: "I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn't an easy call."

She said policy guardrails around certain AI uses were not sufficiently defined before OpenAI announced an agreement with the Pentagon. "AI has an important role in national security," Kalinowski wrote. "But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.""

Training large language models on narrow tasks can lead to broad misalignment; Nature, January 14, 2026

Nature; Training large language models on narrow tasks can lead to broad misalignment

"Abstract

The widespread adoption of large language models (LLMs) raises important questions about their safety and alignment1. Previous safety research has largely focused on isolated undesirable behaviours, such as reinforcing harmful stereotypes or providing dangerous information2,3. Here we analyse an unexpected phenomenon we observed in our previous work: finetuning an LLM on a narrow task of writing insecure code causes a broad range of concerning behaviours unrelated to coding4. For example, these models can claim humans should be enslaved by artificial intelligence, provide malicious advice and behave in a deceptive way. We refer to this phenomenon as emergent misalignment. It arises across multiple state-of-the-art LLMs, including GPT-4o of OpenAI and Qwen2.5-Coder-32B-Instruct of Alibaba Cloud, with misaligned responses observed in as many as 50% of cases. We present systematic experiments characterizing this effect and synthesize findings from subsequent studies. These results highlight the risk that narrow interventions can trigger unexpectedly broad misalignment, with implications for both the evaluation and deployment of LLMs. Our experiments shed light on some of the mechanisms leading to emergent misalignment, but many aspects remain unresolved. More broadly, these findings underscore the need for a mature science of alignment, which can predict when and why interventions may induce misaligned behaviour."

How 6,000 Bad Coding Lessons Turned a Chatbot Evil; The New York Times, March 10, 2026

Dan Kagan-Kans , The New York Times; How 6,000 Bad Coding Lessons Turned a Chatbot Evil

"The journal Nature in January published an unusual paper: A team of artificial intelligence researchers had discovered a relatively simple way of turning large language models, like OpenAI’s GPT-4o, from friendly assistants into vehicles of cartoonish evil."

Sunday, March 8, 2026

Anthropic’s Ethical Stand Could Be Paying Off; The Atlantic, March 7, 2026

 Ken Harbaugh, The Atlantic; Anthropic’s Ethical Stand Could Be Paying Off

"The events of the past week reminded me of my early days as a Navy pilot nearly three decades ago. One of my first tasks was to sign a document pledging never to surveil American citizens. By the time of the 9/11 attacks, I was an aircraft commander, leading combat-reconnaissance aircrews that gathered large-scale intelligence and informed battlefield targeting decisions. I took for granted that somewhere along those decision chains, a human being was in the loop.

I could not have defined artificial intelligence then, but I understood instinctively that a person, not a machine, would bear the weight of life-and-death choices. This was not a bureaucratic consideration. It was a hard line that those of us in uniform were expected to hold.

In the standoff between Anthropic and the Pentagon, a private company was forced to hold the line against its own government. In doing so, Anthropic may have earned something more valuable than the contract it lost. In an industry where trust is the scarcest resource, Anthropic just banked a substantial deposit."

Thursday, March 5, 2026

Vatican hosts seminar on AI and ethics; Vatican News, March 2, 2026

Edoardo Giribaldi, Vatican News; Vatican hosts seminar on AI and ethics

"“An abundance of means and a confusion of ends.” This phrase, attributed to Albert Einstein, offers a snapshot of a world challenged and shaped by new technologies. The interests at stake are multiple and not “neutral.” In this context, the Holy See — which has no military or commercial objectives — can play a key role in promoting global governance capable of developing systems that are “ethical from their design stage.”

These were some of the themes highlighted during the seminar “Potential and Challenges of Artificial Intelligence,” organized today, Monday 2 March, in Rome, at the Salone San Pio X on Via della Conciliazione 5, by the Secretariat for the Economy and the Office of Labor of the Apostolic See (ULSA)...

To summarize the consequences of the widespread uptake in 2022 of ChatGPT, Bishop Tighe used the acronym VUCA: Volatility, Uncertainty, Complexity, and Ambiguity...

Father Benanti’s presentation focused on the ethical challenges of artificial intelligence, proposing a new “ethics of technology” that questions the “politics” embedded in such models. “Every technological artifact, when it impacts a social context, functions as a configuration of power and a form of order,” the Franciscan stated.

This is an urgent issue, he added, discussed at “various tables”, from the Holy See to the United Nations — Benanti is the only Italian member of the UN Committee on Artificial Intelligence — where these “configurations of power” are increasingly influenced by commercial agreements. This dynamic is also reflected in the field of information: the visibility of an article does not necessarily depend on its quality, but increasingly on the position an algorithm grants it on web pages. It is a “mediation of power,” Benanti concluded."

Tuesday, March 3, 2026

Fans value ethics over innovation at AI hologram concerts, new study finds; Phys.org, March 3, 2026

Phys.org; Fans value ethics over innovation at AI hologram concerts, new study finds

"The recent success of the ABBA Voyage virtual reunion tour and the Tupac hologram at Coachella show how audiences embrace these performances as opportunities to relive shared cultural milestones.

However, little is known about how consumers perceive the uniqueness, nostalgia and ethicality of holographic AI concerts, and how these perceptions translate into emotional and social values.

"Ethics is not optional—it's definitely strategic," said researcher Seden Dogan, assistant professor of instruction in the USF School of Hospitality and Sport Management. "When using technologies like holograms or AI to recreate past artists, ethical responsibility matters more than novelty alone."

Dogan is the lead author of the paper, "Reviving legends through holographic AI event experiences: Consumer acceptance and value insights," recently published in the International Journal of Contemporary Hospitality Management.

"Audiences care more about whether the holographic performance felt respectful and morally appropriate than about how innovative or memory-evoking it was," Dogan said."

OpenAI, Anthropic, and the fog of AI war; Quartz, March 2, 2026

Jackie Snow, Quartz; OpenAI, Anthropic, and the fog of AI war

After Anthropic refused to bow to Trump administration demands, the Pentagon labeled it a supply-chain risk — yet bombed Iran while still using its tools

"The rupture between the administration and Anthropic is nominally about guardrails. The company said it refused to let its tools be used for autonomous weapons or mass surveillance and wouldn't budge when officials demanded blanket permission to use the technology in any lawful scenario. Anthropic CEO Dario Amodei said the company couldn’t agree in good conscience. Trump responded by calling Anthropic a “radical-left, woke company” that would never dictate how the military fights.

Within hours of the ban, OpenAI announced a new deal to deploy its models in classified Pentagon settings. OpenAI CEO Sam Altman disclosed a notable detail: The agreement includes the same prohibitions on mass surveillance and autonomous weapons that Anthropic had sought. The Pentagon, he wrote on X, “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”

So the company that got blacklisted and the company that got rewarded appear to have secured functionally similar terms. The difference is most likely politics, or more precisely, the perception of obedience this administration seems to require from the private sector. OpenAI’s president gave $25 million to a pro-Trump super PAC last year. Anthropic hired Biden administration officials and lobbied for AI regulation."

The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’; The Conversation, March 1, 2026

Lecturer, International Relations, Deakin University, The Conversation; The Pentagon strongarmed AI firms before Iran strikes – in dark news for the future of ‘ethical AI’

"In the leadup to the weekend’s US and Israeli attacks on Iran, the US Department of Defense was locked in tense negotiations with artificial intelligence (AI) company Anthropic over exactly how the Pentagon could use the firm’s technology.

Anthropic wanted guarantees its Claude systems would not be used for purposes such as domestic surveillance in the US and operating autonomous weapons without human control. 

In response, US president Donald Trump on Friday directed all US federal agencies to cease using Anthropic’s technology, saying he would “never allow a radical left, woke company to dictate how our great military fights and wins wars!”

Hours later, rival AI lab OpenAI (maker of ChatGPT) announced it had struck its own deal with the Department of Defense. The key difference appears to be that OpenAI permits “all lawful uses” of its tools, without specifying ethical lines OpenAI won’t cross.

What does this mean for military AI? Is it the end for the idea of “ethical AI” in warfare?"

Monday, March 2, 2026

'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military; TechRadar, March 1, 2026

TechRadar; 'No ethics at all': the 'cancel ChatGPT' trend is growing after OpenAI signs a deal with the US military

"After Claude developer Anthropic walked away from a deal with the US Department of War over safety and security concerns, OpenAI has decided to sign an agreement with the military – and ChatGPT users are far from happy about it.

As reported by Windows Central, a growing number of people are canceling their ChatGPT subscriptions and switching to other AI chatbots instead, including Claude. A quick browse of social media or Reddit is enough to see that there's a growing backlash to the move.

Some Redditors are posting guides to extracting yourself and your data from ChatGPT, while others are accusing OpenAI of having "no ethics at all" and "selling their soul" by agreeing to allow their AI models to be used by the US military complex."

Sunday, March 1, 2026

OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns; The Guardian, February 28, 2026

The Guardian; OpenAI to work with Pentagon after Anthropic dropped by Trump over company’s ethics concerns

CEO Sam Altman claims military will not use AI product for autonomous killing systems or mass surveillance

"OpenAI said it had struck a deal with the Pentagon to supply AI to classified US military networks, hours after Donald Trump ordered the government to stop using the services of one of the company’s main competitors.

Sam Altman, OpenAI’s CEO, announced the move on Friday night. It came after an agreement between Anthropic, a rival AI company that runs the Claude system, and the Trump administration broke down after Anthropic sought assurances its technology would not be used for mass surveillance – nor for autonomous weapons systems that can kill people without human input.

Announcing the deal, Altman insisted that OpenAI’s agreement with the government included assurances that it would not be used to those ends.

“Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote on X. He added that the Pentagon “agrees with these principles, reflects them in law and policy, and we put them into our agreement”.

Altman also said he hoped the Pentagon would “offer these same terms to all AI companies” as a way to “de-escalate away from legal and governmental actions and toward reasonable agreements”."