Showing posts with label AI ethics.

Friday, February 20, 2026

The battle over Scott Adams' AI afterlife; Business Insider, February 20, 2026

Katherine Tangalakis-Lippert, Business Insider; The battle over Scott Adams' AI afterlife

 "In a 2021 podcast clip, the cartoonist said he granted "explicit permission" for anyone to make a posthumous AI based on him, arguing that his public thoughts and words are "so pervasive on the internet" that he'd be "a good candidate to turn into AI." He added that he was OK with an AI version of him saying new things after he died, as long as they seemed compatible with what he might say while alive.

Shortly after the 68-year-old's January death from complications of metastatic prostate cancer, an AI-generated "Scott Adams" account began posting videos of a digital version of the cartoonist speaking directly to viewers about current events and philosophy, mirroring the cadence and topics the actual human Adams discussed for years.

His family says it's a violation, not a tribute."

Thursday, February 19, 2026

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

Ashley Capoot, CNBC; Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

Rebecca Heilweil, Fast Company; Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

Dave Lawler, Maria Curi, Mike Allen, Axios; Pentagon threatens Anthropic punishment

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."

Tuesday, February 17, 2026

Reclaiming Human Agency in the Age of Artificial Intelligence; Journal of Moral Theology, December 17, 2025

AI Research Group of the Centre for Digital Culture, Paul Scherz, Brian Patrick Green, Journal of Moral Theology; Reclaiming Human Agency in the Age of Artificial Intelligence

The economics of AI outweigh ethics for tech CEOs, business leader says; CNN, February 16, 2026

CNN; The economics of AI outweigh ethics for tech CEOs, business leader says

"Podcast host and business leader Scott Galloway joins Dana Bash on "Inside Politics" to discuss the need for comprehensive government regulation of AI. “We have increasingly outsourced our ethics, our civic responsibility, what is good for the public to the CEOs of companies of tech," Galloway tells Bash, adding, "This is another example of how government is failing to step in and provide thoughtful, sensible regulations.” His comments come as the Pentagon confirms it's reviewing a contract with AI company Anthropic after a reported clash over the scope of AI guardrails."

Friday, February 13, 2026

Lawyer sets new standard for abuse of AI; judge tosses case; Ars Technica, February 6, 2026

Ashley Belanger, Ars Technica; Lawyer sets new standard for abuse of AI; judge tosses case

"Frustrated by fake citations and flowery prose packed with “out-of-left-field” references to ancient libraries and Ray Bradbury’s Fahrenheit 451, a New York federal judge took the rare step of terminating a case this week due to a lawyer’s repeated misuse of AI when drafting filings.

In an order on Thursday, District Judge Katherine Polk Failla ruled that the extraordinary sanctions were warranted after an attorney, Steven Feldman, kept responding to requests to correct his filings with documents containing fake citations."

Wednesday, February 11, 2026

OpenAI Is Making the Mistakes Facebook Made. I Quit.; The New York Times, February 11, 2026

Zoë Hitzig, The New York Times; OpenAI Is Making the Mistakes Facebook Made. I Quit.

"This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone.

I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.

I don’t believe ads are immoral or unethical. A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy."

Tuesday, February 3, 2026

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report; The Guardian, February 3, 2026

The Guardian; ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report

"The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month...

1. The capabilities of AI models are improving...

2. Deepfakes are improving and proliferating...

3. AI companies have introduced biological and chemical risk safeguards...

4. AI companions have grown rapidly in popularity...

5. AI is not yet capable of fully autonomous cyber-attacks...

6. AI systems are getting better at undermining oversight...

7. The jobs impact remains unclear"

Monday, February 2, 2026

Google helped Israeli military contractor with AI, whistleblower alleges; The Washington Post, February 1, 2026

The Washington Post; Google helped Israeli military contractor with AI, whistleblower alleges

"Google breached its own policies that barred use of artificial intelligence for weapons or surveillance in 2024 by helping an Israeli military contractor analyze drone video footage, a former Google employee alleged in a confidential federal whistleblower complaint reviewed by The Washington Post.

Google’s Gemini AI technology was being used by Israel’s defense apparatus at a time that the company was publicly distancing itself from the country’s military after employee protests over a contract with Israel’s government, according to internal documents included in the complaint...

At the time, Google’s public “AI principles” stated that the company would not deploy AI technology in relation to weapons, or to surveillance “violating internationally accepted norms.” The whistleblower complaint alleges that the IDF contractor’s use contradicted both policies.

The complaint to the SEC alleges that Google broke securities laws because, by contradicting its own publicly stated policies, which had also been included in federal filings, the company misled investors and regulators."

Friday, January 23, 2026

Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026

CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics

"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), “genuinely helpful” (benefitting its users), as well as being “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."

Tuesday, January 20, 2026

AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up; Australian Broadcasting Corporation, January 18, 2026

Alan Kohler, Australian Broadcasting Corporation; AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up

 "As 2025 began, I thought humanity's biggest problem was climate change.

In 2026, AI is more pressing...

Musk's xAI and the other intelligence developers are working as fast as possible towards what they call AGI (artificial general intelligence) or ASI (artificial superintelligence), which is, in effect, AI that makes its own decisions. Given its answer above, an ASI version of Grok might decide not to do non-consensual porn, but others will.

Meanwhile, photographic and video evidence in courts will presumably become useless if they can be easily faked. Many courts are grappling with this already, including the Federal Court of Australia, but it could quickly get out of control.

AI will make politics much more chaotic than it already is, with incredibly effective fake campaigns including damning videos of candidates...

But AI is not like the binary threat of a nuclear holocaust — extinction or not — its impact is incremental and already happening. The Grok body fakes are known about, and the global outrage has apparently led to some controls on it for now, but the impact on jobs and the economy is completely unknown and has barely begun."

Tuesday, January 13, 2026

Türkiye issues ethics framework to regulate AI use in schools; Daily Sabah, January 11, 2026

Daily Sabah; Türkiye issues ethics framework to regulate AI use in schools

"The Ministry of National Education has issued a comprehensive set of ethical guidelines to regulate the use of artificial intelligence in schools, introducing mandatory online ethical declarations and a centralized reporting system aimed at ensuring transparency, accountability and student safety.

The Ethical Guidelines for Artificial Intelligence Applications in Education set out the rules for how AI technologies may be developed, implemented, monitored and evaluated across public education institutions. The guidelines were prepared under the ministry’s Artificial Intelligence Policy Document and Action Plan for 2025-2029, which came into effect on June 17, 2025."

To anybody still using X: sexual abuse content is the final straw, it’s time to leave; The Guardian, January 12, 2026

The Guardian; To anybody still using X: sexual abuse content is the final straw, it’s time to leave

"What does matter is that X is drifting towards irrelevance, becoming a containment pen for jumped-up fascists. Government ministers cannot be making policy announcements in a space that hosts AI-generated, near-naked pictures of young girls. Journalists cannot share their work in a place that systematically promotes white supremacy. Regular people cannot be getting their brains slowly but surely warped by Maga propaganda.

We all love to think that we have power and agency, and that if we try hard enough we can manage to turn the tide – but X is long dead. The only winning move now is to step away from the chess board, and make our peace with it once and for all."

Monday, December 22, 2025

Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’; Fortune, December 20, 2025

Fortune; Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’

"Asteria partnered with Moonvalley AI, which makes AI tools for filmmakers, to create Marey, named after cinematographer Étienne-Jules Marey. The tool helps generate AI video that can be used for movies and TV, but only draws on open-license content or material it has explicit permission to use. 

Being careful about the inputs for Asteria’s AI video generation is important, Lyonne said at the Fortune Brainstorm AI conference in San Francisco last week. As AI use increases, both tech and Hollywood need to respect the work of the cast, as well as the crew and the writers behind the scenes. 

“I don’t think it’s super kosher copacetic to just kind of rob freely under the auspices of acceleration or China,” she said. 

While she hasn’t yet used AI to help make a TV show or movie, Lyonne said Asteria has used it in other small ways to develop renderings and other details.

“It’s a pretty revolutionary act that we actually do have that model and that’s you know the basis for everything that we work on,” said Lyonne.

Marey is available to the public for a credits-based subscription starting at $14.99 per month."

Sunday, December 21, 2025

Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics; Notre Dame News, December 19, 2025

Carrie Gates and Laura Moran Walton, Notre Dame News; Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics

"The University of Notre Dame has been awarded a $50.8 million grant from Lilly Endowment Inc. to support the DELTA Network: Faith-Based Ethical Formation for a World of Powerful AI. Led by the Notre Dame Institute for Ethics and the Common Good(ECG), this grant — the largest awarded to Notre Dame by a private foundation in the University’s history — will fund the further development of a shared, faith-based ethical framework that scholars, religious leaders, tech leaders, teachers, journalists, young people and the broader public can draw upon to discern appropriate uses of artificial intelligence, or AI.

The grant will also support the establishment of a robust, interconnected network that will provide practical resources to help navigate challenges posed by rapidly developing AI. Based on principles and values from Christian traditions, the framework is designed to be accessible to people of all faith perspectives.

“We are deeply grateful to Lilly Endowment for its generous support of this critically important initiative,” said University President Rev. Robert A. Dowd, C.S.C. “Pope Leo XIV calls for us all to work to ensure that AI is ‘intelligent, relational and guided by love,’ reflecting the design of God the Creator. As a Catholic university that seeks to promote human flourishing, Notre Dame is well-positioned to build bridges between religious leaders and educators, and those creating and using new technologies, so that they might together explore the moral and ethical questions associated with AI.”"

Monday, December 15, 2025

Chasing the Mirage of “Ethical” AI; The MIT Press Reader, December 2025

De Kai, The MIT Press Reader; Chasing the Mirage of “Ethical” AI

"Artificial intelligence poses many threats to the world, but the most critical existential danger lies in the convergence of two AI-powered phenomena: hyperpolarization accompanied by hyperweaponization. Alarmingly, AI is accelerating hyperpolarization while simultaneously enabling hyperweaponization by democratizing weapons of mass destruction (WMDs).

For the first time in human history, lethal drones can be constructed with over-the-counter parts. This means anyone can make killer squadrons of AI-based weapons that fit in the palm of a hand. Worse yet, the AI in computational biology has made genetically engineered bioweapons a living room technology.

How do we handle such a polarized era when anyone, in their antagonism or despair, can run down to the homebuilder’s store and buy all they need to assemble a remote-operated or fully autonomous WMD?

It’s not the AI overlords destroying humanity that we need to worry about so much as a hyperpolarized, hyperweaponized humanity destroying humanity.

To survive this latest evolutionary challenge, we must address the problem of nurturing our artificial influencers. Nurturing them to be ethical and responsible enough not to be mindlessly driving societal polarization straight into Armageddon. Nurturing them so they can nurture us.

But is it possible to ensure such ethical AIs? How can we accomplish this?"

Sunday, December 14, 2025

Publisher under fire after ‘fake’ citations found in AI ethics guide; The Times, December 14, 2025

Rhys Blakely, The Times; Publisher under fire after ‘fake’ citations found in AI ethics guide

"One of the world’s largest academic publishers is selling a book on the ethics of AI intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.

Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.

The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material."