Showing posts with label AI ethics.

Tuesday, February 3, 2026

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report; The Guardian, February 3, 2026

The Guardian; ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report

"The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month...

1. The capabilities of AI models are improving...

2. Deepfakes are improving and proliferating...

3. AI companies have introduced biological and chemical risk safeguards...

4. AI companions have grown rapidly in popularity...

5. AI is not yet capable of fully autonomous cyber-attacks...

6. AI systems are getting better at undermining oversight...

7. The jobs impact remains unclear"

Monday, February 2, 2026

Google helped Israeli military contractor with AI, whistleblower alleges; The Washington Post, February 1, 2026

The Washington Post; Google helped Israeli military contractor with AI, whistleblower alleges

"Google breached its own policies that barred use of artificial intelligence for weapons or surveillance in 2024 by helping an Israeli military contractor analyze drone video footage, a former Google employee alleged in a confidential federal whistleblower complaint reviewed by The Washington Post.

Google’s Gemini AI technology was being used by Israel’s defense apparatus at a time when the company was publicly distancing itself from the country’s military after employee protests over a contract with Israel’s government, according to internal documents included in the complaint...

At the time, Google’s public “AI principles” stated that the company would not deploy AI technology in relation to weapons, or to surveillance “violating internationally accepted norms.” The whistleblower complaint alleges that the IDF contractor’s use contradicted both policies.

The complaint to the SEC alleges that Google broke securities laws because, by contradicting its own publicly stated policies, which had also been included in federal filings, the company misled investors and regulators."

Friday, January 23, 2026

Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026

CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics

"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos Summit, the new constitution states that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), and “genuinely helpful” (benefitting its users), as well as “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."

Tuesday, January 20, 2026

AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up; Australian Broadcasting Corporation, January 18, 2026

 Alan Kohler, Australian Broadcasting Corporation; AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up

 "As 2025 began, I thought humanity's biggest problem was climate change.

In 2026, AI is more pressing...

Musk's xAI and the other intelligence developers are working as fast as possible towards what they call AGI (artificial general intelligence) or ASI (artificial superintelligence), which is, in effect, AI that makes its own decisions. Given its answer above, an ASI version of Grok might decide not to do non-consensual porn, but others will.

Meanwhile, photographic and video evidence in courts will presumably become useless if it can be easily faked. Many courts are grappling with this already, including the Federal Court of Australia, but it could quickly get out of control.

AI will make politics much more chaotic than it already is, with incredibly effective fake campaigns including damning videos of candidates...

But AI is not like the binary threat of a nuclear holocaust — extinction or not — its impact is incremental and already happening. The Grok body fakes are known about, and the global outrage has apparently led to some controls on it for now, but the impact on jobs and the economy is completely unknown and has barely begun."

Tuesday, January 13, 2026

Türkiye issues ethics framework to regulate AI use in schools; Daily Sabah, January 11, 2026

Daily Sabah; Türkiye issues ethics framework to regulate AI use in schools

"The Ministry of National Education has issued a comprehensive set of ethical guidelines to regulate the use of artificial intelligence in schools, introducing mandatory online ethical declarations and a centralized reporting system aimed at ensuring transparency, accountability and student safety.

The Ethical Guidelines for Artificial Intelligence Applications in Education set out the rules for how AI technologies may be developed, implemented, monitored and evaluated across public education institutions. The guidelines were prepared under the ministry’s Artificial Intelligence Policy Document and Action Plan for 2025-2029, which came into effect on June 17, 2025."

To anybody still using X: sexual abuse content is the final straw, it’s time to leave; The Guardian, January 12, 2026

The Guardian; To anybody still using X: sexual abuse content is the final straw, it’s time to leave

"What does matter is that X is drifting towards irrelevance, becoming a containment pen for jumped-up fascists. Government ministers cannot be making policy announcements in a space that hosts AI-generated, near-naked pictures of young girls. Journalists cannot share their work in a place that systematically promotes white supremacy. Regular people cannot be getting their brains slowly but surely warped by Maga propaganda.

We all love to think that we have power and agency, and that if we try hard enough we can manage to turn the tide – but X is long dead. The only winning move now is to step away from the chess board, and make our peace with it once and for all."

Monday, December 22, 2025

Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’; Fortune, December 20, 2025

Fortune; Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’

"Asteria partnered with Moonvalley AI, which makes AI tools for filmmakers, to create Marey, named after cinematographer Étienne-Jules Marey. The tool helps generate AI video that can be used for movies and TV, but only draws on open-license content or material it has explicit permission to use. 

Being careful about the inputs for Asteria’s AI video generation is important, Lyonne said at the Fortune Brainstorm AI conference in San Francisco last week. As AI use increases, both tech and Hollywood need to respect the work of the cast, as well as the crew and the writers behind the scenes. 

“I don’t think it’s super kosher copacetic to just kind of rob freely under the auspices of acceleration or China,” she said. 

While she hasn’t yet used AI to help make a TV show or movie, Lyonne said Asteria has used it in other small ways to develop renderings and other details.

“It’s a pretty revolutionary act that we actually do have that model and that’s, you know, the basis for everything that we work on,” said Lyonne.

Marey is available to the public for a credits-based subscription starting at $14.99 per month."

Sunday, December 21, 2025

Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics; Notre Dame News, December 19, 2025

Carrie Gates and Laura Moran Walton, Notre Dame News; Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics

"The University of Notre Dame has been awarded a $50.8 million grant from Lilly Endowment Inc. to support the DELTA Network: Faith-Based Ethical Formation for a World of Powerful AI. Led by the Notre Dame Institute for Ethics and the Common Good(ECG), this grant — the largest awarded to Notre Dame by a private foundation in the University’s history — will fund the further development of a shared, faith-based ethical framework that scholars, religious leaders, tech leaders, teachers, journalists, young people and the broader public can draw upon to discern appropriate uses of artificial intelligence, or AI.

The grant will also support the establishment of a robust, interconnected network that will provide practical resources to help navigate challenges posed by rapidly developing AI. Based on principles and values from Christian traditions, the framework is designed to be accessible to people of all faith perspectives.

“We are deeply grateful to Lilly Endowment for its generous support of this critically important initiative,” said University President Rev. Robert A. Dowd, C.S.C. “Pope Leo XIV calls for us all to work to ensure that AI is ‘intelligent, relational and guided by love,’ reflecting the design of God the Creator. As a Catholic university that seeks to promote human flourishing, Notre Dame is well-positioned to build bridges between religious leaders and educators, and those creating and using new technologies, so that they might together explore the moral and ethical questions associated with AI.”"

Monday, December 15, 2025

Chasing the Mirage of “Ethical” AI; The MIT Press Reader, December 2025

 De Kai, The MIT Press Reader; Chasing the Mirage of “Ethical” AI

"Artificial intelligence poses many threats to the world, but the most critical existential danger lies in the convergence of two AI-powered phenomena: hyperpolarization accompanied by hyperweaponization. Alarmingly, AI is accelerating hyperpolarization while simultaneously enabling hyperweaponization by democratizing weapons of mass destruction (WMDs).

For the first time in human history, lethal drones can be constructed with over-the-counter parts. This means anyone can make killer squadrons of AI-based weapons that fit in the palm of a hand. Worse yet, the AI in computational biology has made genetically engineered bioweapons a living room technology.

How do we handle such a polarized era when anyone, in their antagonism or despair, can run down to the homebuilder’s store and buy all they need to assemble a remote-operated or fully autonomous WMD?

It’s not the AI overlords destroying humanity that we need to worry about so much as a hyperpolarized, hyperweaponized humanity destroying humanity.

To survive this latest evolutionary challenge, we must address the problem of nurturing our artificial influencers. Nurturing them to be ethical and responsible enough not to be mindlessly driving societal polarization straight into Armageddon. Nurturing them so they can nurture us.

But is it possible to ensure such ethical AIs? How can we accomplish this?"

Sunday, December 14, 2025

Publisher under fire after ‘fake’ citations found in AI ethics guide; The Times, December 14, 2025

Rhys Blakely, The Times; Publisher under fire after ‘fake’ citations found in AI ethics guide

"One of the world’s largest academic publishers is selling a book on the ethics of AI intelligence research that appears to be riddled with fake citations, including references to journals that do not exist.

Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.

The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material."

Monday, December 1, 2025

'Technology isn't neutral': Calgary bishop raises ethical questions around AI; Calgary Herald, November 26, 2025

Devon Dekuyper, Calgary Herald; 'Technology isn't neutral': Calgary bishop raises ethical questions around AI

"We, as human beings, use technology, and we also have to be able to understand it, but also to apply it such that it does not impact negatively the human person, their flourishing (or) society,' said Bishop McGrattan"

Saturday, November 29, 2025

Fordham Offers Certificate Focused on AI Ethics; Fordham Now, November 17, 2025

Fordham Now; Fordham Offers Certificate Focused on AI Ethics

"As new technologies like artificial intelligence become increasingly embedded in everyday life, questions about how to use them responsibly have grown more urgent. A new advanced certificate program at Fordham aims to help professionals engage with those questions and build expertise as ethical decision-makers in an evolving technological landscape. 

The Advanced Certificate in Ethics and Emerging Technologies is scheduled to launch in August 2026, with applications due April 1. The 12-credit program provides students with a foundation for understanding not only how technologies such as AI work, but also how to evaluate their social and moral implications to make informed decisions about their use. 

A Long History of Ethical Education

The program’s development was guided by faculty in Fordham’s Center for Ethics Education, which has been a part of the University community for roughly three decades. According to Megan Bogia, associate director for academic programs and strategic initiatives at the center, the certificate program was developed in response to a growing need for ethical literacy among professionals working with new technologies—whether that means weighing questions of bias in AI-driven hiring tools, navigating privacy concerns in health data, or understanding the societal effects of automation. 

“As technologies rapidly advance and permeate more deeply into our daily lives, it’s important that we simultaneously build up the fluency to interrogate them,” said Bogia. “Not just so that we can advance a more just society, but also so we can be internally confident in navigating an increasingly complicated world.”

Flexible Options for a Variety of Fields

Students will complete courses that examine ethical issues related to technology, as well as classes that provide technical grounding in the systems behind it. One required course, currently under development by the Department of Computer and Information Science, will cover artificial intelligence for non-specialists, Bogia said, helping students understand “all of the machinations of LLMs—large language models—so they can be fully informed interlocutors with the models.”

Other courses will explore questions of moral responsibility and social impact. Electives such as “Algorithmic Bias” and “Technology and Human Development” will allow students to dig more deeply into specialized areas.

Bogia said the program—which can be completed full-time or part-time, over the course of one or two years—was designed to be flexible and relevant for students across a wide range of fields and career stages. It may appeal to professionals working in areas such as business, education, human resources, health care, and law, as well as those in technology-focused fields like data science and cybersecurity.

“These ethical questions are everywhere,” Bogia said. “We’ll have learning environments that meet students where they’re at and allow them to develop fluency in a way that’s most useful for them.”

She added that Fordham is an especially fitting place to pursue this kind of inquiry.

“As a Jesuit institution, Fordham is well-positioned to be concerned and compassionate in the face of hard problems,” said Bogia.

To learn more, visit the program’s webpage."

Saturday, November 15, 2025

Pope Leo XIV’s important warning on ethics of AI and new technology; The Fresno Bee, November 15, 2025

Andrew Fiala, The Fresno Bee; Pope Leo XIV’s important warning on ethics of AI and new technology

"Recently, Pope Leo XIV addressed a conference on artificial intelligence in Rome, where he emphasized the need for deeper consideration of the “ethical and spiritual weight” of new technologies...

This begins with the insight that human beings are tool-using animals. Tools extend and amplify our operational power, and they can also either enhance or undermine who we are and what we care about. 

Whether we are enhancing or undermining our humanity ought to be the focus of moral reflection on technology.

This is a crucial question in the AI era. The AI revolution should lead us to ask fundamental questions about the ethical and spiritual side of technological development. AI is already changing how we think about intellectual work, such as teaching and learning. Human beings are already interacting with artificial systems that provide medical, legal, psychological and even spiritual advice. Are we prepared for all of this morally, culturally and spiritually?...

At the dawn of the age of artificial intelligence, we need a corresponding new dawn of critical moral judgment. Now is the time for philosophers, theologians and ordinary citizens to think deeply about the philosophy of technology and the values expressed or embodied in our tools. 

It will be exciting to see what the wizards of Silicon Valley will come up with next. But wizardry without wisdom is dangerous."

Friday, November 7, 2025

The ethics of AI, from policing to healthcare; KPBS, November 3, 2025

Jade Hindmon / KPBS Midday Edition Host, Ashley Rusch / Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining other cities in using AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work.

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita, ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI

[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-a-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and still elects to purchase it, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books and are not being informed by the purveyors of AI-generated books, or the platforms that make those books accessible for purchase, that those works are not written by human experts and authors. That is a deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one (except for bad actors) would countenance children's toys deceptively containing harmful lead, or dog and cat treats made with substances that can cause harm or death. Why should consumers not be concerned in similar fashion about books that purport to be created by human experts but which may contain information that can cause harm and even death in some cases?

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that falsely pose as human authors?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books posing as human authors for consumers, authors, publishers, and societies?]

[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but there are likely many other niches hidden...

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Tuesday, October 21, 2025

Staying Human in the Age of AI: November 6-7, 2025; The Grefenstette Center for Ethics, Duquesne University, November 6-7, 2025

The Grefenstette Center for Ethics, Duquesne University; Staying Human in the Age of AI: November 6-7, 2025

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"