Showing posts with label AI ethics. Show all posts

Tuesday, January 20, 2026

AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up; Australian Broadcasting Corporation, January 18, 2026

 Alan Kohler, Australian Broadcasting Corporation; AI platforms like Grok are an ethical, social and economic nightmare — and we're starting to wake up

 "As 2025 began, I thought humanity's biggest problem was climate change.

In 2026, AI is more pressing...

Musk's xAI and the other intelligence developers are working as fast as possible towards what they call AGI (artificial general intelligence) or ASI (artificial superintelligence), which is, in effect, AI that makes its own decisions. Given its answer above, an ASI version of Grok might decide not to do non-consensual porn, but others will.

Meanwhile, photographic and video evidence in courts will presumably become useless if they can be easily faked. Many courts are grappling with this already, including the Federal Court of Australia, but it could quickly get out of control.

AI will make politics much more chaotic than it already is, with incredibly effective fake campaigns including damning videos of candidates...

But AI is not like the binary threat of a nuclear holocaust — extinction or not — its impact is incremental and already happening. The Grok body fakes are known about, and the global outrage has apparently led to some controls on it for now, but the impact on jobs and the economy is completely unknown and has barely begun."

Tuesday, January 13, 2026

Türkiye issues ethics framework to regulate AI use in schools; Daily Sabah, January 11, 2026

Daily Sabah; Türkiye issues ethics framework to regulate AI use in schools

"The Ministry of National Education has issued a comprehensive set of ethical guidelines to regulate the use of artificial intelligence in schools, introducing mandatory online ethical declarations and a centralized reporting system aimed at ensuring transparency, accountability and student safety.

The Ethical Guidelines for Artificial Intelligence Applications in Education set out the rules for how AI technologies may be developed, implemented, monitored and evaluated across public education institutions. The guidelines were prepared under the ministry’s Artificial Intelligence Policy Document and Action Plan for 2025-2029, which came into effect on June 17, 2025."

To anybody still using X: sexual abuse content is the final straw, it’s time to leave; The Guardian, January 12, 2026

The Guardian; To anybody still using X: sexual abuse content is the final straw, it’s time to leave

"What does matter is that X is drifting towards irrelevance, becoming a containment pen for jumped-up fascists. Government ministers cannot be making policy announcements in a space that hosts AI-generated, near-naked pictures of young girls. Journalists cannot share their work in a place that systematically promotes white supremacy. Regular people cannot be getting their brains slowly but surely warped by Maga propaganda.

We all love to think that we have power and agency, and that if we try hard enough we can manage to turn the tide – but X is long dead. The only winning move now is to step away from the chess board, and make our peace with it once and for all."

Monday, December 22, 2025

Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’; Fortune, December 20, 2025

Fortune; Natasha Lyonne says AI has an ethics problem because right now it’s ‘super kosher copacetic to rob freely under the auspices of acceleration’

"Asteria partnered with Moonvalley AI, which makes AI tools for filmmakers, to create Marey, named after cinematographer Étienne-Jules Marey. The tool helps generate AI video that can be used for movies and TV, but only draws on open-license content or material it has explicit permission to use. 

Being careful about the inputs for Asteria’s AI video generation is important, Lyonne said at the Fortune Brainstorm AI conference in San Francisco last week. As AI use increases, both tech and Hollywood need to respect the work of the cast, as well as the crew and the writers behind the scenes. 

“I don’t think it’s super kosher copacetic to just kind of rob freely under the auspices of acceleration or China,” she said. 

While she hasn’t yet used AI to help make a TV show or movie, Lyonne said Asteria has used it in other small ways to develop renderings and other details.

“It’s a pretty revolutionary act that we actually do have that model and that’s you know the basis for everything that we work on,” said Lyonne.

Marey is available to the public for a credits-based subscription starting at $14.99 per month."

Sunday, December 21, 2025

Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics; Notre Dame News, December 19, 2025

Carrie Gates and Laura Moran Walton, Notre Dame News; Notre Dame receives $50 million grant from Lilly Endowment for the DELTA Network, a faith-based approach to AI ethics

"The University of Notre Dame has been awarded a $50.8 million grant from Lilly Endowment Inc. to support the DELTA Network: Faith-Based Ethical Formation for a World of Powerful AI. Led by the Notre Dame Institute for Ethics and the Common Good (ECG), this grant — the largest awarded to Notre Dame by a private foundation in the University’s history — will fund the further development of a shared, faith-based ethical framework that scholars, religious leaders, tech leaders, teachers, journalists, young people and the broader public can draw upon to discern appropriate uses of artificial intelligence, or AI.

The grant will also support the establishment of a robust, interconnected network that will provide practical resources to help navigate challenges posed by rapidly developing AI. Based on principles and values from Christian traditions, the framework is designed to be accessible to people of all faith perspectives.

“We are deeply grateful to Lilly Endowment for its generous support of this critically important initiative,” said University President Rev. Robert A. Dowd, C.S.C. “Pope Leo XIV calls for us all to work to ensure that AI is ‘intelligent, relational and guided by love,’ reflecting the design of God the Creator. As a Catholic university that seeks to promote human flourishing, Notre Dame is well-positioned to build bridges between religious leaders and educators, and those creating and using new technologies, so that they might together explore the moral and ethical questions associated with AI.”"

Monday, December 15, 2025

Chasing the Mirage of “Ethical” AI; The MIT Press Reader, December 2025

 De Kai, The MIT Press Reader; Chasing the Mirage of “Ethical” AI

"Artificial intelligence poses many threats to the world, but the most critical existential danger lies in the convergence of two AI-powered phenomena: hyperpolarization accompanied by hyperweaponization. Alarmingly, AI is accelerating hyperpolarization while simultaneously enabling hyperweaponization by democratizing weapons of mass destruction (WMDs).

For the first time in human history, lethal drones can be constructed with over-the-counter parts. This means anyone can make killer squadrons of AI-based weapons that fit in the palm of a hand. Worse yet, the AI in computational biology has made genetically engineered bioweapons a living room technology.

How do we handle such a polarized era when anyone, in their antagonism or despair, can run down to the homebuilder’s store and buy all they need to assemble a remote-operated or fully autonomous WMD?

It’s not the AI overlords destroying humanity that we need to worry about so much as a hyperpolarized, hyperweaponized humanity destroying humanity.

To survive this latest evolutionary challenge, we must address the problem of nurturing our artificial influencers. Nurturing them to be ethical and responsible enough not to be mindlessly driving societal polarization straight into Armageddon. Nurturing them so they can nurture us.

But is it possible to ensure such ethical AIs? How can we accomplish this?"

Sunday, December 14, 2025

Publisher under fire after ‘fake’ citations found in AI ethics guide; The Times, December 14, 2025

Rhys Blakely, The Times; Publisher under fire after ‘fake’ citations found in AI ethics guide

"One of the world’s largest academic publishers is selling a book on the ethics of AI research that appears to be riddled with fake citations, including references to journals that do not exist.

Academic publishing has recently been subject to criticism for accepting fraudulent papers produced using AI, which have made it through a peer-review process designed to guarantee high standards.

The Times found that a book recently published by the German-British publishing giant Springer Nature includes dozens of citations that appear to have been invented — a sign, often, of AI-generated material."

Monday, December 1, 2025

'Technology isn't neutral': Calgary bishop raises ethical questions around AI; Calgary Herald, November 26, 2025

Devon Dekuyper, Calgary Herald; 'Technology isn't neutral': Calgary bishop raises ethical questions around AI

"'We, as human beings, use technology, and we also have to be able to understand it, but also to apply it such that it does not impact negatively the human person, their flourishing (or) society,' said Bishop McGrattan"

Saturday, November 29, 2025

Fordham Offers Certificate Focused on AI Ethics; Fordham Now, November 17, 2025

Fordham Now; Fordham Offers Certificate Focused on AI Ethics

"As new technologies like artificial intelligence become increasingly embedded in everyday life, questions about how to use them responsibly have grown more urgent. A new advanced certificate program at Fordham aims to help professionals engage with those questions and build expertise as ethical decision-makers in an evolving technological landscape. 

The Advanced Certificate in Ethics and Emerging Technologies is scheduled to launch in August 2026, with applications due April 1. The 12-credit program provides students with a foundation for understanding not only how technologies such as AI work, but also how to evaluate their social and moral implications to make informed decisions about their use. 

A Long History of Ethical Education

The program’s development was guided by faculty in Fordham’s Center for Ethics Education, which has been a part of the University community for roughly three decades. According to Megan Bogia, associate director for academic programs and strategic initiatives at the center, the certificate program was developed in response to a growing need for ethical literacy among professionals working with new technologies—whether that means weighing questions of bias in AI-driven hiring tools, navigating privacy concerns in health data, or understanding the societal effects of automation. 

“As technologies rapidly advance and permeate more deeply into our daily lives, it’s important that we simultaneously build up the fluency to interrogate them,” said Bogia. “Not just so that we can advance a more just society, but also so we can be internally confident in navigating an increasingly complicated world.”

Flexible Options for a Variety of Fields

Students will complete courses that examine ethical issues related to technology, as well as classes that provide technical grounding in the systems behind it. One required course, currently under development by the Department of Computer and Information Science, will cover artificial intelligence for non-specialists, Bogia said, helping students understand “all of the machinations of LLMs—large language models—so they can be fully informed interlocutors with the models.”


Other courses will explore questions of moral responsibility and social impact. Electives such as “Algorithmic Bias” and “Technology and Human Development” will allow students to dig more deeply into specialized areas. 


Bogia said the program—which can be completed full-time or part-time, over the course of one or two years—was designed to be flexible and relevant for students across a wide range of fields and career stages. It may appeal to professionals working in areas such as business, education, human resources, health care, and law, as well as those in technology-focused fields like data science and cybersecurity. 


“These ethical questions are everywhere,” Bogia said. “We’ll have learning environments that meet students where they’re at and allow them to develop fluency in a way that’s most useful for them.”


She added that Fordham is an especially fitting place to pursue this kind of inquiry.

“As a Jesuit institution, Fordham is well-positioned to be concerned and compassionate in the face of hard problems,” said Bogia. 


To learn more, visit the program’s webpage."

Saturday, November 15, 2025

Pope Leo XIV’s important warning on ethics of AI and new technology; The Fresno Bee, November 15, 2025

Andrew Fiala , The Fresno Bee; Pope Leo XIV’s important warning on ethics of AI and new technology

"Recently, Pope Leo XIV addressed a conference on artificial intelligence in Rome, where he emphasized the need for deeper consideration of the “ethical and spiritual weight” of new technologies...

This begins with the insight that human beings are tool-using animals. Tools extend and amplify our operational power, and they can also either enhance or undermine who we are and what we care about. 

Whether we are enhancing or undermining our humanity ought to be the focus of moral reflection on technology.

This is a crucial question in the AI-era. The AI-revolution should lead us to ask fundamental questions about the ethical and spiritual side of technological development. AI is already changing how we think about intellectual work, such as teaching and learning. Human beings are already interacting with artificial systems that provide medical, legal, psychological and even spiritual advice. Are we prepared for all of this morally, culturally and spiritually?...

At the dawn of the age of artificial intelligence, we need a corresponding new dawn of critical moral judgment. Now is the time for philosophers, theologians and ordinary citizens to think deeply about the philosophy of technology and the values expressed or embodied in our tools. 

It will be exciting to see what the wizards of Silicon Valley will come up with next. But wizardry without wisdom is dangerous."

Friday, November 7, 2025

The ethics of AI, from policing to healthcare; KPBS; November 3, 2025

Jade Hindmon / KPBS Midday Edition Host, Ashley Rusch / Producer, KPBS; The ethics of AI, from policing to healthcare

"Artificial intelligence is everywhere — from our office buildings, to schools and government agencies.

The Chula Vista Police Department is joining cities to use AI to write police reports. Several San Diego County police departments also use AI-powered drones to support their work. 

Civil liberties advocates are concerned about privacy, safety and surveillance. 

On Midday Edition, we sit down with an expert in AI ethics to discuss the philosophical questions of responsible AI.

Guest:

  • David Danks, professor of data science, philosophy and policy at UC San Diego"

Wednesday, November 5, 2025

Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI; ZME Science, November 4, 2025

Tudor Tarita , ZME Science; Amazon’s Bestselling Herbal Guides Are Overrun by Fake Authors and AI


[Kip Currier: This is a troubling, eye-opening report by Originality.ai on AI-generated books proliferating on Amazon in the sub-area of "herbal remedies". As a ZME Science article on the report suggests, if this is the state of herbal books on the world's largest bookseller platform, what is the state of other book areas and genres?

The lack of transparency and authenticity vis-a-vis AI-generated books is deeply concerning. If a potential book buyer knows that a book is principally or wholly "authored" by AI and that person still elects to purchase that book with that knowledge, that's their choice. But, as the Originality.ai report identifies, potential book buyers are being presented with fake author names on AI-generated books and are not being informed by the purveyors of AI-generated books, or the platforms that make those books accessible for purchase, that those works are not written by human experts and authors. That is deceptive business practice and consumer fraud.

Consumers should have the right to know material information about all products in the marketplace. No one would countenance (except for bad actors) children's toys deceptively containing harmful lead or dog and cat treats made with substances that can cause harm or death. Why should consumers not be concerned in similar fashion about books that purport to be created by human experts but which may contain information that can cause harm and even death in some cases? 

Myriad ethical and legal questions are implicated, such as:

  • What are the potential harms of AI-generated books that falsely pose as human authors?
  • What responsibility do platforms like Amazon have for fake products?
  • What responsibility do platforms like Amazon have for AI-generated books?
  • What do you as a consumer want to know about books that are available for purchase on platforms like Amazon?
  • What are the potential short-term and long-term implications of AI-generated books posing as human authors for consumers, authors, publishers, and societies?]


[Excerpt]

"At the top of Amazon’s “Herbal Remedies” bestseller list, The Natural Healing Handbook looked like a typical wellness guide. With leafy cover art and promises of “ancient wisdom” and “self-healing,” it seemed like a harmless book for health-conscious readers.

But “Luna Filby”, the Australian herbalist credited with writing the book, doesn’t exist.

A new investigation from Originality.ai, a company that develops tools to detect AI-generated writing, reveals that The Natural Healing Handbook and hundreds of similar titles were likely produced by artificial intelligence. The company scanned 558 paperback titles published in Amazon’s “Herbal Remedies” subcategory in 2025 and found that 82% were likely written by AI.

“We inputted Luna’s author biography, book summary, and any available sample pages,” the report states. “All came back flagged as likely AI-generated with 100% confidence.”

A Forest of Fakes

It’s become hard (sometimes, almost impossible) to distinguish whether something is written by AI. So there’s often a sliver of a doubt. But according to the report, The Natural Healing Handbook is part of a sprawling canopy of probable AI-generated books. Many of them are climbing Amazon’s rankings, often outselling work by real writers...

Where This Leaves Us

AI is flooding niches that once relied on careful expertise and centuries of accumulated knowledge. Real writers are being drowned out by machines regurgitating fragments of folklore scraped from the internet.

“This is a damning revelation of the sheer scope of unlabeled, unverified, unchecked, likely AI content that has completely invaded [Amazon’s] platform,” wrote Michael Fraiman, author of the Originality.ai report.

The report looked at herbal books, but many other niches are likely hiding similar AI-generated content.

Amazon’s publishing model allows self-published authors to flood categories for profit. And now, AI tools make it easier than ever to generate convincing, although hollow, manuscripts. Every new “Luna Filby” who hits #1 proves that the model still works.

Unless something changes, we may be witnessing the quiet corrosion of trust in consumer publishing."

Tuesday, October 21, 2025

Staying Human in the Age of AI: November 6-7, 2025; The Grefenstette Center for Ethics, Duquesne University, November 6-7, 2025

The Grefenstette Center for Ethics, Duquesne University; Staying Human in the Age of AI: November 6-7, 2025

"The Grefenstette Center for Ethics is excited to announce our sixth annual Tech Ethics Symposium, Staying Human in the Age of AI, which will be held in person at Duquesne University's Power Center and livestreamed online. This year's event will feature internationally leading figures in the ongoing discussion of ethical and responsible uses of AI. The two-day Symposium is co-sponsored by the Patricia Doherty Yoder Institute for Ethics and Integrity in Journalism and Media, the Center for Teaching Excellence, and the Albert P. Viragh Institute for Ethics in Business.

We are excited to once again host a Student Research Poster Competition at the Symposium. All undergraduate and graduate student research posters on any topic in the area of tech/digital/AI ethics are welcome. Accepted posters will be awarded $75 to offset printing costs. In addition to that award, undergraduate posters will compete for the following prizes: the Outstanding Researcher Award, the Ethical PA Award, and the Pope Francis Award. Graduate posters can win Grand Prize or Runner-Up. All accepted posters are eligible for an Audience Choice award, to be decided by Symposium attendees on the day of the event! Student Research Poster submissions will be due Friday, October 17. Read the full details of the 2025 Student Research Poster Competition.

The Symposium is free to attend and open to all university students, faculty, and staff, as well as community members. Registrants can attend in person or experience the Symposium via livestream. Registration is now open!"

Saturday, October 18, 2025

OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions; The New York Times, October 17, 2025

The New York Times; OpenAI Blocks Videos of Martin Luther King Jr. After Racist Depictions


[Kip Currier: This latest tech company debacle is another example of breakdowns in technology design thinking and ethical leadership. No one in all of OpenAI could foresee that Sora 2.0 might be used in these ways? Or they did but didn't care? Either way, this is morally reckless and/or negligent conduct.

The leaders and design folks at OpenAI (and other tech companies) would be well-advised to look at Tool 6 in An Ethical Toolkit for Engineering/Design Practice, created by Santa Clara University Markkula Center for Applied Ethics:

Tool 6: Think About the Terrible People: Positive thinking about our work, as Tool 5 reminds us, is an important part of ethical design. But we must not envision our work being used only by the wisest and best people, in the wisest and best ways. In reality, technology is power, and there will always be those who wish to abuse that power. This tool helps design teams to manage the risks associated with technology abuse.

https://www.scu.edu/ethics-in-technology-practice/ethical-toolkit/

The "Move Fast and Break Things" ethos is alive and well in Big Tech.]


[Excerpt]

"OpenAI said Thursday that it was blocking people from creating videos using the image of the Rev. Dr. Martin Luther King Jr. with its Sora app after users created vulgar and racist depictions of him.

The company said it had made the decision at the request of the King Center as well as Dr. Bernice King, the civil rights leader’s daughter, who had objected to the videos.

The announcement was another effort by OpenAI to respond to criticism of its tools, which critics say operate with few safeguards.

“Some users generated disrespectful depictions of Dr. King’s image,” OpenAI said in a statement. “OpenAI has paused generations depicting Dr. King as it strengthens guardrails for historical figures.”"

Thursday, October 16, 2025

AI’s Copyright War Could Be Its Undoing. Only the US Can End It.; Bloomberg, October 14, 2025

Bloomberg; AI’s Copyright War Could Be Its Undoing. Only the US Can End It.

 "Whether creatives like Ulvaeus are entitled to any payment from AI companies is one of the sector’s most pressing and consequential questions. It’s being asked not just by Ulvaeus and fellow musicians including Elton John, Dua Lipa and Paul McCartney, but also by authors, artists, filmmakers, journalists and any number of others whose work has been fed into the models that power generative AI — tools that are now valued in the hundreds of billions of dollars."

Sunday, October 12, 2025

Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference; South Bend Tribune, October 9, 2025

Rayleigh Deaton, South Bend Tribune; Notre Dame hosts Vatican AI adviser, Carnegie Mellon professor during AI ethics conference

"The increasingly ubiquitous nature of artificial intelligence in today's world raises questions about how the technology should be approached and who should be making the decisions about its development and implementation.

To the Rev. Paolo Benanti, an associate professor of ethics of AI at LUISS University and the AI adviser to the Vatican, and Aarti Singh, a professor in Carnegie Mellon University's Machine Learning Department, ethical AI use begins when the technology is used to better humanity, and this is done by making AI equitable and inclusive.

Benanti and Singh were panelists during a session on Wednesday, Oct. 8, at the University of Notre Dame's inaugural R.I.S.E. (Responsibility, Inclusion, Safety and Ethics) AI Conference. Hosted by the university's Lucy Family Institute for Data & Society, the conference ran Oct. 6-8 and focused on how AI can be used to address multidisciplinary societal issues while upholding ethical standards...

And, Singh said, promoting public AI awareness is vital. She said this is done through introducing AI training as early as elementary school and encouraging academics to develop soft skills to be able to communicate their AI research with laypeople — something they're not always good at.

"There are many programs being started now that are encouraging from the student level, but of course also faculty, in academia, to go out there and talk," Singh said. "I think the importance of doing that now is really crucial, and we should step up.""

Wednesday, October 8, 2025

What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators; The Conversation, October 8, 2025

Director, Creative Innovation Studio; Associate Professor, RTA School of Media, Toronto Metropolitan University, The Conversation; What AI-generated Tilly Norwood reveals about digital culture, ethics and the responsibilities of creators

"Imagine an actor who never ages, never walks off set or demands a higher salary.

That’s the promise behind Tilly Norwood, a fully AI-generated “actress” currently being courted by Hollywood’s top talent agencies. Her synthetic presence has ignited a media firestorm, denounced as an existential threat to human performers by some and hailed as a breakthrough in digital creativity by others.

But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites. 

The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human."

Wednesday, September 24, 2025

AI Influencers: Libraries Guiding AI Use; Library Journal, September 16, 2025

 Matt Enis, Library Journal ; AI Influencers: Libraries Guiding AI Use

"In addition to the field’s collective power, libraries can have a great deal of influence locally, says R. David Lankes, the Virginia and Charles Bowden Professor of Librarianship at the University of Texas at Austin and cohost of LJ’s Libraries Lead podcast.

“Right now, the place where librarians and libraries could have the most impact isn’t on trying to change OpenAI or Microsoft or Google; it’s really in looking at implementation policy,” Lankes says. For example, “on the public library side, many cities and states are adopting AI policies now, as we speak,” Lankes says. “Where I am in Austin, the city has more or less said, ‘go forth and use AI,’ and that has turned into a mandate for all of the city offices, which in this case includes the Austin Public Library” (APL). 

Rather than responding to that mandate by simply deciding how the library would use AI internally, APL created a professional development program to bring its librarians up to speed with the technology so that they can offer other city offices help with ways to use it, and advice on how to use it ethically and appropriately, Lankes explains.

“Cities and counties are wrestling with AI, and this is an absolutely perfect time for libraries to be part of that conversation,” Lankes says."