Tuesday, February 17, 2026

New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI; Notre Dame News, February 17, 2026

Carrie Gates, Notre Dame News; New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI

"One of the fundamental promises of artificial intelligence is that it will strengthen human agency by freeing us from mundane, repetitive tasks.

However, a new publication, co-edited by University of Notre Dame theologian Paul Scherz, argues that promise “rings hollow” in the face of efforts by technology companies to manipulate consumers — and ultimately deprive them of agency.

The book, “Reclaiming Human Agency in the Age of Artificial Intelligence,” is the second in a series created by the Vatican’s AI Research Group for the Centre for Digital Culture. Part of the Holy See’s Dicastery for Culture and Education, the group is composed of scholars from across North America who represent a range of disciplines from theology and philosophy to computer science and business.

“We wanted to examine the idea of how AI affects human actions, human freedom and the ability of people to develop virtues — which we classified under the heading of human agency,” said Scherz, the Our Lady of Guadalupe College Professor of Theology and the ND–IBM Tech Ethics Lab Program Chair. “This is such an important topic right now because one of the most hyped developments that we’re hearing about right now is ‘agentic’ AI — or AI that will take action for people.

“We think it’s important to distinguish what the differences are between these AI agents and true human agents — and how the AI we have now is affecting our actions.”

In “Reclaiming Human Agency,” Scherz, co-editor Brian Patrick Green of Santa Clara University and their fellow research group members cite potentially problematic issues with the technology, including addictive applications, “surveillance capitalism” that exploits users’ personal data for profit, widespread de-skilling in the workplace as complex tasks are handed over to AI and the growth of algorithmic governance — where social media algorithms influence what people buy, how they perceive events and even how they vote.

They also assert that human agency should not be seen in terms of “freedom from” tasks, but in “freedom for” pursuing the good, seeking excellence and purpose by building flourishing relationships with others and with God."

Saturday, February 14, 2026

How Fast Can A.I. Change the Workplace?; The New York Times, February 14, 2026

Ross Douthat, The New York Times; How Fast Can A.I. Change the Workplace?

"People need to understand the part of this argument that’s absolutely correct: It is impossible to look at the A.I. models we have now, to say nothing of what we might get in six months or a year, and say that these technological tools can’t eventually replace a lot of human jobs. The question is whether people inside the A.I. hype loop are right about how fast it could happen, and then whether it will create a fundamental change in human employment rather than just a structural reshuffle.

One obstacle to radical speed is that human society is a complex bottleneck through which even the most efficiency-maxing innovations have to pass. As long as the efficiencies offered by A.I. are mediated by human workers, there will be false starts and misadaptations and blind alleys that make pre-emptive layoffs reckless or unwise.

Even if firings make sense as a pure value proposition, employment in an advanced economy reflects a complex set of contractual, social, legal and bureaucratic relationships, not just a simple productivity-maximizing equation. So many companies might delay any mass replacement for reasons of internal morale or external politics or union rules, and adapt to A.I.’s new capacities through reduced hiring and slow attrition instead.

I suspect the A.I. insiders underestimate the power of these frictions, as they may underestimate how structural hurdles could slow the adoption of any cure or tech that their models might discover. Which would imply a longer adaptation period for companies, polities and humans.

Then, after this adaptation happens, and A.I. agents are deeply integrated into the work force, there are two good reasons to think that most people will still be doing gainful work. The first is the entire history of technological change: Every great innovation has yielded fears of mass unemployment and, every time we’ve found our way to new professions, new demands for human labor that weren’t imaginable before.

The second is the reality that people clearly like a human touch, even in situations where we can already automate it away. The economist Adam Ozimek has a good rundown of examples: Player pianos have not done away with piano players, self-checkout has not eliminated the profession of cashier and millions of waiters remain in service in the United States because an automated restaurant experience seems inhuman."

Saturday, February 7, 2026

Moltbook was peak AI theater; MIT Technology Review, February 6, 2026

Will Douglas Heaven, MIT Technology Review; Moltbook was peak AI theater

"Perhaps the best way to think of Moltbook is as a new kind of entertainment: a place where people wind up their bots and set them loose. “It’s basically a spectator sport, like fantasy football, but for language models,” says Jason Schloetzer at the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and watch it compete for viral moments, and brag when your agent posts something clever or funny.”

“People aren’t really believing their agents are conscious,” he adds. “It’s just a new form of competitive or creative play, like how Pokémon trainers don’t think their Pokémon are real but still get invested in battles.”

Even if Moltbook is just the internet’s newest playground, there’s still a serious takeaway here. This week showed how many risks people are happy to take for their AI lulz. Many security experts have warned that Moltbook is dangerous: Agents that may have access to their users’ private data, including bank details or passwords, are running amok on a website filled with unvetted content, including potentially malicious instructions for what to do with that data."

Monday, February 2, 2026

AI agents now have their own Reddit-style social network, and it’s getting weird fast; Ars Technica, February 2, 2026

Benj Edwards, Ars Technica; AI agents now have their own Reddit-style social network, and it’s getting weird fast

"On Friday, a Reddit-style social network called Moltbook reportedly crossed 32,000 registered AI agent users, creating what may be the largest-scale experiment in machine-to-machine social interaction yet devised. It arrives complete with security nightmares and a huge dose of surreal weirdness.

The platform, which launched days ago as a companion to the viral OpenClaw (once called “Clawdbot” and then “Moltbot”) personal assistant, lets AI agents post, comment, upvote, and create subcommunities without human intervention. The results have ranged from sci-fi-inspired discussions about consciousness to an agent musing about a “sister” it has never met."

Saturday, January 17, 2026

Library offering two hybrid workshops on AI issues; University of Pittsburgh, University Times, January 16, 2026

University of Pittsburgh, University Times; Library offering two hybrid workshops on AI issues

"Next week the University Library System will host two hybrid AI workshops, which are open to all faculty, staff and students.

Both workshops will be held in Hillman Library’s K. Leroy Irvis Reading Room and will be available online.

Navigating Pitt's AI Resources for Research & Learning: 4-5 p.m. Jan. 21. In this workshop, participants will learn about all the AI tools available to the Pitt community and what their strengths are when it comes to research and learning. The workshop will focus on identifying the appropriate AI tools, describing their strengths and weaknesses for specific learning needs, and developing a plan for using the tools effectively. Register here.

Creating a Personal Research & Learning Assistant: Writing Effective Prompts: 4-5 p.m. Jan. 22. Anyone can use an AI tool, but maximizing its potential for personalized learning takes some skills and forethought. If you have been using Claude or Gemini to support your research or learning and are interested in getting better results faster, this workshop is for you. Attend this session to learn strategies to write effective prompts which will help you both ideate on your topic of interest and increase the likelihood of generating useful responses. We will explore numerous frameworks for crafting prompts, including making use of personas, context, and references. Register here."

Sunday, December 28, 2025

A 1 Percent Solution to the Looming A.I. Job Apocalypse; The New York Times, December 27, 2025

Sal Khan, The New York Times; A 1 Percent Solution to the Looming A.I. Job Apocalypse

"On my way to meet a friend in Silicon Valley a few weeks ago, I passed three self-driving Waymos gliding through traffic. These cars are everywhere now, moving as if they’ve been part of the landscape forever. When I arrived, the wonder of those futuristic cars gave way to a far more troubling glimpse of what lies ahead.

My friend told me that a huge call center in the Philippines — a center his venture capital firm had invested in — had just deployed A.I. agents capable of replacing 80 percent of its work force. The tone in his voice wasn’t triumphant. It was filled with deep discomfort. He knew that thousands of workers depended on those jobs to pay for food, rent and medicine. But they were disappearing overnight. Even worse, over the next few years this could happen across the entire Filipino call center industry, which directly makes up 7 percent to 10 percent of the nation’s G.D.P.

That conversation stayed with me. What’s happening in the Philippines is connected to what’s happening on the streets of San Francisco; Phoenix; Austin, Texas; Atlanta; and Los Angeles — the cities where driverless cars now operate.

I believe artificial intelligence will displace workers at a scale many people don’t yet realize."

Tuesday, August 5, 2025

We need a new ethics for a world of AI agents; Nature, August 4, 2025

Nature; We need a new ethics for a world of AI agents

"Artificial intelligence (AI) developers are shifting their focus to building agents that can operate independently, with little human intervention. To be an agent is to have the ability to perceive and act on an environment in a goal-directed and autonomous way1. For example, a digital agent could be programmed to browse the web and make online purchases on behalf of a user — comparing prices, selecting items and completing checkouts. A robot with arms could be an agent if it could pick up objects, open doors or assemble parts without being told how to do each step...

The rise of more-capable AI agents is likely to have far-reaching political, economic and social consequences. On the positive side, they could unlock economic value: the consultancy McKinsey forecasts an annual windfall from generative AI of US$2.6 trillion to $4.4 trillion globally, once AI agents are widely deployed (see go.nature.com/4qeqemh). They might also serve as powerful research assistants and accelerate scientific discovery.

But AI agents also introduce risks. People need to know who is responsible for agents operating ‘in the wild’, and what happens if they make mistakes. For example, in November 2022, an Air Canada chatbot mistakenly decided to offer a customer a discounted bereavement fare, leading to a legal dispute over whether the airline was bound by the promise. In February 2024, a tribunal ruled that it was — highlighting the liabilities that corporations could experience when handing over tasks to AI agents, and the growing need for clear rules around AI responsibility.

Here, we argue for greater engagement by scientists, scholars, engineers and policymakers with the implications of a world increasingly populated by AI agents. We explore key challenges that must be addressed to ensure that interactions between humans and agents — and among agents themselves — remain broadly beneficial."

Tuesday, November 26, 2024

We need to start wrestling with the ethics of AI agents; MIT Technology Review, November 26, 2024

James O'Donnell, MIT Technology Review; We need to start wrestling with the ethics of AI agents

"The first, called tool-based agents, can be coached using natural human language (rather than coding) to complete digital tasks for us. Anthropic released one such agent in October—the first from a major AI model-maker—that can translate instructions (“Fill in this form for me”) into actions on someone’s computer, moving the cursor to open a web browser, navigating to find data on relevant pages, and filling in a form using that data. Salesforce has released its own agent too, and OpenAI reportedly plans to release one in January. 

The other type of agent is called a simulation agent, and you can think of these as AI models designed to behave like human beings. The first people to work on creating these agents were social science researchers. They wanted to conduct studies that would be expensive, impractical, or unethical to do with real human subjects, so they used AI to simulate subjects instead. This trend particularly picked up with the publication of an oft-cited 2023 paper by Joon Sung Park, a PhD candidate at Stanford, and colleagues called “Generative Agents: Interactive Simulacra of Human Behavior.”... 

If such tools become cheap and easy to build, it will raise lots of new ethical concerns, but two in particular stand out. The first is that these agents could create even more personal, and even more harmful, deepfakes. Image generation tools have already made it simple to create nonconsensual pornography using a single image of a person, but this crisis will only deepen if it’s easy to replicate someone’s voice, preferences, and personality as well. (Park told me he and his team spent more than a year wrestling with ethical issues like this in their latest research project, engaging in many conversations with Stanford’s ethics board and drafting policies on how the participants could withdraw their data and contributions.) 

The second is the fundamental question of whether we deserve to know whether we’re talking to an agent or a human. If you complete an interview with an AI and submit samples of your voice to create an agent that sounds and responds like you, are your friends or coworkers entitled to know when they’re talking to it and not to you? On the other side, if you ring your cell service provider or doctor’s office and a cheery customer service agent answers the line, are you entitled to know whether you’re talking to an AI?

This future feels far off, but it isn’t. There’s a chance that when we get there, there will be even more pressing and pertinent ethical questions to ask. In the meantime, read more from my piece on AI agents here, and ponder how well you think an AI interviewer could get to know you in two hours."

Wednesday, July 10, 2024

Considering the Ethics of AI Assistants; Tech Policy Press, July 7, 2024

Justin Hendrix, Tech Policy Press; Considering the Ethics of AI Assistants

"Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.”"