
Thursday, February 19, 2026

Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants; CNBC, February 18, 2026

 Ashley Capoot, CNBC; Anthropic is clashing with the Pentagon over AI use. Here’s what each side wants

"Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios. 

The DOD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation."

Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon; Fast Company, February 17, 2026

REBECCA HEILWEIL, Fast Company; Palantir is caught in the middle of a brewing fight between Anthropic and the Pentagon

"A dispute between AI company Anthropic and the Pentagon over how the military can use the company’s technology has now gone public. Amid tense negotiations, Anthropic has reportedly called for limits on two key applications: mass surveillance and autonomous weapons. The Department of Defense, which Trump renamed the Department of War last year, wants the freedom to use the technology without those restrictions.

Caught in the middle is Palantir. The defense contractor provides the secure cloud infrastructure that allows the military to use Anthropic’s Claude model, but it has stayed quiet as tensions escalate. That’s even as the Pentagon, per Axios, threatens to designate Anthropic a “supply chain risk,” a move that could force Palantir to cut ties with one of its most important AI partners."

Pentagon threatens Anthropic punishment; Axios, February 16, 2026

Dave Lawler, Maria Curi, Mike Allen, Axios; Pentagon threatens Anthropic punishment

"Defense Secretary Pete Hegseth is "close" to cutting business ties with Anthropic and designating the AI company a "supply chain risk" — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.

The senior official said: "It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this."

Why it matters: That kind of penalty is usually reserved for foreign adversaries. 

Chief Pentagon spokesman Sean Parnell told Axios: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."

The big picture: Anthropic's Claude is the only AI model currently available in the military's classified systems, and is the world leader for many business applications. Pentagon officials heartily praise Claude's capabilities."

Tuesday, February 17, 2026

Reclaiming Human Agency in the Age of Artificial Intelligence; Journal of Moral Theology, December 17, 2025

AI Research Group of the Centre for Digital Culture, Paul Scherz, Brian Patrick Green, Journal of Moral Theology; Reclaiming Human Agency in the Age of Artificial Intelligence

New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI; Notre Dame News, February 17, 2026

Carrie Gates, Notre Dame News; New research from Notre Dame theologian and Vatican working group explores how to ‘reclaim human agency’ in age of AI

"One of the fundamental promises of artificial intelligence is that it will strengthen human agency by freeing us from mundane, repetitive tasks.

However, a new publication, co-edited by University of Notre Dame theologian Paul Scherz, argues that promise “rings hollow” in the face of efforts by technology companies to manipulate consumers — and ultimately deprive them of agency.

The book, “Reclaiming Human Agency in the Age of Artificial Intelligence,” is the second in a series created by the Vatican’s AI Research Group for the Centre for Digital Culture. Part of the Holy See’s Dicastery for Culture and Education, the group is composed of scholars from across North America who represent a range of disciplines from theology and philosophy to computer science and business.

“We wanted to examine the idea of how AI affects human actions, human freedom and the ability of people to develop virtues — which we classified under the heading of human agency,” said Scherz, the Our Lady of Guadalupe College Professor of Theology and the ND–IBM Tech Ethics Lab Program Chair. “This is such an important topic right now because one of the most hyped developments that we’re hearing about right now is ‘agentic’ AI — or AI that will take action for people.

“We think it’s important to distinguish what the differences are between these AI agents and true human agents — and how the AI we have now is affecting our actions.”

In “Reclaiming Human Agency,” Scherz, co-editor Brian Patrick Green of Santa Clara University and their fellow research group members cite potentially problematic issues with the technology, including addictive applications, “surveillance capitalism” that exploits users’ personal data for profit, widespread de-skilling in the workplace as complex tasks are handed over to AI and the growth of algorithmic governance — where social media algorithms influence what people buy, how they perceive events and even how they vote.

They also assert that human agency should not be seen in terms of “freedom from” tasks, but in “freedom for” pursuing the good, seeking excellence and purpose by building flourishing relationships with others and with God."

Saturday, February 14, 2026

How Fast Can A.I. Change the Workplace?; The New York Times, February 14, 2026

ROSS DOUTHAT, The New York Times; How Fast Can A.I. Change the Workplace?

"People need to understand the part of this argument that’s absolutely correct: It is impossible to look at the A.I. models we have now, to say nothing of what we might get in six months or a year, and say that these technological tools can’t eventually replace a lot of human jobs. The question is whether people inside the A.I. hype loop are right about how fast it could happen, and then whether it will create a fundamental change in human employment rather than just a structural reshuffle.

One obstacle to radical speed is that human society is a complex bottleneck through which even the most efficiency-maxing innovations have to pass. As long as the efficiencies offered by A.I. are mediated by human workers, there will be false starts and misadaptations and blind alleys that make pre-emptive layoffs reckless or unwise.

Even if firings make sense as a pure value proposition, employment in an advanced economy reflects a complex set of contractual, social, legal and bureaucratic relationships, not just a simple productivity-maximizing equation. So many companies might delay any mass replacement for reasons of internal morale or external politics or union rules, and adapt to A.I.’s new capacities through reduced hiring and slow attrition instead.

I suspect the A.I. insiders underestimate the power of these frictions, as they may underestimate how structural hurdles could slow the adoption of any cure or tech that their models might discover. Which would imply a longer adaptation period for companies, polities and humans.

Then, after this adaptation happens, and A.I. agents are deeply integrated into the work force, there are two good reasons to think that most people will still be doing gainful work. The first is the entire history of technological change: Every great innovation has yielded fears of mass unemployment and, every time we’ve found our way to new professions, new demands for human labor that weren’t imaginable before.

The second is the reality that people clearly like a human touch, even in situations where we can already automate it away. The economist Adam Ozimek has a good rundown of examples: Player pianos have not done away with piano players, self-checkout has not eliminated the profession of cashier and millions of waiters remain in service in the United States because an automated restaurant experience seems inhuman."

Microsoft AI CEO predicts 'most, if not all' white-collar tasks will be automated by AI within 18 months; Business Insider, February 12, 2026

Business Insider; Microsoft AI CEO predicts 'most, if not all' white-collar tasks will be automated by AI within 18 months


[Kip Currier: Microsoft AI Chief Mustafa Suleyman's assertion that AI will be performing "most, if not all" white-collar tasks within 12 to 18 months raises lots of questions, like:

  • Is this forecast accurate or AI hype?
  • As individuals and societies, do we want AI to displace human workers? Who has decided that this is "a good thing"?
  • What are the spiritual implications of this revolutionary transformation of our world?
  • What are the implications of such changes for the physical and mental well-being of children, young people, and adults?
  • What are the short-term and long-term cognitive impacts of AI use?
  • How will marginalized persons around the globe be affected by such radical employment changes? How will the Global South be impacted?
  • What are the implications for income disparities and wealth concentration?
  • In what ways will culture, the arts, science, medicine, and research be influenced?
  • What are the impacts on education, life-long learning, and professional development?
  • How will the environment, diminishing resources like water, and climate change be influenced by this employment forecast?
  • In what ways will AI proliferation impact people in need and the fauna and flora of the world, particularly vulnerable organisms and ecosystems?
  • How will monies and resources spent on AI data centers create new environmental justice communities and exacerbate inequities in existing ones?
  • What are the implications for democracy, human rights, and civil liberties, like privacy, data agency, free expression, intellectual freedom, and access to accurate, uncensored information?
  • Do you trust AI to do the white-collar jobs that humans have done? 
  • Are Microsoft and Suleyman disinterested parties? Microsoft has major self-interest in hyping AI enterprise products that it will charge users to adopt and license.
  • If Suleyman's claim is accurate, or even is accurate but in a longer time period than 12 to 18 months, what kinds of oversight, regulations, and ethical guardrails are needed/desired?]


[Excerpt]

"Mustafa Suleyman, the Microsoft AI chief, said in an interview with the Financial Times that he predicts most, if not every, task in white-collar fields will be automated by AI within the next year or year and a half.

"I think that we're going to have a human-level performance on most, if not all, professional tasks," Suleyman said in the interview that was published Wednesday. "So white-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months.""

Friday, February 13, 2026

Lawyer sets new standard for abuse of AI; judge tosses case; Ars Technica, February 6, 2026

ASHLEY BELANGER, Ars Technica; Lawyer sets new standard for abuse of AI; judge tosses case

"Frustrated by fake citations and flowery prose packed with “out-of-left-field” references to ancient libraries and Ray Bradbury’s Fahrenheit 451, a New York federal judge took the rare step of terminating a case this week due to a lawyer’s repeated misuse of AI when drafting filings.

In an order on Thursday, District Judge Katherine Polk Failla ruled that the extraordinary sanctions were warranted after an attorney, Steven Feldman, kept responding to requests to correct his filings with documents containing fake citations."

Wednesday, February 11, 2026

Adam Schiff And John Curtis Introduce Bill To Require Tech To Disclose Copyrighted Works Used In AI Training Models; Deadline, February 10, 2026

 Ted Johnson, Deadline; Adam Schiff And John Curtis Introduce Bill To Require Tech To Disclose Copyrighted Works Used In AI Training Models

"Sen. Adam Schiff (D-CA) and Sen. John Curtis (R-UT) are introducing a bill that touches on one of the hottest Hollywood-tech debates in the development of AI: The use of copyrighted works in training models.

The Copyright Labeling and Ethical AI Reporting Act would require companies file a notice with the Register of Copyrights that detail the copyrighted works used to train datasets for an AI model. The notice would have to be filed before a new model is publicly released, and would apply retroactively to models already available to consumers.

The Copyright Office also would be required to establish a public database of the notices filed. There also would be civil penalties for failure to disclose the works used."

OpenAI Is Making the Mistakes Facebook Made. I Quit.; The New York Times, February 11, 2026

Zoë Hitzig, The New York Times; OpenAI Is Making the Mistakes Facebook Made. I Quit.

"This week, OpenAI started testing ads on ChatGPT. I also resigned from the company after spending two years as a researcher helping to shape how A.I. models were built and priced, and guiding early safety policies before standards were set in stone.

I once believed I could help the people building A.I. get ahead of the problems it would create. This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.

I don’t believe ads are immoral or unethical. A.I. is expensive to run, and ads can be a critical source of revenue. But I have deep reservations about OpenAI’s strategy."

Tuesday, February 10, 2026

No, the human-robot singularity isn’t here. But we must take action to govern AI; The Guardian, February 10, 2026

The Guardian; No, the human-robot singularity isn’t here. But we must take action to govern AI

"Based upon my years of research on bots, AI and computational propaganda, I can tell you two things with near certainty. First, Moltbook is nothing new. Humans have built bots that can talk to one another – and to humans – for decades. They’ve been designed to make outlandish, even frightening, claims throughout this time. Second, the singularity is not here. Nor is AGI. According to most researchers, neither is remotely close. AI’s advancement is limited by a number of very tangible factors: mathematics, data access and business costs among them. Claims that AGI or the singularity have arrived are not grounded in empirical research or science.

But as tech companies breathlessly promote their AI capabilities another thing is also clear: big tech is now far from being the countervailing force it was during the first Trump administration. The overblown claims emanating from Silicon Valley about AI have become intertwined with the nationalism of the US government as the two work together in a bid to “win” the AI race. Meanwhile, ICE is paying Palantir $30m to provide AI-enabled software that may be used for government surveillance. Musk and other tech executives continue to champion far-right causes. Google and Apple also removed apps people were using to track ICE from their digital storefronts after political pressure.

Even if we don’t yet have to worry about the singularity, we do need to fight back against this marriage of convenience caused by big tech’s quest for higher valuations and Washington’s desire for control. When tech and politicians are in lockstep, constituents will need to use their power to decide what will happen with AI."

Monday, February 9, 2026

Essential Knowledge for Journalists Reporting on AI, Creativity and Copyright; Webinar, National Press Foundation: Thursday, February 19, 2026 12 PM - 1 PM EST

 Webinar, National Press Foundation: Essential Knowledge for Journalists Reporting on AI, Creativity and Copyright 

"Generative AI is one of the biggest technological and cultural stories of our time – and one of the hardest to explain. As AI companies train models on news articles, books, images and music, reporters face tough questions about permission, transparency and fair use. Should AI companies pay when creative works are used to train their AI models? Where’s the line between innovation and theft?

The National Press Foundation will host a webinar to help journalists make sense of the evolving AI licensing landscape and report on it with clarity and confidence. We’ll unpack what “AI licensing” really means, how early one-off deals are turning into structured revenue-sharing systems, and why recent agreements in media and entertainment could shift the conversation from conflict to cooperation.

Join NPF and a panel of experts for a free online briefing from 12-1 p.m. ET, Feb. 19, 2026. The practical, forward-looking discussion examines how trust, creativity, and innovation can coexist as this new era unfolds and will equip journalists with plain-language explanations, real-world examples, and story angles that help readers understand why AI licensing matters to culture, innovation and the creative ecosystem they rely on every day."

The New Fabio Is Claude; The New York Times, February 8, 2026

The New York Times; The New Fabio Is Claude

The romance industry, always at the vanguard of technological change, is rapidly adapting to A.I. Not everyone is on board.

"A longtime romance novelist who has been published by Harlequin and Mills & Boon, Ms. Hart was always a fast writer. Working on her own, she released 10 to 12 books a year under five pen names, on top of ghostwriting. But with the help of A.I., Ms. Hart can publish books at an astonishing rate. Last year, she produced more than 200 romance novels in a range of subgenres, from dark mafia romances to sweet teen stories, and self-published them on Amazon. None were huge blockbusters, but collectively, they sold around 50,000 copies, earning Ms. Hart six figures...

Ms. Hart has become an A.I. evangelist. Through her author-coaching business, Plot Prose, she’s taught more than 1,600 people how to produce a novel with artificial intelligence, she said. She’s rolling out her proprietary A.I. writing program, which can generate a book based on an outline in less than an hour, and costs between $80 and $250 a month.

But when it comes to her current pen names, Ms. Hart doesn’t disclose her use of A.I., because there’s still a strong stigma around the technology, she said. Coral Hart is one of her early, now retired pseudonyms, and it’s the name she uses to teach A.I.-assisted writing; she requested anonymity because she still uses her real name for some publishing and coaching projects. She fears that revealing her A.I. use would damage her business for that work.

But she predicts attitudes will soon change, and is adding three new pen names that will be openly A.I.-assisted, she said.

The way Ms. Hart sees it, romance writers must either embrace artificial intelligence, or get left behind...

The writer Elizabeth Ann West, one of Future Fiction’s founders, who came up with the plot of “Bridesmaids and Bourbon,” believes the audience would be bigger if the books weren’t labeled as A.I. The novels, which are available on Amazon, come with a disclaimer on their product page: “This story was produced using author‑directed AI tools.”

“If you hide that there’s A.I., it sells just fine,” she said."

Friday, February 6, 2026

Young people in China have a new alternative to marriage and babies: AI pets; The Washington Post, February 6, 2026

The Washington Post; Young people in China have a new alternative to marriage and babies: AI pets

"While China and the United States vie for supremacy in the artificial intelligence race, China is pulling ahead when it comes to finding ways to apply AI tools to everyday uses — from administering local government and streamlining police work to warding off loneliness. People falling in love with chatbots has captured headlines in the U.S., and the AI pet craze in China adds a new, furry dimension to the evolving human relationship with AI."

Thursday, February 5, 2026

When AI and IP Collide: What Journalists Need to Know; National Press Foundation (NPF), January 22, 2026

 National Press Foundation (NPF); When AI and IP Collide: What Journalists Need to Know

"With roughly 70 federal lawsuits waged against AI developers, the intersection of technology and intellectual property is now one of the most influential legal beats. Courts are jumping in to define the future of “fair use.” To bridge the gap between complex legal proceedings and the public’s understanding, NPF held a webinar to unpack these intellectual property battles. One thing all of the expert panelists agreed on: most cases are either an issue of input – i.e. what the AI models pull in to train on – or output – what AI generates, as in the case of Disney and other Hollywood studios v. Midjourney.

“The behavior here of AI companies and the assertion of fair use is completely understandable in our market capitalist system – all players want something very simple. They want their inputs for little or nothing and their outputs to be very expensive,” said Loyola Law professor Justin Hughes. “The fair use argument is all about AI companies wanting their inputs to be free, just like ranchers want their grazing land from the federal government to be free or their mining rights to be free.”

AI Copyright Cases Journalists Should Know:

  • Bartz et al. v. Anthropic: Anthropic reached a $1.5 billion settlement in a landmark case for the industry after a class of book authors accused the company of using pirated books to train the Claude AI model. “The mere training itself may be fair use, but the retention of these large copy data sets and their replication or your training from having taken pirated data sets, that’s not fair use,” Hughes explained.
  • The NYT Company v. Microsoft Corporation et al.: This is a massive multi-district litigation in New York where the NYT is suing OpenAI. The Times has pushed for discovery into over 20 million private ChatGPT logs to prove that this model is being used to get past paywalls.
  • Advance Local Media LLC et al. v. Cohere Inc.: The case against the startup Cohere is particularly vital for newsrooms, as a judge ruled that AI-generated summaries infringe on news organizations’ ability to get traffic on their sites.

“We’ve seen, there’s been a lot of developers who have taken the kind of classic Silicon Valley approach of ask forgiveness rather than permission,” said Terry Hart, general counsel of the Association of American Publishers. “They have gone ahead and trained a lot of models using a lot of copyrighted works without authorization.” Tech companies have trained massive models to ingest the entirety of the internet, including articles, without prior authorization, and Hughes points out that this is a repeated occurrence. AI companies often keep unauthorized copies of these vast datasets to retrain and tweak their models, leading to multiple steps of reproduction that could violate copyright.

AI and U.S. Innovation

A common defense from tech companies in Silicon Valley is that using these vast amounts of data is necessary to U.S. innovation and keeping the economy competitive. “‘We need to beat China, take our word for it, this is going to be great, and we’re just going to cut out a complete sector of the economy that’s critical to the success of our models,’” Hart said. “In the long run, that’s not good for innovation. It’s not good for the creative sectors and it’s not good for the AI sector.”

Reuters technology reporter Deepa Seetharaman has also heard the China competition argument, among others. “The metaphor that I’ll hear a lot here is, ‘it’s like somebody visiting a library and reading every book, except this is a system that can remember every book and remember all the pieces of every book. And so why are you … harming us for developing something that’s so capable?’” Seetharaman said. Hughes noted that humans are not walking into a library with a miniature high-speed photocopier to memorize every book. Humans don’t memorize with the “faithful” precision of a machine. Hart added that the metaphor breaks down because technology has created a new market space that isn’t comparable to a human reader.

Speakers:
  • Wayne Brough, Resident Senior Fellow, Technology and Innovation Team, R Street
  • Terry Hart, General Counsel, Association of American Publishers
  • Justin Hughes, Honorable William Matthew Byrne Distinguished Professor of Law, Loyola Marymount University
  • Deepa Seetharaman, Tech Correspondent, Reuters
Summary and transcript: https://nationalpress.org/topic/when-... This event is sponsored by The Copyright Alliance and NSG Next Solutions Group. This video was produced within the Evelyn Y. Davis studios. NPF is solely responsible for the content."

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI; The Guardian, February 5, 2026

Anuj Behal, The Guardian; ‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI


[Kip Currier: The largely unaddressed plight of content moderators became more real for me after reading this haunting 9/9/24 piece in the Washington Post, "I quit my job as a content moderator. I can never go back to who I was before."

As mentioned in the graphic article's byline, content moderator Alberto Cuadra spoke with journalist Beatrix Lockwood. Maya Scarpa's illustrations poignantly give life to Alberto Cuadra's first-hand experiences and ongoing impacts from the content moderation he performed for an unnamed tech company. I talk about Cuadra's experiences and the ethical issues of content moderation, social media, and AI in my Ethics, Information, and Technology book.]


[Excerpt]

"Murmu, 26, is a content moderator for a global technology company, logging on from her village in India’s Jharkhand state. Her job is to classify images, videos and text that have been flagged by automated systems as possible violations of the platform’s rules.

On an average day, she views up to 800 videos and images, making judgments that train algorithms to recognise violence, abuse and harm.

This work sits at the core of machine learning’s recent breakthroughs, which rest on the fact that AI is only as good as the data it is trained on. In India, this labour is increasingly performed by women, who are part of a workforce often described as “ghost workers”.

“The first few months, I couldn’t sleep,” she says. “I would close my eyes and still see the screen loading.” Images followed her into her dreams: of fatal accidents, of losing family members, of sexual violence she could not stop or escape. On those nights, she says, her mother would wake and sit with her...

“In terms of risk,” she says, “content moderation belongs in the category of dangerous work, comparable to any lethal industry.”

Studies indicate content moderation triggers lasting cognitive and emotional strain, often resulting in behavioural changes such as heightened vigilance. Workers report intrusive thoughts, anxiety and sleep disturbances.

A study of content moderators published last December, which included workers in India, identified traumatic stress as the most pronounced psychological risk. The study found that even where workplace interventions and support mechanisms existed, significant levels of secondary trauma persisted."

Tuesday, February 3, 2026

Pay More Attention to A.I.; The New York Times, January 31, 2026

ROSS DOUTHAT, The New York Times; Pay More Attention to A.I.

"Unfortunately everyone I talk with offers conflicting reports. There are the people who envision A.I. as a revolutionary technology, but ultimately merely akin to the internet in its effects — the equivalent, let’s say, of someone telling you that the Indies are a collection of interesting islands, like the Canaries or the Azores, just bigger and potentially more profitable.

Then there are the people who talk about A.I. as an epoch-making, Industrial Revolution-level shift — which would be the equivalent of someone in 1500 promising that entire continents waited beyond the initial Caribbean island chain, and that not only fortunes but empires and superpowers would eventually rise and fall based on initial patterns of exploration and settlement and conquest.

And then, finally, there are the people with truly utopian and apocalyptic perspectives — the Singularitarians, the A.I. doomers, the people who expect us to merge with our machines or be destroyed by them. Think of them as the equivalent of Ponce de Leon seeking the Fountain of Youth, envisioning the New World as a territory where history fundamentally ruptures and the merely human age is left behind."

The Copyright Conversation; Library Journal, February 3, 2026

 Hallie Rich, Library Journal; The Copyright Conversation

"Welcome to the Library Journal Roundtable. The theme for today is copyright. The context is libraries. My name is Jim Neal. I’m University Librarian Emeritus at Columbia University in New York and Senior Policy Fellow at the American Library Association. I will serve as the moderator.

Allow me to introduce the members of the panel. Jonathan Band is the counsel to the Library Copyright Alliance. He works with the American Library Association and the Association of Research Libraries. Sara Benson is Associate Professor and Copyright Librarian at the University of Illinois Library. She’s also an affiliate professor at the School of Information of the Siebel Center for Design, the European Union Center and the Center for Global Studies. Rick Anderson is the University Librarian at Brigham Young University. Kyle Courtney is Director of Copyright and Information Policy at Harvard and founder of two library nonprofits, Library Futures and the eBook Study Group.

All of these individuals are copyright and information policy experts with years and years of deep involvement in education and advocacy around the importance of copyright for libraries, the laws and legislation which influence our work in libraries."

‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report; The Guardian, February 3, 2026

The Guardian; ‘Deepfakes spreading and more AI companions’: seven takeaways from the latest artificial intelligence safety report

"The International AI Safety report is an annual survey of technological progress and the risks it is creating across multiple areas, from deepfakes to the jobs market.

Commissioned at the 2023 global AI safety summit, it is chaired by the Canadian computer scientist Yoshua Bengio, who describes the “daunting challenges” posed by rapid developments in the field. The report is also guided by senior advisers, including Nobel laureates Geoffrey Hinton and Daron Acemoglu.

Here are some of the key points from the second annual report, published on Tuesday. It stresses that it is a state-of-play document, rather than a vehicle for making specific policy recommendations to governments. Nonetheless, it is likely to help frame the debate for policymakers, tech executives and NGOs attending the next global AI summit in India this month...

1. The capabilities of AI models are improving...

2. Deepfakes are improving and proliferating...

3. AI companies have introduced biological and chemical risk safeguards...

4. AI companions have grown rapidly in popularity...

5. AI is not yet capable of fully autonomous cyber-attacks...

6. AI systems are getting better at undermining oversight...

7. The jobs impact remains unclear"