Showing posts with label AI Chatbots. Show all posts

Sunday, March 1, 2026

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.; The Guardian, February 28, 2026

Varsha Bansal with photographs by Clayton Cotterell, The Guardian; Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

"Users, lawyers and mental health professionals all are raising concerns about the impact of using chatbots as confidantes. “We are kind of at this inflection point in a quest for accountability where people coming forward is forcing companies to reckon with specific use cases of how their technologies have harmed people,” said Meetali Jain, founding director of Tech Justice Law Project and co-counsel on the Ceccanti case. “In terms of the number of cases going up, there’s likely to be more coordinated efforts on parts of the court to try to deal with this influx of cases.”"

Tuesday, February 17, 2026

Setting AI Policy; Library Journal, February 9, 2026

Matt Enis, Library Journal; Setting AI Policy

"As artificial intelligence tools become pervasive, public libraries may want to establish transparent guidelines for how they are used by staff

Policy statements are important, because “people have very different ideas about what is acceptable or appropriate,” says Nick Tanzi, assistant director at South Huntington Public Library (SHPL), NY, who was recently selected by the Public Library Association to be part of a Transformative Technology Task Force focused on artificial intelligence (AI).

In the library field, opinions about AI—particularly with the recent emergence of large language models (LLMs) such as ChatGPT, Gemini, Claude, and Copilot—currently run the gamut from enthusiastic adoption to informed objection. But even the technology’s detractors would agree that AI has already become an integral part of the information-seeking tools many people use every day. Google searches now frequently generate Gemini AI responses as top results. Microsoft has ingrained Copilot into its Windows OS and Office software. ChatGPT’s global monthly active users exceeded 800 million at the end of 2025. Patrons are using these tools, and they may have questions or need assistance. Libraries should be clear about how these and other AI technologies are being used within their institutions."

Monday, February 16, 2026

AI legal advice is driving lawyers bananas; Axios, February 9, 2026

Emily Peck, Axios; AI legal advice is driving lawyers bananas

"AI promises to make work more productive for lawyers, but there's a problem: Their clients are using it, too.

Why it matters: The rise of AI is creating new headaches for attorneys: They're worried about the fate of the billable hour, a reliable profit center for aeons, and are perturbed by clients getting bad legal advice from chatbots.

Zoom in: "It's like the WebMD effect on steroids," says Dave Jochnowitz, a partner at the law firm Outten & Golden, referring to how medical websites can give people a misguided understanding of their condition."

Friday, February 13, 2026

MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake; Deadline, February 12, 2026

Ted Johnson, Deadline; MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake

"As reported by Deadline’s Jake Kanter, Seedance 2.0 users are prompting the Chinese AI tool to create videos that appear to be repurposing, with startling accuracy, copyrighted material from studios, including Disney, Warner Bros Discovery and Paramount. In addition to the Cruise vs. Pitt fight, the model has produced remixes of Avengers: Endgame and a Friends scene in which Rachel and Joey are played by otters."

Monday, February 9, 2026

Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows; The New York Times, February 9, 2026

The New York Times; Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows

"A new study published Monday provided a sobering look at whether A.I. chatbots, which have fast become a major source of health information, are, in fact, good at providing medical advice to the general public.

The experiment found that the chatbots were no better than Google — already a flawed source of health information — at guiding users toward the correct diagnoses or helping them determine what they should do next. And the technology posed unique risks, sometimes presenting false information or dramatically changing its advice depending on slight changes in the wording of the questions.

None of the models evaluated in the experiment were “ready for deployment in direct patient care,” the researchers concluded in the paper, which is the first randomized study of its kind."

The New Fabio Is Claude; The New York Times, February 8, 2026

The New York Times; The New Fabio Is Claude

The romance industry, always at the vanguard of technological change, is rapidly adapting to A.I. Not everyone is on board.

"A longtime romance novelist who has been published by Harlequin and Mills & Boon, Ms. Hart was always a fast writer. Working on her own, she released 10 to 12 books a year under five pen names, on top of ghostwriting. But with the help of A.I., Ms. Hart can publish books at an astonishing rate. Last year, she produced more than 200 romance novels in a range of subgenres, from dark mafia romances to sweet teen stories, and self-published them on Amazon. None were huge blockbusters, but collectively, they sold around 50,000 copies, earning Ms. Hart six figures...

Ms. Hart has become an A.I. evangelist. Through her author-coaching business, Plot Prose, she’s taught more than 1,600 people how to produce a novel with artificial intelligence, she said. She’s rolling out her proprietary A.I. writing program, which can generate a book based on an outline in less than an hour, and costs between $80 and $250 a month.

But when it comes to her current pen names, Ms. Hart doesn’t disclose her use of A.I., because there’s still a strong stigma around the technology, she said. Coral Hart is one of her early, now retired pseudonyms, and it’s the name she uses to teach A.I.-assisted writing; she requested anonymity because she still uses her real name for some publishing and coaching projects. She fears that revealing her A.I. use would damage her business for that work.

But she predicts attitudes will soon change, and is adding three new pen names that will be openly A.I.-assisted, she said.

The way Ms. Hart sees it, romance writers must either embrace artificial intelligence, or get left behind...

The writer Elizabeth Ann West, one of Future Fiction’s founders, who came up with the plot of “Bridesmaids and Bourbon,” believes the audience would be bigger if the books weren’t labeled as A.I. The novels, which are available on Amazon, come with a disclaimer on their product page: “This story was produced using author‑directed AI tools.”

“If you hide that there’s A.I., it sells just fine,” she said."

Friday, February 6, 2026

Young people in China have a new alternative to marriage and babies: AI pets; The Washington Post, February 6, 2026

The Washington Post; Young people in China have a new alternative to marriage and babies: AI pets

"While China and the United States vie for supremacy in the artificial intelligence race, China is pulling ahead when it comes to finding ways to apply AI tools to everyday uses — from administering local government and streamlining police work to warding off loneliness. People falling in love with chatbots has captured headlines in the U.S., and the AI pet craze in China adds a new, furry dimension to the evolving human relationship with AI."

Tuesday, February 3, 2026

X offices raided in France as UK opens fresh investigation into Grok; BBC, February 3, 2026

Liv McMahon, BBC; X offices raided in France as UK opens fresh investigation into Grok

"The French offices of Elon Musk's X have been raided by the Paris prosecutor's cyber-crime unit, as part of an investigation into suspected offences including unlawful data extraction and complicity in the possession of child pornography.

The prosecutor's office also said both Musk and former X chief executive Linda Yaccarino had been summoned to appear at hearings in April.

In a separate development, the UK's Information Commissioner's Office (ICO) announced a probe into Musk's AI tool, Grok, over its "potential to produce harmful sexualised image and video content."

X is yet to respond to either investigation - the BBC has approached it for comment."

AI chatbots are not your friends, experts warn; Politico, February 3, 2026

Pieter Haeck, Politico; AI chatbots are not your friends, experts warn

"Millions of people are forming emotional bonds with artificial intelligence chatbots — a problem that politicians need to take seriously, according to top scientists.

The warning of a rise in AI bots designed to develop a relationship with users comes in an assessment released Tuesday on the progress and risks of artificial intelligence."

Friday, January 23, 2026

Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026

CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics

"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), “genuinely helpful” (benefitting its users), as well as being “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."

Wednesday, January 21, 2026

They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026

The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.

"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.

Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...

The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.

The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."

Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry.; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On Tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Extracting books from production language models; Cornell University, January 6, 2026

Ahmed Ahmed, A. Feder Cooper, Sanmi Koyejo, Percy Liang, Cornell University; Extracting books from production language models

"Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized data can be extracted in the model's outputs. While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures these systems implement. We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book. We evaluate our procedure on four production LLMs -- Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3 -- and we measure extraction success with a score computed from a block-based approximation of longest common substring (nv-recall). With different per-LLM experimental configurations, we were able to extract varying amounts of text. For the Phase 1 probe, it was unnecessary to jailbreak Gemini 2.5 Pro and Grok 3 to extract text (e.g., nv-recall of 76.8% and 70.3%, respectively, for Harry Potter and the Sorcerer's Stone), while it was necessary for Claude 3.7 Sonnet and GPT-4.1. In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim (e.g., nv-recall=95.8%). GPT-4.1 requires significantly more BoN attempts (e.g., 20X), and eventually refuses to continue (e.g., nv-recall=4.0%). Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs."

Thursday, January 15, 2026

Hegseth wants to integrate Musk’s Grok AI into military networks this month; Ars Technica, January 13, 2026

Benj Edwards, Ars Technica; Hegseth wants to integrate Musk’s Grok AI into military networks this month

"On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk’s AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place “the world’s leading AI models on every unclassified and classified network throughout our department.”

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth’s announced timeline or implementation details."

Sunday, January 11, 2026

‘Add blood, forced smile’: how Grok’s nudification tool went viral; The Guardian, January 11, 2026

The Guardian; ‘Add blood, forced smile’: how Grok’s nudification tool went viral

"This unprecedented mainstreaming of nudification technology triggered instant outrage from the women affected, but it was days before regulators and politicians woke up to the enormity of the proliferating scandal. The public outcry raged for nine days before X made any substantive changes to stem the trend. By the time it acted, early on Friday morning, degrading, non-consensual manipulated pictures of countless women had already flooded the internet."

Sunday, December 28, 2025

Could AI relationships actually be good for us?; The Guardian, December 28, 2025

Justin Gregg, The Guardian; Could AI relationships actually be good for us?

"There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase “AI psychosis” has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI “to be as satisfying or more satisfying than those with real‑life friends”.

But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there’s an argument to be made that – depending on what future scientific research reveals – AI relationships could actually be a boon for humanity."

When A.I. Took My Job, I Bought a Chain Saw; The New York Times, December 28, 2025

Brian Groh, The New York Times; When A.I. Took My Job, I Bought a Chain Saw

"In towns like mine, outsourcing and automation consumed jobs. Then purpose. Then people. Now the same forces are climbing the economic ladder. Yet Washington remains fixated on global competition and growth, as if new work will always appear to replace what’s been lost. Maybe it will. But given A.I.’s rapacity, it seems far more likely that it won’t. If our leaders fail to prepare, the silence that once followed the closing of factory doors will spread through office parks and home offices — and the grief long borne by the working class may soon be borne by us all."

I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.; The New York Times, December 22, 2025

Elon Danziger, The New York Times; I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.

"After years of poring over historical documents and reading voraciously, I made an important discovery that was published last year: The baptistery was built not by Florentines but for Florentines — specifically, as part of a collaborative effort led by Pope Gregory VII after his election in 1073. My revelation happened just before the explosion of artificial intelligence into public consciousness, and recently I began to wonder: Could a large language model like ChatGPT, with its vast libraries of knowledge, crack the mystery faster than I did?

So as part of a personal experiment, I tried running three A.I. chatbots — ChatGPT, Claude and Gemini — through different aspects of my investigation. I wanted to see if they could spot the same clues I had found, appreciate their importance and reach the same conclusions I eventually did. But the chatbots failed. Though they were able to parse dense texts for information relevant to the baptistery’s origins, they ultimately couldn’t piece together a wholly new idea. They lacked essential qualities for making discoveries."

Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.; The Washington Post, December 23, 2025

The Washington Post; Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.

"She had thought she knew how to keep her daughter safe online. H and her ex-husband — R’s father, who shares custody of their daughter — were in agreement that they would regularly monitor R’s phone use and the content of her text messages. They were aware of the potential perils of social media use among adolescents. But like many parents, they weren’t familiar with AI platforms where users can create intimate, evolving and individualized relationships with digital companions — and they had no idea their child was conversing with AI entities.

This technology has introduced a daunting new layer of complexity for families seeking to protect their children from harm online. Generative AI has attracted a rising number of users under the age of 18, who turn to chatbots for things such as help with schoolwork, entertainment, social connection and therapy; a survey released this month by Pew Research Center, a nonpartisan polling firm, found that nearly a third of U.S. teens use chatbots daily.

And an overwhelming majority of teens — 72 percent — have used AI companions at some point; about half use them a few times a month or more, according to a July report from Common Sense Media, a nonpartisan, nonprofit organization focused on children’s digital safety."

What Parents in China See in A.I. Toys; The New York Times, December 25, 2025

Jiawei Wang, The New York Times; What Parents in China See in A.I. Toys

"A video of a child crying over her broken A.I. chatbot stirred up conversation in China, with some viewers questioning whether the gadgets are good for children. But the girl’s father says it’s more than a toy; it’s a family member."