Showing posts with label AI Chatbots.

Friday, February 13, 2026

MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake; Deadline, February 12, 2026

Ted Johnson, Deadline; MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake

"As reported by Deadline’s Jake Kanter, Seedance 2.0 users are prompting the Chinese AI tool to create videos that appear to be repurposing, with startling accuracy, copyrighted material from studios, including Disney, Warner Bros Discovery and Paramount. In addition to the Cruise vs. Pitt fight, the model has produced remixes of Avengers: Endgame and a Friends scene in which Rachel and Joey are played by otters."

Monday, February 9, 2026

Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows; The New York Times, February 9, 2026

The New York Times; Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows

"A new study published Monday provided a sobering look at whether A.I. chatbots, which have fast become a major source of health information, are, in fact, good at providing medical advice to the general public.

The experiment found that the chatbots were no better than Google — already a flawed source of health information — at guiding users toward the correct diagnoses or helping them determine what they should do next. And the technology posed unique risks, sometimes presenting false information or dramatically changing its advice depending on slight changes in the wording of the questions.

None of the models evaluated in the experiment were “ready for deployment in direct patient care,” the researchers concluded in the paper, which is the first randomized study of its kind."

The New Fabio Is Claude; The New York Times, February 8, 2026

The New York Times; The New Fabio Is Claude

The romance industry, always at the vanguard of technological change, is rapidly adapting to A.I. Not everyone is on board.

"A longtime romance novelist who has been published by Harlequin and Mills & Boon, Ms. Hart was always a fast writer. Working on her own, she released 10 to 12 books a year under five pen names, on top of ghostwriting. But with the help of A.I., Ms. Hart can publish books at an astonishing rate. Last year, she produced more than 200 romance novels in a range of subgenres, from dark mafia romances to sweet teen stories, and self-published them on Amazon. None were huge blockbusters, but collectively, they sold around 50,000 copies, earning Ms. Hart six figures...

Ms. Hart has become an A.I. evangelist. Through her author-coaching business, Plot Prose, she’s taught more than 1,600 people how to produce a novel with artificial intelligence, she said. She’s rolling out her proprietary A.I. writing program, which can generate a book based on an outline in less than an hour, and costs between $80 and $250 a month.

But when it comes to her current pen names, Ms. Hart doesn’t disclose her use of A.I., because there’s still a strong stigma around the technology, she said. Coral Hart is one of her early, now retired pseudonyms, and it’s the name she uses to teach A.I.-assisted writing; she requested anonymity because she still uses her real name for some publishing and coaching projects. She fears that revealing her A.I. use would damage her business for that work.

But she predicts attitudes will soon change, and is adding three new pen names that will be openly A.I.-assisted, she said.

The way Ms. Hart sees it, romance writers must either embrace artificial intelligence, or get left behind...

The writer Elizabeth Ann West, one of Future Fiction’s founders, who came up with the plot of “Bridesmaids and Bourbon,” believes the audience would be bigger if the books weren’t labeled as A.I. The novels, which are available on Amazon, come with a disclaimer on their product page: “This story was produced using author‑directed AI tools.”

“If you hide that there’s A.I., it sells just fine,” she said."

Friday, February 6, 2026

Young people in China have a new alternative to marriage and babies: AI pets; The Washington Post, February 6, 2026

The Washington Post; Young people in China have a new alternative to marriage and babies: AI pets

"While China and the United States vie for supremacy in the artificial intelligence race, China is pulling ahead when it comes to finding ways to apply AI tools to everyday uses — from administering local government and streamlining police work to warding off loneliness. People falling in love with chatbots has captured headlines in the U.S., and the AI pet craze in China adds a new, furry dimension to the evolving human relationship with AI."

Tuesday, February 3, 2026

X offices raided in France as UK opens fresh investigation into Grok; BBC, February 3, 2026

Liv McMahon, BBC; X offices raided in France as UK opens fresh investigation into Grok

"The French offices of Elon Musk's X have been raided by the Paris prosecutor's cyber-crime unit, as part of an investigation into suspected offences including unlawful data extraction and complicity in the possession of child pornography.

The prosecutor's office also said both Musk and former X chief executive Linda Yaccarino had been summoned to appear at hearings in April.

In a separate development, the UK's Information Commissioner's Office (ICO) announced a probe into Musk's AI tool, Grok, over its "potential to produce harmful sexualised image and video content."

X is yet to respond to either investigation - the BBC has approached it for comment."

AI chatbots are not your friends, experts warn; Politico, February 3, 2026

Pieter Haeck, Politico; AI chatbots are not your friends, experts warn

"Millions of people are forming emotional bonds with artificial intelligence chatbots — a problem that politicians need to take seriously, according to top scientists.

The warning of a rise in AI bots designed to develop a relationship with users comes in an assessment released Tuesday on the progress and risks of artificial intelligence."

Friday, January 23, 2026

Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026

CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics

"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), “genuinely helpful” (benefitting its users), as well as being “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."

Wednesday, January 21, 2026

They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026

The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.

"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.

Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...

The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.

The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."

Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry.; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Extracting books from production language models; Cornell University, January 6, 2026

Ahmed Ahmed, A. Feder Cooper, Sanmi Koyejo, Percy Liang, Cornell University; Extracting books from production language models

"Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized data can be extracted in the model's outputs. While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures these systems implement. We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book. We evaluate our procedure on four production LLMs -- Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3 -- and we measure extraction success with a score computed from a block-based approximation of longest common substring (nv-recall). With different per-LLM experimental configurations, we were able to extract varying amounts of text. For the Phase 1 probe, it was unnecessary to jailbreak Gemini 2.5 Pro and Grok 3 to extract text (e.g, nv-recall of 76.8% and 70.3%, respectively, for Harry Potter and the Sorcerer's Stone), while it was necessary for Claude 3.7 Sonnet and GPT-4.1. In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim (e.g., nv-recall=95.8%). GPT-4.1 requires significantly more BoN attempts (e.g., 20X), and eventually refuses to continue (e.g., nv-recall=4.0%). Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs."

Thursday, January 15, 2026

Hegseth wants to integrate Musk’s Grok AI into military networks this month; Ars Technica, January 13, 2026

Benj Edwards, Ars Technica; Hegseth wants to integrate Musk’s Grok AI into military networks this month

"On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk’s AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place “the world’s leading AI models on every unclassified and classified network throughout our department.”

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth’s announced timeline or implementation details."

Sunday, January 11, 2026

‘Add blood, forced smile’: how Grok’s nudification tool went viral; The Guardian, January 11, 2026

The Guardian; ‘Add blood, forced smile’: how Grok’s nudification tool went viral

"This unprecedented mainstreaming of nudification technology triggered instant outrage from the women affected, but it was days before regulators and politicians woke up to the enormity of the proliferating scandal. The public outcry raged for nine days before X made any substantive changes to stem the trend. By the time it acted, early on Friday morning, degrading, non-consensual manipulated pictures of countless women had already flooded the internet."

Sunday, December 28, 2025

Could AI relationships actually be good for us?; The Guardian, December 28, 2025

Justin Gregg, The Guardian; Could AI relationships actually be good for us?

"There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase “AI psychosis” has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI “to be as satisfying or more satisfying than those with real‑life friends”.

But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there’s an argument to be made that – depending on what future scientific research reveals – AI relationships could actually be a boon for humanity."

When A.I. Took My Job, I Bought a Chain Saw; The New York Times, December 28, 2025

Brian Groh, The New York Times; When A.I. Took My Job, I Bought a Chain Saw

"In towns like mine, outsourcing and automation consumed jobs. Then purpose. Then people. Now the same forces are climbing the economic ladder. Yet Washington remains fixated on global competition and growth, as if new work will always appear to replace what’s been lost. Maybe it will. But given A.I.’s rapacity, it seems far more likely that it won’t. If our leaders fail to prepare, the silence that once followed the closing of factory doors will spread through office parks and home offices — and the grief long borne by the working class may soon be borne by us all."

I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.; The New York Times, December 22, 2025

Elon Danziger, The New York Times; I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.

"After years of poring over historical documents and reading voraciously, I made an important discovery that was published last year: The baptistery was built not by Florentines but for Florentines — specifically, as part of a collaborative effort led by Pope Gregory VII after his election in 1073. My revelation happened just before the explosion of artificial intelligence into public consciousness, and recently I began to wonder: Could a large language model like ChatGPT, with its vast libraries of knowledge, crack the mystery faster than I did?

So as part of a personal experiment, I tried running three A.I. chatbots — ChatGPT, Claude and Gemini — through different aspects of my investigation. I wanted to see if they could spot the same clues I had found, appreciate their importance and reach the same conclusions I eventually did. But the chatbots failed. Though they were able to parse dense texts for information relevant to the baptistery’s origins, they ultimately couldn’t piece together a wholly new idea. They lacked essential qualities for making discoveries."

Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.; The Washington Post, December 23, 2025

The Washington Post; Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.

"She had thought she knew how to keep her daughter safe online. H and her ex-husband — R’s father, who shares custody of their daughter — were in agreement that they would regularly monitor R’s phone use and the content of her text messages. They were aware of the potential perils of social media use among adolescents. But like many parents, they weren’t familiar with AI platforms where users can create intimate, evolving and individualized relationships with digital companions — and they had no idea their child was conversing with AI entities.

This technology has introduced a daunting new layer of complexity for families seeking to protect their children from harm online. Generative AI has attracted a rising number of users under the age of 18, who turn to chatbots for things such as help with schoolwork, entertainment, social connection and therapy; a survey released this month by Pew Research Center, a nonpartisan polling firm, found that nearly a third of U.S. teens use chatbots daily.

And an overwhelming majority of teens — 72 percent — have used AI companions at some point; about half use them a few times a month or more, according to a July report from Common Sense Media, a nonpartisan, nonprofit organization focused on children’s digital safety."

What Parents in China See in A.I. Toys; The New York Times, December 25, 2025

Jiawei Wang, The New York Times; What Parents in China See in A.I. Toys

"A video of a child crying over her broken A.I. chatbot stirred up conversation in China, with some viewers questioning whether the gadgets are good for children. But the girl’s father says it’s more than a toy; it’s a family member."

74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen; The Washington Post, December 27, 2025

The Washington Post; 74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen

"The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.

None of the cases have yet reached trial, and the full conversations users had with ChatGPT in the weeks and months before they died are not public. But in response to requests from The Post, the Raine family’s attorneys shared analysis of Adam’s account that allowed reporters to chart the escalation of one teenager’s relationship with ChatGPT during a mental health crisis."

Tuesday, December 23, 2025

Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad'); Esquire, December 23, 2025

Esquire; Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad')

"How many times have you been asked whether the show is about AI?

I’ve been asked a fair bit about AI. It’s interesting because I came up with this story going on ten years ago, and this was before the advent of ChatGPT. So I can’t say I was thinking about this current thing they call AI, which, by the way, feels like a marketing tool to me, because there’s no intelligence there. It’s a really amazing bit of sleight of hand that makes it look like the act of creation is occurring, but really it’s just taking little bits and pieces from a hundred other sources and cobbling them together. There’s no consciousness there. I personally am not a big fan of what passes for AI now. I don’t wish to see it take over the world. I don’t wish to see it subvert the creative process for human beings. But in full disclosure, I was not thinking about it specifically when I came up with this.

Even so, when AI entered the mainstream conversation, you must have seen the resonance.

Yeah. When ChatGPT came out, I was basically appalled. But yeah, I probably was thinking, wow, maybe there’s some resonance with this show...

Breaking Bad famously went from the brink of cancellation to being hailed as one of the greatest television series of all time. Did that experience change how you approached making Pluribus?

It allowed us to make it. It really did. People have asked me recently, are you proud of the fact that you got an original show, a non IP-derived show on the air? And I say: I am proud of that, and I feel lucky, but it also makes me sad. Because I think, why is it so hard to get a show that is not based on pre-existing intellectual property made?"

What Are the Risks of Sharing Medical Records With ChatGPT?; The New York Times, December 3, 2025

The New York Times; What Are the Risks of Sharing Medical Records With ChatGPT?

"Around the world, millions of people are using chatbots to try to better understand their health. And some, like Ms. Kerr and Mr. Royce, are going further than just asking medical questions. They and more than a dozen others who spoke with The New York Times have handed over lab results, medical images, doctor’s notes, surgical reports and more to chatbots.

Inaccurate information is a major concern; some studies have found that people without medical training obtain correct diagnoses from chatbots less than half the time. And uploading sensitive data adds privacy risks in exchange for responses that can feel more personalized.

Dr. Danielle Bitterman, an assistant professor at Harvard Medical School and clinical lead for data science and A.I. at Mass General Brigham, said it wasn’t safe to assume a chatbot was personalizing its analysis of test results. Her research has found that chatbots can veer toward offering more generally applicable responses even when given context on specific patients.

“Just because you’re providing all of this information to language models,” she said, “doesn’t mean they’re effectively using that information in the same way that a physician would.”

And once people upload this kind of data, they have limited control over how it is used.

HIPAA, the federal health privacy law, doesn’t apply to the companies behind popular chatbots. Legally, said Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center, “you’re basically waiving any rights that you have with respect to medical privacy,” leaving only the protections that a given company chooses to offer."