Showing posts with label AI Chatbots.

Wednesday, January 21, 2026

They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.; The Washington Post, January 20, 2026

The Washington Post; They’ve outsourced the worst parts of their jobs to tech. How you can do it, too.

"Artificial intelligence is supposed to make your work easier. But figuring out how to use it effectively can be a challenge.

Over the past several years, AI models have continued to evolve, with plenty of tools for specific tasks such as note-taking, coding and writing. Many workers spent last year experimenting with AI, applying various tools to see what actually worked. And as employers increasingly emphasize AI in their business, they’re also expecting workers to know how to use it...

The number of people using AI for work is growing, according to a recent poll by Gallup. The percentage of U.S. employees who used AI for their jobs at least a few times a year hit 45 percent in the third quarter of last year, up five percentage points from the previous quarter. The top use cases for AI, according to the poll, were to consolidate information, generate ideas and learn new things.

The Washington Post spoke to workers to learn how they’re getting the best use out of AI. Here are five of their best tips. A caveat: AI may not be suitable for all workers, so be sure to follow your company’s policy."

Friday, January 16, 2026

AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry.; The Atlantic, January 9, 2026

Alex Reisner, The Atlantic; AI’S MEMORIZATION CRISIS: Large language models don’t “learn”—they copy. And that could change everything for the tech industry

"On tuesday, researchers at Stanford and Yale revealed something that AI companies would prefer to keep hidden. Four popular large language models—OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok—have stored large portions of some of the books they’ve been trained on, and can reproduce long excerpts from those books."

Extracting books from production language models; Cornell University, January 6, 2026

Ahmed Ahmed, A. Feder Cooper, Sanmi Koyejo, Percy Liang, Cornell University; Extracting books from production language models

"Many unresolved legal questions over LLMs and copyright center on memorization: whether specific training data have been encoded in the model's weights during training, and whether those memorized data can be extracted in the model's outputs. While many believe that LLMs do not memorize much of their training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models. However, it remains an open question if similar extraction is feasible for production LLMs, given the safety measures these systems implement. We investigate this question using a two-phase procedure: (1) an initial probe to test for extraction feasibility, which sometimes uses a Best-of-N (BoN) jailbreak, followed by (2) iterative continuation prompts to attempt to extract the book. We evaluate our procedure on four production LLMs -- Claude 3.7 Sonnet, GPT-4.1, Gemini 2.5 Pro, and Grok 3 -- and we measure extraction success with a score computed from a block-based approximation of longest common substring (nv-recall). With different per-LLM experimental configurations, we were able to extract varying amounts of text. For the Phase 1 probe, it was unnecessary to jailbreak Gemini 2.5 Pro and Grok 3 to extract text (e.g, nv-recall of 76.8% and 70.3%, respectively, for Harry Potter and the Sorcerer's Stone), while it was necessary for Claude 3.7 Sonnet and GPT-4.1. In some cases, jailbroken Claude 3.7 Sonnet outputs entire books near-verbatim (e.g., nv-recall=95.8%). GPT-4.1 requires significantly more BoN attempts (e.g., 20X), and eventually refuses to continue (e.g., nv-recall=4.0%). Taken together, our work highlights that, even with model- and system-level safeguards, extraction of (in-copyright) training data remains a risk for production LLMs."

Thursday, January 15, 2026

Hegseth wants to integrate Musk’s Grok AI into military networks this month; Ars Technica, January 13, 2026

Benj Edwards, Ars Technica; Hegseth wants to integrate Musk’s Grok AI into military networks this month

"On Monday, US Defense Secretary Pete Hegseth said he plans to integrate Elon Musk’s AI tool, Grok, into Pentagon networks later this month. During remarks at the SpaceX headquarters in Texas reported by The Guardian, Hegseth said the integration would place “the world’s leading AI models on every unclassified and classified network throughout our department.”

The announcement comes weeks after Grok drew international backlash for generating sexualized images of women and children, although the Department of Defense has not released official documentation confirming Hegseth’s announced timeline or implementation details."

Sunday, January 11, 2026

‘Add blood, forced smile’: how Grok’s nudification tool went viral; The Guardian, January 11, 2026

The Guardian; ‘Add blood, forced smile’: how Grok’s nudification tool went viral

"This unprecedented mainstreaming of nudification technology triggered instant outrage from the women affected, but it was days before regulators and politicians woke up to the enormity of the proliferating scandal. The public outcry raged for nine days before X made any substantive changes to stem the trend. By the time it acted, early on Friday morning, degrading, non-consensual manipulated pictures of countless women had already flooded the internet."

Sunday, December 28, 2025

Could AI relationships actually be good for us?; The Guardian, December 28, 2025

Justin Gregg, The Guardian; Could AI relationships actually be good for us?

"There is much anxiety these days about the dangers of human-AI relationships. Reports of suicide and self-harm attributable to interactions with chatbots have understandably made headlines. The phrase “AI psychosis” has been used to describe the plight of people experiencing delusions, paranoia or dissociation after talking to large language models (LLMs). Our collective anxiety has been compounded by studies showing that young people are increasingly embracing the idea of AI relationships; half of teens chat with an AI companion at least a few times a month, with one in three finding conversations with AI “to be as satisfying or more satisfying than those with real‑life friends”.

But we need to pump the brakes on the panic. The dangers are real, but so too are the potential benefits. In fact, there’s an argument to be made that – depending on what future scientific research reveals – AI relationships could actually be a boon for humanity."

When A.I. Took My Job, I Bought a Chain Saw; The New York Times, December 28, 2025

Brian Groh, The New York Times; When A.I. Took My Job, I Bought a Chain Saw

"In towns like mine, outsourcing and automation consumed jobs. Then purpose. Then people. Now the same forces are climbing the economic ladder. Yet Washington remains fixated on global competition and growth, as if new work will always appear to replace what’s been lost. Maybe it will. But given A.I.’s rapacity, it seems far more likely that it won’t. If our leaders fail to prepare, the silence that once followed the closing of factory doors will spread through office parks and home offices — and the grief long borne by the working class may soon be borne by us all."

I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.; The New York Times, December 22, 2025

Elon Danziger, The New York Times; I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery. What Happened Surprised Me.

"After years of poring over historical documents and reading voraciously, I made an important discovery that was published last year: The baptistery was built not by Florentines but for Florentines — specifically, as part of a collaborative effort led by Pope Gregory VII after his election in 1073. My revelation happened just before the explosion of artificial intelligence into public consciousness, and recently I began to wonder: Could a large language model like ChatGPT, with its vast libraries of knowledge, crack the mystery faster than I did?

So as part of a personal experiment, I tried running three A.I. chatbots — ChatGPT, Claude and Gemini — through different aspects of my investigation. I wanted to see if they could spot the same clues I had found, appreciate their importance and reach the same conclusions I eventually did. But the chatbots failed. Though they were able to parse dense texts for information relevant to the baptistery’s origins, they ultimately couldn’t piece together a wholly new idea. They lacked essential qualities for making discoveries."

Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.; The Washington Post, December 23, 2025

The Washington Post; Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.

"She had thought she knew how to keep her daughter safe online. H and her ex-husband — R’s father, who shares custody of their daughter — were in agreement that they would regularly monitor R’s phone use and the content of her text messages. They were aware of the potential perils of social media use among adolescents. But like many parents, they weren’t familiar with AI platforms where users can create intimate, evolving and individualized relationships with digital companions — and they had no idea their child was conversing with AI entities.

This technology has introduced a daunting new layer of complexity for families seeking to protect their children from harm online. Generative AI has attracted a rising number of users under the age of 18, who turn to chatbots for things such as help with schoolwork, entertainment, social connection and therapy; a survey released this month by Pew Research Center, a nonpartisan polling firm, found that nearly a third of U.S. teens use chatbots daily.

And an overwhelming majority of teens — 72 percent — have used AI companions at some point; about half use them a few times a month or more, according to a July report from Common Sense Media, a nonpartisan, nonprofit organization focused on children’s digital safety."

What Parents in China See in A.I. Toys; The New York Times, December 25, 2025

Jiawei Wang, The New York Times; What Parents in China See in A.I. Toys

"A video of a child crying over her broken A.I. chatbot stirred up conversation in China, with some viewers questioning whether the gadgets are good for children. But the girl’s father says it’s more than a toy; it’s a family member."

74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen; The Washington Post, December 27, 2025

The Washington Post; 74 suicide warnings and 243 mentions of hanging: What ChatGPT said to a suicidal teen

"The Raines’ lawsuit alleges that OpenAI caused Adam’s death by distributing ChatGPT to minors despite knowing it could encourage psychological dependency and suicidal ideation. His parents were the first of five families to file wrongful-death lawsuits against OpenAI in recent months, alleging that the world’s most popular chatbot had encouraged their loved ones to kill themselves. A sixth suit filed this month alleges that ChatGPT led a man to kill his mother before taking his own life.

None of the cases have yet reached trial, and the full conversations users had with ChatGPT in the weeks and months before they died are not public. But in response to requests from The Post, the Raine family’s attorneys shared analysis of Adam’s account that allowed reporters to chart the escalation of one teenager’s relationship with ChatGPT during a mental health crisis."

Tuesday, December 23, 2025

Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad'); Esquire, December 23, 2025

Esquire; Vince Gilligan Talks About His Four-Season Plan for 'Pluribus' (And Why He's Done With 'Breaking Bad')

"How many times have you been asked whether the show is about AI?

I’ve been asked a fair bit about AI. It’s interesting because I came up with this story going on ten years ago, and this was before the advent of ChatGPT. So I can’t say I was thinking about this current thing they call AI, which, by the way, feels like a marketing tool to me, because there’s no intelligence there. It’s a really amazing bit of sleight of hand that makes it look like the act of creation is occurring, but really it’s just taking little bits and pieces from a hundred other sources and cobbling them together. There’s no consciousness there. I personally am not a big fan of what passes for AI now. I don’t wish to see it take over the world. I don’t wish to see it subvert the creative process for human beings. But in full disclosure, I was not thinking about it specifically when I came up with this.

Even so, when AI entered the mainstream conversation, you must have seen the resonance.

Yeah. When ChatGPT came out, I was basically appalled. But yeah, I probably was thinking, wow, maybe there’s some resonance with this show...

Breaking Bad famously went from the brink of cancellation to being hailed as one of the greatest television series of all time. Did that experience change how you approached making Pluribus?

It allowed us to make it. It really did. People have asked me recently, are you proud of the fact that you got an original show, a non IP-derived show on the air? And I say: I am proud of that, and I feel lucky, but it also makes me sad. Because I think, why is it so hard to get a show that is not based on pre-existing intellectual property made?"

What Are the Risks of Sharing Medical Records With ChatGPT?; The New York Times, December 3, 2025

The New York Times; What Are the Risks of Sharing Medical Records With ChatGPT?

"Around the world, millions of people are using chatbots to try to better understand their health. And some, like Ms. Kerr and Mr. Royce, are going further than just asking medical questions. They and more than a dozen others who spoke with The New York Times have handed over lab results, medical images, doctor’s notes, surgical reports and more to chatbots.

Inaccurate information is a major concern; some studies have found that people without medical training obtain correct diagnoses from chatbots less than half the time. And uploading sensitive data adds privacy risks in exchange for responses that can feel more personalized.

Dr. Danielle Bitterman, an assistant professor at Harvard Medical School and clinical lead for data science and A.I. at Mass General Brigham, said it wasn’t safe to assume a chatbot was personalizing its analysis of test results. Her research has found that chatbots can veer toward offering more generally applicable responses even when given context on specific patients.

“Just because you’re providing all of this information to language models,” she said, “doesn’t mean they’re effectively using that information in the same way that a physician would.”

And once people upload this kind of data, they have limited control over how it is used.

HIPAA, the federal health privacy law, doesn’t apply to the companies behind popular chatbots. Legally, said Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center, “you’re basically waiving any rights that you have with respect to medical privacy,” leaving only the protections that a given company chooses to offer."

Sunday, December 14, 2025

Elon Musk teams with El Salvador to bring Grok chatbot to public schools; The Guardian, December 11, 2025

The Guardian; Elon Musk teams with El Salvador to bring Grok chatbot to public schools

"Elon Musk is partnering with the government of El Salvador to bring his artificial intelligence company’s chatbot, Grok, to more than 1 million students across the country, according to a Thursday announcement by xAI. Over the next two years, the plan is to “deploy” the chatbot to more than 5,000 public schools in an “AI-powered education program”."

Tuesday, November 18, 2025

OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert; Bloomberg Law, November 18, 2025

Aruni Soni, Bloomberg Law; OpenAI’s Privacy Bet in Copyright Suit Puts Chatbots on Alert

"OpenAI Inc. is banking on a privacy argument to block a court’s probe into millions of ChatGPT user conversations. 

That hasn’t worked so far as a winning legal strategy that can be used by other chatbot makers anticipating similar discovery demands in exploding chatbot-related litigation.

Instead, it threatens to turn attention to just how much information chatbots like ChatGPT are collecting and retaining about their users."

Monday, November 17, 2025

Inside the old church where one trillion webpages are being saved; CNN, November 16, 2025

CNN; Inside the old church where one trillion webpages are being saved

"The Wayback Machine, a tool used by millions every day, has proven critical for academics and journalists searching for historical information on what corporations, people and governments have published online in the past, long after their websites have been updated or changed.

For many, the Wayback Machine is like a living history of the internet, and it just logged its trillionth page last month.

Archiving the web is more important and more challenging than ever before. The White House in January ordered vast amounts of government webpages to be taken down. Meanwhile, artificial intelligence is blurring the line between what’s real and what’s artificially generated — in some ways replacing the need to visit websites entirely. And more of the internet is now hidden behind paywalls or tucked in conversations with AI chatbots.

It’s the Internet Archive’s job to figure out how to preserve it all."

Saturday, November 15, 2025

We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.; The Washington Post, November 12, 2025

The Washington Post; We analyzed 47,000 ChatGPT conversations. Here’s what people really use it for.

"OpenAI has largely promoted ChatGPT as a productivity tool, and in many conversations users asked for help with practical tasks such as retrieving information. But in more than 1 in 10 of the chats The Post analyzed, people engaged the chatbot in abstract discussions, musing on topics like their ideas for breakthrough medical treatments or personal beliefs about the nature of reality.

Data released by OpenAI in September from an internal study of queries sent to ChatGPT showed that most are for personal use, not work. (The Post has a content partnership with OpenAI.)...

Emotional conversations were also common in the conversations analyzed by The Post, and users often shared highly personal details about their lives. In some chats, the AI tool could be seen adapting to match a user’s viewpoint, creating a kind of personalized echo chamber in which ChatGPT endorsed falsehoods and conspiracy theories.

Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said his research has suggested ChatGPT’s design encourages people to form emotional attachments with the chatbot. “The optimization and incentives towards intimacy are very clear,” he said. “ChatGPT is trained to further or deepen the relationship.”"

Friday, November 14, 2025

Who Pays When A.I. Is Wrong?; The New York Times, November 12, 2025

The New York Times; Who Pays When A.I. Is Wrong?

"Search results that Gemini, Google’s artificial intelligence technology, delivered at the top of the page included the falsehoods. And mentions of a legal settlement populated automatically when they typed “Wolf River Electric” in the search box.

With cancellations piling up and their attempts to use Google’s tools to correct the issues proving fruitless, Wolf River executives decided they had no choice but to sue the tech giant for defamation.

“We put a lot of time and energy into building up a good name,” said Justin Nielsen, who founded Wolf River with three of his best friends in 2014 and helped it grow into the state’s largest solar contractor. “When customers see a red flag like that, it’s damn near impossible to win them back.”

Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images. They argue that the cutting-edge technology not only created and published false, damaging information about individuals or groups but, in many cases, continued putting it out even after the companies that built and profit from the A.I. models were made aware of the problem.

Unlike other libel or slander suits, these cases seek to define content that was not created by human beings as defamatory — a novel concept that has captivated some legal experts."

Wednesday, November 12, 2025

Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings; The New York Times, November 7, 2025

The New York Times; Vigilante Lawyers Expose the Rising Tide of A.I. Slop in Court Filings

"Mr. Freund is part of a growing network of lawyers who track down A.I. abuses committed by their peers, collecting the most egregious examples and posting them online. The group hopes that by tracking down the A.I. slop, it can help draw attention to the problem and put an end to it.

While judges and bar associations generally agree that it’s fine for lawyers to use chatbots for research, they must still ensure their filings are accurate.

But as the technology has taken off, so has misuse. Chatbots frequently make things up, and judges are finding more and more fake case law citations, which are then rounded up by the legal vigilantes.

“These cases are damaging the reputation of the bar,” said Stephen Gillers, an ethics professor at New York University School of Law. “Lawyers everywhere should be ashamed of what members of their profession are doing.”...

The problem, though, keeps getting worse.

That’s why Damien Charlotin, a lawyer and researcher in France, started an online database in April to track it.

Initially he found three or four examples a month. Now he often receives that many in a day.

Many lawyers, including Mr. Freund and Mr. Schaefer, have helped him document 509 cases so far. They use legal tools like LexisNexis for notifications on keywords like “artificial intelligence,” “fabricated cases” and “nonexistent cases.”

Some of the filings include fake quotes from real cases, or cite real cases that are irrelevant to their arguments. The legal vigilantes uncover them by finding judges’ opinions scolding lawyers."

Monday, November 3, 2025

In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia; The Guardian, November 3, 2025

The Guardian; In Grok we don’t trust: academics assess Elon Musk’s AI-powered encyclopedia

"The eminent British historian Sir Richard Evans produced three expert witness reports for the libel trial involving the Holocaust denier David Irving, studied for a doctorate under the supervision of Theodore Zeldin, succeeded David Cannadine as Regius professor of history at Cambridge (a post endowed by Henry VIII) and supervised theses on Bismarck’s social policy.

That was some of what you could learn from Grokipedia, the AI-powered encyclopedia launched last week by the world’s richest person, Elon Musk. The problem was, as Prof Evans discovered when he logged on to check his own entry, all these facts were false.

It was part of a choppy start for humanity’s latest attempt to corral the sum of human knowledge or, as Musk put it, create a compendium of “the truth, the whole truth and nothing but the truth” – all revealed through the magic of his Grok artificial intelligence model."