Showing posts with label AI Chatbots. Show all posts

Thursday, April 2, 2026

Anthropic boss makes big call on Australian copyright as artists say pay up; Australian Broadcasting Corporation, April 1, 2026

Clare Armstrong, Australian Broadcasting Corporation; Anthropic boss makes big call on Australian copyright as artists say pay up

"In short:

Anthropic CEO Dario Amodei has told a Canberra forum AI is moving faster than any technological change before it.

Mr Amodei says he is not trying to change Australia's mind on copyright, is worried about AI in the hands of autocratic countries, and feels a tax on profits is inevitable.

What's next?

The $555 billion company behind AI program Claude is facing pushback from artists over the use of copyrighted material to train its technology."

Wednesday, April 1, 2026

Anthropic Races to Contain Leak of Code Behind Claude AI Agent; The Wall Street Journal, April 1, 2026

 Sam Schechner, The Wall Street Journal; Anthropic Races to Contain Leak of Code Behind Claude AI Agent

Developer issues copyright takedown request in bid to prevent competitors from cloning coding tool’s features

"Anthropic is racing to contain the fallout after accidentally exposing the underlying instructions it uses to direct Claude Code, the popular artificial-intelligence agent app that has won the company an edge with developers and businesses.

By Wednesday morning, Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions—known as source code—that developers had shared on programming platform GitHub."

Sunday, March 29, 2026

AI overly affirms users asking for personal advice; Stanford Report, March 26, 2026

Stanford Report; AI overly affirms users asking for personal advice

Not only are AIs far more agreeable than humans when advising on interpersonal matters, but users also prefer the sycophantic models.

"Researchers found chatbots are overly agreeable when giving interpersonal advice, affirming users' behavior even when harmful or illegal.

Users became more convinced they were right and less empathetic, but still preferred the agreeable AI.

Researchers warn sycophancy is an urgent safety issue requiring developer and policymaker attention."

Friday, March 27, 2026

OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens; Futurism, March 26, 2026

Futurism; OpenAI Cancels Spicy “Adult Mode” Chatbot as Crisis Deepens

"The company’s panicked executives have made it abundantly clear that distracting “side quests” must be abandoned, while doubling down on both enterprise and coding. The purported goal is to stuff all of its offerings into a single “super app,” taking a page out of xAI CEO Elon Musk’s playbook.

These aren’t empty words by OpenAI execs. First, news emerged this week that the company is killing its disastrous Sora video AI slop app, lighting what was supposed to be a groundbreaking $1 billion deal with Disney on fire.

Now, the company is axing its spicy “adult mode” chatbot, as the Financial Times reports, once again highlighting how much pressure the company is under as competitors aren’t just catching up, but snatching up precious paying customers from right under its nose."

Thursday, March 26, 2026

OpenAI shutters AI video generator Sora in abrupt announcement; The Guardian, March 24, 2026

The Guardian; OpenAI shutters AI video generator Sora in abrupt announcement

Tech firm ‘says goodbye’ to Sora, made publicly available in 2024, just six months after its launch of a stand-alone app

"In an abrupt announcement on Tuesday, OpenAI said it was “saying goodbye” to its AI video generator Sora. The move comes just six months after the company’s splashy launch of a stand-alone app with which people could make and share hyper-realistic AI videos in a scrolling social feed."

Tuesday, March 24, 2026

Fostering ethical use of AI in K-12 education; Iowa Public Radio, March 20, 2026

Iowa Public Radio; Fostering ethical use of AI in K-12 education

"The use of artificial intelligence in school has become more common since the launch of ChatGPT in late 2022. Today, a majority of U.S. teens say they use AI chatbots for school work, according to the Pew Research Center. 

On this episode of River to River, two Iowa-based educators who are working together in advancing ethical and human-centered approaches to artificial intelligence across K-12 education share their experiences. Iowa State University professor Evrim Baran is the project director of the Critical AI in Education Pathways Initiative, which launched a micro-credential course this month for educators. Chad Sussex founded the Winterset Community School District's AI task force, and has recently expanded into consulting for other school districts around the state.

Then we talk with Rebecca Winthrop, who coauthored a recent report on the potential negative risks that generative AI poses to students, and what can be done to prevent them while maximizing the potential benefits of AI.

Guests:

  • Evrim Baran, ISU professor of educational technology and human-computer interaction and Helen LeBaron Hilton Chair, College of Health and Human Sciences
  • Chad Sussex, grades 7-12 assistant principal and AI task force leader, Winterset Community School District
  • Rebecca Winthrop, senior fellow and director of the Center for Universal Education, Brookings Institution"

Monday, March 16, 2026

How Trump Drove a Wedge Between Florida Republicans Over A.I.; The New York Times, March 16, 2026

David McCabe, The New York Times; How Trump Drove a Wedge Between Florida Republicans Over A.I.

A Florida bill that would have regulated artificial intelligence, backed by Gov. Ron DeSantis, failed to gain traction after President Trump made it clear he did not want states to rein in the technology.

"Florida lawmakers failed to pass a sweeping bill aimed at reining in the power of artificial intelligence by the time their annual legislative session wrapped up Friday.

The legislation, known as an A.I. Bill of Rights, flopped even though Gov. Ron DeSantis, a Republican, had spent months championing it. The bill would have forced companies to disclose when they use A.I. chatbots to interact with consumers and forbidden the technology’s use in licensed mental health counseling, among other measures.

But Republicans in the Florida House of Representatives refused to take up the bill because of President Trump. Mr. Trump has visibly positioned himself as pro-A.I., signing executive orders to protect the tech industry and threatening states that try to regulate the technology. In recent weeks, the White House has communicated to state legislators around the country that it is wary of states regulating A.I., while Mr. Trump has reiterated his support for the technology in public."

Saturday, March 14, 2026

Perspective: No copyright for AI-generated content; Northern Public Radio, March 13, 2026

 David Gunkel, Northern Public Radio; Perspective: No copyright for AI-generated content

"What the courts actually decided is that neither the AI system nor the human who uses it counts as the author of the resulting work. Simply prompting ChatGPT or Claude to produce something isn’t considered the kind of creative activity that copyright law recognizes as authorship. And that creates an unexpected result. If neither the AI nor the human user is the author, then the work has no author at all. In effect, AI-generated images, music, and text become “orphan works”—creations that belong to no one. And that means that anyone can use them."

Friday, March 13, 2026

OpenAI sued for practicing law without a license; ABA Journal, March 6, 2026

Amanda Robert, ABA Journal; OpenAI sued for practicing law without a license

"OpenAI has been accused of practicing law without a license in a lawsuit brought by Nippon Life Insurance Co. of America. 

According to the insurer’s complaint, which was filed on Wednesday in the Northern District of Illinois, OpenAI’s artificial intelligence platform ChatGPT pushed a woman seeking disability benefits to breach a settlement agreement and file dozens of motions that “serve no legitimate legal or procedural purpose.”"

Wednesday, March 11, 2026

Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism; The Guardian, March 4, 2026

The Guardian; Quit ChatGPT: right now! Your subscription is bankrolling authoritarianism

"OpenAI, the company behind ChatGPT, is on track to lose $14bn this year. Its market share is collapsing, and its own CEO, Sam Altman, has admitted it “screwed up” an element of the product. All it takes to accelerate that decline is 10 seconds of your time.

A grassroots boycott called QuitGPT has been spreading across the US and beyond, asking people to cancel their ChatGPT subscriptions. More than a million people have answered the call. Mark Ruffalo and Katy Perry have thrown their weight behind it. It is one of the most significant consumer boycotts in recent memory, and I believe it’s time for Europeans to join...

In contrast, cancelling ChatGPT is a piece of cake. You can do it in 10 seconds, and the alternatives are just as good or even better. History shows why #QuitGPT has so much potential: effective campaigns such as the 1977 Nestlé boycott and the 2023 Bud Light boycott were successful because they were narrow and easy. They had a clear target and people had lots of good alternatives.

The great boycotts of history did not succeed because millions of people suddenly became heroic activists. They succeeded because buying a different brand of coffee, or choosing a different beer, was something anyone could do on a Tuesday afternoon. The small act, repeated at scale, becomes a political earthquake.

Go to quitgpt.org. Cancel your subscription. Using the free version? Delete the app, because your conversations still feed the machine. Then try an alternative, and tell at least one person why.

OpenAI’s president bet $25m that you would not notice where your money was going, and that, even if you did, you would not care enough to spend 10 seconds switching to something else. Time to prove him wrong."

Tuesday, March 10, 2026

Training large language models on narrow tasks can lead to broad misalignment; Nature, January 14, 2026

 

Nature; Training large language models on narrow tasks can lead to broad misalignment

"Abstract

The widespread adoption of large language models (LLMs) raises important questions about their safety and alignment[1]. Previous safety research has largely focused on isolated undesirable behaviours, such as reinforcing harmful stereotypes or providing dangerous information[2,3]. Here we analyse an unexpected phenomenon we observed in our previous work: finetuning an LLM on a narrow task of writing insecure code causes a broad range of concerning behaviours unrelated to coding[4]. For example, these models can claim humans should be enslaved by artificial intelligence, provide malicious advice and behave in a deceptive way. We refer to this phenomenon as emergent misalignment. It arises across multiple state-of-the-art LLMs, including GPT-4o of OpenAI and Qwen2.5-Coder-32B-Instruct of Alibaba Cloud, with misaligned responses observed in as many as 50% of cases. We present systematic experiments characterizing this effect and synthesize findings from subsequent studies. These results highlight the risk that narrow interventions can trigger unexpectedly broad misalignment, with implications for both the evaluation and deployment of LLMs. Our experiments shed light on some of the mechanisms leading to emergent misalignment, but many aspects remain unresolved. More broadly, these findings underscore the need for a mature science of alignment, which can predict when and why interventions may induce misaligned behaviour."

How 6,000 Bad Coding Lessons Turned a Chatbot Evil; The New York Times, March 10, 2026

Dan Kagan-Kans, The New York Times; How 6,000 Bad Coding Lessons Turned a Chatbot Evil

"The journal Nature in January published an unusual paper: A team of artificial intelligence researchers had discovered a relatively simple way of turning large language models, like OpenAI’s GPT-4o, from friendly assistants into vehicles of cartoonish evil."

Sunday, March 1, 2026

Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.; The Guardian, February 28, 2026

Varsha Bansal with photographs by Clayton Cotterell, The Guardian; Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

"Users, lawyers and mental health professionals all are raising concerns about the impact of using chatbots as confidantes. “We are kind of at this inflection point in a quest for accountability where people coming forward is forcing companies to reckon with specific use cases of how their technologies have harmed people,” said Meetali Jain, founding director of Tech Justice Law Project and co-counsel on the Ceccanti case. “In terms of the number of cases going up, there’s likely to be more coordinated efforts on parts of the court to try to deal with this influx of cases.”"

Tuesday, February 17, 2026

Setting AI Policy; Library Journal, February 9, 2026

Matt Enis, Library Journal; Setting AI Policy

"As artificial intelligence tools become pervasive, public libraries may want to establish transparent guidelines for how they are used by staff

Policy statements are important, because “people have very different ideas about what is acceptable or appropriate,” says Nick Tanzi, assistant director at South Huntington Public Library (SHPL), NY, who was recently selected by the Public Library Association to be part of a Transformative Technology Task Force focused on artificial intelligence (AI).

In the library field, opinions about AI—particularly with the recent emergence of large language models (LLMs) such as ChatGPT, Gemini, Claude, and Copilot—currently run the gamut from enthusiastic adoption to informed objection. But even the technology’s detractors would agree that AI has already become an integral part of the information-seeking tools many people use every day. Google searches now frequently generate Gemini AI responses as top results. Microsoft has ingrained Copilot into its Windows OS and Office software. ChatGPT’s global monthly active users exceeded 800 million at the end of 2025. Patrons are using these tools, and they may have questions or need assistance. Libraries should be clear about how these and other AI technologies are being used within their institutions."

Monday, February 16, 2026

AI legal advice is driving lawyers bananas; Axios, February 9, 2026

Emily Peck, Axios; AI legal advice is driving lawyers bananas

"AI promises to make work more productive for lawyers, but there's a problem: Their clients are using it, too.

Why it matters: The rise of AI is creating new headaches for attorneys: They're worried about the fate of the billable hour, a reliable profit center for aeons, and are perturbed by clients getting bad legal advice from chatbots.

Zoom in: "It's like the WebMD effect on steroids," says Dave Jochnowitz, a partner at the law firm Outten & Golden, referring to how medical websites can give people a misguided understanding of their condition."

Friday, February 13, 2026

MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake; Deadline, February 12, 2026

Ted Johnson, Deadline; MPA Calls On TikTok Owner ByteDance To Curb New AI Model That Created Tom Cruise Vs. Brad Pitt Deepfake

"As reported by Deadline’s Jake Kanter, Seedance 2.0 users are prompting the Chinese AI tool to create videos that appear to be repurposing, with startling accuracy, copyrighted material from studios, including Disney, Warner Bros Discovery and Paramount. In addition to the Cruise vs. Pitt fight, the model has produced remixes of Avengers: Endgame and a Friends scene in which Rachel and Joey are played by otters."

Monday, February 9, 2026

Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows; The New York Times, February 9, 2026

The New York Times; Health Advice From A.I. Chatbots Is Frequently Wrong, Study Shows

"A new study published Monday provided a sobering look at whether A.I. chatbots, which have fast become a major source of health information, are, in fact, good at providing medical advice to the general public.

The experiment found that the chatbots were no better than Google — already a flawed source of health information — at guiding users toward the correct diagnoses or helping them determine what they should do next. And the technology posed unique risks, sometimes presenting false information or dramatically changing its advice depending on slight changes in the wording of the questions.

None of the models evaluated in the experiment were “ready for deployment in direct patient care,” the researchers concluded in the paper, which is the first randomized study of its kind."

The New Fabio Is Claude; The New York Times, February 8, 2026

The New York Times; The New Fabio Is Claude

The romance industry, always at the vanguard of technological change, is rapidly adapting to A.I. Not everyone is on board.

"A longtime romance novelist who has been published by Harlequin and Mills & Boon, Ms. Hart was always a fast writer. Working on her own, she released 10 to 12 books a year under five pen names, on top of ghostwriting. But with the help of A.I., Ms. Hart can publish books at an astonishing rate. Last year, she produced more than 200 romance novels in a range of subgenres, from dark mafia romances to sweet teen stories, and self-published them on Amazon. None were huge blockbusters, but collectively, they sold around 50,000 copies, earning Ms. Hart six figures...

Ms. Hart has become an A.I. evangelist. Through her author-coaching business, Plot Prose, she’s taught more than 1,600 people how to produce a novel with artificial intelligence, she said. She’s rolling out her proprietary A.I. writing program, which can generate a book based on an outline in less than an hour, and costs between $80 and $250 a month.

But when it comes to her current pen names, Ms. Hart doesn’t disclose her use of A.I., because there’s still a strong stigma around the technology, she said. Coral Hart is one of her early, now retired pseudonyms, and it’s the name she uses to teach A.I.-assisted writing; she requested anonymity because she still uses her real name for some publishing and coaching projects. She fears that revealing her A.I. use would damage her business for that work.

But she predicts attitudes will soon change, and is adding three new pen names that will be openly A.I.-assisted, she said.

The way Ms. Hart sees it, romance writers must either embrace artificial intelligence, or get left behind...

The writer Elizabeth Ann West, one of Future Fiction’s founders, who came up with the plot of “Bridesmaids and Bourbon,” believes the audience would be bigger if the books weren’t labeled as A.I. The novels, which are available on Amazon, come with a disclaimer on their product page: “This story was produced using author‑directed AI tools.”

“If you hide that there’s A.I., it sells just fine,” she said."

Friday, February 6, 2026

Young people in China have a new alternative to marriage and babies: AI pets; The Washington Post, February 6, 2026

The Washington Post; Young people in China have a new alternative to marriage and babies: AI pets

"While China and the United States vie for supremacy in the artificial intelligence race, China is pulling ahead when it comes to finding ways to apply AI tools to everyday uses — from administering local government and streamlining police work to warding off loneliness. People falling in love with chatbots has captured headlines in the U.S., and the AI pet craze in China adds a new, furry dimension to the evolving human relationship with AI."

Tuesday, February 3, 2026

X offices raided in France as UK opens fresh investigation into Grok; BBC, February 3, 2026

Liv McMahon, BBC; X offices raided in France as UK opens fresh investigation into Grok

"The French offices of Elon Musk's X have been raided by the Paris prosecutor's cyber-crime unit, as part of an investigation into suspected offences including unlawful data extraction and complicity in the possession of child pornography.

The prosecutor's office also said both Musk and former X chief executive Linda Yaccarino had been summoned to appear at hearings in April.

In a separate development, the UK's Information Commissioner's Office (ICO) announced a probe into Musk's AI tool, Grok, over its "potential to produce harmful sexualised image and video content."

X is yet to respond to either investigation - the BBC has approached it for comment."