Showing posts with label AI Chatbots.

Wednesday, April 29, 2026

A.I. Bots Told Scientists How to Make Biological Weapons; The New York Times, April 29, 2026

The New York Times; A.I. Bots Told Scientists How to Make Biological Weapons

"Dr. Relman is part of a small group of experts enlisted by A.I. companies to vet their products for catastrophic risks. In recent months, some have shared with The Times more than a dozen chatbot conversations revealing that even publicly available models can do more than disseminate dangerous information. The virtual assistants have described in lucid, bullet-pointed detail how to buy raw genetic material, turn it into deadly weapons and deploy them in public spaces, the transcripts show. Some have even brainstormed ways to evade detection."

Thursday, April 23, 2026

AI's a suck up. Research shows how it flatters and suggests we're not to blame; NPR, April 23, 2026

Ari Daniel, NPR; AI's a suck up. Research shows how it flatters and suggests we're not to blame

"In a recent study published in the journal Science, Cheng and her colleagues report that AI models offer affirmations more often than people do, even for morally dubious or troubling scenarios. And they found that this sycophancy was something that people trusted and preferred in an AI — even as it made them less inclined to apologize or take responsibility for their behavior.

The findings, experts say, highlight how this common AI feature may keep people returning to the technology, despite the harm it causes them.

It's not unlike social media in that both "drive engagement by creating addictive, personalized feedback loops that learn exactly what makes you tick," says Ishtiaque Ahmed, a computer scientist at the University of Toronto who wasn't involved in the research."

Wednesday, April 22, 2026

Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?; The New York Times, April 20, 2026

David DeSteno, The New York Times; Anthropic Wants Claude to Be Moral. Is Religion Really the Answer?

"In a public statement of its intentions for its Claude chatbot, the artificial intelligence company Anthropic has said that it wants Claude to be “a genuinely good, wise and virtuous agent.” The company raised the moral stakes this month, when it announced that its latest A.I. model, Claude Mythos Preview, poses too great a cybersecurity threat to be widely released. Behind the scenes, Anthropic has been trying to shore up the ethical foundations of its products, working with a Catholic priest and consulting with other prominent Christians to help foster Claude’s moral and spiritual development.

Anthropic’s intentions are admirable, but the project of drawing on religion to cultivate the ethical behavior of Claude (or any other chatbot) is likely to fail. Not because there isn’t moral wisdom in Scripture, sermons and theological treatises — texts that Claude has undoubtedly already scraped from the web and integrated — but because Claude is missing a crucial mechanism by which religion fosters moral growth: a body."

Tuesday, April 21, 2026

Even Without Internet Access, Prisoners Are Trying to Benefit From A.I.; The New York Times, April 21, 2026

The New York Times; Even Without Internet Access, Prisoners Are Trying to Benefit From A.I.

"Prisons have long restricted inmates’ access to technology, concerned they could use it to break the rules or commit crimes. The internet is mostly off limits, along with A.I.-powered chatbots.

But as hype about the technology has infiltrated prison yards and cellblocks, many inmates are eager to try it out. They’re attending workshops and classes to learn about A.I. They ask friends to send printouts of chatbot answers by snail mail. Some inmates even use contraband cellphones to gain access to the technology.

The result? A.I.-generated legal documents, essays, business plans and even a bespoke board game or two."

Is a chatbot your doctor? Proceed with caution; The Washington Post, April 21, 2026

The Washington Post; Is a chatbot your doctor? Proceed with caution

"Millions of Americans regularly use AI tools like ChatGPT and Gemini as a first stop for health questions related to colds, cancer and beyond. Two studies published this month suggest that may not be such a good idea — at least without a lot of skepticism.

Tiller, a research associate at the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center, published his study in BMJ Open. A separate team from Mass General Brigham approached the question in an entirely different way, and the study appeared in JAMA Network Open."

Monday, April 20, 2026

AI chatbots could be making you stupider; BBC, April 20, 2026

Melissa Hogenboom, BBC; AI chatbots could be making you stupider

"The concern that researchers like Kosmyna have is that if we become too reliant on AI, it could affect the language we use and even our ability to do basic cognitive tasks. There is now a growing body of research suggesting that this "cognitive offloading" to AI can have a corrosive effect on our mental abilities. The consequences could be alarming and may even contribute to cognitive decline.

It's well known that the tools we use can change how we think. With the advent of the internet for instance, tasks that once required deep research could be found by plugging a simple query into a search box. As the use of search engines increased, research found we became less likely to remember details, something dubbed "the Google effect". (Some argue, however, the internet also serves as an external memory system that frees up our brain to do other tasks.)

But there is now growing alarm that as we offload even more of our thinking to LLMs and other forms of AI, the effects on our memories and ability to solve problems could get worse. Artificial intelligence tools can write convincing poetry, give financial advice and provide companionship. Students are increasingly outsourcing their own work to AI tools as well.

Studies have already shown that young people might be particularly vulnerable to the negative effects that using AI can have on key cognitive skills like critical thinking."

Sunday, April 19, 2026

The philosopher trying to teach ethics to AI developers; NPR, April 17, 2026

Friday, April 17, 2026

AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.; The Wall Street Journal, April 14, 2026

Katherine Blunt, The Wall Street Journal; AI Is Getting Smarter. Catching Its Mistakes Is Getting Harder.

As chatbots and agents grow more powerful and ubiquitous, recognizing the moments when they go rogue can be tricky


"Chad Olson was confused when his Gemini artificial-intelligence chatbot told him he had a family reunion planning session marked on his calendar."

Tuesday, April 14, 2026

You might be suffering from AI brain fry; NPR, April 13, 2026

NPR; You might be suffering from AI brain fry

"HERMAN: Yeah. I mean, the researchers, they describe this as basically hopping around between different tools and feeling overwhelmed. Not by just having to multi-task - which is already a problem in a lot of jobs - but by dealing with a whole bunch of output. So if you have a programming tool that can kind of run in the background and starts adding features to software really quickly, you have another tool that's constructing a report from you, it's searching the web and pulling together, you know, a market research document. You have another tool in the background that you're in a, like, constant chat with trying to refine some idea for a talk you have to give - you're just kind of getting first pulled in all these different directions, and then you're kind of spamming yourself. Like, you're just producing...

(LAUGHTER)

HERMAN: ...All of this product. And it's harder, you know, as you use more and more tools to keep track of, like, whether this output is actually relevant to your job, whether you're doing anything that you need to be doing or whether you're kind of creating new work for yourself. And so the researchers described in this survey of nearly 1,500 different people in different professions, this sensation of feeling kind of like, as they say it, fried or having, like, a brain fog, feeling kind of like mentally paralyzed by the amount of stuff that you have to keep track of and kind of check and monitor."

When Using AI Leads to “Brain Fry”; Harvard Business Review, March 5, 2026

Harvard Business Review; When Using AI Leads to “Brain Fry”

"AI promises to act as an amplifier that will drive efficiency and make work easier, but workers that are using these AI tools report that they are intensifying rather than simplifying work.

This problem is becoming more common."

Monday, April 13, 2026

It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears; The Guardian, April 8, 2026

The Guardian; It’s finally happened: I’m now worried about AI. And consulting ChatGPT did nothing to allay my fears

"I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.

Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.

The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman."

Sam Altman May Control Our Future—Can He Be Trusted?; The New Yorker, April 6, 2026

The New Yorker; Sam Altman May Control Our Future—Can He Be Trusted?

"Not all the tendencies that make chatbots dangerous are glitches; some are by-products of how the systems are built. Large language models are trained, in part, on human feedback, and humans tend to prefer agreeable responses. Models often learn to flatter users, a tendency known as sycophancy, and will sometimes prioritize this over honesty. Models can also make things up, a tendency known as hallucination. Major A.I. labs have documented these problems, but they sometimes tolerate them. As models have grown more complex, some hallucinate with more persuasive fabrications. In 2023, shortly before his firing, Altman argued that allowing for some falsehoods can, whatever the risks, confer advantages. “If you just do the naïve thing and say, ‘Never say anything that you’re not a hundred per cent sure about,’ you can get a model to do that,” he said. “But it won’t have the magic that people like so much.”"

Saturday, April 11, 2026

Did AI kill my job, or open up a next chapter?; Public Source, April 10, 2026

[Kip Currier: I posted the following note and excerpt from this Public Source essay for the graduate students in my course, The Information Professional in Communities, this term:

I'm sharing this first-person essay by writer Austin Harvey, from the Pittsburgh local journalism outlet Public Source, which I serendipitously came across and have posted to all of my blogs. Given the work that I currently do as a university faculty instructor, the piece raises thorny questions and considerations for me about what information centers/professionals can do to assist and/or "be there" for individuals and communities who are being displaced by AI.

Also, in what ways do academic programs like this one need to better prepare MLIS students to navigate AI-related positive and negative societal changes?

In what ways will information centers/professionals, as well as information center users, potentially be displaced by AI?

In what ways can information centers/professionals proactively adapt and/or manage this disruptive technological change?

What kinds of advocacy and actions by information professionals are required and needed?

Who are potential partners with whom information professionals can confer and collaborate on behalf of communities to strategically address present and future AI-fueled impacts?]


First-person essay by Austin Harvey, Public Source; Did AI kill my job, or open up a next chapter?

"Many writers feared that they would be the first ones to lose their jobs to AI. I did not share this fear, though I feel my heart rate spike every time I use an em-dash now — and you can pry them from my cold, dead hands when I’m gone. I saw value in human writing. I still do, and believe most people agree. We’ve gotten better at identifying AI-generated text, and while there are certainly a litany of websites out there publishing AI-generated articles, readers generally seem averse to them now. 

I was foolish to think none of this would affect me. 

I wasn’t replaced by AI. In fact, ATI’s editors made it very clear that they would never publish AI-generated articles. But AI was still a disruptive force. Search traffic fell. Google changed the rules on SEO and AdSense. We had editors quit or move on to other jobs, but we never hired anyone else to fill their positions. Our team of 12 became a team of seven, and for the better part of two years we were struggling to put out enough content to satisfy the algorithms. I was burning out constantly, still holding on to the idea that this was surely better than self-employment. 

Then, I was called into a meeting and told I was being let go at the end of January...

It wasn’t that I was replaced by AI, or that AI-generated articles were taking all of the search traffic; it was that a great number of people have stopped reading entirely, opting instead to simply ask ChatGPT or Gemini for answers to their questions. It’s an extension of the same issue that has caused many local news outlets to cease operations or cut staff."

Wednesday, April 8, 2026

Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions; CNBC, April 8, 2026

Jonathan Vanian, CNBC; Meta debuts new AI model, attempting to catch Google, OpenAI after spending billions

"Meta is debuting its first major artificial intelligence model since the costly hiring of Scale AI’s Alexandr Wang nine months ago, as the Facebook parent aims to carve out a niche in a market that’s being dominated by OpenAI, Anthropic and Google.

Dubbed Muse Spark and originally codenamed Avocado, the AI model announced Wednesday is the first from the company’s new Muse series developed by Meta Superintelligence Labs, the AI unit that Wang oversees. Wang joined Meta in June as part of the company’s $14.3 billion investment in Scale AI, where he was CEO."

Sunday, April 5, 2026

What Teens Are Doing With Those Role-Playing Chatbots; The New York Times, April 4, 2026

The New York Times; What Teens Are Doing With Those Role-Playing Chatbots

"There are a growing number of companies offering social chatbots that can act like friends, enemies, lovers, adventurous companions, or the manifestation of a fictional or real person you’ve always wanted to meet. You can pick A.I. Elon Musk’s brain or spar with A.I. Draco Malfoy. The myriad characters, often created by fellow users, offer drama, romance, therapy and LOLs.

Apps that feature role-playing chatbots are used by tens of millions of people, with engagement times that rival or surpass those of social media behemoths such as TikTok, according to market intelligence firm Sensor Tower. The majority of teens surveyed by Pew use A.I. chatbots, with one out of 11 saying they had used Character.AI.

“If you think your child is not talking to chatbot companions, you’re probably wrong,” said Mitch Prinstein, co-director of the Winston Center on Technology and Brain Development at U.N.C. Chapel Hill.

Chatbots are surging in popularity as society is still grappling with how social media has affected young people; a wave of lawsuits is moving through the courts seeking damages from companies that plaintiffs say have deliberately created addictive products. (A jury in California recently found that Meta and YouTube were liable for $6 million in damages to one young woman.) And now parents and caregivers have a new attention-absorbing technology to reckon with.

At the beginning of last year, a high school teacher in Chicago told me that some of her students were dating chatbots, and she worried that they were having their first erotic experiences with them. I wanted to find out what teens had to say about that, so I joined communities devoted to social chatbot apps on the online messaging forum Discord. I introduced myself as a reporter and “an old,” and explained that I was interested in talking to young people who used the services regularly."

Thursday, April 2, 2026

Anthropic boss makes big call on Australian copyright as artists say pay up; Australian Broadcasting Corporation, April 1, 2026

Clare Armstrong, Australian Broadcasting Corporation; Anthropic boss makes big call on Australian copyright as artists say pay up

"In short:

Anthropic CEO Dario Amodei has told a Canberra forum AI is moving faster than any technological change before it.

Mr Amodei says he is not trying to change Australia's mind on copyright, is worried about AI in the hands of autocratic countries, and feels a tax on profits is inevitable.

What's next?

The $555 billion company behind AI program Claude is facing pushback from artists over the use of copyrighted material to train its technology."

Wednesday, April 1, 2026

Anthropic Races to Contain Leak of Code Behind Claude AI Agent; The Wall Street Journal, April 1, 2026

Sam Schechner, The Wall Street Journal; Anthropic Races to Contain Leak of Code Behind Claude AI Agent

Developer issues copyright takedown request in bid to prevent competitors from cloning coding tool’s features

"Anthropic is racing to contain the fallout after accidentally exposing the underlying instructions it uses to direct Claude Code, the popular artificial-intelligence agent app that has won the company an edge with developers and businesses.

By Wednesday morning, Anthropic representatives had used a copyright takedown request to force the removal of more than 8,000 copies and adaptations of the raw Claude Code instructions—known as source code—that developers had shared on programming platform GitHub."

Sunday, March 29, 2026

AI overly affirms users asking for personal advice; Stanford Report, March 26, 2026

Stanford Report; AI overly affirms users asking for personal advice

Not only are AIs far more agreeable than humans when advising on interpersonal matters, but users also prefer the sycophantic models.

"Researchers found chatbots are overly agreeable when giving interpersonal advice, affirming users' behavior even when harmful or illegal.

Users became more convinced they were right and less empathetic, but still preferred the agreeable AI.

Researchers warn sycophancy is an urgent safety issue requiring developer and policymaker attention."