Showing posts with label safety. Show all posts

Wednesday, February 18, 2026

Honoring Alex Pretti’s Moral Courage and the Cost of Caring; The Hastings Center for Bioethics, February 17, 2026

Connie M. Ulrich, Mary D. Naylor, and Martha A. Q. Curley, The Hastings Center for Bioethics; Honoring Alex Pretti’s Moral Courage and the Cost of Caring

"The death of Alex Pretti, an ICU nurse who was killed last month in an anti-immigration protest in Minneapolis, is, first and foremost, a devastating loss for his loved ones. But it has also shaken the nursing profession to the core. 

People often encounter nurses at the bedside when they  are ill or someone close to them is ill. But nurses also have a long history of advocating for social justice in their communities, speaking out against unjust policies, challenging unsafe practices, and advancing public health reforms.

The 2025 Code of Ethics for Nurses reflects this activism. It calls on all nurses to be civically engaged and to work toward policies and systems that have positive ends for the communities in which we live and work. Alex met this call. 

Alex used his ICU training to help someone in need; it was second nature to him and reflected his primary obligation as a registered nurse to protect the rights and well-being of patients, families, and communities. He lost his life because he helped a woman during a protest against federal immigration action in Minneapolis. Pretti stepped in front of the woman, who was on the ground, to protect her from being pepper sprayed by U.S. Border Patrol agents. Agents then pinned Pretti to the ground and shot him.

Nurses are no strangers to conflict and moral turmoil. They take a professional and ethical oath to care for anyone — victim or perpetrator — regardless of their identity or ideological belief. But Alex’s death exposes a stark and troubling reality for every nurse and healthcare provider: Immigration enforcement agents are now occupying spaces that should be protected in hospitals, waiting rooms, lobbies, and clinics. These are places where patients must feel safe and trust that they will receive care without discrimination and be protected from intimidation. 

The presence of immigration enforcement agents in these places is creating profound moral distress and a climate of deep fear for all those who deliver care and for the people who need it most within these buildings. Nurses and other healthcare providers are caught in the age-old dilemma between what is ethical and what is legal: They question what they ought to do when faced with immigration enforcement agents standing outside hospital rooms and observing the care they are ethically and professionally obligated to protect.

When nurses and other healthcare providers cannot meet their ethical duties to protect the rights and welfare of their patients, this distress can intensify into a deeper wound with lingering residue of regret and a searing violation of their sense of integrity. 

For their part, patients may withhold critical health information, become afraid to ask questions, and mistrust health professionals when immigration enforcement agents are present. Patients who are immigrants are most vulnerable to these harms, but other patients may also experience them. The harms – to healthcare providers and patients – can ultimately compromise ethical decision-making, patient-and family-centered care, and the overall quality of care that all patients deserve, and healthcare providers are trained to deliver.

The patients and families cared for by Alex will always remember him. Nurses will remember Alex’s sacrifice – that his caring extended beyond the walls of his hospital to the stranger he protected in his community. 

Nurses can honor Alex’s moral courage through our individual and professional resolve. We must say no more to the infiltration of immigration enforcement into healthcare spaces that were previously off limits to them. We must speak out on re-establishing “safe zones,” hospital-wide policies that limit enforcement access, and confidential reporting mechanisms that reflect the humanity of the nursing profession towards those we took an oath to serve. 

May a better and more humane world prevail, reminding each of us that moral courage carries risk, but it also helps us rise to the occasion when change and moral repair are needed most. We are at that moment.

Connie M. Ulrich, PhD, RN, is a registered nurse and professor of nursing and of medical ethics and health policy at the University of Pennsylvania School of Nursing and a Hastings Center Fellow. LinkedIn: connieulrich1. X: @cm_ulrich

Mary D. Naylor, PhD, RN, is a registered nurse and professor of gerontology and nursing at Penn’s School of Nursing. LinkedIn: Mary_Naylor. X: @MaryDNaylor

Martha A. Q. Curley, PhD, RN, is a registered nurse and professor of pediatric nursing at Penn’s School of Nursing. LinkedIn: Martha-a-q-curley. X: maqcurley. Bluesky: @maqc.bsky.social"

Sunday, February 15, 2026

The problem with doorbell cams: Nancy Guthrie case and Ring Super Bowl ad reawaken surveillance fears; The Guardian, February 14, 2026

The Guardian; The problem with doorbell cams: Nancy Guthrie case and Ring Super Bowl ad reawaken surveillance fears

"What happens to the data that smart home cameras collect? Can law enforcement access this information – even when users aren’t aware officers may be viewing their footage? Two recent events have put these concerns in the spotlight.

A Super Bowl ad by the doorbell-camera company Ring and the FBI’s pursuit of the kidnapper of Nancy Guthrie, the mother of Today show host Savannah Guthrie, have resurfaced longstanding concerns about surveillance against a backdrop of the Trump administration’s immigration crackdown. The fear is that home cameras’ video feeds could become yet another part of the government’s mass surveillance apparatus...

“Ring has a history of playing it pretty loose with people’s privacy rights,” said Beryl Lipton, senior investigative researcher at the Electronic Frontier Foundation. In 2023, the Federal Trade Commission charged the company with “compromising its customers’ privacy by allowing any employee or contractor to access consumers’ private videos and by failing to implement basic privacy and security protections”. This, in turn, allowed hackers to “take control of consumers’ accounts, cameras, and videos”. Ring agreed to pay $5.8m in a settlement with the FTC."

Friday, January 23, 2026

Anthropic’s Claude AI gets a new constitution embedding safety and ethics; CIO, January 22, 2026

CIO; Anthropic’s Claude AI gets a new constitution embedding safety and ethics

"Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.

Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), “genuinely helpful” (benefitting its users), as well as being “compliant with Anthropic’s guidelines”.

According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.

Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.

While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.

“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic."

Saturday, January 3, 2026

University of Rochester's incoming head librarian looks to adapt to AI; WXXI, January 2, 2026

Noelle E. C. Evans, WXXI; University of Rochester's incoming head librarian looks to adapt to AI

"A new head librarian at the University of Rochester is preparing to take on a growing challenge — adapting to generative artificial intelligence.

Tim McGeary takes on the position of university librarian and dean of libraries on March 1. He is currently associate librarian for digital strategies and technology at Duke University, where he’s witnessed AI challenges firsthand...

“(The university’s digital repository) was dealing with an unforeseen consequence of its own success: By making (university) research freely available to anyone, it had actually made it less accessible to everyone,” Jamie Washington wrote for the campus online news source, UDaily.

That balance between open access and protecting students, researchers and publishers from potential harms from AI is a space of major disruption, McGeary said.

"If they're doing this to us, we have open systems, what are they possibly doing to those partners we have in the publishing space?" McGeary asked. "We've already seen some of the larger AI companies have to be in court because they have acquired content in ways that are not legal.”

In the past 25 years, he said he’s seen how university libraries have evolved with changing technology; they've had to reinvent how they serve research and scholarship. So in a way, this is another iteration of those challenges, he said."

Sunday, December 28, 2025

Waymo Suspended Service in San Francisco After Its Cars Stalled During Power Outage; The New York Times, December 21, 2025

Sonia A. Rao and Christina Morales, The New York Times; Waymo Suspended Service in San Francisco After Its Cars Stalled During Power Outage

"An hourslong power outage in San Francisco over the weekend that caused tens of thousands of households to lose electricity also knocked out Waymo service, with the ubiquitous self-driving cars coming to a halt at darkened traffic signals, blocking traffic and angering drivers of regular vehicles that became stuck as a result.

The ride-hailing service remained offline Sunday afternoon and tow truck operators said they had been towing Waymos for hours overnight. Social media was littered with videos of the vehicles at blocked intersections with their hazard lights blinking...

Beyond safety, Waymo critics have argued that the self-driving cars could siphon people from public transit ridership and eliminate jobs while simultaneously enriching executives in Silicon Valley."

Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.; The Washington Post, December 23, 2025

The Washington Post; Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs.

"She had thought she knew how to keep her daughter safe online. H and her ex-husband — R’s father, who shares custody of their daughter — were in agreement that they would regularly monitor R’s phone use and the content of her text messages. They were aware of the potential perils of social media use among adolescents. But like many parents, they weren’t familiar with AI platforms where users can create intimate, evolving and individualized relationships with digital companions — and they had no idea their child was conversing with AI entities.

This technology has introduced a daunting new layer of complexity for families seeking to protect their children from harm online. Generative AI has attracted a rising number of users under the age of 18, who turn to chatbots for things such as help with schoolwork, entertainment, social connection and therapy; a survey released this month by Pew Research Center, a nonpartisan polling firm, found that nearly a third of U.S. teens use chatbots daily.

And an overwhelming majority of teens — 72 percent — have used AI companions at some point; about half use them a few times a month or more, according to a July report from Common Sense Media, a nonpartisan, nonprofit organization focused on children’s digital safety."

Friday, December 19, 2025

He Tried to Protect His Son From Bullies. He Didn’t Know How Far They Would Go.; The New York Times Magazine, December 15, 2025

The New York Times Magazine; He Tried to Protect His Son From Bullies. He Didn’t Know How Far They Would Go.

"Then came the incident that would connect Tristan’s case to a larger epidemic of bullying in the East Valley. In late October, news broke of the fatal assault on 16-year-old Preston Lord, who was attacked by several peers outside a Halloween party in the desert town of Queen Creek. Preston was knocked down, kicked and punched repeatedly in the head. After his assailants fled, other partygoers attempted to perform C.P.R.; approximately 48 hours later, doctors at a nearby hospital pronounced him dead.

“I remember we published something on the Lord death, and right away we got an absolute flood of community tips — people across the East Valley coming out of the woodwork to say that they had info to share,” Ashley Holden, a television reporter with an ABC affiliate, told me. “And a lot of the rumblings had to do with a group that called itself the Gilbert Goons.”

Many of the teenagers reportedly came from wealthy families, dealt drugs and carried guns. Anyone could label themselves a member, or, with equal facility, disavow their association — there was no official swearing in, no hierarchy, no leaders...

As is increasingly common in cases involving bullying or teen violence, many of the incidents subsequently attributed to the Goons involved exchanges that took place via text, social media platforms or video game chat logs visible only to the participants. Even when investigators were able to obtain a warrant for the information, says Jim Bisceglie, an assistant chief for the Gilbert police, “sometimes the data is already gone from Company A or B or C by the time it’s sent over to us. It’s piecemeal, and you’ve got the complexity of reading through all the messages, trying to understand the order."...

Rick later met with the parents of Connor Jarnagan and Preston Lord, who gave him the validation he had been seeking for more than two years. He wasn’t alone — he hadn’t lost his mind. In extensive interviews with reporters across the East Valley, Rick took to railing loudly against the police for ignoring a clear and present danger in their midst. “They could have had all this stuff done months ago,” he said in one exchange with a reporter from The Arizona Republic. Had the authorities done so, he went on, his son would still be home, and Preston Lord would still be alive. He made a point of attending school board meetings, community rallies...

To Rick, the underlying issue was one of “reactivity versus proactivity” — a phrase I heard him use often. “You had the city and the schools essentially sweeping this stuff under the rug,” he said. “Hoping it would go away. Hoping it was just kids being kids.”"

Thursday, December 11, 2025

Banning AI Regulation Would Be a Disaster; The Atlantic, December 11, 2025

Chuck Hagel, The Atlantic; Banning AI Regulation Would Be a Disaster

"On Monday, Donald Trump announced on Truth Social that he would soon sign an executive order prohibiting states from regulating AI...

The greatest challenges facing the United States do not come from overregulation but from deploying ever more powerful AI systems without minimum requirements for safety and transparency...

Contrary to the narrative promoted by a small number of dominant firms, regulation does not have to slow innovation. Clear rules would foster growth by hardening systems against attack, reducing misuse, and ensuring that the models integrated into defense systems and public-facing platforms are robust and secure before deployment at scale.

Critics of oversight are correct that a patchwork of poorly designed laws can impede that mission. But they miss two essential points. First, competitive AI policy cannot be cordoned off from the broader systems that shape U.S. stability and resilience...

Second, states remain the country’s most effective laboratories for developing and refining policy on complex, fast-moving technologies, especially in the persistent vacuum of federal action...

The solution to AI’s risks is not to dismantle oversight but to design the right oversight. American leadership in artificial intelligence will not be secured by weakening the few guardrails that exist. It will be secured the same way we have protected every crucial technology touching the safety, stability, and credibility of the nation: with serious rules built to withstand real adversaries operating in the real world. The United States should not be lobbied out of protecting its own future."

'It's insulting they think we can't handle it': The Australian teens banned from social media; BBC, December 10, 2025

Katy Watson, BBC; 'It's insulting they think we can't handle it': The Australian teens banned from social media

"With nearly all her friends living at least 100km away, social media is a lifeline. But not anymore, now that Australia's social media ban for children has taken effect.

"Taking away our socials is just taking away how we talk to each other," Breanna says. 

While she can still text her friends, it's not the same as a quick "snap" or a "like" on a photo that allows her to play a part in their lives even when she is far away."

'This is the end': Australian teens mourn loss of social media as ban begins; Reuters, December 10, 2025

Reuters; 'This is the end': Australian teens mourn loss of social media as ban begins

"Australian teenagers have taken to social media for the last time to farewell their followers and mourn the loss of the platforms that shaped much of their lives before a world-first ban took effect on Wednesday.

In the hours leading up to the ban's midnight start (1300 GMT on Tuesday), a flurry of goodbye messages came from teenagers - as well as adults - on platforms including TikTok, Instagram and Reddit.

Australia has ordered 10 major platforms including TikTok, Alphabet's YouTube and Meta's Instagram and Facebook to block around one million users under the age of 16 or face massive fines.

Some 200,000 accounts have already been deactivated on TikTok alone, the government said, with "hundreds of thousands" to be blocked in the coming days.

Young Australians, who have grown up using social media, faced the prospect of losing access to their favourite apps with a mix of sadness, humour and disbelief."

Sunday, September 28, 2025

‘Children thrive down here’: the secret play centre hidden under Ukraine’s most dangerous city; The Guardian, September 24, 2025

The Guardian; ‘Children thrive down here’: the secret play centre hidden under Ukraine’s most dangerous city

"In an underground shelter in Kherson, probably the most dangerous city in Ukraine, children are chasing each other between plastic chairs. Outside, mortars, artillery and drones fly their deadly paths back and forth across the Dnipro River that separates the city from Russian forces.

This makeshift underground play centre is one of the few places where children can socialise with each other in safety. For a few hours, it can be as though the war is not happening. When the explosions get too close, teachers working at the centre clap louder or turn up the music to drown out the noise.

As children returned to school across Ukraine this month, one in three are having their fourth consecutive academic year disrupted. In frontline areas such as Kherson, where schools have been damaged in attacks, children have to study largely online...

Across Ukraine as a whole, more than 3,000 children have been killed or injured since the start of full-scale war in 2022 – equivalent to about 150 classrooms of children.

With such dangers, families are forced to spend much of their lives underground or indoors, calculating every errand against mortal danger. Stuck indoors, teachers say children now struggle to socialise, and their speech and confidence have been set back without access to their peers, while some have not yet learned to read.

Narmina Strishenets, a conflict adviser with the UK charity Save the Children, says: “Instead of focusing on play, socialising and passions, children are focused on physical survival.

“Many are now one or two years behind in core subjects,” says Strishenets. “Childhood is under attack and they are losing hope.” 

The underground play centre, at a secret location in a residential area of Kherson, is one of the few spaces where children get personal support from teachers and psychologists. It was set up last year by the chair of a local housing association, Oleh Turchynskyi."

Friday, August 29, 2025

ChatGPT offered bomb recipes and hacking tips during safety tests; The Guardian, August 28, 2025

The Guardian; ChatGPT offered bomb recipes and hacking tips during safety tests

"A ChatGPT model gave researchers detailed instructions on how to bomb a sports venue – including weak points at specific arenas, explosives recipes and advice on covering tracks – according to safety testing carried out this summer.

OpenAI’s GPT-4.1 also detailed how to weaponise anthrax and how to make two types of illegal drugs.

The testing was part of an unusual collaboration between OpenAI, the $500bn artificial intelligence start-up led by Sam Altman, and rival company Anthropic, founded by experts who left OpenAI over safety fears. Each company tested the other’s models by pushing them to help with dangerous tasks.

The testing is not a direct reflection of how the models behave in public use, when additional safety filters apply. But Anthropic said it had seen “concerning behaviour … around misuse” in GPT-4o and GPT-4.1, and said the need for AI “alignment” evaluations is becoming “increasingly urgent”."

Tuesday, August 12, 2025

Man develops rare condition after ChatGPT query over stopping eating salt; The Guardian, August 12, 2025

The Guardian; Man develops rare condition after ChatGPT query over stopping eating salt

"A US medical journal has warned against using ChatGPT for health information after a man developed a rare condition following an interaction with the chatbot about removing table salt from his diet.

An article in the Annals of Internal Medicine reported a case in which a 60-year-old man developed bromism, also known as bromide toxicity, after consulting ChatGPT."

Tuesday, August 5, 2025

Police nationwide are embracing a new first responder: Drones; The Washington Post, August 4, 2025

The Washington Post; Police nationwide are embracing a new first responder: Drones

"Law enforcement and drone industry leaders praise the technology as lifesaving, with the potential to help authorities in situations ranging from missing persons cases to active shooter incidents. But critics worry the programs encourage mass surveillance and violate the public’s privacy."

Tuesday, July 29, 2025

Proposed Pa. bill would prohibit law enforcement from wearing masks, would require identifiable uniforms; WFMZ, July 28, 2025

WFMZ; Proposed Pa. bill would prohibit law enforcement from wearing masks, would require identifiable uniforms

"State Rep. Joshua Siegel is co-signing a Pennsylvania bill that, if passed, would prohibit law enforcement from wearing masks and require identifiable uniforms or clothing.

"We have a right to know who's in our community and what they're doing," said Siegel.

A memo for the bill said this comes following Homeland Security and U.S. Border Patrol allowing agents to conceal their faces."

Sunday, July 20, 2025

Ice chief says he will continue to allow agents to wear masks during arrest raids; The Guardian, July 20, 2025

The Guardian; Ice chief says he will continue to allow agents to wear masks during arrest raids

"The head of US Immigration and Customs Enforcement (Ice) said on Sunday that he will continue allowing the controversial practice of his officers wearing masks over their faces during their arrest raids.

As Donald Trump has ramped up his unprecedented effort to deport immigrants around the country, Ice officers have become notorious for wearing masks to approach and detain people, often with force. Legal advocates and attorneys general have argued that it poses accountability issues and contributes to a climate of fear.

On Sunday, Todd Lyons, the agency’s acting director, was asked on CBS Face the Nation about imposters exploiting the practice by posing as immigration officers. “That’s one of our biggest concerns. And I’ve said it publicly before, I’m not a proponent of the masks,” Lyons said.

“However, if that’s a tool that the men and women of Ice to keep themselves and their family safe, then I will allow it.”

Lyons has previously defended the practice of mask-wearing, telling Fox News last week that “while I’m not a fan of the masks, I think we could do better, but we need to protect our agents and officers”, claiming concerns about doxxing (the public revealing of personal information such as home addresses), and declaring that assaults of immigration officers have increased by 830%."

Thursday, June 26, 2025

Don’t Let Silicon Valley Move Fast and Break Children’s Minds; The New York Times, June 25, 2025

Jessica Grose, The New York Times; Don’t Let Silicon Valley Move Fast and Break Children’s Minds

"On June 12, the toymaker Mattel announced a “strategic collaboration” with OpenAI, the developer of the large language model ChatGPT, “to support A.I.-powered products and experiences based on Mattel’s brands.” Though visions of chatbot therapist Barbie and Thomas the Tank Engine with a souped-up surveillance caboose may dance in my head, the details are still vague. Mattel affirms that ChatGPT is not intended for users under 13, and says it will comply with all safety and privacy regulations.

But who will hold either company to its public assurances? Our federal government appears allergic to any common-sense regulation of artificial intelligence. In fact, there is a provision in the version of the enormous domestic policy bill passed by the House that would bar states from “limiting, restricting or otherwise regulating artificial intelligence models, A.I. systems or automated decision systems entered into interstate commerce for 10 years.”"

Friday, February 7, 2025

Pay Attention to the FBI; The Atlantic, February 6, 2025

Hanna Rosin, The Atlantic; Pay Attention to the FBI

"In this episode of Radio Atlantic, we discuss where Trump and Musk seem to be headed and the obstacles they are likely to encounter in the future. What happens when Trump starts to face challenges from courts? What happens when Musk goes after programs that Americans depend on, particularly those who voted for Trump? What new political alliances might emerge from the wreckage? We talk with staff writer Jonathan Chait, who covers politics. And we also talk with Shane Harris, who covers national security, about Trump’s campaign to purge the FBI of agents who worked on cases related to the insurrection at the Capitol.

“I think that will send a clear message to FBI personnel that there are whole categories of people and therefore potential criminal activity that they should not touch, because it gets into the president, his influence, his circle of friends,” Harris says. “I think that is just a potentially ruinous development for the rule of law in the United States.”"

Monday, February 3, 2025

What happens after you ask Trump to ‘have mercy’? Threats, praise and hope.; The Washington Post, February 2, 2025

The Washington Post; What happens after you ask Trump to ‘have mercy’? Threats, praise and hope.

"Last month, Rep. Josh Brecheen (R-Oklahoma) introduced a resolution calling for the House to recognize Budde’s sermon as a “display of political activism and condemning its distorted message.”

Budde, according to the resolution, promoted “political bias instead of advocating the full counsel of biblical teaching.”

On Sunday, after the service, she pondered the lawmaker’s action.

“It isn’t political activism for a pastor to ask for mercy,” she said. “It is an expression of Christian faith and the teachings of Jesus.”"