Showing posts with label safety. Show all posts

Friday, April 5, 2024

Assisted living managers say an algorithm prevented hiring enough staff; The Washington Post, April 1, 2024

The Washington Post; Assisted living managers say an algorithm prevented hiring enough staff

"Two decades ago, a group of senior-housing executives came up with a way to raise revenue and reduce costs at assisted-living homes. Using stopwatches, they timed caregivers performing various tasks, from making beds to changing soiled briefs, and fed the information into a program they began using to determine staffing.

Brookdale Senior Living, the leading operator of senior homes with 652 facilities, acquired the algorithm-based system and used it to set staffing at its properties across the nation. But as Brookdale’s empire grew, employees complained the system, known as “Service Alignment,” failed to capture the nuances of caring for vulnerable seniors, documents and interviews show."

Tuesday, March 12, 2024

Florida settles lawsuit after challenge to ‘don’t say gay’ law; Associated Press via The Guardian, March 11, 2024

Associated Press via The Guardian; Florida settles lawsuit after challenge to ‘don’t say gay’ law

"Under the terms of the settlement, the Florida board of education will send instructions to every school district saying the Florida law does not prohibit discussing LGBTQ+ people, nor prevent anti-bullying rules on the basis of sexual orientation and gender identity or disallow Gay-Straight Alliance groups. The settlement also spells out that the law is neutral – meaning what applies to LGBTQ+ people also applies to heterosexual people – and that it doesn’t apply to library books not being used for instruction in the classroom.

The law also doesn’t apply to books with incidental references to LGBTQ+ characters or same-sex couples, “as they are not instruction on sexual orientation or gender identity any more than a math problem asking students to add bushels of apples is instruction on apple farming”, according to the settlement.

“What this settlement does, is, it re-establishes the fundamental principle, that I hope all Americans agree with, which is every kid in this country is entitled to an education at a public school where they feel safe, their dignity is respected and where their families and parents are welcomed,” Roberta Kaplan, the lead attorney for the plaintiffs, said in an interview."

Friday, November 3, 2023

Prison Is a Dangerous Place for LGBTQ+ People. I Made a Safe Space in the Library.; The Marshall Project, November 3, 2023

MICHAEL SHANE HALE, The Marshall Project; Prison Is a Dangerous Place for LGBTQ+ People. I Made a Safe Space in the Library.

"And because queer people have a way of finding spaces that resonate with us, word has spread. Everyone knows that our library has a spot off by itself, waiting to hug the next LGBTQ+ person with stories of acceptance and belonging.

Michael Shane Hale has served nearly 30 years of a 50-years-to-life sentence and is working through the trauma he has experienced and created. Inspired by the many kindnesses that people in his life have afforded him, he hopes to continue his education. This includes pursuing a Ph.D. in neuroscience and machine learning."

Monday, July 3, 2023

Managing the Risks of Generative AI; Harvard Business Review (HBR), June 6, 2023

Harvard Business Review (HBR); Managing the Risks of Generative AI

"Guidelines for the ethical development of generative AI

Our new set of guidelines can help organizations evaluate generative AI’s risks and considerations as these tools gain mainstream adoption. They cover five focus areas."

Friday, April 1, 2022

Self-driving semis may revolutionize trucking while eliminating hundreds of thousands of jobs.; The Hill, March 23, 2022

Joseph Guzman, The Hill; Self-driving semis may revolutionize trucking while eliminating hundreds of thousands of jobs.

"Aniruddh Mohan, a PhD candidate in the department of engineering and public policy at Carnegie Mellon University and co-author of the study, said widespread implementation will depend on how successful pilot programs in the Sun Belt are in the coming years, but warned any lapse in safety could slow down progress. 

“One thing to keep in mind, just as we saw with the passenger vehicle automation race, the moment you even have one accident, that could really set the industry back,” Mohan said. 

“So I think it remains to be seen how quickly this develops.”"

Saturday, March 5, 2022

BBC, CNN and other global news outlets suspend reporting in Russia; The Guardian, March 4, 2022

The Guardian and agencies; BBC, CNN and other global news outlets suspend reporting in Russia

"The new law, passed on Friday, makes intentionally spreading “fake” or “false” news about the Kremlin’s war in Ukraine a criminal offence. President Vladimir Putin approved the new law on Friday evening, according to the Tass state news agency.

It came after the Kremlin accused the BBC of playing a “determined role in undermining the Russian stability and security”.

Davie said: “This legislation appears to criminalise the process of independent journalism. It leaves us no other option than to temporarily suspend the work of all BBC News journalists and their support staff within the Russian Federation while we assess the full implications of this unwelcome development.

“Our BBC News service in Russian will continue to operate from outside Russia.

“The safety of our staff is paramount and we are not prepared to expose them to the risk of criminal prosecution simply for doing their jobs. I’d like to pay tribute to all of them, for their bravery, determination and professionalism.

“We remain committed to making accurate, independent information available to audiences around the world, including the millions of Russians who use our news services. Our journalists in Ukraine and around the world will continue to report on the invasion of Ukraine.”"

Saturday, February 19, 2022

AirTags are being used to track people and cars. Here's what is being done about it; NPR, February 18, 2022

MICHAEL LEVITT, NPR; AirTags are being used to track people and cars. Here's what is being done about it

""As technology becomes more sophisticated and advanced, as wonderful as that is for society, unfortunately, it also becomes much easier to misuse and abuse," she told NPR. "I wouldn't say that we've necessarily seen an uptick with the use of AirTags any more or less than any cutting edge technology."

Williams said that what was rare was a technology company taking the issue seriously and moving to address it.

"[Apple is] not only listening to the field, but actively reaching out at times to do safety checks. That in and of itself might sound like a very small step, but it's rare," she said.

Still, Galperin thinks that Apple should have done more to protect people ahead of time. 

"The mitigations that Apple had in place at the time that the AirTag came out were woefully insufficient," Galperin said. 

"I think that Apple has been very careful and responsive after putting the product out and introducing new mitigations. But the fact that they chose to bring the product to market in the state that it was in last year, is shameful.""

Tuesday, February 15, 2022

What internet outrage reveals about race and TikTok's algorithm; NPR, February 14, 2022

Jess Kung, NPR; What internet outrage reveals about race and TikTok's algorithm

"The more our lives become intertwined and caught up in tech and social media algorithms, the more it's worth trying to understand and unpack just how those algorithms work. Who becomes viral, and why? Who gets harassed, who gets defended, and what are the lasting repercussions? And how does the internet both obscure and exacerbate the racial and gender dynamics that already animate so much of our social interactions?"

Thursday, February 10, 2022

TikTok bans misgendering, deadnaming from its content; NPR, February 9, 2022

NPR; TikTok bans misgendering, deadnaming from its content

"TikTok is updating its community guidelines to ban deadnaming, misgendering and misogyny.

The changes, announced Tuesday, are a part of a broader update designed to promote safety and security on the platform. The app will also remove content that promotes disordered eating and further restrict content related to dangerous acts. 

Last year, a report by GLAAD said TikTok and other top social media sites are all "effectively unsafe for LGBTQ users"...

Along with the new guidelines, TikTok published its most recent quarterly Community Guidelines Enforcement Report. More than 91 million videos — about 1% of all uploaded videos — were removed during the third quarter of 2021 because they violated the guidelines. 

Of all videos removed from July to September 2021, about 1.5% were removed due to hateful behavior, which includes hate speech on the basis of race, sexual orientation and gender, among other attributes."

Monday, February 7, 2022

China’s Peng Shuai says there was ‘misunderstanding’ over her allegations, announces retirement; The Washington Post, February 7, 2022

Christian Shepherd, The Washington Post; China’s Peng Shuai says there was ‘misunderstanding’ over her allegations, announces retirement

"Lu Pin, a prominent Chinese women’s rights activist and founder of the media platform Feminist Voices, who now lives in the United States, said Peng’s new account of what happened “demonstrates a great deal of absurdity.” But Peng, Lu adds, should not be blamed for falling into a “trap set by a violent system” that engages victims to be part of denying that violence to the world.

“We should allow Peng to be safe in the way she can be,” but at the same time, “we must be aware of the system’s brutality and the harm it causes to our universal humanity and moral standards,” Lu said.

While Chinese feminist activists have praised the WTA for demanding an independent investigation and canceling tournaments in the country over Peng’s allegation, they have accused the IOC of being complicit in the Chinese government’s effort to end international scrutiny of the case."

Sunday, January 9, 2022

New York mayor Eric Adams faces nepotism claim over job for brother; The Guardian, January 9, 2022

The Guardian; New York mayor Eric Adams faces nepotism claim over job for brother

"Adams is a retired NYPD officer. So is his brother, Bernard Adams, who most recently worked as assistant director of operations for parking and transportation at the medical campus of Virginia Commonwealth University but has now been appointed as deputy police commissioner with a $240,000-a-year salary. The move has exposed the mayor to accusations of nepotism.

Susan Lerner, executive director of Common Cause New York, a good governance group, told City & State: “New Yorkers expect that public servants are hired based on their unique qualifications and not because they are the mayor’s brother.”

Lerner said the approval of the city conflict of interest board would be required, but “even with a waiver, the appointment of the mayor’s close relative does not inspire public confidence”.

On CNN, Adams said the board would “make the determination and we have a great system here in the city”.

“But let me be clear on this. My brother is qualified for the position. Number one, he will be in charge of my security, which is extremely important to me in a time when we see an increase in white supremacy and hate crimes. I have to take my security in a very serious way.""

Artificial intelligence author kicks off Friends of the Library nonfiction lecture series; Naples Daily News, January 7, 2022

Vicky Bowles, Naples Daily News; Artificial intelligence author kicks off Friends of the Library nonfiction lecture series

"Over the past few decades, a bunch of smart guys built artificial intelligence systems that have had deep impact on our everyday lives. But do they — and their billion-dollar companies — have the human intelligence to keep artificial intelligence safe and ethical?

Questions like this are part of the history and overview of artificial intelligence in Cade Metz’s book “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.”

On Monday, Jan. 17, Metz, a technology correspondent for The New York Times and former senior writer for Wired magazine, is the first speaker in the 2022 Nonfiction Author Series, sponsored by the nonprofit Friends of the Library of Collier County, which raises money for public library programs and resources..."

"NDN: This was such a wonderful sentence early on in your book: “As an undergraduate at Harvard (in the 1940s), using over three thousand vacuum tubes and a few parts from an old B-52 bomber, (Marvin) Minsky built what may have been the first neural network.” Is that kind of amateur, garage-built science still possible, given the speed of innovation now and the billions of dollars that are thrown at development?

CM: It certainly is. It happens all the time, inside universities and out. But in the AI field, this has been eclipsed by the work at giant companies like Google and Facebook. That is one of the major threads in my book: academia struggling to keep up with the rapid rate of progress in the tech industry. It is a real problem. So much of the talent is moving into industry, leaving the cupboard bare at universities. Who will teach the next generation? Who will keep the big tech companies in check? 

NDN: I was amused to see that Google and DeepMind built a team “dedicated to what they called ‘AI safety,’ an effort to ensure that the lab’s technologies did no harm.” My question is, who defines harm within this race to monetize new technologies? Isn’t, for example, the staggering amount of electrical power used to run these systems harmful to the globe?

CM: I am glad you were amused. These companies say we should trust them to ensure AI "safety" and "ethics," but the reality is that safety and ethics are in the eye of the beholder. They can shape these terms to mean whatever they like. Many of the AI researchers at the heart of my book are genuinely concerned about how AI will be misused — how it will cause harm — but when they get inside these large companies, they find that their views clash with the economic aims of these tech giants."

Sunday, October 27, 2019

Google Maps Just Introduced a Controversial New Feature That Drivers Will Probably Love but Police Will Utterly Hate; Inc., October 20, 2019

Bill Murphy Jr., Inc.; Google Maps Just Introduced a Controversial New Feature That Drivers Will Probably Love but Police Will Utterly Hate

"This week, however, Google announced the next best thing: Starting immediately, drivers will be able to report hazards, slowdowns, and speed traps right on Google Maps...

But one group that will likely not be happy is the police. In recent years, police have asked -- or even demanded -- that Waze drop the police-locating feature."

Thursday, October 24, 2019

‘Don’t leave campus’: Parents are now using tracking apps to watch their kids at college; The Washington Post, October 22, 2019

Abby Ohlheiser, The Washington Post; ‘Don’t leave campus’: Parents are now using tracking apps to watch their kids at college

"Many parents install tracking apps with good intentions, said Stacey Steinberg, a law professor at the University of Florida who has studied how technology impacts raising families and privacy. “We don’t want our kids to screw up,” she said. “We don’t want them to get hurt. Technology offers us new ways to feel like we are protecting them — both from others and from themselves.

“But kids need autonomy from their parents, especially when they reach adulthood,” Steinberg added. “If we want our kids to trust us, if we want our kids to believe they are capable of making wise decisions, then our actions need to show it. Valuing their privacy is one way to do so.”"

Tuesday, October 22, 2019

Under digital surveillance: how American schools spy on millions of kids; The Guardian, October 22, 2019

The Guardian; Under digital surveillance: how American schools spy on millions of kids

"The new school surveillance technology doesn’t turn off when the school day is over: anything students type in official school email accounts, chats or documents is monitored 24 hours a day, whether students are in their classrooms or their bedrooms.

Tech companies are also working with schools to monitor students’ web searches and internet usage, and, in some cases, to track what they are writing on public social media accounts.

Parents and students are still largely unaware of the scope and intensity of school surveillance, privacy experts say, even as the market for these technologies has grown rapidly, fueled by fears of school shootings, particularly in the wake of the Parkland shooting in February 2018, which left 17 people dead."

Friday, March 15, 2019

I Almost Died Riding an E-Scooter: Like 99 percent of users, I wasn’t wearing a helmet; Slate, March 14, 2019

Rachel Withers, Slate; I Almost Died Riding an E-Scooter: Like 99 percent of users, I wasn’t wearing a helmet.

"I’ve been rather flippant with friends about what happened because it’s the only way I know how to deal. It’s laughable that you’d get seriously injured scooting. But this isn’t particularly funny. People are always going to be idiots, yes, but idiot people are currently getting seriously injured, in ways that might have been prevented, because tech companies flippantly dumped their product all over cities, without an adequate helmet solution. Facebook’s “move fast and break things” mantra can be applied to many tech companies, but in the case of e-scooters, it might just be “move fast and break skulls.”"

Tuesday, February 12, 2019

EU Recalls Children’s Smartwatch Over Security Concerns; Lexology, February 8, 2019

Hunton Andrews Kurth LLP, Lexology; EU Recalls Children’s Smartwatch Over Security Concerns

"The European Commission has issued an EU-wide recall of the Safe-KID-One children’s smartwatch marketed by ENOX Group over concerns that the device leaves data such as location history, phone and serial numbers vulnerable to hacking and alteration."

Saturday, November 24, 2018

Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.; The Washington Post, November 23, 2018

Drew Harwell, The Washington Post; Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude.

"Predictim’s chief and co-founder Sal Parsa said the company, launched last month as part of the University of California at Berkeley’s SkyDeck tech incubator, takes ethical questions about its use of the technology seriously. Parents, he said, should see the ratings as a companion that “may or may not reflect the sitter’s actual attributes.”...

...[T]ech experts say the system raises red flags of its own, including worries that it is preying on parents’ fears to sell personality scans of untested accuracy.

They also question how the systems are being trained and how vulnerable they might be to misunderstanding the blurred meanings of sitters’ social media use. For all but the highest-risk scans, the parents are given only a suggestion of questionable behavior and no specific phrases, links or details to assess on their own."

Sunday, April 1, 2018

Musk and Zuckerberg are fighting over whether we rule technology—or it rules us; Quartz, April 1, 2018

Michael J. Coren, Quartz; Musk and Zuckerberg are fighting over whether we rule technology—or it rules us

"Firmly in Zuckerberg’s camp are Google co-founder Larry Page, inventor and author Ray Kurzweil, and computer scientist Andrew Ng, a prominent figure in the artificial intelligence community who previously ran the artificial intelligence unit for the Chinese company Baidu. All three seem to share the philosophy that technological progress is almost always positive, on balance, and that hindering that progress is not just bad business, but morally wrong because it deprives society of those benefits.


Musk, alongside others such as Bill Gates, the late physicist Stephen Hawking, and venture investors such as Sam Altman and Fred Wilson, do not see all technological progress as an absolute good. For this reason, they’re open to regulation...


Yonatan Zunger, a former security and privacy engineer at Google has compared software engineers’ power to that of “kids in a toy shop full of loaded AK-47’s.” It’s becoming increasingly clear how dangerous it is to consider safety and ethics elective, rather than foundational, to software design. “Computer science is a field which hasn’t yet encountered consequences,” he writes."

Monday, February 19, 2018

AI ‘gaydar’ could compromise LGBTQ people’s privacy — and safety; Washington Post, February 19, 2018

JD Schramm, Washington Post; AI ‘gaydar’ could compromise LGBTQ people’s privacy — and safety

"The advances in AI and machine learning make it increasingly difficult to hide such intimate traits as sexual orientation, political and religious affiliations, and even intelligence level. The post-privacy future Kosinski examines in his research is upon us. Never has the work of eliminating discrimination been so urgent."