Showing posts with label privacy. Show all posts

Wednesday, August 27, 2025

License plate camera company halts cooperation with federal agencies; Associated Press via ABC News, August 25, 2025

JOHN O'CONNOR, Associated Press; License plate camera company halts cooperation with federal agencies

"One of the nation's leading operators of automated license-plate reading systems announced Monday it has paused its operations with federal agencies because of confusion and concern — including in Illinois — about the purpose of their investigations.

Flock Safety, whose cameras are mounted in more than 4,000 communities nationwide, put a hold last week on pilot programs with the Department of Homeland Security's Customs and Border Protection and its law enforcement arm, Homeland Security Investigations, according to a statement by its founder and CEO, Garrett Langley. 

Among officials in other jurisdictions, Illinois Secretary of State Alexi Giannoulias raised concerns. He announced Monday that an audit found Customs and Border Protection had accessed Illinois data, although he didn't say that the agency was seeking immigration-related information. A 2023 law the Democrat pushed bars sharing license plate data with police investigating out-of-state abortions or undocumented immigrants."

Wednesday, August 20, 2025

Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas; Electronic Frontier Foundation (EFF), August 18, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas

"Fortunately, Section 512(h) has an important limitation that protects users.  Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in In re Internet Subscribers of Cox Communications, LLC, the Ninth Circuit agreed, as EFF urged it to in our amicus brief."

Monday, August 11, 2025

Lost in the wild? AI could find you; Axios, August 10, 2025

"Hikers stranded in remote areas with no cell service or WiFi might have a new lifeline: AI.

The big picture: AI is helping some rescue teams find missing people faster by scanning satellite and drone images.


Zoom in: "AI's contribution is that it can dramatically reduce the time to process imagery and do it more accurately than humans," David Kovar, director of advocacy for NASAR and CEO of cybersecurity company URSA Inc., tells Axios.


Context: It's just one of many resources rescue teams use to help them, Kovar stresses.


AI already is eerily good at geolocating where photos are taken.


  • Last month, the body of a hiker lost for nearly a year was found in Italy in a matter of hours after the National Alpine and Speleological Rescue Corps used AI to analyze a series of drone images.

The intrigue: We also know when people are given the option to share their location as a safety measure, they do it.

What's next: AI agents could be trained to fly drones via an automated system. It's a theory Jan-Hendrik Ewers made the subject of his PhD at the University of Glasgow. 


  • "You could have a fully automated system that monitors reports and triggers drone-based search efforts before a human has lifted a finger," Ewers tells Axios.

  • Barriers to implementing this kind of system are many: money, politics and the fact that when lives are at stake, relying on experimental AI could complicate efforts. 

The other side: Some lost people don't want to be found. And, lost people can't consent.


  • Nearly everyone will want this help, but "there will be cases where, for example, a person who is a victim of domestic violence says she's going out hiking, but she's not. She's not intending to come back," Greg Nojeim, senior counsel and director of the Center for Democracy & Technology's Security and Surveillance Project, tells Axios.

AI ethics depend on the circumstances, and who is using it, William Budington, senior staff technologist at nonprofit advocacy organization Electronic Frontier Foundation, tells Axios.


  • If it's used to save lives and private data used in a rescue operation is wiped after a hiker is found, there is less of a concern, he says.

  • "But, using it to scan images or locate and surveil people, especially those that don't want to be found — either just for privacy reasons, or political dissidents, perhaps — that's a worrying possibility."

Tuesday, August 5, 2025

Police nationwide are embracing a new first responder: Drones; The Washington Post, August 4, 2025

The Washington Post; Police nationwide are embracing a new first responder: Drones

"Law enforcement and drone industry leaders praise the technology as lifesaving, with the potential to help authorities in situations ranging from missing persons cases to active shooter incidents. But critics worry the programs encourage mass surveillance and violate the public’s privacy."

Wednesday, July 30, 2025

Tech giant Palantir helps the US government monitor its citizens. Its CEO wants Silicon Valley to find its moral compass; The Conversation, July 28, 2025

Professor of Society & Environment, University of Technology Sydney, The Conversation; Tech giant Palantir helps the US government monitor its citizens. Its CEO wants Silicon Valley to find its moral compass

"Critics of those who misuse power tend to be outsiders. So, it’s striking that Alexander Karp, co-founder and CEO of data analytics giant Palantir Technologies, has written a book, with Palantir’s head of corporate affairs Nicholas Zamiska, calling on Silicon Valley to find its moral compass...

Karp has described Palantir’s work as “the finding of hidden things”. The New York Times described its work as sifting “through mountains of data to perceive patterns, including patterns of suspicious or aberrant behavior”.

Palantir has worked closely with United States armed forces and intelligence agencies across Democratic and Republican governments for 14 years. It has been criticised for enabling heightened government surveillance and loss of privacy among US citizens."

Friday, July 25, 2025

Trump’s Comments Undermine AI Action Plan, Threaten Copyright; Publishers Weekly, July 23, 2025

Ed Nawotka, Publishers Weekly; Trump’s Comments Undermine AI Action Plan, Threaten Copyright

"Senate bill proposes 'opt-in' legislation

Trump's comments come on the heels of the introduction, by U.S. senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.), of the AI Accountability and Personal Data Protection Act this past Monday following a hearing last week on AI companies' copyright infringement. The bipartisan legislation aims to hold AI firms liable for using copyrighted works or personal data without acquiring explicit consent to train AI models. It would empower individuals—including writers, artists, and content creators—to sue companies in federal court if their data or copyrighted works are used without consent. It also supports class action lawsuits and advocates for violators to pay robust penalties.

"AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse," said Hawley. "It’s time for Congress to give the American worker their day in court to protect their personal data and creative works. My bipartisan legislation would finally empower working Americans who now find their livelihoods in the crosshairs of Big Tech’s lawlessness."

"This bill embodies a bipartisan consensus that AI safeguards are urgent—because the technology is moving at accelerating speed, and so are dangers to privacy," added Blumenthal. "Enforceable rules can put consumers back in control of their data, and help bar abuses. Tech companies must be held accountable—and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent. Consumers must be given rights and remedies—and legal tools to make them real—not relying on government enforcement alone."

Sunday, July 20, 2025

The USDA wants states to hand over food stamp data by the end of July; NPR, July 19, 2025

NPR; The USDA wants states to hand over food stamp data by the end of July

"When Julliana Samson signed up for Supplemental Nutrition Assistance Program (SNAP) benefits to help afford food as she studied at the University of California, Berkeley, she had to turn in extensive, detailed personal information to the state to qualify.

Now she's worried about how that information could be used.

The U.S. Department of Agriculture has made an unprecedented demand to states to share the personal information of tens of millions of federal food assistance recipients by July 30, as a federal lawsuit seeks to postpone the data collection...

She and three other SNAP recipients, along with a privacy organization and an anti-hunger group, are challenging USDA's data demand in a federal lawsuit, arguing the agency has not followed protocols required by federal privacy laws. Late Thursday, they asked a federal judge to intervene to postpone the July 30 deadline and a hearing has been scheduled for July 23.

"I am worried my personal information will be used for things I never intended or consented to," Samson wrote recently as part of an ongoing public comment period for the USDA's plan. "I am also worried that the data will be used to remove benefits access from student activists who have views the administration does not agree with."

Monday, July 7, 2025

Privacy under siege: DOGE’s one big, beautiful database; Brookings, June 25, 2025

Brookings; Privacy under siege: DOGE’s one big, beautiful database

"The Department of Government Efficiency (DOGE), according to reporting in the Washington Post, recently set its sights on creating “a single centralized [government] database” that would enable broad access across government agencies to vast amounts of information currently collected and held by individual federal agencies.  

Government data aggregation and unification on this scale is antithetical to the purpose-driven requirements for data sharing among government agencies that lie at the heart of the Privacy Act, a 1974 law passed in the aftermath of Watergate and the FBI’s Counterintelligence Program (COINTELPRO) scandals."

RFK Jr. wants everyone to use wearables. What are the benefits, risks?; ABC News, July 3, 2025

Mary Kekatos , ABC News; RFK Jr. wants everyone to use wearables. What are the benefits, risks?


[Kip Currier: Probably not a good idea, given the current administration's documented disregard for the rights of people to control access to their own data.

See here, here, here, and here.]


[Excerpt]

"Last week, Health and Human Services Secretary Robert F. Kennedy Jr. announced the agency was launching a campaign to encourage all Americans to use wearables to track health metrics.

Wearables come in the form of watches, bands, rings, patches and clothes that can be used for a variety of reasons, including monitoring glucose levels, measuring activity levels, tracking heart health and observing sleeping patterns.

"It's a way … people can take control over their own health. They can take responsibility," Kennedy said during a hearing of the House Subcommittee on Health."

Monday, June 30, 2025

Peter Thiel’s Palantir poses a grave threat to Americans; The Guardian, June 30, 2025

The Guardian; Peter Thiel’s Palantir poses a grave threat to Americans

"Draw a circle around all the assets in the US now devoted to artificial intelligence.

Draw a second circle around all the assets devoted to the US military.

A third around all assets being devoted to helping the Trump regime collect and compile personal information on millions of Americans.

And a fourth circle around the parts of Silicon Valley dedicated to turning the US away from a democracy into a dictatorship led by tech bros.

Where do the four circles intersect?

At a corporation called Palantir Technologies and a man named Peter Thiel.

In JRR Tolkien’s The Lord of the Rings, a “palantír” is a seeing stone that can be used to distort truth and present selective visions of reality. During the War of the Ring, a palantír falls under the control of Sauron, who uses it to manipulate and deceive.

Palantir Technologies bears a striking similarity. It sells an AI-based platform that allows its users – among them, military and law enforcement agencies – to analyze personal data, including social media profiles, personal information and physical characteristics. These are used to identify and surveil individuals."

Thursday, June 26, 2025

Don’t Let Silicon Valley Move Fast and Break Children’s Minds; The New York Times, June 25, 2025

JESSICA GROSE , The New York Times; Don’t Let Silicon Valley Move Fast and Break Children’s Minds

"On June 12, the toymaker Mattel announced a “strategic collaboration” with OpenAI, the developer of the large language model ChatGPT, “to support A.I.-powered products and experiences based on Mattel’s brands.” Though visions of chatbot therapist Barbie and Thomas the Tank Engine with a souped-up surveillance caboose may dance in my head, the details are still vague. Mattel affirms that ChatGPT is not intended for users under 13, and says it will comply with all safety and privacy regulations.

But who will hold either company to its public assurances? Our federal government appears allergic to any common-sense regulation of artificial intelligence. In fact, there is a provision in the version of the enormous domestic policy bill passed by the House that would bar states from “limiting, restricting or otherwise regulating artificial intelligence models, A.I. systems or automated decision systems entered into interstate commerce for 10 years.”"

Wednesday, June 25, 2025

The alarming rise of US officers hiding behind masks: ‘A police state’; The Guardian, June 25, 2025

Sam Levin, The Guardian ; The alarming rise of US officers hiding behind masks: ‘A police state’

Mike German, an ex-FBI agent, said immigration agents hiding their identities ‘highlights the illegitimacy of actions’

"Some wear balaclavas. Some wear neck gaiters, sunglasses and hats. Some wear masks and casual clothes.

Across the country, armed federal immigration officers have increasingly hidden their identities while carrying out immigration raids, arresting protesters and roughing up prominent Democratic critics.

It’s a trend that has sparked alarm among civil rights and law enforcement experts alike.

Mike German, a former FBI agent, said officers’ widespread use of masks was unprecedented in US law enforcement and a sign of a rapidly eroding democracy. “Masking symbolizes the drift of law enforcement away from democratic controls,” he said.

The Department of Homeland Security (DHS) has insisted masks are necessary to protect officers’ privacy, arguing, without providing evidence, that there has been an uptick in violence against agents...

Were you surprised by the frequent reports of federal officers covering their faces and refusing to identify themselves, especially during the recent immigration raids and protests in Los Angeles?

It is absolutely shocking and frightening to see masked agents, who are also poorly identified in the way they are dressed, using force in public without clearly identifying themselves. Our country is known for having democratic control over law enforcement. When it’s hard to tell who a masked individual is working for, it’s hard to accept that that is a legitimate use of authority. It’s particularly important for officers to identify themselves when they are making arrests. It’s important for the person being arrested, and for community members who might be watching, that they understand this is a law enforcement activity."

Tuesday, June 24, 2025

Copyright Cases Should Not Threaten Chatbot Users’ Privacy; Electronic Frontier Foundation (EFF), June 23, 2025

TORI NOBLE, Electronic Frontier Foundation (EFF); Copyright Cases Should Not Threaten Chatbot Users’ Privacy

"Like users of all technologies, ChatGPT users deserve the right to delete their personal data. Nineteen U.S. States, the European Union, and a host of other countries already protect users’ right to delete. For years, OpenAI gave users the option to delete their conversations with ChatGPT, rather than let their personal queries linger on corporate servers. Now, they can’t. A badly misguided court order in a copyright lawsuit requires OpenAI to store all consumer ChatGPT conversations indefinitely—even if a user tries to delete them. This sweeping order far outstrips the needs of the case and sets a dangerous precedent by disregarding millions of users’ privacy rights.

The privacy harms here are significant. ChatGPT’s 300+ million users submit over 1 billion messages to its chatbots per day, often for personal purposes. Virtually any personal use of a chatbot—anything from planning family vacations and daily habits to creating social media posts and fantasy worlds for Dungeons and Dragons games—reveals personal details that, in aggregate, create a comprehensive portrait of a person’s entire life. Other uses risk revealing people’s most sensitive information. For example, tens of millions of Americans use ChatGPT to obtain medical and financial information. Notwithstanding other risks of these uses, people still deserve privacy rights like the right to delete their data. Eliminating protections for user-deleted data risks chilling beneficial uses by individuals who want to protect their privacy."

Monday, June 23, 2025

Can We See Our Future in China’s Cameras?; The New York Times, June 23, 2025

The New York Times; Can We See Our Future in China’s Cameras?

"The Chinese Communist Party famously uses surveillance to crush dissent and, increasingly, is applying predictive algorithms to get ahead of both crimes and protest. People who screen as potential political agitators, for example, can be prevented from stepping onto trains bound for Beijing. During the Covid pandemic, Chinese health authorities used algorithmic contact tracing and QR codes to block people suspected of viral exposure from entering public spaces. Those draconian health initiatives helped to mainstream invasive surveillance and increase biometric data collection.

It would be comforting to think that China has created a singular dystopia, utterly removed from our American reality. But we are not as different as we might like to think.

Thankfully, our political architecture lacks a unified power structure akin to the C.C.P. Americans — who tend to value individual liberties over collective well-being — have deeply embedded rights which, at least theoretically, protect us from such abuses."

Tuesday, June 10, 2025

Global AI: Compression, Complexity, and the Call for Rigorous Oversight; ABA SciTech Lawyer, May 9, 2025

Joan Rose Marie Bullock, ABA SciTech Lawyer; Global AI: Compression, Complexity, and the Call for Rigorous Oversight

"Equally critical is resisting haste. The push to deploy AI, whether in threat detection or data processing, often outpaces scrutiny. Rushed implementations, like untested algorithms in critical systems, can backfire, as any cybersecurity professional can attest from post-incident analyses. The maxim of “measure twice, cut once” applies here: thorough vetting trumps speed. Lawyers, trained in precedent, recognize the cost of acting without foresight; technologists, steeped in iterative testing, understand the value of validation. Prioritizing diligence over being first mitigates catastrophic failures of privacy breaches or security lapses that ripple worldwide."

Saturday, April 26, 2025

We Already Have an Ethics Framework for AI; Inside Higher Ed, April 25, 2025

Gwendolyn Reece, Inside Higher Ed; We Already Have an Ethics Framework for AI

"We need to develop an ethical framework for assessing uses of new information technology—and specifically AI—that can guide individuals and institutions as they consider employing, promoting and licensing these tools for various functions. There are two main factors about AI that complicate ethical analysis. The first is that an interaction with AI frequently continues past the initial user-AI transaction; information from that transaction can become part of the system’s training set. Secondly, there is often a significant lack of transparency about what the AI model is doing under the surface, making it difficult to assess. We should demand as much transparency as possible from tool providers.

Academia already has an agreed-upon set of ethical principles and processes for assessing potential interventions. The principles in “The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research” govern our approach to research with humans and can fruitfully be applied if we think of potential uses of AI as interventions. These principles not only benefit academia in making assessments about using AI but also provide a framework for technology developers thinking through their design requirements."

U.S. autism data project sparks uproar over ethics, privacy and intent; The Washington Post, April 25, 2025

The Washington Post; U.S. autism data project sparks uproar over ethics, privacy and intent

"The Trump administration has retreated from a controversial plan for a national registry of people with autism just days after announcing it as part of a new health initiative that would link personal medical records to information from pharmacies and smartwatches.

Jay Bhattacharya, director of the National Institutes of Health, unveiled the broad, data-driven initiative to a panel of experts Tuesday, saying it would include “national disease registries, including a new one for autism” that would accelerate research into the rapid rise in diagnoses of the condition.

The announcement sparked backlash in subsequent days over potential privacy violations, lack of consent and the risk of long-term misuse of sensitive data.

The Trump administration still will pursue large-scale data collection, but without the registry that drew the most intense criticism, the Department of Health and Human Services said."