Showing posts with label privacy. Show all posts

Saturday, November 23, 2024

Ronan Farrow on surveillance spyware: ‘It threatens democracy and freedom’; The Guardian, November 23, 2024

The Guardian; Ronan Farrow on surveillance spyware: ‘It threatens democracy and freedom’

"Surveilled, now on HBO, is, on one level, a visual accompaniment to Farrow’s bombshell April 2022 report on how governments – western democracies, autocratic regimes and many in between – secretly use commercial spyware to snoop on their citizens. The hour-long documentary, directed by Matthew O’Neill and Perri Peltz, records the emotional toll, scope and threat potential of a technology most people are neither aware of nor understand. It also serves as an argument for urgent journalistic and civic oversight of commercial spyware – its deliberately obscure manufacturers, its abuse by state clients and its silent erosion of privacy.

The film, like Farrow’s 2022 article and much of his subsequent reporting, primarily concerns a proprietary spyware technology called Pegasus that is produced by the Israeli company NSO Group. Pegasus, as the film chillingly demonstrates, can infiltrate a private device through one of its many third-party apps, sometimes with one click – via a spam or phishing link – or, for certain models, without any action from the device’s owner at all. Once activated, Pegasus can control your phone, turn on your microphone, use the camera, record voice or video, and disgorge any of its data – your texts, photos, location. It is very possible, and now documented, to be hacked by Pegasus and not even know it.

Surveilled follows Farrow on his globe-trotting efforts to trace the invisible, international scope of Pegasus: to Tel Aviv, the center of the commercial spyware industry, where NSO executives toe the party line that the group only sells to governments for law enforcement purposes and has no knowledge of its abuses. To Silicon Valley, where the giant tech companies such as WhatsApp are in a game of cat and mouse with Pegasus and others infiltrating its services. To Canada, where the University of Toronto’s Citizen Lab leads efforts for transparency on who has Pegasus, and what they are doing with it. And to Barcelona, where Citizen Lab representatives detect Pegasus hacks, suspected from and later confirmed by the Spanish government, on pro-Catalan independence politicians, journalists and their families...

“All of the privacy law experts that I’m talking to are very, very afraid right now,” he added. “This tech is just increasingly everywhere, and I think we have to contend with the inevitability that this is not just going to be this path of private companies selling to governments.”

Though in part a film of journalistic process, Surveilled also advocates for a regulatory framework on commercial spyware and surveillance, as well as awareness – even if you are not a journalist, a dissident, an activist, you could be surveilled, with privacy writ large at stake."

Monday, October 28, 2024

Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself; New Jersey Institute of Technology, October 22, 2024

Evan Koblentz, New Jersey Institute of Technology; Panel Reminds Us That Artificial Intelligence Can Only Guess, Not Reason for Itself

"Expert panelists took a measured tone about the trends, challenges and ethics of artificial intelligence, at a campus forum organized by NJIT’s Institute for Data Science this month.

The panel moderator was institute director David Bader, who is also a distinguished professor in NJIT Ying Wu College of Computing and who shared his own thoughts on AI in a separate Q&A recently. The panel members were Kevin Coulter, field CTO for AI, Dell Technologies; Grace Wang, distinguished professor and director of NJIT’s Center for Artificial Intelligence Research; and Mengjia Xu, assistant professor of data science. DataBank Ltd., a data center firm that hosts NJIT’s Wulver high-performance computing cluster, was the event sponsor...

Bader: “There's also a lot of concerns that get raised with AI in terms of privacy, in terms of ethics, in terms of its usage. So I really want to understand your thoughts on how we ensure that AI systems are developed and deployed ethically. And are there specific frameworks or guidelines that you would follow?”...

Wang: “Well, I always believe that AI at its core is just a tool, so there's no difference between AI and, say, lock-picking tools. Lock-picking tools can open your door if you lock yourself out, and they can also open others'. That's a crime, right? So it depends on how AI is used. From that perspective, there's not much special when we talk about AI ethics, or, say, computer security ethics, or the ethics related to how to use a gun, for example. But what is different is, as AI is so complex, it's beyond the knowledge of many of us how it works. Sometimes it looks ethical, but maybe what's behind it is amplifying bias by using the AI tools without our knowledge. So whenever we talk about AI ethics, I think the most important one is education: if you know what AI is about, how it works, and what AI can do and what AI cannot. I think for now we have the fear that AI is so powerful it can do anything, but actually, many of the things that people believe AI can do now could be done in the past by just any software system. So education is very, very important to help us demystify AI so we can talk about AI ethics. I want to emphasize transparency. If AI is used for decision making, understanding how the decision is made becomes very, very important. And another important topic related to AI ethics is auditing: if we don't know what's inside, at least we have some assessment tools to know whether there's a risk or not in certain circumstances, whether it can generate a harmful result or not, much like the stress testing applied to the financial system after 2008.”

Friday, October 11, 2024

23andMe is on the brink. What happens to all its DNA data?; NPR, October 3, 2024

NPR; 23andMe is on the brink. What happens to all its DNA data?

"As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

“I absolutely think this needs to be clarified,” Wiles said. “The company has undergone so many changes and so much turmoil that they need to figure out what they’re doing as a company. But when it comes to my genetic data, I really want to know what they plan on doing.”

Thursday, October 3, 2024

What You Need to Know About Grok AI and Your Privacy; Wired, September 10, 2024

Kate O'Flaherty, Wired; What You Need to Know About Grok AI and Your Privacy

"Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant blamed for spreading misinformation about the 2024 election."

Tuesday, September 24, 2024

LinkedIn is training AI on you — unless you opt out with this setting; The Washington Post, September 23, 2024

The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting

"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”

Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."

Tuesday, September 17, 2024

Ohio sheriff asks for residents' addresses with Kamala Harris signs to send illegal immigrants to homes; Fox News, September 16, 2024

Stepheny Price, Fox News; Ohio sheriff asks for residents' addresses with Kamala Harris signs to send illegal immigrants to homes

"A sheriff in Ohio took to social media to issue a warning to the public that anyone who is showing support for Vice President Kamala Harris's campaign could eventually house some extra guests. 

In a post on his personal campaign page, Portage County Sheriff Bruce Zuchowski appeared to encourage residents to write down the addresses of supporters for Democratic presidential candidate Kamala Harris."

Tuesday, September 10, 2024

This is the best privacy setting that almost no one is using; The Washington Post, September 6, 2024

The Washington Post; This is the best privacy setting that almost no one is using

"Privacy laws in some states, notably California, give people the right to tell most businesses not to sell or share information they collect or in some cases to delete data about you. Some companies apply California’s privacy protections to everyone.

To take advantage of those privacy rights, though, you often must fill out complicated forms with dozens of companies. Hardly anyone does. The opt-out rights give you power in principle, but not in practice.

But baked into some state privacy laws is the option to enlist someone else to handle the legwork for you.

That wand-wielding privacy fairy godmother can be Consumer Reports, whose app can help you opt out of companies saving and selling your data. Even better, the godmother could just be a checkbox you click once to order every company to keep your data secret."

Wednesday, September 4, 2024

NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence; National Endowment for the Humanities (NEH), August 27, 2024

Press Release, National Endowment for the Humanities (NEH); NEH Awards $2.72 Million to Create Research Centers Examining the Cultural Implications of Artificial Intelligence

"The National Endowment for the Humanities (NEH) today announced grant awards totaling $2.72 million for five colleges and universities to create new humanities-led research centers that will serve as hubs for interdisciplinary collaborative research on the human and social impact of artificial intelligence (AI) technologies.

As part of NEH’s third and final round of grant awards for FY2024, the Endowment made its inaugural awards under the new Humanities Research Centers on Artificial Intelligence program, which aims to foster a more holistic understanding of AI in the modern world by creating scholarship and learning centers across the country that spearhead research exploring the societal, ethical, and legal implications of AI. 

Institutions in California, New York, North Carolina, Oklahoma, and Virginia were awarded NEH grants to establish the first AI research centers and pilot two or more collaborative research projects that examine AI through a multidisciplinary humanities lens. 

The new Humanities Research Centers on Artificial Intelligence grant program is part of NEH’s agencywide Humanities Perspectives on Artificial Intelligence initiative, which supports humanities projects that explore the impacts of AI-related technologies on truth, trust, and democracy; safety and security; and privacy, civil rights, and civil liberties. The initiative responds to President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which establishes new standards for AI safety and security, protects Americans’ privacy, and advances equity and civil rights."

Friday, August 30, 2024

Breaking Up Google Isn’t Nearly Enough; The New York Times, August 27, 2024

The New York Times; Breaking Up Google Isn’t Nearly Enough

"Competitors need access to something else that Google monopolizes: data about our searches. Why? Think of Google as the library of our era; it’s the first stop we go to when seeking information. Anyone who wants to build a rival library needs to know what readers are looking for in order to stock the right books. They also need to know which books are most popular, and which ones people return quickly because they’re no good."

Tuesday, August 27, 2024

Ethical and Responsible AI: A Governance Framework for Boards; Directors & Boards, August 27, 2024

Sonita Lontoh, Directors & Boards; Ethical and Responsible AI: A Governance Framework for Boards 

"Boards must understand what gen AI is being used for and its potential business value supercharging both efficiencies and growth. They must also recognize the risks that gen AI may present. As we have already seen, these risks may include data inaccuracy, bias, privacy issues and security. To address some of these risks, boards and companies should ensure that their organizations' data and security protocols are AI-ready. Several criteria must be met:

  • Data must be ethically governed. Companies' data must align with their organization's guiding principles. The different groups inside the organization must also be aligned on the outcome objectives, responsibilities, risks and opportunities around the company's data and analytics.
  • Data must be secure. Companies must protect their data to ensure that intruders don't get access to it and that their data doesn't go into someone else's training model.
  • Data must be free of bias to the greatest extent possible. Companies should gather data from diverse sources, not from a narrow set of people of the same age, gender, race or backgrounds. Additionally, companies must ensure that their algorithms do not inadvertently perpetuate bias.
  • AI-ready data must mirror real-world conditions. For example, robots in a warehouse need more than data; they also need to be taught the laws of physics so they can move around safely.
  • AI-ready data must be accurate. In some cases, companies may need people to double-check data for inaccuracy.

It's important to understand that all these attributes build on one another. The more ethically governed, secure, free of bias and enriched a company's data is, the more accurate its AI outcomes will be."

Monday, August 12, 2024

Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization; Religion News Service, May 6, 2024

Aleja Hertzler-McCain, Religion News Service; Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization

"San Jose, California, Bishop Oscar Cantú, who leads the Catholic faithful in Silicon Valley, said that AI doesn’t come up much with parishioners in his diocese...

Pointing to the adage coined by Meta founder Mark Zuckerberg, “move fast and break things,” the bishop said, “with AI, we need to move very cautiously and slowly and try not to break things. The things we would be breaking are human lives and reputations.”...

Noreen Herzfeld, a professor of theology and computer science at St. John’s University and the College of St. Benedict and one of the editors of a book about AI sponsored by the Vatican Dicastery for Culture and Education, said that the AI character was previously “impersonating a priest, which is considered a very serious sin in Catholicism.”...

Accuracy issues, Herzfeld said, are among the many reasons it should not be used for evangelization. “As much as you beta test one of these chatbots, you will never get rid of hallucinations” — moments when the AI makes up its own answers, she said...

Larrey, who has been studying AI for nearly 30 years and is in conversation with Sam Altman, the CEO of OpenAI, is optimistic that the technology will improve. He said Altman is already making progress on the hallucinations, on its challenges to users’ privacy and reducing its energy use — a recent analysis estimated that by 2027, artificial intelligence could suck up as much electricity as the population of Argentina or the Netherlands."

Friday, August 2, 2024

Justice Department sues TikTok, accusing the company of illegally collecting children’s data; AP, August 2, 2024

Haleluya Hadero, AP; Justice Department sues TikTok, accusing the company of illegally collecting children’s data

"The Justice Department sued TikTok on Friday, accusing the company of violating children’s online privacy law and running afoul of a settlement it had reached with another federal agency. 

The complaint, filed together with the Federal Trade Commission in a California federal court, comes as the U.S. and the prominent social media company are embroiled in yet another legal battle that will determine if – or how – TikTok will continue to operate in the country. 

The latest lawsuit focuses on allegations that TikTok, a trend-setting platform popular among young users, and its China-based parent company ByteDance violated a federal law that requires kid-oriented apps and websites to get parental consent before collecting personal information of children under 13. It also says the companies failed to honor requests from parents who wanted their children’s accounts deleted, and chose not to delete accounts even when the firms knew they belonged to kids under 13."

Friday, July 12, 2024

AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections; Digiday, July 12, 2024

Marty Swant, Digiday; AI Briefing: Senators propose new regulations for privacy, transparency and copyright protections

"The U.S. Senate Commerce Committee on Thursday held a hearing to address a range of concerns about the intersection of AI and privacy. While some lawmakers expressed concern about AI accelerating risks – such as online surveillance, scams, hyper-targeting ads and discriminatory business practices — others cautioned regulations might further protect tech giants and burden smaller businesses."

Saturday, June 29, 2024

2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work; Thomson Reuters Institute, 2024

Thomson Reuters Institute; 2024 Generative AI in Professional Services: Perceptions, Usage & Impact on the Future of Work

"Inaccuracy, privacy worries persist -- More than half of respondents identified such worries as inaccurate responses (70%); data security (68%); privacy and confidentiality of data (62%); complying with laws and regulations (60%); and ethical and responsible usage (57%), as primary concerns for GenAI."

Friday, June 14, 2024

Clearview AI Used Your Face. Now You May Get a Stake in the Company.; The New York Times, June 13, 2024

Kashmir Hill, The New York Times; Clearview AI Used Your Face. Now You May Get a Stake in the Company.

"A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company’s existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents." 

Wednesday, May 29, 2024

Why using dating apps for public health messaging is an ethical dilemma; The Conversation, May 28, 2024

Chancellor's Fellow, Deanery of Molecular, Genetic and Population Health Sciences, Usher Institute Centre for Biomedicine, Self and Society, The University of Edinburgh; Professor of Sociology, University of Manchester; Lecturer in Nursing, University of Manchester, The Conversation; Why using dating apps for public health messaging is an ethical dilemma

"Future collaborations with apps should prioritise the benefit of users over those of the app businesses, develop transparent data policies that prevent users’ data from being shared for profit, ensure the apps’ commitment to anti-discrimination and anti-harassment, and provide links to health and wellbeing services beyond the apps.

Dating apps have the potential to be powerful allies in public health, especially in reaching populations that have often been ignored. However, their use must be carefully managed to avoid compromising user privacy, safety and marginalisation."

Thursday, May 23, 2024

US intelligence agencies’ embrace of generative AI is at once wary and urgent; Associated Press, May 23, 2024

Frank Bajak, Associated Press; US intelligence agencies’ embrace of generative AI is at once wary and urgent

"The CIA’s inaugural chief technology officer, Nand Mulchandani, thinks that because gen AI models “hallucinate” they are best treated as a “crazy, drunk friend” — capable of great insight and creativity but also bias-prone fibbers. There are also security and privacy issues: adversaries could steal and poison them, and they may contain sensitive personal data that officers aren’t authorized to see.

That’s not stopping the experimentation, though, which is mostly happening in secret. 

An exception: Thousands of analysts across the 18 U.S. intelligence agencies now use a CIA-developed gen AI called Osiris. It runs on unclassified and publicly or commercially available data — what’s known as open-source. It writes annotated summaries and its chatbot function lets analysts go deeper with queries...

Another worry: Ensuring the privacy of “U.S. persons” whose data may be embedded in a large-language model.

“If you speak to any researcher or developer that is training a large-language model, and ask them if it is possible to basically kind of delete one individual piece of information from an LLM and make it forget that -- and have a robust empirical guarantee of that forgetting -- that is not a thing that is possible,” John Beieler, AI lead at the Office of the Director of National Intelligence, said in an interview.

It’s one reason the intelligence community is not in “move-fast-and-break-things” mode on gen AI adoption."

An attorney says she saw her library reading habits reflected in mobile ads. That's not supposed to happen; The Register, May 18, 2024

Thomas Claburn, The Register; An attorney says she saw her library reading habits reflected in mobile ads. That's not supposed to happen

"In December 2023, University of Illinois Urbana-Champaign information sciences professor Masooda Bashir led a study titled "Patron Privacy Protections in Public Libraries" that was published in The Library Quarterly. The study found that while libraries generally have basic privacy protections, there are often gaps in staff training and in privacy disclosures made available to patrons.

It also found that some libraries rely exclusively on social media for their online presence. "That is very troubling," said Bashir in a statement. "Facebook collects a lot of data – everything that someone might be reading and looking at. That is not a good practice for public libraries.""

Wednesday, May 22, 2024

Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content; UT News, The University of Texas at Austin, May 21, 2024

UT News, The University of Texas at Austin; Machine ‘Unlearning’ Helps Generative AI ‘Forget’ Copyright-Protected and Violent Content

"When people learn things they should not know, getting them to forget that information can be tough. This is also true of rapidly growing artificial intelligence programs that are trained to think as we do, and it has become a problem as they run into challenges based on the use of copyright-protected material and privacy issues.

To respond to this challenge, researchers at The University of Texas at Austin have developed what they believe is the first “machine unlearning” method applied to image-based generative AI. This method offers the ability to look under the hood and actively block and remove any violent images or copyrighted works without losing the rest of the information in the model.

“When you train these models on such massive data sets, you’re bound to include some data that is undesirable,” said Radu Marculescu, a professor in the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering and one of the leaders on the project. “Previously, the only way to remove problematic content was to scrap everything, start anew, manually take out all that data and retrain the model. Our approach offers the opportunity to do this without having to retrain the model from scratch.”"

Wednesday, May 15, 2024

Illinois Attorney General Kwame Raoul sues company for publishing voters’ personal data; Chicago Sun-Times, May 9, 2024

Chicago Sun-Times; Illinois Attorney General Kwame Raoul sues company for publishing voters’ personal data

"A publishing company whose politically-slanted newspapers have been derided as “pink slime” is being sued by Illinois Attorney General Kwame Raoul for illegally identifying birthdates and home addresses of “hundreds of thousands” of voters.

Raoul’s legal move against Local Government Information Services accuses the company of publishing sensitive personal data that could subject voters across Illinois to identity theft.

Among those whose personal data has been identified on LGIS’ nearly three dozen online websites are current and former judges, police officers, high-ranking state officials and victims of domestic violence and human trafficking, Raoul’s filing said."