Showing posts with label data collection and use. Show all posts

Saturday, November 30, 2024

Why I regret using 23andMe: I gave up my DNA just to find out I’m British; The Guardian, November 30, 2024

The Guardian; Why I regret using 23andMe: I gave up my DNA just to find out I’m British

"With the future of 23andMe in peril, the overarching question among previous customers now is what will happen to the data that has already been collected. Leuenberger noted that by entering DNA into a database, users sacrifice not only their own privacy but that of blood relatives. Because an individual’s DNA is similar in structure to that of their relatives, information about others can be gleaned from one person’s sample. This is especially pronounced with the rise of open-access DNA sites like GEDMatch, on which users can upload genetic data that can be compared to other samples. A consumer genealogy test contributed to the identification of serial killer Joseph James DeAngelo.

“What is ethically tricky with genetic data is that it’s not just about self-knowledge – it’s also knowledge about all of your relatives,” Leuenberger said. “Morally speaking, it is not necessarily information that is yours to give – and this risk is exacerbated if this company goes down and the fate of the data becomes more perilous.”"

Wednesday, November 20, 2024

What 23andMe Owes its Users; The Hastings Center, November 18, 2024

Jonathan LoTempio, Jr., The Hastings Center; What 23andMe Owes its Users

"In the intervening years, 23andMe has sent you new findings related to your health status. You wonder: Is my data protected? Can I get it back?

There are protections for users of 23andMe and other direct-to-consumer genetic testing companies. Federal laws, including the Genetic Information Nondiscrimination Act (GINA) and the Affordable Care Act, protect users from employment and insurance discrimination. Residents of certain states including California have agencies where they can register complaints. 23andMe, which is based in California, has a policy in line with California citizens’ new right to access and delete their data. European residents have even more extensive rights over their digital data.

American users can rest assured that there are strong legal mechanisms under the Committee on Foreign Investment in the U.S. that can block foreign acquisition of U.S. firms on national security grounds. For certain critical sectors like biotech, the committee may consider, among other factors, whether a proposed transaction would result in the U.S. losing its place as a global industry leader as part of its review.

Any attempt by a foreign company to acquire 23andMe would be subject to a CFIUS review and could be blocked on national security grounds, particularly if the foreign company is headquartered in a “country of special concern” such as China, Russia, or Iran. As for acquisitions by U.S. companies, the legal landscape is a bit more Wild West. Buyers based in the U.S. could change policies to which users agreed long ago, in a world rather different than ours.

November 2024: With a new board the immediate crisis at 23andMe has been averted. However, long-term concerns remain regarding potential buyers and how they might respond to 23andMe’s layoffs and shuttering of its drug development arm, both of which suggest instability of the company. 23andMe and other DTC genetic testing companies should consider what they owe their users.

One thing they owe users is to implement a policy that, in the case of a sale, the companies will notify users multiple times and in multiple ways and give them the option of deleting their data."

Monday, November 4, 2024

What AI knows about you; Axios, November 4, 2024

Ina Fried, Axios; What AI knows about you

"Most AI builders don't say where they are getting the data they use to train their bots and models — but legally they're required to say what they are doing with their customers' data.

The big picture: These data-use disclosures open a window onto the otherwise opaque world of Big Tech's AI brain-food fight.

  • In this new Axios series, we'll tell you, company by company, what all the key players are saying and doing with your personal information and content.

Why it matters: You might be just fine knowing that picture you just posted on Instagram is helping train the next generative AI art engine. But you might not — or you might just want to be choosier about what you share.

Zoom out: AI makers need an incomprehensibly gigantic amount of raw data to train their large language and image models. 

  • The industry's hunger has led to a data land grab: Companies are vying to teach their baby AIs using information sucked in from many different sources — sometimes with the owner's permission, often without it — before new laws and court rulings make that harder. 

Zoom in: Each Big Tech giant is building generative AI models, and many of them are using their customer data, in part, to train them.

  • In some cases it's opt-in, meaning your data won't be used unless you agree to it. In other cases it is opt-out, meaning your information will automatically get used unless you explicitly say no. 
  • These rules can vary by region, thanks to legal differences. For instance, Meta's Facebook and Instagram are "opt-out" — but you can only opt out if you live in Europe or Brazil.
  • In the U.S., California's data privacy law is among the laws responsible for requiring firms to say what they do with user data. In the EU, it's the GDPR."

Thursday, October 24, 2024

WITHOUT KNOWLEDGE OR CONSENT; ProPublica, October 24, 2024

Corey G. Johnson, ProPublica; WITHOUT KNOWLEDGE OR CONSENT

"FOR YEARS, America’s most iconic gun-makers turned over sensitive personal information on hundreds of thousands of customers to political operatives.

Those operatives, in turn, secretly employed the details to rally firearm owners to elect pro-gun politicians running for Congress and the White House, a ProPublica investigation has found.

The clandestine sharing of gun buyers’ identities — without their knowledge and consent — marked a significant departure for an industry that has long prided itself on thwarting efforts to track who owns firearms in America.

At least 10 gun industry businesses, including Glock, Smith & Wesson, Remington, Marlin and Mossberg, handed over names, addresses and other private data to the gun industry’s chief lobbying group, the National Shooting Sports Foundation. The NSSF then entered the gun owners’ details into what would become a massive database.

The data initially came from decades of warranty cards filled out by customers and returned to gun manufacturers for rebates and repair or replacement programs.

A ProPublica review of dozens of warranty cards from the 1970s through today found that some promised customers their information would be kept strictly confidential. Others said some information could be shared with third parties for marketing and sales. None of the cards informed buyers their details would be used by lobbyists and consultants to win elections."

Friday, October 11, 2024

23andMe is on the brink. What happens to all its DNA data?; NPR, October 3, 2024

NPR; 23andMe is on the brink. What happens to all its DNA data?

"As 23andMe struggles for survival, customers like Wiles have one pressing question: What is the company’s plan for all the data it has collected since it was founded in 2006?

“I absolutely think this needs to be clarified,” Wiles said. “The company has undergone so many changes and so much turmoil that they need to figure out what they’re doing as a company. But when it comes to my genetic data, I really want to know what they plan on doing.”

Thursday, October 3, 2024

What You Need to Know About Grok AI and Your Privacy; Wired, September 10, 2024

Kate O'Flaherty, Wired; What You Need to Know About Grok AI and Your Privacy

"Described as “an AI search assistant with a twist of humor and a dash of rebellion,” Grok is designed to have fewer guardrails than its major competitors. Unsurprisingly, Grok is prone to hallucinations and bias, with the AI assistant blamed for spreading misinformation about the 2024 election."

Tuesday, September 24, 2024

LinkedIn is training AI on you — unless you opt out with this setting; The Washington Post, September 23, 2024

The Washington Post; LinkedIn is training AI on you — unless you opt out with this setting

"To opt out, log into your LinkedIn account, tap or click on your headshot, and open the settings. Then, select “Data privacy,” and turn off the option under “Data for generative AI improvement.”

Flipping that switch will prevent the company from feeding your data to its AI, with a key caveat: The results aren’t retroactive. LinkedIn says it has already begun training its AI models with user content, and that there’s no way to undo it."

Tuesday, September 10, 2024

This is the best privacy setting that almost no one is using; The Washington Post, September 6, 2024

The Washington Post; This is the best privacy setting that almost no one is using

"Privacy laws in some states, notably California, give people the right to tell most businesses not to sell or share information they collect or in some cases to delete data about you. Some companies apply California’s privacy protections to everyone.

To take advantage of those privacy rights, though, you often must fill out complicated forms with dozens of companies. Hardly anyone does. The opt-out rights give you power in principle, but not in practice.

But baked into some state privacy laws is the option to enlist someone else to handle the legwork for you.

That wand-wielding privacy fairy godmother can be Consumer Reports, whose app can help you opt out of companies saving and selling your data. Even better, the godmother could just be a checkbox you click once to order every company to keep your data secret."

Friday, August 30, 2024

Breaking Up Google Isn’t Nearly Enough; The New York Times, August 27, 2024

The New York Times; Breaking Up Google Isn’t Nearly Enough

"Competitors need access to something else that Google monopolizes: data about our searches. Why? Think of Google as the library of our era; it’s the first stop we go to when seeking information. Anyone who wants to build a rival library needs to know what readers are looking for in order to stock the right books. They also need to know which books are most popular, and which ones people return quickly because they’re no good."

Wednesday, August 21, 2024

Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns; Medscape, August 21, 2024

Arthur L. Caplan, PhD, Medscape; Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns

"On the other hand, there are clearly many ethical issues about creating an artificial version of yourself. One obvious issue is how accurate this AI version of you will be if the death bot can create information that sounds like you, but really isn't what you would have said, despite the effort to glean it from recordings and past information about you. Is it all right if people wander from the truth in trying to interact with someone who's died? 

There are other ways to leave memories behind. You certainly can record messages so that you can control the content. Many people video themselves and so on. There are obviously people who would say that they have a diary or have written information they can leave behind. 

Is there a place in terms of accuracy for a kind of artificial version of ourselves to go on forever? Another interesting issue is who controls that. Can you add to it after your death? Can information be shared about you with third parties who don't sign up for the service? Maybe the police take an interest in how you died. You can imagine many scenarios where questions might come up about wanting to access these data that the artificial agent is providing. 

Some people might say that it's just not the way to grieve. Maybe the best way to grieve is to accept death and not try to interact with a constructed version of yourself once you've passed. That isn't really accepting death. It's a form, perhaps, of denial of death, and maybe that isn't going to be good for the mental health of survivors who really have not come to terms with the fact that someone has passed on."

Wednesday, August 7, 2024

A booming industry of AI age scanners, aimed at children’s faces; The Washington Post, August 7, 2024

The Washington Post; A booming industry of AI age scanners, aimed at children’s faces

"Nineteen states, home to almost 140 million Americans, have passed or enacted laws requiring online age checks since the beginning of last year, including Virginia, Texas and Florida. For the companies, that’s created a gold mine: Employees at Incode, a San Francisco firm that runs more than 100 million verifications a year, now internally track state bills and contact local officials to, as senior director of strategy Fernanda Sottil said, “understand where … our tech fits in.”

But while the systems are promoted for safeguarding kids, they can only work by inspecting everyone — surveying faces, driver’s licenses and other sensitive data in vast quantities. Alex Stamos, the former security chief of Facebook, which uses Yoti, said “most age verification systems range from ‘somewhat privacy violating’ to ‘authoritarian nightmare.'”"

Friday, August 2, 2024

Justice Department sues TikTok, accusing the company of illegally collecting children’s data; AP, August 2, 2024

Haleluya Hadero, AP; Justice Department sues TikTok, accusing the company of illegally collecting children’s data

"The Justice Department sued TikTok on Friday, accusing the company of violating children’s online privacy law and running afoul of a settlement it had reached with another federal agency. 

The complaint, filed together with the Federal Trade Commission in a California federal court, comes as the U.S. and the prominent social media company are embroiled in yet another legal battle that will determine if – or how – TikTok will continue to operate in the country. 

The latest lawsuit focuses on allegations that TikTok, a trend-setting platform popular among young users, and its China-based parent company ByteDance violated a federal law that requires kid-oriented apps and websites to get parental consent before collecting personal information of children under 13. It also says the companies failed to honor requests from parents who wanted their children’s accounts deleted, and chose not to delete accounts even when the firms knew they belonged to kids under 13."

Monday, July 1, 2024

Vatican conference ponders who really holds the power of AI; Religion News Service, June 27, 2024

Claire Giangravé, Religion News Service; Vatican conference ponders who really holds the power of AI

"The vice director general of Italy’s Agency for National Cybersecurity, Nunzia Ciardi, also warned at the conference of the influence held by leading AI developers.

“Artificial intelligence is made up of massive economic investments that only large superpowers can afford and through which they ensure a very important geopolitical dominance and access to the large amount of data that AI must process to produce outputs,” Ciardi said.

Participants agreed that international organizations must enforce stronger regulations for the use and advancement of AI technologies.

“You could say that we are colonized by AI, which is managed by select companies that brutally rack through our data,” she added.

“We need guardrails, because what is coming is a radical transformation that will change real and digital relations and require not only reflection but also regulation,” Benanti said.

The “Rome Call for AI Ethics,” a document signed by IBM, Microsoft, Cisco and U.N. Food and Agriculture Organization representatives, was promoted by the Vatican’s Academy for Life and lays out guidelines for promoting ethics, transparency and inclusivity in AI.

Other religious communities have also joined the “Rome Call,” including the Anglican Church and Jewish and Muslim representatives. On July 9, representatives from Eastern religions will gather for a Vatican-sponsored event to sign the “Rome Call” in Hiroshima, Japan. The location was decided to emphasize the dangerous consequences of technology when unchecked."

Wednesday, June 12, 2024

Adobe Responds to AI Fears With Plans for Updated Legal Terms; Bloomberg Law, June 12, 2024

Cassandre Coyer and Aruni Soni, Bloomberg Law; Adobe Responds to AI Fears With Plans for Updated Legal Terms

"“As technology evolves, we have to evolve,” Dana Rao, Adobe’s general counsel, said in an interview with Bloomberg Law. “The legal terms have to evolve, too. And that’s really the lesson that we’re sort of internalizing here.”

Over the weekend, some Adobe customers revolted on social media, crying foul at updated terms of use they claimed allowed Adobe to seize their intellectual property and use their data to feed AI models. 

The Photoshop and Illustrator maker responded with multiple blog posts over several days seeking to reassure users it wasn’t stealing their content, including a pledge to quickly rewrite its user agreement in clearer language. Rao said Tuesday that Adobe will be issuing updated terms of use on June 18 in which it will specifically state the company doesn’t train its Firefly AI models on its cloud content.

The unexpected online storm around the updates is the latest example of how sweeping technological changes—such as the rise of generative AI—have bolstered users’ fears of copyright violations and privacy invasions. That sentiment is part of the landscape the tech industry must navigate to serve a creator community increasingly on edge.

What happened is “more of a lesson in terms of how to present terms of use and roll out updates in a way that can address or alleviate customer concerns, especially in the era of AI and increased concern over privacy,” said Los Angeles-based advertising attorney Robert Freund." 

Friday, June 7, 2024

Angry Instagram posts won’t stop Meta AI from using your content; Popular Science, June 5, 2024

 Mack DeGeurin, Popular Science; Angry Instagram posts won’t stop Meta AI from using your content

"Meta, the Mark Zuckerberg-owned tech giant behind Instagram, surprised many of the app’s estimated 1.2 billion global users with a shock revelation last month. Images, including original artwork and other creative assets uploaded to the company’s platforms, are now being used to train the company’s AI image generator. That admission, initially made public by Meta executive Chris Cox during an interview with Bloomberg last month, has elicited a fierce backlash from some creators. As of writing, more than 130,000 Instagram users have reshared a message on Instagram telling the company they do not consent to it using their data to train Meta AI. Those pleas, however, are founded on a fundamental misunderstanding of creators’ relationship with extractive social media platforms. These creators already gave away their work, whether they realize it or not."

Wednesday, May 29, 2024

Why using dating apps for public health messaging is an ethical dilemma; The Conversation, May 28, 2024

The Conversation; Why using dating apps for public health messaging is an ethical dilemma

"Future collaborations with apps should prioritise the benefit of users over those of the app businesses, develop transparent data policies that prevent users’ data from being shared for profit, ensure the apps’ commitment to anti-discrimination and anti-harassment, and provide links to health and wellbeing services beyond the apps.

Dating apps have the potential to be powerful allies in public health, especially in reaching populations that have often been ignored. However, their use must be carefully managed to avoid compromising user privacy, safety and marginalisation."

Thursday, May 23, 2024

An attorney says she saw her library reading habits reflected in mobile ads. That's not supposed to happen; The Register, May 18, 2024

Thomas Claburn , The Register; An attorney says she saw her library reading habits reflected in mobile ads. That's not supposed to happen

"In December, 2023, University of Illinois Urbana-Champaign information sciences professor Masooda Bashir led a study titled "Patron Privacy Protections in Public Libraries" that was published in The Library Quarterly. The study found that while libraries generally have basic privacy protections, there are often gaps in staff training and in privacy disclosures made available to patrons.

It also found that some libraries rely exclusively on social media for their online presence. "That is very troubling," said Bashir in a statement. "Facebook collects a lot of data – everything that someone might be reading and looking at. That is not a good practice for public libraries.""

Saturday, May 18, 2024

Reddit shares jump after OpenAI ChatGPT deal; BBC, May 17, 2024

João da Silva, BBC; Reddit shares jump after OpenAI ChatGPT deal

"Shares in Reddit have jumped more than 10% after the firm said it had struck a partnership deal with artificial intelligence (AI) start-up OpenAI.

Under the agreement, the company behind the ChatGPT chatbot will get access to Reddit content, while it will also bring AI-powered features to the social media platform...

Meanwhile, Google announced a partnership in February which allows the technology giant to access Reddit data to train its AI models.

Both in the European Union and US, there are questions around whether it is copyright infringement to train AI tools on such content, or whether it falls under fair use and "temporary copying" exceptions."

Wednesday, May 15, 2024

Illinois Attorney General Kwame Raoul sues company for publishing voters’ personal data; Chicago Sun-Times, May 9, 2024

Chicago Sun-Times; Illinois Attorney General Kwame Raoul sues company for publishing voters’ personal data

"A publishing company whose politically slanted newspapers have been derided as “pink slime” is being sued by Illinois Attorney General Kwame Raoul for illegally identifying birthdates and home addresses of “hundreds of thousands” of voters.

Raoul’s legal move against Local Government Information Services accuses the company of publishing sensitive personal data that could subject voters across Illinois to identity theft.

Among those whose personal data has been identified on LGIS’ nearly three dozen online websites are current and former judges, police officers, high-ranking state officials and victims of domestic violence and human trafficking, Raoul’s filing said."

Friday, November 3, 2023

The Internet Of Things Demystified: Connect, Collect, Analyze And Act; Forbes, October 12, 2023

 Bill Geary, Forbes; The Internet Of Things Demystified: Connect, Collect, Analyze And Act

"When you get past the acronyms and buzzwords that describe the platforms that help organizations manage their operations, it all boils down to gathering information so you can make good decisions. The tech industry establishes a lot of jargon that helps differentiate one technology from another. Those terms are helpful to IT professionals but often serve to confuse everyone else. The Internet of Things (IoT) is a term that creates confusion.

I prefer to describe this technology according to what it does. IoT is nothing more than connecting things, collecting information from them, analyzing it and acting upon it accordingly: connect, collect, analyze and act. By distilling the technology into a plain description, we demystify the term. We make it attainable and approachable—something that everyone can understand."