Showing posts with label privacy.

Saturday, February 19, 2022

Opinion: After a rape survivor’s arrest, it’s time to rethink genetic databases; The Washington Post, February 17, 2022

Jennifer King, The Washington Post; Opinion: After a rape survivor’s arrest, it’s time to rethink genetic databases

"Jennifer King is a privacy and data policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence...

This episode offers a glimpse of the concerns over privacy — as well as matters such as consent to data collection — that will arise as genetic information is stored in ever-greater amounts, and as governments take an ever-greater interest in exploiting it."

AirTags are being used to track people and cars. Here's what is being done about it; NPR, February 18, 2022

Michael Levitt, NPR; AirTags are being used to track people and cars. Here's what is being done about it

""As technology becomes more sophisticated and advanced, as wonderful as that is for society, unfortunately, it also becomes much easier to misuse and abuse," she told NPR. "I wouldn't say that we've necessarily seen an uptick with the use of AirTags any more or less than any cutting edge technology."

Williams said that what was rare was a technology company taking the issue seriously and moving to address it.

"[Apple is] not only listening to the field, but actively reaching out at times to do safety checks. That in and of itself might sound like a very small step, but it's rare," she said.

Still, Galperin thinks that Apple should have done more to protect people ahead of time. 

"The mitigations that Apple had in place at the time that the AirTag came out were woefully insufficient," Galperin said. 

"I think that Apple has been very careful and responsive after putting the product out and introducing new mitigations. But the fact that they chose to bring the product to market in the state that it was in last year, is shameful.""

Tuesday, February 15, 2022

Your sense of privacy evolved over millennia – that puts you at risk today but could improve technology tomorrow; The Conversation, February 11, 2022

The Conversation; Your sense of privacy evolved over millennia – that puts you at risk today but could improve technology tomorrow

"Many people think of privacy as a modern invention, an anomaly made possible by the rise of urbanization. If that were the case, then acquiescing to the current erosion of privacy might not be particularly alarming.

As calls for Congress to protect privacy increase, it’s important to understand its nature. In a policy brief in Science, we and our colleague Jeff Hancock suggest that understanding the nature of privacy calls for a better understanding of its origins. 

Research evidence refutes the notion that privacy is a recent invention. While privacy rights or values may be modern notions, examples of privacy norms and privacy-seeking behaviors abound across cultures throughout human history and across geography.

As privacy researchers who study information systems, behavioral research, and public policy, we believe that accounting for the potential evolutionary roots of privacy concerns can help explain why people struggle with privacy today. It may also help inform the development of technologies and policies that can better align the digital world with the human sense of privacy.

The misty origins of privacy

Humans have sought and attempted to manage privacy since the dawn of civilization. People from ancient Greece to ancient China were concerned with the boundaries of public and private life. The male head of the household, or pater familias, in ancient Roman families would have his slaves move their cots to some remote corner of the house when he wanted to spend the evening alone.

Attention to privacy is also found in preindustrial societies. For example, the Mehinacu tribe in South America lived in communal accommodations but built private houses miles away for members to achieve some seclusion.

Evidence of a drive toward privacy can even be found in the holy texts of ancient monotheistic religions: the Quran’s instructions against spying on one another, the Talmud’s advice not to place windows overlooking neighbors’ windows, and the biblical story of Adam and Eve covering their nakedness after eating the forbidden fruit. 

The drive for privacy appears to be simultaneously culturally specific and culturally universal. Norms and behaviors change across peoples and times, but all cultures seem to manifest a drive for it. Scholars in the past century who studied the history of privacy provide an explanation for this: Privacy concerns may have evolutionary roots."

Opinion: A lawsuit against Google points out a much bigger privacy problem; The Washington Post, February 14, 2022

Editorial Board, The Washington Post; Opinion: A lawsuit against Google points out a much bigger privacy problem

"The phenomenon the recent suits describe, after all, is not particular to Google but rather endemic to almost the entirety of the Web: Companies get to set all the rules, as long as they run those rules by consumers in convoluted terms of service that even those capable of decoding the legalistic language rarely bother to read. Other mechanisms for notice and consent, such as opt-outs and opt-ins, create similar problems. Control for the consumer is mostly an illusion. The federal privacy law the country has sorely needed for decades would replace this old regime with meaningful limitations on what data companies can collect and in what contexts, so that the burden would be on them not to violate the reasonable expectations of their users, rather than placing the burden on the users to spell out what information they will and will not allow the tech firms to have.

The question shouldn’t be whether companies gather unnecessary amounts of sensitive information about their users sneakily — it should be whether companies amass these troves at all. Until Congress ensures that’s true for the whole country, Americans will be clicking through policies and prompts that do little to protect them."

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Dimitra Petrakaki, Rachel Starr, et al., Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them; this is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates that align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."
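The excerpt above describes a concrete pipeline: record a video, extract visual, verbal, and vocal features, then blend them into a report for the employer. A minimal sketch of that pattern in Python follows; every feature name, weight, and threshold below is invented for illustration and is not any vendor's actual scoring model.

```python
# Hypothetical sketch of the AVI pattern described in the HBR excerpt:
# combine visual, verbal, and vocal signals into a report entry.
# All features, weights, and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class CandidateResponse:
    smile_ratio: float     # visual: fraction of frames with a detected smile
    keyword_hits: int      # verbal: job-relevant keywords found in the transcript
    pitch_variance: float  # vocal: variability of speaking tone, scaled to [0, 1]


def score_response(r: CandidateResponse) -> dict:
    """Blend the three signal channels into a single report entry."""
    verbal = min(r.keyword_hits / 10.0, 1.0)  # cap credit at 10 keywords
    score = 0.4 * r.smile_ratio + 0.4 * verbal + 0.2 * r.pitch_variance
    return {"score": round(score, 2), "flag_for_review": score < 0.3}


if __name__ == "__main__":
    # A candidate who smiles in 60% of frames, hits 7 keywords,
    # and shows moderate vocal variation.
    print(score_response(CandidateResponse(0.6, 7, 0.5)))
    # -> {'score': 0.62, 'flag_for_review': False}
```

Even this toy version makes the bias findings concrete: if the smile or speech detectors underperform for some groups of candidates, their scores drop through no fault of their own.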

IRS plan to scan your face prompts anger in Congress, confusion among taxpayers; The Washington Post, January 27, 2022

Drew Harwell, The Washington Post; IRS plan to scan your face prompts anger in Congress, confusion among taxpayers

"The $86 million ID.me contract with the IRS also has alarmed researchers and privacy advocates who say they worry about how Americans’ facial images and personal data will be safeguarded in the years to come. There is no federal law regulating how the data can be used or shared. While the IRS couldn’t say what percentage of taxpayers use the agency’s website, internal data show it is one of the federal government’s most-viewed websites, with more than 1.9 billion visits last year."

Sunday, January 9, 2022

Ethical aspects relating to cyberspace: copyright and privacy; Israel Defense, January 9, 2022

Giancarlo Elia Valori, Israel Defense; Ethical aspects relating to cyberspace: copyright and privacy

"A further right - the right to privacy - is one of the most fundamental rights: it reflects the natural human need for privacy, confidentiality and autonomy, as well as for the protection of one's own “personal sphere” from outside intrusion, and the ability to make decisions without being spied on and to remain oneself and maintain one’s own individuality.

It is no coincidence that in all international documents declaring human rights and freedoms, as well as in all codes of ethics related to the sphere of information, privacy is proclaimed as a fundamental moral value, which constitutes the foundation of human freedom and security, and therefore requires respect and protection."

Friday, December 31, 2021

Americans widely distrust Facebook, TikTok and Instagram with their data, poll finds; The Washington Post, December 22, 2021

The Washington Post; Americans widely distrust Facebook, TikTok and Instagram with their data, poll finds

"According to the survey, 72 percent of Internet users trust Facebook “not much” or “not at all” to responsibly handle their personal information and data on their Internet activity. About 6 in 10 distrust TikTok and Instagram, while slight majorities distrust WhatsApp and YouTube. Google, Apple and Microsoft receive mixed marks for trust, while Amazon is slightly positive with 53 percent trusting the company at least “a good amount.” (Amazon founder Jeff Bezos owns The Washington Post.)

Only 10 percent say Facebook has a positive impact on society, while 56 percent say it has a negative impact and 33 percent say its impact is neither positive nor negative. Even among those who use Facebook daily, more than three times as many say the social network has a negative rather than a positive impact."

Saturday, November 20, 2021

Maryland lawmaker-doctor won’t face ethics violation for tuning into legislative meetings from the operating room; The Baltimore Sun, November 19, 2021

The Baltimore Sun; Maryland lawmaker-doctor won’t face ethics violation for tuning into legislative meetings from the operating room

 "Hill had initially defended her decision to join video meetings while at work as a doctor, saying her patients knew about it and she wasn’t putting them in any danger.

A Board of Physicians investigation found that one patient did not know Hill tuned into a legislative meeting, while the other patient was told about 10 minutes before surgery, but no consent paperwork was on file. Both legislative meetings where she appeared on camera from the operating room were streamed on the General Assembly’s website and YouTube channels."

Friday, May 21, 2021

Privacy activists are winning fights with tech giants. Why does victory feel hollow?; The Guardian, May 15, 2021

Evgeny Morozov, The Guardian; Privacy activists are winning fights with tech giants. Why does victory feel hollow?

"Something similar is likely to happen in other domains marked by recent moral panics over digital technologies. The tech industry will address mounting public anxieties over fake news and digital addiction by doubling down on what I call “solutionism”, with digital platforms mobilizing new technologies to offer their users a bespoke, secure and completely controllable experience...

What we want is something genuinely new: an institution that will know what parts of existing laws and regulations to suspend – like the library does with intellectual property law, for example – in order to fully leverage the potential inherent in digital technologies in the name of great public good."

Wednesday, March 3, 2021

Balancing Privacy With Data Sharing for the Public Good; The New York Times, February 19, 2021

The New York Times; Balancing Privacy With Data Sharing for the Public Good

"Governments and technology companies are increasingly collecting vast amounts of personal data, prompting new laws, myriad investigations and calls for stricter regulation to protect individual privacy.

Yet despite these issues, economics tells us that society needs more data sharing rather than less, because the benefits of publicly available data often outweigh the costs. Public access to sensitive health records sped up the development of lifesaving medical treatments like the messenger-RNA coronavirus vaccines produced by Moderna and Pfizer. Better economic data could vastly improve policy responses to the next crisis."


Virginia governor signs nation’s second state consumer privacy bill; The Washington Post, March 2, 2021


Cat Zakrzewski, The Washington Post; Virginia governor signs nation’s second state consumer privacy bill

"Gov. Ralph Northam signed data privacy legislation into law on Tuesday, making Virginia the second state in the nation to adopt its own data protection rules.

The law, known as the Consumer Data Protection Act, had broad support from the tech industry, including Amazon, which is building an Arlington, Va., headquarters. The legislation will allow residents of the commonwealth to opt out of having their data collected and sold, similar to a California law that went into effect last year. Under the new law, Virginia residents can also see what data companies have collected about them, and correct or delete it. (Amazon founder and chief executive Jeff Bezos owns The Washington Post.)

The Virginia law is widely viewed as more industry friendly than the California provision, however, and privacy advocates have called for Virginia to adopt some of California’s provisions that make it easier for people to opt out of data collection from multiple companies. The Virginia law also does not allow individuals to bring lawsuits against tech companies for violations and will be enforced by the state’s attorney general, not a separate enforcement agency."

Sunday, August 16, 2020

Software that monitors students during tests perpetuates inequality and violates their privacy; MIT Technology Review, August 7, 2020

Shea Swauger, MIT Technology Review; Software that monitors students during tests perpetuates inequality and violates their privacy

"The coronavirus pandemic has been a boon for the test proctoring industry. About half a dozen companies in the US claim their software can accurately detect and prevent cheating in online tests. Examity, HonorLock, Proctorio,ProctorURespondus and others have rapidly grown since colleges and universities switched to remote classes.

While there’s no official tally, it’s reasonable to say that millions of algorithmically proctored tests are happening every month around the world. Proctorio told the New York Times in May that business had increased by 900% during the first few months of the pandemic, to the point where the company proctored 2.5 million tests worldwide in April alone.

I'm a university librarian and I've seen the impacts of these systems up close. My own employer, the University of Colorado Denver, has a contract with Proctorio.

It’s become clear to me that algorithmic proctoring is a modern surveillance technology that reinforces white supremacy, sexism, ableism, and transphobia. The use of these tools is an invasion of students’ privacy and, often, a civil rights violation."

Tuesday, August 11, 2020

Why Parents Should Pause Before Oversharing Online; The New York Times, August 4, 2020

Stacey Steinberg, The New York Times; Why Parents Should Pause Before Oversharing Online

As social media comes of age, will we regret all the information we revealed about our families during its early years?

"A Conflict of Interest

Studying children’s privacy on social media fed both my personal conflicts and my professional passions, so six years ago, I delved deep into the work of studying the intersection of a child’s right to privacy and a parent’s right to share.

What I quickly learned was that the law does not give us much guidance when it comes to how we use social media as families. Societal norms encourage us to use restraint before publicly sharing personal information about our friends and family. But nothing stops us as parents from sharing our child’s stories with the virtual world.

While there are laws that protect American children’s privacy in certain contexts — such as HIPAA for health care, FERPA for education and COPPA for the online privacy of children under 13 — they do not have a right to privacy from their parents, except in the most limited of circumstances.

Most other countries guarantee a child the right to privacy through an international agreement called the United Nations Convention on the Rights of the Child. The United States signed the agreement, but it is the only United Nations member country not to have ratified it, which means it is not law or formal policy here. Additionally, doctrines like the Right to Be Forgotten might offer children in the European Union remedies for their parents’ oversharing once they come of age."

Sunday, August 9, 2020

Libraries’ Buchanan Fellows adapt with new skills during pandemic; Vanderbilt University, August 5, 2020

Vanderbilt University; Libraries’ Buchanan Fellows adapt with new skills during pandemic

"Before the pandemic, the fellows for the Ethics of Information project were scheduled to participate in a library fair on intellectual freedom and privacy issues, according to Andrew Wesolek, director of digital scholarship and communications for the Jean and Alexander Heard Libraries.

“Our students were going to staff the information tables and discuss with attendees the various online privacy tools they had learned about in weekly seminars,” said Wesolek, who co-directed the fellowship with Melissa Mallon, director of Peabody Library; Bobby Smiley, director of Divinity Library; and Sarah Burriss, doctoral candidate in the Department of Teaching and Learning at Peabody College. “The fair was cancelled, but students pivoted to create highly informative public service announcements on information privacy that can be viewed online.”

The student-produced PSAs cover everything from preventing unwanted Internet ad pop-ups to the dangers of personal data collection by some of the emerging and unregulated technologies."

Friday, July 17, 2020

If AI is going to help us in a crisis, we need a new kind of ethics; MIT Technology Review, June 24, 2020

Will Douglas Heaven, MIT Technology Review; If AI is going to help us in a crisis, we need a new kind of ethics

Ethics for urgency means making ethics a core part of AI rather than an afterthought, says Jess Whittlestone.

"What needs to change?

We need to think about ethics differently. It shouldn’t be something that happens on the side or afterwards—something that slows you down. It should simply be part of how we build these systems in the first place: ethics by design...

You’ve said that we need people with technical expertise at all levels of AI design and use. Why is that?

I’m not saying that technical expertise is the be-all and end-all of ethics, but it’s a perspective that needs to be represented. And I don’t want to sound like I’m saying all the responsibility is on researchers, because a lot of the important decisions about how AI gets used are made further up the chain, by industry or by governments.

But I worry that the people who are making those decisions don’t always fully understand the ways it might go wrong. So you need to involve people with technical expertise. Our intuitions about what AI can and can’t do are not very reliable.

What you need at all levels of AI development are people who really understand the details of machine learning to work with people who really understand ethics. Interdisciplinary collaboration is hard, however. People with different areas of expertise often talk about things in different ways. What a machine-learning researcher means by privacy may be very different from what a lawyer means by privacy, and you can end up with people talking past each other. That’s why it’s important for these different groups to get used to working together."

Saturday, July 11, 2020

I was wrongfully arrested because of facial recognition. Why are police allowed to use it?; The Washington Post, June 24, 2020

Robert Williams, The Washington Post; I was wrongfully arrested because of facial recognition. Why are police allowed to use it?

"Federal studies have shown that facial-recognition systems misidentify Asian and black people up to 100 times more often than white people. Why is law enforcement even allowed to use such technology when it obviously doesn’t work?...

Even if this technology does become accurate (at the expense of people like me), I don’t want my daughters’ faces to be part of some government database. I don’t want cops showing up at their door because they were recorded at a protest the government didn’t like. I don’t want this technology automating and worsening the racist policies we’re protesting. I don’t want them to have a police record for something they didn’t do — like I now do."

Wrongfully Accused by an Algorithm; The New York Times, June 24, 2020

Kashmir Hill, The New York Times; Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

"Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology, has written about problems with the government’s use of facial recognition. She argues that low-quality search images — such as a still image from a grainy surveillance video — should be banned, and that the systems currently in use should be tested rigorously for accuracy and bias.

“There are mediocre algorithms and there are good ones, and law enforcement should only buy the good ones,” Ms. Garvie said.

About Mr. Williams’s experience in Michigan, she added: “I strongly suspect this is not the first case to misidentify someone to arrest them for a crime they didn’t commit. This is just the first time we know about it.”"
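The rigorous accuracy and bias testing Garvie calls for starts with a simple measurement: compare false-match rates across demographic groups rather than reporting a single average. Here is a minimal sketch of that audit; the groups and counts are made up, and the tenfold disparity in the example is illustrative, not data from the federal studies cited above.

```python
# Minimal per-group false-match-rate audit of the kind Garvie recommends.
# Groups and counts below are fabricated for demonstration only.
from collections import defaultdict


def false_match_rates(results):
    """results: iterable of (group, was_false_match) pairs -> rate per group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, was_false_match in results:
        totals[group] += 1
        errors[group] += int(was_false_match)
    return {g: errors[g] / totals[g] for g in totals}


if __name__ == "__main__":
    sample = ([("group_a", False)] * 990 + [("group_a", True)] * 10
              + [("group_b", False)] * 900 + [("group_b", True)] * 100)
    print(false_match_rates(sample))
    # -> {'group_a': 0.01, 'group_b': 0.1}: a tenfold disparity that should
    #    disqualify a system regardless of its average accuracy.
```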