Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Sunday, October 6, 2024
Police seldom disclose use of facial recognition despite false arrests; The Washington Post, October 6, 2024
Monday, December 4, 2023
Unmasking AI's Racism And Sexism; NPR, Fresh Air, November 28, 2023
NPR, Fresh Air; Unmasking AI's Racism And Sexism
"Computer scientist and AI expert Joy Buolamwini warns that facial recognition technology is riddled with the biases of its creators. She is the author of Unmasking AI and founder of the Algorithmic Justice League. She coined the term 'coded gaze,' a cousin to the 'white gaze' or 'male gaze.' She says, 'This is ... about who has the power to shape technology and whose preferences and priorities are baked in — as well as also, sometimes, whose prejudices are baked in.'"
Tuesday, July 11, 2023
You can say no to a TSA face scan. But even a senator had trouble.; The Washington Post, July 11, 2023
Shira Ovide, The Washington Post; You can say no to a TSA face scan. But even a senator had trouble.
"Let’s discuss two topics:
- TSA’s face scanning is supposed to be optional for us. Is it, really?
- What are the potential benefits and drawbacks of the TSA’s use of facial recognition software?"
Thursday, September 1, 2022
Ethical issues of facial recognition technology; TechRepublic, August 31, 2022
Patrick Gray in Artificial Intelligence, TechRepublic; Ethical issues of facial recognition technology
"Facial recognition technology has entered the mass market, with our faces now able to unlock our phones and computers. While the ability to empower machines with the very human ability to identify a person with a quick look at their face is exciting, it’s not without significant ethical concerns.
Suppose your company is considering facial recognition technology. In that case, it’s essential to be aware of these concerns and ready to address them, which may even include abandoning facial recognition altogether.
When assessing these ethical concerns, consider how your customers, employees and the general public would react if they fully knew how you’re using the technology. If that thought is disconcerting, you may be veering into an ethical “danger zone.”"
Thursday, January 30, 2020
Facebook pays $550m settlement for breaking Illinois data protection law; The Guardian, January 30, 2020
"Facebook has settled a lawsuit over facial recognition technology, agreeing to pay $550m (£419m) over accusations it had broken an Illinois state law regulating the use of biometric details...
It is one of the largest payouts for a privacy breach in US history, a marker of the strength of Illinois’s nation-leading privacy laws. The New York Times, which first reported the settlement, noted that the sum “dwarfed” the $380m penalty the credit bureau Equifax agreed to pay over a much larger customer data breach in 2017."
Friday, January 24, 2020
This App Is a Dangerous Invasion of Your Privacy—and the FBI Uses It; Popular Mechanics, January 22, 2020
"Even Google Wouldn't Build This
When companies like Google—which has received a ton of flack for taking government contracts to work on artificial intelligence solutions—won't even build an app, you know it's going to cause a stir. Back in 2011, former Google Chairman Eric Schmidt said a tool like Clearview AI's app was one of the few pieces of tech that the company wouldn't develop because it could be used "in a very bad way."
Facebook, for its part, developed something pretty similar to what Clearview AI offers, but at least had the foresight not to publicly release it. That application, developed between 2015 and 2016, allowed employees to identify colleagues and friends who had enabled facial recognition by pointing their phone cameras at their faces. Since then, the app has been discontinued.
Meanwhile, Clearview AI is nowhere near finished. Hidden in the app's code, which the New York Times evaluated, is programming language that could pair the app to augmented reality glasses, meaning that in the future, it's possible we could identify every person we see in real time.
Early Pushback
Perhaps the silver lining is that we found out about Clearview AI at all. Its public discovery—and accompanying criticism—have led to well-known organizations coming out as staunchly opposed to this kind of tech. Fight for the Future tweeted that "an outright ban" on these AI tools is the only way to fix this privacy issue—not quirky jewelry or sunglasses that can help to protect your identity by confusing surveillance systems."
The Secretive Company That Might End Privacy as We Know It; The New York Times, January 18, 2020
Tuesday, September 17, 2019
Real-Time Surveillance Will Test the British Tolerance for Cameras; The New York Times, September 15, 2019
Sunday, June 18, 2017
Facial recognition could speed up airport security, but at what risk to privacy?; CBS News, June 16, 2017
"Your face may soon be the only thing you need to board a flight. Some airlines are already testing facial recognition technology with the federal government.
The idea is to ditch boarding passes and increase the certainty of a passenger's identity..."