Showing posts with label facial recognition. Show all posts

Friday, June 14, 2024

Clearview AI Used Your Face. Now You May Get a Stake in the Company.; The New York Times, June 13, 2024

Kashmir Hill, The New York Times; Clearview AI Used Your Face. Now You May Get a Stake in the Company.

"A facial recognition start-up, accused of invasion of privacy in a class-action lawsuit, has agreed to a settlement, with a twist: Rather than cash payments, it would give a 23 percent stake in the company to Americans whose faces are in its database.

Clearview AI, which is based in New York, scraped billions of photos from the web and social media sites like Facebook, LinkedIn and Instagram to build a facial recognition app used by thousands of police departments, the Department of Homeland Security and the F.B.I. After The New York Times revealed the company’s existence in 2020, lawsuits were filed across the country. They were consolidated in federal court in Chicago as a class action.

The litigation has proved costly for Clearview AI, which would most likely go bankrupt before the case made it to trial, according to court documents." 

Tuesday, January 2, 2024

How the Federal Government Can Rein In A.I. in Law Enforcement; The New York Times, January 2, 2024

Joy Buolamwini, The New York Times; How the Federal Government Can Rein In A.I. in Law Enforcement

"One of the most hopeful proposals involving police surveillance emerged recently from a surprising quarter — the federal Office of Management and Budget. The office, which oversees the execution of the president’s policies, has recommended sorely needed constraints on the use of artificial intelligence by federal agencies, including law enforcement.

The office’s work is commendable, but shortcomings in its proposed guidance to agencies could still leave people vulnerable to harm. Foremost among them is a provision that would allow senior officials to seek waivers by arguing that the constraints would hinder law enforcement. Those law enforcement agencies should instead be required to provide verifiable evidence that A.I. tools they or their vendors use will not cause harm, worsen discrimination or violate people’s rights."

Friday, May 27, 2022

Accused of Cheating by an Algorithm, and a Professor She Had Never Met; The New York Times, May 27, 2022

Kashmir Hill, The New York Times; Accused of Cheating by an Algorithm, and a Professor She Had Never Met

An unsettling glimpse at the digitization of education.

"The most serious flaw with these systems may be a human one: educators who overreact when artificially intelligent software raises an alert.

“Schools seem to be treating it as the word of God,” Mr. Quintin said. “If the computer says you’re cheating, you must be cheating.”"

Friday, February 4, 2022

IRS plan to scan your face prompts anger in Congress, confusion among taxpayers; The Washington Post, January 27, 2022

Drew Harwell, The Washington Post; IRS plan to scan your face prompts anger in Congress, confusion among taxpayers

"The $86 million ID.me contract with the IRS also has alarmed researchers and privacy advocates who say they worry about how Americans’ facial images and personal data will be safeguarded in the years to come. There is no federal law regulating how the data can be used or shared. While the IRS couldn’t say what percentage of taxpayers use the agency’s website, internal data show it is one of the federal government’s most-viewed websites, with more than 1.9 billion visits last year."

Tuesday, April 9, 2019

Why we can’t leave Grindr under Chinese control; The Washington Post, April 9, 2019

Isaac Stone Fish, The Washington Post; Why we can’t leave Grindr under Chinese control

"Because a Chinese company now oversees Grindr’s data, photographs and messages, that means the [Chinese Communist] Party can, if it chooses to do so, access all of that information, regardless of where it’s stored. And that data includes compromising photos and messages from some of America’s most powerful men — some openly gay, and some closeted.

Couple this with China’s progress in developing big data and facial recognition software, industries more advanced there than in the United States, and there are some concerning national security implications of a Chinese-owned Grindr. In other words, Beijing could now exploit compromising photos of millions of Americans. Think what a creative team of Chinese security forces could do with its access to Grindr’s data."

Monday, April 1, 2019

Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret; Forbes, March 27, 2019

Sam Shead, Forbes; Google Announced An AI Advisory Council, But The Mysterious AI Ethics Board Remains A Secret

"Google announced a new external advisory council to keep its artificial intelligence developments in check on Wednesday, but the mysterious AI ethics board that was set up when the company bought the DeepMind AI lab in 2014 remains shrouded in mystery.

The new advisory council consists of eight members that span academia and public policy. 

"We've established an Advanced Technology External Advisory Council (ATEAC)," wrote Kent Walker, SVP of global affairs at Google, in a blog post on Tuesday. "This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work." 

Here is the full list of AI advisory council members:"

Thursday, September 6, 2018

From Mountain of CCTV Footage, Pay Dirt: 2 Russians Are Named in Spy Poisoning; The New York Times, September 5, 2018

Ellen Barry, The New York Times; From Mountain of CCTV Footage, Pay Dirt: 2 Russians Are Named in Spy Poisoning

[Kip Currier: Fascinating example of good old-fashioned, "methodical, plodding" detective work, combined with 21st century technologies of mass surveillance and facial recognition by machines and gifted humans.

As I think about the chapters on privacy and surveillance in the ethics textbook I'm writing, this story is a good reminder of the socially-positive aspects of new technologies, amid often legitimate concerns about their demonstrated and potential downsides. In the vein of prior stories I've posted on this blog about the use, for example, of drones for animal conservation and monitoring efforts, the identification of the two Russian operatives in the Salisbury, UK poisoning case highlights how the uses and applications of digital age technologies like mass surveillance frequently fall outside the lines of "all bad" or "all good".]

"“It’s almost impossible in this country to hide, almost impossible,” said John Bayliss, who retired from the Government Communications Headquarters, Britain’s electronic intelligence agency, in 2010. “And with the new software they have, you can tell the person by the way they walk, or a ring they wear, or a watch they wear. It becomes even harder.”

The investigation into the Skripal poisoning, known as Operation Wedana, will stand as a high-profile test of an investigative technique Britain has pioneered: accumulating mounds of visual data and sifting through it...

Ceri Hurford-Jones, the managing director of Salisbury’s local radio station, saluted investigators for their “sheer skill in getting a grip on this, and finding out who these people were.”

It may not have been the stuff of action films, but Mr. Hurford-Jones did see something impressive about the whole thing.

“It’s methodical, plodding,” he said. “But, you know, that’s the only way you can do these things. There is a bit of Englishness in it.”"

Friday, February 16, 2018

Congress is worried about AI bias and diversity; Quartz, February 15, 2018

Dave Gershgorn, Quartz; Congress is worried about AI bias and diversity

"Recent research from the MIT Media Lab maintains that facial recognition is still significantly worse for people of color, however.
“This is not a small thing,” Isbell said of his experience. “It can be quite subtle, and you can go years and years and decades without even understanding you are injecting these kinds of biases, just in the questions that you’re asking, the data you’re given, and the problems you’re trying to solve.”
In his opening statement, Isbell talked about biased data in artificial intelligence systems today, including predictive policing and biased algorithms used in predicting recidivism rates.
“It does not take much imagination to see how being from a heavily policed area raises the chances of being arrested again, being convicted again, and in aggregate leads to even more policing of the same areas, creating a feedback loop,” he said. “One can imagine similar issues with determining it for a job, or credit-worthiness, or even face recognition and automated driving.”"

Tuesday, June 27, 2017

Facial Recognition May Boost Airport Security But Raises Privacy Worries; NPR, June 26, 2017

Asma Khalid, NPR; Facial Recognition May Boost Airport Security But Raises Privacy Worries

"JetBlue is pitching this idea of facial recognition as convenience for customers. It's voluntary. But it's also part of a broader push by Customs and Border Protection to create a biometric exit system to track non-U.S. citizens leaving the country...

[Adam Schwartz, a lawyer with the Electronic Frontier Foundation] says facial recognition is a uniquely invasive form of surveillance.

"We can change our bank account numbers, we even can change our names, but we cannot change our faces," Schwartz says. "And once the information is out there, it could be misused.""...

Back at Logan Airport, passenger Yeimy Quezada feels totally comfortable sharing her face instead of a barcode.

"Even your cellphone recognizes selfies and recognize faces, so I'm used to that technology already," she says. "And, I'm not concerned about privacy because I'm a firm believer that if you're not hiding anything, you shouldn't be afraid of anything.""