
Sunday, February 18, 2024

IT body proposes that AI pros get leashed and licensed to uphold ethics; The Register, February 15, 2024

Paul Kunert, The Register; IT body proposes that AI pros get leashed and licensed to uphold ethics

"Creating a register of licensed AI professionals to uphold ethical standards and securing whistleblowing channels to call out bad management are two policies that could prevent a Post Office-style scandal.

So says industry body BCS – formerly the British Computer Society – which reckons licenses based on an independent framework of ethics would promote transparency among software engineers and their bosses.

"We have a register of doctors who can be struck off," said Rashik Parmar MBE, CEO at BCS. "AI professionals already have a big role in our life chances, so why shouldn't they be licensed and registered too?"...

The importance of AI ethics was amplified by the Post Office scandal, says the BCS boss, "where computer-generated evidence was used by non-IT specialists to prosecute subpostmasters with tragic results."

For anyone not aware of the outrageous wrongdoing committed by the Post Office, it bought the bug-ridden Horizon accounting system in 1999 from ICL, a company that was subsequently bought by Fujitsu. Hundreds of local Post Office branch managers were subsequently wrongfully convicted of fraud when Horizon was to blame."

Wednesday, January 10, 2024

Addressing equity and ethics in artificial intelligence; American Psychological Association, January 8, 2024

 Zara Abrams, American Psychological Association; Addressing equity and ethics in artificial intelligence

"As artificial intelligence (AI) rapidly permeates our world, researchers and policymakers are scrambling to stay one step ahead. What are the potential harms of these new tools—and how can they be avoided?

“With any new technology, we always need to be thinking about what’s coming next. But AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things,” said David Luxton, PhD, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine who is part of a session at the upcoming 2024 Consumer Electronics Show (CES) on Harnessing the Power of AI Ethically.

Luxton and his colleagues dubbed recent AI advances “super-disruptive technology” because of their potential to profoundly alter society in unexpected ways. In addition to concerns about job displacement and manipulation, AI tools can cause unintended harm to individuals, relationships, and groups. Biased algorithms can promote discrimination or other forms of inaccurate decision-making that can cause systematic and potentially harmful errors; unequal access to AI can exacerbate inequality (Proceedings of the Stanford Existential Risk Conference 2023, 60–74). On the flip side, AI may also hold the potential to reduce unfairness in today’s world—if people can agree on what “fairness” means.

“There’s a lot of pushback against AI because it can promote bias, but humans have been promoting biases for a really long time,” said psychologist Rhoda Au, PhD, a professor of anatomy and neurobiology at the Boston University Chobanian & Avedisian School of Medicine who is also speaking at CES on harnessing AI ethically. “We can’t just be dismissive and say: ‘AI is good’ or ‘AI is bad.’ We need to embrace its complexity and understand that it’s going to be both.”"
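
Au's closing point, that fairness must first be defined, has a concrete technical face: standard statistical definitions of fairness can disagree on the same set of predictions. Below is a minimal Python sketch, with data invented purely for illustration, that computes two common metrics (demographic parity and equal opportunity) on one toy set of yes/no decisions.

```python
# Toy illustration of why "fairness" needs a definition: the same
# predictions can satisfy one standard fairness metric and fail another.
# All rows are invented; (group, prediction, true_label) for a yes/no decision.
rows = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 1), ("B", 0, 1),
]

def selection_rate(group):
    """Share of the group receiving a positive decision (demographic parity)."""
    preds = [p for g, p, _ in rows if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Share of truly qualified members approved (equal opportunity)."""
    preds = [p for g, p, y in rows if g == group and y == 1]
    return sum(preds) / len(preds)

# Selection rates match (0.5 vs. 0.5), so demographic parity holds...
print(selection_rate("A"), selection_rate("B"))
# ...yet every qualified member of A is approved and no qualified member of B is.
print(true_positive_rate("A"), true_positive_rate("B"))  # 1.0 vs. 0.0
```

Impossibility results in the fairness literature show that such conflicts are unavoidable in general, which is why the "agree on what fairness means" step cannot be skipped.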

Monday, December 4, 2023

Unmasking AI's Racism And Sexism; NPR, Fresh Air, November 28, 2023

 NPR, Fresh Air; Unmasking AI's Racism And Sexism

"Computer scientist and AI expert Joy Buolamwini warns that facial recognition technology is riddled with the biases of its creators. She is the author of Unmasking AI and founder of the Algorithmic Justice League. She coined the term "coded gaze," a cousin to the "white gaze" or "male gaze." She says, "This is ... about who has the power to shape technology and whose preferences and priorities are baked in — as well as also, sometimes, whose prejudices are baked in.""

Friday, May 20, 2022

Federal officials caution employers on using AI in hiring; FCW, May 12, 2022

Natalie Alms, FCW; Federal officials caution employers on using AI in hiring

"The growing use of artificial intelligence and other software tools for hiring, performance monitoring and pay determination in the workplace is compounding discriminiation against people with disabilities, federal civil rights officials say.

Artificial intelligence can be deployed to target job ads to certain potential applicants, hold online job interviews, assess the skills of job applicants and even decide if an applicant meets job requirements. But the technology can discriminate against applicants and employees with disabilities.

On Thursday, the Equal Employment Opportunity Commission and the Department of Justice put employers on alert that they're responsible for ensuring they don't use AI tools in ways that discriminate, and for informing employees of their rights, agency officials told reporters."

Thursday, March 10, 2022

David J. Hickton: Report for region: People must have voice, stake in algorithms; The Pittsburgh Post-Gazette, March 10, 2022

David J. Hickton, The Pittsburgh Post-Gazette; David J. Hickton: Report for region: People must have voice, stake in algorithms

"The institute that I lead — the University of Pittsburgh’s Institute for Cyber Law, Policy and Security, or simply Pitt Cyber — formed the Pittsburgh Task Force on Public Algorithms to do precisely that for our region.

We brought together a diverse group of experts and leaders from across the region and the country to study how our local governments are using algorithms and the state of public participation and oversight of these systems.

Our findings should be no surprise: Public algorithms are on the rise. And the openness of and public participation in the development and deployment of those systems varies considerably across local governments and agencies...

Our Task Force’s report — the product of our two-year effort — offers concrete recommendations to policymakers. For example, we encourage independent reviews and public involvement in the development of algorithmic systems commensurate with their risks: higher-risk systems, like those involved in decisions affecting liberty, require more public buy-in and examination."

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Dimitra Petrakaki, Rachel Starr, et al., Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them. This is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates that align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."
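
The error-rate figures quoted in the excerpt come from audit studies that measure a system's mistakes separately for each demographic group and then compare the rates. Here is a minimal Python sketch of that arithmetic; the audit rows, group labels, and resulting numbers are invented placeholders, not data from the Berkeley Haas report or any study cited in the article.

```python
from collections import defaultdict

# Hypothetical audit rows: (group, predicted_label, true_label).
# In a real audit these come from a labeled benchmark, not toy data.
audit = [
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
    ("darker_female", "male", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "female", "male"),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, predicted, actual in audit:
    totals[group] += 1
    errors[group] += predicted != actual

rates = {g: errors[g] / totals[g] for g in totals}
print(rates)  # per-group error rates

# A claim like "35% higher detection error" is a relative gap between rates:
base = rates["lighter_male"]
print((rates["darker_female"] - base) / base)  # relative disparity vs. baseline
```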

Thursday, January 20, 2022

At Google Cloud, A.I. ethics requires ‘Iced Tea’ and ‘Lemonaid’; Fortune, January 11, 2022

Fortune; At Google Cloud, A.I. ethics requires ‘Iced Tea’ and ‘Lemonaid’

"For now, Moore says, the best safeguard is very careful human review. It is up to people to ask tough questions about the ethics of how the system is going to be used and also to think hard about both the abuse of such a system and about what the unintended consequences might be. This needs to be combined with careful testing to find the system’s biases and potential failure points."

Wednesday, September 25, 2019

‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You?; The New York Times, September 20, 2019

The New York Times; ‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You?

ImageNet Roulette, a digital art project and viral selfie app, exposes how biases have crept into the artificial-intelligence technologies changing our lives.

"But for Mr. Paglen, a larger issue looms. The fundamental truth is that A.I. learns from humans — and humans are biased creatures. “The way we classify images is a product of our worldview,” he said. “Any kind of classification system is always going to reflect the values of the person doing the classifying.”"

Wednesday, September 18, 2019

University Launches Ethics-Forward Data Science Major; Washington Square News, NYU's Independent Student Newspaper, September 16, 2019

Akiva Thalheim, Washington Square News, NYU's Independent Student Newspaper; University Launches Ethics-Forward Data Science Major

"The new major seeks to specifically address and prevent these issues through a required course in the ethics of data science, [Center for Data Science Director Julia] Kempe explained. She added that the course was developed with the assistance of a National Science Foundation grant.

“We are hoping to educate young people to be data savvy and also data critical, because nowadays, everything is about data but often it’s done in a very uncritical way,” Kempe said. “We have to understand where the biases are [and] how to use data ethically — it’s something that we want to impart on every student, if we can.”"

Saturday, January 5, 2019

My column’s name does a disservice to the immigrants whose food I celebrate. So I’m dropping it.; The Washington Post, January 2, 2019

Tim Carman, The Washington Post; My column’s name does a disservice to the immigrants whose food I celebrate. So I’m dropping it.

"By writing about immigrant cuisines under a cheap-eats rubric, I have perpetuated the narrative that they should always be thought of as budget-priced...

Given this theory, I’ve had to ask myself uncomfortable questions, such as: Isn’t lumping certain cuisines under a cheap-eats banner only contributing to their low-class status? Am I not kneecapping, say, Central American cooks who toil in almost every kitchen in the District? Am I not telling these cooks that we, as Washingtonians, will never pay the same price for a Salvadoran, Guatemalan or Puerto Rican meal as we do for that plate of charred brassicas with mint chimichurri at the fancy New American restaurant where these immigrants are currently employed?...

By stripping this column of its previous name, I hope to remove at least one possible stigma about the restaurants that I decide to cover: that they are somehow “lesser” than the ones that might charge higher prices, have table service, offer a full bar or whatever confers prestige among diners. They are simply different in their approach. Many take just as much pride in their food as the chefs at the white-tablecloth restaurants do. I want to contribute to a society where it’s possible to esteem the high and low equally, each worthy of respect for what it does well."

Monday, July 23, 2018

We Need Transparency in Algorithms, But Too Much Can Backfire; Harvard Business Review, July 23, 2018

Kartik Hosanagar and Vivian Jair, Harvard Business Review; We Need Transparency in Algorithms, But Too Much Can Backfire

"Companies and governments increasingly rely upon algorithms to make decisions that affect people’s lives and livelihoods – from loan approvals, to recruiting, legal sentencing, and college admissions. Less vital decisions, too, are being delegated to machines, from internet search results to product recommendations, dating matches, and what content goes up on our social media feeds. In response, many experts have called for rules and regulations that would make the inner workings of these algorithms transparent. But as Nass’s experience makes clear, transparency can backfire if not implemented carefully. Fortunately, there is a smart way forward."