Showing posts with label developers.

Friday, February 4, 2022

Where Automated Job Interviews Fall Short; Harvard Business Review (HBR), January 27, 2022

Dimitra Petrakaki, Rachel Starr, et al., Harvard Business Review (HBR); Where Automated Job Interviews Fall Short

"The use of artificial intelligence in HR processes is a new, and likely unstoppable, trend. In recruitment, up to 86% of employers use job interviews mediated by technology, a growing portion of which are automated video interviews (AVIs).

AVIs involve job candidates being interviewed by an artificial intelligence, which requires them to record themselves on an interview platform, answering questions under time pressure. The video is then submitted through the AI developer platform, which processes the data of the candidate — this can be visual (e.g. smiles), verbal (e.g. key words used), and/or vocal (e.g. the tone of voice). In some cases, the platform then passes a report with an interpretation of the job candidate’s performance to the employer.

The technologies used for these videos present issues in reliably capturing a candidate’s characteristics. There is also strong evidence that these technologies can contain bias that can exclude some categories of job-seekers. The Berkeley Haas Center for Equity, Gender, and Leadership reports that 44% of AI systems are embedded with gender bias, with about 26% displaying both gender and race bias. For example, facial recognition algorithms have a 35% higher detection error for recognizing the gender of women of color, compared to men with lighter skin.

But as developers work to remove biases and increase reliability, we still know very little about how AVIs (or other types of interviews involving artificial intelligence) are experienced by different categories of job candidates themselves, and how these experiences affect them; this is where our research focused. Without this knowledge, employers and managers can’t fully understand the impact these technologies are having on their talent pool or on different groups of workers (e.g., by age, ethnicity, and social background). As a result, organizations are ill-equipped to discern whether the platforms they turn to are truly helping them hire candidates who align with their goals. We seek to explore whether employers are alienating promising candidates — and potentially entire categories of job seekers by default — because of varying experiences of the technology."
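To make the data flow in the excerpt concrete, here is a minimal sketch of an AVI-style scoring pipeline. Everything in it is an illustrative assumption: the feature names, the weights, and the report format are hypothetical, not any vendor's actual system.

    # Hypothetical sketch of the AVI pipeline described above: visual,
    # verbal, and vocal signals are combined into a report for the employer.
    # All feature names, weights, and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class CandidateFeatures:
        smile_rate: float      # visual: fraction of frames with a detected smile
        keyword_hits: int      # verbal: count of role-relevant keywords used
        pitch_variance: float  # vocal: variability in tone of voice

    def score_candidate(f: CandidateFeatures) -> dict:
        """Combine the three signal types into a single report."""
        visual = min(f.smile_rate, 1.0)
        verbal = min(f.keyword_hits / 10, 1.0)
        vocal = min(f.pitch_variance / 0.5, 1.0)
        # The weights below are human choices, which is precisely the point:
        # every number here encodes a judgment about what "performance" means.
        overall = 0.3 * visual + 0.5 * verbal + 0.2 * vocal
        return {"visual": round(visual, 2), "verbal": round(verbal, 2),
                "vocal": round(vocal, 2), "overall": round(overall, 2)}

    print(score_candidate(CandidateFeatures(0.4, 6, 0.2)))

Even in this toy version, none of the numbers are discovered; they are chosen, which is why the question of who chooses them matters.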
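The disparity statistics in the excerpt come from disaggregated evaluation: measuring error rates separately for each demographic group instead of reporting one aggregate number. A minimal sketch of that kind of audit, using made-up records and assuming a simple (group, truth, prediction) format:

    # Disaggregated-error audit: the measurement technique behind the
    # disparity figures quoted above. The records below are made up.
    from collections import defaultdict

    records = [
        # (group, true_label, predicted_label)
        ("darker-skinned women", "F", "M"),
        ("darker-skinned women", "F", "F"),
        ("lighter-skinned men",  "M", "M"),
        ("lighter-skinned men",  "M", "M"),
    ]

    tallies = defaultdict(lambda: [0, 0])  # group -> [errors, total]
    for group, truth, pred in records:
        tallies[group][0] += int(pred != truth)
        tallies[group][1] += 1

    for group, (errors, total) in tallies.items():
        print(f"{group}: {errors / total:.0%} error rate over {total} samples")

An aggregate accuracy over all four records would hide exactly the gap this per-group breakdown exposes.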

Sunday, April 5, 2020

Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech; ZDNet, April 1, 2020

ZDNet; Developers - it's time to brush up on your philosophy: Ethical AI is the big new thing in tech

The transformative potential of algorithms means that developers are now expected to think about the ethics of technology -- and that wasn't part of the job description.

"Crucially, most guidelines also insist that thought be given to the ethical implications of the technology from the very first stage of conceptualising a new tool, and all the way through its implementation and commercialisation. 

This principle of 'ethics by design' goes hand in hand with that of responsibility and can be translated, roughly, as: 'coders be warned'. In other words, it's now on developers and their teams to make sure that their program doesn't harm users. And the only way to make sure it doesn't is to make the AI ethical from day one.

The trouble with the concept of ethics by design is that tech wasn't necessarily designed for ethics."

Thursday, January 17, 2019

Anil Dash on the biases of tech; The Ezra Klein Show via Vox, January 7, 2019

Ezra Klein, The Ezra Klein Show via Vox; Anil Dash on the biases of tech

[Kip Currier: Excellent podcast discussion of ethics and technology issues by journalist Ezra Klein and Anil Dash, CEO of Glitch and host of the tech podcast Function. 

One particularly thought-provoking, illustrative exchange about the choices humans make in designing and embedding certain values in AI algorithms and the implications of those choices (~5:15 mark):


Ezra Klein: "This feels really important to me because something I'm afraid of, as you move into a world of algorithms, is that algorithms hide the choices we make. That the algorithm says you're not viable for this mortgage. The algorithm says that this Donald Trump tweet should be at the top of everybody's feeds. And when it's the algorithm, that detachment from human beings gives it a kind of authority. It's like some gatekeeper saying this is what you should be looking at..."

Anil Dash: "That's right. The algorithm is a veiling of the fact that it's still the people at that company making the choice. And when YouTube chooses to show disturbing content as "related videos" to my 7-year-old son, that is a choice that people at YouTube are making, and that people at Google and Alphabet are making. And when they say, "Well, the algorithm did it," it's like, "Well, who made the algorithm?" And you can make it not do that. And I know you could do that because, for example, if it were a copyrighted version of a Beyonce song, you'd instantly stop it from being shared. So the algorithm is a set of choices about values and what you want to invest in. And that is, to that point, why technology has values and is not neutral."]

"“Marc Andreessen famously said that ‘software is eating the world,’ but it’s far more accurate to say that the neoliberal values of software tycoons are eating the world,” wrote Anil Dash.

Dash’s argument caught my eye. But then, a lot of Dash’s arguments catch my eye. He’s one of the most perceptive interpreters and critics of the tech industry around these days. That’s in part because Dash is part of the world he’s describing: He’s the CEO of Glitch, the host of the excellent tech podcast Function, and a longtime developer and blogger.

In this conversation, Dash and I discuss his excellent list of the 12 things everyone should know about technology. This episode left me with an idea I didn’t have going in: What if the problem with a lot of the social technologies we use — and, lately, lament — isn’t the ethics of their creators or the revenue models they’re built on, but the sheer scale they’ve achieved? What if products like Facebook and Twitter and Google have just gotten too big and too powerful for anyone to truly understand, much less manage?"

Thursday, September 27, 2018

92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built; Forbes, September 26, 2018

John Koetsier, Forbes; 92% Of AI Leaders Now Training Developers In Ethics, But 'Killer Robots' Are Already Being Built

""Organizations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence, said in a statement. “Organizations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm.’ They need to provide prescriptive, specific and technical guidelines to develop AI systems that are secure, transparent, explainable, and accountable – to avoid unintended consequences and compliance challenges that can be harmful to individuals, businesses, and society.""

Saturday, March 3, 2018

Who needs ethics anyway? – Chips with Everything podcast; Guardian, March 2, 2018

[Podcast] Jordan Erica Webber, Guardian; Who needs ethics anyway? – Chips with Everything podcast


"Technology companies seem to have a bad reputation at the moment. Whether through honest mistakes or more intentional oversights, the likes of Apple, Facebook, Google and Twitter have created distrust among consumers.

But as technology develops, and as we hand over more control to artificial intelligence and machines, it becomes difficult for developers to foresee the negative consequences or side-effects that might arise.

In October 2017, the AI company DeepMind, a subsidiary of Google, created an ethics group made up of employees and external experts called DeepMind Ethics & Society.

But are these groups any more than a PR strategy? And how can we train technology students to preempt an ethical disaster before they enter the workforce?

To discuss these issues, Jordan Erica Webber is joined by Dr Mariarosaria Taddeo of the Oxford Internet Institute, Prof Laura Norén of NYU and student Kandrea Wade."