Showing posts with label public distrust. Show all posts

Thursday, March 10, 2022

David J. Hickton: Report for region: People must have voice, stake in algorithms; The Pittsburgh Post-Gazette, March 10, 2022

David J. Hickton, The Pittsburgh Post-Gazette; David J. Hickton: Report for region: People must have voice, stake in algorithms

"The institute that I lead — the University of Pittsburgh’s Institute for Cyber Law, Policy and Security, or simply Pitt Cyber — formed the Pittsburgh Task Force on Public Algorithms to do precisely that for our region.

We brought together a diverse group of experts and leaders from across the region and the country to study how our local governments are using algorithms and the state of public participation and oversight of these systems.

Our findings should be no surprise: Public algorithms are on the rise. And the openness of and public participation in the development and deployment of those systems varies considerably across local governments and agencies...

Our Task Force’s report — the product of our two-year effort — offers concrete recommendations to policymakers. For example, we encourage independent reviews and public involvement in the development of algorithmic systems commensurate with their risks: higher-risk systems, like those involved in decisions affecting liberty, require more public buy-in and examination."

Friday, April 16, 2021

Big Tech’s guide to talking about AI ethics; Wired, April 13, 2021

Wired; Big Tech’s guide to talking about AI ethics

"AI researchers often say good machine learning is really more art than science. The same could be said for effective public relations. Selecting the right words to strike a positive tone or reframe the conversation about AI is a delicate task: done well, it can strengthen one’s brand image, but done poorly, it can trigger an even greater backlash.

The tech giants would know. Over the last few years, they’ve had to learn this art quickly as they’ve faced increasing public distrust of their actions and intensifying criticism about their AI research and technologies.

Now they’ve developed a new vocabulary to use when they want to assure the public that they care deeply about developing AI responsibly—but want to make sure they don’t invite too much scrutiny. Here’s an insider’s guide to decoding their language and challenging the assumptions and values baked in...

diversity, equity, and inclusion (ph) - The act of hiring engineers and researchers from marginalized groups so you can parade them around to the public. If they challenge the status quo, fire them...

ethics board (ph) - A group of advisors without real power, convened to create the appearance that your company is actively listening. Examples: Google’s AI ethics board (canceled), Facebook’s Oversight Board (still standing).

ethics principles (ph) - A set of truisms used to signal your good intentions. Keep it high-level. The vaguer the language, the better. See responsible AI."