Joia Crear-Perry, Michael McAfee, Scientific American; To Protect Black Americans from the Worst Impacts of COVID-19, Release Comprehensive Racial Data
Properly reported information is crucial for Black communities to recover from this crisis and transcend a history of exclusion
"History shows that when crises strike, Black Americans often experience the worst consequences. We mustn’t continue allowing this to happen. Our organizations—the National Birth Equity Collaborative and PolicyLink—recently joined a coalition called WeMustCount demanding the data. Once we have that data, we’re calling on policymakers to take immediate action to help.
The data on Black Americans and COVID-19 are shocking but not unexpected. Ingrained racist structures prevent them from fully accessing health care, education, employment and more—all of which increases susceptibility to COVID-19 and its most devastating health consequences.
These issues trace back far before the current pandemic. This inequity was baked into the nation’s founding and carries forward today. Black Americans have always suffered disproportionately from national crises...
Buried behind all of this is an underlying fear: Releasing the information would mean bringing attention to a problem that policy makers could otherwise easily ignore."
Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label policy makers. Show all posts
Wednesday, May 6, 2020
Wednesday, November 6, 2019
How Machine Learning Pushes Us to Define Fairness; Harvard Business Review, November 6, 2019
David Weinberger, Harvard Business Review; How Machine Learning Pushes Us to Define Fairness
"Even with the greatest of care, an ML system might find biased patterns so subtle and complex that they hide from the best-intentioned human attention. Hence the necessary current focus among computer scientists, policy makers, and anyone concerned with social justice on how to keep bias out of AI.
Yet machine learning’s very nature may also be bringing us to think about fairness in new and productive ways. Our encounters with machine learning (ML) are beginning to give us concepts, a vocabulary, and tools that enable us to address questions of bias and fairness more directly and precisely than before."
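The excerpt above argues that ML gives us concepts and tools to pose fairness questions precisely. One common formalization (not taken from Weinberger's article; the data and function names below are hypothetical) is demographic parity: does a model produce positive predictions at the same rate across groups? A minimal sketch:

```python
# Sketch of "demographic parity," one precise fairness definition that
# ML work has popularized. All data here are hypothetical illustrations.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two groups.
    A gap of 0 means the model treats the groups identically by this metric."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical model outputs (1 = approved, 0 = denied) for two groups:
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # positive rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # positive rate 3/8 = 0.375

print(demographic_parity_gap(group_a, group_b))  # 0.25
```

Defining the metric this precisely is exactly the shift the article describes: once fairness is a number, disagreements about which number matters (parity of outcomes, of error rates, of calibration) become explicit rather than implicit.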
Monday, November 25, 2013
Already Anticipating ‘Terminator’ Ethics; New York Times, 11/24/13
John Markoff, New York Times; Already Anticipating ‘Terminator’ Ethics:
"What could possibly go wrong?
That was a question that some of the world’s leading roboticists faced at a technical meeting in October, when they were asked to consider what the science-fiction writer Isaac Asimov anticipated a half-century ago: the need to design ethical behavior into robots...
All of which make questions about robots and ethics more than hypothetical for roboticists and policy makers alike.
The discussion about robots and ethics came during this year’s Humanoids technical conference. At the conference, which focused on the design and application of robots that appear humanlike, Ronald C. Arkin delivered a talk on “How to NOT Build a Terminator,” picking up where Asimov left off with his fourth law of robotics — “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
While he did an effective job posing the ethical dilemmas, he did not offer a simple solution. His intent was to persuade the researchers to confront the implications of their work."