Forbes; Rethinking Medical Ethics
"Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision—and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI “black box”? How do they make sure AI systems avoid bias and protect patient privacy?

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges."
Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025. Kip Currier, PhD, JD
Showing posts with label avoiding biases. Show all posts
Tuesday, February 12, 2019
Rethinking Medical Ethics; Forbes, February 11, 2019
Sunday, January 27, 2019
Can we make artificial intelligence ethical?; The Washington Post, January 23, 2019
Stephen A. Schwarzman, The Washington Post; Can we make artificial intelligence ethical?
"Stephen A. Schwarzman is chairman, CEO and co-founder of Blackstone, an investment firm...
Too often, we think only about increasing our competitiveness in terms of advancing the technology. But the effort can’t just be about making AI more powerful. It must also be about making sure AI has the right impact. AI’s greatest advocates describe the Utopian promise of a technology that will save lives, improve health and predict events we previously couldn’t anticipate. AI’s detractors warn of a dystopian nightmare in which AI rapidly replaces human beings at many jobs and tasks. If we want to realize AI’s incredible potential, we must also advance AI in a way that increases the public’s confidence that AI benefits society. We must have a framework for addressing the impacts and the ethics.
What does an ethics-driven approach to AI look like?
It means asking not only whether AI can be used in certain circumstances, but should it?
Companies must take the lead in addressing key ethical questions surrounding AI. This includes exploring how to avoid biases in AI algorithms that can prejudice the way machines and platforms learn and behave and when to disclose the use of AI to consumers, how to address concerns about AI’s effect on privacy and responding to employee fears about AI’s impact on jobs.
As Thomas H. Davenport and Vivek Katyal argue in the MIT Sloan Management Review, we must also recognize that AI often works best with humans instead of by itself."