Rethinking Medical Ethics; Forbes
"Even so, the technology raises some knotty ethical questions. What
happens when an AI system makes the wrong decision—and who is
responsible if it does? How can clinicians verify, or even understand,
what comes out of an AI “black box”? How do they make sure AI systems
avoid bias and protect patient privacy?
In June 2018, the American Medical Association (AMA) issued its first
guidelines for how to develop, use and regulate AI. (Notably, the
association refers to AI as “augmented intelligence,” reflecting its
belief that AI will enhance, not replace, the work of physicians.) Among
its recommendations, the AMA says, AI tools should be designed to
identify and address bias and avoid creating or exacerbating disparities
in the treatment of vulnerable populations. Tools, it adds, should be
transparent and protect patient privacy.
None of those
recommendations will be easy to satisfy. Here is how medical
practitioners, researchers, and medical ethicists are approaching some
of the most pressing ethical challenges."
Issues and developments related to ethics, information, and technologies, examined in the ethics and intellectual property graduate courses I teach at the University of Pittsburgh School of Computing and Information. My Bloomsbury book "Ethics, Information, and Technology" will be published in Summer 2025.

Kip Currier, PhD, JD
Showing posts with label American Medical Association (AMA) guidelines on AI.
Tuesday, February 12, 2019
Rethinking Medical Ethics; Forbes, February 11, 2019