Tuesday, February 12, 2019

Rethinking Medical Ethics; Forbes, February 11, 2019

"Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision—and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI “black box”? How do they make sure AI systems avoid bias and protect patient privacy?

In June 2018, the American Medical Association (AMA) issued its first guidelines for how to develop, use and regulate AI. (Notably, the association refers to AI as “augmented intelligence,” reflecting its belief that AI will enhance, not replace, the work of physicians.) Among its recommendations, the AMA says, AI tools should be designed to identify and address bias and avoid creating or exacerbating disparities in the treatment of vulnerable populations. Tools, it adds, should be transparent and protect patient privacy.

None of those recommendations will be easy to satisfy. Here is how medical practitioners, researchers, and medical ethicists are approaching some of the most pressing ethical challenges."
