Showing posts with label inaccurate AI output. Show all posts

Wednesday, December 25, 2024

Should you trust an AI-assisted doctor? I visited one to see; The Washington Post, December 25, 2024


"The harm of generative AI — notorious for “hallucinations” — producing bad information is often difficult to see, but in medicine the danger is stark. One study found that out of 382 test medical questions, ChatGPT gave an “inappropriate” answer on 20 percent. A doctor using the AI to draft communications could inadvertently pass along bad advice.

Another study found that chatbots can echo doctors’ own biases, such as the racist assumption that Black people can tolerate more pain than White people. Transcription software, too, has been shown to invent things that no one ever said."

Monday, July 29, 2024

Lawyers using AI must heed ethics rules, ABA says in first formal guidance; Reuters, July 29, 2024


"Lawyers must guard against ethical lapses if they use generative artificial intelligence in their work, the American Bar Association said on Monday.

In its first formal ethics opinion on generative AI, an ABA committee said lawyers using the technology must "fully consider" their ethical obligations to protect clients, including duties related to lawyer competence, confidentiality of client data, communication and fees...

Monday's opinion from the ABA's ethics and professional responsibility committee said AI tools can help lawyers increase efficiency but can also carry risks such as generating inaccurate output. Lawyers also must try to prevent inadvertent disclosure or access to client information, and should consider whether they need to tell a client about their use of generative AI technologies, it said."