Showing posts with label accuracy.

Monday, November 11, 2024

Ted Danson Says ‘The Good Place’ Writers Had Ethics Professors “On Speed Dial” For Accuracy; Deadline, November 9, 2024

Glenn Garner, Deadline; Ted Danson Says ‘The Good Place’ Writers Had Ethics Professors “On Speed Dial” For Accuracy

"Following his time on The Good PlaceTed Danson is giving it up for the show’s architects.

The 3x Golden Globe winner recently explained the lengths the NBC comedy’s writers’ room went to in order to more accurately depict its characters’ ethical dilemmas as they navigated the afterlife.

“On speed dial, we had three or four ethics professors who would talk to the writers daily to make sure what we were talking about was right,” said Danson on his Where Everybody Knows Your Name podcast, as co-host Woody Harrelson added, “Sometimes it’s good to get a second, third opinion.”

The Michael Schur-created comedy, which ran for four seasons from 2016 to 2020, starred Kristen Bell as Eleanor Shellstrop, an ethically questionable soul who mistakenly ends up in the titular ‘Good Place’ after her unexpected death."

Saturday, October 5, 2024

Police reports written with advanced tech could help cops but comes with host of challenges: expert; Fox News, September 24, 2024

Christina Coulter, Fox News; Police reports written with advanced tech could help cops but comes with host of challenges

"Several police departments nationwide are debuting artificial intelligence that writes officers' incident reports for them, and although the software could cause issues in court, an expert says, the technology could be a boon for law enforcement.

Oklahoma City's police department was among the first to experiment with Draft One, an AI-powered software that analyzes police body-worn camera audio and radio transmissions to write police reports that can later be used to justify criminal charges and as evidence in court.

Since The Associated Press detailed the software and its use by the department in a late August article, the department told Fox News Digital that it has put the program on hold. 

"The use of the AI report writing has been put on hold, so we will pass on speaking about it at this time," Capt. Valerie Littlejohn wrote via email. "It was paused to work through all the details with the DA’s Office."...

According to Politico, at least seven police departments nationwide are using Draft One, which was made by police technology company Axon to be used with its widely used body-worn cameras."

Locals turn to legacy media as hurricane rumors swirl; Axios, October 1, 2024

Michael Graff and Sara Fischer, Axios; Locals turn to legacy media as hurricane rumors swirl

"Old-fashioned legacy media — especially radio — have become a vital information lifeline in the chaotic aftermath of Hurricane Helene. 

Why it matters: Power outages, lost cell signals and hundreds of road closures have stifled on-the-ground reporting, giving way to falsehoods that can spread quickly online — and creating an urgent need to correct them.

  • Local reporters are working overtime to correct the record. In many cases, they're filling an information void left by local government officials who were caught off guard by the severity of the storm's flooding in mountainous regions around Asheville, North Carolina."

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy."

Monday, September 23, 2024

Generative AI and Legal Ethics; JD Supra, September 20, 2024

Craig Brodsky, Goodell, DeVries, Leech & Dann, LLP, JD Supra; Generative AI and Legal Ethics

 "In his scathing opinion, Cullen joined judges from New York Massachusetts and North Carolina, among others, by concluding that improper use of AI generated authorities may give rise to sanctions and disciplinary charges...

As a result, on July 29, 2024, the American Bar Association Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512 on Generative Artificial Intelligence Tools. The ABA Standing Committee issued the opinion primarily because GAI tools are a “rapidly moving target” that can create significant ethical issues. The committee believed it necessary to offer “general guidance for lawyers attempting to navigate this emerging landscape.”

The committee’s general guidance is helpful, but the general nature of Opinion 512 underscores part of my main concern — GAI has a wide-ranging impact on how lawyers practice that will increase over time. Unsurprisingly, at present, GAI implicates at least eight ethical rules, ranging from competence (Md. Rule 19-301.1) to communication (Md. Rule 19-301.4), to fees (Md. Rule 19-301.5), to confidentiality (Md. Rule 19-301.6), to supervisory obligations (Md. Rule 19-305.1 and Md. Rule 19-305.3), to the duties of a lawyer before a tribunal to be candid and pursue meritorious claims and defenses (Md. Rules 19-303.1 and 19-303.3).

As a technological feature of practice, lawyers cannot simply ignore GAI. The duty of competence under Rule 19-301.1 includes technical competence, and GAI is just another step forward. It is here to stay. We must embrace it but use it smartly.

Let it be an adjunct to your practice rather than having ChatGPT write your brief. Ensure that your staff understands that GAI can be helpful, but that the work product must be checked for accuracy.

After considering the ethical implications and putting the right processes in place, implement GAI and use it to your clients’ advantage."

Tuesday, September 10, 2024

‘They’re Eating the Cats’: Trump Repeats False Claim About Immigrants; The New York Times, September 10, 2024

The New York Times; ‘They’re Eating the Cats’: Trump Repeats False Claim About Immigrants

"Former President Donald J. Trump repeated a false and outlandish claim that Haitian immigrants in Springfield, Ohio, have abducted and eaten their neighbors’ pets.

Mr. Trump made the comments on Tuesday early in his first debate against Vice President Kamala Harris, shortly after Ms. Harris mocked his rallies as so filled with fictions and fringe theories that attendees leave early. Mr. Trump responded by trying to pivot back to the subject under discussion, immigration.

“A lot of towns don’t want to talk about it because they’re so embarrassed by it,” he said. “In Springfield, they’re eating the dogs. The people that came in, they’re eating the cats. They’re eating — they’re eating the pets of the people that live there.”

Mr. Trump and his running mate, Senator JD Vance of Ohio, have amplified the internet rumor on the campaign trail this week. It stems from viral social media posts that have spread as Mr. Vance and others have sought to stir fears about the growing Haitian population in Springfield, though members of the community are living and working in the United States legally."

Wednesday, August 21, 2024

Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns; Medscape, August 21, 2024

Arthur L. Caplan, PhD, Medscape; Leaving Your Legacy Via Death Bots? Ethicist Shares Concerns

"On the other hand, there are clearly many ethical issues about creating an artificial version of yourself. One obvious issue is how accurate this AI version of you will be if the death bot can create information that sounds like you, but really isn't what you would have said, despite the effort to glean it from recordings and past information about you. Is it all right if people wander from the truth in trying to interact with someone who's died? 

There are other ways to leave memories behind. You certainly can record messages so that you can control the content. Many people video themselves and so on. There are obviously people who would say that they have a diary or have written information they can leave behind. 

Is there a place in terms of accuracy for a kind of artificial version of ourselves to go on forever? Another interesting issue is who controls that. Can you add to it after your death? Can information be shared about you with third parties who don't sign up for the service? Maybe the police take an interest in how you died. You can imagine many scenarios where questions might come up about wanting to access these data that the artificial agent is providing. 

Some people might say that it's just not the way to grieve. Maybe the best way to grieve is to accept death and not try to interact with a constructed version of yourself once you've passed. That isn't really accepting death. It's a form, perhaps, of denial of death, and maybe that isn't going to be good for the mental health of survivors who really have not come to terms with the fact that someone has passed on."

Sunday, August 18, 2024

A.L.S. Stole His Voice. A.I. Retrieved It.; The New York Times, August 14, 2024

The New York Times; A.L.S. Stole His Voice. A.I. Retrieved It.

"As scientists continued training the device to recognize his sounds, it got only better. Over a period of eight months, the study said, Mr. Harrell came to utter nearly 6,000 unique words. The device kept up, sustaining a 97.5 percent accuracy.

That exceeded the accuracy of many smartphone applications that transcribe people’s intact speech. It also marked an improvement on previous studies in which implants reached accuracy rates of roughly 75 percent, leaving one of every four words liable to misinterpretation.

And whereas devices like Neuralink’s help people move cursors across a screen, Mr. Harrell’s implant allowed him to explore the infinitely larger and more complex terrain of speech.

“It went from a scientific demonstration to a system that Casey can use every day to speak with family and friends,” said Dr. David Brandman, the neurosurgeon who operated on Mr. Harrell and led the study alongside Dr. Stavisky.

That leap was enabled in part by the types of artificial intelligence that power language tools like ChatGPT. At any given moment, Mr. Harrell’s implant picks up activity in an ensemble of neurons, translating their firing pattern into vowel or consonant units of sound. Computers then agglomerate a string of such sounds into a word, and a string of words into a sentence, choosing the output they deem likeliest to correspond to what Mr. Harrell has tried to say...

Whether the same implant would prove as helpful to more severely paralyzed people is unclear. Mr. Harrell’s speech had deteriorated, but not disappeared.

And for all its utility, the technology cannot mitigate the crushing financial burden of trying to live and work with A.L.S.: Insurance will pay for Mr. Harrell’s caregiving needs only if he goes on hospice care, or stops working and becomes eligible for Medicaid, Ms. Saxon said, a situation that, she added, drives others with A.L.S. to give up trying to extend their lives.

Those very incentives also make it likelier that people with disabilities will become poor, putting access to cutting-edge implants even further out of their reach, said Melanie Fried-Oken, a professor of neurology at Oregon Health & Science University."
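The excerpt above sketches the decoding pipeline at a high level: neural activity is mapped to per-timestep phoneme probabilities, which are then assembled into the word (and ultimately sentence) that a language model scores as most likely. Below is a minimal Python sketch of that idea. It is a toy illustration, not the study's actual decoder; the phoneme distributions, lexicon, and word priors are all hypothetical stand-ins.

```python
import math

# Toy per-timestep phoneme probabilities, standing in for the output of a
# neural decoder reading the implant's recordings (hypothetical values).
phoneme_probs = [
    {"HH": 0.7, "L": 0.3},   # timestep 1
    {"AY": 0.8, "EH": 0.2},  # timestep 2
]

# Tiny hypothetical lexicon: phoneme sequences mapped to candidate words.
lexicon = {("HH", "AY"): "hi", ("L", "AY"): "lie", ("HH", "EH"): "heh"}

# Tiny hypothetical language model: prior probability of each word.
word_prior = {"hi": 0.6, "lie": 0.3, "heh": 0.1}

def best_word(phoneme_probs, lexicon, word_prior):
    """Return the word whose combined acoustic and language-model score is
    highest, i.e. the output 'deemed likeliest' in the article's phrasing."""
    best, best_score = None, -math.inf
    for phones, word in lexicon.items():
        if len(phones) != len(phoneme_probs):
            continue  # only score words spanning the same number of steps
        # log P(phoneme sequence | neural activity), one factor per timestep
        acoustic = sum(math.log(step.get(p, 1e-9))
                       for step, p in zip(phoneme_probs, phones))
        score = acoustic + math.log(word_prior.get(word, 1e-9))
        if score > best_score:
            best, best_score = word, score
    return best

print(best_word(phoneme_probs, lexicon, word_prior))  # -> "hi"
```

A real system would decode continuously with a neural sequence model and search over whole sentences rather than isolated words, but combining an acoustic-style score with a language-model prior is the core idea the article describes.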

Monday, August 12, 2024

Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization; Religion News Service, May 6, 2024

Aleja Hertzler-McCain, Religion News Service; Silicon Valley bishop, two Catholic AI experts weigh in on AI evangelization

"San Jose, California, Bishop Oscar Cantú, who leads the Catholic faithful in Silicon Valley, said that AI doesn’t come up much with parishioners in his diocese...

Pointing to the adage coined by Meta founder Mark Zuckerberg, “move fast and break things,” the bishop said, “with AI, we need to move very cautiously and slowly and try not to break things. The things we would be breaking are human lives and reputations.”...

Noreen Herzfeld, a professor of theology and computer science at St. John’s University and the College of St. Benedict and one of the editors of a book about AI sponsored by the Vatican Dicastery for Culture and Education, said that the AI character was previously “impersonating a priest, which is considered a very serious sin in Catholicism.”...

Accuracy issues, Herzfeld said, are one of many reasons it should not be used for evangelization. “As much as you beta test one of these chatbots, you will never get rid of hallucinations” — moments when the AI makes up its own answers, she said...

Larrey, who has been studying AI for nearly 30 years and is in conversation with Sam Altman, the CEO of OpenAI, is optimistic that the technology will improve. He said Altman is already making progress on the hallucinations, on its challenges to users’ privacy and reducing its energy use — a recent analysis estimated that by 2027, artificial intelligence could suck up as much electricity as the population of Argentina or the Netherlands."

Friday, June 14, 2024

At 17, She Fell in Love With a 47-Year-Old. Now She Questions the Story.; The New York Times, June 10, 2024

 Alexandra Alter, The New York Times; At 17, She Fell in Love With a 47-Year-Old. Now She Questions the Story.

"Ciment decided to perform an autopsy on her memoir. The exercise yielded a new memoir, titled “Consent,” which Pantheon will release on Tuesday. With almost clinical detachment, Ciment investigates the flaws and factual lapses in her earlier work, and in doing so, questions the artifice inherent in memoir as a literary form.

“The whole idea of writing truth in a memoir is so preposterous,” Ciment said. “You have these scattered memories, and you’re trying to carve a story out of them.”"

Friday, June 7, 2024

‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped; The New York Times, June 6, 2024

Tripp Mickle, The New York Times; ‘This Is Going to Be Painful’: How a Bold A.I. Device Flopped

"As of early April, Humane had received around 10,000 orders for the Ai Pin, a small fraction of the 100,000 that it hoped to sell this year, two people familiar with its sales said. In recent months, the company has also grappled with employee departures and changed a return policy to address canceled orders. On Wednesday, it asked customers to stop using the Ai Pin charging case because of a fire risk associated with its battery.

Its setbacks are part of a pattern of stumbles across the world of generative A.I., as companies release unpolished products. Over the past two years, Google has introduced and pared back A.I. search abilities that recommended people eat rocks, Microsoft has trumpeted a Bing chatbot that hallucinated and Samsung has added A.I. features to a smartphone that were called “excellent at times and baffling at others.”"

Thursday, March 28, 2024

Your newsroom needs an AI ethics policy. Start here.; Poynter, March 25, 2024

Poynter; Your newsroom needs an AI ethics policy. Start here.

"Every single newsroom needs to adopt an ethics policy to guide the use of generative artificial intelligence. Why? Because the only way to create ethical standards in an unlicensed profession is to do it shop by shop.

Until we create those standards — even though it’s early in the game — we are holding back innovation.

So here’s a starter kit, created by Poynter’s Alex Mahadevan, Tony Elkins and me. It’s a statement of journalism values that roots AI experimentation in the principles of accuracy, transparency and audience trust, followed by a set of specific guidelines.

Think of it like a meal prep kit. Most of the work is done, but you still have to roll up your sleeves and do a bit of labor. This policy includes blank spaces, where newsroom leaders will have to add details, saying “yes” or “no” to very specific activities, like using AI-generated illustrations.

In order to effectively use this AI ethics policy, newsrooms will need to create an AI committee and designate an editor or senior journalist to lead the ongoing effort. This step is critical because the technology is going to evolve, the tools are going to multiply and the policy will not keep up unless it is routinely revised."

Tuesday, February 20, 2024

He Hunts Sloppy Scientists. He’s Finding Lots of Prey.; The New York Times, February 2, 2024

Matt Richtel, The New York Times; He Hunts Sloppy Scientists. He’s Finding Lots of Prey.

"Sholto David, 32, has a Ph.D. in cellular and molecular biology from Newcastle University in England. He is also developing an expertise in spotting errors in scientific papers. Most recently, and notably, he discovered flawed or manipulated data in studies conducted by top executives at the Harvard-affiliated Dana-Farber Cancer Institute. The institute said that it was requesting retraction of six manuscripts and had found 31 other manuscripts that required corrections.

From his home in Wales, Dr. David scours new research publications for images that are mislabeled and manipulated, and he regularly finds mistakes, or malfeasance, in some of the most prominent scientific journals. Accuracy is vital, as peer-reviewed papers often provide the evidence for drug trials or further lines of research. Dr. David said that the frequency of such errors suggests an underlying problem for science.

His interview with The New York Times has been edited and condensed...

Does this call into question the peer-review process?

I think that’s something that people need to think about. These are top scientific journals with errors that escaped peer review. Maybe the peer reviewers are looking for other things. Maybe they like to look at the methods or the conclusions more carefully than the results. But, yeah, it does make me think that people should question how effective the peer-review process has been."

Friday, August 11, 2023

Senator wants Google to answer for accuracy, ethics of generative AI tool; HealthcareITNews, August 9, 2023

 Mike Miliard, HealthcareITNews; Senator wants Google to answer for accuracy, ethics of generative AI tool

"Sen. Mark Warner, D-Virginia, wrote a letter to Sundar Pichai, CEO of Google parent company Alphabet, on Aug. 8, seeking clarity into the technology developer's Med-PaLM 2, an artificial intelligence chatbot, and how it's being deployed and trained in healthcare settings."

Monday, July 3, 2023

Managing the Risks of Generative AI; Harvard Business Review (HBR), June 6, 2023

Harvard Business Review (HBR); Managing the Risks of Generative AI

"Guidelines for the ethical development of generative AI

Our new set of guidelines can help organizations evaluate generative AI’s risks and considerations as these tools gain mainstream adoption. They cover five focus areas."

Saturday, March 12, 2022

About WBUR's Ethics Guide; WBUR, March 10, 2022

WBUR; About WBUR's Ethics Guide

"The committee approached the guidelines from the vantage point of WBUR journalists and journalism — while acknowledging the importance of the ethical guidelines and standards that need to be understood and embraced by everyone who works or is associated with WBUR.

The committee used the NPR Ethics Handbook as a structural model and source text, adopted with a WBUR voice. They also addressed ethics issues from a 2021 perspective, recognizing that much has changed in the public media and journalism field since the NPR Handbook was first written a decade ago."

WBUR Ethics Guide PDF: https://d279m997dpfwgl.cloudfront.net/wp/2022/03/WBUR-Ethics-Guidelines.pdf

Saturday, July 11, 2020

Wrongfully Accused by an Algorithm; The New York Times, June 24, 2020

Kashmir Hill, The New York Times; Wrongfully Accused by an Algorithm

In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.

"Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology, has written about problems with the government’s use of facial recognition. She argues that low-quality search images — such as a still image from a grainy surveillance video — should be banned, and that the systems currently in use should be tested rigorously for accuracy and bias.

“There are mediocre algorithms and there are good ones, and law enforcement should only buy the good ones,” Ms. Garvie said.

About Mr. Williams’s experience in Michigan, she added: “I strongly suspect this is not the first case to misidentify someone to arrest them for a crime they didn’t commit. This is just the first time we know about it.”"

Thursday, November 21, 2019

Consumer DNA Testing May Be the Biggest Health Scam of the Decade; Gizmodo, November 20, 2019

Ed Cara, Gizmodo; Consumer DNA Testing May Be the Biggest Health Scam of the Decade

"This test, as well as many of those offered by the hundreds of big and small DNA testing companies on the market, illustrates the uncertainty of personalized consumer genetics.

The bet that companies like 23andMe are making is that they can untangle this mess and translate their results back to people in a way that won’t cross the line into deceptive marketing while still convincing their customers they truly matter. Other companies have teamed up with outside labs and doctors to look over customers’ genes and have hired genetic counselors to go over their results, which might place them on safer legal and medical ground. But it still raises the question of whether people will benefit from the information they get. And because our knowledge of the relationship between genes and health is constantly changing, it’s very much possible the DNA test you take in 2020 will tell you a totally different story by 2030."

Tuesday, September 17, 2019

Real-Time Surveillance Will Test the British Tolerance for Cameras; The New York Times, September 15, 2019

The New York Times; Real-Time Surveillance Will Test the British Tolerance for Cameras

Facial recognition technology is drawing scrutiny in a country more accustomed to surveillance than any other Western democracy. 

"“Technology is driving forward, and legislation and regulation follows ever so slowly behind,” said Tony Porter, Britain’s surveillance camera commissioner, who oversees compliance with the country’s surveillance camera code of practice. “It would be wrong for me to suggest the balance is right.”

Britain’s experience mirrors debates about the technology in the United States and elsewhere in Europe. Critics say the technology is an intrusion of privacy, akin to constant identification checks of an unsuspecting public, and has questionable accuracy, particularly at identifying people who aren’t white men."