Showing posts with label due diligence.

Tuesday, October 1, 2024

Fake Cases, Real Consequences [No digital link as of 10/1/24]; ABA Journal, Oct./Nov. 2024 Issue

John Roemer, ABA Journal; Fake Cases, Real Consequences [No digital link as of 10/1/24]

"Legal commentator Eugene Volokh, a professor at UCLA School of Law who tracks AI in litigation, in February reported on the 14th court case he's found in which AI-hallucinated false citations appeared. It was a Missouri Court of Appeals opinion that assessed the offending appellant $10,000 in damages for a frivolous filing.

Hallucinations aren't the only snag, Volokh says. "It's also with the output mischaracterizing the precedents or omitting key context. So one still has to check that output to make sure it's sound, rather than just including it in one's papers."

Echoing Volokh and other experts, ChatGPT itself seems clear-eyed about its limits. When asked about hallucinations in legal research, it replied in part: "Hallucinations in chatbot answers could potentially pose a problem for lawyers if they relied solely on the information provided by the chatbot without verifying its accuracy.""

Thursday, September 5, 2024

Intellectual property and data privacy: the hidden risks of AI; Nature, September 4, 2024

Amanda Heidt, Nature; Intellectual property and data privacy: the hidden risks of AI

"Timothée Poisot, a computational ecologist at the University of Montreal in Canada, has made a successful career out of studying the world’s biodiversity. A guiding principle for his research is that it must be useful, Poisot says, as he hopes it will be later this year, when it joins other work being considered at the 16th Conference of the Parties (COP16) to the United Nations Convention on Biological Diversity in Cali, Colombia. “Every piece of science we produce that is looked at by policymakers and stakeholders is both exciting and a little terrifying, since there are real stakes to it,” he says.

But Poisot worries that artificial intelligence (AI) will interfere with the relationship between science and policy in the future. Chatbots such as Microsoft’s Bing, Google’s Gemini and ChatGPT, made by tech firm OpenAI in San Francisco, California, were trained using a corpus of data scraped from the Internet — which probably includes Poisot’s work. But because chatbots don’t often cite the original content in their outputs, authors are stripped of the ability to understand how their work is used and to check the credibility of the AI’s statements. It seems, Poisot says, that unvetted claims produced by chatbots are likely to make their way into consequential meetings such as COP16, where they risk drowning out solid science.

“There’s an expectation that the research and synthesis is being done transparently, but if we start outsourcing those processes to an AI, there’s no way to know who did what and where the information is coming from and who should be credited,” he says...

The technology underlying genAI, which was first developed at public institutions in the 1960s, has now been taken over by private companies, which usually have no incentive to prioritize transparency or open access. As a result, the inner mechanics of genAI chatbots are almost always a black box — a series of algorithms that aren’t fully understood, even by their creators — and attribution of sources is often scrubbed from the output. This makes it nearly impossible to know exactly what has gone into a model’s answer to a prompt. Organizations such as OpenAI have so far asked users to ensure that outputs used in other work do not violate laws, including intellectual-property and copyright regulations, or divulge sensitive information, such as a person’s location, gender, age, ethnicity or contact information. Studies have shown that genAI tools might do both [1,2]."

Sunday, December 31, 2023

Michael Cohen used fake cases created by AI in bid to end his probation; The Washington Post, December 29, 2023

The Washington Post; Michael Cohen used fake cases created by AI in bid to end his probation

"Michael Cohen, a former fixer and lawyer for former president Donald Trump, said in a new court filing that he unknowingly gave his attorney bogus case citations after using artificial intelligence to create them as part of a legal bid to end his probation on tax evasion and campaign finance violation charges...

In the filing, Cohen wrote that he had not kept up with “emerging trends (and related risks) in legal technology and did not realize that Google Bard was a generative text service that, like ChatGPT, could show citations and descriptions that looked real but actually were not.” To him, he said, Google Bard seemed to be a “supercharged search engine.”...

This is at least the second instance this year in which a Manhattan federal judge has confronted lawyers over using fake AI-generated citations. Two lawyers in June were fined $5,000 in an unrelated case where they used ChatGPT to create bogus case citations."

Sunday, July 23, 2023

Want to quickly spot idiots? Here are five foolproof red flags; The Guardian, July 23, 2023

The Guardian; Want to quickly spot idiots? Here are five foolproof red flags

"f you want to be successful in this world, you have to develop your own idiot detection system,” the governor of Illinois, JB Pritzker, recently told the Northwestern University Class of 2023. Pritzker, a billionaire and self-described “cheugy dad”, clearly knows a thing or two about successful commencement speeches: his talk has gone viral.While the 20-minute speech, which was organized around quotes from characters in The Office series, wasn’t entirely about idiot-spotting, that section of it seemed to resonate the most.

You can see why. We live in a golden age of grifters, bullshitters and scammers. We live in an age where some of the world’s most powerful people threw millions of dollars at Elizabeth Holmes, without doing proper due diligence, because she came from the right background and sounded like she knew what she was talking about. A fantasist like George Santos managed to successfully fib his way into government. And Marjorie Taylor Greene has a seat in US Congress despite routinely going on unhinged rants about, inter alia, the “gazpacho police”. Clearly not enough people have functioning idiot detection systems.

So how do you spot an idiot? Well, says Pritzker, it’s not always easy. “I wish there was a foolproof way to spot idiots, but counterintuitively, some idiots are very smart. They can dazzle you with words and misdirection. They can get promoted above you at work,” Pritzker said. “They can even get elected president.”

That said, there are some major signs to watch out for. [Bolds added] The best way to spot an idiot is to “look for the person who is cruel”, Pritzker says. “When someone’s path through this world is marked with acts of cruelty, they have failed the first test of an advanced society. They never forced their animal brain to evolve past its first instinct … Over my many years in politics and business, I have found one thing to be universally true – the kindest person in the room is often the smartest.”"

Thursday, July 13, 2023

Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law; Forbes, July 10, 2023

Dr. Lance B. Eliot, Forbes; Are We Going Too Far By Allowing Generative AI To Control Robots, Worriedly Asks AI Ethics And AI Law

"What amount of due diligence is needed or required on the part of the user when it comes to generative AI and robots?

Nobody can as yet say for sure. Until we end up with legal cases and issues involving presumed harm, this is a gray area. For lawyers that want to get involved in AI and law, these are going to be an exciting and emerging set of legal challenges and legal puzzles that will undoubtedly arise as the use of generative AI becomes further ubiquitous and the advent of robots becomes affordable and practical in our daily lives.

You might also find of interest that some of the AI makers have contractual or licensing clauses that if you are using their generative AI and they get sued for something you did as a result of using their generative AI, you indemnify the AI maker and pledge to pay for their costs and expenses to fight the lawsuit, see my analysis at the link here. This could be daunting for you. Suppose that the house you were cooking in burns to the ground. The insurer sues the AI maker claiming that their generative AI was at fault. But, you agreed whether you know it or not to the indemnification clause, thus the AI maker comes to you and says you need to pay for their defense.

Ouch."

Friday, March 15, 2019

I Almost Died Riding an E-Scooter: Like 99 percent of users, I wasn’t wearing a helmet; Slate, March 14, 2019

Rachel Withers, Slate; I Almost Died Riding an E-Scooter: Like 99 percent of users, I wasn’t wearing a helmet

"I’ve been rather flippant with friends about what happened because it’s the only way I know how to deal. It’s laughable that you’d get seriously injured scooting. But this isn’t particularly funny. People are always going to be idiots, yes, but idiot people are currently getting seriously injured, in ways that might have been prevented, because tech companies flippantly dumped their product all over cities, without an adequate helmet solution. Facebook’s “move fast and break things” mantra can be applied to many tech companies, but in the case of e-scooters, it might just be “move fast and break skulls.”"

Friday, July 8, 2016

In Dallas, another example of perils of reporting breaking news; Washington Post, 7/8/16

Paul Farhi, Washington Post; In Dallas, another example of perils of reporting breaking news:
"Thanks to the speed and ubi­quity of digital media, readers, viewers and listeners know more than ever about any unfolding incident or disaster. But they also know less, thanks to the unfiltered, uncorroborated and just plain inaccurate factoids that poison the news ecosystem like a toxic chemical. It’s not just inaccurate reporting alone; TV news panels and people on social media compound questionable facts by repeating them and speculating about them.
“We keep relearning this lesson over and over,” says W. Joseph Campbell, a communications professor at American University and the author of “Getting It Wrong,” a book about epic journalism mistakes. “With any tragedy, you see it again and again.”"

Saturday, May 19, 2012

Third Point Demands Records From Yahoo’s C.E.O. Search; New York Times, 5/7/12

Michael J. De La Merced, New York Times; Third Point Demands Records From Yahoo’s C.E.O. Search:

"Third Point sent Yahoo a request for records relating to its selection of Scott Thompson, a former eBay executive, as its chief executive, after the besieged technology company admitted that it had misstated its leader’s academic credentials.

The request follows Yahoo’s admission, after prodding by the activist hedge fund, that while Mr. Thompson’s official biography said that he had earned degrees in accounting and computer science from Stonehill College, in reality he held only the former.

Yahoo also conceded that the director in charge of finding and hiring Mr. Thompson, Patti Hart, also had factual errors in the description of her academic record."

Saturday, March 31, 2012

Tracking Twitter, Raising Red Flags; New York Times, 3/30/12

Pete Thamel, New York Times; Tracking Twitter, Raising Red Flags:

"“Every school, we work to customize their keyword list,” said Sam Carnahan, the chief executive of Varsity Monitor, which has offices in Seattle and New York and also provides educational programs to universities. “We look for things that could damage the school’s brand and anything related to their eligibility.”

Yet what may look to some like a business opportunity, and to universities and their athletic departments like due diligence, appears to others to be an invasion of privacy.

“I think it’s violating the Constitution to have someone give up their password or user name,” said Ronald N. Young, a Maryland state senator who has sponsored a bill that would make it harder for universities to monitor their athletes online. “It’s like reading their mail or listening to their phone calls.”"