Showing posts with label chatbots. Show all posts

Wednesday, March 20, 2024

Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini; TechCrunch, March 20, 2024

Natasha Lomas and Romain Dillet, TechCrunch; Google hit with $270M fine in France as authority finds news publishers’ data was used for Gemini

"In a never-ending saga between Google and France’s competition authority over copyright protections for news snippets, the Autorité de la Concurrence announced a €250 million fine against the tech giant Wednesday (around $270 million at today’s exchange rate).

According to the competition watchdog, Google disregarded some of its previous commitments with news publishers. But the decision is especially notable because it drops something else that’s bang up-to-date — by latching onto Google’s use of news publishers’ content to train its generative AI model Bard/Gemini.

The competition authority has found fault with Google for failing to notify news publishers of this GenAI use of their copyrighted content. This is in light of earlier commitments Google made which are aimed at ensuring it undertakes fair payment talks with publishers over reuse of their content."

Wednesday, December 20, 2023

Recent cases raise questions about the ethics of using AI in the legal system; NPR, December 15, 2023

NPR; Recent cases raise questions about the ethics of using AI in the legal system

"NPR's Steve Inskeep asks the director of the Private Law Clinic at Yale University, Andrew Miller, about the ethics of using artificial intelligence in the legal system...

INSKEEP: To what extent does someone have to think about what a large language model produces? I'm thinking about the way that we as consumers are continually given these terms of service that we're supposedly going to read and click I accept, and of course we glance at it and click I accept. You have to do something more than that as a lawyer, don't you?

MILLER: You're exactly right. A professor colleague said to me, you know, when a doctor uses an MRI machine, the doctor doesn't necessarily know every technical detail of the MRI machine, right? And my response was, well, that's true, but the doctor knows enough about how the MRI works to have a sense of the sorts of things that would be picked up on an MRI, the sorts of things that wouldn't be picked up. With ChatGPT we don't have - at least not yet - particularly well developed understanding of how our inputs relate to the outputs."

Thursday, October 19, 2023

Using AI, cartoonist Amy Kurzweil connects with deceased grandfather in 'Artificial'; NPR, October 19, 2023

NPR; Using AI, cartoonist Amy Kurzweil connects with deceased grandfather in 'Artificial'

"Amy Kurzweil said the chatbot project and the book that came out of it underscored her somewhat positive feelings about AI.

"I feel like you need to imagine the robot you want to see in the world," she said. "We're not going to stop progress. But we can think about applications of AI that facilitate human connection.""

Friday, August 11, 2023

Senator wants Google to answer for accuracy, ethics of generative AI tool; HealthcareITNews, August 9, 2023

 Mike Miliard, HealthcareITNews; Senator wants Google to answer for accuracy, ethics of generative AI tool

"Sen. Mark Warner, D-Virginia, wrote a letter to Sundar Pichai, CEO of Google parent company Alphabet, on Aug. 8, seeking clarity into the technology developer's Med-PaLM 2, an artificial intelligence chatbot, and how it's being deployed and trained in healthcare settings."

Tuesday, August 8, 2023

Minnesota colleges grappling with ethics and potential benefits of ChatGPT; Star Tribune, August 6, 2023

Star Tribune; Minnesota colleges grappling with ethics and potential benefits of ChatGPT

"While some Minnesota academics are concerned about students using ChatGPT to cheat, others are trying to figure out the best way to teach and use the tool in the classroom.

"The tricky thing about this is that you've got this single tool that can be used very much unethically in an educational setting," said Darin Ulness, a chemistry professor at Concordia College in Moorhead. "But at the same time, it can be such a valuable tool that we can't not use it.""

Tuesday, July 25, 2023

ChatGPT: Ethics and the 21st Century Lawyer; New York State Bar Association (NYSBA), July 24, 2023

New York State Bar Association (NYSBA); ChatGPT: Ethics and the 21st Century Lawyer

"The ChatGPT Lawyer incident raises many questions of skills and ethics. This program will discuss the disciplinary decisions arising from that incident in the Southern District of New York and how this may inform future use of AI technology in the practice of law in federal and state court. Panelists will cover the use of ChatGPT and the use of AI in other legal research tools (Westlaw, Lexis). Attendees will gain an understanding of best practices for writing briefs and citing cases appropriately using AI."

Saturday, July 22, 2023

Tell us: are you using AI for emotional connection or support?; The Guardian, July 18, 2023

Guardian Community Team, The Guardian; Tell us: are you using AI for emotional connection or support?

We’d like to hear from people who are developing personal relationships with AI

"We’d like to find out more about people developing personal relationships with AI.

As some turn to chatbot impersonations of lost loved ones, we’d like to find out other ways that people are using AI for emotional connection or support. What has worked for you and what hasn’t?

Let us know about your experiences below."

Wednesday, July 19, 2023

‘It was as if my father were actually texting me’: grief in the age of AI; The Guardian, July 18, 2023

 Aimee Pearcy, The Guardian; ‘It was as if my father were actually texting me’: grief in the age of AI

"For all the advances in medicine and technology in recent centuries, the finality of death has never been in dispute. But over the past few months, there has been a surge in the number of people sharing their stories of using ChatGPT to help say goodbye to loved ones. They raise serious questions about the rights of the deceased, and what it means to die. Is Henle’s AI mother a version of the real person? Do we have the right to prevent AI from approximating our personalities after we’re gone? If the living feel comforted by the words of an AI bot impersonation – is that person in some way still alive?"

Thursday, July 13, 2023

A.I. Could Solve Some of Humanity's Hardest Problems. It Already Has.; The New York Times, July 11, 2023

The Ezra Klein Show, The New York Times; A.I. Could Solve Some of Humanity's Hardest Problems. It Already Has.

"Since the release of ChatGPT, huge amounts of attention and funding have been directed toward chatbots. These A.I. systems are trained on copious amounts of human-generated data and designed to predict the next word in a given sentence. They are hilarious and eerie and at times dangerous.

But what if, instead of building A.I. systems that mimic humans, we built those systems to solve some of the most vexing problems facing humanity?"

Wednesday, July 12, 2023

Inside the White-Hot Center of A.I. Doomerism; The New York Times, July 11, 2023

 Kevin Roose, The New York Times; Inside the White-Hot Center of A.I. Doomerism

"But the difference is that Anthropic’s employees aren’t just worried that their app will break, or that users won’t like it. They’re scared — at a deep, existential level — about the very idea of what they’re doing: building powerful A.I. models and releasing them into the hands of people, who might use them to do terrible and destructive things.

Many of them believe that A.I. models are rapidly approaching a level where they might be considered artificial general intelligence, or “A.G.I.,” the industry term for human-level machine intelligence. And they fear that if they’re not carefully controlled, these systems could take over and destroy us...

And lastly, he made a moral case for Anthropic’s decision to create powerful A.I. systems, in the form of a thought experiment.

“Imagine if everyone of good conscience said, ‘I don’t want to be involved in building A.I. systems at all,’” he said. “Then the only people who would be involved would be the people who ignored that dictum — who are just, like, ‘I’m just going to do whatever I want.’ That wouldn’t be good.”"

Thursday, July 6, 2023

ChatGPT - An Ethical Nightmare Or Just Another Technology?; Forbes, July 6, 2023

 Charles Towers-Clark, Forbes; ChatGPT - An Ethical Nightmare Or Just Another Technology?

"Whilst, as mentioned above, many AI leaders are concerned about the speed of AI developments - at the end of the day ChatGPT has a lot of information, but it doesn’t have the human skill of knowledge.

Yet."

Monday, June 19, 2023

Ethical, legal issues raised by ChatGPT training literature; Tech Xplore, May 8, 2023

Peter Grad, Tech Xplore; Ethical, legal issues raised by ChatGPT training literature

""Knowing what books a model has been trained on is critical to assess such sources of bias," they said.

"Our work here has shown that OpenAI models know about books in proportion to their popularity on the web."

Works detected in the Berkeley study include "Harry Potter," "1984," "Lord of the Rings," "Hunger Games," "Hitchhiker's Guide to the Galaxy," "Fahrenheit 451," "A Game of Thrones" and "Dune."

While ChatGPT was found to be quite knowledgeable about popular works, lesser-known works such as Global Anglophone Literature—readings aimed beyond core English-speaking nations that include Africa, Asia and the Caribbean—were largely unknown. Also overlooked were works from the Black Book Interactive Project and Black Caucus Library Association award winners.

"We should be thinking about whose narrative experiences are encoded in these models, and how that influences other behaviors," Bamman, one of the Berkeley researchers, said in a recent Tweet. He added, "popular texts are probably not good barometers of model performance [given] the bias toward sci-fi/fantasy.""

Friday, June 9, 2023

The ChatGPT Lawyer Explains Himself; The New York Times, June 8, 2023

Benjamin Weiser, The New York Times; The ChatGPT Lawyer Explains Himself

"Irina Raicu, who directs the internet ethics program at Santa Clara University, said this week that the Avianca case clearly showed what critics of such models have been saying, “which is that the vast majority of people who are playing with them and using them don’t really understand what they are and how they work, and in particular what their limitations are.”

Rebecca Roiphe, a New York Law School professor who studies the legal profession, said the imbroglio has fueled a discussion about how chatbots can be incorporated responsibly into the practice of law.

“This case has changed the urgency of it,” Professor Roiphe said. “There’s a sense that this is not something that we can mull over in an academic way. It’s something that has affected us right now and has to be addressed.”"